Reddit is testing a feature that would use the iPhone’s TrueDepth camera — the same hardware that powers Face ID — to verify that a real human being is scrolling through its app. Not a bot. Not a script. A person, sitting there, actually looking at the screen.
The feature, spotted in Reddit’s iOS app code, doesn’t attempt to identify users by their facial features. Instead, it checks for “liveness” — confirming that a three-dimensional human face is present in front of the device. It’s a distinction that matters enormously in the privacy calculus, but one that is already generating fierce debate among Reddit’s notoriously skeptical user base.
As first reported by Digital Trends, the discovery came from a user on the social platform itself who noticed new code referencing Apple’s ARKit framework — the augmented reality toolkit that taps into the TrueDepth camera system. Reddit confirmed the test to Digital Trends, stating that the feature is part of an experiment aimed at reducing bot activity and ensuring content engagement metrics reflect actual human attention. The company emphasized that no facial data is stored, transmitted, or used for identification purposes.
That reassurance hasn’t calmed everyone. Nor, necessarily, should it.
The timing of this experiment is no accident. Reddit went public in March 2024, and since its IPO, the company has faced mounting pressure to demonstrate that its advertising inventory is being consumed by real people, not automated accounts that inflate engagement numbers. Advertisers are increasingly demanding proof of authentic human attention, and platforms that can’t provide it risk losing billions in ad revenue. Reddit’s experiment with liveness detection is, at its core, an attempt to answer a question that has haunted digital advertising for two decades: Is anyone actually there?
Bot traffic on Reddit has been a persistent headache. The platform has long struggled with coordinated inauthentic behavior — from political astroturfing operations to crypto pump-and-dump schemes executed by networks of automated accounts. Reddit’s own transparency reports have acknowledged the scale of the problem, with millions of accounts banned annually for violating policies against spam and manipulation. But banning accounts after the fact is a game of whack-a-mole. Liveness detection, if implemented broadly, would represent a fundamentally different approach: verifying human presence in real time, before engagement is counted.
The technical mechanism is straightforward. Apple’s TrueDepth camera projects thousands of infrared dots onto a user’s face and reads the resulting depth map. This is the same technology that lets you unlock your iPhone without a passcode. ARKit, Apple’s developer framework, gives third-party apps access to some of this depth-sensing capability without exposing raw biometric data. Reddit’s implementation would use ARKit to confirm that a real face — with three-dimensional depth, not a flat photograph — is present. It wouldn’t know whose face. Just that one exists.
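For readers curious what that check looks like in practice, here is a minimal sketch of how an iOS app might use ARKit to confirm a live face is present. This is a hypothetical illustration, not Reddit's actual implementation — the `LivenessChecker` class and its callback are invented for this example; only the ARKit types (`ARSession`, `ARFaceTrackingConfiguration`, `ARFaceAnchor`) are real Apple APIs.

```swift
import ARKit

// Hypothetical sketch: detect that a real, three-dimensional face is in
// front of the TrueDepth camera. Not Reddit's code — an illustration of
// the ARKit primitives such a feature would plausibly rely on.
final class LivenessChecker: NSObject, ARSessionDelegate {
    private let session = ARSession()
    var onResult: ((Bool) -> Void)?

    func start() {
        // Face tracking requires TrueDepth hardware; older iPhones and
        // most iPads will fail this check and can't run the experiment.
        guard ARFaceTrackingConfiguration.isSupported else {
            onResult?(false)
            return
        }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // ARKit calls this when it anchors new content to the scene.
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        // An ARFaceAnchor is only created when ARKit resolves actual
        // depth geometry for a face — a flat photograph held up to the
        // camera won't produce one. That's the "liveness" signal.
        if anchors.contains(where: { $0 is ARFaceAnchor }) {
            onResult?(true)
            session.pause()
        }
    }
}
```

Note what the app never sees in this flow: ARKit hands back processed anchors and mesh geometry, not the raw infrared dot pattern, which is consistent with Reddit's claim that no identifying facial data needs to be stored or transmitted.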
Privacy advocates have responded with a mix of cautious acknowledgment and deep concern. The Electronic Frontier Foundation and similar organizations have long warned about the normalization of facial scanning in consumer applications, even when the stated purpose is benign. The worry isn’t necessarily about what Reddit does with this data today. It’s about what precedent it sets for tomorrow.
Consider the trajectory. If liveness detection becomes standard practice for social media engagement verification, it’s a short conceptual leap to requiring it for posting content, voting on posts, or accessing certain communities. And once users are accustomed to pointing their phone’s camera at their face to prove they’re human, the barrier to more invasive forms of biometric verification drops considerably. That’s the slippery slope argument, and while it’s sometimes overused, it carries real weight in this context.
Reddit’s experiment also raises questions about accessibility. Not every iPhone has a TrueDepth camera — older models lack the hardware entirely. And Android users, who make up a significant portion of Reddit’s global audience, would be excluded from this verification method altogether unless a parallel system is developed. Reddit hasn’t publicly addressed how it plans to handle these disparities, though the company told Digital Trends that the feature is still in early testing and may never roll out broadly.
That qualifier — “may never roll out” — is doing a lot of heavy lifting. Companies test features all the time that never see the light of day. But the fact that Reddit built this, integrated it into production code, and is actively experimenting with it suggests more than idle curiosity.
The advertising angle is impossible to ignore. Reddit reported $804 million in revenue for fiscal year 2023, the vast majority from advertising. Since going public, the company has been aggressively courting brand advertisers who have historically viewed Reddit as too chaotic and too risky for their campaigns. Proving that engagement on the platform comes from verified humans would be a powerful selling point in pitch meetings. It would also give Reddit a competitive edge over platforms like X (formerly Twitter), which has faced persistent criticism over bot prevalence since Elon Musk’s acquisition — criticism that Musk himself amplified during his attempt to back out of the deal.
There’s an irony here. Reddit’s culture is built on anonymity. Pseudonymous accounts. Throwaway identities. The freedom to speak without your real name attached. Asking users to scan their faces — even without storing identifying data — cuts against that ethos in a way that feels jarring. Reddit’s leadership seems aware of this tension. The company’s public statements have been careful to frame the feature as optional and non-identifying, a technical safeguard rather than an identity check.
But the Reddit community isn’t buying it wholesale. Threads discussing the feature have drawn thousands of comments, many expressing skepticism about the company’s long-term intentions. “Today it’s optional. Tomorrow it’s required to post. Next year they’re selling the data,” one highly upvoted comment read. That cynicism reflects years of watching tech companies gradually expand data collection practices, each step presented as innocuous until the cumulative effect becomes something users never agreed to.
Apple’s role in this is worth examining. The company has positioned itself as the champion of user privacy, building its brand identity around the idea that what happens on your iPhone stays on your iPhone. ARKit’s design reflects this philosophy — it provides developers with processed outputs, not raw biometric data. Apple’s App Store review guidelines also impose strict limits on how apps can use camera and face-tracking data, requiring explicit user consent and prohibiting the sale of such data to third parties. So there are guardrails. Whether they’re sufficient is another question.
The broader industry context matters too. Liveness detection isn’t new. Banks and financial institutions have used it for years to prevent fraud during remote account opening. Identity verification companies like Jumio, Onfido, and iProov have built entire businesses around the technology. What’s new is its potential application in social media, where the user expectation is casual browsing, not high-security authentication. Asking someone to verify their identity to open a bank account feels proportionate. Asking them to prove they’re human to scroll through memes? That’s a harder sell.
And yet the bot problem is real, and it’s getting worse. Generative AI has made it trivially easy to create convincing fake accounts that post, comment, and interact in ways that are increasingly difficult to distinguish from genuine human behavior. Traditional anti-bot measures — CAPTCHAs, email verification, phone number requirements — are being defeated at scale by sophisticated automation tools. Reddit’s liveness detection experiment can be understood as an acknowledgment that the old playbook isn’t working anymore.
The question is whether users will accept the tradeoff. More authentic engagement in exchange for periodic face scans. Fewer bots in exchange for a camera pointed at your face. It’s a bargain that some will find reasonable and others will find unconscionable. The answer will likely depend on implementation details that Reddit hasn’t yet disclosed: How often would verification be required? Would it be a one-time check or a recurring demand? Would users who decline be penalized with reduced visibility or restricted features?
None of these questions have public answers yet. Reddit is still in the experimental phase, and the company has every incentive to tread carefully. Its IPO gave it access to public market capital, but it also subjected it to public market scrutiny. A privacy backlash — particularly one that resonates with Reddit’s core user base of tech-savvy early adopters — could damage the brand in ways that take years to repair.
So Reddit finds itself in a familiar bind for modern tech companies: caught between the demands of advertisers who want verified human attention, the expectations of users who value their anonymity, and the technical reality that bots are becoming indistinguishable from people. Liveness detection is one answer. Whether it’s the right answer depends entirely on how it’s deployed — and whether Reddit can resist the temptation to expand its scope once the infrastructure is in place.
For now, it’s an experiment. A signal of intent. And a reminder that in the ongoing war between platforms and bots, your face might become the next battlefield.
