Essay · Surveillance Infrastructure

The Surveillance Pivot

Published March 13, 2026
Topics: Device Sensors · Platform Surveillance · Mental Health Data

The phone in your pocket has been a medical instrument for years. It reads your gait, your heart rate variability, the ambient sound patterns around you, and, per a 2024 Dartmouth study, the micro-expressions on your face while it lies face-up on a table. It has never been regulated as a medical instrument. It has been regulated as a phone.

This is not a metaphor. The MoodCapture project at Dartmouth College used the front-facing camera to detect signs of depression in 177 participants, achieving 75% accuracy before the subject had self-reported any change in mood. The sensor was already in the device. The algorithm was the addition. The study was a proof of concept. The commercial deployment of the same principle, passive affect inference from camera and microphone data, requires no new hardware and no disclosure to the person whose affect is being inferred.

The sensors themselves are worth cataloguing, because the industry prefers to discuss them in the abstract. A modern smartphone carries an accelerometer and gyroscope that together can infer whether you are walking with a limp, how often you rise from bed, whether your hand tremor exceeds a clinical threshold. The microphone, running passively through certain apps, captures not just what you say but ambient audio signatures, a television in the background or a particular type of street noise, that serve as location inference without GPS. The front-facing camera reads your face. The barometer places you on a specific floor of a building. The combination produces a behavioral record with medical-grade resolution.
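
It is easy to overestimate how much computation these inferences require. The sketch below, a minimal illustration in Python on synthetic data, runs two of them end to end: floor placement from barometric pressure and a crude tremor measure from accelerometer samples. The constants, function names, and frequency band are assumptions made for the sketch, not any vendor's pipeline or a clinical standard.

```python
import numpy as np

# Illustrative constants for the sketch, not a clinical or vendor standard.
SEA_LEVEL_HPA = 1013.25
FLOOR_HEIGHT_M = 3.0          # assumed storey height
TREMOR_BAND_HZ = (4.0, 12.0)  # commonly cited pathological hand-tremor range

def altitude_m(pressure_hpa, reference_hpa=SEA_LEVEL_HPA):
    """Barometric formula: height above the reference pressure, in meters."""
    return 44330.0 * (1.0 - (pressure_hpa / reference_hpa) ** (1.0 / 5.255))

def floor_estimate(pressure_hpa, ground_hpa):
    """Which storey a reading came from, given ground-level pressure."""
    return round(altitude_m(pressure_hpa, ground_hpa) / FLOOR_HEIGHT_M)

def tremor_power_fraction(accel, sample_rate_hz):
    """Fraction of accelerometer signal power inside the tremor band."""
    spectrum = np.abs(np.fft.rfft(accel - accel.mean())) ** 2
    freqs = np.fft.rfftfreq(accel.size, d=1.0 / sample_rate_hz)
    band = (freqs >= TREMOR_BAND_HZ[0]) & (freqs <= TREMOR_BAND_HZ[1])
    return spectrum[band].sum() / spectrum.sum()

# Synthetic demo: a reading taken ~12 m above ground, and a 6 Hz hand oscillation.
print(floor_estimate(pressure_hpa=1011.8, ground_hpa=1013.25))  # -> 4
t = np.arange(0, 5, 1 / 100)                                    # 5 s at 100 Hz
noise = 0.05 * np.random.default_rng(0).normal(size=t.size)
shaky = 0.3 * np.sin(2 * np.pi * 6 * t) + noise
print(f"{tremor_power_fraction(shaky, 100.0):.0%} of signal power in tremor band")
```

Twenty-odd lines, no special hardware, no network access. The medical-grade resolution is a property of the sensors; the algorithm is the cheap part.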

In September 2024, the FTC published a staff report examining the data practices of nine major platforms, Meta, TikTok, YouTube, and six others, characterizing the result as "vast surveillance." The report documented the commercial trade in inferred attributes: mental health status, political affiliation, socioeconomic class. It noted the absence of meaningful opt-out mechanisms and the opacity of the downstream data supply chain. The report generated press coverage for approximately one news cycle. The platforms continue to operate under the same consent architecture the report described.

The consent architecture deserves examination on its own terms. The standard consent flow presents a permissions screen at first launch, asks whether to allow microphone access, camera access, location access, and proceeds on the assumption that tapping "Allow" constitutes informed consent to the downstream data supply chain those permissions feed. This is not a reasonable assumption. The average app privacy policy, at average reading speed, requires roughly 40 minutes to read. The average person encounters between 1,500 and 3,000 apps over the life of a device. Nobody reads them. The policy is not designed to be read. It is designed to be accepted.
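
The arithmetic behind "nobody reads them" is worth running once. A back-of-the-envelope calculation in Python, using only the estimates quoted above:

```python
# Back-of-the-envelope consent arithmetic, using the estimates quoted above.
MINUTES_PER_POLICY = 40                 # rough reading time for one policy
APPS_PER_DEVICE_LIFE = (1_500, 3_000)   # apps encountered over a device's life

for apps in APPS_PER_DEVICE_LIFE:
    hours = apps * MINUTES_PER_POLICY / 60
    weeks = hours / 40                  # 40-hour work-weeks
    print(f"{apps:>5,} apps -> {hours:>5,.0f} hours of reading ({weeks:.0f} work-weeks)")
```

Between twenty-five and fifty full work-weeks of reading over the life of one device. The consent model does not assume readers are lazy; it assumes a labor budget that no one has.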

The Platform Infrastructure

What the FTC's September 2024 report documented was not a set of isolated violations but an architecture: the behavioral surplus produced by social media and video streaming platforms is collected, combined with data purchased from brokers, and used to build user models that are then monetized through ad targeting, risk scoring, and algorithmic recommendation. The platforms did not build this infrastructure accidentally. They built it because it was profitable, and it was profitable because the regulatory environment allowed it to be.

HIPAA, the primary US health data protection law, applies to healthcare providers, insurers, and their business associates. It does not apply to an app that infers your health status from accelerometer data and sells that inference to an advertiser. The sensor data bypasses the regulatory category entirely, not through a loophole but through a definitional gap: the law predates the kind of inference it would now need to cover. A hospital cannot sell your diagnosis. An app can sell a behavioral signal statistically equivalent to it.

The Gravy Analytics breach in January 2025, which exposed precise movement history for tens of millions of people, illustrated the fragility of the infrastructure in a different register. The data was collected legally, aggregated legally, and stored in a system that was then compromised. The FTC order issued the same month prohibited Gravy Analytics from selling location data associated with sensitive venues, including medical facilities, religious institutions, political gatherings. It did not prohibit the collection or internal use of that data. The breach was an accident. The accumulation was the policy.

What the Inference Knows

The Dartmouth MoodCapture study is useful not because front-camera depression detection is the near-term commercial threat. It is not, and the study was conducted under IRB protocols with participant consent. It is useful because it demonstrates what the hardware already enables. The sensors that detected depression onset at 75% accuracy are the same sensors present in every consumer device. The gap between research proof-of-concept and commercial deployment has historically been measured in product cycles, not decades.
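
To make "what the hardware already enables" concrete, here is the shape of that pipeline reduced to commodity parts, in Python with scikit-learn and entirely synthetic data. The features, labels, and classifier are stand-ins; this is not the MoodCapture model. What the sketch shows is the structure that transfers: passive features in, risk score out.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row is a feature vector a phone could derive
# passively (facial-landmark geometry, gaze angle, ambient light, and so on);
# each label is a later self-report. Real features differ; the shape does not.
n_samples, n_features = 1_000, 16
X = rng.normal(size=(n_samples, n_features))
hidden_weights = rng.normal(size=n_features)          # the relationship to learn
y = (X @ hidden_weights + rng.normal(scale=2.0, size=n_samples)) > 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")

# Once trained, inference is one cheap call per capture: no new hardware,
# nothing visible to the person being scored.
new_capture = rng.normal(size=(1, n_features))
print(f"risk score: {model.predict_proba(new_capture)[0, 1]:.2f}")
```

The research is the training step. What deploys is the final call.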

The more immediate picture is the one the FTC described: platforms inferring mental health status, political affiliation, and economic vulnerability from behavioral data already collected, using those inferences to target advertising and, in some cases, to sell audience segments to third parties. This is not a hypothetical. The FTC's report was based on compulsory responses from the platforms themselves. The surveillance the report described is the current state of operation, not a projected risk.

Privacy, in the commercial sense the word is usually deployed, describes a setting in a menu. It is a toggle, a permissions screen, a cookie consent banner. The infrastructure underneath those controls does not change when you toggle the setting. The data already collected is not deleted. The model already trained on your behavior does not unlearn you. The word is doing the work of reassurance without doing the work of protection. We note this not as a counsel of despair but as a terminological correction: what is being discussed, when the platforms discuss "privacy," is the appearance of control, not its substance.

The phone is a medical instrument. It will be regulated as one, eventually, in response to a harm specific enough and documented enough and affecting a constituency influential enough to compel legislative action. The history of consumer protection suggests this takes between ten and thirty years from the identification of the harm. In the interim, the sensors continue to run.

Sources

FTC Staff Report, "A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services," September 2024. FTC press release.

FTC, "FTC Finalizes Order Prohibiting Gravy Analytics, Venntel From Selling Sensitive Location Data," January 14, 2025. FTC press release ↗

Nicholas C. Jacobson et al., "MoodCapture: Depression Detection Using In-the-Wild Smartphone Images," Dartmouth College, CHI 2024. Study of 177 participants; 75% accuracy detecting depression onset via front camera. ACM Digital Library.

University of Washington follow-up research on human-AI bias alignment, November 2025. On AI bias propagation when humans defer to biased AI recommendations. Related UW research.