Expansion of AI-Enabled Camera Surveillance Raises Privacy and Biometric Identification Concerns
New York's Metropolitan Transportation Authority (MTA) is testing new subway gates that use AI-powered cameras to capture short recordings when riders are suspected of fare evasion and to generate a physical description that is transmitted to the MTA, prompting criticism from privacy advocates concerned about persistent monitoring in public transit. The MTA has also solicited vendor input for systems that use computer vision and AI to detect “unusual or unsafe behaviors,” reflecting broader growth in surveillance deployments across New York City.
In parallel, consumer AI smart glasses are re-emerging with built-in cameras and microphones, intensifying concerns that everyday wearables can enable covert recording and downstream biometric identification. Reporting has shown that footage from Ray-Ban Meta smart glasses can be paired with external facial-recognition services to identify strangers, and has flagged policy issues such as cloud storage of wake-word voice recordings (potentially retained for up to a year) and uncertainty about future features like on-device facial recognition. Retailers in New York, including Wegmans, are also expanding their use of facial recognition, underscoring the convergence of AI, biometrics, and surveillance in both public and commercial spaces.
Timeline
Feb 6, 2026
Wegmans deployed facial-recognition cameras in some New York stores
Wegmans said it had deployed cameras with facial-recognition capabilities in some New York stores to identify people previously flagged for misconduct. The company did not disclose how long related data would be retained.
Feb 6, 2026
MTA began testing AI-enabled subway gates to flag fare evasion
The MTA started testing subway gates equipped with cameras that record short clips when AI suspects a person of fare evasion. According to manufacturer Cubic, the system generates a physical description of the person and sends it to the MTA.
Feb 5, 2026
Privacy concerns intensified over AI smart glasses data practices
Reporting highlighted renewed scrutiny of AI-enabled smart glasses, especially Meta and Ray-Ban devices, over risks tied to covert recording, cloud retention of wake-word voice recordings, and possible linkage of captured footage to facial recognition systems. The concerns were amplified by reports of people being filmed without consent and posted online, as well as claims that recording indicator lights can be disabled by third parties.
Feb 5, 2026
University warned students after reports of women being recorded near campus
A university issued a warning after reports that a man was using smart glasses to record women near campus. The incident highlighted real-world misuse concerns tied to AI-enabled eyewear that can discreetly capture video.
Dec 1, 2025
MTA issued vendor request for AI surveillance tools
In December 2025, the MTA issued a request for vendors offering AI and computer-vision tools to detect “unusual or unsafe behaviors.” Critics viewed the request as a step toward broader automated surveillance in the transit system.
Apr 1, 2020
NYPD had spent over $159 million on facial recognition by April 2020
Records obtained through a five-year lawsuit showed that by April 2020 the NYPD had spent more than $159 million on facial recognition technology and related surveillance capabilities. The disclosure provided historical context for New York City's expanding biometric surveillance infrastructure.
Related Stories

Challenges and Implications of AI-Driven Surveillance and Facial Recognition
Artificial intelligence and machine learning have fundamentally transformed the landscape of surveillance, shifting from labor-intensive, targeted operations to pervasive, automated monitoring. In the past, surveillance required significant human effort, such as physically following suspects, intercepting mail, or installing wiretaps, which inherently limited the scale and scope of government monitoring. The digitization of society, however, has enabled the collection and analysis of vast amounts of data through interconnected devices, sensors, and networks. Modern surveillance now leverages technologies like automated license plate readers, geofence warrants, and a proliferation of smart devices, all of which generate continuous streams of telemetry stored in the cloud. This shift has made it possible for authorities and private entities to monitor individuals on an unprecedented scale, raising significant concerns about privacy and civil liberties.

One of the most prominent applications of AI in surveillance is facial recognition technology, which is increasingly used for identity verification in both public and private sectors. However, the widespread adoption of facial recognition systems has exposed critical flaws, particularly for individuals with facial differences or disabilities. People with conditions such as Freeman-Sheldon syndrome report being repeatedly rejected by automated systems, leading to exclusion from essential services like renewing a driver's license. These failures highlight the lack of inclusivity and robustness in current AI models, which often do not account for the diversity of human appearances. The reliance on facial recognition for access to services can result in humiliation, frustration, and systemic discrimination for affected individuals. As more organizations and government agencies implement these technologies, the risk of marginalizing vulnerable populations increases.
The integration of AI into surveillance also raises questions about data security, consent, and the potential for abuse by both state and non-state actors. The aggregation of personal data from wearables, smart home devices, and public cameras creates rich profiles that can be exploited for commercial or political purposes. Civil liberties advocates warn that the efficiency and scale of AI-driven surveillance erode traditional safeguards against overreach, making it easier to monitor entire populations without due process. The debate continues over how to balance the benefits of enhanced security and convenience with the need to protect individual rights and ensure equitable access to services. Policymakers and technologists are called upon to address these challenges by developing more inclusive algorithms, establishing clear regulations, and promoting transparency in the deployment of surveillance technologies. The evolution of surveillance in the AI era underscores the urgent need for societal dialogue and legal frameworks that keep pace with technological advancements.
1 month ago
Meta Ray-Ban Smart Glasses Recordings Reviewed by Human Contractors, Triggering Privacy Scrutiny
Investigations reported by Swedish outlets *Svenska Dagbladet* and *Göteborgs-Posten* found that recordings captured by **Meta Ray-Ban smart glasses**—including video and audio—are being reviewed by human contractors as part of AI training and quality assurance workflows. Workers employed by **Sama**, a Meta subcontractor in **Nairobi, Kenya**, described routinely handling highly sensitive content inadvertently recorded by users, including bathroom visits, undressing, sex/pornography, and private conversations, as well as incidental capture of **bank cards** and other identifying details; interviewees said they feared reprisals for raising concerns and described strict on-site controls intended to prevent leaks. Following the reporting, the UK’s privacy regulator, the **Information Commissioner’s Office (ICO)**, confirmed it is contacting Meta to ask questions about the devices and associated data-handling practices. While Meta’s terms reportedly disclose that some interactions may be reviewed by humans to improve the system, the reporting and worker accounts suggest the review pipeline can include intimate or identifying moments that wearers may not expect to be viewed by third parties, raising regulatory and reputational risk around consent, transparency, and safeguards for bystander and user privacy.
1 month ago
Privacy Risks of Smart Glasses in Healthcare Environments
Smart eyewear devices such as Meta Ray-Ban glasses, equipped with microphones, cameras, and AI connectivity, present significant privacy and data-security risks when used in hospital settings. These devices can inconspicuously record or livestream protected health information (PHI), including patient images and conversations, often without the knowledge or consent of those being recorded. The small LED recording indicator is an insufficient safeguard, especially since third-party products exist to obscure the light, making unauthorized recording even harder to detect. Healthcare organizations face added challenges because these are often unmanaged devices brought in by patients or staff, bypassing institutional controls and oversight. The glasses' direct connectivity to social media platforms such as Facebook and Instagram increases the risk of inadvertent or malicious disclosure of sensitive information, potentially violating HIPAA/HITECH regulations. Their inconspicuous form factor also distinguishes smart glasses from more obvious recording devices like smartphones, heightening the risk of unnoticed privacy breaches in clinical environments.
1 month ago