AI smart glasses are the latest fashion accessory, complete with a camera, a microphone, AI, and privacy risks. After Google Glass failed to gain traction more than a decade ago, the category is seeing renewed interest as companies redesign the technology to look like ordinary eyewear.
Meta and privacy
The most popular model on the market comes from a partnership between Ray-Ban and Meta, combining mainstream fashion with a company known for privacy controversies.
Meta’s position in the market draws attention because of its history with privacy and data practices, raising questions about how information captured by these devices is collected, stored, and used.
Two Harvard University students demonstrated that footage from Meta’s Ray-Ban smart glasses could be connected to external facial recognition systems to identify strangers in public.
Meta chose not to include facial recognition in the first generation of Ray-Ban Meta glasses, citing ethical concerns. Recent reports suggest the company may revisit that decision for future models.
Another issue is what happens after a recording is made. In April 2025, Meta updated its smart glasses privacy policy. According to the company, photos and videos captured by the glasses are stored on the user’s phone and fall under AI or cloud policies only when shared to those services.
Voice recordings triggered by the wake word are stored in the cloud by default and can be kept for up to a year to help improve AI systems, with no option to opt out beyond manual deletion.
Which features future models will add remains to be seen. Encouraged by Meta’s success, other major players have entered the race, and Apple and Samsung are reportedly developing similar products.
Recording in public spaces is a slippery area
The Washington Post reports that devices like Meta Ray-Bans are drawing pushback from Generation Z, who see them as a threat to personal privacy. The issue is not the technology itself but how it is used.
Content creators are using this technology to film strangers for social media, often without their knowledge or consent. Recording in public spaces is generally legal because people there have a lower expectation of privacy. Legal experts say the deciding factor is the location where the recording takes place.
A number of women have described being secretly filmed by people wearing smart glasses in public. One woman said she was approached on a walk and had a conversation with a man wearing what looked like ordinary sunglasses, only to discover later that a video of her had been posted online and viewed nearly a million times.
Similar incidents have also been reported. In October 2025, the University of San Francisco issued a warning after reports that a man wearing Ray-Ban Meta smart glasses was approaching women on and around campus and recording interactions that may have been shared on social media.
Meta Ray-Ban smart glasses include an LED that signals recording, though reports suggest some wearers can pay third parties to disable it. Researchers are working on ways to give bystanders more control and visibility when camera-enabled devices are in use.
Smart glasses enter the workplace
Use in workplaces and organizations raises additional questions, since these are not public settings.
A woman visiting a beauty salon in Manhattan said she was unpleasantly surprised to see her aesthetician wearing Meta Ray-Bans. The worker told her the batteries were not charged, but the encounter still left her uneasy. The company later said employees keep the glasses turned off during appointments. The incident sparked a broader debate about privacy and when it is acceptable to record other people.
AI glasses can capture and process sensitive information such as facial features, voiceprints, eye tracking, and other identifiers. In many cases, this data qualifies as biometric personal data under laws like the GDPR, Illinois’ Biometric Information Privacy Act (BIPA), or the California Consumer Privacy Act (CCPA).
For employers and organizations, this matters because biometric data comes with stricter expectations around notice, consent, retention, and use. Missing those obligations can lead to serious legal exposure.
Organizations may need written policies on when, where, and how AI glasses can be used. Privacy risks should be assessed based on industry, location, and use case before deployment.
