Meta's Ray-Ban shades add AI vision
Augmented reality seems poised to really hit its stride once we get a wearable device that doesn't look like a crazy pair of ski goggles, or worse. It's going to be a long road to get there, but Meta's collaboration with Ray-Ban gives us a glimpse of that future: the glasses recently gained computer vision that identifies objects you're looking at and responds with voice feedback.
"Hey, Meta. Look at this and tell me which of these teas is caffeine-free."
I spoke these words as I wore a pair of Meta Ray-Bans at the tech giant's New York headquarters. I was staring at a table with four tea packets, their caffeine labels blacked out with a marker. A little click sound in my ears was followed by Meta's AI voice telling me that the chamomile tea was likely caffeine-free. It was reading the labels and making judgments using generative AI.
I was testing a feature that's rolling out to Meta's second-generation Ray-Ban glasses starting today -- a feature that Meta CEO Mark Zuckerberg had already promised in September when the new glasses were announced. The AI features, which use the glasses' cameras to capture images and interpret them with generative AI, were supposed to launch in 2024. Meta has introduced them more quickly than I expected, although the early-access mode is still very much a beta. A new update also adds Bing-powered search to the Ray-Bans, extending the glasses' already-available voice capabilities. Meta's glasses are gaining new abilities fast.
The demo wowed me because I had never seen anything quite like it. Well, I had, in parts: Google Lens and other on-phone tools already use cameras and AI together, and Google Glass had some translation tools a decade ago. That said, the easy way Meta's glasses invoke AI to identify things in the world around me feels pretty advanced. I'm excited to try it a lot more.
Full story at CNET.