Meta has integrated its artificial intelligence (AI) technology into a pair of Ray-Ban smart glasses. This fusion of cutting-edge technology and iconic fashion has the potential to reshape wearable technology, a category that has thus far struggled to gain widespread adoption.
The Ray-Ban Meta smart glasses are now equipped with Meta’s AI virtual assistant software, allowing wearers to engage in hands-free conversations with their glasses. Leveraging multimodal AI technology, the device can process queries that combine audio and visual input, enabling it to respond intelligently based on what the wearer is observing.
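Meta has not published the details of this on-device pipeline, but conceptually a "look and ask" request simply pairs a captured camera frame with the transcribed voice prompt. The sketch below illustrates that shape; `MultimodalQuery` and `answer` are hypothetical stand-ins, not real Meta APIs.

```python
# Hypothetical sketch of a "look and ask" request. These names are
# illustrative stand-ins, not real Meta APIs.
from dataclasses import dataclass


@dataclass
class MultimodalQuery:
    image_bytes: bytes   # frame captured by the glasses' camera
    transcript: str      # the wearer's spoken question, transcribed on-device


def answer(query: MultimodalQuery) -> str:
    """Stand-in for the assistant: a real system would send both the image
    and the transcript to a vision-language model and return its reply."""
    return f"(model reply to {query.transcript!r} about a {len(query.image_bytes)}-byte image)"


# "Hey Meta, look and tell me what this says."
frame = b"..."  # placeholder for the captured camera frame
print(answer(MultimodalQuery(image_bytes=frame, transcript="What does this sign say?")))
```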
Imagine traveling to a foreign country and struggling to decipher a menu written in an unfamiliar language. With these smart glasses, all you need to do is ask, and the AI will translate the text for you, eliminating the need to juggle your phone or stare at a screen. This seamless integration of AI into your field of vision could revolutionize how we interact with and navigate the world around us.
Meta first introduced this multimodal AI capability in a limited release for the Ray-Ban Meta smart glasses in December 2023. Early reviews from tech publications like The Verge have been promising, with the AI accurately identifying objects such as cars and cats when prompted, though it has struggled to correctly identify certain plant species and animals like groundhogs.
The true power of Meta AI lies in its multimodal nature, which sets it apart from traditional virtual assistants like Siri, Alexa, and Google Assistant. By fusing data from multiple input modalities, such as camera imagery and microphone audio, the AI can generate more accurate and sophisticated results than its unimodal counterparts.
This capability is akin to Google’s Gemini multimodal AI model, which can take an image of cookies and respond with a recipe. Trained to identify patterns across different types of data using multiple neural networks, multimodal AIs can process text, images, audio, and more, enabling them to understand and respond to the world in a more human-like manner.
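To make that idea concrete, here is a minimal sketch of the cookies-to-recipe query using Google’s google-generativeai Python SDK roughly as documented around Gemini’s launch; the model name and SDK details have evolved since, so treat the specifics as assumptions rather than current API guidance.

```python
# Minimal sketch of an image-plus-text Gemini query; model names and SDK
# details reflect the launch-era google-generativeai package and may differ today.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro-vision")

cookies = Image.open("cookies.jpg")
response = model.generate_content([cookies, "What recipe would produce these cookies?"])
print(response.text)
```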
In the context of smart glasses, multimodal AI can make sense of the wearer’s surroundings by feeding input from the glasses’ cameras and microphones into these neural networks. As a result, the system can answer more complex queries and offer smarter, contextual information tailored to the user’s immediate environment.
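Conceptually, the fusion step can be quite simple: encode each modality separately, project the embeddings into a shared space, and let a joint network reason over the combined vector. The PyTorch snippet below is a toy illustration of that late-fusion pattern, not Meta’s actual architecture.

```python
import torch
import torch.nn as nn


class LateFusion(nn.Module):
    """Toy late-fusion module: encode each modality, concatenate, then reason jointly."""

    def __init__(self, img_dim=512, audio_dim=256, hidden=128, n_outputs=10):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden)      # camera features -> shared space
        self.audio_proj = nn.Linear(audio_dim, hidden)  # speech features -> shared space
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(2 * hidden, n_outputs))

    def forward(self, img_feat, audio_feat):
        fused = torch.cat([self.img_proj(img_feat), self.audio_proj(audio_feat)], dim=-1)
        return self.head(fused)


model = LateFusion()
img_feat = torch.randn(1, 512)    # e.g. the output of a vision encoder
audio_feat = torch.randn(1, 256)  # e.g. the output of a speech encoder
print(model(img_feat, audio_feat).shape)  # torch.Size([1, 10])
```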
However, the Ray-Ban Meta device’s AI processing capabilities still lag behind those of modern smartphones, which benefit from more powerful chipsets and more advanced sensor fusion. Smartphones, for instance, use scene recognition to intelligently adjust exposure and color balance in their camera apps, while smartwatches offer better workout feedback by combining data from temperature and optical heart-rate sensors.
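In its simplest form, that kind of sensor fusion amounts to weighting each sensor’s estimate by how much it can be trusted. The snippet below shows inverse-variance weighting on two hypothetical heart-rate readings; it illustrates the principle only, not any vendor’s actual algorithm.

```python
def fuse(estimates):
    """Combine independent estimates of the same quantity, weighting each by
    the inverse of its variance so noisier sensors count for less."""
    weights = [1.0 / var for _, var in estimates]
    weighted = sum(w * value for w, (value, _) in zip(weights, estimates))
    return weighted / sum(weights)


# Two hypothetical heart-rate estimates during a workout: (value, variance)
optical = (148.0, 9.0)     # optical sensor: noisier while the wrist is moving
electrical = (152.0, 2.0)  # steadier reference sensor
print(round(fuse([optical, electrical]), 1))  # result leans toward the steadier sensor
```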
Despite its current limitations, Meta’s integration of AI into the Ray-Ban smart glasses represents a significant step towards realizing the potential of wearable technology. As the technology continues to evolve and become more sophisticated, we can expect to see even more seamless and intuitive interactions between humans and their wearable devices, blurring the lines between the digital and physical worlds.