
Mysterious ChatGPT hardware must be smart glasses, given what OpenAI just unveiled

After months of speculation, Jony Ive confirmed in mid-September that he and a team of former Apple designers are working on hardware that will have ChatGPT at the core. While Ive said his LoveFrom design company will be involved in creating the product (or products?), he didn’t reveal what form factor(s) we should expect.

I labeled the product an iPhone competitor because the iPhone is an AI device, just like the Pixel and any other smartphone that can run native or third-party AI apps. The ChatGPT hardware will compete against the iPhone no matter what it looks like. The only thing we know about the gadget is that it “uses AI to create a computing experience that is less socially disruptive than the iPhone.”

Nearly three months later, I believe the ChatGPT device has to feature a key component: a pair of smart glasses that will truly let users make the most of OpenAI’s AI models. It’s all thanks to what we witnessed on December 12th, in two sets of announcements that landed a few short hours apart.

First, Samsung and Google unveiled the Android XR experience and teased the first devices with AI at the center. Project Moohan is Samsung’s obvious Vision Pro alternative, and yes, it looks a lot like Apple’s headset. Project Moohan will be a spatial computer that supports VR, AR, and AI.


All the acronyms are there, with AI giving Samsung a theoretical advantage over the Vision Pro. That AI will be Galaxy AI and Google’s Gemini, in case you were wondering.

Samsung’s Project Moohan Android XR headset. Image source: Samsung

More interesting than Moohan is Google’s unannounced pair of smart glasses. Samsung is probably working on its own smart glasses, but the company didn’t feel compelled to announce them on Thursday. 

Google demoed the smart glasses during its Gemini 2.0 announcement, showing how Project Astra can work on them. The wearable is paired with a Pixel phone, which handles the processing, including Gemini. The glasses give the AI eyes and ears, letting it see everything around you and relay information when you ask for help on the go.

Add the Android XR platform, and you get augmented reality features. Think AI notification summaries, Google Maps navigation, and real-time translation. According to Google’s demo, these are all part of Android XR.

All of that further reinforces my belief that standalone AR glasses are the future of mobile computing. They’ll complement the iPhone first and then replace it.

Google Maps AR navigation on smart glasses. Image source: Google

Seeing Samsung and Google’s announcements was enough to make me realize OpenAI will need similar abilities from ChatGPT. And the only way to deliver them is by making smart glasses of its own.

Little did I know that OpenAI’s “12 Days” live stream, which followed Samsung and Google’s surprise announcement, would further drive that point home.

OpenAI on Thursday announced that ChatGPT Advanced Voice Mode is finally getting support for real-time video streaming and screen sharing. We saw these features demoed for GPT-4o back in May, but OpenAI needed time to bring them to all users.

The ChatGPT mobile app will let the AI use your iPhone or Android device’s camera to see the world and hold a conversation with you about it.

The demos OpenAI offered showed that the AI can recognize people and remember details about them. It can also identify objects and, when asked, provide tips and tutorials related to them.

When I first tried Advanced Voice Mode, I wanted to use ChatGPT as a museum voice guide. However, the experience lacked a key feature: the live video stream support that OpenAI just made available to ChatGPT users. Instead, I had to upload photos whenever I had questions about something.

Back to Thursday’s OpenAI updates: the ChatGPT demos showed that you can share your phone screen with the AI and ask questions about what’s on it. It’s another way of giving the AI the ability to see what you’re doing.
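If you’re wondering what this “giving the AI eyes” trick looks like from a developer’s perspective, here’s a minimal sketch using OpenAI’s public Python SDK to send a single captured frame, whether a camera photo or a screenshot, to a multimodal model along with a question. To be clear, this is my own illustration under assumptions: the file name and prompt are made up, and the ChatGPT app streams live video rather than one frame at a time.

import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Load a captured frame; the path is hypothetical, and a screenshot
# works exactly the same way as a camera photo here.
with open("frame.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode("utf-8")

# Send the frame plus a question to a vision-capable model.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What am I looking at, and any tips for using it?"},
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"},
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)

Run something like that in a loop over fresh frames and you have a crude approximation of what the ChatGPT app now does continuously, and of what a camera-equipped pair of smart glasses could do hands-free.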

This settled it for me. Any multimodal AI is a great tool to enhance your productivity, but it can get miles better if the AI gets eyes. Smart glasses are the best way to wear the AI’s eyes. The glasses don’t even have to support augmented reality features. AR would be just the cherry on top. 

It turns out Meta was right all along with its Ray-Ban smart glasses project. As such, I think OpenAI and LoveFrom have to bundle a pair of smart glasses with whatever ChatGPT hardware product they end up making. I don’t think they can make standalone smart glasses; the technology isn’t ready for that.

Solos AirGo Vision ChatGPT smart glasses: Front look. Image source: Solos

They could always make just the ChatGPT smart glasses, a pair that connects to the iPhone, Mac, or any other smart device. But in that case, they wouldn’t control the underlying platform. On that note, I did show you a pair of smart glasses earlier this week (above) that put ChatGPT front and center. They might not be a first-party device, but they’re available for preorder.

This is all speculation from this ChatGPT enthusiast. I have no way of knowing what Ive & Co. are actually designing. But smart glasses seem like a key piece of the puzzle. And no, placing a camera on clothing will not work; Humane tried that with the AI Pin and failed miserably. Eyewear is a whole different ball game.

