Apple has big GenAI plans for Vision Pro, claims analyst


Apple may be preparing to blow everybody’s mind with a next-generation Vision Pro that puts artificial intelligence on your head, making your entire reality an actionable space.

That’s the big line within the latest set of claims from Apple analyst Ming-Chi Kuo. He says the latter half of 2025 may see Apple introduce a similar-looking Vision Pro model with a much faster M5 processor.


A release later in 2025 also suggests a brand-new visionOS 3.0, which in turn implies a lot of Apple Intelligence development time to burnish Apple’s bid to make spatial computing a reality.

Apple has already told us it’s working on a version of Siri with more contextual intelligence that it hopes to ship in 2025, so the two efforts dovetail nicely.

The combination of visionOS and AI will deliver profound experiences.

So, what else did the analyst claim?

I think it’s time for a list:

Apple Silicon M5 processor. (What a busy year the silicon team are facing).
Mass production in 2H 2025.
Other components won’t change much in comparison to the existing Vision Pro, which may help lower costs a little.
Supply chain remains the same.
Cost and weight reductions and improved battery life remain on the road map for Apple’s visionOS hardware development, but they aren’t expected in the 2025 device.
Apple Intelligence + Spatial Computing = A far more intuitive UI.

The big improvement in the model will be the combination of Apple Intelligence with Spatial Computing. That’s a combination that effectively turns your entire existence into a digital one, with all the pros and cons that transformation brings.

visionOS 2 got no stage time at the iPhone reveal, but it had good improvements all the same

The whole shebang

“Combining eye tracking, gesture control, and Apple Intelligence should provide a better user experience for spatial computing,” the analyst says. He’s particularly interested in the impact of text-to-video AI models.
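To make that concrete, here is a minimal sketch using today’s shipping visionOS SwiftUI and RealityKit APIs, showing how a gaze-and-pinch tap on an object could hand that object to a generative model for a description. The describe(entity:) helper is entirely hypothetical (a stand-in for whatever Apple Intelligence interface eventually ships), so treat this as an illustration of the interaction pattern rather than a real API.

```swift
import SwiftUI
import RealityKit

struct ObjectLookupView: View {
    @State private var caption = "Look at an object and tap to ask about it"

    var body: some View {
        RealityView { content in
            // Placeholder content; a real app would load a scanned room or scene.
            let sphere = ModelEntity(mesh: .generateSphere(radius: 0.1),
                                     materials: [SimpleMaterial(color: .blue, isMetallic: false)])
            sphere.name = "Demo sphere"
            sphere.components.set(InputTargetComponent())
            sphere.generateCollisionShapes(recursive: true)
            content.add(sphere)
        }
        // On Vision Pro a "tap" is driven by eye tracking plus a pinch,
        // which is exactly the eye-and-hand combination Kuo describes.
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    caption = describe(entity: value.entity)
                }
        )
        .overlay(alignment: .bottom) {
            Text(caption)
                .padding()
                .glassBackgroundEffect()
        }
    }

    // Hypothetical helper: stands in for an on-device generative model that
    // would return a description of, or suggested actions for, the tapped object.
    private func describe(entity: Entity) -> String {
        "You selected: \(entity.name.isEmpty ? "an unnamed object" : entity.name)"
    }
}
```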

I think it can go much further.

After all, if you can wear a set of these things and actually work on a virtual Mac using your voice, gesture, and eye movement, that’s a whole new computing dynamic.

Equally, the fact that you will be able to walk into a room (virtual or otherwise) and interact with every object in that room in forms supplemented by generative AI has implications for everyone.

Consumers may get a lot of fun and some convenience out of that, but enterprise users will be particularly excited. Think of the opportunities in training, education, architecture, medicine, engineering, and more. Think how, used in conjunction with robotic systems, visionOS with GenAI could make it possible for humans to travel to the very depths of the ocean and interact with – and recognize – what they find there.

Even that may turn out to be just a few more steps towards an even more augmented reality.

An interesting look. Thanks to Charline Tetiyevsky and Flickr

What’s that coming over the hill, is it a monster?

Apple has quite evidently pushed vast quantities of its internal development resources into building Apple Intelligence this year.

That’s why Apple Intelligence is the big focus of its marketing, and also, I suspect, why some products – including visionOS – received a little less love this year than perhaps some expected.

That temporary lack of affection shouldn’t be mistaken for anything other than that, however – it is just about prioritization.

Now that Apple has Apple Intelligence 1.0, don’t be too surprised to see AI 2.0 as a set of tools, some of which go absolutely gangbusters on Vision Pro running visionOS 3.

I suspect this will be a transformative release.

Please follow me on LinkedIn, Mastodon, or join me in the AppleHolic’s bar & grill group on MeWe.
