Discussion in 'Article Discussion' started by bit-tech, 19 Apr 2018.
I think this is going to be a "why the hell did we do that" move for Intel in a few years, akin to missing the boat on the mobile market segment. It's half a decade to a decade too early to release an AR device (display and tracking are nowhere close to ready yet), but cutting the entire group means they will have given up a multi-year head start by the time such devices do become viable.
They're not removing the IP, just stopping development. If the market begins to gain shape, they can re-open, and still be in the lead. You're right about display and tracking not being there, although I think those areas aren't far behind. The biggest question is application ... I can see this used in medical fields and emergency response as the best markets to start with, but not sure how quickly the consumer market emerges. We're already self-absorbed by virtue of our smart phones and social media ... imagine if everything we looked at was augmented to make us feel better about ourselves ...
They could restart development, but they would be X years of non-development behind everyone who did not stop.
They're really far behind; it's just not obvious without doing a good amount of research into how VR and AR work at a low level.
Back in the early 90s there were TONS of tech demos, investment, etc. in VR. The demos (mostly) worked, and everyone knew what needed to be done to turn them into consumer products; indeed, those things are being done today, so the people back then were absolutely correct. It appeared that VR was only a few years away, and even large companies announced (and in a few cases actually released) consumer VR devices with the promise of meeting those expectations. All failed, miserably. Computers were FAR too slow to drive them at acceptable framerates; tracking was too slow to keep up and/or had tiny working volumes; displays were extremely low resolution and blurred during motion; fields of view were tiny due to the need for rectilinear optics (in-software distortion compensation ran into the 'slow computer' problem, and the few HMDs with non-rectilinear optics, like the LEEP, used dedicated custom hardware for lens compensation at great expense); the HMDs themselves were very heavy and cumbersome; and the whole lot cost so much that even the not-completely-irredeemably-terrible devices like the Virtuality system were not something anyone could actually afford to buy.
The problem they hit was that the technology to actually implement VR was not available at the time. Likewise with AR today: we can demo aspects of it, but we are nowhere close to getting everything together at once into a device that is actually viable to do anything with, even though we know what needs to be done. You can have a self-contained HMD with almost-decent position tracking but an incredibly tiny FoV, a single focal plane, super-dodgy real-time segmentation, and only offboard object recognition (HoloLens, and maybe Magic Leap if the whole thing turns out not to be a Theranos-like ***********); you can have a bench-sized immobile display prototype that can do multiple focal planes or full lightfield display over a very small FoV (e.g. Oculus' Focal Surface Display); you can have an HMD with decent FoV and acceptable segmentation, but reliant on external environment tracking and with hilariously enormous optics and still only a single focal plane (e.g. Leap Motion's North Star); etc. Even the individual parts are not of sufficient capability for a viable device, let alone the problem of getting them all together in a single unit that won't snap your neck when worn, has a battery life above single-digit seconds, and doesn't cost 6-7 figures. And then there's the computer you need to drive the thing, which needs to operate within a millisecond-scale motion-to-photons loop, because you're racing to fool the visual system's perception of the real world rather than just your vestibular system. That's going to need a change to how GPU architectures currently work, so there are many years of lead time there too (plus several more years to get from "needs hundreds of watts to run" to "can put it on your head").
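To put some rough numbers on the motion-to-photons argument above: here's a back-of-envelope sketch of why a conventional render pipeline blows the AR budget. Every figure below is an illustrative assumption, not a measurement of any real headset, and the ~5 ms AR budget is just a commonly cited ballpark (world-locked AR overlays tolerate far less latency than the ~20 ms often quoted for VR).

```python
# Back-of-envelope motion-to-photons budget for a hypothetical AR headset.
# All numbers are illustrative assumptions, not measurements.

# Rough tolerance for world-locked AR overlays before drift is visible.
AR_BUDGET_MS = 5.0

# Assumed per-stage latencies (ms) of a conventional render pipeline.
pipeline = {
    "IMU sample + sensor fusion": 1.0,
    "CPU app / scene logic":      2.0,
    "GPU render":                 4.0,
    "compositor + scanout":       5.0,  # ~one 90 Hz refresh at worst case
    "display response":           2.0,
}

total = sum(pipeline.values())
for stage, ms in pipeline.items():
    print(f"  {stage:>26}: {ms:.1f} ms")
print(f"total: {total:.1f} ms vs budget {AR_BUDGET_MS:.1f} ms "
      f"({total / AR_BUDGET_MS:.1f}x over)")

# A conventional pipeline overshoots the budget by roughly 3x even with
# generous assumptions, which is why the architecture has to change
# (e.g. reprojecting with fresh IMU data just before scanout) rather
# than every stage merely getting a bit faster.
```

The point isn't the exact numbers but the shape of the problem: the worst offenders (waiting for a full refresh interval, full-scene re-rendering) are structural, which is what the post means by GPU architectures needing to change.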
But it's all rather moot: AnandTech clarifies that the development arm continues as part of the NTG; only the "... and put it in a consumer device!" division has been dissolved:
I think you're projecting on a larger scale, and with more quality, than I am. That being said, you make one hell of an argument, and I salute you for that!