Got it in one. Important things we don't yet know, and that are very important to know BEFORE making a decision:

- Pricing of both systems
- Required specifications for the Vive (if you don't meet the minimum requirements, don't bother getting the HMD. Just don't. You'll have a terrible time. Upgrades come before the HMD.)
- Launch lineups for both

This simply isn't true. The Rift uses the same sort of outside-in optical marker tracking as commercial mocap systems, and those scale effortlessly to large tracking volumes with multiple cameras (every time you see actors on a huge set wearing mocap suits, it's probably Vicon, OptiTrack or similar). Constellation specifically uses the same pulsed marker sequence as the DK2, and as long as each camera knows what point in the sequence the current frame corresponds to (i.e. you run the sync cable between cameras), you can scale from the current two cameras up to as many as you want for the desired tracking volume and occlusion robustness.

It remains to be seen which system is cheaper in volume production. Constellation uses a relatively common image sensor, some OK optics, and some degree of onboard processing (i.e. a low-end SoC to do basic thresholding and compression). Lighthouse is electronically simpler on the emitter side, but has multiple high-speed precision mechanical elements, some actually-decent lasers, and needs robust safety interlocks to achieve a consumer-grade laser safety classification. On the 'marker' end, the two systems are roughly equivalent in terms of components: Constellation has a timer driving pulses through multiple LEDs per tracked object, Lighthouse has multiple photosensors driving one or more timers per tracked object. Electromechanical devices have generally proved more expensive to manufacture than purely electronic ones, but each Lighthouse emitter can potentially cover a larger volume than a lensed camera.
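To make the pulse-sequence idea concrete, here's a minimal sketch of how a Constellation-style system could identify LEDs by their blink codes. The 8-frame codes and the exact matching scheme are my own illustrative assumptions, not Oculus's actual protocol; the point is that a camera synced to the sequence can identify a marker from one cycle directly, while an unsynced one has to try every phase.

```python
# Hypothetical 8-frame on/off ID codes for three LEDs on a tracked
# object. These values are made up for illustration.
MARKER_CODES = {
    0: (1, 0, 1, 1, 0, 0, 1, 0),
    1: (0, 1, 1, 0, 1, 0, 0, 1),
    2: (1, 1, 0, 0, 0, 1, 1, 0),
}

def identify_synced(bits):
    """Camera synced to the pulse sequence (sync cable connected):
    observed bits line up with the codes directly, so one full cycle
    gives an unambiguous marker ID."""
    for marker_id, code in MARKER_CODES.items():
        if tuple(bits) == code:
            return marker_id
    return None

def identify_unsynced(bits):
    """No sync: every cyclic phase must be tried. This is the slower
    acquisition from first principles, which needs at least one whole
    code sequence of observations."""
    bits = tuple(bits)
    for marker_id, code in MARKER_CODES.items():
        for shift in range(len(code)):
            if bits == code[shift:] + code[:shift]:
                return marker_id, shift
    return None
```

With the sync cable, adding cameras costs nothing at identification time; without it, you pay the phase search, which is why longer codes (for more objects) stretch the initial acquisition.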
On the other hand, Lighthouse currently runs its base stations sequentially (one sweeps both axes in turn and then fires a global sync pulse, the next base station takes that as its cue to start its own sweeps, and so on), so the tracking update rate drops as more base stations are added. Constellation's cameras are globally synchronised, so its update rate scales with the number of objects tracked rather than the number of cameras. Adding more objects could be accomplished by increasing the pulse-code length (or reusing pulse codes) at the expense of initial constellation-acquisition time. You then just hope that nothing is occluded from ALL cameras long enough for the IMU-based position tracking to drift too far (1 second is enough for a few metres of drift, so we're talking about sub-second total occlusions here) and force reacquisition from first principles, which needs to wait for a whole code sequence.
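A back-of-the-envelope way to see why occlusions have to stay sub-second: during an occlusion, position comes from double-integrating the accelerometer, so any constant residual acceleration error b turns into x = ½·b·t² of position error. The error figures below are illustrative assumptions, not measured IMU specs.

```python
# Dead-reckoning drift from a constant acceleration error, double-
# integrated over time: x = 0.5 * b * t**2. Quadratic in t, which is
# why even short occlusions matter.

def drift_metres(accel_error_mps2: float, seconds: float) -> float:
    """Position drift (m) from a constant acceleration error (m/s^2)
    integrated twice over the given duration (s)."""
    return 0.5 * accel_error_mps2 * seconds ** 2

# Example (assumed numbers): a 0.5 m/s^2 residual error, e.g. a small
# orientation mis-estimate leaking gravity into the horizontal axes,
# gives 0.25 m of drift after 1 s and over 2 m after just 3 s.
```

The quadratic growth is the key point: halving the allowed occlusion time cuts worst-case drift by a factor of four, which is what makes short total occlusions tolerable and longer ones catastrophic.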