
News Rumour Control: Intel's Light Peak to ditch light

Discussion in 'Article Discussion' started by Lizard, 13 Dec 2010.

  1. Altron

    Altron Well-Known Member

    Joined:
    12 Dec 2002
    Posts:
    3,186
    Likes Received:
    61
    Yes, but you still have to drive to the train station and find parking at their garage.

    My point is the intermediate steps required for this conversion. Your mobo and peripherals and expansion cards are all electrical on the inside. It's very easy for them to communicate over an electronic interface, because their internal circuitry is all electrical. They can just keep pushing electrons around, no need to convert to a fundamentally different interface. With LightPeak, your circuitry is still electrical. You're just converting the electrical signals to optical ones, transmitting the optical ones, then converting them back to electrical so you can use them.

    I got my "panties in a twist" when a previous LightPeak article claimed it as an advance in "optical computing". LightPeak is not optical computing. No calculations are being done with photons. It's not computing anything, just transmitting data.

    These letters I type are converted to binary by my computer, sent to your computer, and converted back into English. You and I are communicating in English. We can't speak binary, but our communications are converted to and from binary by our computers.

    LightPeak is the same idea. The computers aren't computing optically. They're computing electrically, then using a piece of equipment to convert it to optical, then using a similar piece of equipment to convert it back to electrical.
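That round trip can be sketched in a few lines of Python (the message text is just an example): the "computing" happens on the text, and the binary on the wire is purely a transport representation, exactly the role the photons play in LightPeak.

```python
# Two machines "speak English"; the wire speaks binary.
# Encoding to bits and back is transport, not computation --
# the same role the electrical->optical->electrical
# conversion plays in LightPeak.

message = "LightPeak is not optical computing"

# Sender: convert the text into the wire's representation (bits).
wire_bits = "".join(f"{byte:08b}" for byte in message.encode("ascii"))

# Receiver: convert the bits back into text.
received_bytes = bytes(
    int(wire_bits[i : i + 8], 2) for i in range(0, len(wire_bits), 8)
)
received = received_bytes.decode("ascii")

print(received == message)  # True: the round trip changes nothing
```

Nothing about the message is computed in binary; the wire format is invisible to both ends.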

    My aim isn't to criticize LightPeak. It seems fine and dandy. My aim is to clear up some myths about it. It's not fundamentally better than copper over the short distances the average user encounters (<100m), and it's no more a revolutionary advance in computing than being able to convert to and from binary was a revolutionary advance for the English language.

    Also, I LOL'ed at the picture with the rainbow of light. Optical communication occurs at 1.3 microns and 1.55 microns, both well outside the range of human vision. Those windows are used because fused silica fiber has a dispersion minimum near 1.3 microns and an attenuation minimum near 1.55 microns, which reduces pulse broadening and other signal degradation effects.
     
  2. Deadpunkdave

    Deadpunkdave ...why you need a 20-sided die

    Joined:
    9 May 2009
    Posts:
    193
    Likes Received:
    8
    It shouldn't be forgotten that half of Light Peak is making a single technology do everything transmission-wise, replacing both USB and HDMI by having a multi-protocol controller. Have a read of http://techresearch.intel.com/ProjectDetails.aspx?Id=143 and click around if you're interested. This is why this rumour kind of makes sense, though the name certainly won't if they launch with copper.

    Also, no one beats the BBC for completely nonsensical pictures of optical fibres:
    [image]

    and even better...
    [image]

    I kid you not.
     
  3. tugboat

    tugboat New Member

    Joined:
    10 Jul 2010
    Posts:
    2
    Likes Received:
    0
    @ Altron:

    In some respects your analogy is correct, but it is too narrow in scope and not, I feel, quite accurate.

    Every current data communication standard/protocol introduces latencies, from controllers to drivers to the quality of the connections, be they soldered or mechanical, shielded or not. So the latency introduced by converting between signalling schemes on the way from point A to point B is, I think, a wash. Personally, I think Light Peak, or whatever the final version is called, is the answer for completely overhauling the way our computers do business in the future.

    Consider that it can, in one fell swoop, exceed the capacity of virtually every bus on the mobo: SATA, PCIe, memory, USB, all of it. Talk about speeding things up. Regarding the comment above by Play_Boy_2000: what is wrong with SATA exceeding ten-year-old memory bus speeds? Hell, let's put the memory on Light Peak also.

    The main point I'm trying to make is that there are at least three standards that could be replaced now: SATA, USB, and PCIe. There are others. They could all be rolled into one now (relatively speaking, time-wise). Why should you want to pay for a couple more iterations of USB or SATA just because they can make it past a couple of gigabits per second? Personally, I hate nickel-and-dime tech upgrades. I wouldn't hesitate to dump legacy USB, SATA, and whatever else. Manufacturers can continue to make legacy mobos and components, and you can bet your ass they would as long as there was a dollar in it. Those building new could go new tech and never look back.

    I also submit to you that the sooner the PC world accepts this new technology, the sooner we can keep it from getting locked up by the likes of Apple or Sony. Hell, just look at Blu-ray: as good as it is, it will always be too damned expensive for what it is. And Sony, I think, would rather die than let the price go down and see it become a mainstream component. I won't even talk about Apple, so as not to start a flame war.
     
  4. Altron

    Altron Well-Known Member

    Joined:
    12 Dec 2002
    Posts:
    3,186
    Likes Received:
    61
    I'd rather have my dual-channel DDR3 that gets over 170 gigabits per second through two inches of copper than memory connected via a 10-gigabit fiber.

    It's a more complicated connection. Instead of traces on one PCB meeting a pin that meets traces on another PCB, it's a PCB trace feeding a miniaturized light source, transmitting through a fiber into a small optical detector, which then drives another PCB trace.

    Having one interface for everything is a poor idea. Different interfaces are optimized for different tasks.

    High data rate, long distance, low cost. Pick any two.

    Look at PCIe x16. It's capable of an astonishing 128 Gbps with the new 3.0 standard, and it's found on a $40 motherboard. It's a cheap interface that's very, very fast, because it only has to go a couple of inches.

    Look at USB. It's a pathetic 480 Mbps, but it's also very inexpensive and can signal over much longer distances: 5m per cable segment, longer with hubs acting as repeaters.
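    Those figures are easy to sanity-check with back-of-the-envelope arithmetic. A sketch in Python, assuming dual-channel DDR3-1333, PCIe 3.0 x16 at 8 GT/s with 128b/130b line coding, and USB 2.0 high speed (the module and lane speeds are my assumptions, not quoted from the thread):

```python
# Rough peak-bandwidth arithmetic for the interfaces discussed above.
# Assumed parts: dual-channel DDR3-1333, PCIe 3.0 x16, USB 2.0 high speed.

GBPS = 1e9

# Dual-channel DDR3-1333: 2 channels x 64-bit bus x 1333 MT/s
ddr3 = 2 * 64 * 1333e6 / GBPS               # ~170.6 Gbps

# PCIe 3.0 x16: 16 lanes x 8 GT/s, minus 128b/130b encoding overhead
pcie3_x16 = 16 * 8e9 * (128 / 130) / GBPS   # ~126 Gbps

# USB 2.0 high speed: 480 Mbps raw signalling rate
usb2 = 480e6 / GBPS                         # 0.48 Gbps

for name, rate in [("DDR3 dual-channel", ddr3),
                   ("PCIe 3.0 x16", pcie3_x16),
                   ("USB 2.0", usb2)]:
    print(f"{name}: {rate:.1f} Gbps")
```

The ordering is the point: the shorter and more specialised the copper link, the faster it goes, which is the "pick any two" trade-off in action.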

    Future advances are not going to have PC components linked by fiber (until processors are optical and not electrical). They're going to have very fast and very short copper links.

    Look at older motherboards. They have a processor, a northbridge that has a memory controller and handles expansion slots, a southbridge, and a graphics processor.

    The current generation of processors has all of that functionality integrated onto the die. The CPU is on it. The memory controller is on it. A significant amount of L3 cache is on it. A basic graphics processing unit is on it. The PCIe controller is on it.

    We're taking more and more functions onto the die, because it makes them fast. That's where the tech is heading. Make the memory controller and PCIe controller on the die. Put a GPU on the die. Put more and more cache on the die.

    Fiber optics have one great area of strength, and that's long distances and networking.

    I'd like to see a standard that integrates the functionality of USB, Ethernet, and Displayport. Give me the ability to stream audio and video, send and receive data, and control my computer with a single cable. Make it capable of 100m without repeaters, and make it inexpensive. Fiber can do it, and that's what we need.

    What we don't need is fiber inside of the computer connecting one circuit board to another, until the circuits themselves are optical.
     
  5. Landy_Ed

    Landy_Ed Combat Novice

    Joined:
    6 May 2009
    Posts:
    1,428
    Likes Received:
    39