Hardware Editing Memory and Multi-core Programming

Discussion in 'Article Discussion' started by Tim S, 8 Nov 2009.

  1. Tim S

    Tim S OG

    Joined:
    8 Nov 2001
    Posts:
    18,882
    Likes Received:
    89
  2. eek

    eek CAMRA ***.

    Joined:
    23 Jan 2002
    Posts:
    1,600
    Likes Received:
    14
    Another article, another mention of Bing!

    From what I can see, all this does is simplify the use of locks? I'd still prefer to use locks over this; you have more control!
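
    For what it's worth, here's a minimal sketch of the difference being discussed, in C++ rather than whatever syntax Microsoft actually ships. The explicit-lock version gives you the control described above; the transactional version just declares the region atomic and leaves conflict detection to the runtime. The __transaction_atomic block is GCC's experimental TM extension (built with -fgnu-tm) and stands in here purely for illustration.

        #include <mutex>

        static int balance = 0;
        static std::mutex balance_mutex;

        // Explicit locking: you pick the lock, its scope and its granularity.
        void deposit_locked(int amount)
        {
            std::lock_guard<std::mutex> guard(balance_mutex); // held to end of scope
            balance += amount;
        }

        // Transactional style: declare the region atomic and let the runtime
        // detect conflicts and retry. (GCC's -fgnu-tm extension, illustrative only.)
        void deposit_tm(int amount)
        {
            __transaction_atomic {
                balance += amount;
            }
        }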
     
  3. geekboyUK

    geekboyUK What's a Dremel?

    Joined:
    1 Sep 2003
    Posts:
    17
    Likes Received:
    0
    Exactly the same problems as were solved years ago... Anyone remember the Transputer?
     
  4. mi1ez

    mi1ez Modder

    Joined:
    11 Jun 2009
    Posts:
    1,624
    Likes Received:
    105
    I'll have to read this without a hangover, methinks!
     
  5. Hustler

    Hustler Minimodder

    Joined:
    8 Aug 2005
    Posts:
    1,039
    Likes Received:
    41
    I always thought adding extra cores was an easy way out of the GHz race for the chip makers...

    Shift the burden to the software guys and make life easier for the engineers and fab plants.
     
  6. shadow12

    shadow12 I lie

    Joined:
    30 Sep 2007
    Posts:
    231
    Likes Received:
    1
    Anyone who paid attention at uni (Computer Science) in the last decade will have been taught threading. Threading was pushed as the way to program, the idea being that when we moved to multiple cores your programs would execute in parallel across them. At the time, though, threading introduced overheads on single-CPU systems, so the practice was often abandoned.
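
    As a rough sketch of why (C++11-style code, not what we were taught back then): the same split-the-work-across-threads pattern that pays off on a multi-core chip was pure overhead on a single-CPU box, because the threads just took turns on one core.

        #include <numeric>
        #include <thread>
        #include <vector>

        // Sum a big array by splitting it across n_threads workers (assumes
        // n_threads >= 1). On multiple cores the chunks genuinely run in
        // parallel; on a single CPU the threads are merely time-sliced, so
        // creation and context-switch costs make this slower than a plain loop.
        long long parallel_sum(const std::vector<int>& data, unsigned n_threads)
        {
            std::vector<long long> partial(n_threads, 0);
            std::vector<std::thread> workers;
            const std::size_t chunk = data.size() / n_threads;

            for (unsigned t = 0; t < n_threads; ++t) {
                const std::size_t begin = t * chunk;
                const std::size_t end = (t + 1 == n_threads) ? data.size() : begin + chunk;
                workers.emplace_back([&partial, &data, t, begin, end] {
                    partial[t] = std::accumulate(data.begin() + begin,
                                                 data.begin() + end, 0LL);
                });
            }
            for (auto& w : workers) w.join();
            return std::accumulate(partial.begin(), partial.end(), 0LL);
        }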
     
  7. Sebbo

    Sebbo What's a Dremel?

    Joined:
    28 May 2006
    Posts:
    200
    Likes Received:
    0
    Yes, it does seem to simplify the use of locks and semaphores, but like they said, it's something a lot of people get wrong a lot of the time. If these can be simplified, that means less code to write and less time spent debugging to make sure your locks were done correctly. Much welcomed, imo.

    Any indication of when we'll see this in VS? Just VS2010, or will we see it in 2008 and perhaps 2005 as well?
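
    For a concrete example of the kind of thing people get wrong a lot of the time (a plain C++ sketch, nothing to do with the VS feature itself): take two locks in opposite orders from two threads and you have a deadlock, which is exactly the class of bug a transactional block makes impossible to write.

        #include <mutex>

        std::mutex m1, m2;

        // Classic bug: thread A calls transfer_bad(m1, m2) while thread B calls
        // transfer_bad(m2, m1). Each grabs its first lock, then waits forever.
        void transfer_bad(std::mutex& from, std::mutex& to)
        {
            std::lock_guard<std::mutex> a(from);
            std::lock_guard<std::mutex> b(to);   // potential deadlock here
            // ... move the data ...
        }

        // Fix: std::lock acquires both mutexes deadlock-free, whatever order
        // the callers name them in.
        void transfer_ok(std::mutex& from, std::mutex& to)
        {
            std::lock(from, to);
            std::lock_guard<std::mutex> a(from, std::adopt_lock);
            std::lock_guard<std::mutex> b(to, std::adopt_lock);
            // ... move the data ...
        }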
     
  8. Skill3d

    Skill3d Minimodder

    Joined:
    29 Sep 2005
    Posts:
    205
    Likes Received:
    1
    amen to that o_0
     
  9. ano

    ano 8-bit Bandit

    Joined:
    11 Sep 2009
    Posts:
    30
    Likes Received:
    0
    C'mon guys. Why didn't this go to an editor before posting? Four typos in the first three paragraphs make it awkward to read.

    threatening to taking over the world

    Rather than writing a instructions that are

    "thread' - single or double quotes, choose your style and stick to it.

    This requires further steps need to be taken
     
  10. wyx087

    wyx087 Homeworld 3 is happening!!

    Joined:
    15 Aug 2007
    Posts:
    11,994
    Likes Received:
    714
    Why is this so hard? All embedded programmers, such as all the electronic engineers in ECS at Southampton uni, understand and know how to implement concurrency at a hardware level.

    Software programmers are too used to writing their code sequentially. Employ a couple of embedded systems engineers and they'd tell you how to take advantage of the parallelism available in any embedded chip.
     
  11. Redsnake77

    Redsnake77 Useless Idiot

    Joined:
    7 Jun 2009
    Posts:
    282
    Likes Received:
    3
    What on earth is that red branch tree thing in the Barcelona Supercomputing Centre picture?
     
  12. Pelihirmu

    Pelihirmu What's a Dremel?

    Joined:
    2 Jan 2005
    Posts:
    292
    Likes Received:
    0
    It's top design, Microsoft style ^^
     
  13. chrisuk

    chrisuk What's a Dremel?

    Joined:
    23 Jun 2005
    Posts:
    57
    Likes Received:
    0
    wuyanxu, there is a huge difference between the messing around Electronic Engineers do and what Software Engineers do... the two are not interchangeable; they are totally different skill sets.
     
  14. BrightCandle

    BrightCandle What's a Dremel?

    Joined:
    30 Apr 2009
    Posts:
    74
    Likes Received:
    5
    Let's say, for argument's sake, that programmers did manage to find a way to write multi-threaded programs without increasing the bug count through some technique. It's unlikely, because we've been struggling with multicore programming for over 40 years, but let's hope for a breakthrough beyond languages such as, say, Erlang.

    Even in this new utopia of computing marvel we have a problem. Not all algorithms behave well with multiple cores. The major algorithms fall into four general categories:
    1) There are, thankfully, a set of embarrassingly parallel algorithms which will scale for the coming years to thousands of cores without losing speed. They have almost no serial element at all, are driven entirely by the CPU cycles applied, have no locks, and just work well with multiple cores. One day these problems will struggle to fill the cores (screen drawing hits a problem once you have a CPU per pixel, for instance), but that problem is a decade away at least.

    2) A set of mostly parallel problems where something like 95% of the algorithm will run in parallel, but where the rest must be executed in a lock or atomically. That means 5% of the algorithm's time has to run in a single thread, so even assuming an infinite number of cores the best speedup we could get is around 20x. 1,000 cores won't make any performance difference to these problems; they are dominated by the serial part of the algorithm once you have enough cores. We'll see the problem early on: 20 CPUs on this problem will only provide around a 10x speedup (a quick check of the arithmetic follows the list). Many of the programs we're seeing running on multiple cores today fall into this category.

    3) Then there is a class of algorithms that naturally fall into a tree structure, such as, say, a large arithmetic problem. Right now they might be O(n) algorithms, but with multiple cores they become O(log n) algorithms. That is still far from the linear speedup we want from adding additional cores. These problems have diminishing parallelism, such that the height of the tree dominates processing time, assuming we have enough cores to cope with the maximum width of the problem. Don't worry about the details, but 1,000 cores is only going to provide around 10x the performance of 1 core, and 2,000 cores just 11x. Serious diminishing returns on these algorithms, and yet they will consume all those cores to achieve that speedup, though only for some of the time (there's a sketch of such a tree reduction at the end of this post).

    4) Algorithms that are simply serial and have no obvious parallel implementation at all.
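
    The numbers in point 2 are just Amdahl's law: with a parallel fraction p on n cores the speedup is 1 / ((1 - p) + p / n). A quick check of the arithmetic, as promised above:

        #include <cstdio>

        // Amdahl's law: speedup on n cores when a fraction p of the work runs
        // in parallel and the remaining (1 - p) stays serial.
        double amdahl(double p, double n) { return 1.0 / ((1.0 - p) + p / n); }

        int main()
        {
            const double p = 0.95;                                // 95% parallel, as in point 2
            std::printf("20 cores:   %.1fx\n", amdahl(p, 20));    // ~10.3x
            std::printf("1000 cores: %.1fx\n", amdahl(p, 1000));  // ~19.6x
            std::printf("limit:      %.1fx\n", 1.0 / (1.0 - p));  // 20x
        }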

    In addition to all this, parallel algorithms cost more to run in general: on a single core they take considerably more time to execute, and they are notoriously hard to debug. Multicore CPUs won't provide the necessary computing power going forward except on a subset of problems, so it really will become a big challenge to keep making programs faster. We'll soon be measuring progress by algorithm breakthroughs rather than by clock speed, and algorithm breakthroughs are rare and getting rarer. With clock speeds stagnant, we're at the end of ever-increasing performance every 18 months. TM is not a silver bullet for this problem; it simply makes one small class of problems easier to program, and that, although a problem, isn't really the reason why multicore programming hasn't taken off. It's because many problems don't lend themselves to multiple cores, as the algorithms don't get faster.
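
    And the tree reduction from point 3, as promised, in minimal form: summing n values pairwise takes ceil(log2 n) rounds no matter how many cores you have, so the height of the tree, not the core count, sets the floor on runtime. (Written serially here; each inner loop is the part that would be spread across cores.)

        #include <vector>

        // Pairwise tree reduction: every round halves the number of values.
        // With one core per pair, each round costs one time step, so the total
        // is ceil(log2(n)) steps (the height of the tree), regardless of how
        // many further cores are available.
        long long tree_sum(std::vector<long long> v)
        {
            while (v.size() > 1) {
                std::vector<long long> next;
                for (std::size_t i = 0; i + 1 < v.size(); i += 2)
                    next.push_back(v[i] + v[i + 1]); // one pair per core, in parallel
                if (v.size() % 2 == 1)
                    next.push_back(v.back());        // odd element carried forward
                v.swap(next);                        // one level of the tree per round
            }
            return v.empty() ? 0 : v[0];
        }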
     
    perplekks45 and B3CK like this.
  15. geekboyUK

    geekboyUK What's a Dremel?

    Joined:
    1 Sep 2003
    Posts:
    17
    Likes Received:
    0
    Seems to be a fair amount of negativity towards programmers - perhaps I'm being over-sensitive....

    A lot of us do know how to write highly-threaded applications and have done for many years.
     
  16. B3CK

    B3CK Minimodder

    Joined:
    14 Jun 2004
    Posts:
    402
    Likes Received:
    3
    +rep BrightCandle, your post was easy to read and well thought out.

    Isn't Intel working on a way to combine cores, so that an optimised number of cores runs the threads: two cores working on one thread while two other cores run one thread each, depending on how heavily the cores are loaded?
    I thought I heard an Intel rep at CES saying this is what the i5 and i7 CPUs could do.
     
  17. Star*Dagger

    Star*Dagger What's a Dremel?

    Joined:
    30 Nov 2007
    Posts:
    882
    Likes Received:
    11
    There is no editing or proofreading, period.
     
  18. perplekks45

    perplekks45 LIKE AN ANIMAL!

    Joined:
    9 May 2004
    Posts:
    7,552
    Likes Received:
    1,791
    ... is very interesting. rep++

    It's all nice and shiny marketing-wise when it comes to multi-CPU/GPU algorithms, but right now, as we all know, there aren't too many programs [other than encoding software and number crunchers] that REALLY benefit from having more cores.

    Give BC's post a read, I think it pretty much sums it up nicely. :thumb:
     
  19. karx11erx

    karx11erx What's a Dremel?

    Joined:
    17 Dec 2004
    Posts:
    124
    Likes Received:
    1
    What's new about locking critical code areas against concurrent CPU access? OpenMP, btw, does the same with the omp atomic (single statement) and omp critical (multiple statements) pragmas. That's parallel programming basics.

    BrightCandle, your post sounds all nice and dandy in theory, but there are a lot of real-world problems that lend themselves very well to parallelization, and often they are very simple to implement as parallel code. No big deal. You should be careful not to throw out the baby with the bath water.
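
    For anyone who hasn't met them, the two pragmas look like this (standard OpenMP in C/C++, built with e.g. -fopenmp; the variables are made up for the example):

        #include <cstdio>
        #include <omp.h>

        int main()
        {
            long long sum = 0;
            int histogram[10] = {0};

            #pragma omp parallel for
            for (int i = 0; i < 1000000; ++i) {
                // omp atomic: protects a single update, typically via hardware atomics.
                #pragma omp atomic
                sum += i;

                // omp critical: protects a multi-statement block with a runtime lock.
                #pragma omp critical
                {
                    int bucket = i % 10;
                    histogram[bucket] += 1;
                }
            }
            std::printf("sum = %lld, bucket 0 = %d\n", sum, histogram[0]);
            return 0;
        }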
     
    Last edited: 9 Nov 2009
  20. bb_vb

    bb_vb What's a Dremel?

    Joined:
    27 Sep 2004
    Posts:
    4
    Likes Received:
    0
    The trick to beating Amdahl's law (what you describe in point 2) is to increase your problem size as the number of cores increases. You may not be able to do the same thing faster, but you can do more for no (or little) added cost. If your problem size scales, you will commonly see quite good parallel scalability (until you run out of memory :duh:).
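
    That trick has a name, Gustafson's law: if the serial fraction s of the run stays fixed while the parallel workload grows with the machine, the scaled speedup is s + (1 - s) * n instead of Amdahl's 1 / (s + (1 - s) / n). A minimal sketch:

        // Gustafson's law: scaled speedup when the parallel part of the job
        // grows with the machine and the serial part stays fixed.
        //   s = serial fraction, n = number of cores
        double gustafson(double s, double n) { return s + (1.0 - s) * n; }
        // e.g. gustafson(0.05, 1000) is roughly 950x, versus roughly 19.6x under
        // Amdahl's law for the same 5% serial fraction at a fixed problem size.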

    While many current applications will wilt in a massively parallel environment, IMHO people aren't going to just lie back and say "oh well, too bad, this is as fast as things will ever be". If there's processing power to be used, they will use it, even if it means inventing new applications.

    Regarding the article, transactional memory sounds like a useful model, but it doesn't exactly scream "scalability" to me. I'd like to say, though, that I'm really happy to see this kind of article on Bit :thumb:. The whole desktop-parallelism issue is rapidly becoming critical, and it's great to see some discussion of the finer points.
     