Rotten to the Multi-Core

Everyone’s trying to figure out the next big software gimmick that’s going to make utilizing your multi-core machines super easy. Let’s face it, having to write code with locks and threads is not going to be it. We’ve had that capability for a long time and only the cream of the crop developers even dared to tread there, and even fewer were actually capable of getting it right. The average programmer, including me on most days when I’m not hyper-caffeinated, needs a better mousetrap to make writing and executing code in parallel an everyday task.

Most language pundits and concurrency gurus think it’s straightforward: mechanisms and metaphors for this kind of stuff have existed for a long, long time. But their solutions turn your programming task on end, stick needles in it, and make it cry for Mama. We need a better way.

Not only that, if we do succeed at making it easy, we have another problem on our hands. If it’s easy, everyone is going to be doing it. If it’s cheap to do, even mundane little tasks will make use of parallelism to boost performance.

And that’s bad, how?

It’s bad because I don’t think the operating system is up for it. Not the resource allocation, mind you, or the ability to switch tasks fast; it’s the problem of deciding if, and how much, parallelism your task should be able to use.

Still not with me?

The operating system schedules work for applications, deciding how much CPU time to dole out to each one, keeping high-priority stuff running more often, and so on. In a world of only applications and processes this is generally a simple task of allocating time: each app has one vote, skewed by some weighting algorithm. When an application can spin up multiple threads the scheduling becomes more difficult, but as long as there is only one CPU it’s really not much different. With multiple CPUs, now you’ve got a really good problem: the apps using more threads get more time.
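To make that last point concrete, here is a toy round-robin scheduler (a made-up simulation, not how any real OS works) that hands out time slices per thread rather than per app:

```python
from collections import Counter

def round_robin(threads, slices):
    """Deal out `slices` time slices round-robin, one per thread.

    `threads` maps an app name to how many threads it has spun up.
    Returns a tally of how many slices each app received.
    """
    # Every thread gets its own slot in the run queue.
    runnable = [app for app, count in threads.items() for _ in range(count)]
    tally = Counter()
    for i in range(slices):
        tally[runnable[i % len(runnable)]] += 1
    return tally

# App A has 4 threads, app B has 1: A ends up with 4x the CPU time,
# even though each app cast "one vote".
print(round_robin({"A": 4, "B": 1}, 100))  # Counter({'A': 80, 'B': 20})
```

Per-thread fairness quietly becomes per-app unfairness the moment thread counts diverge.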

Yet it hasn’t been a problem to date, most likely because even though apps can spin up threads, they generally use them to achieve asynchronous behavior rather than to grab more CPU time. These apps do, however, automatically run faster on multi-core machines because of it.

If some apps become highly parallel, because you no longer need a PhD in concurrency to build one, the OS is going to be flooded with threads per app, each trying to win its app a performance boost. Other apps that cannot be, or have not been, parallelized might become paralyzed, unable to get enough time on their ‘one’ CPU because of all the other, possibly inconsequential, tasks being sped up.

Or maybe I’m just fretting about nothing. Maybe it all just works out. Certainly, I’d rather be in a position right now to have that problem than have every app stuck in the slow lane. :)

What do you think?

Comments

  • Anonymous
    June 05, 2007
    The same way a programmer knows he can't create an infinite loop without his process draining the processor... he must learn that creating lots of threads will do something similar. Besides, what kind of applications need more than two threads?

  • Anonymous
    June 06, 2007
    BTW, to fabiopedrosa: "what kind of applications need more than two threads?" Web servers, database servers, load testing tools, any kind of app that needs a responsive UI yet can do more than one thing in the background at once...

  • Anonymous
    June 06, 2007
    Any app that wants to speed up some number crunching or any other type of parallelizable operation will eventually be using up to as many threads to solve the problem as there are 'cores'.  This might not sound like a lot when the most common multi-core is just dual, but imagine what it will be like when you have 64 cores, 128, 512?  Now that's a lot of potential threads.

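One commenter above suggests that parallel apps will eventually use as many threads as there are cores. A minimal sketch of that idiom in Python (the function names and data are illustrative; note that for pure-Python CPU-bound work you'd want `ProcessPoolExecutor` because of CPython's GIL, a thread pool just keeps the sketch simple):

```python
import os
from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    # Stand-in for a CPU-bound task; imagine real number crunching here.
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data):
    # Size the pool to the machine: one worker per core, whether
    # that's 2 cores today or 512 tomorrow.
    workers = os.cpu_count() or 1
    data = list(data)
    # Stripe the data across the workers.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(crunch, chunks))

print(parallel_sum_of_squares(range(1000)))  # 332833500
```

On a 2-core box this spawns 2 workers; on a 512-core box, 512. Multiply that by every app on the machine doing the same, and the scheduling question in the post above gets real.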