eyt*
March 22, 2006

Adapting for Concurrency...

Herb Sutter gave a talk on concurrency at PARC earlier this week, and the Audio, Video, and Slides are available.

Whereas even up to a few years ago, multiple-processor (and multiple-core) machines were only available for niche markets, the talk highlights the fact that very soon multi-core machines will be everywhere. This is very exciting; however, we must change how we develop our applications to take advantage of it. From this perspective, it is not simply a matter of using the existing tools that we have, such as locks and threads, but also of expanding our tools in order to take advantage of this new hardware.

Herb gives the analogy that this is similar to the early 1990s, when Object Orientation was new. While you could write an object-oriented application in C (for example, consider the FILE structure and its associated functions), it is far easier to write such applications in languages that have native support for objects. Herb states that while we can currently write multithreaded software with our existing tools, writing correct multithreaded software is hard, pointing out that some class libraries and even some of his own published examples have been incorrect.

During the discussion, Herb mentioned something that I was unaware of. A while ago, I discussed Double-Checked Locking and how it was broken, and later Scott Meyers and Andrei Alexandrescu's C++ and the Perils of Double-Checked Locking. One part that I had not noticed in Scott and Andrei's paper is that, thanks to the reworked memory models, the Double-Checked Locking pattern once again works in Java 1.5/5.0 (JSR 133) and .NET 2.0 (CLI 2.0 Section 12.6). The solution is to declare the field with the keyword volatile, as follows:

  private static volatile Singleton instance = null;

  public static Singleton get() {
    if ( instance == null ) {
      synchronized ( Singleton.class ) {
        if ( instance == null ) {  // the second ("double") check
          instance = new Singleton();
        }
      }
    }
    return instance;
  }

Again, this only works in Java 1.5 and .NET 2.0 because of the changes in the memory model. There is some discussion that the ordering restrictions on volatile may keep it from being much faster than plain synchronization, but there is a solution that uses memory barriers instead of volatile that could get around this in .NET. Of course, some of this feels like premature optimization, as the performance of synchronization and volatile is something that compiler vendors are likely to be working on. I guess it's like the free lunch.
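As an aside, in Java there is a way to get lazy, thread-safe initialization without volatile or explicit synchronization at all: the initialization-on-demand holder idiom, which relies on the JVM's guarantee that class initialization is itself thread-safe. This is just a sketch of that alternative, not something from the talk:

```java
public class Singleton {
    private Singleton() { }

    // Holder is not initialized until get() is first called; the JVM
    // guarantees that class initialization happens exactly once and is
    // safely published to all threads.
    private static class Holder {
        static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton get() {
        return Holder.INSTANCE;
    }
}
```

Every call to Singleton.get() then returns the same instance, with no locking on the fast path.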

And speaking of which, if you haven't really thought about having your desktop application take advantage of a 32-core machine, this is a good discussion to get you thinking about that, and why you should.


