Eric Y. Theriault: blog
eyt*

March 22, 2006

Adapting for Concurrency...

Herb Sutter gave a talk on concurrency at PARC earlier this week, and the Audio, Video, and Slides are available.

Whereas even a few years ago multiple-processor (and multiple-core) machines were available only to niche markets, the talk highlights the fact that very soon multiple-core machines will be everywhere. This is very exciting; however, we must change how we develop our applications to take advantage of it. From this perspective, it is not simply a matter of using the tools we already have, such as locks and threads, but also of expanding our tools and languages so that concurrency is easier to express correctly.

Herb gives the analogy that this is similar to the early 1990s, when Object Orientation was new. While you could write an object-oriented application in C (consider, for example, the FILE structure and its associated functions), it is far easier to write such applications in languages that have native support for objects. Similarly, Herb states that while we can write multithreaded software with our existing tools, writing correct multithreaded software is hard, pointing out that some published class libraries, and even some of his own examples, have been incorrect.

During the discussion, Herb mentioned something that I was unaware of. A while ago, I discussed Double Check Locking and how it was broken, and later Scott Meyers and Andrei Alexandrescu's C++ and the Perils of Double-Checked Locking. One part that I had not noticed in Scott and Andrei's paper is that, thanks to the reworked memory models, the Double-Checked Locking pattern once again works in Java 1.5/5.0 (JSR 133) and .NET 2.0 (CLI 2.0, Section 12.6). The solution is to declare the instance with the keyword volatile, as follows:

  private static volatile Singleton instance = null;

  public static Singleton get() {
    if ( instance == null ) {
      synchronized ( Singleton.class ) {
        if ( instance == null ) {   // re-check: another thread may have won the race
          instance = new Singleton();
        }
      }
    }
    return instance;
  }

Again, this only works in Java 1.5 and .NET 2.0 because of the changes in the memory models. There is some discussion that the constraints volatile imposes may mean this is no faster than simply synchronizing the accessor, but there is a solution that uses explicit memory barriers instead of volatile that could get around this in .NET. Of course, some of this feels like premature optimization, as the performance of synchronization and volatile are things that compiler vendors are likely to be working on. I guess it is part of the free lunch being over.
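As an aside of my own (this is not from Herb's talk): if all you need in Java is a lazily initialized singleton, the initialization-on-demand holder idiom sidesteps the volatile-versus-synchronized question entirely, since the JVM already guarantees that class initialization is thread-safe and happens exactly once:

  public class Singleton {
    private Singleton() { }

    // The JVM initializes Holder only on the first call to get(),
    // and class initialization is serialized by the class loader.
    private static class Holder {
      static final Singleton INSTANCE = new Singleton();
    }

    public static Singleton get() {
      return Holder.INSTANCE;
    }
  }

Once initialized, get() is an ordinary field read, with no lock and no volatile semantics to pay for.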

And speaking of which, if you haven't really thought about having your desktop application take advantage of a 32-core machine, this is a good discussion to get you thinking about that, and about why you should.

Long-Lived IIOP.NET Objects...

I have been working with IIOP.NET, a CORBA toolkit for .NET, a bit more since Using Multiple Vendor's ORBs with Name Services. While my test application was working great, my actual application was not.

The server component started up, instantiated a CORBA object, and saved the IOR into the registry. My client component would then start up, take the IOR from the registry, send a message to the CORBA server, do some data processing for about 15 to 30 minutes, and send another message to the CORBA server. The issue I ran into is that the first message usually made it to the server, but the second one rarely did, failing with the error message:

  CORBA system exception : omg.org.CORBA.OBJECT_NOT_EXIST, completed: Completed_No minor: 0
  omg.org.CORBA.OBJECT_NOT_EXIST: CORBA system exception : omg.org.CORBA.OBJECT_NOT_EXIST, completed: Completed_No minor: 0

  Server stack trace:
  Exception rethrown at [0]:
    at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
    at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
    ...

At this level of detail, it would appear that something goes wonky on the client side, since the first message usually got to the server. However, I failed to mention that I usually started the server and the client at the same time; when I did not start the server at the same time as the client, I would get the exception above.

In light of this detail, it became very obvious that I had an object-lifetime issue on the server. Coming from more of a Java CORBA background, I had assumed that the lifetime of the object would, by default, be long-lived; however, this is not the case: the .NET Remoting infrastructure that IIOP.NET builds on gives each remote object a time-limited lease, and once the lease expires, the object is disconnected, which explains the OBJECT_NOT_EXIST above. To make your IIOP.NET objects long-lived (or to control the default behaviour), your objects that derive from System.MarshalByRefObject must override the method InitializeLifetimeService(). The following makes the object live forever:

  public class TimeImpl : System.MarshalByRefObject, TimeTools.Time {
    // ...
    public override object InitializeLifetimeService() {
      return null;   // a null lease means the object never expires
    }
    // ...
  }

Since my objects are all long-lived, the above fixed my problem. The documentation for InitializeLifetimeService() does, however, have some other examples in the event that you need more fine-grained control.

OpenNETCF.org Desktop.Communication Library 2.9...

About a month ago, a new version of the OpenNETCF.org Desktop.Communication Library was released, and I finally got a chance to work with it last night. Unfortunately, the problem I discussed here regarding the FileInformation buffer being too small is still an issue in this release. I think the fix was accidentally removed, since the file has a number of whitespace-type changes from the previous release.

Another issue that has not been fixed is one I reported on the forums: if RAPI fails to communicate with ActiveSync, an exception is thrown from the constructor, but the finalizer is still called, and the finalizer does not properly handle the case where the constructor did not run to completion. This is handled by a quick fix, the null guard around EndListen() below:

  // ...
  ~RAPI()
  {
    if ( m_activesync != null ) {   // the constructor may have thrown
      m_activesync.EndListen();     // before m_activesync was assigned
    }
    // ...
  }

Aside from hitting the first problem in my application, the new release seems good once the above two issues are patched.

January 18, 2006

java.util.concurrent Memory Leaks...

About two months ago, one of the Java servers my group maintains ran out of memory, and since then, whenever the service was up for more than a week, we would again get the dreaded java.lang.OutOfMemoryError. Each time this happened, we took a histogram of the memory usage (jmap -histo pid). After gathering a few of these and comparing the results to what we expected, two classes seemed to be using a suspicious amount of memory, java.util.concurrent.LinkedBlockingQueue$Node and java.util.concurrent.locks.AbstractQueuedSynchronizer$Node, so I started looking at how these classes were used.

The java.util.concurrent.LinkedBlockingQueue was used in several places in the program, but its usage was fairly straightforward. In addition to usually being the class that used the most memory, the other thing that made it suspect was that its usage had been added just a week before the problem first occurred. However, in all of the various tests that I ran, the number of Nodes would increase significantly, but when the full garbage collector ran (either automatically or when forced), the number of nodes would drop back to the number of entries I expected. This made me suspect that perhaps the garbage collector was not running when the OutOfMemoryError was thrown, or that perhaps there was not enough memory at that point to even run the garbage collector, but alas, I was quickly brought back to my senses: the JVM makes a full collection attempt before it ever throws an OutOfMemoryError.

During the investigation of the above, I googled the class, which brought up some interesting results. On the Concurrency Interest mailing list, an entry from last year suggests a memory leak in LinkedBlockingQueue, and it was confirmed by Doug Lea. I re-wrote the program suggested in the message (in the method originallyReportedLeak) to confirm that it was an actual issue, and sure enough, it is a current problem. Running the program with the JMX arguments, you can attach jconsole to watch the memory grow, and no amount of forcing the garbage collector to run will bring it down.
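The heart of it is simply a loop of timed polls on a queue that stays empty; a minimal sketch along these lines (not the full program, and the class name here is made up) is enough to watch the heap climb in jconsole:

  import java.util.concurrent.LinkedBlockingQueue;
  import java.util.concurrent.TimeUnit;

  public class QueueLeakSketch {
    public static void main(String[] args) throws InterruptedException {
      // Nothing is ever offered, so every timed poll below expires.
      LinkedBlockingQueue<Object> queue = new LinkedBlockingQueue<Object>();
      while (true) {
        // On the Java 5 implementation, each timed-out poll can leave
        // garbage behind that even a full collection does not reclaim.
        queue.poll(10, TimeUnit.MILLISECONDS);
      }
    }
  }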

But of course, that was not my problem. The leak in the above program has to do with a timeout expiring. All but one of our uses did not involve the methods with a timeout, and in that one case the timeout value was MAX_INT; because of the way we use this particular service, there is no way it could have timed out and leaked. And as I mentioned above, forcing a garbage collection always brought the count back to normal, so alas, back to the drawing board.

The other class that used outrageous amounts of memory, AbstractQueuedSynchronizer's Node inner class, was not as easy to trace to its users. Using Borland OptimizeIt, however, I found that this class was used by java.util.concurrent.CountDownLatch, where our code was calling await() with a one-second timeout, and in this case the timeout was always triggered. Sound familiar? I wrote a program (in the method anotherSimilarLeak) that demonstrates this precise memory leak, and sure enough, it is the same underlying problem as in the LinkedBlockingQueue above.
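Again, the essence is a timed wait that always expires; a minimal sketch along these lines (with a shorter timeout than our one-second case, just so the growth is visible faster) shows the same behaviour:

  import java.util.concurrent.CountDownLatch;
  import java.util.concurrent.TimeUnit;

  public class LatchLeakSketch {
    public static void main(String[] args) throws InterruptedException {
      // countDown() is never called, so every timed await below expires.
      CountDownLatch latch = new CountDownLatch(1);
      while (true) {
        // On Java 5, each timed-out await leaves a cancelled
        // AbstractQueuedSynchronizer$Node behind in the wait queue.
        latch.await(1, TimeUnit.MILLISECONDS);
      }
    }
  }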

In this particular program, we decided that the latch was not necessary for this particular method, and that replacing the timed await with a plain sleep was all that was needed (see the snippet below). This bug is fixed in Mustang, the next version of Java, so hopefully your project can wait until it is fixed. Otherwise, you may unfortunately need to roll your own timeout functionality... just make sure to properly document such hacks so that you can remove them in the near future!
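For what it is worth, the substitution itself was trivial; assuming, as in our case, that the timed await was only pacing a loop:

  // Before (leaks a Node per timed-out call on Java 5):
  //   latch.await(1, TimeUnit.SECONDS);
  // After (no synchronizer involved, nothing to leak):
  Thread.sleep(1000);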

January 15, 2006

Thoughts on Mobile Web Best Practices...

It has been reported in several places that this year will definitely be the year of the mobile web, and thus it is only fitting that the W3C released a draft of the Mobile Web Best Practices 1.0 on Friday. And while there is a good amount of advice that is particular to mobile browsing (much of which dovetails nicely with my own complaints), I find that most of the content could be summarized as: design with web standards.

What is nice about these best practices is that they start off by discussing some of the requirements, one of which is highlighting the user's goals: browsing on a mobile device is more focused than on a desktop. I agree with this completely, especially since a desktop computer typically has tabs or windows in which you can open content. Most mobile web browsers do not have this ability, or, as in browsers such as Minimo, it is not as easily used as on a desktop.

But in reality, this is not the only, or even the most important, of the issues. The major issue is that a good number of sites render horribly in mobile web browsers, such as producing a paragraph that is one centimeter (or one word, whichever is greater) wide and billions of lines long, surrounded by inches and inches of pure white, unused screen real estate. I think this skews my own browsing towards hunting for particular information.

Two other issues that are highlighted are advertising and scrolling, both of which I would personally group together. Most of the scrolling required when browsing on a mobile device is related to advertising, where many of the advertisements cause substantial vertical scrolling in order to get to the content that you are actually interested in. One such site is PocketPC Thoughts, where if you choose the Pocket Internet Explorer one-column layout, you are forced to scroll 80% of the way down to get to the content of the site.

Advertising is not the only cause of this unnecessary scrolling, but it is one of the major offenders. Another group in this list is sites that use a predefined size for a div, which on mobile devices renders into a very skinny, long column; many blogs, including Asa's, suffer from this.

Navigation is a huge one. As on the desktop, frames should simply be avoided. For example, last week I was tracking the arrival time of my wife's flight on JetBlue, and I thought it was rather cool that she was going to be about 20 minutes early. I later checked by simply reloading the site in Firefox on my desktop, and that is when I noticed that they use frames: instead of seeing that she was now just going to be on time rather than 20 minutes early, I was redirected to their start page. This is frustrating from a user-experience perspective, for both refreshes and bookmarking, but on a mobile device it gets even worse.

In Pocket Internet Explorer, the main content frame usually ends up being the smallest frame displayed on the device. Pocket Internet Explorer deals with this by drawing a huge border around each frame that (arguably) lets you quickly resize the frames; alternatively, you can hold your stylus down on a frame and select Go To This Frame, which focuses the content you want to read, but then you lose all the navigational content you previously had. In other words, frames offer nothing that makes the user experience better; all of these techniques could be rendered with CSS and be significantly more usable and portable.

But navigation also includes buttons like Back and Forward, and even bookmarks, and the recommendations note that some devices do not offer these operations. I find hiding the back button a bit odd, though. The Pocket Internet Explorer interface deals with this nicely by making the back button easily accessible and hiding the forward and refresh buttons under a menu. And for the few times that I have to use those buttons, I am glad they are not using valuable screen space.

The lack of Back, Forward, and Refresh buttons definitely changes design a bit. One of the places where the recommendations explicitly talk about this is error reporting. The first part of the error-reporting section is simply presentation: tell users what went wrong and what they can do about it, in a language that they understand. It is still surprising to me how many web sites display cryptic error messages that make them much less usable. The other part of the error section is about giving users some options for what to do about the error. Should the user press the back button and try again? If so, do not assume they have a back button; give them something that brings them back, or better yet, if the error relates to information in a form, do not send them back at all, but let them correct the erroneous input on the current page. Will a refresh help the condition? If so, give them a button or a link that retries. And more importantly, give them links back to an index page related to what they were browsing, or to a home page; both of these options are basically about keeping users on your site.

Also related to navigation is searching. There are many sites where I cannot find what I am looking for quickly, so I want to search their content. The way many pages render on mobile devices, however, the search option can be difficult to locate and can require some scrolling. In my opinion, if your site has the ability to search, place it somewhere noticeable towards the top of the rendered page; this way, users will remember where the search is when they want to use it (and yes, I know, this site does not follow that advice :-)).

The adaptation section draws a good separation between server-side, network, and client-side adaptations. A great example of a server-side adaptation is Google: when it detects that you are browsing from a mobile device, it sends you to a lighter-weight web site for mobile devices. And while this is a great way to optimize for mobile devices, it is not the approach that I think the average web-content developer will take, though I would love to be surprised here. I would imagine that most content developers will instead simply provide the content in a fashion that is friendly to most devices, only optimizing little things here and there. One aspect of Google's approach that does not necessarily jibe with the recommendations is that the URL is noticeably different; thus, a search bookmarked on my PDA will not change its presentation when I display it on a desktop, which is neither transparent nor compatible with the One Web vision.

An example of a network-side adaptation is Palm's Web Pro. This web browser can use a proxy server that optimizes HTML and images for your device, and in my experience this makes a huge difference for mobile usability: pages load significantly faster and are far more usable with the proxy than without it. I would imagine that some mobile phones ship with a similar proxy.

Finally, examples of client-side adaptations are numerous, but the one that I have been using the most lately is Pocket Internet Explorer's various views. You have the option of viewing pages in the default mode, a one-column view, or a desktop view. In the default view, you typically have to scroll horizontally to center the content, then only scroll vertically, and the page is typically usable, though there are exceptions (such as the very skinny columns). In the one-column view, the page is essentially rearranged so that all divs follow one another and are rendered as paragraphs. In the desktop view, the page is rendered closer to how your desktop renders it, and therefore requires a significant amount of scrolling. The Palm Web Pro previously mentioned also has similar options.

One of the topics mentioned in the recommendation is access keys; I would imagine that this feature is probably more important on phones than on PDAs, but never having used them, it is hard to comment on them more than this.

Globally, however, reading the recommendations felt like reading Jeffrey Zeldman's Designing With Web Standards, since most of their content is great advice for any environment: for example, designing to web standards, avoiding tables for presentation, and using elements to structure documents instead of focusing on presentation. The One Web vision is simply to create web content that is accessible no matter what type of browser you are using.

An example of advice that is not specific to the mobile web is forms: only ask for the information that you need, only require what you really, really need, and make the difference clear. This is nothing specific to mobile devices; as hard as pecking in information is on some mobile devices, there are plenty of desktop users who do not like typing either.

Another great non-specific example is making links obvious: give users a good description of what the link will give them and, possibly more important for mobile devices but also for browser and operating-system interoperability, note the type of content that will be downloaded if the link is not to a web page. And what about web sites that open new windows? Never mind mobile devices: desktop users now run enough popup blockers that no web-content developer should depend on new windows opening even on a desktop. And finally, short, descriptive titles are ideal in any environment, particularly since in most desktop environments the default bookmark text is the title.

In summary, I think the recommendations are great, and most of them are simply handled by building web sites with web standards and generally designing user interfaces for great user experiences, regardless of the browsing environment.
