
October 16, 2004

Using Exceptions...

A little while ago, I was talking about exceptions, and during that discussion I was about to repeat the old adage that exceptions should only be used for exceptional cases. In Sutter's Mill #31, “When and How to Use Exceptions,” from the August issue of C++ Users Journal, Herb Sutter provides a clear, objective, and measurable answer to when and how to use exceptions.

In this discussion, Herb points out that most languages created in the last twenty years have exceptions, and there is a good reason for this. Without exceptions, it is all too easy to simply ignore errors; with exceptions, you must write code to explicitly ignore them, which you are less likely to do (and which is more likely to be caught in a code review).

In the above, I mention that it is easy to ignore errors, but I did not say what errors are. Herb defines errors as preconditions, postconditions, and invariants that are either not met or cannot be met (a definition reminiscent of Design by Contract), and he is strict in stating that anything else is not an error and should not use exception constructs. Since assert is not error handling and ignoring errors is not acceptable, your choices for reporting errors are error codes or exceptions. Herb mentions the following points in favour of exceptions:

  • Exceptions cannot be ignored.
  • Exceptions propagate automatically, whereas error codes can easily get lost, especially in translation.
  • Exception handling separates error recovery and handling from the application logic.
  • Exceptions are ideal for reporting errors from constructors and destructors.

By contrast, Herb mentions that you should only consider error codes when none of the benefits of exceptions apply directly, when you are providing a library that must work across mixed compilers or mixed languages, or when profiling shows that exception handling takes too much time, although he points out that this last case usually means that the exceptions being thrown are not truly errors after all.

Herb reminds us that the standard library uses exceptions, and therefore you are going to be exposed to them whether you want to be or not; the advantage here is that compiler vendors are optimizing for exceptions precisely because they are used.

With either error codes or exceptions, the program should be left in a valid state, which is called the basic guarantee and is especially important when releasing resources. If you can, you should likewise prefer to guarantee that the program will either be left in its original state in the event of a failure or in the intended state when the function completes, which is known as the strong guarantee. Finally, where your function allows, you should prefer to guarantee that it will never fail, as destructors and swap functions must.
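The strong guarantee is commonly achieved with the copy-and-swap idiom, which also shows why a never-failing swap matters. Here is a minimal sketch of my own (the class and member names are illustrative, not from the article): all work that might throw happens on a copy, and only a no-fail swap publishes the result.

```cpp
#include <string>
#include <vector>

// Copy-and-swap sketch: if anything throws, *this is left exactly as it was
// (strong guarantee); the final swap never throws (no-fail guarantee).
class Roster {
public:
    void add(const std::string& name) {
        std::vector<std::string> tmp(names_); // copy may throw: state untouched
        tmp.push_back(name);                  // may throw: state still untouched
        names_.swap(tmp);                     // swap cannot throw: commit point
    }
    std::size_t size() const { return names_.size(); }
private:
    std::vector<std::string> names_;
};
```

If either the copy or the push_back throws, the exception propagates out of add() with names_ unchanged, so the caller sees either the original state or the intended one, never something in between.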

The article provides a lot of good examples and a lot more discussion than the above. It definitely is worth a read.

One of the things that I mentioned above deals with performance. In the same issue, Andrew Koenig and Barbara E. Moo measure the performance of a vector, list, and deque. They observed that vector::reserve increased performance on medium-sized vectors but not on small or large ones, that appending elements to a list took almost 40 times longer than appending them to an identical deque, and that inserting elements at the beginning of a list takes longer than appending them to the end.

In the end, they recommend the same advice as others: use a vector unless there is a specific reason to prefer another container; for example, if you are inserting at the beginning of the container, you will probably want a deque instead. They also recommend using reserve, as does Effective STL.

But the point that really ties into the above is their second conclusion, which is that if you care about performance, you should measure it yourself.

October 14, 2004

PayPal Upgrade Brings Instability... But It's Back (at least most of it)

It seems that it is not a good time to be in the on-line payments business. In addition to scammers exploiting problems with Internet Explorer, such as incorrectly showing the secure lock and deceptive domain names, there have also been a lot of DDoS attacks on companies such as Authorize.net [Story] and Worldpay [Story]. While this is disturbing, it is not as disturbing as the latest PayPal saga.

According to two Netcraft articles, PayPal's site was redesigned a couple of days ago, and the redesign crippled the site. While performance has improved today, some services are still not fully on-line (SlashDot Article and SlashBack).

With the WorldPay and Authorize.net problems, the cause was not software-related, but with PayPal, this is an unbelievable mistake. Books like Core Servlets and JavaServer Pages make a point of distinguishing between the test environment and the production environment, and I would have thought that an organization like PayPal would not only have a test environment, but also use it. Interestingly enough, to help developers integrate with its services, PayPal does have a developer network that lets you test your applications with fake money (I hope that this service runs on a separate server and database), so this would seem to be a practice-what-you-teach situation. It seems strange that a valuable service like PayPal would not have testers to ensure that such situations do not occur; or maybe they took this week's Java Developers Journal newsletter a little too seriously. Who knows.

One of the Netcraft articles states that eBay (PayPal's parent company) can roll back to a previously stable software version, functionality that PayPal does not yet have. In saying this, they also point out that eBay runs Windows 2003, whereas PayPal runs Linux. I do not see a correlation here, but I also have not done a lot of Web development on Windows, so maybe it is something that I am unaware of; to me, it just seems like poor planning and testing.

October 13, 2004

The Passion Will Leave You...

Peter Varhol recently sat in on a discussion about .NET and Java interoperability presented by Mark Driver. In this discussion, Mark characterized present Java developers as Type-A developers: highly technical, code-centric, and willing to spend personal time learning new technologies and the subtle details of the platform. Mark predicts that within the next several years the majority of Java developers will instead be Type-B developers, whom he characterizes as developers for whom software development is just a job, and the passion is lacking.

In light of this, Peter focuses on how development tools will change to cater to this new group. The question in my mind, however, is whether this means that the passionate Java developers will lose interest in Java, or whether Java will become so popular that everyone and their mother knows it. I really think it is the latter, and that Java has a lot of things going for it.

In this particular blog entry, Peter does not say much about the tools, but he does talk about how IDEs will need to work at abstraction layers instead of only at the code layer. By this, I assume he means concepts like Microsoft's plans for Visual Studio 2005, which attempt to tightly integrate modeling with source code. This is one of the only ways I see his point being accomplished.

In another article, however, Peter discusses tools a little more. As a continuation of Productivity is Tools, Peter discusses some of the complexities of our world, where developers are no longer working in just a single language, but are now also working with SQL, JavaScript, HTML, XML, XSLT, and so on. This complexity will certainly not lessen, and our tools will need to deal with it via abstractions, exactly as with the IDE problem above.

But Peter takes this one step further. The issue now is not with the IDE, but rather with the debugger. Debugger technology has not really changed a lot over the years. There are some very interesting research ideas that could make debugging easier, but they have yet to make it into a debugger that is feature-rich enough to consider seriously. As such, we are still stuck debugging our applications line by line.

Peter suggests, however, that instead of using a debugger, we should use testing frameworks such as JUnit; he is obviously not the first to suggest this. Martin Fowler states in his Refactoring: Improving the Design of Existing Code that “Whenever you are tempted to type something into a print statement or a debugger expression, write it as a test instead.”

By writing, maintaining, and using unit, functional, and system tests, you can quickly narrow in on exactly what the problem is within the application. As Peter describes, the only way to build truly complete, functional, and reliable software is to test it.

This does not remove the need for a debugger, but it does reduce how often you need one, not to mention the repeated debugger sessions otherwise required to pinpoint an error. I would not exactly call this the goal of writing tests, though. I see the debugger as another tool in our tool chest; now that you have a screw gun, your hammer gets used less, but that is not to say you should throw your hammer away.

October 11, 2004

The Ouroboros-Like Patent System...

The September issue of ACM Queue has an article entitled “Patent Frenzy” by Aaron Weiss which reminds me of my More Lawyers Than Programmers reaction.

The article discusses companies like Amazon with ridiculous claims like the One-Click patent, of which Jeff Bezos says that Amazon had to patent it or another company would have, and that they are not planning to sue people over it, but just to hold it in their portfolio in case someone else tries to sue them. Some think that this stance exists only in reaction to Unisys's patent on the LZW algorithm used in the GIF image format: once the GIF format became popular, Unisys started requesting royalties for the LZW algorithm. This has unfortunately turned people against using GIF images, even though the last Unisys patent covering LZW expired in July 2004, although IBM also has a patent on the LZW algorithm that expires in August 2006.

This highlights one of the largest problems with patent law: cases where two organizations hold patents on the same technology, or where prior art is easily proven. As Aaron quickly points out, the 3,200 underpaid patent examiners are not only overwhelmed by over 290,000 applications, but are also not well versed in the domains these patents cover. Another example of this is a patent for a hyper-light-speed antenna, which purports to describe a method for sending electromagnetic waves faster than the speed of light. Apparently the patent system assumes that applications are valid until proven otherwise.

The article also describes one of the more sickening industries being created, in which organizations exist essentially to purchase, enforce, and collect royalties on patents that they did not create. An example of this was the PanIP suits over sales using textual and graphical information to match customers. The article talks about Acacia Technologies and IdeaFlood, whose business model analysts are calling sound.

How can we fix this situation? That is unclear. Amazon's Jeff Bezos suggests limiting patents to 5 years instead of 20 (the term was 17 years prior to a 1995 WTO decision). The FTC recommends adding a mechanism to challenge patents outside the court system and broadening the patent-invalidation criteria. Whatever the approach, something needs to be done, as the system is obviously being abused.

October 9, 2004

The survey says Java Generics helps Code Maintainability...

In Java Generics: Better Code or Worse?, I mentioned that I feel Java generics will help code maintainability, and according to Java Specialists' Issue 96, three-quarters of the respondents agree with me. They are as inexperienced with Tiger as I am, so this opinion could change over time, but I really doubt it.

The real point of Issue 96, however, is not this survey. I was actually hoping to see some key points on why people are for or against Java generics, as I think that would be interesting. Instead, the discussion in this newsletter is about how a variable declared final is not necessarily constant under J2SE 1.5, while it was under 1.4.2. This is not from a direct-use point of view, as I initially assumed, but rather via reflection on certain types. Heinz provides a full historical viewpoint on this and on why the rules are different under 1.5. While reflection is not something I personally use regularly, the discussion provides some good insight into the Java language.