eyt*

October 21, 2004

C++'s Export revisited...

As promised, Danny from the InformIT C++ Newsletter has revisited the C++ export keyword. His article provides some history and some facts surrounding the issue, and is a good summary of the keyword.

I am not surprised that the ISO C++ Standards Committee standardized the export keyword without having at least one implementation; other things that were standardized are not fantastic today, such as the std::allocator concept (as described in Items 10 and 11 of Scott Meyers' Effective STL) and std::vector<bool> (as described in Item 18 of the same book).
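
To illustrate the kind of wart Meyers describes, here is a minimal sketch (my example, not one from the book) of why std::vector<bool> is not a real container of bool:

  #include <vector>

  int main() {
      std::vector<bool> flags(8);

      // vector<bool> is a packed, space-optimized specialization, so
      // operator[] returns a proxy object rather than a real bool&.
      // Taking the address of an "element" therefore does not compile:
      // bool* p = &flags[0];  // error: proxy is not convertible to bool*

      flags[0] = true;    // writes go through the proxy
      bool b = flags[0];  // reads convert the proxy back to bool
      return b ? 0 : 1;
  }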

It is also nice to know that Comeau C++ is not the only compiler that supports the keyword. It appears that Intel's C++ compiler also handles it, albeit in an unsupported and undocumented capacity.

Danny's description of how the export keyword actually works is similar to the way that I expected it to work: like a precompiled header that is maintained by the compiler, where any change to that file requires recompiling most of your tree. I had envisioned this intermediate file being more like a Java class file that is expanded on demand at link time; to my understanding, though, the standard does not dictate the actual implementation, so this latter method could potentially be done, albeit at the cost of a significant effort.
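
For reference, a minimal sketch of what exported templates look like on paper (standard C++98 syntax; in practice only Comeau, and unofficially Intel, would accept it, and the file names are my own):

  // twice.h -- clients see only the declaration.
  export template <typename T>
  T twice(T value);

  // twice.cpp -- the definition lives in its own translation unit;
  // the compiler must locate it when an instantiation is needed.
  export template <typename T>
  T twice(T value) {
      return value + value;
  }

  // client.cpp -- includes only twice.h, never the definition.
  #include "twice.h"

  int main() {
      return twice(21);  // instantiated without the definition in scope
  }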

I am surprised by the fact that some vendors claim that this is not a demanded feature. When I talk to people about templates, they generally complain about the complexity of debugging them and of their instantiations (not to mention the group of people who hate templates because of one specific template implementation, such as a person I spoke to recently who hated templates just because of std::auto_ptr). While debugging has been addressed in the more popular compilers, there are still some debuggers that do not handle templates well. By far, however, the complaints usually have to do with template instantiations. Yes, I agree that no one has come right up to me and said that they want export support in the compiler, but it is undeniably a possible solution to some users' needs.

Of course, if I were a compiler vendor, would I implement the export keyword? Probably not. I would focus my energy on making standard template compilation better by working on the implicit instantiation and linker resolution phases, since these are the areas that people presently work with the most. A feature such as the export keyword, as highlighted in the article, is not easy to implement, especially if you want to do it any justice, and for people to get any benefit out of it, they will have to port some code to use the keyword. These two issues seem to argue against implementing it. The request to remove export [PDF] from the C++ Standard is also an interesting read on the topic, and although Danny did not specify the reason it was rejected, I hope that vendors will make templates faster and better, no matter which keyword they do it with.

October 16, 2004

Using Exceptions...

A little while ago, I was talking about exceptions. During this discussion, I was about to repeat the old adage that exceptions are only to be used for exceptional cases. In Sutter's Mill #31, “When and How to Use Exceptions,” from the August issue of C++ Users Journal, Herb Sutter provides a clear, objective, and measurable answer to when and how to use exceptions.

In this discussion, Herb points out that most languages created in the last twenty years have exceptions, and there is a good reason for this. Without exceptions, it is a little too easy to simply ignore errors; with exceptions, you must write code to explicitly ignore them, and as such, you are less likely to write such code (and it is more likely to be caught in a code review).

In the above, I mention that it is easy to ignore errors, but I did not mention what errors are. Herb defines errors as preconditions, postconditions, and invariants that are either not met or cannot be met (which brings Design-by-Contract to mind), and he is strict in stating that anything else is not an error and should not use exception constructs. Since assert is not error handling and ignoring errors is not acceptable, your choices for reporting errors are error codes or exceptions. Herb mentions the following points in favour of exceptions (a short code sketch follows the list):

  • Exceptions cannot be ignored.
  • Exceptions propagate automatically, whereas error codes can easily get lost, especially in translation.
  • Exception handling separates error recovery and handling from the application logic.
  • Exceptions are ideal for reporting errors from constructors and destructors.
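
As a minimal sketch of the last two points (my own example, not Herb's), a constructor has no return value through which to pass an error code, so an exception is the natural way to report that an object could not be constructed:

  #include <stdexcept>
  #include <string>

  class Connection {
  public:
      explicit Connection(const std::string& host) {
          if (host.empty()) {
              // A constructor has no return value, so throwing is the
              // only direct way to report that construction failed.
              throw std::invalid_argument("Connection: empty host name");
          }
          // ... open the connection ...
      }
  };

  int main() {
      try {
          Connection c("");  // the error propagates automatically
      } catch (const std::invalid_argument&) {
          return 1;  // recovery is separate from the logic above
      }
      return 0;
  }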

By contrast, Herb mentions that you should only consider error codes when none of the benefits of exceptions apply directly, when you are providing a library used across mixed compilers or mixed languages, or when profiling shows that exceptions occupy too much time, although he does point out that this last case is usually because the exceptions being thrown are not truly errors after all.

Herb reminds us that the standard library uses exceptions, and therefore, you are going to be exposed to exceptions whether you want to be or not; the advantage here is that compiler vendors are optimizing for them, since they are used.

With either error codes or exceptions, the program should be left in a valid state, which is called the basic guarantee and is especially important for releasing resources. If you can, you should likewise prefer to guarantee that the state of the program will either be the original state in the event of a failure or the intended state when the function completes, which is known as the strong guarantee. Finally, if your function allows, you should prefer to guarantee that it will never fail, as is expected of destructors and swap functions.
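
A common way to provide the strong guarantee is copy-and-swap; here is a minimal sketch (my illustration, not from the article) in which all the work that might throw is done on a copy, and a never-failing swap commits the result:

  #include <vector>

  class IntSet {
      std::vector<int> data_;
  public:
      // Strong guarantee: if either step below throws (for example,
      // std::bad_alloc), *this is untouched, because all mutation
      // happened on the copy.
      void add(int value) {
          std::vector<int> copy(data_);  // may throw; original intact
          copy.push_back(value);         // may throw; original intact
          data_.swap(copy);              // no-fail commit
      }
  };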

The article provides a lot of good examples and a lot more discussion than the above. It definitely is worth a read.

One of the things that I mentioned above deals with performance. In the same issue, Andrew Koenig and Barbara E. Moo measure the performance of a vector, a list, and a deque. They observed that vector::reserve increased performance on medium-sized vectors but not on small or large ones, that appending elements to a list took almost 40 times longer than appending them to an identical deque, and that inserting elements at the beginning of a list takes longer than appending them to the end.

In the end, they give the same advice as others: use a vector unless there is a better container for the job; if you are inserting at the beginning of the container, for instance, you will probably want a deque instead. They also recommend using reserve, as does Effective STL.

But the point that really ties into the above is their second conclusion: if you care about performance, you should measure it yourself.
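
In that spirit, here is a minimal timing sketch (my own harness, not Koenig and Moo's) that compares push_back on a vector with and without reserve:

  #include <cstddef>
  #include <cstdio>
  #include <ctime>
  #include <vector>

  // Time n push_backs, optionally calling reserve up front.
  double time_fill(std::size_t n, bool use_reserve) {
      std::clock_t start = std::clock();
      std::vector<int> v;
      if (use_reserve)
          v.reserve(n);
      for (std::size_t i = 0; i < n; ++i)
          v.push_back(static_cast<int>(i));
      return double(std::clock() - start) / CLOCKS_PER_SEC;
  }

  int main() {
      const std::size_t n = 1000000;
      std::printf("without reserve: %f s\n", time_fill(n, false));
      std::printf("with reserve:    %f s\n", time_fill(n, true));
      return 0;
  }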

October 14, 2004

PayPal Upgrade Brings Instability... But It's Back (at least most of it)

It seems that it is not a good time to be in the on-line payments business. In addition to scams exploiting problems with Internet Explorer, such as incorrectly showing the secure lock and deceptive domain names, there have also been a lot of DDoS attacks on companies such as Authorize.net [Story] and Worldpay [Story]. While this is disturbing, it is not as disturbing as the latest PayPal saga.

According to two Netcraft articles, PayPal's site was redesigned a couple of days ago, and the redesign crippled the site. While performance has improved today, there are still some services that are not fully on-line (SlashDot Article and SlashBack).

With the WorldPay and Authorize.net problems, the cause was not software-related, but with PayPal, this is an unbelievable mistake. Books like Core Servlets and JavaServer Pages make it a point to distinguish between the test environment and the production environment, and I would have thought that an organization like PayPal would not only have a test environment, but also use it. Interestingly enough, to help developers integrate with PayPal services, PayPal does have a developer network that allows you to play with fake money to test your applications (I hope that this service is on a separate server and database), so this would seem to be a simple practice-what-you-teach type of situation. It seems strange that a valuable service like PayPal would not have testers to ensure that such situations do not occur; or maybe they took this week's Java Developers Journal newsletter a little too seriously. Who knows.

One of the Netcraft articles states that eBay (PayPal's parent company) can roll back to a previously stable software version, functionality that PayPal does not yet have, but in saying this, they also point out that eBay is running Windows 2003, whereas PayPal is running Linux. I do not see a correlation here, but I also have not done a lot of Web development on Windows, so maybe it is something that I am unaware of; to me, it just seems like poor planning and testing.

October 13, 2004

The Passion Will Leave You...

Peter Varhol recently sat in on a discussion of .NET and Java interoperability presented by Mark Driver. In this discussion, Mark characterized present Java developers as Type-A developers, described as highly technical and code-centric, who spend some of their personal time learning new technologies and the subtle details of the platform. Mark predicts that the majority of Java developers will be Type-B developers within the next several years; he characterizes Type-B developers as developers whose job is software development, but where the passion is lacking.

In saying this, Peter focuses on how development tools will change to cater to this new group; the question in my mind, however, is whether this means that the passionate Java developers will lose interest in Java, or whether Java will become so popular that everyone and their mother knows it. I really think that it is the latter, and that Java has a lot of things going for it.

In this particular blog entry, Peter does not really say a lot about the tools, but he does talk about how IDEs will need to be able to work at abstraction layers instead of only at the code layer. By this, I assume that he means concepts like Microsoft's Visual Studio 2005 plans, which attempt to tightly integrate modeling with source code. This is one of the only ways that I see his point being accomplished.

In another article, however, Peter discusses tools a little more. As a continuation of Productivity is Tools, Peter discusses some of the complexities of our world, where developers are no longer just working in a single language, but are now also working with SQL, JavaScript, HTML, XML, XSLT, etc., etc., etc. This complexity will definitely not lessen; our tools will need to deal with it via abstractions, which is exactly what the IDE problem above is about.

But Peter takes this one step further. The issue now is not with the IDE, but rather with the debugger. Debugger technology has not really changed a lot over the years. There are some very interesting research items that could make debugging easier, but none has yet produced a debugger that is feature-rich enough to consider seriously. As such, we are still stuck debugging our applications line by line.

Peter suggests, however, that instead of using a debugger, we should utilize testing frameworks, such as JUnit; he is obviously not the first one to suggest this. Martin Fowler states in his Refactoring: Improving the Design of Existing Code that “Whenever you are tempted to type something into a print statement or a debugger expression, write it as a test instead.”
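
In that spirit, here is a minimal sketch (my own illustration; Fowler's and Peter's examples are Java and JUnit) of turning a would-be print statement into a test, in C++:

  #include <cassert>

  // The function under suspicion (a made-up example).
  int parse_flags(int raw) {
      return raw & 0x0F;  // keep only the low four bits
  }

  // Instead of printing parse_flags(0x13) from a debugger session,
  // capture the expectation as a test that can be re-run forever.
  void test_parse_flags() {
      assert(parse_flags(0x13) == 0x03);
      assert(parse_flags(0x00) == 0x00);
      assert(parse_flags(0xFF) == 0x0F);
  }

  int main() {
      test_parse_flags();
      return 0;
  }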

By writing, maintaining, and using unit, functional, and system tests, you can easily narrow in on exactly where the problem is within the application. As Peter describes, the only way to build truly complete, functional, and reliable software is to test it.

This does not remove the need for a debugger, but it does cut down how many times you need to use it, not to mention the repeated debugger runs needed to pinpoint an error. I would not exactly call this the goal of writing tests, though. I see the debugger as another tool in our tool chest; now that you have that screw gun, your hammer is used less, but that is not to say that you should throw your hammer away.

October 11, 2004

The Ouroboros-Like Patent System...

The September issue of ACM Queue has an article entitled “Patent Frenzy” by Aaron Weiss which reminds me of my More Lawyers Than Programmers reaction.

The article discusses companies like Amazon with ridiculous claims like the One-Click patent, of which Jeff Bezos says that Amazon had to patent it or another company would have, and states that they are not planning to sue people over the patent, but just have it in their portfolio in case someone else tries to sue them. Some think that this attitude exists only because of the fallout from Unisys's patent on the LZW algorithm used in the GIF image format: once the GIF format became popular, Unisys started requesting royalties for the LZW algorithm. This has unfortunately turned people against using GIF images, even though the last Unisys patent covering LZW expired in July 2004, although IBM also has a patent on the LZW algorithm that expires in August 2006.

This highlights one of the largest problems with patent law: cases where two organizations have patents on the same technology, or where prior art is easily proven. As Aaron quickly points out, the 3,200 underpaid patent examiners are not only overwhelmed by over 290,000 applications, but are also not well versed in the domains that these patents cover. Another example of this is a patent for a hyper-light-speed antenna, which purports to describe a method for sending electromagnetic waves faster than the speed of light. Obviously, the patent system assumes that applications are valid until proven otherwise.

The article also describes one of the more sickening industries being created, in which organizations exist essentially to purchase, enforce, and collect royalties on patents that they did not create. An example of this was the PanIP suits over sales that use textual and graphical information to match customers. The article talks about Acacia Technologies and IdeaFlood, whose business model analysts are calling sound.

How can we fix this situation? That is unclear. Amazon's Jeff Bezos suggests limiting patents to 5 years instead of 20 (which was 17 years prior to a 1995 WTO decision). The FTC recommends adding a mechanism to the patent system for challenging patents outside the court system, and broadening the criteria for invalidating patents. Whatever the approach, something needs to be done, as the system is obviously being abused.