
October 1, 2004

Buying Borland JBuilder 2005...

On July 30th, I purchased a seat of Borland JBuilder Developer X from Borland's online store for download, but for some odd reason, I never received a download link. As is so trendy these days, Borland has outsourced its on-line store to Digital River, and although I was rather persistent in calling, Digital River insisted that the problem was on Borland's side and that I should e-mail them directly. When I e-mailed Borland, however, no one ever responded.

Around the same time, I purchased a seat of Gentleware's Poseidon for UML, and as with many on-line orders, my license information arrived almost instantly. That is the entire point of on-line ordering: impulse buyers can simply purchase software and use it right away.

In the case of my JBuilder purchase, Digital River required several follow-ups. When JBuilder 2005 was announced, I figured that the reason for the delay was that they were planning to send me 2005. But when Programmers Paradise started listing JBuilder 2005 as available, I began calling more frequently. Finally, on Monday, I got to speak to a manager at Digital River, who promised to open a new order and send a physical shipment of JBuilder, for which I would receive two confirmation e-mails. Come Tuesday, however, I had not received any e-mails from them, so I followed up with Digital River again. Again they attempted to brush me off the phone, but when I insisted that the order be canceled, the representative suddenly placed me on hold and then offered to send me a physical JBuilder Developer 2005 seat. Because this counts as a new order, their systems must charge me again and then credit me the previous amount plus shipping and handling; I am not sure exactly how this is going to work, but at last I have my seat of JBuilder Developer 2005.

The packaging is very cool, and I quickly threw the first CD into my Linux box. It should be noted that Borland officially supports only Red Hat Enterprise Linux and the Java Desktop System, whereas I am running SuSE Linux Pro 9.1, but normally this is just fine print.

I saw that it had an install-linux.sh script, so I ran it, and I waited. And waited. And waited. Finally, it came up, but it took far longer than I was expecting, especially for a menu with three options: Install, Web information, and Close. I clicked Install, and again waited. This also took longer than I expected, but eventually the installer took over, and the rest went fairly smoothly.

After it was installed, I started it up by typing /usr/local/JBuilder2005/bin/jbuilder, and shortly thereafter I got:

Invalid maximum heap size: -Xmx-600m
Unable to create JVM.

What? This is weird. After trying various things, I eventually thought of looking in the /usr/local/JBuilder2005/bin/jbuilder.config file, which read:

vmmemmin 32m
vmmemmax 75%

Because of the 75% bit, I decided to change vmmemmax to 1024M, and finally it started. I guess JBuilder has some problem computing 75% of 3,371,548 bytes.

I have not used JBuilder 2005 a lot yet; however, it looks a lot like JBuilder Developer X, but more refined. For example, it shows you which files are checked out of CVS, even at the package level. Borland's Code Audit feature is also notably different from other code audits, in that it does not process your entire code base, but rather the file that you are in. While this is good for new code, it makes it impossible to get a full report on all your existing code. In addition, it does not seem to be as complete as others that I have used, in the sense that its basic goal is to enforce coding style. Other code analysis programs, such as JLint, report on things like potential deadlocks and other such issues.

Globally, it seems great, but I am still disappointed in the process that I had to go through in order to get my order. Two months to acquire a download purchase is extreme. Not responding to a customer's e-mails is not acceptable. Things like this go against the entire premise of on-line ordering, where there is no second guessing. This is similar to web sites that use a mailto: link instead of a form for collecting e-mail addresses; the mailto: link not only assumes that e-mail is configured, but instead of just typing in an e-mail address, customers are presented with a message window, which is more of a commitment than simply submitting their e-mail addresses.

One man at Digital River, whom I spoke to at least twice on different occasions, did indicate that he had never seen this before. While I am happy to hear that it was not a regular occurrence, they did not handle it well at all. But I am glad that it is over and that I can finally enjoy my JBuilder Developer 2005 :-).

September 28, 2004

Lack of Standards Affects .NET?

In a letter to the editor of Visual Studio Magazine, Frank Molnar states that there is not much buy-in to .NET because of the lack of standards. The argument he provides is that the previous model was to standardize the language, thereby placing it in the public domain, and to earn money on tools by providing compilers, interpreters, and other components that support this standard on various platforms.

Instead of this model, Frank says that Microsoft has completely changed its paradigm for the simple reason of increasing sales by making previous versions of its software not only obsolete, but also unsupported. By not standardizing the language, Microsoft is essentially selling its language, and this makes it difficult for people to buy into the language.

In Leveraging Frameworks, Douglas Schmidt et al. mention that it can take up to nine months for developers to become productive with a framework (which is what .NET is). As such, for an organization to adopt Microsoft's latest tools, it must factor in at least six months of learning. This can obviously be shortened by hiring employees with .NET experience and/or providing appropriate, timely training.

But Frank's argument is really that when a language is a standard, this investment can be amortized over a number of products. When you take the time to learn a standard language like C++, the language remains the same no matter the operating system and environment you work in (of course, techniques and culture pertinent to the domain may change your language usage habits). The only portion that needs investment is learning the new tools available in the new environment.

Another advantage of a standard language is portability. This is slightly touched upon in the above paragraph, but here I really mean source-code portability. Suppose that you require a real-time operating system, and you buy into a company's product. If that company fails, you probably do not want to rewrite your entire application and tools on a new platform. Instead, you would probably want to port these applications to another vendor's operating system. With standard languages and proper abstractions, this can easily happen.

If any other company had come out with .NET, it would probably have been a complete failure, but as Windows usage is high and it is unlikely that Windows will suddenly go away, .NET is protected a bit. But suppose that Longhorn license fees go up considerably, or that you just need another platform; then Mono looks like your current best bet. Based on experience with Samba, though, changes to the language or to closed protocols could take a little time before appearing there. Depending on what the hiccup is, this could be as simple as “do not use feature X”, but if your software interoperates with other .NET software, it is possible that updates could render the program broken for some time. Again, this is just looking at the worst-case scenarios.

There is, however, one place where Frank and I disagree, and that is Java. He seems to think that it had its chance and is now done, but I see that Sun has been paying more attention to Java in the last year, and with the Tiger release probably coming on Thursday, I think this is just the beginning of the future evolution of Java. Of course, Java is not a standard language (they tried and failed), but it is freely available, and there is the Java Community Process for changing the language. Furthermore, there are a number of implementations that do not come from Sun.

Interestingly enough, also in the news today is Azul Systems, which plans to make a network appliance for plug-and-play boosting of applications. They are initially targeting J2EE over .NET because “most of the market is J2EE”. Depending on what you read, the opposite claim could easily be found, but I am sure part of the reason that J2EE is popular in this space is that no one is selling the Java language, just Java tools.

September 20, 2004

Debug using Logs...

Last week's Developer Central newsletter (#79) had a piece entitled “Improving Software Development: Logging Strategies” by Mike Gunderloy, which discusses some of the things that I have been meaning to write about here myself.

When you talk to developers about logging, it means different things to different people. Generally, developers who have had the luxury of working with a good logger would not want to be without one, whereas those who have either never used a logger or have only used a poorly designed one are generally in the camp that it is not particularly useful.

Even with the excellent tools that we now have available, there are a number of developers who still prefer printf()-style debugging. In this style of debugging, a developer generally inserts some printf()s into the code, recompiles, runs, evaluates the output, and repeats the process.

This style of debugging is perhaps tried and true, especially in development environments where you know exactly what you are tracking down. Generally you know what you were doing, or the tester can give you enough context to narrow it down. Martin Fowler states in his Refactoring: Improving the Design of Existing Code that “Whenever you are tempted to type something into a print statement or a debugger expression, write it as a test instead;” in other words, instead of expending energy on a printf() call to check that the variable you think should be 10 actually is, write a test to ensure that it has that value. Martin's premise is that the test does not go away, whereas your printf() statement will be removed when you merge the code, or perhaps at some point in the future, hopefully before a customer sees some cryptic debug message.

While this approach is interesting, the problem is that it is only useful in the development environment, and it presumes that the tests are executed and inspected regularly. That is a lot of conditions to satisfy. All too often, the software works perfectly in-house and fails once released. This happens with uncanny reliability, especially when it is important that the software work perfectly. But how do you debug such a problem?

While some problems, such as different work flows, are easily reproduced in-house, other issues can be extremely difficult to trace, such as those that involve timing, and this is really where good logging will be the difference between finding the bug quickly and, well, not.

As Mike highlights, there are a number of things that log files are good for. Because log files (hopefully) provide the history surrounding the problem, they allow developers to understand the actions that were actually performed and the sequence of those actions. These two items alone are extremely important. Many times people will overlook something that they have done that is actually very important in the grand scheme of things and that changes the conditions for the bug. Logging such conditions ensures that this information is recorded to help in reproducing the problem.

One of the hardest types of bugs to catch is the timing-related bug, and debuggers are notorious for making software behave differently, which usually changes the bug's behaviour. However, since you will be shipping with the logger in the source code (perhaps in a different state), the behaviour of the software should be similar regardless of the surrounding conditions.

Another feature of log files is that you can sample the information within them to look for patterns, and this search for patterns does not really need to be related to a bug. You can look through the file, notice that after X operations the bug always occurs, and try to work out what conditions could cause that or what was different before that point; but the same data can also yield real-life usage information to guide optimizations and to understand work flow. Sometimes the work flow that developers envision is not the work flow that users actually follow. While there is no better way than talking with customers to learn how they use the application and what they would like to see done differently, people often think only in the current context and do not realize all the potential that the software really has, and gathering information from log files is a great way to get useful statistics on the application. This information helps not only the current developers of the project, but also support staff and future developers.

In one of the last issues of the Journal of Object-Oriented Programming (October/November 2001), Jerry Gao, Eugene Y. Zhu, and Simon Shim wrote “Tracking Software Components,” which discusses distributed logging. Of the many things they mentioned, one of the most interesting ideas was the ability to play back a log file. In other words, the application records a number of events, and using a supplementary tool, the events can be executed in the same sequence to try to reproduce a bug. This obviously requires careful planning and, in some cases, significant effort to write the playback tool; the example is simply provided to demonstrate some of the power that a properly used logging system can provide.

As with many techniques, adding logging to new code is usually easier than adding it to existing code, but deciding what to log makes the difference. As mentioned in the article, error conditions and exceptions should, at the very least, be logged. While a condition may be of mediocre concern today, it can become more important as the software evolves, and logging such conditions will allow you to see areas that may have gone wrong.

In addition, you will want to record some of the user's actions, such as the command-line arguments, buttons pressed, menus selected, et cetera, as well as information about the environment, such as environment variables, hardware, memory, et cetera. This will allow you to see what the user has done and to see the errors in relation to the operation being performed. Likewise, information about internal state or lengthy operations should also be logged, but probably at a lower level, so that developers can see it but clients can disable it.

One of the things that I mentioned above is that the logger is not removed from the shipping software, though it can be disabled in many fashions. Mike actually mentions disabling the logger completely, but in my opinion this is not a good idea, as you never know when an issue will pop up. Besides, the logger can easily be integrated with your user interface to provide users with more context about what is happening within the system.

A better approach is to use a logger that filters messages based on some configuration option. Most loggers provide some way of assigning a level to a log message, and this determines how the message is propagated. The range of available levels, however, varies from logger implementation to logger implementation.

Off-the-shelf loggers are available for most programming languages, and many modern languages come with a logging service (such as Java 1.4's java.util.logging). No matter which logger you decide on, it is important to select a flexible one. The flexibility is usually obtained by using levels to discriminate among the messages, the option to be threaded or not, and the ability to define an Observer that allows for custom reporting services, such as presenting certain messages in the status bar. The log file used must also wrap after either a certain number of messages or a particular file size, and each entry must contain, at the very least, a time stamp, a level, and a message; more context is always desirable, such as the file name and line number where the message is logged, and the host name and/or thread identifier that the message is from.

Debugging client applications is much easier with the proper context. Logs provide that context with minimal overhead. The advantages greatly outweigh most excuses people raise against them; there are few applications where loggers do not make sense, and usually all that is needed, if anything, is a better logger.

September 18, 2004

The Rise and Continued Rise of Object Orientation...

Over at InformIT, Danny Kalev has an article entitled “The Rise & Fall of Object Orientation.” I find that the article is trying to suggest something, but it does not provide any real substance to support the idea.

For example, in the cases that he presents as inheritance flaws, it seems that the rule to use inheritance only for Is-A relationships was not followed, and the real reason for the inheritance was code reuse. The realization of reuse comes from more than simply inheritance; it comes from the fact that a class should be written to do a single thing, so that the representation can be reused across multiple projects. This is not limited to the object-oriented paradigm, though it can be easier to achieve in object-oriented languages. As mentioned in Leveraging Frameworks, writing frameworks is not easy, and many framework projects fail; as such, using frameworks as examples is probably not the best choice. What about other object-oriented projects, such as ACE? That project has been around for a while and, to my knowledge, has not had a huge rearchitecture, rewrite, or renaming like the examples that he provided.

C++ has historically been known for shallow class hierarchies, whereas other object-oriented languages such as Smalltalk and Java generally have deep ones. The lack of derivation should not be seen as bad, but merely as a different approach that C++ culture prefers. The argument that inheritance propagates bugs is also weak: your classes should have test cases of some genre, and suppose there is a bug in a class with five derived classes; fix it once, and you have just fixed it for all of them. When a class hierarchy, containment, or generic algorithms are avoided, the duplicated code will usually mature at different rates, as bugs get resolved in some portions of the code instead of across the entire code base.

The example of avoiding member functions is also weak, as a library of containers should probably derive from a base class of some sort for the size() function, with the possibility of it being specialized. While it is true that you could implement this via the std::distance algorithm, a member function can be optimized for the container. As a complete counter to the argument, think of Item 44 of Effective STL, which prescribes preferring member functions to non-member functions. If the same member functions are included in each container, then it is easier to implement code using a std::list and later change it to a std::vector with minimal changes.

This article appears to suggest that everything should be generic, but interestingly enough, Danny also wrote an article on over-engineering, in which he warns about being overly generic. This seems a mixed message. Personally, I think generic programming and object-oriented development are both techniques that developers should know how to use, and when developing code, you choose the technique that is best for the situation.

September 14, 2004

The Tools of Software Development...

A little while ago, Andrew Koenig and Barbara E. Moo wrote an article in C++ Users Journal discussing some of the parallels between house construction and software engineering, prompted by renovations they were doing. One of the differences they highlighted, and that others have highlighted as well, is that construction workers own their own tools, while software developers' tools are, well, different, to an extent.

The major difference is tool sharing. When a carpenter does not have a particular tool, he can generally borrow it from another craftsman, since the tools are easily portable and shared. When a software developer does not have a particular tool, however, sharing another developer's tool is not always easy. Small tools like scripts can generally be copied or shared, but other tools need to be installed, presuming that they can even be installed on your workstation. In the latter category, licenses sometimes need to be obtained.

This can be very painful, especially with floating licenses of popular software. Imagine a world where software developers were like carpenters, and went to work with their own tools; could this approach work?

I do not think our tools are mature enough yet to do this, and this is one of the goals behind Eugene Eric Kim's A Manifesto for Collaborative Tools. By this, I mean that the interoperability of the tools is not there yet. Take, for instance, UML tools: the standard interchange format for UML is XMI, and most UML tools support one version or another of it. But even when your applications understand the same version of XMI, the interoperability of the products really depends on their interpretation of the standard. For example, I saw a document generated by Visual Paradigm for UML's C++ reverse-engineering module that was not even a valid XML file. Furthermore, for some elements in the XMI, Visual Paradigm had set the visibility attribute to none, whereas according to the DTD, the only allowed values for this attribute are private, protected, and public. Only once these issues were resolved was I able to import the model into Poseidon. It is exactly for reasons like these that standards like XMI exist, which makes the current state of affairs unfortunate.

In addition to inconsistencies such as those above, it is possible for some features of a tool to have no direct mapping into the interchange format. In cases like these, the vendor has two choices: do not export those features, or export them in a non-standard fashion. Depending on the feature, not exporting it may go unnoticed, but the feature could also be a main reason for using the product; either way, the limitation must be documented clearly. Extending a standard interchange format in a way that prevents other tools from accepting the file, however, is obviously not a good idea. These decisions, coupled with what services the tools provide, directly affect how a piece of work from one tool will look in another; therefore, interoperability is rarely guaranteed.

As another example, consider IDEs: one thing that IDEs generally do is create some type of project file. When all developers use the same IDE, it is very easy to stay in sync with the team, since when you update your view, you can immediately use the project file in the view. If two different IDEs were used to build the same source code, however, additions of files, changes of settings, and differences in compilers could go unnoticed or prove incompatible, and new build issues could appear. Please note, however, that using different tools is not the same as targeting different environments. More to the point, if you are targeting multiple compilers, then you must build each branch with each compiler prior to merging; but supporting various developers' preferences is a non-problem, as the released software will only be built by one of those configurations, and therefore must only work in that one configuration.

Another issue with supporting multiple development tools for a single target environment is support itself. When a problem occurs or a developer is unsure how to use a particular feature, talking with team members will likely produce an answer, rather than a “why don't you just use my tool.” Purchasing tools is another point: you cannot simply purchase X seats of a tool without ensuring that it is compatible with most developers' environments. If it is not, you either have to select another vendor or purchase a few copies of this and a few of that, which eventually ends up at the same problem it started with.

Sharing a UML tool, for instance, is more complicated than sharing a hammer. In addition to the installation and licensing issues, the time needed to get comfortable with the tool can be far greater than the time needed to learn another drill. Where there are no standard development tools, developers will generally procure their own (hopefully legally), and what will generally occur is that many different development environments will be created. Until Eugene Eric Kim's vision is realized, true collaboration will suffer.