
November 1, 2004

J2EE and .NET are friends after all...

Java Developer's Journal has an article entitled “J2EE + .NET,” which suggests that a number of organizations are combining J2EE and .NET, taking the best of both worlds to create new applications. This is about looking for the proper tool for the job and selecting that tool based on all the factors, not just what you already know.

While I agree with a good portion of the article, one thing that really sticks out (and perhaps I am just reading into it) is the suggestion that Microsoft has revolutionized development environments by letting all developers be highly productive in the language of their choice, implying that “type in whichever language you prefer” is an advantage. This is not to say that it has no advantages, but one major issue that is sometimes overlooked here is the maintenance of the code. Yes, it is great for the new intern to be completely productive in a non-mainstream language, such as Eiffel ENViSioN, but this environment is not “read in whichever language you prefer,” and since code is read far more often than it is written, the choice of language is still an important issue for maintenance, whether resolving bugs or extending existing functionality.

The other issue is how some of the supported languages are more-or-less coerced onto the CLI framework. I touched on this when discussing the C++/CLI extensions, but an even better treatment of the topic is Richard Grimes's article in the November 2004 issue of C/C++ Users Journal, where he describes extensions for properties, events (delegates), interfaces, generics, and more; I find these extensions even more disturbing than I did before. Take, for example, the following sample class from Listing 2 of the article:

ref class Person {
  property String^ Name {
    String^ get() { return name; }
    void set(String^ n) { name = n; }
  }
  property int Age;
private:
  String^ name;
};

While these constructs are extremely powerful, this is not C++, and even with the preprocessor, it looks as if it would be difficult to make this portable in any sense of the word. For C++ to be a first-class citizen in the CLI, and for the other languages to work with these assemblies, such changes will need to appear throughout the code base, making projects less portable to other platforms, not to mention requiring some initial investment in exposing your API to non-C++ languages. Portability was obviously not one of their aims.

As a few groups that I talked to at OOPSLA regarding the gcnew changes put it: when in Rome, do as the Romans do. If you are targeting a .NET application, why would you really want to use C++ for it? Yes, there are some things that C++ can presently do that C# and the other languages are not as well suited for, and you may have an investment in existing C++ libraries, but developing full applications in C++ does not seem a natural fit. This is similar to the synopsis that JavaWorld gave of J# a while ago.

I realize that Microsoft is attempting to position C++ for systems development, but C++ is already a very confusing language, and adding more confusion to it is not necessary. The effort to migrate all Java applications onto .NET (and vice versa) was not always a good decision, and the first article mentioned above discusses how we can instead bridge the two technologies together. Bridging C++ into the .NET world could have had a vastly different feel than the present Visual Studio 2005 line-up, and as strange as Managed C++ was, at least it was mostly standard C++.

October 31, 2004

The Shared User Vision...

One thing that I have been discussing off and on is Alan Cooper's “The Inmates are Running the Asylum.”  En route to OOPSLA, I decided to pick up this book again, and I have finally finished reading it.  I had last left off in Chapter 9, which discusses the importance of user-centric design through the use of personas: instead of thinking of software in terms of the features it could provide, it is much more important to concentrate on the goals that the user has.  Accomplishing this requires some interaction design, which is what the next portion of the book deals with.

Alan states that developers often confuse tasks with goals.  Tasks, he defines, are the steps taken to accomplish goals, but unfortunately developers usually design user interfaces to complete tasks instead of to accomplish goals.  Stepping back from the context that Alan is discussing, one of my courses at OOPSLA was with Kevlin Henney, who reminds us that when developing object-oriented software, getters and setters are generally not the proper abstraction for an object.  More specifically, his example was a Bank Account whose methods are getBalance() and setBalance().  While this is a very powerful interface, it is not the proper abstraction to use, especially since the object could change state between the initial get and the final set.  A better interface would be a method to acquire the current balance, a method to withdraw funds, and a method to deposit funds.  In addition to being a more coherent interface, the business logic can easily be applied in this design.
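A minimal sketch of Kevlin Henney's Bank Account point (the class and method names here are my own, not from his talk): the getBalance()/setBalance() pair exposes representation and invites lost updates, while deposit()/withdraw() express the caller's goals and give the business rules a single home.

```java
// Hypothetical illustration: a goal-oriented interface rather
// than a getter/setter pair.  The balance rules live in one place.
class BankAccount {
    private long balance;  // in cents, avoiding floating-point money

    long balance() {
        return balance;
    }

    void deposit(long amount) {
        if (amount <= 0)
            throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }

    void withdraw(long amount) {
        if (amount <= 0)
            throw new IllegalArgumentException("amount must be positive");
        if (amount > balance)
            throw new IllegalStateException("insufficient funds");
        balance -= amount;  // the rule is enforced here, not in every caller
    }
}
```

With getBalance()/setBalance(), every caller would have to re-implement the "sufficient funds" check between the get and the set, and any two concurrent callers could silently overwrite each other's updates.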

Likewise, the context that Alan is referring to is precisely the same issue, only with a different user.  When designing user interfaces, the same principle applies: instead of acquiring the information that the underlying algorithms require, consider the user's goals first, and these goals are not necessarily computer related.  Alan starts off his list of user goals, for example, with the goal of not feeling stupid.  One problem Alan discusses, however, is that developers make up false goals for users, such as "it is important to save memory" or "there must be a web interface."  Perhaps these are requirements, but they are not goals.  Alan goes into great detail here, and even provides some guidance on making software polite.  Many of his ideas are great, but they are not immediately obvious.  Politeness is not a notion of simply saying please and thank you, but rather of becoming more human, as described in Clifford Nass and Byron Reeves's "The Media Equation."  In their research, they discovered that humans interact with computers in ways similar to the ways they interact with other humans.  This means that when software indicates that users are doing something wrong, the interaction should be the same as when another person points it out.  In particular, you would not like someone to come over and tell you that the work you have been doing is completely wrong, yet software does this repeatedly.  For example, people will select options that appear to be usable in the current function, and the software will respond with a phrase that would definitely not be appropriate coming from a human.

In the next chapter, Alan discusses usage scenarios, which are essentially use cases.  The important thing about scenarios is that most of the time, users will only use a finite subset of the functionality, as the 80-20 rule would suggest.  Although this subset can change over time and with the task at hand, these tasks should be easily accessible to users.  The hard part is determining the proper use cases and which functions belong in that finite subset.  To help in this regard, Alan provides his table of Perpetual Intermediates, which describes the proportion of users who are beginners, intermediates, and experts: most users are intermediates.  Users evolve from beginner to intermediate quite quickly; however, most will stop there, and only a few will progress to expert.  Experts, moreover, can get bored with the software and migrate to other software, so targeting these users is not ideal.  Your goal should be to optimize your software for your intermediate-level users, while making it easy for beginners to learn.  One major component of this is vocabulary: the vocabulary should be consistent between the developers and the interfaces, and its definitions should be shared.

Software companies have repeatedly shipped software that does not meet user expectations, but this is not for lack of trying. The example that Alan provides is Microsoft Windows: up until Windows 3.0, people really laughed at the usability of Windows. Alan argues that this situation exists because software vendors are more interested in shipping regular releases than in focusing on usability and stability. Each release, they ship a version of the software, and based on user reactions, future versions address those complaints. This is considered acceptable practice in software, and many companies attempt to emulate our friends at Microsoft.

But why? According to one of the case studies that Alan provides in the book, the proper interaction design of the Logitech Peacock met users' needs so well that it resulted in fewer support calls than any of the company's other products. With support and maintenance of software being more expensive than properly designing the software from the start, it seems completely backwards to continue developing software in this fashion.

Another aspect I totally agree with Alan about is that some interfaces are developed without showing them to users first.  In many organizations, by the time users really get a feel for the software, development is almost complete.  Alan's analogy for this is fantastic: sandpaper can make a chair or a table smoother, but sandpaper cannot turn a table into a chair.  This is a great analogy because if you have created the wrong interface for the application, the code you have will probably need to be rewritten entirely rather than merely adjusted, and that is unlikely to happen at the end of a development cycle.  The only way to prepare for this case is to prototype and let users experiment with the software well before it is fully developed, while the code can still be easily formed into something else.  One analogy that I like is pottery: while you are creating a vase, the clay can easily be maneuvered into shape and form; once it is baked in the kiln, however, there is no way to change its form without breaking it.

Innovation is another topic discussed throughout the remaining chapters of the book. I had never thought of it in this fashion, but the style guidelines provided by Apple and Microsoft for their respective operating systems stifle true innovation in interaction design, especially since these style guidelines are not produced by a third party. In other words, this is like dictating that all of Microsoft's competitors work in visually compatible fashions with this document, and that those who do not are not compliant. The ability to innovate, to make new controls, and to think of software from a new perspective is blocked by the need to be conformant.

By the same token, Alan points out that many companies on a regular release cycle, such as a software version per year, likewise stifle innovation. The need to deliver software in a fixed time compresses the time available for proper analysis of what users require and for proper interaction design. While I agree with Alan on this front, some organizations have addressed this by developing software versions concurrently. This is difficult to manage; however, it does provide more time for upfront design. But there is definitely a need for the regular release cycle: companies that do not release regular versions appear to be idle. For example, when I was in the market for a design tool, one of the tools I was considering had a product name ending in 2000, and the company's last press release was in 2002; it being 2004, the organization seemed idle to me. When I contacted them about this, they said that they had released some products in between but had not really changed the version number; even with this answer, I did not feel comfortable investing in the product.

I regularly have to interview interns at my current organization, and during these interviews, one of my all-time favourite questions to ask is, "if I were to give you a project to develop from scratch, which programming language would you use?"  This is one of my favourite questions because I have yet to have an applicant answer it correctly.  To date, absolutely no one has asked me what the project is; instead, they generally answer with the language they are most comfortable in, or the language they most want to learn.  My desired answer is one that considers the strengths of the language for solving the problem at hand.  There are definitely tasks for which a scripting language is better suited than a compiled language like C++.  These developers are focused on solving a problem without knowing what the problem is.  But this issue is more fine-grained than interns.  I have also experienced similar situations with managers and experienced developers, who ask prematurely whether the interface will be a rich-client interface or a web interface, or whether it will be in C++ or Java; many people simply do not understand that there is very little room for opinion in answering these questions.  The technology to prefer over any other should definitely be the one that solves the majority of the users' issues and meets the users' goals.  Likewise, one thing mentioned in the book that I agree with is that a user interface does not necessarily imply a graphical user interface.
All too frequently, we begin developing graphical user interfaces out of habit, or thinking that this is what users need; however, there are times when a text interface is appropriate. One of the examples that Alan provides is a point-of-sale application: in the minds of many people, such an application would imply a graphical user interface, but on one project a text interface was created, and it was quite easy to use.

Part of the problem that I see here is that applications unfortunately suffer from an identity crisis, and the sad part is that there are too many such applications on the market right now.  The identity crisis goes well beyond the screen full of buttons, into the ways these products are created.  A great example from one of the earlier chapters is the executive who prefers navigation via the mouse, and therefore this becomes a requirement, whereas the application may be better suited to the industry through use of the keyboard.  All of these conflicting visions of the user are resolved by using personas, because the right application will be created: whether it should have a graphical or a textual user interface, a keyboard-driven or a mouse-driven interface, and the multitude of other options are not decisions that come from the developers, marketing, or executives; they are decided by the needs of the person who will actually use the application and who, as stated, probably influences its purchase at some level or another.

Alan discusses upfront design in the context of interaction design, and states that once the interaction design is completed and documented, it can be easily implemented by developers. He states that most of the time (probably 80%) should be spent on interaction design, with the remainder spent on the implementation. But interaction design, by definition, only narrows the scope to which features are delivered and how those features work. The entire topic of software architecture is not discussed, yet it is equally important to successful and maintainable software; I am not sure whether Alan lumps this in with the implementation.  Either way, he does clearly state that the interaction design must be documented and presented to everyone involved in order to acquire a shared vision of the software product, arguing that this shared vision is required to keep people from doing tasks that are unrelated to the end goal of the product.

Alan argues that doing the design prior to the implementation is fairly new to software development.  In some environments this can be true.  His analogy for this is rather interesting: he picks the creation of a modern movie, which proceeds in three stages: pre-production, production, and post-production.  As bizarre as this may seem, it actually dovetails quite nicely with modern software development.  During pre-production, the script is written, the production of the film is designed, actors are cast, and money is raised.  During production, the movie is filmed per the pre-production plan.  During post-production, the film is edited, visual effects are added, and marketing starts.  The first thing to note is that many people are involved in creating a movie, so anything changed in one of the later stages becomes harder and harder to change, and the cost of change becomes prohibitively expensive.  People in the movie industry know this, so they attempt to do everything in the earliest stage possible and avoid changes that should have been made in previous stages.

Software development essentially has the same stages, although their names are different, but many organizations are not as disciplined as the film industry about not going back to earlier stages, or even about observing that there are stages.  Books like Roger Pressman's Software Engineering: A Practitioner's Approach discuss these software stages, but the reality is that many organizations do not observe them fully, and we do not have a clear, shared vision of what is being created.  Design is intended to describe this shared vision so that people can understand what is happening: developers understand what they must create, and marketing understands what the developers will deliver.  It should not be the case that the developers believe they are delivering one product, the marketing department believes they are delivering another, and the customer has yet another vision.  In reality, there should be only a single vision, and that should be the customer's vision, as denoted by the personas for the application; it does not really matter what the other people want from the software or what they think it should do, since it is the customer who will use the software, and all of the customer's needs must be met.

Another aspect that he states is new is performing usability testing before the product is created.  At present, most software goes through both functional testing and usability testing only after it is in a completed (at least partially) state.  As with security, usability cannot be an afterthought.  It must be designed in from the start; attempts to resolve user-interaction issues after the core application is developed will be more expensive and difficult than doing so beforehand.  This parallels the aforementioned movie analogy: it is easy to change the object that explodes in a scene during pre-production, but after that point it can be very difficult and expensive.  Such kinks should be ironed out before the software is built, which is generally done via prototypes.  Prototypes ensure that there is a shared vision and that users understand what is being created.  This is not to say that prototypes are not created today, but rather that users are not always involved with them.  In many cases, the prototype is created as part of the estimation process, or as validation that the product can actually be built, but such interfaces are often never shown to the actual users of the application.

I state these items in rather broad, all-inclusive terms, similar to the way they are presented in the book.  In reality, not all software is created this way, but in my experience, many projects still proceed in this fashion.  Newer methodologies such as extreme programming involve the user in the building of the application and promote short-iteration iterative development, but unfortunately such practices are not the norm.  I clearly see this as an aspect of our development paradigm that needs to change.  As noted in Steve McConnell's keynote at OOPSLA, our software is a little too complex at this stage to do the entire design at the beginning, but we still need to understand, in simple terms, what the software does and what we are actually delivering.  The shared vision of the software must still hold.

And this is really what The Inmates Are Running the Asylum is all about.  To create software that meets our customers' needs, we need to know those customers, and we have to design the software for them.  The book highlights the reasons there is a problem right now and presents some ideas for changing the present situation.  I recommend this book (or its companion, About Face 2.0: The Essentials of Interaction Design) to every developer.

October 28, 2004

Extending C++ and Java...

For the past week, I have been at OOPSLA, and I have been rather busy collecting all kinds of information that I will eventually discuss here, but I thought that I would start slowly with the conference itself.

Yesterday Herb Sutter presented a talk entitled “Concrete Languages on Virtual Platforms,” and it was slightly different from what I had in mind. What I had imagined was more of a discussion of why we use virtual platforms and how language features are targeted at them, but instead the discussion was about the coercion of C++ onto the .NET platform for version 2.0.

As you are aware, C++ has, for the sake of simplification, two methods of variable allocation: stack variables and heap-allocated variables. Stack variables are constructed when introduced and destroyed at the end of the scope in which they are introduced, whereas heap-allocated variables are constructed by calling new and destroyed by calling delete.

The situation, however, is that the CLI platform uses garbage collection, so porting C++ onto the CLI means introducing garbage collection into the language. Stroustrup has always said that garbage collection could be added to the language; however, the approach that was chosen is not standard and will result in code specific to the .NET platform, such as the following:

MyClass^ t = gcnew MyClass();

As you see, there is a new keyword, gcnew, whose result is assigned to a handle to a garbage-collected object, denoted by the caret (^, or the hat); after this, you can use the handle much like a standard pointer. The other change is to the destructor of the class, whereby the finalizer method is now !MyClass(). You can read about this for yourself over at MSDN. Herb does, however, point out that this is a standard language extension, but I agree with Kevlin Henney that this is like saying that C++ is a standard language extension of C.

In addition to bringing up some bad memories of Pascal, I am not convinced that this is the best way to implement garbage collection within the C++ language, but this is not what I really wanted to talk about today, so I will leave the topic for another day. While discussing all of this, one of the topics Herb focused on was what he finds to be the horrors of finalization. In Herb's mind, we need to distinguish between destroying an object and freeing its memory; garbage collection should only be concerned with the latter, but today's finalizers conflate the two. As an example, within a finalizer an object can be resurrected simply by assigning the this pointer to a static variable. As another example, suppose that you have a set of objects that depend on each other in a circular fashion. The garbage collector must then choose one of those objects to finalize first, which means that a finalizer cannot safely access other objects, as those objects may have been finalized already. For these reasons (and more), Herb insists that garbage-collected languages separate the concepts of object destruction and freeing memory.

The only facility, however, where I feel that this makes a difference is with patterns such as the Scoped Locking pattern [POSA2] and other scope-related patterns, which are important in languages like C++. As an illustrative example, the following code acquires a lock and then calls doSomeOperation():

void MyClass::myFunction() {
  LockGuard lock( objectLock );  // constructor acquires the lock
  doSomeOperation();
}                                // destructor releases the lock

In the above pattern, the LockGuard constructor acquires the lock and the destructor releases it, which is a very safe way of implementing multithreaded applications. In languages like Java and C#, however, the pattern cannot be used directly, because there are no destructors, and finalizers, if they run at all, may run much later than you expect.

For the sake of argument, let us ignore the fact that Java has the synchronized keyword (as Herb's presentation did) and imagine that we are using a set of classes similar to those discussed here. Our implementation in Java becomes a little heavier than the C++ version above:

void javaMethod() {
  LockGuard lock = new LockGuard( objectLock );
  try {
    doSomeOperation();
  } finally {
    lock.release();
  }
}
The more code there is, the more chance that developers will get it wrong (another argument against gcnew and ^). That said, Herb also points out that the C# way of accomplishing this is via:

void csharpMethod() {
  using ( new LockGuard( objectLock ) ) {
    doSomeOperation();
  }
}

In such a comparison, Java looks horrible. In fact, in my first exposure to C#, I mentioned how much I liked the using keyword, and as such, I hoped that it would eventually be brought over to Java.
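For completeness, the Java try/finally idiom above can be made concrete with the java.util.concurrent.locks API introduced in J2SE 5.0 (the LockGuard class above is hypothetical; ReentrantLock is the real library class):

```java
import java.util.concurrent.locks.ReentrantLock;

// A counter guarded by an explicit lock.  The try/finally guarantees
// the unlock on every exit path, mirroring the C++ destructor.
class Counter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    void increment() {
        lock.lock();          // acquire (the "constructor" step)
        try {
            count++;          // critical section
        } finally {
            lock.unlock();    // release (the "destructor" step)
        }
    }

    int get() {
        return count;
    }
}
```

Note that even with the real library class, the acquire/release bracketing must still be written out by hand, which is exactly the verbosity being complained about here.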

One very interesting part of attending OOPSLA is that there are many big names from the software industry present. In fact, sitting right beside me during the above discussion was Joshua Bloch. Earlier today Joshua was signing copies of his Effective Java book, and having read this book in my early Java days, I approached him and asked him about the kinds of things he would change in it. It turns out that he is presently writing a book on Java puzzles (I presume something like Herb's Exceptional C++ series), and after that he will be updating Effective Java; to remain on topic for once, I will just say that there are some very interesting additions to this book coming soon.

Following the book discussion, I asked Joshua what he thought about adding the using keyword to Java, and it turns out that he was presenting some ideas at The Java Platform: Tiger and Beyond workshop. Essentially, he observes that the using keyword only does something when exiting the scope, not when entering it. In addition to avoiding a new keyword in the Java language, his idea is essentially to introduce a new interface, which we will call, for lack of a better name, Doable, defined as follows:

public interface Doable {
  void begin();
  void end();
}

From this interface, the idea is that the developer would use it via the following:

do ( new MyDoableClass() ) {
  doSomething();
}

Under the hood, a structure equivalent to the original Java implementation above would be generated, calling the interface methods as well. This is a very exciting idea, but now that I think about it, I suspect the begin() method would rarely be used. Specifically, if you consider the implementation above, I am constructing a new object, so most of what could be placed in begin() would usually be placed in the constructor instead. This is not to say that you will never pass in an already-existing object that would benefit from begin(), but I imagine that would not be the norm, and in the rare case that a method like begin() is required, it could be called explicitly prior to the doSomething() above. More specifically, I see the more popular uses of this construct being localized resources, such as I/O, sockets, SQL connections, and so on, which currently require a close() method to be called in a finally block. I cannot think of a case off the top of my head where begin() would be useful, though I am sure there are some; I am just not sure how necessary it is.
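A sketch of how the proposed construct might desugar today, with the Doable interface repeated for completeness; the helper name doWith() is my own invention, not part of the actual proposal:

```java
// Hypothetical desugaring of the proposed do-construct.
interface Doable {
    void begin();
    void end();
}

final class Scopes {
    // Runs body between begin() and end(); the finally block
    // guarantees end() is called even if the body throws.
    static void doWith(Doable d, Runnable body) {
        d.begin();
        try {
            body.run();
        } finally {
            d.end();
        }
    }
}
```

A lock, file, or connection class implementing Doable would acquire its resource in begin() (or, as argued above, in its constructor) and release it in end(), and the compiler would emit essentially this try/finally on the caller's behalf.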

Nevertheless, I look forward to seeing something like this implemented in a future Java version, along with some of the other proposals I have heard, including switch statements on Class and String and the addition of a const-like feature to Java, such as is implemented in Javari, which I may describe at a later time.

October 21, 2004

C++'s Export revisited...

As promised, Danny from the InformIT C++ newsletter has revisited the C++ export keyword. The article provides some history and facts surrounding the issue, and it is a good summary of the keyword.

I am not surprised by the ISO C++ Standards Committee standardizing the export keyword without having at least one implementation; other things were standardized without implementation experience and are not fantastic today, such as the std::allocator concept (as described in Items 10 and 11 of Scott Meyers' Effective STL) and std::vector<bool> (as described in Item 18 of the same book).

It is also nice to know that Comeau C++ is not the only compiler that supports the keyword. It appears that Intel's C++ compiler also deals with it, albeit in an unsupported and undocumented role.

Danny's description of how the export keyword actually works is similar to the way I expected it to work: like a precompiled header that is maintained by the compiler, where any change to that file requires recompiling most of your tree. I had envisioned this precompiled form being more like a Java class file, expanded as required during linking; to my understanding, though, the standard does not mandate an actual implementation, so this latter method could potentially be done, at the cost of significant effort.

I am surprised that some vendors claim this is not a demanded feature. When I talk to people about templates, they generally complain about the complexity of debugging them and of their instantiations (not to mention the group of people who hate templates because of one specific template implementation, such as a person I spoke to recently who hated templates just because of std::auto_ptr). While debugging has been addressed in the more popular compilers, there are still debuggers that do not handle templates well. By far, however, the complaints usually concern template instantiation. While no one has outright come up to me and said that they want export support in the compiler, it is undeniably a possible solution to some users' needs.

Of course, if I were a compiler vendor, would I implement the export keyword? Probably not. I would focus my energy on making standard template compilation better by working on the implicit instantiation and linker resolution phases, since these are the areas people presently work with the most. A feature such as the export keyword, as highlighted in the article, is not easy to implement, especially to do it any justice, and for people to get any benefit from it, they will have to port some code to use the keyword. These two issues argue against implementing it. The request to remove export [PDF] from the C++ Standard is also an interesting read on the topic, and although Danny did not specify the reason it was rejected, I hope that vendors will make templates faster and better, whichever keywords they do it with.

nextprev for post.html...

On the bBlog forums, someone asked about getting a nextprev-type plugin for individual posts.

While perhaps not the most efficient way of doing this, one way to accomplish this is by using the max_pages variable, and adding the following code to your post.html template:

{nextprev num=1 max_pages=1}
{getrecentposts num=1 assign=posts home=true skip=$current_offset}
{if $goprev ne ''}
<a href="{$goprev}" title="Next Post">Next</a>
{/if}
{if $gonext ne ''}
<a href="{$gonext}" title="Previous Post">Previous</a>
{/if}