Eric Y. Theriault: blog
eyt*

December 8, 2004

Coding Standards...

JavaPro has a piece on Why Coding Standards?, and I could not agree more. When every developer has their own style, reading and maintaining code becomes much harder. One problem that typically results is a merged branch that reformats the code to the next developer's tastes and, somewhere in the middle of all that reformatting, happens to fix a bug. These branches are the worst kind, because they make it nearly impossible to compare the latest version of the code against an older one: the stylistic churn overtakes the real change, and you cannot make heads or tails of the diff.

Equally annoying are the developers who redo code without understanding what it actually does. I was reminded of this just earlier today, when I was shown a piece of code that the person claimed was mine. Looking at it, I was rather surprised that it could be mine, since I quickly noticed a few obvious mistakes. Going through the file's history, I was able to track down that the developer who was confused by the code was the very one who had rewritten the original, purely for stylistic reasons. This must not be confused with refactoring, since the code was changed without preserving its original contracts.
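
To illustrate (with a made-up Java example, not the actual code from the anecdote above), here is the kind of rewrite I mean: the "cleaned up" version is shorter and looks tidier, but it silently drops part of the method's contract.

    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.List;

    public class ContractExample {
        // Original: the documented contract is to return an empty
        // list when the input is null.
        public static List<String> tokenize(String input) {
            if (input == null) {
                return new ArrayList<String>();
            }
            return Arrays.asList(input.split(","));
        }

        // A purely "stylistic" rewrite: shorter, but it now throws a
        // NullPointerException on null input. The contract changed,
        // so this is not refactoring, even though the common case
        // still works.
        public static List<String> tokenizeRewritten(String input) {
            return Arrays.asList(input.split(","));
        }
    }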

Coding standards can also help prevent bugs. I always use and highly recommend the rule from Sun's conventions that the article discusses, where statements are always surrounded by braces; this style alone has prevented many bugs, and a fair bit of confusion, in my code. The next rule, regarding commenting, is not one I fully subscribe to, though, since it generally produces code with no comments whatsoever; such code may make sense today, but in the future it can be hard to understand what the algorithm or function is supposed to do. The article hints that JavaDoc could be used instead, which could work nicely, since your methods should be short.
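
The brace rule deserves a quick illustration. In this small (hypothetical) Java example, the unbraced version reads as though both statements are guarded by the condition, but only the first one is; the braced version cannot be misread.

    public class BraceExample {
        static void check(Object user) {
            // Without braces, only the first statement is guarded.
            // When a later edit adds a second line, it runs
            // unconditionally:
            //
            //     if (user == null)
            //         System.out.println("missing user");
            //         return;   // oops: always returns
            //
            // With the Sun convention, the block's extent is explicit:
            if (user == null) {
                System.out.println("missing user");
                return;
            }
            System.out.println("processing " + user);
        }

        public static void main(String[] args) {
            check(null);
            check("alice");
        }
    }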

Getting very strict about coding standards can make all the code look as if it were developed by a single developer. Doing this, though, generally takes some help from tools. Tools like Borland's JBuilder have reformatting and analysis capabilities that let you check your code against the standards while you are writing it, and this can be very useful.

Choosing the right standards, however, can be involved, and this is not covered by the article. For Java, many developers follow the Sun Code Conventions. For C++, it is a harder call; from what I have seen, the best candidate may be C++ Coding Standards by Herb Sutter and Andrei Alexandrescu, but there are really many topics and trade-offs to consider in such a standard. Certainly the common knowledge found in Effective Java, Effective C++, More Effective C++, Effective STL, Large-Scale C++ Software Design, and a variety of other books should be part of that coding standard.

Standards simply make for better code, but you cannot apply them blindly; you need to understand why you are applying a standard, and more specifically, you need to know when you can break the rules. If you do not have a coding standard, it is probably time you adopted one.

December 3, 2004

fdisk's SGI partition table fix for variable sector sizes...

I have recently discussed a bug in Linux's dvhtool when dealing with devices whose block size is greater than 512 bytes. A similar issue also exists in fdisk when creating SGI partition tables, as filed in Red Hat's Bugzilla Bug 140437.

The issue is that the sector size in the volume header was hard-coded to 512; as a result, SGI IRIX's fx tool would report errors on such devices and would attempt to create filesystems that exceeded the drive's capacity. The patch resolves this by using the sector_size variable instead. I have also updated the partition table display to correctly denote what is happening.

With this patch, you must partition the drive via the base device (/dev/sda, for example), and not via SGI's volume partition (such as /dev/sda3). In the former case, the number of calculated sectors (represented by llsectors) is in 512-byte units, whereas in the latter case it is in 4096-byte units, so the division by the sector size keeps making the device smaller.
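
As a rough sketch of the arithmetic involved (in Java rather than fdisk's C, and with made-up sector counts), converting a count of 512-byte units to native sectors is only correct when the count really is in 512-byte units; feeding it a count that is already in native sectors shrinks the device by a factor of eight each time.

    public class SectorUnits {
        // Convert a length reported in 512-byte units to the
        // device's native sector size.
        static long toNativeSectors(long count512, long sectorSize) {
            return count512 * 512 / sectorSize;
        }

        public static void main(String[] args) {
            long sectorSize = 4096;  // native block size of the device

            // Base device (/dev/sda): the count is in 512-byte
            // units, so the conversion yields the true sector count.
            System.out.println(toNativeSectors(71094096L, sectorSize));

            // Volume partition (/dev/sda3): the count is already in
            // 4096-byte sectors, so the same division wrongly
            // divides by 8 again.
            System.out.println(toNativeSectors(8886762L, sectorSize));
        }
    }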

The fix was tested between an IRIX and a Linux workstation, where the device would be initialized on one operating system and moved to the other for validation.

This patch has been submitted directly to the maintainers of util-linux, and is based on Debian's util-linux 2.12i. If you see any issues with the patch, please let me know.

November 23, 2004

Got your back...

Peter Varhol has posted a blog entry entitled Got Your Back, in which he talks about ways to ensure stable code, essentially by knowing how to use your tools. His examples include code reviews, unit tests, and exception handling; I would add logging to that list, since when you log sufficient information, you can spot situations and bugs that would be much harder to find by other means.

All of these tools are essentially insurance for your software, which could contain anything from a simple one-line bug to a complex architectural problem; the tools exercise your software in ways that expose such conditions. Of course, this means that you really have to include code reviews in your process, actually attempt to break the code when creating test suites, and run those automated tests regularly. It also means using language features to raise your productivity, and actually logging information in an accessible and usable fashion.
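
As a sketch of what I mean by accessible and usable logging (a hypothetical OrderService of my own invention, using java.util.logging), record the inputs when an operation starts, and keep the business context together with the stack trace when it fails:

    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class OrderService {
        private static final Logger log =
                Logger.getLogger(OrderService.class.getName());

        public void placeOrder(String customerId, String itemId,
                               int quantity) {
            // Record the inputs up front, so a failure later can be
            // reconstructed from the log alone.
            log.fine("placeOrder: customer=" + customerId
                    + " item=" + itemId + " quantity=" + quantity);
            try {
                if (quantity <= 0) {
                    throw new IllegalArgumentException(
                            "quantity must be positive: " + quantity);
                }
                // ... the real ordering work would go here ...
            } catch (RuntimeException e) {
                // Keep the stack trace and the context together.
                log.log(Level.SEVERE,
                        "placeOrder failed for customer=" + customerId, e);
                throw e;
            }
        }
    }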

At present, you need people to ensure that these things happen, and this comes back to the broken window theory. If a developer who is used to strict coding standards is then exposed to a group with no coding standard, chances are that his own code will get slightly sloppier, simply because there is no authority to curb it. Such examples are overabundant in business, as discussed in books such as Database Systems Management and Design.

There is no question, however, that such practices are time consuming, so it would be great to automate them and take some of the burden off the people involved in a project. But as Peter says, we are not there yet, and I think we are far off. Modern IDEs have been improving significantly over the years; one move in this direction is Borland's integration of its Caliber requirements management application into its IDEs. While this does seem like it would allow tracking software features from inception to delivery, I suspect such products will suffer from how they are used and from the difficulty of maintaining that ever-important shared vision. Specifically, if the tools are not properly integrated, I can see developers using them in vastly different ways, and while that flexibility is good for some things, it could harm the usability of feature tracking. Such tools will also still require human intervention for many years to come, as they cannot really decide on their own whether a feature is complete, let alone whether the implementation accurately reflects the desires of the user.

Until we are presented with software that can think for itself, there is no way to get rid of people completely; but by using unit tests, language features, and logging, coupled with code reviews, quality software can be not only developed but also maintained.

November 10, 2004

SuSE Linux 9.2 Upgrade...

My SuSE 9.2 upgrade came in today, and my first comment is that Novell is very prominent in the packaging. This is not a big surprise, considering all the calls and e-mails I exchanged with them about purchasing the upgrade in Canada. They announced towards the middle of October that you could pre-order SuSE; in the UK and Germany you could purchase both the Upgrade and the Full Version, but in the US and Canada, only the Full Version. It was only a day or so after the official release that I was finally able to order it, and then it was back-ordered. Arrgh... The two or so other times I dealt with SuSE, it was not like this.

On the other hand, this is the first time the DVD has actually worked in my system. I am not sure whether the problem was my system or the previous SuSE distributions, but the latest DVD is labeled and it works, and I was happy not to be swapping CDs around. The installer was much more refined than last time, and very smooth. Towards the end, however, I guess I answered the question about the firewall incorrectly, because when my box came back up the network was not operational, and since my machines depend on my network, the machine was painfully slow.

Solving this actually took longer than I expected. I tried tweaking /etc/sysconfig/SuSEfirewall2, but the boot still got stuck on the NFS mounts. I then disabled the NFS drives in the fstab and was finally able to get into X, but the rest of the system was still slow, so I finally left X via CTRL-ALT-F1 and logged into a normal shell. There I was able to go into yast and disable the firewall.

Once that was done, I rebooted and was able to log in for the first time. The new KDE 3.3 is nice, but I quickly noticed that my system was not normal: in previous versions of SuSE I used the Eterm terminal, but it is no longer provided by SuSE, and the downloadable RPM fails miserably because of missing dependencies that are no longer available. This was disappointing, and I am now trying to get used to konsole.

The worst part is that the package seems to have been removed without any notification; there were some conflicts reported during the upgrade, but nothing regarding Eterm. Some of the conflicts I did receive had to do with the upgrade to Apache Tomcat 5.0, where the installed 4.x packages had an upgrade available, but the installer decided to install 5.0 instead.

Also, in my last update I wondered about the status of XFree86 in 9.1. Well, it seems that in 9.2 they have made the move to X.org, and so far the transition has seemed seamless.

Overall, there were a few bumps in the road, but it seems nice so far. And at least I still have pine (which is more than I can say for Fedora Core).

November 9, 2004

The Default CSS File...

When working with CSS and HTML, it is easy to take for granted the default interpretations of tags across the various browser vendors. Aside from the fact that some browsers are more standards-compliant than others, I would have thought that the default CSS file was a standard. Of course, it is not. While most vendors interpret most tags similarly, some tags are rendered significantly differently from browser to browser.

Debunking the myth of style defaults by Michael Meadhra (mentioned here) talks about this issue. Some web browsers keep their default CSS files hidden away (such as Internet Explorer), while others make them easy to change (such as Firefox, whose version 1.0 was released today). The article discusses how you can completely remove this file from Firefox and see how pages are rendered without it; this may not be an option for everyone, but it certainly shows how powerful CSS really is.

The article also discusses a few approaches, drawn from several different sources, to ensuring that your sites look consistent no matter which browser is used.

This is definitely a page to look through, if only to see what the default CSS file actually does. To paraphrase Effective Enterprise Java, portability must be tested. Even if you stick to standard tags and CSS, each browser can still display them differently, and this article is a good reminder of that. If you want your pages to look great everywhere, you have to test them everywhere.
