eyt*
Eric Y. Theriault: blog

November 8, 2004

Fedora Core 3...

Back when Fedora Core 2 was released, I said that it would not be long before I upgraded to SuSE instead. Well, that never happened, and today, with the release of Fedora Core 3, I am upgrading it again. This time I am also waiting for my shipment of SuSE Linux Pro 9.2, which was released recently as well, but we'll see how that transition goes.

I have to say that I was a little nervous about upgrading this time. There seem to be a lot of little changes in the distribution that the average user could overlook, but nothing like the changes in Fedora Core 2, and SuSE 9.2 is similar in this regard. But this makes the upgrade more attractive, since it minimizes the differences. The part that really concerned me, though, is that for a while the latest and greatest Fedora kernels would not boot on my machine, as reported here. Unfortunately, no one provided any input on this issue, so I was a little concerned as to whether the kernel would boot or not. Of course, I was not prepared for the worst-case scenario :-).

I snagged the ISOs via BitTorrent, burnt them to CDs, and got to work. I started the installer, and it recognized my system. I clicked OK to continue, and then realized that I had not modified my /etc/fstab to disregard my software RAID; because of this, the installer forced me to reboot, and I had to boot back into Fedora, update the file, and restart the process all over again.

Once the process got going, it went fairly smoothly. The upgrade took around 90 minutes; again, the estimate started at 60 minutes and kept growing by 5 minutes at a time for about 15 minutes. The only user interaction, however, was changing CDs, and the first CD was in the drive for a very long time on my system.

After the process completed, I rebooted. When I got to the Grub screen, I was expecting to see both my old kernel and the new kernel; I only saw the new kernel, and got a bit nervous, but I pressed Enter and voilà! It booted! I was pretty happy at this point. Almost everything worked out of the box, except for the services that depended on my software RAID, which I'll get to in a second. The core operating system, however, does not seem to have been updated very much, and still seems very texty compared to SuSE.

The software RAID, however, seems to behave differently than it did in Fedora Core 2. One of the first things I did with the machine was to uncomment my software RAID mount point in /etc/fstab, but when I went to mount it, mount complained that it couldn't and that my superblock was corrupted. My heart started to palpitate. I thought that perhaps a module was not loaded, so I looked around quickly, and then I remembered raidstart; I started up my RAID via raidstart /dev/md0, and after this, I was able to mount my filesystem.
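
In other words, the recovery amounted to something like the following (the mount point is from my setup, so adjust accordingly):

raidstart /dev/md0    # assemble the array described in /etc/raidtab
mount /raid           # with the array running, the superblock is readable again

On setups that have moved from raidtab to mdadm, I would expect mdadm --assemble /dev/md0 to do the equivalent job, but raidstart is what I had at hand.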

But why wasn't this done for me automatically? Well, according to the Linux RAID HOWTO, this is only done if the /etc/fstab file contains a reference to an md device. To test it, I rebooted, and everything came up nicely!
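
For example, with the mount point uncommented, the relevant /etc/fstab line on my machine looks something like this (the filesystem type and options are illustrative):

/dev/md0    /raid    ext3    defaults    1 2

Because an md device now appears in the file, the startup scripts start the array automatically before mounting it, which is the behaviour the HOWTO describes.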

Many updates were released today, and unfortunately the mirrors are not completely up to date yet, so snagging the updates may take a bit longer. But overall, the machine behaves as it did before. I am sure that there will be a few quirks as I play with it some more, but so far so good.

NFS: getfh failed...

Over the weekend, one of the Linux servers that I partially administer was rebooted after its name was changed. This server exports a directory, /data, over NFS, and I got a call that one of the NFS clients had a problem using this mount point. Upon further investigation, it was actually all of the clients, and the IRIX error that I would receive was a generic NFS permission-denied error, like:

Permission denied
NFS version 3 mount failed, trying NFS version 2
mount: access denied for server:/data
mount: giving up on:
  /data

On the server side, the error was:

rpc.mountd: authenticated mount request from client:761 for /data (/data)
rpc.mountd: getfh failed: Operation not permitted

So this looked like a great thing to Google, but most of the results I found concerned specific versions of NFS servers and did not deal with my real problem, except for this, which suggested looking at the host names in the /etc/exports file. When I did this, I noticed that one host was invalid, but it still did not work after both restarting NFS and rebooting.

At this point I decided to concentrate my efforts on a single host only, and I removed everything else from the exports file. When I did this, I decided to force a reload of the file via exportfs -va, and this is when I noticed:

exporting client:/data to kernel
client:/data: Invalid argument

That seemed strange. Upon re-Googling, my problem soon became obvious. The NFS directory /data had a sub-directory, external, which was itself an NFS-mounted disk (it was done like this to abstract the specifics of that server from all of its clients); however, this was the first time that this server had been rebooted since it was set up, and the external directory was not mounted automatically. Since you can only export a directory hierarchy once, the later export of external prevented /data itself from being exported. It would have been nice to have an error message that said this, but whatever.
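
To make the failure mode concrete, the exports file was shaped roughly like this (host names are illustrative):

/data           client(rw,sync)
/data/external  client(rw,sync)

With external sitting unmounted after the reboot, exporting /data produced the Invalid argument error above; once the external disk was mounted again at /data/external (and set to mount at boot), exportfs -va exported everything cleanly.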

November 7, 2004

Some Environmental Antipatterns...

The October issue of ACM's Queue has an interesting piece by Phillip Laplante, which discusses some antipatterns in our industry. These antipatterns and more are going to be published in the book Antipatterns for IT, co-authored by Colin Neill.

The first one discussed is called “The Burning Bag of Dung” and relates to the prank of lighting a bag of dog droppings on a neighbour's porch, ringing the doorbell, and fleeing. We have all been left with some such bag at some point in our lives, as Phillip enumerates, such as poor software development practices, and addressing such problems is usually not easy. He does present an example and some ways to resolve the issue, but these are rather specific to the example.

The next antipattern described is Founderitis, which describes an organization where the founder has trouble letting go when the organization grows bigger than one person can handle, since the talents required to start an organization are different from those required to run a successful one. The examples he presents are a few organizations with more than $100 million in revenue where the founder is still an active developer and approves all changes to the code base. He states that most venture capitalists will recognize this and will remove the founder from the organization, although Phillip does mention other solutions. This is similar to something I was told about one organization, where the new management inquired who the people were that the company could not survive without, and when they received this list, they proceeded to fire all of those people. While it seemed like strange logic at the time, in retrospect the company was able to succeed partially because of this.

Another very interesting one is entitled “Shoeless Children,” which is based on the fable that the shoemaker's children have no shoes, since the shoemaker is too busy servicing his clients. The analogy is that some organizations are too busy creating new and better tools for their customers, using the latest technology, yet they do not create tools to help themselves, and their own infrastructure is nowhere near their clients'. This is often rooted in penny-pinching, but the solution is not always as simple as expanding the budget, and Phillip provides a little more description in the article of the situations around it.

The final one is called “Worshiping the Golden Calf,” and it is about hoping that a new technology or methodology will solve all the problems, even though this new technology or methodology is poorly understood. This situation usually occurs where there is a poor shared vision or poor leadership. The “golden calf” could also be an unknown, such as the “hockey stick” revenue projection, where after a number of poor quarters there are a few good quarters, and people start to believe that the ship is turning around and will continue in this pattern. Solving this antipattern is not easy, though.

I really enjoyed the article, and I look forward to seeing their book, as I am sure that it will open your eyes to situations around you. And knowing that there is a potential problem is the first step towards fixing it.

November 3, 2004

Variable Block Support for Linux's dvhtool...

Unlike the IRIX dvhtool, the Linux dvhtool does not support block sizes other than 512 bytes. There are cases where a block size larger than 512 is better for performance, and using the dvhtool from Debian on such a device can corrupt data in your volume header. I have submitted a patch to Debian to support variable block factors. According to the package maintainer, this implementation is intended for the 1.0.1-5 release, but until it is formally released, the patch is available here; it is based on the 1.0.1-4 unstable release.

The patch uses two mechanisms to communicate the block factor to the tool. When no option is specified, dvhtool detects the sector size using a mechanism similar to blockdev --getss device, falling back to a 512-byte block size. If you need to force another block size, such as when you have a device whose block size cannot be detected, the --blocksize (or -b) option can be used to specify it. The rest of the patch essentially passes the block size around the library, allowing for maximum flexibility.
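
As a quick sketch of the intended usage (the device name is just an example, and I am using the volume-directory listing only for illustration; --blocksize is the new part):

blockdev --getss /dev/sdc
    # what the kernel reports as the device's sector size
dvhtool -d /dev/sdc --print-volume-directory
    # block size auto-detected, defaulting to 512
dvhtool -b 4096 -d /dev/sdc --print-volume-directory
    # block size forced to 4096, for devices that cannot be probed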

Should you have any questions or problems regarding the patch, please let me know.

November 2, 2004

J#'s raison d'être...

Jim Fawcette has blogged a piece stating that Sun's lawsuit undermined Java, arguing that the suit against Microsoft discouraged Microsoft from using Java as its primary development language, but I do not buy this. By this logic, I can only guess that one of the UML vendors also sued Microsoft, and that this is what forced Microsoft to develop their new modeling language?

I am not arguing that the Sun lawsuit was a good thing for Java, but Microsoft has a long history of developing their own proprietary tools, and there are always reasons behind their decisions. Specifically, as noted here, J# came late in the announcements, and I remember reading at one point that the reason was that Microsoft was originally not going to provide a Java implementation at all, but later realized that Java had been part of U.S. college and university curricula since as early as late 1997; by not providing a Java implementation, new developers would not be exposed to Microsoft's technologies, and this is why J# was added.

In the case of their proprietary modeling language, we received a demo of Visual Studio 2005 at OOPSLA during Richard Rashid's keynote, entitled “The Future Of Programming.” With such a title, I, like many other attendees, was expecting a discussion of a future more distant than next year. Instead he discussed many of the technologies that Microsoft is presently developing and using in-house (such as PREfix, which should come with 2005) and some observations on the future (such as that we are nearly at human-scale storage, where we can keep everything on a hard disk without ever having to delete anything), but a good portion of the keynote was a demo of Visual Studio, including the model-zooming functionality. Following the keynote, I asked the demo's presenter two questions: first about cross-platform tools, and second about their decision against UML. Regarding cross-platform, he indicated very vaguely that there is something in the plan that he could not comment on right now (maybe, or rather hopefully, something with Mono?), and regarding the UML question, his answer was simply that not everything is object-oriented. On the latter point, I find Keith Short's answer at least closer to the right direction, but I am not sure about the approach. I think I will just have to give it a run to see what they have really done.

Jim makes some other comments about what might have been, but without any references, it is hard to know whether these are anything more than opinions. I am not as certain as Jim that things would be any different than they are now.
