Sometimes good things are working away under the bonnet and you don't even know about them. The problem is that when good things have been implemented without your knowledge, you can miss out on all the goodness!
For example, Avahi provides service discovery on your local network. With Avahi you can add your computer to a local network and instantly find other machines on it, without the pain of configuring a server to run services such as DNS. For zero effort you get a top level domain (TLD) of .local - so if your hostname is mypc, it's resolvable as mypc.local
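For example, with Avahi running on two machines, one can reach the other by name with no extra configuration (hostname hypothetical):
ping mypc.local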
Some of the details of how this is implemented in Ubuntu are described in the ZeroConfigNetworking wiki page, if you want to see the nuts and bolts behind it.
If you want to discover the machines on your network, use:
avahi-browse -a -t
and you get output something like the following:
+ wlan0 IPv4 mynetbook [00:24:1b:7a:22:c2] Workstation local
+ wlan0 IPv4 delllaptop [00:1f:1c:cd:82:41] Workstation local
You can use avahi-resolve-host-name to get the IP address of a machine in the .local domain using:
avahi-resolve-host-name -n hostname
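For example, to resolve one of the machines discovered above (the address shown is purely illustrative):
avahi-resolve-host-name -n mynetbook.local
mynetbook.local	192.168.1.64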
Friday, 31 July 2009
Thursday, 30 July 2009
Twittering with Gwibber
I've just jumped into the micro blogging fad and signed up with Twitter and Identi.ca. I was told I should check out Gwibber - a cool micro blogging feed tool. The beauty of this tool is that it can pull in multiple different feeds and also push messages out to multiple blogging accounts. I suppose it's a kind of microblogging multiplexer GUI tool.
It can handle the following feeds: BrightKite, Digg, Facebook, Flickr, FriendFeed, Google Reader, Identi.ca, Jaiku, Laconi.ca, Pidgin, Ping.fm, Qaiku, RSS/Atom and Twitter.
So, using Gwibber I can follow Twitterings from friends and colleagues, RSS newsfeeds from news sites such as Slashdot, and updates on blogs too. Cool!
To install, just use:
sudo apt-get install gwibber
To set up, just select Accounts->Add and then pop in the details of the feed you want to follow or the microblogging accounts you have. It's as easy as that. One can also select colours for each feed to help identify where each message is coming from. Micro blogging messages can be replied to (there is a little gears icon at the bottom right of each message).
Finally, at the bottom of Gwibber is a text entry field where one enters micro blogging messages that get fed out to all your microblogging accounts. Mine is configured to push out to Twitter and Identi.ca.
Why not check it out? Even if you don't microblog it's still a very useful RSS and blog following tool.
Wednesday, 29 July 2009
Faster bzip2 compression
Compressing files with bzip2 can take a while, especially at the highest compression setting, even on a fast processor. Nowadays most machines have more than one CPU, so why not use all the available cycles for compression?
This is where pbzip2 comes to the rescue. It is a re-implementation of bzip2, but uses pthreads to parallelize the compression on SMP machines. It promises "near linear speedup on SMP machines", so I thought I'd check this out.
To install pbzip2, use:
sudo apt-get install pbzip2
I started with a fairly typical 187MB tar archive of an Evolution mail directory - I wanted to see how bzip2 and pbzip2 compare with -9 (best compression).
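For reference, the timings below were gathered along these lines (the archive name is hypothetical; -k keeps the input file around so the test can be re-run):
time bzip2 -9 -k evolution-mail.tar
time pbzip2 -9 -k evolution-mail.tar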
bzip2: 53.8 seconds, 117339200 bytes.
pbzip2: 31.4 seconds, 117419551 bytes.
(The CPU was an Intel(R) Core(TM)2 Duo CPU T5800 @ 2.00GHz)
So pbzip2 was not quite twice as fast, and the resulting file was just 0.068% larger. There is a little bit of overhead going on, but all in all it's a very good result.
I repeated the test with the -1 fastest compression setting:
bzip2: 43.8 seconds, 120885523 bytes
pbzip2: 22.9 seconds, 120915716 bytes
So with the lower compression pbzip2 is almost twice as fast and the file is only 0.025% larger - almost no difference in size.
Next, I tested pbzip2 on a Quad Xeon server (Intel(R) Xeon(R) CPU X5350 @ 2.66GHz) on a copy of a 644MB tar'd Jaunty kernel git repository:
bzip2: 1 minute 58.4 seconds, 376813785 bytes
pbzip2: 38.2 seconds, 377014857 bytes
So, not quite a quarter of the time with 4 CPUs - there is some scheduling overhead going on.
More in-depth benchmarks can be found here.
There we have it; pbzip2 works very well on SMP machines. All we now need is a parallel version of bunzip2...
References: http://compression.ca/pbzip2
Tuesday, 28 July 2009
I/O read/writes on "idle" Windows 7 and Karmic systems
How "idle" is an idle Operating System? More specifically, how much I/O is going on when a system is left for about 3 hours doing nothing?
In my test, I rigged up a modified version of QEMU to dump out all I/O block read/write activity. I installed two operating systems for comparison, Windows 7 RC and Ubuntu Karmic Alpha 3, left each to sit idle for about 3 hours, and afterwards examined the I/O activity. I then plotted the cumulative block reads/writes for both operating systems, so one can see the total block I/O over the 3 hours.
If you peer carefully right at the bottom of the graph below, you can see that Karmic flushed some blocks out to disk, and then essentially that's about it - no real I/O activity. As for Windows 7, it seems to delight in rummaging around, doing a load of reads and even more writes:
It's quite shocking how much Windows 7 wants to keep the system busy, and this was a clean install with NO virus checker running in the background. Naively, I suspect Windows 7 is defragging its filesystem (something you don't need to do on Linux), but I didn't want to poke around and interact with it while running the test. If it is sneakily optimising the system behind one's back while idle, I hope it does not do it too often.
So, any Windows 7 user out there with Solid State Drives (SSD) needs to be aware that an idle Windows 7 will wear out their SSDs quicker than Ubuntu Karmic.
vimacs - a vim plug-in that emulates emacs
Now, I don't want to start a vi vs Emacs editor holy war, so please NO flaming on this blog article! I don't have any personal text editor preference, so long as it's vi :-)
André Pang has written a great article about vimacs, a vim plug-in that emulates Emacs. It gives vim the Emacs key bindings and modeless editing features while keeping vim's modal editing style. (Not sure if this is a thing of beauty or not!) Emacs mode only operates in vim's insert mode, and you get nearly all the key binding goodness of Emacs 21.
This plugin can be downloaded from here.
Kudos to Karl Wood for spotting vimacs and passing the details on to me.
Monday, 27 July 2009
curlftpfs - mounting ftp sites using FUSE
curlftpfs is a neat package that allows one to mount an FTP host onto a local filesystem. It uses the libcurl package for all the transfer operations and FUSE (filesystem in userspace) for the filesystem glue.
It's also fairly straightforward to use. To install on a Ubuntu system, use:
sudo apt-get install curlftpfs
Now let's mount the UKC FTP mirror service onto a mountpoint: /mnt/ukcmirror
sudo mkdir /mnt/ukcmirror
sudo curlftpfs -o allow_other ftp://anonymous:anon@ftp.ukc.mirrorservice.org /mnt/ukcmirror
Note that the allow_other flag allows non-root users to access the mount. To unmount, use:
sudo umount /mnt/ukcmirror
However, this will expose your username and password pair, as it shows up in ps aux, so we need to hide the password using the following method:
Edit the file /root/.netrc as root and add in one's username/password details:
machine ftp.ukc.mirrorservice.org
login anonymous
password anonymous
..and change the permissions on this file:
sudo chmod o-rw /root/.netrc
Now you can mount it using:
sudo curlftpfs -o allow_other ftp.ukc.mirrorservice.org /mnt/ukcmirror
Alternatively, you may like to mount this by putting the details into /etc/fstab. Add the following line to /etc/fstab:
curlftpfs#ftp.ukc.mirrorservice.org /mnt/ukcmirror fuse allow_other,rw,user,noauto 0 0
And now you can mount it using:
sudo mount /mnt/ukcmirror
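A quick sanity check that the mount has taken is to ask df, which should show a curlftpfs filesystem on the mountpoint:
df -h /mnt/ukcmirror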
This is also useful for some awkward NAS boxes, such as the Freecom 500 which seems to have a poor SMB implementation, so one has to resort to using FTP access.
Sunday, 26 July 2009
Oh Brother!
Last month I blogged about my Brother 2170W laser printer. So far it has been a totally reliable monochrome laser printer. We just keep on feeding A4 paper and it keeps on printing. It works really well with Ubuntu Intrepid, Jaunty and Karmic Alpha 3.
However, it has one downside: it's really slow at handling fairly large graphic images. Today we wanted to print out maps of Ashdown Forest from a PDF (~6MB of data), and after 20 minutes we gave up waiting and killed the print job. Initially the data zipped over to the printer, then it slowed down to ~30K/s and chugged on and on and on... Zzzzzz.
My quick workaround was to open the PDF in GIMP and print it out as a smaller 1.5MB JPG, which printed within 3 minutes. Even so, that's fairly slow. I suspect the 32MB buffer is a teeny bit too small for intensive graphic prints.
However, I shouldn't gripe too much. I bought this printer as a general text printer and for this it does very well. I had read in reviews that it was slow on intensive graphic printing, and this is indeed true, but for 95% of my printing it's fast enough for my requirements.
Rambling around in Winnie the Pooh Country
Sunday afternoon I took the family down to Ashdown Forest, West Sussex (home to Winnie the Pooh). It's only about 10 miles from our home in Crawley - basically almost on our doorstep. Instead of visiting the Pooh Sticks bridge, which is quite a bit further down the road in Hartfield, we stopped off at Hindleap Warren for a walk and a typically English cup of tea from a Thermos flask. The walk is detailed in this PDF from AshdownForest.org.
Like a typical English Summer it got a bit cloudy and there were spits and spots of rain, but that did not deter us from the short walk. En route we discovered a lone pheasant and a bunch of Sunday ramblers (fairly typical of the kind of wildlife in the forest!)
Below, looking South towards the South Downs (on the horizon):
Not surprisingly, it's kind of like the illustrations by E. H. Shepard in the A. A. Milne books...
Hopefully sometime this Summer we may make it to Pooh Sticks bridge and have a go at Pooh Sticks. If we do make it, I will try and get some photos and attach them to this blog. And if you are wondering, we did not see Tigger, Eeyore or Winnie the Pooh.
Saturday, 25 July 2009
Speedometer - measuring network interface load
On my quest to find the more unusual or interesting network monitoring tools, I stumbled upon speedometer, a useful text-based network throughput meter - it provides a scrolling load graph and annotates throughput peaks:
To install, use:
sudo apt-get install speedometer
and run by specifying the rx and/or tx network interface:
speedometer -rx wlan0 -tx wlan0
One can also use it to monitor the rate of a running download. For example, if I'm downloading a Ubuntu ISO using Firefox, I can monitor the download speed using:
speedometer -f Desktop/ubuntu-9.04-desktop-i386.iso.part
The ability to measure the rate that a file is being written to means one can use speedometer to measure the write speed on a filesystem using:
dd if=/dev/zero of=test.dat bs=1M count=4096 &
speedometer -f test.dat
Specifying multiple network interfaces will draw the network activity in graphs stacked one on top of another. However, one can stack the graphs in columns by using the -c flag:
speedometer -rx wlan0 -c -rx wlan0
If you don't like the fancy graphs, a simple plain-text output can be selected using the -p flag, as illustrated below:
speedometer -p -tx wlan0
[.]34.1 KB/s [c]34.1 KB/s [A]34.1 KB/s (....::::++++| )
[.]44.7 KB/s [c]37.6 KB/s [A]41.2 KB/s (....::::++++| )
[.]33.8 KB/s [c]38.3 KB/s [A]38.3 KB/s (....::::++++| )
[.]47.2 KB/s [c]39.3 KB/s [A]40.8 KB/s (....::::++++| )
[.]31.1 KB/s [c]39.2 KB/s [A]38.6 KB/s (....::::++++| )
[.]35.0 KB/s [c]38.0 KB/s [A]38.0 KB/s (....::::++++| )
[.]46.7 KB/s [c]38.2 KB/s [A]39.3 KB/s (....::::++++| )
[.]36.9 KB/s [c]38.9 KB/s [A]39.0 KB/s (....::::++++| )
One can also specify the update interval in seconds using the -i flag, e.g. for half second updates use:
speedometer -rx wlan0 -i 0.5
All in all, not bad for a text based console utility.
Postscript: a while back I wrote a network throughput measuring script which can be downloaded from my scripts git repository. It's not as pretty as speedometer, but it's yet another tool that you may find useful. To clone the repository, read the instructions here.
Friday, 24 July 2009
Dropping the page and/or inode Cache on Linux
Sometimes I want to run some I/O benchmarks but I don't want cached data from previous test runs to interfere with subsequent tests. Fortunately Linux provides a mechanism to drop cached data as follows (run the commands as root):
Freeing the page cache:
# echo 1 > /proc/sys/vm/drop_caches
Freeing dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
Freeing the page cache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
The notes I've read on this seem to indicate that one should always run sync before doing this; quote "this is a non-destructive operation and dirty objects are not freeable, so the user should run sync(8) first." So now you know!
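So, putting that together, a typical sequence before each benchmark run (as root) is:
sync
echo 3 > /proc/sys/vm/drop_caches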
Ubuntu Karmic Alpha 3 ready for testing!
Ubuntu Karmic Koala Alpha 3 is ready for downloading and testing.
This is an Alpha release. Do not install it on production machines. The final version will be released on 29th October 2009.
Features include:
GNOME 2.27.4 development version
Empathy as the default messaging client
gdm 2.27.4 login manager
Ubuntu One file sharing service
Linux 2.6.31 kernel
New UXA Intel Video Driver acceleration method
Kernel Mode Setting (KMS)
gcc 4.4
ext4 filesystem by default
GRUB2 by default
ISOs and torrents available at:
http://cdimage.ubuntu.com/releases/karmic/alpha-3/ Ubuntu Desktop, Server, Netbook Remix
http://cdimage.ubuntu.com/kubuntu/releases/karmic/alpha-3/ Kubuntu Desktop and Netbook
http://cdimage.ubuntu.com/xubuntu/releases/karmic/alpha-3/ Xubuntu
http://cdimage.ubuntu.com/ubuntustudio/releases/karmic/alpha-3/ Ubuntu Studio
Check out the Karmic Alpha 3 webpage for the full details.
Thursday, 23 July 2009
LED console driver
Some of this afternoon I've been hacking away writing a *very* simple console driver that pulses the keyboard LED using asynchronous pulses identical to those of RS232 serial. The idea is to hook up a photo diode that converts the light signal into asynchronous pulses fed into a USB serial dongle connected to a debugging host PC. Then I should be able to do printk() over the keyboard LED and debug the kernel when I don't have the luxury of a video console. :-)
OK, so the speed may be very slow, ~300 baud, but it may just work. If it does, it will get me out of all sorts of painful debugging holes where I don't have the luxury of being able to debug using POST codes by writing to port 0x80.
I will let you know if it works once we get some kit to read the LED reliably and at speed...
Fabrice Bellard, Software Genius...(?)
There are a select few people in the world who just keep on producing amazingly useful and cool open source software, over and over again. Take Fabrice Bellard for example; he has contributed so much over the past few years, and his work is most incredible. Here are a few of the projects he's created and developed:
QEMU - A machine emulator/virtualiser.
FFMPEG - A cross platform solution to record, convert and stream audio and video.
TCC - A small and fast, fully formed ISO C99 C compiler. It can also be used to treat C source as a script.
TCCBOOT - A bootloader able to compile and boot the Linux kernel directly from the source code!
TinyGL - Subset of OpenGL, small + fast!
Pi computation - the fastest formula to find the Nth binary digit of Pi
QEmacs - The Quick Emacs clone
A Web Scientific Calculator which includes plotting functions
Harissa, a Java Virtual Machine
stat-1.0, a PPM compressor
..to name but a few... Anyhow, if you want to see more of his amazing work, check out Fabrice's website.
It's really inspiring to see this kind of technical excellence and productivity. One wonders what Fabrice's next project will be...
Wednesday, 22 July 2009
rsyncable gzip
Rsync is an incredibly powerful tool that allows one to do fast incremental file transfer. It's a very smart tool, but sometimes it can be helped to do even better if you know how to help it.
One example of how to help rsync is with gzip compression. Gzip provides the --rsyncable flag, which makes it easier for rsync to spot local changes in the compressed output and just send the delta. According to the manual, the downside is that it makes the compressed file about 1% larger, but there should be a big rsync win.
So how much better is it? To test this, I tar'd up some ARM chumby sources and objects that I've been playing around with; this came to 1.2GB, and compressed with gzip --fast it came to 510MB.
Copying it over wifi to my server took ~9 mins 50 secs. I then removed one object file about 50% of the way through the archive, tar'd up the files again, gzip'd again and rsync'd the result; this took ~9 mins 12 secs.
I then repeated the exercise, first removing the original file on the server to start with a clean rsync target. This time I gzip'd with --fast --rsyncable, and the rsync took 9 mins 56 seconds, ~1% longer than before because the gzip file was 1% larger. Next, I again removed one object file 50% of the way through the archive, tar'd and gzip'd again with the --fast --rsyncable option, and this time the rsync took just 15 seconds - a speed up of nearly 37 times!
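For the record, each iteration of the test boiled down to something like this (the directory and server names are hypothetical):
tar cf chumby.tar chumby/
gzip --fast --rsyncable chumby.tar
rsync -av chumby.tar.gz myserver:backups/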
Conclusion:
- --rsyncable does add a little overhead to the gzip image size; in my test it was indeed ~1% extra.
- --rsyncable really does help rsync detect sync points, allowing it to copy over just a small delta. Very efficient!
Tuesday, 21 July 2009
Launchpad is now open source!
Launchpad is now open source. Quoting the launchpad blog:
"We released it today under the GNU Affero General Public license, version 3. Note that although we had previously announced that we’d be holding back two components (codehosting and soyuz), we changed our minds: they are included — all the code is open.
Big congratulations (and thanks) to the Canonical Launchpad team, who worked overtime to make this happen sooner rather than later, and to Mark Shuttleworth, whose decision it was to open source Launchpad in the first place."
Looks like this fixes bug LP#393596 :-)
"We released it today under the GNU Affero General Public license, version 3. Note that although we had previously announced that we’d be holding back two components (codehosting and soyuz), we changed our minds: they are included — all the code is open.
Big congratulations (and thanks) to the Canonical Launchpad team, who worked overtime to make this happen sooner rather than later, and to Mark Shuttleworth, whose decision it was to open source Launchpad in the first place."
Looks like this fixes bug LP#393596 :-)
Regression Ponderings
Today I was digging around an audio bug where an earlier fix for one model of hardware was breaking things for a slightly different model.
The fix that had introduced the bug had modified generic code for a subset of hardware and in doing so broke the code for all the other cases. The fix should have been a quirk for a specific subset of hardware. This made me wonder how many bugs are like this - somebody fixes a bug which then causes regressions for almost identical classes of hardware. It would be interesting to know how many regressions occur like this.
The problem is that when doing low-level tinkering, one can only really test one's own hardware and produce a fix for that; making sure it's not causing regressions is hard, as one usually does not have all the other hardware to test against. Also, for this particular bug it's not totally clear in hindsight that the original fix was flawed; eye-balling the code may have picked up the bug earlier - but this can be incredibly difficult to do and get right 100% of the time.
Digging down and identifying *where* the code was wrong was the hard part.
As it was, diff'ing the buggy code with upstream quickly showed me a fix, and backporting it was a no-brainer.
Monday, 20 July 2009
Measuring Audio Quality with Audacity
On some netbooks or laptops you may think the audio quality is good enough for your everyday sound experience; however, some audiophiles can pick up audio problems that are often deemed very subjective. For example, younger people can hear sounds up to ~20-21kHz, and this can drop to ~15kHz or lower as one gets older. Also, we can become accustomed to listening to music that has been psycho-acoustically modified, such as with MP3 compression, which can reduce or remove subtle harmonics and overtones that most people will just not notice.
But what about the actual hardware on a laptop or netbook? Surely this cannot mess with the sound that much. Well, you may be surprised. To remove the subjectivity from my experiments, I set up some test cases that could be scrutinised by more than the human ear.
Test 1: 440Hz Tone.
I used Audacity to generate a pure 440Hz sine wave tone; 440Hz is the A above middle C (C4) on an equally tempered musical scale. Then I played this tone at varying volume settings (adjusted using alsamixer) out through the headphone socket, down a low impedance cable, into a 44kHz 16 bit sampler. I then took the digitised signal and again used Audacity to plot the spectrum (select Analyze->Plot Spectrum).
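As an aside, if you want to generate the same 440Hz test tone without firing up a GUI, sox can do it from the command line - a sketch, assuming the sox package is installed (30 seconds, 44.1kHz, 16 bit):
sox -n -r 44100 -b 16 tone440.wav synth 30 sine 440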
At low volume settings, I was able to see just the 440Hz peak, but as I increased the volume settings, I was able to see lots of additional harmonics appear. This explained the distortion I was hearing when the volume was cranked up fairly high.
Below are two plots, the first with a low gain, the second with gain fully maxed out:
Test 2, Wider Spectrum Tests
Well, Test 1 is fine for a 440Hz tone, but we actually use a much wider spread of frequencies when listening to music! My second set of tests was a repeat of Test 1 using two different sound samples: the first a pure white noise sample, and the second a sine wave sweep from 10Hz up to 20kHz over 30 seconds. I then re-sampled and analysed the spectrum using Audacity. On a perfect system one would expect to see an even spread across the spectrum but, again, I saw some drop off from 15kHz upwards. This was another reason for the weird artifacts I was hearing.
Below is the white noise test:
Test 3, checking output gain.
This was a bit more hacky. For this test I played the original 440Hz tone at varying volume settings, sampled it, and measured the observable sample amplitude using Audacity. I was expecting to see some kind of linearity, but I soon discovered I was only getting linearity from the middle to the top volume settings. This seems to imply that the amplifier on my hardware was not biased correctly. Any hardware experts like to comment? :-)
Anyhow, what I learnt was that one can check out the audio characteristics of one's hardware using some very simple kit: a good quality audio connection from the headphone socket to a fairly inexpensive digitiser, plus Audacity. I admit this is fairly hacky, as I've not taken into consideration distortion in the cable and the digitiser, but it does allow me to see that my irritation at the distortion is legitimate and not to be blamed on old age :-)
Sunday, 19 July 2009
LatencyTop - Measuring Latency in Linux
I'm spending some of my free time fiddling around with some of the features of the 2.6.31 kernel in Ubuntu Karmic. On my quest to check out system latencies I installed LatencyTop and ran a few quick tests.
To install latencytop, use:
sudo apt-get install latencytop
And to run use:
sudo latencytop
LatencyTop typically comes in useful for problems where the kernel does not respond quickly enough to real time requirements, for example skipping audio, or when a server is fully loaded but one still has plenty of free CPU cycles left over. I'm very interested in the latter case, as my Quad Core Xeon server sees a CPU load of 85% on all 4 cores when building a kernel, and I want to figure out why I'm not able to fully max the system out even when the build output is going to a memory based filesystem (reducing the HDD I/O bottleneck to zero).
Basically, LatencyTop allows one to look at cases where applications want to run but are blocked because a resource is not available so the kernel blocks that process. The beauty of LatencyTop is that it shows what is happening in the kernel and with user space processes, allowing one to get an idea of what's wrong without too much difficulty. Once the reason for the latency is identified then it's a case of fixing code, which is where the fun begins :-)
For more information on LatencyTop, check out the project home page and the announcement page too.
Below is an example of the tool running on my Lenovo laptop:
..as one can see, the process hddtemp has suffered a 115.4 millisecond latency doing a SCSI disk ioctl(). While this is quite a large delay, it's not a mission critical application - it's just data being gathered for my Hardware Sensors applet.
Thanks to the Intel Open Source Technology Center for this neat tool. Keep them coming Intel!
Rainy Weekend - Lego Time!
Wet weather over the weekend usually means the kids are trapped indoors and need some form of entertaining. I use this as an excuse to get the box of Lego out and show my 6 year old son David how much fun engineering and construction can be.
Here is a photo of a crane we built using a load of 2x8 and 2x6 beams and 2x2 and 1x1 stud bricks for a tower and a massive block of yellow bricks to balance out the jib. While my son did enjoy helping to get something built that was taller than him, the real fun for him was knocking it over and watching hundreds of bricks fly everywhere! Ho hum.
The Lego City kits look really exciting and promise a lot of building and playing fun, but I still think the kids get more fun when I buy them big buckets of hundreds of bricks and order loads of beams and plates, wheels and window bricks from the Lego website. David can spend hours just building loads of weird and wonderful cars and lorries from his imagination rather than following instructions from a booklet; it's far more creative.
Anyhow, rainy weekends let me get the Lego out and regress back to being a kid again - and also help teach my son the tried and tested techniques of building fun towers and buildings in the process.
Saturday, 18 July 2009
Pulse Audio. Complexity Reigns!
Not sure about you, but I do hear people grumbling about Pulse Audio, generally when they get weird audio sample dropping issues or when things just don't seem to work.
The following diagram from Wikipedia allows one to get a better understanding of the complexity of all that audio plumbing between the kernel and application:
(Click on the above image and then click on the SVG to get a full sized diagram).
Hrmm.. so no wonder it can be hard to figure out weird audio issues... :-)
Meld, a visual diff tool
When dealing with large diffs (several hundred to tens of thousands of changes) I normally turn to Meld, a visual diff tool. It allows one to diff two or three files, and it can cope with a variety of version control systems and diff formats, such as Mercurial, CVS, Subversion, Bazaar-ng, diff2 and diff3.
One neat feature is that Meld allows one to diff entire directories: changed files are highlighted in red, new files in green, and missing files are crossed out. The directory diff is selected via File->New... and then choosing the Directory Comparison tab in the "Choose Files" dialogue box. Below is a Meld comparison between different versions of Mplayer:
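One can also kick off a directory comparison straight from the command line by passing Meld two directory paths (version numbers hypothetical):
meld mplayer-1.0rc1/ mplayer-1.0rc2/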
Although there are quite a few diff tools out there, I prefer to use meld as it is fairly intuitive to use and just does what I need - no more and no less. The downside is that it's not the speediest tool when computing diffs of very large files, but this is an O(N^2) comparison operation, so that is no surprise really.
To install Meld on a Ubuntu system use:
sudo apt-get install meld
Occasionally I use it to compare kernel dmesg logs to see the differences between working kernels and ones with regressions or different behaviour. I normally turn on the required kernel debug options for the driver or subsystem I'm looking at, then boot two different kernel versions and compare the dmesg output. One trick is to strip off the leading time stamps and sort the dmesg log before comparing - that way one can quickly see which messages are new or missing between kernels:
dmesg | cut -c16- | sort > dmesg1.log
and repeat for the second kernel, then compare the logs with meld.
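Putting it together (the second log follows the same pattern):
dmesg | cut -c16- | sort > dmesg2.log
meld dmesg1.log dmesg2.log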
Below is an example of diff'ing two git commit logs from different kernel versions with meld:
Anyone else got any favourite diff'ing tools, such as Kdiff3, Xxdiff, TkDiff or GtkDiff, that they would like to recommend?
Friday, 17 July 2009
CD/DVD Drive capabilities
I found the following little nugget while trying to discover the capabilities of my laptop DVD drive:
$ sudo /lib/udev/cdrom_id --export /dev/sr0
ID_CDROM=1
ID_CDROM_CD_R=1
ID_CDROM_CD_RW=1
ID_CDROM_DVD=1
ID_CDROM_DVD_R=1
ID_CDROM_DVD_RAM=1
ID_CDROM_MRW=1
ID_CDROM_MRW_W=1
OK, so it's a little dirty calling /lib/udev/cdrom_id, but it's a quick way of doing the CDROM_GET_CAPABILITY ioctl() on an optical media device.
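Alternatively, udev's admin tool can query the same device properties - a sketch, assuming a udev release that ships udevadm:
sudo udevadm info --query=property --name=/dev/sr0 | grep ^ID_CDROM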
More Ubuntu Wubi Notes..
Last month I blogged about the way Wubi filesystems are organised via a FUSE-mounted ntfs-3g loopback file system. That blog article has been getting quite a few hits, so I thought some extra Wubi links may be helpful.
1. Wubi is a way to allow Windows users to install (and uninstall) Ubuntu as if it were a Windows application, in a simple, straightforward way that does not modify the Windows partitioning.
2. The official Wubi Wiki page contains all you need to know to get started on Wubi.
3. Bugs can be reported and tracked on the Wubi LaunchPad project page.
4. The Wubi project page can be found here.
5. The Wubi Blogosphere may have some useful tips and hints about Wubi.
6. Some extra historical context to this project can be found at Wikipedia.
7. A video of Agostino Russo's Wubi talk at UDS Prague (Intrepid Ibex) is worth watching; the talk also contains a demo of the Wubi installer.
Hope that's a helpful start.
Thursday, 16 July 2009
The Ubuntu Museum
Ubuntu has evolved over the past few years and, so that we don't lose and forget the older unsupported versions, Dustin Kirkland has created the Ubuntu Museum.
The Museum has screenshots, screencasts, and usable virtual appliance images of each of the retired Ubuntu releases for you to watch, download and re-experience. The virtual images are QEMU qcow2 disk images of the i386 Desktop edition of each version of Ubuntu.
So, go back down memory lane and visit the Ubuntu Museum... enjoy!
Wednesday, 15 July 2009
The TTY demystified
Just found a great article by Linus Åkesson about TTY handling - it is an excellent detailed and lengthy technical write up, packed full of TTY goodness. Well worth reading when you have a spare 20 minutes!
ESKY Lama back in action!
By the wonders of E-commerce, my early Monday morning order of a replacement helicopter tail boom arrived today (Wednesday). Unfortunately the Royal Mail did not ring the doorbell and just dumped the parcel on the doorstep, so goodness knows how long it had been sitting there for the World and his dog to see.
The tail boom kit from Miracle Mart included a fake engine and side buckets. The engine comprised 10 tiny components which took me just as many minutes to assemble with some poly-cement glue. It has been a while since I did any gluing at this level of detail; I used a match stick to help me dab the glue in the correct places, but I've come to realise my eyesight is not as good as it was when I last made Airfix kits at 12 years old!
So here's the assembled engine (note the tiny grill on the air intake, sweet!):
Fixing the tail boom was relatively straightforward; it just required unscrewing 4 tiny 2.5mm long screws and screwing on the new flexible replacement. Easy with the right micro screwdriver.
I did not attach the side buckets as they added just a little too much weight for my liking. Also, the instructions suggested they should be attached using double sided tape, and I could imagine them popping off during a bad landing and flying up through the blades, causing all sorts of blade carnage.
Here's the results of my 20 minutes of work:
I then took it out for a quick late evening spin and it works well; I don't think the new parts are much heavier than the originals, they look better, and the boom is far more resilient to bad landings. Result!
Now all I need to do is learn to fly properly!
Nethogs - yet another top like utility!
On my little quest to find more top-like utilities, I've now found a simple network measuring tool called nethogs. It displays network usage per process rather than per protocol, so one can easily identify which program is sucking away your network bandwidth.
To install, use:
sudo apt-get install nethogs
To use, run with sudo and specify the network device you want to monitor. For example, to monitor your wifi usage:
sudo nethogs wlan0
To quit the program, simply press the 'q' key.
Above: nethogs showing wget, firefox, xchat-gnome and Skype using wlan0
L1, L2, RAM and HDD Latencies - Infographic
I stumbled upon this infographic that illustrates the latencies in L1 cache, L2 cache, RAM and HDD. Each pixel represents 1ns - you will need to zoom into the top of the image to see the detail.
In summary:
L1 Cache (1ns)
L2 Cache (4.7ns)
RAM (83ns)
Hard Disk (13,700,000ns)
Tuesday, 14 July 2009
Ubuntu 9.10 Karmic Koala Alpha Testing
Ubuntu 9.10 Karmic Koala is still in its early Alpha stage, but I bit the bullet over the weekend and did a clean install on my Lenovo 3000 N200 laptop. Although I've been running with a Karmic kernel on my laptop for several weeks and also upgraded my servers to Karmic very early on, it's only now that I decided it was time to go the whole hog and upgrade my laptop.
Rather than just do a rolling upgrade from Jaunty to Karmic, I backed up my home directories and some configs in /etc and then did a clean install from Alpha 2 and pulled in all the updates. Then I restored /home and did some minor tweaks to my configs.
Starting afresh is quite cathartic; I got rid of a load of old applications that I'd installed a while ago and don't use any more, and I also started afresh with ext4. Ext4 brings some more speed, especially when fsck'ing the drive on boot. Ext4 is noticeably faster when removing hundreds of files, for example rm -rf on kernel source trees.
Karmic also now uses the Grub2 boot loader; this is working well across a wide variety of machines, as can be seen from the Grub2 testing page.
With Karmic we also get Kernel Mode Setting (KMS). So far this is working fine - you soon notice that there are fewer screen mode setting flickers on boot, and suspend/resume is slicker. Audio works OK with the default audio player Rhythmbox and with proprietary software such as Skype - I've not yet noticed any audio drop-outs, so this is a good sign so far.
There are going to be a lot more changes made over the coming months; hopefully we won't see many regressions in the kernel but, with all the change, stuff does occasionally get broken. So I encourage you to help us by testing the Alpha and Beta releases of Karmic so we can squish those bugs and get changes upstream good and early!
Abusing C for fun
I'm easily amused by the way people can abuse C. Duff's device is a classic example of abusing the C grammar for an optimisation hack. Basically Duff unrolled a loop and realised he could interlace a switch statement into it to jump into the loop and fall through the rest of the memory copy statements:
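/* Note: in Duff's original, 'to' is a memory-mapped output register, hence it is deliberately never incremented */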
send(to, from, count)
register short *to, *from;
register count;
{
register n=(count+7)/8;
switch(count%8) {
case 0: do { *to = *from++;
case 7: *to = *from++;
case 6: *to = *from++;
case 5: *to = *from++;
case 4: *to = *from++;
case 3: *to = *from++;
case 2: *to = *from++;
case 1: *to = *from++;
} while (--n>0);
}
}
So, inspired by this madness, I conjured up my own switch statement abuse, this time to show that one can get away without using break statements in a switch by abusing the while loop and continue statements. It's an ugly abuse of C:
#include <stdio.h>

void sw(int s)
{
switch (s) while (0) {
case 0:
printf("zero\n");
continue;
case 1:
printf("one\n");
continue;
case 2:
printf("two\n");
continue;
default:
printf("something else\n");
continue;
}
}
The while (0) statement makes one think that the code won't be executed; however, the outer switch statement essentially jumps control to the case statements, and the continue jumps out to the end of the loop, which then never re-iterates. Urgh.
Well, I posted this discovery to comp.lang.c in 2005 only to be trumped by this ingenious abuse of the switch statement:
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
int a = atoi(argv[1]), b = atoi(argv[2]);
switch(a) while(0) {
case 1:
printf("case 1 of outer switch\n");
break;
case 2:
printf("case 2 of outer switch\n");
switch(b) {
case 1:
printf("case 1 of inner switch\n");
break;
case 2:
printf("case 2 of inner switch\n");
continue;
}
printf("end of inner switch\n");
break;
}
printf("end of outer switch\n");
return 0;
}
This masterpiece(?) allows one to break out of either of two nested switch statements without a goto. I will leave it as an exercise for the reader to figure out how this works! Thanks to Richard Tobin who posted that follow-up code snippet on comp.lang.c too.
So while GCC happily accepts this valid C grammar, fortunately we won't be seeing code like this entering the kernel because kernel hackers are far too sensible... :-)
iotop - I/O monitor
Well, just when I thought I'd stumbled on the final top-like utility (see my previous articles on htop and atop), I found another: iotop.
iotop watches I/O usage and displays I/O bandwidth read/write usage for processes and threads. It will also display the percentage of time processes spend swapping in and waiting for I/O to complete. To install iotop on a Ubuntu system use:
sudo apt-get install iotop
The left/right arrow keys change the sort column (process ID, PRIO, USER id, disk read, disk write, swapin activity, IO activity and command used). The 'r' key toggles the sort order and the 'a' key toggles the I/O display between bandwidth and accumulated I/O stats.
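One extra option worth knowing about (assuming your version of iotop supports it) is -o, which trims the display down to only those processes actually performing I/O:
sudo iotop -o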
above, iotop running on my laptop
So, another top tool for you to use to track down rogue processes or I/O hoggers.
Sunday, 12 July 2009
Linux news getting buried?
An article in the Inquirer has reported that Linux news on sites such as Digg, reddit, and StumbleUpon is being buried. According to Computerworld's Steven J. Vaughan-Nichols, pro-Linux and anti-Microsoft stories are getting buried within an hour of being published.
Popularity rating systems can be open to abuse and results can be skewed. One thing is for sure: if 95% of users are using Windows and 1-2% are using Linux, then Windows supporters will always be able to bury Linux news items wherever a voting mechanism is employed.
One may say that the underdog cannot win in this scenario. For me, however, it makes me want to make Linux even better in quality and usability so that we can win the argument based on hard facts rather than arbitrary voting patterns.
Anyone care to add their 2 cents worth?
ESKY Lama Helicopter Crash!
Bit of a minor disaster today. I took my ESKY Lama helicopter for a quick spin in the back garden and managed to slice the blades through the tail boom without causing any damage to the plastic blades! Ouch! Perhaps I should have flown it indoors, as there was a slight breeze that made the helicopter tricky to control....
Well, the tail boom was rather flimsy and in poor shape after I flew the Lama into my apple tree the other week. Below is a video of the incident for any of you who'd like to see my misfortune:
The good news is that after a quick Google I found the website "Pimp My Helicopter", which has a flexible replacement tail boom that I hope will be a little more resilient than the original part. They even supply a pimped-up Fly-Bar with blue LEDs that makes it look like some cars I see in Crawley :-)
Saturday, 11 July 2009
Measuring Power Consumption
I got hold of a plug-in mains power and energy monitor several weeks ago to get some idea of the power consumed by various PCs, servers and laptops. It has worked out quite effective for seeing how I can save power on a Xeon server and a desktop PC, but has proved less rewarding for lower-power devices such as Atom based netbooks.
At the moment I am using a Maplin 2000MU, which at £14.99 is good value for money. It measures voltage, amps, watts, volt-amps, mains frequency and power factor (PF). The manual for this device can be found here.
For low power devices it lacks fine resolution, hence it's not been valuable in measuring hibernate or suspend power on laptops and netbooks. Overall I would give it 7 out of 10, just because it lacks the fine precision I need. However, it's been useful in helping me tweak the settings on my server and in seeing just how much power my laser printer consumes when it's busy!
Power Consumption During Hibernation
When a Linux laptop or netbook is hibernated you would assume that it's not drawing any power from the battery and so it can be left hibernated for months. Well, that's not strictly true!
Firstly, one has to consider the fact that laptops and netbooks have an embedded controller inside that consumes a small amount of power. Also, for some reason, some machines flash a LED periodically to show that the machine is hibernated; this too draws a very small amount of power.
One can measure the amount of power being used during hibernation using the following recipe:
First, we need to measure the power used to just hibernate:
1. firstly charge up your laptop battery and then unplug the power plug,
2. measure the battery capacity. I have a simple script to do this (see the sketch after this recipe),
3. hibernate the machine and then immediately resume it,
4. measure the battery again,
5. subtract the measurement in step 4 from the measurement in step 2. This is the amount of power used to do a hibernate cycle (e.g. power used to write memory to disc, and the reverse on wake up).
Next, we need to measure the amount of power used for a LONG hibernate:
6. measure the battery capacity again.
7. hibernate the machine for more than 30 minutes, e.g. for 1 hour is good, and then wake it up.
8. measure the battery once more. Subtract this reading from the figure in step 6, then subtract the figure derived in step 5; what remains is the amount of power consumed just to do the long hibernate.
I've hacked up a script to do this automatically - it's still work in progress though; it may not work on some machines because the ACPI alarm may not be 100% reliable.
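The battery-capacity reading at the heart of all this can be as simple as the sketch below; note that the paths and field names are assumptions that vary between machines and kernel versions (BAT0 is just the common default):
#!/bin/sh
# Print remaining battery capacity; newer kernels expose /sys/class/power_supply,
# older ones /proc/acpi/battery. BAT0 is assumed here.
if [ -r /sys/class/power_supply/BAT0/energy_now ]; then
    cat /sys/class/power_supply/BAT0/energy_now
else
    grep "remaining capacity" /proc/acpi/battery/BAT0/state
fi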
You may be surprised how much power is being used to do nothing!
So what's consuming all this power? Well, some of the power loss could be due to battery leakage - older batteries seem to suffer more than newer ones, but this cannot be the full story. From my tests the power consumption can be due to:
1. Battery leakage. Varies depending on battery age and type.
2. The Embedded Controller just being busy and flashing LEDs etc.
3. Devices not being fully turned off, e.g. Wifi, Bluetooth etc.
4. Others - probably poor laptop design(?)
Hopefully drivers will improve to fix issue 3, and Intel power management in general should improve over time too. Identifying the bad drivers is the next step in my quest, but don't hold one's breath - there is a lot of code and the information on a lot of hardware is closed, so figuring out how to make it more power efficient is not straightforward!
I also believe the ACPI wake alarm can suck a little power. This can be used to wake the machine up at some predetermined time in the future; for example, MythTV boxes use this feature to wake up a PC to start recording TV. Turning off the alarm probably saves a small amount of power; one trick is to program it to trigger in the next second or so, and then hibernate the machine.
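As a hypothetical illustration of that trick (the rtc0 sysfs path assumes a reasonably recent kernel; older systems use /proc/acpi/alarm instead), as root:
echo 0 > /sys/class/rtc/rtc0/wakealarm
echo $(( $(date +%s) + 2 )) > /sys/class/rtc/rtc0/wakealarm
The first write clears any pending alarm; the second re-arms it to fire two seconds from now, just before the machine hibernates.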
Any other ideas on exactly where all that power is going would be appreciated!
Meanwhile, just don't assume hibernation is totally power friendly :-)
Friday, 10 July 2009
Debugging Early Resume Issues
The last couple of days I've been digging a little deeper into a tricky resume issue on a laptop; specifically, the machine hangs the second time one does a resume, somewhere in or around arch/x86/kernel/acpi/wakeup_32.S.
It is rather annoying as the hang occurs early in the resume phase so it is not possible to use printk() to dump any information out to a console; one has to resort to low level debugging hacks.
The difficult part is that I cannot attach a Power On Self Test (POST) debug card to the laptop and wiggle port $80 to see the POST codes as a method of debugging. To see if the BIOS was actually jumping back to the kernel resume code, I added some code that just flashes the keyboard LED as a sanity check, and from there I've been able to work down through the code.
I've got a lot of hacky code like this that helps me debug thorny issues. I've put some of this code up in the debug-code git repository as it could be useful to others. I've provided code snippets in C and assembler to buzz the internal PC speaker, flash the keyboard LED and also to change the VGA palette. This code is hacky I know, but it gets the job done!
If you have your own favourite way of debugging code, feel free to share the know-how.
Thursday, 9 July 2009
Ubuntu Mainline Kernel Builds
Ubuntu systems generally run with Ubuntu kernels, which are based on mainline kernels and also carry numerous patches for bug fixes, security fixes and some extra hardware support. Sometimes it can be useful to run your system with an unmodified mainline kernel, for example when one needs to check if problems have been fixed with an upstream kernel or if a Ubuntu patch causes a regression.
My colleague Andy Whitcroft (~apw) has worked hard to provide us now with mainline kernel builds. These kernels are unmodified kernel source, built against the Ubuntu kernel configuration files and packaged as .deb files to allow simple installation and testing.
The mainline kernels archive is located at: http://kernel.ubuntu.com/~kernel-ppa/mainline/
and there are a full set of instructions available on the MainlineBuilds wiki page.
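Installation is then just a matter of downloading the linux-image (and optionally linux-headers) .deb files for your architecture from the archive and installing them; a rough sketch, with the filenames as placeholders:
sudo dpkg -i linux-headers-*.deb linux-image-*.deb
Then reboot and pick the new kernel from the Grub menu.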
Buyer beware. These kernels do not include any of the Ubuntu specific drivers and there are no restricted modules either, so your mileage may vary. Also note that the further the kernel version is from the base kernel, the more likely one is to get incompatibilities with user space. Also, the kernel team does not support these kernels, so use them at your own risk.
Wednesday, 8 July 2009
Google Chrome OS
Yesterday Google announced its Google Chrome OS; the key points were:
Fast + Lightweight; Boot to Web in a "few" seconds.
Minimal interface, Web based user experience.
Users won't deal with viruses, malware, security updates.
Google Chrome in a Windowing System sitting on the Linux Kernel (is this going to be X based, or something radically new?)
CPUs: x86, ARM
From the BBC news website:
"One of Google's major goals is to take Microsoft out, to systematically destroy their hold on the market," (Rob Enderle, industry watcher and president of the Enderle Group).
My gripe is that Google is able to make an announcement that gets reported by the media as "Google is making a new OS". It is a little disingenuous if Google states it is their new OS when in fact it's just another Linux based product. Perhaps it's down to branding; mentioning it's a Google OS aligns it with the Google brand, whereas saying it's Google Chrome running on Linux makes it appealing only to geeks. Whatever happens, Google needs to let the public know that Linux is core to their success, be it supporting Chrome or in their data centres.
As it is, Linux once again is being used to shake things up. Looks like it's going to be an interesting next 18 months.
Tuesday, 7 July 2009
To Wine, or not to Wine, that is the Question?
Wine (Wine Is Not an Emulator) is a very cleverly engineered tool; it allows one to run Windows applications natively on Intel based Linux systems, with differing degrees of success depending on which underlying Windows APIs the application uses.
I never know what to think about using Wine. Part of me wants to just refuse to use any Windows application because it's "tainting" my system - I want to have 100% GPL'd Open Source based code running on my machine. Using this argument I should not be using apps like Skype or Flash (and hence the BBC iPlayer), and I should therefore refuse to use any closed proprietary drivers, like the Broadcom wl.ko driver module. Where does one draw the line?
Perhaps my machine should be a totally Windows free zone - that's one line I could draw. Alternatively, if I have to run Windows code, maybe it should be just inside a Virtual Machine just to contain it, rather than using apps in Wine.
It's an interesting debate - my current approach is to go for as much GPL'd Open Source code as I can and be pragmatic: use some proprietary apps like Skype only when I need to, and avoid any Windows code on my machine if at all possible. So, sorry Wine, your technology is very powerful and enabling, but I don't think I will be using you for the moment.
Faster ssh X11 Forwarding
I use ssh daily to connect to my servers and laptops around my home office. Most of the time I'm using ssh to login and build software, so it's plain and simple command line activity. Sometimes, however, I need to run an X11 application on a remote machine, in which case I use X forwarding to display the remote X application on my laptop - and this can be slow. Today I stumbled on the following incantation to speed up X11 forwarding over ssh:
ssh -c arcfour,blowfish-cbc -X -C user@remotehost
Thanks to Samat Jain for this info.
The choice of cipher is based on some performance benchmarks as noted in LaunchPad bug #54180
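If you connect to the same host regularly, you can make these settings the default in ~/.ssh/config rather than typing the flags every time; a small sketch, where remotehost is a placeholder for your machine:
Host remotehost
    Ciphers arcfour,blowfish-cbc
    Compression yes
    ForwardX11 yes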
Monday, 6 July 2009
Authorising USB devices for access
Normally when a USB device is connected to a Linux system it is automatically configured and then the device's interfaces become ready for users to access. While this is useful in most desktop use cases, there are situations where you may not want it, for example on Linux kiosks or servers where access must be limited.
Each USB device has an authorized file in the /sys interface; writing "0" to this disables authorization, and conversely writing "1" authorizes the device to connect.
For example, to disable authorization:
echo 0 > /sys/devices/pci0000:00/0000:00:1a.7/usb1/authorized
Also, one can enable/disable authorization for an entire USB host by writing to the authorized_default file:
echo 0 > /sys/devices/pci0000:00/0000:00:1d.0/usb5/authorized_default
By default, the authorized and authorized_default settings are set to 1, enabled.
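A quick way to check the current state of everything on the bus is to read the authorized files back; each holds a 0 or 1:
grep . /sys/bus/usb/devices/*/authorized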
Sunday, 5 July 2009
atop - AT Computing's top
Last month I blogged about htop, a process monitoring tool. Today, while searching for a way of monitoring disk activity, I stumbled across atop (AT Computing's top). While it shares a lot of similarity with top, it is distinct enough to be useful in its own right. Because it is an interactive text based tool, it is driven by simple key presses; for example, pressing 'h' brings up a help page showing how to drive the tool.
Some useful key commands are as follows:
g - generic info (default)
m - memory details
d - disk details
n - network details
s - scheduling and thread-group info
v - various info (ppid, user/group, date/time, status, exitcode)
c - full command-line per process
..to name but a few.
One feature I especially like is that atop highlights in red the processes that may be misbehaving, such as CPU, memory or disk I/O hoggers; very helpful when a rogue program is killing your machine or sucking away power or I/O.
To install atop, simply use:
sudo apt-get install atop
Below are some screen shots:
atop highlighting CPU hogging
atop highlighting I/O hogging
So, while I may always turn to ps, top or vmstat as my first port of call when I want to check my system's activity, atop is another very handy utility in my toolkit for seeking out and resolving misbehaving systems. Check it out and let me know what you think!
Fixing Linux System Pauses
A couple of weeks ago I was looking at a bug where a netbook would just seem to randomly hang and would only come alive when a key was pressed. After poking around a bit I recalled that my colleague Stefan Bader had seen this issue before, and he told me to try booting with the Linux boot parameter acpi_skip_timer_override. Lo and behold, this workaround worked.
So what was going on? Well, it's a BIOS issue. The BIOS seemed to be claiming that IRQ0 was routed to another IRQ on the IO-APIC when in fact this was not so. Generally speaking, documentation for most chipsets is not disclosed, so it's impossible to know how the chipset is configured and, worse still, how to fix the problem with a quirk. There are a couple of patches in the kernel (e.g. the patch for the HP NX6325) where this problem is worked around with a quirk, but for other machines one has to work around the problem with appropriate boot options.
We see this bug manifest itself because modern kernels use a tickless timer and we hit a state where all the CPUs have gone into a deep C state and need a timer interrupt to wake them up. However, if the routing of the timer interrupt is misconfigured then the CPU is not woken up, hence the hang until we generate an external interrupt, for example by pressing a key.
One can debug this by booting with kernel boot parameter "debug lapic=debug". This will make Linux dump out the interrupt routing on the IO-APIC and it's worth using to understand what's going on under the bonnet.
Boot options that are worth trying to work around this issue are:
acpi_skip_timer_override
- this ignores the IRQ zero / pin 2 interrupt override
hpet=disable
- disable HPET and use PIT instead
idle=poll
- forces a polled idle loop (so CPU won't go into deep C state), hence uses power and makes the system run hot (not recommended)
idle=halt
- halt is forced to be used for CPU idle, C2/C3 states won't be used again. This may only work on systems without C1E (Enhanced Halt State).
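As a sketch of how to test one of these, assuming a GRUB legacy setup (the kernel version and UUID below are placeholders), append the option to the kernel line in /boot/grub/menu.lst:
kernel /boot/vmlinuz-2.6.28-13-generic root=UUID=placeholder ro quiet splash acpi_skip_timer_override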
So, if you ever see Linux hanging around and not waking from its idle state, try one of the above and see if this solves your problem.
Amusing LaunchPad Bug
Every so often while working on bugs reported in LaunchPad one stumbles on some interesting or just plain weird bug reports. LaunchPad bug #107648 is probably the most amusing one I've read of late.
Bug #1 is the longest running bug (and will take a few minutes to fetch), and will take a while to fix, but we are working on it. :-)
Saturday, 4 July 2009
BBC article on Open Source
Open Source software has made it onto the BBC's radar again with an article on their Click programme. It's commendable that they devote some airtime to it; the programme seemed to be concluding that Open Source has missed opportunities because it has not marketed or promoted itself. This is quite frustrating, as the BBC normally reports on new software releases from Microsoft and Apple, but pays little attention to new releases of software such as Open Office or new distro releases.
Also, it amused me that the article mentioned that the Asus Eee-PC has been "quite" successful. In fact, the Linux driven netbook revolution caught Microsoft by surprise and caused a massive change in the way people do low-cost mobile computing. Microsoft just has XP; with Linux there are lots of revolutionary desktop experiences tailored for this computing model.
It's a start. The BBC Click programme did mention it would like to hear your comments, so please watch the programme and post some positive and helpful feedback - hopefully we can encourage the BBC to devote more airtime to the benefits of Open Source computing!
Friday, 3 July 2009
Understanding Linux Early Resume Hacks
When Linux resumes from a suspend, it can be configured to do some really grim BIOS related hacks while in real mode; these hacks are controlled by the acpi_sleep kernel boot parameter.
The settings are as follows:
acpi_sleep=s3_bios
This makes the kernel set the video by directly calling into the video BIOS ROM at address 0xc000.
acpi_sleep=s3_mode
This makes the kernel set the video mode by BIOS interrupt int $10, ax=0x4f02 (select VESA video mode).
acpi_sleep=s3_beep
This causes the kernel to beep. Depending on the kernel version you either get a beep or a beeped Morse code message "...-" (which is Morse for "V" for some reason).
These acpi_sleep settings map to values:
s3_bios 1
s3_mode 2
s3_beep 4
and can be set by writing these magic values to /proc/sys/kernel/acpi_video_flags before you do the next suspend; this is a little more flexible than setting the kernel boot parameter acpi_sleep.
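The values are additive bit flags, so for example to select s3_bios and s3_beep (1 + 4 = 5) for the next suspend, as root do:
echo 5 > /proc/sys/kernel/acpi_video_flags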
s3_beep is checked for first - so if you want to make sure your machine is coming out of the BIOS and executing the initial kernel resume code, set this option and you will hear a beep during the very early resume execution as long as you have an internal speaker in your PC.
s3_bios is then next checked and if enabled, the kernel calls directly into the video BIOS ROM to force the video state.
Finally, s3_mode is checked, and if enabled a VESA mode setting BIOS interrupt is executed to set the video mode.
Some BIOSs just croak and die if you do s3_bios and s3_mode calls - which is understandable as the behaviour for doing such calls during a resume is undefined. It may work, it may not!
Ubuntu Chumby Hackers Launchpad team
I should have found it earlier, but there is a LaunchPad team called ubuntu-chumby-hackers devoted to hacking away at the Chumby :-)
Whatever next!
Git repository of test scripts
I've created a git repository that will contain some of my day to day test scripts. To grab a copy, do the following:
git clone git://kernel.ubuntu.com/cking/scripts
I hope to populate this over time with an eclectic mix of scripts that help me benchmark and test various parts of the Linux kernel.
Thursday, 2 July 2009
Upgrading to ext4
Ext4 support has been available as an installation option since Ubuntu Jaunty 9.04 and will be the default in Ubuntu Karmic 9.10. I've been using ext4 for quite a while on my servers (since the alpha releases of Jaunty) as it provides me with some extra benefits:
* fsck'ing a 500GB partition is so much faster - huge improvement!,
* removing entire kernel source trees takes a few seconds (very handy!),
* kernel builds are a little quicker (I've not quantified how much),
* block I/O on large files is quicker.
I've not experienced any problems with ext4; I use it daily to grind out kernels, and the filesystem gets a lot of exercise.
For those with existing ext2 or ext3 based filesystems, one can upgrade to ext4 using some relatively straightforward steps. Obviously, please make sure you do this on an unmounted filesystem! One way of doing this is to boot your machine using a LiveCD, making sure the HDD is not mounted before proceeding...
To upgrade from ext2 to ext3, for example on /dev/sda1, do:
tune2fs -j /dev/sda1
..this basically enables the journal. One can then mount this as an ext3 filesystem.
To upgrade from ext3 to ext4, for example on /dev/sda5, do:
tune2fs -O extents,uninit_bg,dir_index /dev/sda5
Note that this renders the filesystem unmountable as an ext3 filesystem. You have been warned!
After this YOU MUST RUN fsck to fix some tune2fs modified filesystem structures:
e2fsck -fpDC0 /dev/sda5
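Also, if the converted filesystem is listed in /etc/fstab, remember to change its type field from ext3 to ext4 before rebooting, otherwise the mount may fail; for example (the device and mount point are placeholders):
/dev/sda5  /home  ext4  defaults  0  2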
If you boot off the newly converted ext4 filesystem you need to re-install an ext4 capable version of grub using grub-install. The ext4 support for grub was introduced into Jaunty, thanks to a patch from Quentin Godfroy, and an inode tweak by me, and some help from Colin Watson. Maybe I should write about the fun and games I had in debugging this one day.. :-)
Anyhow, you may not get any immediate improvement from an ext3 to ext4 upgrade, as existing files need to be re-written to get the full advantage of ext4 extents. But over time, with a lot of file activity, you should see some improvement. Alternatively, back your data up, re-format to ext4 and restore the data, and you get immediate ext4 goodness.
For more information about ext4, I recommend checking out the ext4 wiki.
Troubleshooting X issues on Ubuntu
Sorting out X problems can be a headache. The breadth of issues is wide and the complexity can be deep, so it's always helpful to have somewhere to turn when you need deep X know-how. Fortunately Bryce Harrington has created a set of X Troubleshooting wiki pages which have proved helpful in fixing X issues.
The wiki covers a range of topics, including: Resolution issues, high CPU utilisation, blank screen issues, freezes, font sizes, X resume crashes, Intel performance issues and hotkey fixes to name but a few.
I'd recommend you check out the wiki even if you don't have X issues, just because it's packed full of useful X config nuggets of information.