Using backuppc as a dirty distributed shell

Backuppc is a neat server-based backup solution. In Linux environments it is often used in combination with rsync over ssh – and, let's be honest – often with fairly lazy sudo or root rights for the rsync over ssh connection. This has a lot of disadvantages, but at least you can use such a setup as a cheap distributed shell, as a well maintained backuppc server might have access to a lot of your servers.

I wrote a small wrapper that reads the backuppc configuration (especially as packaged by Debian/Ubuntu) and iterates through the hosts, allowing you to issue commands on every valid connection. So far I have used it for listing used ssh keys, os patch levels and even small system manipulations.

#!/bin/bash
SSH_KEY="-i /var/lib/backuppc/.ssh/id_rsa"
# collect all root@host logins from the backuppc hosts file
SSH_LOGINS=( $(grep "root" /etc/backuppc/hosts | awk '{print "root@"$1}') )

for SSH_LOGIN in "${SSH_LOGINS[@]}"
do
  HOST=$(echo "${SSH_LOGIN}" | awk -F"@" '{print $2}')
  echo "--------------------------------------------"
  echo "checking host: ${HOST}"
  ssh -C -qq -o "NumberOfPasswordPrompts=0" \
    -o "PasswordAuthentication=no" ${SSH_KEY} ${SSH_LOGIN} "$1"
done

You can easily change this to your needs (e.g. changing login user, adding sudo and so on).

$ ./exec_remote_command.sh "date"
--------------------------------------------
checking host: a.b.com
Mo 9. Mai 15:40:26 CEST 2011
--------------------------------------------
checking host: b.b.com
[...]

Make sure to quote your command, especially when using commands with options, so the script can handle the whole command line as one argument.
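A quick sketch of why the quoting matters, using a stand-in function instead of the real script (the function name is made up for the demo):

```shell
#!/bin/bash
# stand-in for exec_remote_command.sh: like the script, it only looks at "$1"
show_first_arg() { echo "first argument: $1"; }

show_first_arg uname -r     # unquoted: only "uname" ends up in $1
show_first_arg "uname -r"   # quoted: the whole command line is one argument
```

The first call would only run "uname" on the remote hosts and silently drop "-r"; the second runs the full command.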

A younger sister of this script is the following ssh key checker, which lists and sorts the ssh keys used on the systems by their key comment (feel free to include the key itself):

#!/bin/bash

SSH_KEY="-i /var/lib/backuppc/.ssh/id_rsa"
SSH_LOGINS=( $(grep "root" /etc/backuppc/hosts | awk '{print "root@"$1}') )

for SSH_LOGIN in "${SSH_LOGINS[@]}"
do
  HOST=$(echo "${SSH_LOGIN}" | awk -F"@" '{print $2}')
  echo "--------------------------------------------"
  echo "checking host: ${HOST}"
  # grab key comments from every user's authorized_keys files
  ssh -C -qq -o "NumberOfPasswordPrompts=0" \
    -o "PasswordAuthentication=no" ${SSH_KEY} ${SSH_LOGIN} \
    "cut -d: -f6 /etc/passwd | xargs -i{} egrep -s \
    '^ssh-' {}/.ssh/authorized_keys {}/.ssh/authorized_keys2" | \
    cut -f 3- -d " " | sort
  # also check the keys placed in /etc/skel for new users
  ssh -C -qq -o "NumberOfPasswordPrompts=0" \
    -o "PasswordAuthentication=no" ${SSH_KEY} ${SSH_LOGIN} \
    "egrep -s '^ssh-' /etc/skel/.ssh/authorized_keys \
    /etc/skel/.ssh/authorized_keys2" | cut -f 3- -d " " | sort
done

A sample output of the script:

$ ./check_keys.sh 2>/dev/null
--------------------------------------------
checking host: a.b.com
ccm@host1.key 
backuppc@localhost
some random key comment
--------------------------------------------
checking host: b.b.com
[...]

That’s all for now. Don’t blame me for doing it this way – I am only the messenger :)

When backups fail: A mysql binlog race condition

Today I ran into my first MySQL binlog race condition. The initial problem was quite simple: a typical MySQL master->slave setup with heavy load on the master and nearly no load on the slave, which only serves as a hot fallback and job machine, showed differences in the same table on both machines. The differences showed up from time to time: entries that had been deleted from the master were still on the slave.

After several investigations I started examining the MySQL binlog from the master – a file containing all queries that will be transferred to the slave (and executed there if they don't match any ignore-db-pattern). I grepped for ids of rows that had not been deleted on the slave, as I was interested in whether the DELETE statement was in the binlog. In order to read a binlog file just use "mysqlbinlog" and parse the output with grep, less or similar. To my surprise I found the following entries:

$ mysqlbinlog mysql-complete-bin.000335 | grep 1006974
DELETE FROM `tickets` WHERE `id` = 1006974
SET INSERT_ID=1006974/*!*/;

As "SET INSERT_ID" is a result of an INSERT statement, it was clear that MySQL wrote the INSERT => DELETE statements in the wrong order. As INSERT/DELETE sometimes occur quite quickly after each other and several threads are open in the same MySQL server, you might run into a rare INSERT/DELETE race condition: the master executes the statements successfully, while the slave receives them in the wrong order.

As a comparison, this is the normal order of INSERT and DELETE (please note that the actual INSERT is not displayed here):

$ mysqlbinlog mysql-complete-bin.000336 | grep 1007729
SET INSERT_ID=1007729/*!*/;
DELETE FROM `tickets` WHERE `id` = 1007729

That's all so far. Lesson learned for me: a MySQL binlog might get you into serious trouble when firing INSERT and DELETE statements at the same rows in quick succession, as the linear binlog file can miss the correct statement order – probably a result of different MySQL threads and unclean log behavior. I have not yet found a generic solution for the problem, but I am looking forward to one.

sync ruby gems between different installed ruby versions

If you are in the Ruby business (which probably means "in the Ruby on Rails business" nowadays), sooner or later you'll have to play around with different Ruby versions on the same machine, as you might run into crashing ruby processes or performance issues. At least you'll notice that running the standard Debian/Ubuntu Ruby version might get you into serious trouble, as it is several times slower than a manually compiled version (for reference see this launchpad bug and this blog entry).

So a common situation is: you have Ruby and a lot of Ruby gems installed and need to switch to a different Ruby version while making sure that you have all gems installed in the new version that you had in the old one. As gems differ from version to version, you should also be interested in installing exactly the same gem versions again and not just doing an install of all recent versions.

As far as I know there is no official way of syncing gems between two ruby installations. So the common way is something like asking ruby for a list of currently installed gems like

$ gem list
 
*** LOCAL GEMS ***
 
actionmailer (2.3.2, 2.2.2, 2.1.1)
actionpack (2.3.2, 2.2.2, 2.1.1)
activerecord (2.3.2, 2.2.2, 2.1.1)
activeresource (2.3.2, 2.2.2, 2.1.1)
activesupport (2.3.2, 2.2.2, 2.1.1)
[...]
ZenTest (4.0.0)

and then running a

$ gem install actionmailer -v 2.3.2
$ gem install actionmailer -v 2.2.2
$ gem install actionmailer -v 2.1.1
[...]
$ gem install ZenTest -v 4.0.0

for every gem and every gem version you probably need. As a couple of gems are native extensions, they'll get compiled and you need to wait some seconds or minutes.

As I had to do this task more than once I wrote a small wrapper script that automates the process completely by fetching the list of gems and installing them again on another ruby version:

#!/bin/sh
GEM_FROM=/path/to/old/gem
GEM_TO=/path/to/new/gem
# skip the three header lines of "gem list", then walk all gems and versions
${GEM_FROM} list | sed -n '4,$ p' | \
while read gem versions; do
  for version in $(echo ${versions} | sed "s/[(),]//g"); do
    echo ${gem} ${version}
    ${GEM_TO} install --no-rdoc --no-ri ${gem} -v ${version}
  done
done

The script uses some regular expression sed magic, kindly tweaked by Mnemonikk (thank you). Please note that I prefer not to install rdoc and ri, as this saves time and disk space. Feel free to change this to your needs.

The only caveat of this script are gems that cannot be installed because they come from unknown external repositories or were manually downloaded/installed. Therefore make sure to check for this after a run of the gem sync script – it won't stop when a gem cannot be installed, which is intended behaviour.
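One way to do that check is to diff the gem lists of both installations after the run; `missing_gems` below is a hypothetical helper, not part of the script above:

```shell
#!/bin/bash
# print gem/version lines that exist in the old installation but are
# missing in the new one (arguments are the two gem binaries)
missing_gems() {
  comm -23 <("$1" list | sort) <("$2" list | sort)
}
# usage sketch: missing_gems /path/to/old/gem /path/to/new/gem
```

Any output means some gems did not make it across and need manual attention.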

So far about this. Hope it helps you out when dealing with different Ruby versions. Do you have similar best practices for keeping Ruby gems in sync?

my package of the day – htmldoc – for converting html to pdf on the fly

PDF creation has actually become fairly easy. OpenOffice.org, the Cups printing system and KDE provide methods for easily printing nearly everything to a PDF file right away – a feature that even outperforms most Windows setups today. But there are still PDF related tasks that are not that simple. One I often run into is automated PDF creation on a web server. Let's say you write a web application and want to create PDF invoices on the fly.

There are, of course, PDF frameworks available. Let's take PHP as an example: if you want to create a PDF from a php script, you can choose between FPDF, Dompdf, the sophisticated Zend Framework and more (including commercial solutions). But to be honest, they are all either complicated to use (as you often have to learn a specific syntax) or quite limited in their possibilities to create a pdf file (as you can only use a few design features). As I needed a simple solution for creating a 50+ pages pdf file with a huge table on the fly, I tested most frameworks and failed with most of them (often just because I did not have enough time to write dozens of lines of code).

So I hoped to find a solution that allowed me to just convert a simple HTML file to a PDF file on the fly, providing better compatibility than Dompdf for instance. The solution was … uncommon. It was no PHP class but a neat command line tool called "htmldoc", available as a package. If you want to give it a try, just install it by calling "aptitude install htmldoc".

You can test htmldoc by saving some html files to disk and calling "htmldoc --webpage filename.html". There are a lot of interesting features like setting font size, font type, the footer, color and greyscale mode and so on. But let's use htmldoc from PHP right away. The following very simple script uses the PHP output buffer to reduce the writing to disk to one temporary file (if somebody knows a way of doing this without any temporary files from a script, let me know):
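As a pure command line sketch first (the flags mirror the PHP example below; the file names are placeholders I made up):

```shell
#!/bin/bash
# write a minimal input page, then build the htmldoc call once so it
# can be reused from scripts; /tmp/in.html and /tmp/out.pdf are examples
echo '<html><body><h1>Invoice</h1></body></html>' > /tmp/in.html
HTMLDOC_CMD="htmldoc --quiet --gray --fontsize 9 -t pdf14 --webpage -f /tmp/out.pdf /tmp/in.html"
if command -v htmldoc >/dev/null 2>&1; then
  $HTMLDOC_CMD
else
  echo "htmldoc is not installed, would run: $HTMLDOC_CMD"
fi
```

If htmldoc is installed, this leaves a ready pdf in /tmp/out.pdf without any PHP involved.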

<?php
// start output buffer for pdf capture
ob_start();
?>
your normal html output will be placed here, either by
dumping html directly or by using normal php code
<?php
// save output buffer
$html=ob_get_contents();
// delete Output-Buffer
ob_end_clean();
// write the html to a file
$filename = './tmp.html';
if (!$handle = fopen($filename, 'w')) {
	print "Could not open $filename";
	exit;
}
if (!fwrite($handle, $html)) {
	print "Could not write $filename";
	exit;
}
fclose($handle);
// htmldoc call
$passthru = 'htmldoc --quiet --gray --textfont helvetica \
--bodyfont helvetica --logoimage banner.png --headfootsize 10 \
--footer D/l --fontsize 9 --size 297x210mm -t pdf14 \
--webpage '.$filename;
 
// write output of htmldoc to clean output buffer
ob_start();
passthru($passthru);
$pdf=ob_get_contents();
ob_end_clean();
 
// deliver pdf file as download
header("Content-type: application/pdf");
header("Content-Disposition: attachment; filename=test.pdf");
header('Content-length: ' . strlen($pdf));
echo $pdf;

As you can see, this is neither rocket science nor magic – just a wrapper for htmldoc, enabling you to forget about the pdf when writing the actual content of the html file. You'll have to check how htmldoc handles your html code: make it as simple as possible and forget about advanced css or nested tables. But it's actually enough for a really neat pdf file, and it's fast: the creation of 50+ page pdf files is quick enough in my case to make the on-demand invocation of htmldoc feel like serving a static file.

Please note: calling external programs and command line tools from a web script is always a security issue, so you should carefully check input and watch updates for the program you are using. The code provided should be easily portable to another web language/framework like Perl or Rails.

my package of the day – htop as an alternative top

"top" is one of those programs that are used quite often but that actually nobody talks about. It just does its job: showing statistics about memory, cache and cpu consumption, listing processes and so on. Actually top provides you with some more features like a batch mode and the ability to kill processes, but it's all quite low level – e.g. you have to type the process id (pid) of the process you want to kill.

So, though an application like top makes sense on the console, a more sophisticated one would be great, extending the basic top functionality with enhancements to its usage. This tool already exists: it's the ncurses based "htop", and we'll have a closer look at it now.

To begin: install "htop" via "aptitude install htop", Synaptic or the package manager of your choice. As you can see, htop is quite colorful, which is, of course, a matter of taste. In my opinion colors make sense when they mean something or provide better readability. So let's check the output in brief:

htop1.png

At the upper left corner you see statistics about the usage of the cpu cores (in my case there are two of them, marked "1" and "2"), memory and swap statistics, while on the right side you have the common uptime/load stats. The interesting part is the usage of colors in the cpu/ram/swap bars. If you are new to htop, you have to look the colors up at least once. Therefore just press "h" ("F1" should work, too, but Gnome might get in your way) and you'll see a nice explanation in the help:

htop2.png

Quite interesting is the distribution between green and red in the cpu stats, as a high kernel load often means something goes wrong (with the hardware i/o for instance). In the memory bar the ram actually used is marked green – blue and orange could be freed by the kernel if necessary. (People are often confused that their ram seems to be full when calling a tool like htop although they are not running that many programs. It's important to understand that memory is also used for buffering/caching and that this memory can be handed back to "real" data later on.)
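You can see the same distinction in plain numbers with "free": depending on your procps version it shows either a "-/+ buffers/cache" line or an "available" column, i.e. the ram the kernel could still hand out to applications:

```shell
# memory usage in megabytes; note how much of "used" is only buffers/cache
free -m
```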

So what's the next htop feature? Use your mouse, if you like! You can test it by clicking on "Help" in the menu bar at the bottom. Maybe while clicking around a bit you already noticed that you can also click on processes and mark them. What for? Well, htop enables you to kill processes quite easily: you don't have to type a process id, write a pattern or anything – you can just mark them with the mouse or cursor and either click on "Kill" in the menu or press the "F9" or "k" key. "htop" will let you choose from a list of signals afterwards:

htop3.png

Of course you cannot kill processes that don't belong to your user when htop is not run as root (i.e. with "sudo"). "htop" marks processes that belong to the user it is run by with a brighter process id:

htop4.png

Sadly this also means that running htop as root/sudo marks processes that belong to non-root users with a darker grey. But hey, that's a nice missing feature for a patch, isn't it?

If you'd like to become an advanced htop user, you can check the "Setup" menu (click it or press the "F2" or "S" key). You will see a menu for configuring the output of htop, enabling you to switch the display of certain information on and off:

htop5.png

Of course you can also sort the process list (click "Sort" or press "F6"), which gives you a list of possible sort parameters:

htop6.png

Besides this, you can switch to a process tree display and sort it by pressing one of the keys shown below:

htop7.png

So let me give you a last nice gimmick and then end for today: you can attach "strace" to a running process by marking the process and typing "s". If you don't know what strace is, don't bother; if you do, you will probably like this feature pretty much.

I hope you got the idea of htop: a really neat, full featured console top replacement that is even worth using when running X, as it supports mouse usage and brings everything you need while still having a small footprint. If you have alternatives you would like to mention, feel free to drop them as a comment.

my package of the day – mtr as a powerful and default alternative to traceroute

Know the situation? Something is wrong with the network, or you are just curious and want to run a "traceroute". At least on most Debian based systems your first session will probably look like this:

$ traceroute www.ubuntu.com
command not found: traceroute

Maybe on Ubuntu you will at least be hinted to install "traceroute" or "traceroute-nanog"… To be honest, I really hate this lack of a basic tool and cannot even remember how often I typed "aptitude install traceroute" afterwards (fingers crossed that your network is up and running).

But sometimes you just need to dig a bit deeper, and this time the surprise was really big as the incredible Mnemonikk told me about an alternative that is installed by default in Ubuntu and nearly no one knows about: "mtr", which is an abbreviation for "my traceroute".

Let's just check it by calling "mtr www.ubuntu.com" (I slightly changed the output for security reasons):

                 My traceroute  [v0.72]
ccm        (0.0.0.0)          Wed Jun 20 6:51:20 2008
Keys:  Help   Display mode   Restart statistics   Order
of fields      Packets               Pings
 Host        Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 1.2.3.4   0.0%   331    0.3   0.3   0.3   0.5   0.0
 2. 2.3.4.5   0.0%   331   15.6  16.3  14.9  42.6   2.6
 3. 3.4.5.6   0.0%   330   15.0  15.5  14.4  58.5   2.7
 4. 4.5.6.7   0.0%   330   17.5  17.3  15.4  60.5   5.3
 5. 5.6.7.8   0.0%   330   15.7  24.3  15.6 212.3  30.2
 6. ae-32-52 58.8%   330   20.6  22.1  15.9  42.5   4.7
 7. ae-2.ebr 54.1%   330   20.6  25.0  19.0  45.4   4.7
 8. ae-1-100  0.0%   330   21.5  25.4  19.2  41.1   5.1
 9. ae-2.ebr  0.0%   330   27.5  34.0  26.7  73.5   5.2
10. ae-1-100  0.3%   330   28.8  33.6  26.7  72.5   6.0
11. ae-2.ebr  0.0%   330   30.8  32.9  26.7  48.5   5.0
12. ae-26-52  0.0%   330   27.6  34.8  26.9 226.8  26.8
13. 195.50.1  0.3%   330   27.7  28.4  27.2  42.5   1.7
14. gw0-0-gr  0.0%   330   27.9  28.1  27.0  40.5   1.4
15. avocado.  0.0%   330   27.8  28.0  27.2  36.2   1.0

You might notice that the output is quite well formed ("mtr" uses curses for this). The interesting point is: instead of running once, mtr continuously updates the output and statistics, providing you with a neat network overview. So you can use it as an enhanced ping showing all hops between you and the target.

For the sake of completeness: the package installed by default in Ubuntu is actually called "mtr-tiny", as it lacks a graphical user interface. If you prefer a gui, you can replace the package with "mtr" by running "aptitude install mtr". When running "mtr" from the console afterwards you will be presented with a gtk interface. In case you still want text mode, just append "--curses" as a parameter.
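mtr also has a non-interactive report mode (see the mtr manpage), which is handy for scripts or for pasting results into a mail; the cycle count here is an arbitrary choice:

```shell
#!/bin/bash
# run 10 probe cycles, print one summary table, then exit
MTR_CMD="mtr --report --report-cycles 10 www.ubuntu.com"
if command -v mtr >/dev/null 2>&1; then
  $MTR_CMD || echo "mtr run failed (no network?)"
else
  echo "mtr is not installed, would run: $MTR_CMD"
fi
```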

Yes, that was a quick package, but if you keep it in mind, you will save the time you normally spend installing "traceroute", and you'll definitely have better results for network diagnosis. Happy mtr'ing!

[update]

sherman noted that the reason for traceroute not being installed is that it's just deprecated and "tracepath" should be used instead. Thank you for the hint, though I'd prefer "mtr" as it's much more reliable and verbose.

my package of the day: proggyfonts – tiny fonts for programmers and console users

(Well, it is not yet a package, but trust me: I’ll make sure it gets one.)

As a programmer or console user you might know the pain of having fewer characters on your screen than you would like. You tried around with different fonts; it got better by reducing the font size, but it is not yet perfect. If I tell you that you just have the wrong fonts, you'll probably moan "… I tried all installed fonts". And you are right about that: the fonts I am going to tell you about are definitely not preinstalled.

I ran into the font trouble a couple of years ago. As my eyes are quite good, I yearned for a really tiny font to flood my brain with as much content as possible at the same time. After a while I started a research on the web and found a page that already sounds like a perfect hit: proggyfonts.com. The site hosts 24 monospaced bitmap programming fonts (licensed under a free BSD-type personal license), enhanced for a small screen footprint and for issues that programmers often run into, like distinguishing 0 (zero) from O (capital letter "o").

Font comparison

The font I use is called "ProggyFont Tiny Slashed Zero", which stands for: a really tiny font with a clearly slashed zero. To compare it to a "normal" font, let's see it in action. Here you can see a default installed Monospace font which has been set to a small font size:

bildschirmfoto-mc-hasung-mnt-cryptdevice-live-home-ccm.png

Concentrate on the characters you see above: they blur a bit. It's not a big deal, but if you are working with it for hours it becomes one. Now let's compare the same screen with ProggyFont Tiny Slashed Zero:

bildschirmfoto-mc-hasung-mnt-cryptdevice-live-home-ccm-1.png

See how clear the characters are? It even got smaller – you could fit one or two more lines within the same space if you resized the window to match the previous one. What a relief!

Even more fonts

Now the example given is the most aggressive one, as it is really small. You might consider other fonts helpful. Let me give you another example of a font: Proggy Clean (better to read as it is bigger) Slashed Zero Bold Punc – see for yourself:

font.png

What have they done? They assume that when you are a programmer you like characters like brackets, colons and so on being bold, as they mean something in the code. Often you have to deal with interfaces that don't mark those characters – now the font does this for you. Nice, isn't it? Even cat and less now show you bold coding elements without being configured to do so.

Installation

The site hosts the fonts in different formats. As I am lazy and TTF is supported, I only use the TTF fonts. To enroll a font in Gnome you have two ways, depending on your Gnome version. First download a font package and unzip it, so you have a file named fontname.ttf. To speak in Ubuntu versions: if you are running Ubuntu Gutsy or below, open Nautilus, go to "fonts:///", drag and drop the ttf file into it and just restart your X session. If you have Hardy, create a directory called ".fonts" in your home directory and copy the ttf file into it. Restart X afterwards (though not all applications depend on this).
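The Hardy way as a console sketch (the ttf file name is hypothetical; the fc-cache step is my addition to avoid a full X restart, assuming fontconfig is installed):

```shell
#!/bin/bash
# per-user font directory, picked up by fontconfig
mkdir -p ~/.fonts
# copy the unzipped font file into it (name is an example)
[ -f ProggyTinySZ.ttf ] && cp ProggyTinySZ.ttf ~/.fonts/ || true
# rebuild the font cache so applications can find the font sooner
command -v fc-cache >/dev/null 2>&1 && fc-cache -f ~/.fonts || true
```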

Now open the application you want to enhance with your shiny new font. Let's say it's gnome-terminal. You should be able to choose a font named ProggySomething. Now you have to choose a font size, and that is the only tricky thing to do: you have to find out the only possible font size. This setting might differ from application to application. In gnome-terminal it is "11" for instance, which seems huge, but in fact is not. Just try it out. Under KDE or even Windows/OSX you'll find out quickly how to enroll the fonts. It does work, you just have to try.

So now you have a new set of fonts ready to boost your productivity. Make sure you don't get a headache when using them and don't crash your brain with an information overflow. I'll report back when I have packaged those fonts for simple usage in Debian/Ubuntu.

my package of the day: file – classify (unknown) files and mime-types on the console

You know this? Somebody just sent you a mail with attachments that don't have usable file extensions, so you don't really know how to handle them. Audio file? PDF? What is it? The same problem might occur after a file recovery, on web pages with upload features, or just when you are really under time pressure and have no time for messing around with file type guessing.

While you can try to give the file an extension and open it with a software you think might be suitable, the more sophisticated way is to let your computer find out what it is all about. As a GNU/Linux user you probably already think "there is surely a command line tool for this". Of course there is: the package "file", which often gets installed automatically through dependencies – otherwise just an "aptitude install file" will help you out.

"file" depends on "libmagic", which provides patterns for the so called "magic number" detection. You don't have to know what that is, but if you want to, see this Wikipedia article for reference. So all you have to know is how to handle the file command, and actually there is not much to learn. Let's assume we have the following directory with unknown files:

file1.png

Now we want to know what's inside those black boxes, so we just call "file *" on the console:

file2.png

Hey, that's all. Pretty impressive, isn't it? "file" not only differentiates binary from text files, it even tries to guess what programming language a text file is written in. And the magic is not that much magic: in the case of the zsh file it just sees a shebang pointing to the zsh in the first line of the file, a PDF file typically starts with "%PDF", and so on. It's all about patterns.

"file" provides you with some command line options that make its usage even more helpful. The most interesting is "-i", as it prints out mime types instead of verbose file types. If you are a web developer and want to know the exact mime type for a file download, this can save you a lot of time:
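For example, creating a sample file on the spot (the exact charset in the output may vary with your libmagic version):

```shell
# create a small html file and ask for its mime type
echo '<html><body>hello</body></html>' > /tmp/sample.html
file -i /tmp/sample.html
```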

file3.png

Great, isn’t it? The Apache webserver also uses libmagic for this purpose. With “file” you just use a wrapper for the same task.

That’s all about “file” for today. Happy file detection – and feel free to report back.

my package of the day: listadmin – moderate mailman mailing lists from the console

Are you involved in moderating Mailman mailing lists? Then maybe you know the pain: as you try to stop spammers flooding your list, you hold messages from unknown senders back for review. Or you have a moderated mailing list that only allows postings that are explicitly approved. However: in most cases you get mails from Mailman telling you that there are messages you have to moderate. The common way is to enter the web interface, enter your password, read the messages and discard/reject/allow them.

This workflow is easy but it can really get on your nerves as the web stuff is somehow time consuming. Therefore from time to time you get lazy on moderating…

Well, there is light at the end of the tunnel, and one cannot repeat this hint often enough: the package "listadmin" provides a powerful console tool for moderating mailman mailing lists. As a Debian/Ubuntu user you just have to install it via "aptitude install listadmin" on the console or via Synaptic. Then you just have to write an .ini file with the configuration (admin url, credentials). The file looks like this:

adminurl https://hostname.tld/mailman/admindb/{list}
default skip
log ~/.listadmin.log
password secret
mailinglist1@hostname.tld
mailinglist2@hostname.tld

So we just give an url with a placeholder named “{list}”. This way we can moderate multiple lists with fewer lines of configuration. Now let’s see how listadmin behaves when checking existing mailing lists (in this case our Berlin based Ubuntu mailing lists):

bildschirmfoto-damokleslilith-listadmin.png

Nothing to do – no messages to moderate in this case. But hey – we just got an incoming request. Let’s rerun listadmin and check:

bildschirmfoto-damokleslilith-listadmin-1.png

A spammer tried to hit our list. We can now decide whether to approve, reject or discard the message. If it's spam, you want to discard it, as this just deletes the message. When you want to provide feedback to the sender, you have to reject it and are able to enter a reason. Of course you can also examine the full body of the message, or just skip it and keep it for the next session. In our case "d" was entered to delete the spam and the request was submitted. If you are fast, the session will not take more than 10 seconds – try that with the web interface!

So despite its age and all the ajax web 2.0 shiny wysiwyg plinkplonk alternatives, Mailman provides you with nice wrappers for moderating larger amounts of mails within seconds. If you stick around in a community, you will probably sooner or later be asked to moderate a mailing list. Now you can say: "No problem, I have a command line tool for this".

my package of the day: fish – the friendly interactive shell

Always wanted to learn using a shell more deeply? Maybe “fish“, the “friendly interactive shell” is the right kickoff for you.

If you are already a heavy command line user with a customized .bashrc or even .zshrc (like me), you probably don't need another shell. But if this shell thingy is somehow a miracle to you, while you saw people using it like wizards, with colorful commands and a typing speed that made you jealous – then it could help to start with a shell that concentrates on being very friendly to new users, as common shells like Bash and ZSH expect you to read the manual and write a config file (there are aids and defaults that vary from distribution to distribution).

The standard login shell in Ubuntu/Debian is "Bash". Ubuntu already ships the file /etc/bash_completion, which is read by default and helps users use the TAB key more extensively. Try it in your bash shell: just type something like "ls --" and press TAB twice. You'll see a list of options that "ls" provides. Nice, but it could be nicer. Let's compare this to fish. Install fish by using Synaptic or "aptitude install fish", open a terminal and start the shell by typing "fish". You should see a changed green prompt. Now type "ls -" and press TAB.

Stop: already while typing you should see a strange color change. When entering "l", the character turns red and underlined. Looks like an error? Well, it is: fish tells you that "l" alone is probably not a command – an aid during typing, before running a command. Neat. Now, when pressing TAB, you should see a very clean list of options for "ls" with a short description of each option:

fish11.png

Helpful, isn't it? Of course this is not limited to ls. Try it with other commands you are using. If you ask yourself why you have to type "command --" and press TAB: "--" introduces a command line option ("-" does this too – try it!). As you press TAB after this, the shell knows "the user wants to do something and needs help completing it". It looks for a pattern and sees that you want to use the given command and are looking for options. That's all. As I said: this often works in Bash by default too, but not as nicely.

Now fish can do more with completion of course. Want to install a program? Try “aptitude install mut” and press TAB. It will show you a list of packages matching that pattern:

fish2.png

Need to kill a process? Type “kill ” and press TAB and you will get a nice list of running processes:

fish3.png

The list of possible TAB completions in fish is endless. Just notice that emphasis has been put on commands like mount, make, su, ssh, apt-get/aptitude. For most commands, usernames and process ids will automatically be completed. The trick is just to try TAB whenever you are too lazy to type or unsure how to proceed. A good shell surprises you from time to time with its completion.

Also very helpful is the extended pattern matching for file names. Let's say you want a list of all pdf files in a directory and all of its subdirectories. In bash you probably use something like "find . -name '*.pdf'". In fish you use the pattern "**", which means any files and directories in the current directory and all of its subdirectories. So type "ls **.pdf" and you get the list you want, as fish crawls through the directories for you. Want all .mp3 and .mp4 files but not files like .mpeg? Use "ls **.mp?", as "?" stands for a single character. Of course commands like "rm **.bak" are possible, too – use them with care! In the following example we are looking for pdf files in all subdirectories, delete them and afterwards make sure they are really gone:

bildschirmfoto-fish-mnt-cryptdevice-live-home-ccm-work-1.png
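For comparison, the classic way of doing the same cleanup with find (plain bash has no "**" by default, though bash 4 offers a similar globstar option); the demo directory is made up:

```shell
#!/bin/bash
# set up a small demo tree with pdf files in a subdirectory
mkdir -p /tmp/globdemo/sub
touch /tmp/globdemo/a.pdf /tmp/globdemo/sub/b.pdf
# list, delete, and verify they are gone - the find equivalent of
# fish's "ls **.pdf; rm **.pdf; ls **.pdf"
find /tmp/globdemo -name '*.pdf'
find /tmp/globdemo -name '*.pdf' -delete
find /tmp/globdemo -name '*.pdf'
```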

So let me stop here. I hope I was able to show you that using fish instead of an unconfigured shell is a nice way of getting into the command line business. Fish provides you with a lot more features that you might need and saves you from writing a config file from scratch.

If you want to give fish a try: install it and run the "help" command. It will launch a nice help page in your browser. Read some parts of the document, as it'll show you nice gimmicks. Or just don't, and start right away. But trust me: reading hints for a shell from time to time will save you … time.

(Just in case you don't know: you can change your standard shell by using the "chsh" command. But as a novice it is always a good idea to stick to the distribution's default shell and run your new shell directly by calling it. When you are more used to it, feel free to make it your standard shell…)