Recovering Linux file permissions

I recently ran into a server where somebody had accidentally issued a „chown -R www-data:www-data /var“. So all files and directories within /var were chowned to www-data, which actually means a complete system fuckup, as everything from logging over mail and caching to databases relies on a correct setup there. Sadly this was a remote production server, so I had to find a quick solution to get at least a state good enough for the next days.

I started poking around for a possibility to reset file permissions based on .deb package details. There are at least some approaches to do this (the method there misses a pre-download of all installed .deb packages), and I remember running a program years ago that checked file permissions based on .deb files – I just did not find it via apt-get. Nonetheless this approach lacks the possibility of handling files created by applications. Files in /var/log for instance don’t have to be declared in a .deb file but urgently need the right file permissions.

So I came to a different approach: cloning permissions. By chance we had a quite similar server running, meaning the same Linux distribution and nearly the same services installed. I wrote a one-liner to save the file permissions on the healthy server:

$ find /var -printf "%p;%u;%g;%m\n" > permissions.txt

The command writes a text file with the following format:

dir/filename;user;group;mode

Please note: I started using „:“ as a separator but noticed that at least some Perl related files have a double colon in their name.
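An example line from the generated file might look like this (owner, group and mode of course differ from system to system):

/var/log/syslog;syslog;adm;640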

Now I only needed a simple shell script that sets the file permissions on the broken server based on the text file we just generated. It came down to this:

#!/bin/bash

while IFS=";" read -r FILE USER GROUP MODE
do
	chown "${USER}:${GROUP}" "${FILE}"
	chmod "${MODE}" "${FILE}"
done < permissions.txt

The script reads every line of the text file, splits its content into variables and sets the user and group via „chown“ as well as the mode via „chmod“. It doesn’t check if a directory/file exists before chowning/chmodding it, as it actually doesn’t matter: if it’s not there, it just won’t do anything harmful.

After you’ve run this, it’s a good idea to restart all services and start watching log files. You have to take care of all services that rely on fast changing files in /var. For instance a mail daemon puts a lot of unique file names into /var/spool, and the script above won’t be able to take care of that. You have to double check database directories like /var/lib/mysql, hosted repositories and so on. But the script will provide you with a state where most services are at least running, and you get an idea of how to switch back the remaining directories. It might be helpful to search for suspicious files, like

$ find /var -user www-data
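For directories that are supposed to belong to a single service anyway, a blanket re-chown is often the quickest fix. A rough sketch for a standard Debian/Ubuntu MySQL setup (for trees with mixed ownership, like a mail spool, better compare against the healthy server instead):

# chown -R mysql:mysql /var/lib/mysql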

RubyGems 9.9.9 packaged – Fake install RubyGems on Debian/Ubuntu

For a lot of reasons I often rely on a mixture of a Debian/Ubuntu pre-packaged Ruby with a self-compiled RubyGems. It helps you in situations where you don’t care that much about the Ruby interpreter itself but need an up to date RubyGems. While this is easy to install, you might run into trouble when installing packages that depend on Ruby and RubyGems, namely packages like „rubygems“, „rubygems1.8“ and „rubygems1.9“.

After unsuccessfully playing around with dpkg for a while (you can put packages on „hold“, which prevents them from being installed automatically), I came to the conclusion that the best way is to install a fake package that is empty but satisfies the dependencies.

So, here it is: The shiny new RubyGems 9.9.9 which delivers rubygems, rubygems1.8 and rubygems1.9 right away. Just install it (e.g. with dpkg) and you’ll be able to install packages that rely on a rubygems package.

In case you want to play around with the package and customize it to your needs, e.g. only deliver rubygems1.8 or rubygems1.9, take the following steps:

1. Install equivs

$ sudo apt-get install equivs

2. create a control file

$ equivs-control rubygems

3. edit the control file

$ vim rubygems

You can compare the default settings in the control file with the output of e.g. „apt-cache show rubygems“. The crucial field is „Provides:“ where you can put a comma separated list of packages you want to fake install. Choose a high version for the „Version:“ field, as this marks the package as newer than the distribution’s own package and prevents the package manager from replacing it.

Section: universe/interpreters
Priority: optional
Homepage: http://www.screenage.de/blog/
Standards-Version: 3.6.2
 
Package: rubygems
Version: 9.9.9
Maintainer: Caspar Clemens Mierau <[email protected]>
Provides: rubygems1.8,rubygems1.9,rubygems
Architecture: all
Description: Fake RubyGems replacement
 This is a fake meta package satisfying rubygems dependencies.
 .
 This package can be used when you installed a packaged ruby but want
 to use rubygems from source and still rely on software that depends
 on ruby and rubygems

4. build the package

$ equivs-build rubygems
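equivs-build drops a ready-to-install .deb in the current directory. Install it with dpkg – the file name below assumes the example values from the control file above:

$ sudo dpkg -i rubygems_9.9.9_all.deb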

p.s.: You can also use equivs for easily building meta packages containing a list of packages you want to install in one go, e.g. for semi automated server bootstrapping.

My (unofficial) package of the day: 3ware-cli and 3dms for monitoring 3ware raid controllers

Having a real hardware raid controller is a nice thing: Especially in a server setup it helps you keep data safe on multiple disks. Though, a common mistake is having a raid controller and not monitoring it. Why? Let’s say you have a simple RAID-1 array (one disk mirrored to another) and one of the disks fails. If your raid system works, it will continue to work. But if you did not set up monitoring for it, you won’t notice it, and the chance of a total data loss increases as you are now running on one disk.

So monitoring a raid is actually the step that makes your raid system as safe as you wanted it to be when setting it up. Some raids are quite easy to monitor, like a Linux software raid. Some need special software. As I recently got a bunch of dedicated (Hetzner DS8000 and other) servers with 3ware raid controllers, I checked the common software repositories for monitoring software and was surprised not to find anything suitable. A web research showed me that there are Linux tools from 3ware. Of course they don’t provide .deb packages, so you need to take care of this yourself if you don’t want to install the software manually.

But there exists an unofficial Debian repository by Jonas Genannt (thank you!), providing recent packages of the 3ware utilities under http://jonas.genannt.name/. Check the repository, it offers 3ware-3dms and 3ware-cli. 3ware-3dms is a web application for managing your raid controller via browser, BUT: think twice if you want this. The application opens a privileged port (888), as it is not able to bind to the local interface only, and has a crappy user identification system. As I am not a friend of opening ports and closing them afterwards via firewall, I dropped the web solution.

The „3ware-cli“ utility is just a command line interface to 3ware controllers. Just grab a .deb from the repository above and install it via „dpkg -i xxx.deb“. Afterwards you can start asking your controller questions about its status. The command is called „tw_cli“, so let’s give it a try with „info“ as parameter:

# tw_cli info
Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU
------------------------------------------------------------------------
c0    8006-2LP     2         2        1       0       2       -      -

tw_cli told us that there is one controller (meaning a real piece of raid hardware) called „c0“ with two drives. Now we want more detailed information about the given controller:

# tw_cli info c0
 
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       232.885   ON     -      
 
Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     232.88 GB   488397168     6RYBP4R9
p1     OK               u0     232.88 GB   488397168     6RYBSHJC

tw_cli reports that controller c0 has one unit „u0“. A unit is the device that your operating system is working with – the „virtual“ raid drive provided by the raid controller. There are two ports/drives in this unit, called „p0“ and „p1“. Both of them have „OK“ as their status, meaning that the drives are running fine.

You can also ask a drive directly by asking tw_cli for the port on the controller:

# tw_cli info c0 p0

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     232.88 GB   488397168     6RYBP4R9            

# tw_cli info c0 p1

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p1     OK               u0     232.88 GB   488397168     6RYBSHJC

So you might already have gotten the idea: As tw_cli is just a command line tool, your task for an automated setup is setting up a cronjob that regularly checks the status of the ports (not the unit! the ports – trust me) and sends a mail or Nagios alarm when necessary. I just started writing a little shell script which, right now, just returns an exit status – 0 for a working raid and 1 for a problem:

#!/bin/bash

UNIT=u0
CONTROLLER=c0
PORTS=( p0 p1 )

tw_check() {
  local regex=${1:-${UNIT}}
  local field=3
  if [ $# -gt 0 ]; then
    field=2
  fi
  local check=$(tw_cli info ${CONTROLLER} $1 \
    | awk "/^${regex}/ { print \$${field} }")
  [ "XOK" = "X${check}" ]
  return $?
}

tw_check || exit 1
for PORT in "${PORTS[@]}"; do
  tw_check ${PORT} || exit 1
done

As you see, you can configure unit, controller and ports. I have not checked this against systems with multiple controllers and units as I don’t have such a setup. But if you need to, you could just put the configuration stuff into a sourced configuration file.
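To actually get notified, you can wire the exit status to a mail sent from cron. A rough sketch, assuming the script was saved as /usr/local/bin/check_3ware.sh and a working local mail setup – path, schedule and recipient are just examples:

*/15 * * * * /usr/local/bin/check_3ware.sh || echo "3ware raid problem on $(hostname)" | mail -s "RAID alert" root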

After writing this little summary, I checked all the servers I am responsible for and noticed that nearly every server with a hardware raid has a 3ware controller and can be checked with tw_cli. Fine…

Let me know how you manage your 3ware raid monitoring under GNU/Linux and Debian/Ubuntu based systems.

my package of the day – htmldoc – for converting html to pdf on the fly

PDF creation has actually become fairly easy. OpenOffice.org, the CUPS printing system and KDE all provide methods for printing nearly everything to a PDF file right away – a feature that even outperforms most Windows setups today. But there are still PDF related tasks that are not that simple. One I often run into is automated PDF creation on a web server. Let’s say you write a web application and want to create PDF invoices on the fly.

There are, of course, PDF frameworks available. Let’s take PHP as an example: If you want to create a PDF from a PHP script, you can choose between FPDF, Dompdf, the sophisticated Zend Framework and more (plus commercial solutions). But to be honest, they are all either complicated to use (as you often have to learn a specific syntax) or just quite limited in their possibilities to create a PDF file (as you can only use a few design features). As I needed a simple solution for creating a 50+ page PDF file with a huge table on the fly, I tested most frameworks and failed with most of them (often just because I did not have enough time to write dozens of lines of code).

So I hoped to find a solution that allowed me to just convert a simple HTML file to a PDF file on the fly, providing better compatibility than Dompdf for instance. The solution was … uncommon. It was not a PHP class but a neat command line tool called „htmldoc“, available as a package. If you want to give it a try, just install it by calling „aptitude install htmldoc“.

You can test htmldoc by saving some html files to disk and calling „htmldoc --webpage filename.html“. There are a lot of interesting features like setting font size, font type, the footer, color and greyscale mode and so on. But let’s use htmldoc from PHP right away. The following very simple script uses the PHP output buffer to reduce the writing to disk to one temporary file (if somebody knows a way of using this without any temporary files from a script, let me know):

<?php
// start output buffer for pdf capture
 
ob_start();
?>
your normal html output will be placed here either by
dumping html directly or by using normal php code
<?php
// save output buffer
$html=ob_get_contents();
// delete Output-Buffer
ob_end_clean();
// write the html to a file
$filename = './tmp.html';
if (!$handle = fopen($filename, 'w')) {
	print "Could not open $filename";
	exit;
}
if (!fwrite($handle, $html)) {
	print "Could not write $filename";
	exit;
}
fclose($handle);
// htmldoc call
$passthru = 'htmldoc --quiet --gray --textfont helvetica \
--bodyfont helvetica --logoimage banner.png --headfootsize 10 \
--footer D/l --fontsize 9 --size 297x210mm -t pdf14 \
--webpage '.$filename;
 
// write output of htmldoc to clean output buffer
ob_start();
passthru($passthru);
$pdf=ob_get_contents();
ob_end_clean();
 
// deliver pdf file as download
header("Content-type: application/pdf");
header("Content-Disposition: attachment; filename=test.pdf");
header('Content-length: ' . strlen($pdf));
echo $pdf;

As you can see, this is neither rocket science nor magic, just a wrapper for htmldoc enabling you to forget about the PDF when writing the actual content of the html file. You’ll have to check how htmldoc handles your html code. You should make it as simple as possible, forget about advanced CSS or nested tables. But it’s actually enough for a really neat PDF file and it’s fast: The creation of 50+ page PDF files is fast enough in my case to make the on demand access of htmldoc feel like static file usage.
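For checking how htmldoc treats your markup, a quick round trip on the command line is usually enough. A minimal sketch, assuming a test.html in the current directory (the options match the script above, plus -f for writing to an output file):

$ htmldoc --quiet --gray --fontsize 9 -t pdf14 --webpage -f test.pdf test.html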

Please note: Calling external programs and command line tools from a web script is always a security issue and you should carefully check input and updates for the program you are using. The code provided should be easily portable to another web language/framework like Perl or Rails.

my package of the day: proggyfonts – tiny fonts for programmers and console users

(Well, it is not yet a package, but trust me: I’ll make sure it gets one.)

As a programmer or console user you might know the pain of not having as many characters on your screen as you would like. You tried around with different fonts, it got better by reducing the font size, but it is still not perfect. If I tell you that you just have the wrong fonts, you will probably moan „… I tried all installed fonts“. And you are right about that: The fonts I am going to tell you about are definitely not preinstalled.

I ran into the font trouble a couple of years ago. As my eyes are quite good, I yearned for a really tiny font to overflow my brain with as much content as possible at the same time. After a while I started a research on the web and found a page whose name already sounds like a perfect hit: proggyfonts.com. The site hosts 24 monospaced bitmap programming fonts (licensed under a free BSD-type personal license) optimized for a small screen footprint and for issues that programmers often run into, like distinguishing 0 (zero) from O (capital letter „o“).

Font comparison

The font I use is called „ProggyFont Tiny Slashed Zero“, which stands for: a really tiny font with a clearly slashed zero. To compare it to a „normal“ font, let’s see it in action. Here you can see a default installed Monospace font set to a small font size:

[Screenshot: the default Monospace font at a small size]

Concentrate on the characters you see above: They blur a bit. It’s not a big deal, but if you are working with it for hours, it becomes one. Now let’s compare the same screen with ProggyFont Tiny Slashed Zero:

[Screenshot: the same screen with ProggyFont Tiny Slashed Zero]

See how clear the characters are? It even got smaller – you could fit one or two more lines into the same space if you resized the window to match the previous one. What a relief!

Even more fonts

Now, the example given is the most aggressive one, as it is really small. You might find other fonts more helpful. Let me give you another example of a font: Proggy Clean (better to read as it is bigger) Slashed Zero Bold Punc – see for yourself:

[Screenshot: Proggy Clean Slashed Zero Bold Punc in action]

What have they done? They assume that when you are a programmer, you like characters like brackets, colons and so on being bold, as they mean something in the code. Often you have to deal with interfaces that don’t highlight those characters. Now the font does this for you. Nice, isn’t it? Now even cat and less show you bold coding elements without even being configured to do so.

Installation

The site hosts the fonts in different formats. As I am lazy and TTF is supported, I only use the TTF fonts. To install a font in Gnome you have two ways, depending on your Gnome version. First download a font package and unzip it, so you have a file named fontname.ttf. To speak in Ubuntu versions: If you are running Ubuntu Gutsy or below, open Nautilus, go to „fonts:///“, drag and drop the ttf file into it and just restart your X session. If you have Hardy, create a directory called „.fonts“ in your home directory and copy the ttf file into it. Restart X afterwards (though not all applications need this).
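On Hardy the whole procedure boils down to a few commands. A small sketch, assuming the unzipped .ttf file lies in the current directory (the actual file name depends on the font you picked):

$ mkdir -p ~/.fonts
$ cp ProggyTinySZ.ttf ~/.fonts/
$ fc-cache -f ~/.fonts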

Now open the application you want to enhance with your shiny new font. Let’s say it’s gnome-terminal. You should be able to choose a font named ProggySomething. Now you have to choose a font size, and that is the only tricky part: you have to find the one font size the bitmap font is designed for. This setting might differ from application to application. In gnome-terminal it is „11“ for instance, which seems huge, but in fact is not. Just try it out. Under KDE or even Windows/OS X you’ll quickly find out how to install the fonts. It does work, you just have to try.

So now you have a new set of fonts ready to boost your productivity. Make sure you don’t get a headache when using them and don’t crash your brain with an information overflow. I’ll report back when I have packaged those fonts for simple usage in Debian/Ubuntu.

my package of the day: file – classify (unknown) files and mime-types on the console

You know this? Somebody just sent you a mail with attachments that don’t have usable file extensions, so you don’t really know how to handle them. Audio file? PDF? What is it? The same problem might occur after a file recovery, on web pages with upload features or just when you are under time pressure and have no time for messing around with file type guessing.

While you can try to give the file an extension and open it with a software you think might be suitable, the more sophisticated way is to let your computer find out what it is all about. As a GNU/Linux user you probably already think „There is surely a command line tool for this“. Of course there is: The package „file“, which often gets installed automatically as a dependency, or simply via „aptitude install file“, will help you out.

„file“ depends on „libmagic“, which provides patterns for the so called „magic number“ detection. You don’t have to know what that is, but if you want to, see this Wikipedia article for reference. All you have to know is how to handle the file command, and actually there is not much to learn. Let’s assume we have the following directory with unknown files:

[Screenshot: a directory listing of files without file extensions]

Now we want to know what’s inside those black boxes. Therefore we just call „file *“ on the console:

[Screenshot: output of „file *“]

Hey, that’s all. Pretty impressive, isn’t it? „file“ not only distinguishes binary from text files, it even tries to guess what programming language a text file is written in. And the magic is not that much magic: In case of the zsh file it just sees a shebang pointing to zsh in the first line of the file, a PDF file typically starts with „%PDF“ and so on. It’s all about patterns.

„file“ provides you with some command line options that make its usage even more helpful. The most interesting one is „-i“, as it prints out mime types instead of verbose file types. If you are a web developer and want to know the exact mime type for a file download, this can save you a lot of time:

[Screenshot: output of „file -i“]
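To give you a rough idea without a screenshot, a run over a few hypothetical files prints something along these lines (the exact wording differs between libmagic versions):

$ file *
backup:   gzip compressed data, from Unix
invoice:  PDF document, version 1.4
setup:    Bourne-Again shell script text executable
$ file -i invoice
invoice: application/pdf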

Great, isn’t it? The Apache webserver does something very similar with its mod_mime_magic module. With „file“ you just use a command line wrapper for the same task.

That’s all about „file“ for today. Happy file detection – and feel free to report back.

my package of the day: listadmin – moderate mailman mailing lists from the console

Are you involved in moderating Mailman mailing lists? Then maybe you know the pain: As you try to stop spammers from flooding your list, you hold back messages from unknown senders for review. Or you have a moderated mailing list that only allows explicitly approved postings. Either way: in most cases you get mails from Mailman telling you that there are messages you have to moderate. The common way is to enter the web interface, enter your password, read the messages and discard/reject/approve them.

This workflow is easy, but it can really get on your nerves as the web stuff is somewhat time consuming. Therefore from time to time you get lazy about moderating…

Well, there is light at the end of the tunnel, and one cannot repeat this hint often enough: The package „listadmin“ provides a powerful console tool for moderating Mailman mailing lists. As a Debian/Ubuntu user you just have to install it via „aptitude install listadmin“ on the console or via Synaptic. Then you just have to write an .ini file with the configuration (admin url, credentials). The file looks like this:

adminurl https://hostname.tld/mailman/admindb/{list}
default skip
log ~/.listadmin.log
password secret
[email protected]
[email protected]

So we just give a URL with a placeholder named „{list}“. This way we can moderate multiple lists with fewer lines of configuration. Now let’s see how listadmin behaves when checking existing mailing lists (in this case our Berlin based Ubuntu mailing lists):

[Screenshot: listadmin run with nothing to moderate]

Nothing to do – no messages to moderate in this case. But hey – we just got an incoming request. Let’s rerun listadmin and check:

[Screenshot: listadmin showing a held message]

A spammer tried to hit our list. We can now decide whether to approve, reject or discard the message. If it’s spam, you want to discard it, as this just deletes the message. When you want to provide feedback to the sender, you have to reject instead and are able to enter a reason. Of course you can also examine the full body of the message or just skip it and keep it for the next session. In our case „d“ was entered to delete the spam and the request was submitted. If you are fast, the session will not take more than 10 seconds – try this with the web interface!

So despite its age and all the shiny Ajax Web 2.0 WYSIWYG plinkplonk alternatives, Mailman provides you with nice wrappers for moderating larger amounts of mails within seconds. If you stick around in a community, you will probably sooner or later be asked to moderate a mailing list. Now you can say: „No problem. I have a command line tool for this“.

my package of the day: fish – the friendly interactive shell

Always wanted to learn to use a shell more deeply? Maybe „fish“, the „friendly interactive shell“, is the right kickoff for you.

If you are already a heavy command line user with a customized .bashrc or even .zshrc (like me), then you probably don’t need another shell. But if this shell thing is still somewhat of a mystery to you, yet you have seen people using it like wizards, with colorful commands and a typing speed that made you jealous, then it could help to start with a shell that concentrates on being very friendly to new users. Common shells like Bash and zsh expect you to read the manual and write a config file (there are aids and defaults that vary from distribution to distribution).

The standard login shell in Ubuntu/Debian is „Bash“. Ubuntu already ships the file /etc/bash_completion, which is read by default and helps users use the TAB key more extensively. Try it in your bash shell: just type something like „ls --“ and press TAB twice. You’ll see a list of options that „ls“ provides. Nice, but it could be nicer. Let’s compare this to fish. Install fish using Synaptic or „aptitude install fish“, open a terminal and start the shell by typing „fish“. You should see a changed green prompt. Now type „ls -“ and press TAB.

Stop: Already while typing you should see a strange color change. When entering „l“, the character turns red and underlined. Looks like an error? Well, it is: fish tells you that „l“ alone is probably not a command – an aid during typing, before you even run the command. Neat. Now, when pressing TAB, you should see a very clean list of options for „ls“ with a short description of each option:

[Screenshot: fish completing „ls -“ with option descriptions]

Helpful, isn’t it? Of course this is not limited to ls. Try it with other commands you are using. If you ask yourself why you have to type „command --“ and press TAB: „--“ introduces a command line option („-“ does this also – try it!). As you press TAB after this, the shell knows „the user wants to do something and needs help completing it“. It looks for a pattern and sees that you want to use the given command and are looking for options. That’s all. As I said: This often works in Bash by default as well, just not as nicely.

Now fish can do more with completion of course. Want to install a program? Try „aptitude install mut“ and press TAB. It will show you a list of packages matching that pattern:

[Screenshot: fish completing package names for „aptitude install mut“]

Need to kill a process? Type „kill “ and press TAB and you will get a nice list of running processes:

[Screenshot: fish completing process ids for „kill“]

The list of possible TAB completions in fish is endless. Just notice that emphasis has been put on commands like mount, make, su, ssh and apt-get/aptitude. For most commands, usernames and process ids will automatically be completed. The trick is just to try TAB whenever you are too lazy to type or unsure how to proceed. A good shell surprises you from time to time with its completion.

Also very helpful is the extended pattern matching for file names. Let’s say you want a list of all pdf files in a directory and all of its subdirectories. In bash you probably use something like „find . -name '*.pdf'“. In fish you use the pattern „**“, which means any files and directories in the current directory and all of its subdirectories. So type „ls **.pdf“ and you get the list you want, as fish crawls through the directories for you. Want all .mp3 and .mp4 files but not files like .mpeg? Use „ls **.mp?“, as „?“ stands for exactly one character. Of course commands like „rm **.bak“ are possible, too – use them with care! In the following example we are looking for pdf files in all subdirectories, delete them and afterwards make sure they are really gone:

[Screenshot: finding and deleting pdf files recursively with „**.pdf“]

So let me stop here. I hope I was able to show you that using fish instead of an unconfigured shell is a nice way of getting into the command line business. fish provides you with a lot more features that you might need and saves you from writing a config file from scratch.

If you want to give fish a try: Install it and run the „help“ command. It will launch a nice help page in your browser. Read some parts of the document, as they’ll show you nice gimmicks. Or just don’t, and start right away. But trust me: Reading hints for a shell from time to time will save you … time.

(Just in case you don’t know: You can change your standard shell by using the „chsh“ command. But as a novice it is always a good idea to stick to the distribution’s default shell and run your new shell directly by calling it. When you are more used to it, feel free to make it your standard shell…)
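If you later decide to make the switch permanent, this is the rough idea – the fish path may differ, so check it with „which fish“ first:

$ chsh -s /usr/bin/fish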

my package of the day: weather-util (weather report and forecast for the console)

Let me introduce you today to a tool that a lot of people might consider useless: Jeremy Stanley’s weather-util. With this tiny Python script, which finally found its way into the Debian Etch and Ubuntu repositories, you can retrieve weather information from weather stations worldwide directly from the command line.

After installing it by running „aptitude install weather-util“ or via Synaptic, call „weather“:

$ weather
Current conditions at Raleigh-Durham International Airport (KRDU)
Last updated Jun 04, 2008 - 01:51 AM EDT / 2008.06.04 0551 UTC
   Wind: from the S (180 degrees) at 10 MPH (9 KT)
   Sky conditions: mostly cloudy
   Temperature: 72.0 F (22.2 C)
   Relative Humidity: 73%

Pretty impressive, isn’t it? weather just makes an http call to a weather server for a preset station (where the heck is Raleigh-Durham International Airport?) and returns the current weather information. Of course you can also retrieve the forecast for the next days by running „weather -f“:

$ weather -f
Current conditions at Raleigh-Durham International Airport (KRDU)
Last updated Jun 04, 2008 - 01:51 AM EDT / 2008.06.04 0551 UTC
   Wind: from the S (180 degrees) at 10 MPH (9 KT)
   Sky conditions: mostly cloudy
   Temperature: 72.0 F (22.2 C)
   Relative Humidity: 73%
City Forecast for Raleigh Durham, NC
Issued Wednesday morning - Jun 4, 2008
   Wednesday... Partly cloudy, high 67, 20% chance of precipitation.
   Wednesday night... Low 96, 20% chance of precipitation.
   Thursday... Partly cloudy, high 71, 10% chance of precipitation.
   Thursday night... Low 97.
   Friday... High 72.

Sadly the forecast only displays Fahrenheit, but that way there is enough room left for patching the package :)

Retrieving local weather information

Now, of course, we are interested in the weather in our own area. The easiest way is getting the ID of a nearby weather station. Just go to http://weather.noaa.gov/ and choose your country/city/station by using the drop down menus for US and international stations. When you have found a station close to your point of interest, you can see a four letter id in round brackets. See the example above – the airport has KRDU. I am using EDDI most of the time, which is Berlin Tempelhof – an airport in the city center of Berlin.

So you are ready to politely ask for the weather again by giving the id with „weather --id=ID“, in my case „--id=EDDI“ (note: you can also make it short with „-iEDDI“):

$ weather --id=EDDI
Current conditions at Germany (EDDI) 52-28N 013-24E 49M (EDDI)
Last updated Jun 04, 2008 - 01:50 AM EDT / 2008.06.04 0550 UTC
   Wind: from the E (080 degrees) at 13 MPH (11 KT)
   Temperature: 62 F (17 C)
   Relative Humidity: 59%

Please note: Not all weather stations support forecasts (-f); some just return a 404 http error. You simply have to try. You can also switch on „verbose“ mode (-v), which gives you even more details.

Weather on the command line without weather-util?

Works like a charm, doesn’t it? For the curious people around who want to understand where weather-util pulls the information from: See

http://weather.noaa.gov/pub/data/observations/metar/stations/

for reference. Just text files on a web server, regularly updated. Click around and go to their parent dir – you’ll find even more interesting information. So getting the weather without weather-util should not be a big deal.
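The raw reports seem to be plain files named after the station id, so fetching one yourself is a one-liner – a small sketch using EDDI as above (the file name is an assumption based on the directory layout):

$ wget http://weather.noaa.gov/pub/data/observations/metar/stations/EDDI.TXT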

Screen integration

Now for the console lovers: You are using screen with a pimped status bar, aren’t you? And in your wildest dreams you imagined the status bar showing the weather report, so you don’t even have to look out of the window, because as a console guy you don’t even like your real „window“? No problem anymore, using screen’s backticks and weather-util.

As I noticed that weather-util runs into trouble from time to time when it is not able to send its http request, I decided on an indirect weather pull: a cronjob writes the information I need to a flat file. We just call weather-util and use awk to grab the snippet we need. I am interested in the temperature in Celsius. weather-util shows this line:

Temperature: 62 F (17 C)

So I use the following very quick and very dirty awk to get the „17“ out:

$ weather -iEDDI | awk '/Temperature/ {print $4}' | \
awk -F "(" '{print $2}'

Feel free to brush this up and report back. I am sure you can improve it to use only one awk call instead of two.

You save this line to a shell script that is scheduled to run every five minutes and redirect its output via „>“ to a flat txt file, as shown in the sketch below. Within your .screenrc you then read this file and display its contents in your status bar.
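A minimal sketch of the cron part – the script name and schedule are just examples, the output path matches the one used in the .screenrc below:

#!/bin/bash
# /usr/local/bin/weather-temp.sh - write the current temperature to a flat file
weather -iEDDI | awk '/Temperature/ {print $4}' | \
awk -F "(" '{print $2}' > /path/to/weather-text-file.txt

And the matching crontab entry:

*/5 * * * * /usr/local/bin/weather-temp.sh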
~/.screenrc:

startup_message off
defscrollback 1024
hardstatus on
hardstatus alwayslastline
backtick 1 0 300 cat /path/to/weather-text-file.txt

# remove line breaks made with "\" on the following lines
caption always "%{+b rk}$USER@%{wk}%H | %{yk}(Last: %l) %{gk} \
Weather: %1`C  %-21=%{wk}%D %d.%m.%Y %0c"
hardstatus alwayslastline "%?%-Lw%?%{wb}%n*%f %t%?(%u)\
%?%{kw}%?%+Lw%? %{wk}"

Make sure that you have the file /path/to/weather-text-file.txt with the temperature in it. Now run screen and enjoy your shiny new status bar. See the green area in the screenshot below:
[Screenshot: screen status bar showing the current temperature]

So that’s all for now. You should be able to play around with weather-util and screen to get the information you need (or let’s say „want“ :).

[update]

The incredible mnemonikk replaced my awk | awk with a single sed call within seconds:

$ weather -iEDDI | sed -n 's/.*Temperature:.*(\(.*\))/\1/p'

Thank you!

good howto: Bash Pitfalls

There is a very nice collection of common Bash scripting pitfalls and hints on how to avoid them on Greg’s Wiki: Bash Pitfalls

If you write little Bash scripts from time to time or are even a heavy Bash scripter, give it a try – it helps you avoid errors that might work perfectly under normal circumstances but suddenly go wrong… A good guide, especially for writing bullet-proof server scripts.
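To give you an idea of the kind of traps covered there, one of the classic pitfalls on that list is looping over ls output – a short sketch of the broken and the fixed variant:

for f in $(ls *.mp3); do rm "$f"; done   # breaks on file names containing spaces
for f in *.mp3; do rm "$f"; done         # let the shell glob directly instead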