Ubuntu developers visiting Ubuntu Berlin and c-base – plus interview with Mark Shuttleworth

A couple of months ago I started annoying people by telling them that I’d like to show the Ubuntu Berlin community and c-base to Mark Shuttleworth, as he is personally interested in community on the one hand and Ubuntu Berlin is a great example on the other. So this is rather about inviting an important member of the community than celebrating a „meet and greet“. Of course telling people plans like this makes them smile, but when you raise your finger and say „It will happen“ with certainty, they get uncertain. So the plan was actually to invite Mark to one of our great traditional release parties, which you shouldn’t miss when you are around Berlin at release time.

By chance the Ubuntu Developer Sprint for the upcoming Jaunty release happened to take place in Berlin this week. If I got it right, the Canonical Ubuntu developers meet for five days, around two weeks before a release’s feature freeze, and work in groups on issues that need to be decided/designed or just fixed immediately. The incredible Daniel Holbach had the idea of inviting the bunch of developers right into the c-base after their work. So he did, and we scheduled it for an evening when the Ubuntu Berlin crew also meets at c-base for their monthly jour fixe.

We did not announce this meeting externally as we tried to make the whole evening as comfortable as possible for everyone. And we did, I think. Right on time at 19:30 this Wednesday evening, about thirty Ubuntu developers entered the c-base. Among them Mark, who seemed to like the whole c-base hackerspace, Ubuntu Berlin, community, space and future thing a lot. The Canonical crowd got several guided tours through the whole base by __t, while housetier provided the others with „German beer“ and Club Mate. We had a lot of chats in smaller groups about the things you always wanted to ask the developer of your choice, and just relaxed small talk about space, canoeing, the c-base project „OpenMoon“ (trying to send a rocket to the moon), and more. Mark seemed to enjoy asking people why they joined the Ubuntu Berlin team, what they are currently doing and so on – so, what the community thing is about right here and right now.

We had no schedule for the evening. Therefore we spent about two and a half hours at c-base without any official part, and had a cosy dinner in smaller groups afterwards. We only asked some people for an interview for the c-base statement studio channel, where people like Nokia’s Peter Schneider and Mozilla’s CEO John Lilly have already shown up. Mark took the time for an interview by jocognito. The results of these short talks are already online on Youtube:

Interview #1: (if the embedded video doesn’t show up, click here)

Interview #2: (if the embedded video doesn’t show up, click here)

Another interview with Jorge Castro and James Westby has also been taped and will be published soon. Funny guys talking about Ubuntu on the moon. I’ll post the Youtube links when the edited version is online.
So what’s next? I hope everybody liked (Ubuntu) Berlin and c-base and that we got a good start for possible events in the future.
Thanks again to Daniel, who initiated the visit!

Ubuntu Intrepid Ibex Release Party on 1st of November at c-base (Berlin)

So, half a year has passed and it’s time again to celebrate a new Ubuntu release. This is an invitation for you, your friends and any other human being around to join our „Ubuntu Intrepid Ibex Release Party“ on Saturday, the 1st of November, at the sunken starship c-base (Rungestraße 20), starting at 4pm.

Again, a couple of lectures will be held – ranging from the new features in Intrepid (that’s my part; I’ll give the same lecture at the BLIT on the same day), over Gnome eye candy, to presentations of the Freifunk project and the DeepaMehta semantic desktop (don’t miss it!) directly from its lead developer. So whether you are new to Ubuntu and want to take your first steps and get in contact with other Ubuntu users, or you are a Linux guru – you’ll find somebody to have a chat with, I promise. There’ll even be a „tux tinker corner“ where you or your girlfriend can try out some Tux origami.

The event will mainly be in German, but a lot of people speak English, so don’t hesitate to ask for help/translations.

Entrance is free and there is free wifi, so feel free to bring your notebook. You can also „buy“ a freshly burned Ubuntu/Kubuntu/Xubuntu CD for a service charge of 1 Euro or check out the Ubuntu merchandising table.

See you there?

Links:
Official Party Announcement Page
how to get to c-base?
Freifunk
DeepaMehta
Ubuntu Berlin

Having fun with OpenSSH on Ubuntu Intrepid Ibex – visual host keys

After a quite uneventful upgrade to Ubuntu Intrepid Ibex (time for a change), I was happy to notice that Intrepid Ibex ships the new OpenSSH version 5.1, which has one little feature I really fell in love with: visual host keys. You might already have read about it on Planet Ubuntu. In case you haven’t: „visual host keys“ are a way of presenting the ssh client user with a 2D ASCII art visualisation of the host key fingerprint. It helps you recognize an ssh server by remembering a figure rather than the host key itself.

If you want to give this a try, call the ssh client this way:

$ ssh -o VisualHostKey=yes your.host.name
Host key fingerprint is ff:aa:a8:dc:0b:5e:e3:9f:96:f1:75:d4:24
+--[ RSA 1024]----+
|            +o   |
|             o. .|
|            E  + |
|       .   . .. .|
|      . S   ..   |
|   . o o..  . .  |
|    + + .+.. .   |
|   . + ooo.      |
|    . ooo        |
+-----------------+

Nice, isn’t it? Now try your different ssh hosts and compare the figures. I hope you won’t start generating ssh host keys just to get a special figure, will you? :) Actually I don’t know if I’ll really remember the figures of dozens of machines, but hey: it’s just additional fun.

In case you want to make this behaviour the default, add „VisualHostKey yes“ to your „~/.ssh/config“. In case you don’t have this file yet, create a new one with the following content (and find out that this file makes ssh really powerful in combination with command line completion, but that is another topic):

Host *
	VisualHostKey		yes

Please note: This might break applications that rely on the ssh console client, as they don’t expect graphical art popping up. So if some other clients don’t work anymore, play around with aliases or your „~/.ssh/config“ file.
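For example, one approach (just a sketch, the alias and host names are of course up to you) is to leave scripts alone and only enable the art for interactive sessions, either through a shell alias or per host:

# in ~/.bashrc: a separate alias for interactive use,
# so scripts calling plain "ssh" stay untouched
alias vssh='ssh -o VisualHostKey=yes'

# or in ~/.ssh/config: enable it only for selected hosts
# ("myserver" is just a placeholder entry)
Host myserver
	HostName your.host.name
	VisualHostKey	yes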

Thank you, OpenSSH guys, I really appreciate your work.

my package of the day – htmldoc – for converting html to pdf on the fly

PDF creation has actually become fairly easy: OpenOffice.org, the CUPS printing system and KDE all provide methods for easily printing nearly everything to a PDF file right away, a feature that even outperforms most Windows setups today. But there are still PDF related tasks that are not that simple. One I often run into is automated PDF creation on a web server. Let’s say you write a web application and want to create PDF invoices on the fly.

There are, of course, PDF frameworks available. Let’s take PHP as an example: if you want to create a PDF from a PHP script, you can choose between FPDF, Dompdf, the sophisticated Zend Framework and more (plus commercial solutions). But to be honest, they are all either complicated to use (as you often have to learn a specific syntax) or quite limited in their possibilities to create a PDF file (as you can only use a few design features). As I needed a simple solution for creating a 50+ page PDF file with a huge table on the fly, I tested most frameworks and failed with most of them (often just because I did not have enough time to write dozens of lines of code).

So I hoped to find a solution that just allowed me to convert a simple HTML file to a PDF file on the fly, providing better compatibility than Dompdf for instance. The solution was … uncommon. It was no PHP class but a neat command line tool called „htmldoc“, available as a package. If you want to give it a try, just install it by calling „aptitude install htmldoc“.

You can test htmldoc by saving some html files to disk and calling „htmldoc --webpage filename.html“. There are a lot of interesting options like setting the font size, font type, footer, color or greyscale mode and so on. But let’s use htmldoc from PHP right away. The following very simple script uses the PHP output buffer to reduce the temporary files written to disk to a single one (if somebody knows a way of doing this without any temporary files from a script, let me know):

<?php
// start output buffer for pdf capture
ob_start();
?>
your normal html output will be placed here, either by
dumping html directly or by using normal php code
<?php
// save output buffer
$html=ob_get_contents();
// delete Output-Buffer
ob_end_clean();
// write the html to a file
$filename = './tmp.html';
if (!$handle = fopen($filename, 'w')) {
	print "Could not open $filename";
	exit;
}
if (!fwrite($handle, $html)) {
	print "Could not write $filename";
	exit;
}
fclose($handle);
// htmldoc call
$passthru = 'htmldoc --quiet --gray --textfont helvetica \
--bodyfont helvetica --logoimage banner.png --headfootsize 10 \
--footer D/l --fontsize 9 --size 297x210mm -t pdf14 \
--webpage '.$filename;
 
// write output of htmldoc to clean output buffer
ob_start();
passthru($passthru);
$pdf=ob_get_contents();
ob_end_clean();
 
// deliver pdf file as download
header("Content-type: application/pdf");
header("Content-Disposition: attachment; filename=test.pdf");
header('Content-length: ' . strlen($pdf));
echo $pdf;

As you can see, this is neither rocket science nor magic. It is just a wrapper for htmldoc that lets you forget about the PDF while writing the actual content of the HTML file. You’ll have to check how htmldoc handles your HTML code: make it as simple as possible and forget about advanced CSS or nested tables. But it’s actually enough for a really neat PDF file, and it’s fast: the creation of 50+ page PDF files is quick enough in my case to make the on-demand use of htmldoc feel like serving a static file.
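If you want to check on the command line how htmldoc renders your markup before putting it behind a web script, a quick test run with roughly the options used above might look like this (file names are the ones from the example; „-f“ sets the output file):

$ htmldoc --quiet --gray --webpage --fontsize 9 -t pdf14 \
  -f test.pdf tmp.html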

Please note: Calling external programs and command line tools from a web script is always a security issue, so you should carefully check the input and keep the program you are using up to date. The code provided should be easy to port to another web language/framework like Perl or Rails.

my package of the day – htop as an alternative top

„top“ is one of those programs that are used quite often but that actually nobody talks about. It just does its job: showing statistics about memory, cache and cpu consumption, listing processes and so on. Actually top provides some more features like a batch mode and the ability to kill processes, but it’s all quite low level – e.g. you have to type the process id (pid) of the process you want to kill.
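For comparison, the „low level“ workflow with plain top mentioned above looks roughly like this („1234“ is just a made-up pid):

# one snapshot of the process list in batch mode
$ top -b -n 1 | head -n 20
# killing a process means looking up and typing its pid yourself
$ kill -TERM 1234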

So, though an application like top makes sense on the console, a more sophisticated one would be great, extending the basic top functionality with enhancements to its usage. This tool already exists: it’s the ncurses based „htop“, and we’ll have a closer look at it now.

To begin with: install „htop“ using „aptitude install htop“, Synaptic or the package manager of your choice. As you can see, htop is quite colorful, which is, of course, a matter of taste. In my opinion, colors make sense when they mean something or provide better readability. So let’s check the output in brief:

htop1.png

At the upper left corner you see statistics about the usage of the cpu cores (in my case there are two of them, marked „1“ and „2“) as well as memory and swap statistics, while on the right side you have the common uptime/load stats. The interesting part is the usage of colors in the cpu/ram/swap bars. If you are new to htop you have to look the colors up at least once. Therefore just press „h“ („F1“ should work too, but Gnome might get in your way) and you’ll see a nice explanation in the help:

htop2.png

Quite interesting is the distribution between green and red in the cpu stats, as a high kernel load often means that something goes wrong (with the hardware i/o for instance). In the memory bar the really used ram is marked green – the blue and orange parts could actually be freed by the kernel if necessary. (People are often confused that their ram seems to be full when calling a tool like top/htop even though they are not running that many programs. It’s important to understand that memory is also used for buffering/caching and that this memory can be reused for „real“ data later on.)

So what’s the next htop feature? Use your mouse, if you like! You can test it by clicking on „Help“ in the menu bar at the bottom. Maybe while clicking around a bit you already noticed that you can also click on processes and mark them. What for? Well, htop enables you to kill processes quite easily: you don’t have to type a process id or write a pattern, you can just mark them with the mouse or cursor and either click on „Kill“ in the menu or press the „F9“ or „k“ key. „htop“ will let you choose from a list of signals afterwards:

htop3.png

Of course you cannot kill processes that do not belong to your user when htop is not run as root (i.e. with „sudo“). „htop“ marks the processes that belong to the user it is run by with a brighter process id:

htop4.png

Sadly this also means that running htop as root/sudo marks the processes belonging to non-root users with a darker grey. But hey, that’s a nice missing feature to write a patch for, isn’t it?

If you’d like to become an advanced htop user, check the „Setup“ menu (click it or press the „F2“ or „S“ key). You will see a menu for configuring the output of htop, enabling you to switch the display of certain information on and off:

htop5.png

Of course you can also sort the process list (click „Sort“ or press „F6“), which gives you a list of possible sort parameters:

htop6.png

Apart from that, you can switch to a process tree display and sort it by pressing one of the keys shown below:

htop7.png

So let me give you a last nice gimmick and then end for today: you can attach „strace“ to a running process by marking the process and typing „s“. If you don’t know what strace is, don’t bother; if you do, you will probably like this feature pretty much.
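Just to illustrate what htop does for you there: the manual equivalent would be attaching strace to a pid yourself (again, „1234“ is a placeholder):

# attach to a running process; Ctrl+C detaches without killing it
$ sudo strace -p 1234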

I hope you got an idea of how to use htop. It is a really neat, full featured console top replacement that is even worth using when running X, as it supports the mouse and brings everything you need while still having a small footprint. If you have alternatives you’d like to mention, feel free to drop them as a comment.

my package of the day – mtr as a powerful and default alternative to traceroute

Know the situation? Something is wrong with the network, or you are just curious and want to run a „traceroute“. At least on most Debian based systems your first session will probably look like this:

$ traceroute www.ubuntu.com
command not found: traceroute

Maybe on Ubuntu you will at least be hinted to install „traceroute“ or „traceroute-nanog“… To be honest, I really hate this lack of a basic tool and cannot even remember how often I typed „aptitude install traceroute“ afterwards (keeping my fingers crossed that the network was up and running).

But sometimes you just need to dig a bit deeper, and this time the surprise was really big when the incredible Mnemonikk told me about an alternative that is installed by default in Ubuntu and that nearly no one knows about: „mtr“, which is an abbreviation for „my traceroute“.

Let’s just check it out by calling „mtr www.ubuntu.com“ (I slightly changed the output for security reasons):

                 My traceroute  [v0.72]
ccm        (0.0.0.0)          Wed Jun 20 6:51:20 2008
Keys:  Help   Display mode   Restart statistics   Order
of fields      Packets               Pings
 Host        Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. 1.2.3.4   0.0%   331    0.3   0.3   0.3   0.5   0.0
 2. 2.3.4.5   0.0%   331   15.6  16.3  14.9  42.6   2.6
 3. 3.4.5.6   0.0%   330   15.0  15.5  14.4  58.5   2.7
 4. 4.5.6.7   0.0%   330   17.5  17.3  15.4  60.5   5.3
 5. 5.6.7.8   0.0%   330   15.7  24.3  15.6 212.3  30.2
 6. ae-32-52 58.8%   330   20.6  22.1  15.9  42.5   4.7
 7. ae-2.ebr 54.1%   330   20.6  25.0  19.0  45.4   4.7
 8. ae-1-100  0.0%   330   21.5  25.4  19.2  41.1   5.1
 9. ae-2.ebr  0.0%   330   27.5  34.0  26.7  73.5   5.2
10. ae-1-100  0.3%   330   28.8  33.6  26.7  72.5   6.0
11. ae-2.ebr  0.0%   330   30.8  32.9  26.7  48.5   5.0
12. ae-26-52  0.0%   330   27.6  34.8  26.9 226.8  26.8
13. 195.50.1  0.3%   330   27.7  28.4  27.2  42.5   1.7
14. gw0-0-gr  0.0%   330   27.9  28.1  27.0  40.5   1.4
15. avocado.  0.0%   330   27.8  28.0  27.2  36.2   1.0

You might notice that the output is quite well formatted („mtr“ uses curses for this). The interesting point is: instead of running once, mtr continuously updates the output and statistics, providing you with a neat network overview. So you can use it as an enhanced ping showing all hops between you and the target.

For the sake of completeness: the package installed by default in Ubuntu is actually called „mtr-tiny“, as it lacks a graphical user interface. If you prefer a gui you can replace the package with „mtr“ by running „aptitude install mtr“. When running „mtr“ from the console afterwards you will then be presented with a gtk interface. In case you still want text mode, just append „--curses“ as a parameter.
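A little extra that is handy for scripts or bug reports: mtr also has a non-interactive report mode. A sketch (ten probes per hop, no DNS lookups; check the mtr manpage for the exact flags of your version):

$ mtr --report --report-cycles 10 --no-dns www.ubuntu.com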

Yes, that was a quick package, but if you keep it in mind, you will save the time you normally spend installing „traceroute“ and you’ll definitely get better results for network diagnosis. Happy mtr’ing!

[update]

sherman noted that the reason for traceroute not being installed is that it’s simply deprecated and „tracepath“ should be used instead. Thank you for the hint, though I’d still prefer „mtr“ as it’s much more reliable and verbose.

my package of the day – gpg for symmetric encryption

Though asymmetric encryption is state of the art today, there are still cases when you are probably in need of simple symmetric encryption. In my case, I need an easily scriptable interface for encrypting files for backup as transparently as possible. While you can, of course, use asymmetric encryption for this, symmetric methods can save you a lot of time while still being secure enough.

So there are methods like the stupid .zip encryption or a bunch of packages in the repositories like „bcrypt“ that provide you with their own implementations. But there is a tool you already know and maybe even use, but don’t think of when considering symmetric encryption: „gpg“. Actually gpg heavily relies on symmetric algorithms, as you might know: public/private key encryption is a combination of asymmetric and symmetric encryption, as the latter is much more cpu efficient. In our case, gpg will use the strong cast5 cipher by default.

Encrypting

So as gpg already knows about a bunch of symmetric encryption algorithms, why not use them? Let’s just look at an example. You have a file named „secretfile1.txt“ and want to encrypt it:

$ gpg --symmetric secretfile1.txt

You will be prompted for a password. Afterwards you’ll have a file named „secretfile1.txt.gpg“. You are already done! Please note: the file size of the encrypted file might even have decreased, as gpg also compresses during encryption and outputs a binary. In my test case the file size went down from 700k to 100k. Nice.
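In case you don’t want to rely on the default cipher, you can also pick one explicitly; AES256 is just an example here, „gpg --version“ lists the algorithms your build supports:

$ gpg --symmetric --cipher-algo AES256 secretfile1.txt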

Armoring

In case you need an easily portable file that is even ready to be copy-pasted, you can ask gpg to create an ascii armor container:

$ gpg --armor --symmetric secretfile1.txt

The output file will be called „secretfile1.txt.asc“. Don’t hesitate to open it in a text editor of your choice. The beginning will look similar to this:

$ head secretfile1.txt.asc
-----BEGIN PGP MESSAGE-----
Version: GnuPG v1.4.6 (GNU/Linux)

jA0EAwMChpQrAA/o8IFgye1j3ErZPvXumcnIwbzSvENDD/fYlWMRiY/qqvn949kV
+mo/v+nQi7OFrrA45scQPuPbj8I1T+2f7XAT4ouW2kuHIJ/2zkyrxBMvO04fDH82
273NwUrXd/s+JJXe+wJz149K324rE7+FIHvfImiez8lRs5qyRI/drp/wFK8ZHRvF
gzhDGaTe8Dgj1YqHgWAY4eAjrXhYLI1imbIYrV1OVPia6Roif37FV7C1AT/i/2HX
2ytI2mBhQLdqkSVeqXZ74lgZhsitnOeqZH65IuTLi77PUcroFOuefw6+4qSpMIuM
8dyi4jCqQ1jCR7PRorpGvm3RtXhlkB689vrknKmOa5uztTj3MGrPOgC6jegBpu/L
3419sAxRtw8bj2lP76B+XXPZ2Tuzpg01hC/BWlifSexy+juYXv7iuF5BuQ1z7nTi

(In this case I used „head“ for displaying the first ten lines. head is similar to tail, which you might already know.) Though the ascii file is larger than the binary .gpg file, it is still much smaller than the original text file (about 200k in the above case). When working with binary files like already compressed tarballs, the size of the encrypted file might slightly increase. In my test, the size grew from a 478kB file to a 479kB file when using binary mode. In ascii armor mode the size nearly hit the 650kB mark, which is still pretty acceptable.

Decrypting

Decrypting is just as easy. Just call „gpg --decrypt“, for instance:

$ gpg --decrypt secretfile1.txt.gpg
# or
$ gpg --decrypt secretfile1.txt.asc

gpg finds out by itself whether it was given an ascii armored or a binary file. But either way the output will be written to standard output, so the following line might be much more helpful for you:

$ gpg --output secretfile1.txt --decrypt secretfile1.txt.gpg

Please note that you need to stick to the order: first --output, then --decrypt. Of course you can also use a redirector („>“).
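The redirector variant of the same call would simply be:

$ gpg --decrypt secretfile1.txt.gpg > secretfile1.txt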

Piping

So, to top it off: the really interesting thing is that you can use gpg’s symmetric encryption in a chain of programs controlled by pipes. This enables you to encrypt/decrypt on the fly with shell scripts, helping you to write strong backup scripts. gpg already detects if you are using it in a pipe. So let’s try it out:

$ tar c directory | gpg --symmetric --passphrase secretmantra \
| ssh hostname "cat > backup.tar.gpg"

We just made a tarball, encrypted it and sent it over ssh without creating temporary files. Nice, isn’t it? To be honest, piping over ssh is not a big deal anymore. But piping to ftp? Check this:

$  tar c directory | gpg --symmetric --passphrase SECRETMANTRA \
| curl --netrc-optional --silent --show-error --upload-file - \
--ftp-create-dirs ftp://USER:PASSWORD@SECRETHOST/SECRETFILE.tar.gpg

With the mighty curl we just piped from tar over gpg directly to a file on an ftp server without any temporary files. You might get an idea of the power of this chain. You can now start using a dumb ftp server as an encrypted backup device, completely transparently.
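For completeness, restoring such a backup works the same way in the other direction. A rough sketch for the ssh case from above (same host and file names; the passphrase on the command line has the same visibility caveats discussed below):

$ ssh hostname "cat backup.tar.gpg" | gpg --decrypt \
--passphrase secretmantra | tar xf -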

That’s all for now. If you like encryption, you should also check out symmetric encryption and the possibilities of enhancing the security of your daily business scripts by adding some strong crypto to them. Of course you can complain about the security of the password and its possible visibility via „ps aux“, but you should be able to reduce the risks by putting some brain into it. In the meantime check out „bashup“, the bash backup script, which uses the methods described here to provide you with a powerful and scriptable backup library written in Bash with minimum dependencies. And yes, gpg support will be added soon.

my (not yet) package of the day – circular application menu

(Not yet a package, but still interesting enough to write about and hey: bleeding edge.) Circular Application Menu for Gnome is a project hosted on Google Code providing a different way of accessing your Gnome menu. Actually all it does is display the menu as circles:

hauptmenuecircular.png

Installation

But as it is different, it is somehow attractive, so let’s give it a try. Building „circular application menu“ is quite easy. You just have to install some libraries, subversion and the essential build tools, check out the current repository and compile it. Huh? Try this:

$ sudo aptitude install subversion build-essential \
libgnome-desktop-dev libgnome-menu-dev
$ svn checkout \
http://circular-application-menu.googlecode.com/svn/trunk/ \
circular-application-menu
$ cd circular-application-menu
$ make

Running

If no severe error occurred, you are already able to run „circular application menu“ via ‚./circular-application-menu‘ now. Ignore error messages on the console as long as the menu comes up. Strange feeling to use it, isn’t it? I haven’t decided yet whether I really like it or not.

If you like, you can now install it to the system via „make install“, though I am fine with running it from the build directory, which I moved to „~/opt/circular/“. As it is pre-alpha-something, I just don’t want the code to get mixed up with my distribution’s binaries.
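Just as a sketch, keeping it out of the system directories can be as simple as this (the target path is my personal preference, and it assumes the binary does not insist on being started from its build directory):

$ cd ..
$ mkdir -p ~/opt
$ mv circular-application-menu ~/opt/circular
$ ~/opt/circular/circular-application-menu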

Customizing

If you want to go one step further, install the Avant Window Navigator („sudo aptitude install avant-window-navigator“), the OS X style application panel which just moved from Google Code to Launchpad (points taken!), and add an icon for the circular menu to it via right-click => Settings => Launchers => Add. Now you can start all normal applications by calling the circular menu from the Avant Window Navigator launcher. Definitely an eye catcher:

Circular Application Menu combined with Avant Window Navigator
(click to enlarge)

Pitfalls

There are, of course, a couple of pitfalls. For instance, when running the circular application menu on top of a dark or even black application, you cannot see its borders:

bildschirmfoto-1.png

Also, you currently don’t have the possibility to customize the launcher at all.

Nevertheless: the circular application menu for Gnome is a nice desktop gimmick. I am sure it will be packaged soon (will I?) and find its way into the community repositories of most GNU/Linux distributions.

my package of the day – sash – the Stand Alone SHell for system recovery

Let me introduce you today to a package that is quite unknown, as you hopefully never need it. But when you do need it and have not thought about it before, it is probably already too late. I am talking about „sash“ – the „Stand Alone SHell“. Yet another shell? Yes and no. Yes, it is a shell, but no, I am not trying to show you something like the shiny friendly interactive shell or (my favorite) „zsh“. Quite the contrary: you can give „sash“ a lot of attributes, but not „shiny“.

So what is it about? Imagine the following case: you are running a machine and suddenly something goes totally wrong. Partition errors, missing libraries, you have messed around with libc, whatever. This can get you into serious trouble. You are fine when you have the possibility to boot a recovery cd or something similar. But under some circumstances you might have to stick to the programs already installed even though they seem to be broken. Maybe it is a virtual server somewhere on the web and you are only allowed to boot into a recovery mode giving you a prompt to your server. So you are trying to log in as root but it just does not work for some reason. Broken dependencies. Who knows.

The point is: when you log in to a machine for system recovery, you are already relying on a lot of tools and dependencies, even though it only seems to be a shell. The shell might be linked against a couple of libraries, a lot of commands you want to run are not built in, and therefore a bunch of external dependencies can bar your way. So what you actually might need in a situation of severe pain is a shell that provides you with as many essential tools as possible on its own, without relying on external code.

Installing sash

This is where „sash“ comes into play. sash is not a dynamically linked executable; it actually has all needed features built in. So as long as you can execute the sash binary, you have a working shell. Let’s check it out! Install „sash“ using „aptitude install sash“ or your preferred package manager. Please note that sash will clone your current root account:

cloning root account entry to create sashroot account in /etc/passwd
Cloned sashroot from root in /etc/passwd
cloning root account entry to create sashroot account in /etc/shadow
Cloned sashroot from root in /etc/shadow

So you have this new line in your /etc/passwd:

sashroot:x:0:0:root:/root:/bin/sash

You should consider giving sashroot a password if you want to be able to log in with this account. But please check whether this complies with your security needs.
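Setting that password is the usual one-liner (only do this if it fits your security policy, as said):

$ sudo passwd sashroot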

See the difference

Now let’s check how the sash binary differs from a standard shell like bash or zsh. We are using „ldd“ for this, as it lists the libraries an executable is linked against:

bildschirmfoto-terminal.png

Pretty impressive: all „normal“ shells have at least three dependencies, sash apparently has none.
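If you want to reproduce the comparison from the screenshot yourself, the calls are simply the following; the exact library lists differ per system, and for a statically linked binary ldd just reports that it is „not a dynamic executable“:

$ ldd /bin/bash
$ ldd /bin/zsh
$ ldd /bin/sash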

But getting rid of external libraries is not the only difference sash makes. Another major feature is the collection of built-in commands:

-ar, -chattr, -chgrp, -chmod, -chown, -cmp, -cp,
-dd, -echo, -ed, -grep, -file, -find, -gunzip,
-gzip, -kill, -ln, -ls, -lsattr, -mkdir, -mknod,
-more, -mount, -mv, -printenv, -pwd, -rm, -rmdir,
-sum, -sync, -tar, -touch, -umount, -where

Seems like a list of commands you yearn for when in recovery mode, doesn’t it? Note the leading „-“ at the beginning of those commands. This is the way sash handles your attempts to run internal and external commands: when using „mv“, sash gives you the normal /bin/mv; when using „-mv“, sash provides you with its own replacement. But „sash“ also helps you when you don’t want to type the „-“ at the beginning of every command: you can enter „aliasall“ in a sash session and it will create non-permanent aliases for all built-in commands:

bildschirmfoto-terminal-1.png

Emergency

In case of an emergency you might need to boot directly into sash, as maybe your initrd stuff is broken. How? Just append „init=/bin/sash“ to your kernel command line – be it in lilo or grub. This way you will be dropped directly into a sash session.
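With the grub of that time this means editing the „kernel“ line of your boot entry (in /boot/grub/menu.lst or at the grub prompt). A sketch, where the kernel version and root device are of course placeholders for your own setup:

kernel /boot/vmlinuz-2.6.27-7-generic root=/dev/sda1 ro init=/bin/sash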

What’s missing?

Sadly, one essential command is missing: fsck. As the sash manual points out, fsck is just way too big to be included in a statically linked binary. Sad, but true. But hey: better to at least be able to act on the console than to have no console at all.

Sash as a standard shell?

… is not a good idea. It just lacks a lot of features you’ll really want when working on the command line: A verbose prompt, command history, tab completion and so on.

So it’s time to install sash now, as you will miss it when it’s too late :)
(And just in case you’d like to ask: yes, at least once I really needed sash and it helped me save a lot of time.)

First BBJ – Berlin Bug Jam – with MOTU Daniel Holbach on Monday, 16th of June at c-base Berlin

Ubuntu Berlin is proud to present the first BBJ: a „Berlin Bug Jam“ with Ubuntu MOTU Daniel Holbach, who will rock the place, for sure. Don’t know what a „Bug Jam“ is? Well, imagine it as a get-together for working on bugs as a team. That does not mean you have to be a developer: everybody is welcome who can test bug reports, triage, patch or just wants to see how it all works. So this will rather be an „event“ than a lecture/workshop and provide you with a lot of fun and knowledge. If you want to see a detailed description of a bug jam, check the wiki page. At the BBJ, we will try to persuade you to join the 5-A-Day project, which motivates people to continuously enhance the Ubuntu distribution and helps to spread the word (and yes, to compete if you like) by trying to work on five bugs every day. Let’s see if we succeed…

Feel free to bring your notebook along. We have power and free wifi, of course.

Event: 1. Berlin Bug Jam (BBJ) with Daniel Holbach
Location: c-base Berlin, Rungestr. 20
Date: 16th of June
Time: 18:00

Please note: If you want to support the Global Ubuntu Bug Jam, which is taking place from the 8th to the 10th of August, this is a perfect opportunity for you to gather some hands-on experience. Of course, Ubuntu Berlin will bring up a great lineup and event for the Global Bug Jam. We are already working on it.