Ubuntu Berlin review of 2008

End of 2008 is close and it’s time to review the year, isn’t it? As some of you might know, I am quite active in the „Ubuntu Berlin“ team, a local Berlin(Germany)-based user group, closely connected to the internationally known hackerspace c-base.

Let me just sum up what we’ve done this year:

  • about one dozen regular meetings (each 1st Wednesday of the month) with an average of 10 to 30 visitors
  • we participated in the Ubuntu Global Bug Jam in August
  • we had about one dozen Bug Jams with Ubuntu developer Daniel Holbach
  • we hosted two Ubuntu release parties (for Hardy and Intrepid) with about 200 visitors each, plus a video feature in the Berlin metro system viewable by approx. 1.5 million people
  • we had several workshops and lectures on topics like „introduction to LaTeX“, „video editing with Cinelerra“, „fun on the command line“, „graphics and design with free software“
  • we supported the „Linuxtag 2008“ and hosted the end-of-Linuxtag BBQ with dozens of free software developers and Ubuntu-affiliated people
  • we started a „terminal project“, trying to set up Ubuntu powered touch screen terminals
  • we organised an Origami workshop as a Christmas social event (ever built a Tux or a star?)
  • we started a new series of events built around lightning talks, giving everyone the possibility to speak for just a couple of minutes about a piece of software – with really great success

So, looking back, 2008 was a successful year for Ubuntu Berlin. We managed not to split into small groups and were able to increase the number of members. Currently 120 people read our main mailing list and about 110 have joined our Launchpad group. And no, they are not all nominal members.

So what are our goals for 2009?

Our main goal will be focusing on Ubuntu-related topics. It’s hard to have clearly structured, topic-related discussions on a mailing list with more than 100 readers. We have already split our list into three lists and try to calm down every „religious“ debate.

We will definitely host two major release parties again while trying to make them even better. There is room for improvement in the lecture schedule and quality, though we were quite happy with the last two parties. I guess we will also organise something like a BBQ for Linuxtag 2009 again.

I hope we’ll manage to provide at least one workshop per month like we did in the last months, as this really brings people to the machine, using Ubuntu and free software. The last workshops took roughly 90 minutes. We are also trying to host longer workshops (approximately four hours) giving you real hands-on experience. Moreover, we’ll continue with the lightning talk sessions, as this is a really neat type of event.

Together with Daniel Holbach we’ll continue the Ubuntu Berlin Bug Jams while trying to shift them towards a really practical experience, solving dozens of bugs each session and motivating users to join the Bug Triage team.

And we’ll continue to collect donations, as we urgently need things like a new video projector for our talks and similar events. We hope to find a sponsor providing Ubuntu Berlin with a monthly budget.

So, let’s start into 2009 and make it an Ubuntu year again. It’s up to us to convince people to use Ubuntu, to improve Ubuntu and to spread the word again.

Ubuntu Intrepid Ibex Release Party on 1st of November at c-base (Berlin)

So, half a year has passed and it’s time again to celebrate a new Ubuntu release. This is an invitation for you, your friends and any other human being around to join our „Ubuntu Intrepid Ibex Release Party“ on Saturday, the 1st of November, at the sunken starship c-base (Rungestraße 20), starting at 4pm.

Again, a couple of lectures will be held – ranging from new features in Intrepid (that’s my part; I’ll also give the lecture the same day at the BLIT), over Gnome eye-candy, to presentations of the Freifunk project and the DeepaMehta semantic desktop (don’t miss it!) directly from its lead developer. So whether you are new to Ubuntu and want to take first steps and meet other Ubuntu users, or you are a Linux guru – you’ll find somebody to have a chat with, I promise. There’ll even be a „tux tinker corner“ where you or your girlfriend can try out some Tux Origami.

The event will mainly be in German, but a lot of people speak English, so don’t hesitate to ask for help/translations.

Entrance is free and there is free wifi, so feel free to bring your notebook. You can also „buy“ a freshly burned Ubuntu/Kubuntu/Xubuntu CD for a service charge of 1 Euro or check out the Ubuntu merchandising table.

See you there?

Links:
Official Party Announcement Page
how to get to c-base?
Freifunk
DeepaMehta
Ubuntu Berlin

Your mom runs Ubuntu? Join the team! She doesn’t yet? Help her. And join the team!

After a lot of chats with friends, I was surprised how many of them said „Yes, I installed Ubuntu on my mom’s PC.“ So I assume there is a growing user group that will never pop up in blogs, user groups or at fairs: your mom, the mom of your friends and mine. As this user group brings Ubuntu to the real end user, I think it would be nice to show that these users exist and to motivate other Ubuntu users to let their mothers join the Ubuntu crowd just by installing Ubuntu for them, wouldn’t it?

So join the just-created Launchpad team „my mom runs ubuntu“ by clicking here and show the public that you made another mom happy. I promise I’ll improve the Launchpad group branding as soon as possible.

My mom runs Ubuntu. And yours?

And your favorite new feature in Ubuntu Intrepid Ibex is … ?

Hey. As I am giving a little interview tomorrow about new features in Ubuntu Intrepid and will hold two small lectures about the same topic, my question to you is: What is your favorite new feature in Ubuntu Intrepid Ibex? I think I like the new encrypted Private directory in userland a lot, but also the new OpenSSH 5.1 version. But that’s just my taste – what’s yours? Are there – besides the widely known new features – things you’ve long been waiting for? Let me know by dropping a line in the comment field.

Having fun with OpenSSH on Ubuntu Intrepid Ibex – visual host keys

After a quite uneventful upgrade to Ubuntu Intrepid Ibex (time for a change), I was happy to notice that Intrepid Ibex ships the new OpenSSH version 5.1, which has one little feature I really fell in love with: visual host keys. You might already have read about it on Planet Ubuntu. In case you haven’t: „visual host keys“ are a way of presenting the SSH client user with a 2D ASCII-art visualization of the host key fingerprint. It should help you recognize an SSH server by remembering a figure rather than the host key.

If you want to give this a try, call the ssh client this way:

$ ssh -o VisualHostKey=yes your.host.name
Host key fingerprint is ff:aa:a8:dc:0b:5e:e3:9f:96:f1:75:d4:24
+--[ RSA 1024]----+
|            +o   |
|             o. .|
|            E  + |
|       .   . .. .|
|      . S   ..   |
|   . o o..  . .  |
|    + + .+.. .   |
|   . + ooo.      |
|    . ooo        |
+-----------------+

Nice, isn’t it? Now try your different SSH hosts and compare the figures. I hope you won’t start generating SSH host keys just to get a special figure, will you? :) Actually I don’t know if I’ll really remember the figures of dozens of machines, but hey: it’s just additional fun.

In case you want to make this behavior the default, add „VisualHostKey yes“ to your „~/.ssh/config“. In case you don’t have this file, create one with the following content (and you’ll find out that this file makes ssh really powerful in combination with command line completion, but that is another topic):

Host *
	VisualHostKey		yes

Please note: This might break applications that rely on the ssh console client, as they don’t expect ASCII art popping up. So if some other clients don’t work anymore, play around with aliases or your „~/.ssh/config“ file.
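For scripted calls that parse ssh output, a tiny wrapper can switch the visualization off again. A minimal sketch (the function name „sshq“ is my own invention, not an OpenSSH feature):

```shell
# Hypothetical wrapper: run ssh without the ASCII-art fingerprint,
# useful for scripts that parse ssh output. Put it in ~/.bashrc.
sshq() {
    ssh -o VisualHostKey=no "$@"
}
```

This way your interactive sessions keep the visual host keys while automated calls stay clean.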

Thank you, OpenSSH guys, I really appreciate your work.

10 hints for having a nice time with an upgrade to Ubuntu Intrepid Ibex

A couple of months ago, just before the Hardy release, I posted some hints for a smooth upgrade. As the Intrepid Ibex release is three weeks away and the beta is flying around, I’d like to revisit the hints with some minor updates:

  1. Remove all applications you installed for testing purposes but don’t actually use. It’s a nice feeling to have a mostly clean machine. Removing applications before an upgrade reduces the download time, the space needed and the dependency calculations, as well as the risk of a dependency failure. So just drop all those once-clicked applications, games and even libraries. Take some time for this – it will pay off during the upgrade (downloading, unpacking, dependency management). Trust me.
  2. Check that you have enough space left on your device. Hundreds of packages are downloaded in one step, so you need enough disk space for all of them.
  3. Compiled software on your own? Installed external .deb files? If possible, uninstall them; you can reinstall them later if they are not provided by Ubuntu+1.
  4. Added software repositories to /etc/apt/sources.list (or via Synaptic)? Disable them for now.
  5. Of course: Back up, back up, back up. Decide, if a backup of your home directory fits your needs or you also want the rest of your partitions.
  6. Bring enough time: A full upgrade might take two hours or more, depending on your RAM, CPU power, network speed and the number of installed applications. Don’t think an upgrade runs unattended – it will ask you several questions during package upgrades and therefore requires your attention. Make the day your upgrade day, or at least the afternoon your upgrade afternoon. A cup of tea might help.
  7. Check for already known caveats that you might have to take care of. Normally the most important ones are collected on the wiki page for the current beta release, like this one. Really do this! There has just been a severe bug in the alpha release that could even damage hardware. So reading this can save you a lot of trouble.
  8. Make clear to yourself what „alpha“ and „beta“ mean: Take them as warnings and only take the risk of an upgrade if you are not under time pressure for a project (like writing an essay, developing an application or anything with a deadline close to your upgrade day)… and don’t moan when something doesn’t work. You are using free software in a testing period. It is probably your bug report that improves it.
  9. Check if you can have a second computer around, enabling you to consult discussion boards, wikis and other resources of useful information. In case of an emergency it is crucial to be online in some way, because often really simple tricks can save your day.
  10. If you are going to upgrade more than one system, try setting up an apt-cache, apt-proxy or similar, which will save you a lot of download time.
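Hint 2 can even be scripted. A quick sketch checking the free space in /var, where apt stores its package downloads (the 4 GB threshold is my rough guess, not an official requirement):

```shell
#!/bin/bash
# Rough pre-upgrade check: is there enough room for the package
# downloads in /var/cache/apt/archives? Threshold is an arbitrary 4 GB.
required_kb=$((4 * 1024 * 1024))
# df -P gives POSIX single-line output; field 4 is available space in KB
free_kb=$(df -Pk /var | awk 'NR==2 { print $4 }')
if [ "$free_kb" -ge "$required_kb" ]; then
    echo "enough free space for the upgrade"
else
    echo "low on space - clean up first (e.g. apt-get clean)"
fi
```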

After these steps, feel free to give „update-manager -d“ a try. Take notes of things that look strange and check the Launchpad bug tracker to see whether they are already reported. Now it is up to you to help make Ubuntu a better distribution and Intrepid a real success.

[update]

There is a Spanish translation of this blog entry on UbuntuWay. Thank you.

My (unofficial) package of the day: 3ware-cli and 3dms for monitoring 3ware raid controllers

Having a real hardware RAID controller is a nice thing: Especially in a server setup it helps keep data safe across multiple disks. A common mistake, though, is having a RAID controller and not monitoring it. Why? Let’s say you have a simple RAID 1 array (one disk mirrored to another) and one of the disks fails. If the RAID works, your system will continue to run. But if you did not set up monitoring, you won’t notice the failure, and the chance of total data loss increases as you are now running on a single disk.

So monitoring a RAID is actually the step that makes it as safe as you intended when setting it up. Some RAIDs are quite easy to monitor, like a Linux software RAID. Some need special software. As I recently got a bunch of dedicated servers (Hetzner DS8000 and others) with 3ware RAID controllers, I checked the common software repositories for monitoring software and was surprised to find nothing suitable. A web search showed me that there are Linux tools from 3ware. Of course they don’t provide .deb packages, so you need to take care of this yourself if you don’t want to install the software manually.

But there is an unofficial Debian repository by Jonas Genannt (thank you!), providing recent packages of the 3ware utilities under http://jonas.genannt.name/. Check the repository; it offers 3ware-3dms and 3ware-cli. 3ware-3dms is a web application for managing your RAID controller via browser, BUT: think twice whether you want this. The application opens a privileged port (888), as it is not able to bind to the local interface only, and has a crappy user authentication system. As I am no friend of opening ports and closing them afterwards via firewall, I dropped the web solution.

„3ware-cli“ is just a command line interface to 3ware controllers. Just grab a .deb from the repository above and install it via „dpkg -i xxx.deb“. Afterwards you can start asking your controller about its status. The command is called „tw_cli“, so let’s give it a try with „info“ as parameter:

# tw_cli info
Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU
------------------------------------------------------------------------
c0    8006-2LP     2         2        1       0       2       -      -

tw_cli tells us that there is one controller (meaning a real piece of RAID hardware) called „c0“ with two drives. Now we want more detailed information about the given controller:

# tw_cli info c0
 
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       232.885   ON     -      
 
Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     232.88 GB   488397168     6RYBP4R9
p1     OK               u0     232.88 GB   488397168     6RYBSHJC

tw_cli reports that controller c0 has one unit, „u0“. A unit is the device your operating system works with – the „virtual“ RAID drive provided by the RAID controller. There are two ports/drives in this unit, called „p0“ and „p1“. Both of them have „OK“ as status message, meaning that the drives are running fine.

You can also query a drive directly by asking tw_cli for the port on the controller:

# tw_cli info c0 p0

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     232.88 GB   488397168     6RYBP4R9            

# tw_cli info c0 p1

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p1     OK               u0     232.88 GB   488397168     6RYBSHJC

So you might already have got the idea: As tw_cli is just a command line tool, your task for an automated setup is a cronjob that regularly checks the status of the ports (not the unit! the ports – trust me) and sends a mail or Nagios alarm when necessary. I just started writing a little shell script which, right now, just returns an exit status – 0 for a working RAID and 1 for a problem:

#!/bin/bash
 
UNIT=u0
CONTROLLER=c0
PORTS=( p0 p1 )
 
tw_check() {
  # Without an argument, check the unit line (status in field 3);
  # with a port argument, check that port's line (status in field 2).
  local regex=${1:-${UNIT}}
  local field=3
  if [ $# -gt 0 ]; then
    field=2
  fi
  # Extract the status column of the matching line.
  local check=$(tw_cli info ${CONTROLLER} $1 \
    | awk "/^$regex/ { print \$$field }")
  [ "XOK" = "X${check}" ]
  return $?
}
 
tw_check || exit 1
for PORT in ${PORTS[@]}; do
  tw_check ${PORT} || exit 1
done

As you see, you can configure unit, controller and ports. I have not checked this against systems with multiple controllers or units, as I don’t have such a setup. But if you need to, you could just put the configuration in a sourced configuration file.
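For the cronjob mentioned above, a minimal crontab entry could rely on cron mailing any output to you. A sketch (the script path and mail address are made up for illustration):

```
MAILTO=admin@example.com
# check the 3ware RAID ports every 15 minutes; the script is silent on success
*/15 * * * * /usr/local/sbin/check-3ware.sh || echo "3ware RAID problem on $(hostname)"
```

Since the script only produces output on failure, you get a mail exactly when something is wrong.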

After writing this little summary I checked all servers I am responsible for and noticed that nearly every server with a hardware RAID has a 3ware controller and can be checked with tw_cli. Fine…

Let me know how you manage your 3ware raid monitoring under GNU/Linux and Debian/Ubuntu based systems.

my package of the day – htmldoc – for converting html to pdf on the fly

PDF creation has actually become fairly easy. OpenOffice.org, the CUPS printing system and KDE all provide methods for printing nearly everything to a PDF file right away – a feature that even outperforms most Windows setups today. But there are still PDF-related tasks that are not that simple. One I often run into is automated PDF creation on a web server. Let’s say you write a web application and want to create PDF invoices on the fly.

There are, of course, PDF frameworks available. Let’s take PHP as an example: If you want to create a PDF from a PHP script, you can choose between FPDF, Dompdf, the sophisticated Zend Framework and more (plus commercial solutions). But to be honest, they are all either complicated to use (as you often have to learn a specific syntax) or quite limited in their possibilities (as you can only use a few design features). As I needed a simple solution for creating a 50+ page PDF file with a huge table on the fly, I tested most frameworks and failed with most of them (often just because I did not have the time to write dozens of lines of code).

So I hoped to find a solution that simply converts an HTML file to a PDF file on the fly, providing better compatibility than Dompdf, for instance. The solution was … uncommon. It was not a PHP class but a neat command line tool called „htmldoc“, available as a package. If you want to give it a try, just install it by calling „aptitude install htmldoc“.

You can test htmldoc by saving some HTML files to disk and calling „htmldoc --webpage filename.html“. There are a lot of interesting features, like setting the font size, font type, footer, color or greyscale mode and so on. But let’s use htmldoc from PHP right away. The following very simple script uses the PHP output buffer to reduce the writes to disk to one temporary file (if somebody knows a way of doing this from a script without any temporary files, let me know):

<?php
// start output buffer for pdf capture
ob_start();
?>
your normal html output will be placed here, either by
dumping html directly or by using normal php code
<?php
// save output buffer
$html=ob_get_contents();
// delete Output-Buffer
ob_end_clean();
// write the html to a file
$filename = './tmp.html';
if (!$handle = fopen($filename, 'w')) {
	print "Could not open $filename";
	exit;
}
if (!fwrite($handle, $html)) {
	print "Could not write $filename";
	exit;
}
fclose($handle);
// htmldoc call
$passthru = 'htmldoc --quiet --gray --textfont helvetica \
--bodyfont helvetica --logoimage banner.png --headfootsize 10 \
--footer D/l --fontsize 9 --size 297x210mm -t pdf14 \
--webpage '.$filename;
 
// write output of htmldoc to clean output buffer
ob_start();
passthru($passthru);
$pdf=ob_get_contents();
ob_end_clean();
 
// deliver pdf file as download
header("Content-type: application/pdf");
header("Content-Disposition: attachment; filename=test.pdf");
header('Content-length: ' . strlen($pdf));
echo $pdf;

As you can see, this is neither rocket science nor magic – just a wrapper for htmldoc, enabling you to forget about the PDF while writing the actual content of the HTML file. You’ll have to check how htmldoc handles your HTML code. Make it as simple as possible; forget about advanced CSS or nested tables. But it’s actually enough for a really neat PDF file, and it’s fast: Creating 50+ page PDF files is quick enough in my case to make the on-demand use of htmldoc feel like serving a static file.

Please note: Calling external programs and command line tools from a web script is always a security issue, so you should carefully check the input and watch for updates of the program you are using. The code above should be easy to port to other web languages/frameworks like Perl or Rails.
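For completeness, the same conversion also works without PHP. A minimal shell sketch wrapping the htmldoc call from the example above (the function name „html2pdf“ and the output naming scheme are my own choices):

```shell
# Convert one HTML file to a PDF next to it, assuming htmldoc is
# installed; the options mirror the PHP example above.
html2pdf() {
    local in=$1
    local out=${in%.html}.pdf
    htmldoc --quiet --webpage --gray --fontsize 9 -t pdf14 -f "$out" "$in"
    echo "$out"
}
```

Calling „html2pdf invoice.html“ would then drop invoice.pdf next to the source file.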

first „my package of the day“ republished on debaday

Just for your information, as some people moaned about my „my package of the day“ series: A first article, about „file“, has been republished there; more will follow, and I am really happy to contribute to this project. But let me add that I will continue introducing you to some of my favorite software packages right here, for three major reasons:

  1. Some packages have already been described on debaday and therefore won’t be published there a second time. As times change and people focus on different features when talking about software, I see no problem in repeating a package description in other words, and even with images.
  2. The debaday team tries to spread out the articles they get as well as possible. This means there can be a gap of several days before publication. As you might understand, when writing a blog article you are usually yearning to hit the „publish“ button and see what the crowd says.
  3. The republication on debaday shows that the cooperation works really well and there is no need to worry.

So, thank you guys for reading and commenting and thank you, debaday team, for the wonderful work.