Global Jam Participation – Ubuntu Berlin is prepared – are you?

(Wow, I needed to type „Global Jam“ more than once, as I was used to writing „Global Bug Jam“.)

I am happily looking forward to the third „Ubuntu Global Jam“, taking place from the 2nd to the 4th of October 2009. „We“, meaning „Ubuntu Berlin“ – Daniel Holbach, Benjamin Drung and me – already managed to prepare the local Jam session as far as possible:

  1. We found a place with power supply and network uplink: the c-base, where Ubuntu Berlin became some kind of resident.
  2. We decided to schedule our Global Jam for Saturday, the 3rd of October, from 11 am to 7 pm, which should allow people with different personal schedules to join us at least for some hours.
  3. We added our Global Jam to the official team list.
  4. We set up a core team with responsibilities for different aspects of the Jam. Daniel Holbach and Benjamin Drung will take care of bug triaging, bug squashing and packaging, while I’ll work on and present translation and documentation.
  5. We announced our Global Jam participation and invited more people to join us on several mailing lists (from Ubuntu Berlin lists over local Berlin Linux lists to c-base lists).
  6. We started to announce our invitation on different twitter feeds, our facebook group, identi.ca and some blogs. One of them you are currently reading :)
  7. Last but not least I dropped some lines into the Ikhaya suggest-an-article box, one of the most read German Ubuntu resources.
  8. We will continue inviting people on a personal basis, as there are a bunch of people who are able to support us and just need a small friendly push… :)

I admit I am curious about this year’s participation and impact. And you? What are your hints for setting up a Global Jam session? Or do you need any more advice? Let me know. I am curious. Really.

usability as blocker? of course. – UbuntuOne and the bandwidth limit

Recently I ran into a missing feature of the UbuntuOne client. If you don’t know it yet: UbuntuOne is the Canonical-driven online space for storing and syncing data. It’s a commercial service with a free entry level, similar to offers like Dropbox. The UbuntuOne client is a small open source application that takes care of the synchronisation of files: when you put a file into a specific folder, the client pushes it to the server or pulls changes in the same way.

The missing feature was a small one: the ability to limit the upload speed. As you might know, uploading large files can nearly block your internet access, as your machine isn’t able to send back packets in time. So limiting the upload speed, either automatically or with a specific setting, helps you send data to the net while still being able to do other things.

While the missing feature itself may not be of interest to you, the interesting part is how the feature request evolved in the Ubuntu bug tracker. I filed a request myself (#375328) and ran into a – technically correct – discussion about the necessity of implementing network speed limits inside applications. Of course you have the ability to use the Linux kernel’s traffic shaping features or even a more centralized setup in your local network. While these arguments are absolutely right from an administrator’s point of view, they are nearly irrelevant from an end user’s side. An end user shouldn’t really have to care about LAN traffic shaping setups or need to know about the Linux kernel’s traffic shaping features.
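For the administrators among us, the kernel-side workaround looks roughly like this – a minimal sketch using the „tc“ token bucket filter, where the interface name „eth0“ and the rate of 1 Mbit/s are just placeholders:

$ # cap outgoing traffic on eth0 at 1 Mbit/s
$ sudo tc qdisc add dev eth0 root tbf rate 1mbit burst 32kbit latency 400ms
$ # remove the limit again
$ sudo tc qdisc del dev eth0 root

Exactly the kind of incantation an end user should never have to type.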

So while more beta testers filed similar feature requests and bug #381348 became the main ticket, the importance of the issue remained under discussion. I was surprised to see that the request recently got tagged as „karmic-blocker“, meaning it has to be fixed before Ubuntu Karmic Koala is released. While the tag was removed temporarily as a reason was missing, Elliot Murphy re-added it later, stating:

„if we don’t have bandwidth limiting and the user fires up their laptop on a slow connection (maybe an edge connection via their mobile phone), and the syncdaemon will use all available bandwidth and cripple any other applications that need some bandwidth. we’ve gotten bug reports from users already complaining about this being so bad that DNS requests are taking forever to get through. so, I think the syncdaemon having some semi-intelligent bandwidth limiting (like the suggestion of monitoring the transmit queue depth) is a karmic blocker.“
source: https://bugs.launchpad.net/ubuntuone-client/+bug/381348

The point about all this is: the nearly tiny feature request of adding a bandwidth control to a small client became a blocker for Karmic, as the missing feature might break the user experience and could lead to a lot of bug reports about slow network connections that are actually about the UbuntuOne client consuming the upload bandwidth completely. The decision to handle this as a blocker might be surprising on the one side, but it is a wise decision on the other, as it focuses on the user’s point of view – and finally that’s what it’s all about: the user. Isn’t it?

sync ruby gems between different installed ruby versions

If you are in the Ruby business (which probably means „in the Ruby on Rails business“ nowadays), sooner or later you’ll have to play around with different Ruby versions on the same machine, as you might run into crashing Ruby processes or performance issues. At least you’ll notice that running the standard Debian/Ubuntu Ruby versions might get you into serious trouble, as they can be several times slower than a manually compiled version (for reference see this Launchpad bug and this blog entry).

So a common situation is: you have Ruby and a lot of Ruby gems installed and need to switch to a different Ruby version while making sure that you have all the gems installed in the new version that you had in the old one. As gems differ from version to version, you should also be interested in installing exactly the same gem versions again and not just doing a plain install of all recent versions.

As far as I know there is no official way of syncing gems between two Ruby installations. So the common way is something like asking Ruby for a list of currently installed gems:

$ gem list
 
*** LOCAL GEMS ***
 
actionmailer (2.3.2, 2.2.2, 2.1.1)
actionpack (2.3.2, 2.2.2, 2.1.1)
activerecord (2.3.2, 2.2.2, 2.1.1)
activeresource (2.3.2, 2.2.2, 2.1.1)
activesupport (2.3.2, 2.2.2, 2.1.1)
[...]
ZenTest (4.0.0)

and then running a

$ gem install actionmailer -v 2.3.2
$ gem install actionmailer -v 2.2.2
$ gem install actionmailer -v 2.1.1
[...]
$ gem install ZenTest -v 4.0.0

for every gem and every gem version you need. As a couple of gems are native extensions, they’ll get compiled, and you’ll have to wait some seconds or minutes per gem.

As I had to do this task more than once, I wrote a small wrapper script that automates the process completely by fetching the list of gems and installing them again under another Ruby version:

#!/bin/sh
# Sync installed gems from one Ruby installation to another.
# Point these at the two "gem" binaries:
GEM_FROM=/path/to/old/gem
GEM_TO=/path/to/new/gem

# Skip the three header lines of "gem list", then parse lines
# like "actionmailer (2.3.2, 2.2.2, 2.1.1)".
${GEM_FROM} list | sed -n '4,$ p' | \
while read gem versions; do
  # strip parentheses and commas, leaving the bare version numbers
  for version in $(echo ${versions} | sed "s/[(),]//g"); do
    echo ${gem} ${version}
    ${GEM_TO} install --no-rdoc --no-ri ${gem} -v ${version}
  done
done

The script uses some sed regular expression magic, kindly tweaked by Mnemonikk (thank you). Please note that I prefer not to install rdoc and ri, as this saves time and disk space. Feel free to change this to your needs.

The only caveat with this script are gems that cannot be installed because they come from unknown external repositories or were downloaded/installed manually. Therefore make sure to check for those after a run of the gem sync script – it won’t stop when a gem cannot be installed, which is intended behaviour.
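A quick way to spot such gems is to diff the gem lists of both installations after a sync run – a small sketch (bash this time, as it uses process substitution; the paths are the same placeholders as in the script above):

$ diff <(/path/to/old/gem list) <(/path/to/new/gem list)

Every line starting with „<“ marks a gem or gem version that only exists in the old installation and needs manual treatment.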

So much for this. I hope it helps you out when dealing with different Ruby versions. Do you have similar best practices for keeping Ruby gems in sync?

Ubuntu Jaunty Jackalope on Berlin metro system

Again I am happy to announce that Berliner Fenster, the company behind the Berlin metro TV advertising system, viewed by approx. 1.5 million people a day, was so kind as to provide Ubuntu and Ubuntu Berlin with a spot for the release of Ubuntu Jaunty Jackalope and the release party hosted by Ubuntu Berlin at c-base:

If you cannot see the embedded spot, click on this link.

The spot runs for three days, showing Ubuntu to a remarkable number of passengers.

Please note: the background image is by Marvin Kubiak; you can find it among other interesting Jaunty background images at:

https://wiki.ubuntu.com/Artwork/Incoming/Jaunty/AlphaBackgrounds

Happy release!

What do director Tom Tykwer and Ubuntu have in common?

What do director Tom Tykwer („The International“) and Ubuntu have in common? Well, they seem to share the same passion for commitment in developing countries, with a focus on media. This similarity came up when I read the schedule of the one-day conference „Jour Fixe – media and development“.

While the first talk, held by Andrea Goetzke and Geraldine de Bastion from newthinking communications and titled „Ubuntu and the free toaster“, deals with free software and digital culture in Africa, the second talk, held by director Tom Tykwer and titled „The Making of Soulboy. A movie production in the slums of Nairobi“, deals with a movie project in Nairobi (and Tom Tykwer’s engagement in artistic education for youngsters in Africa with the NGO „One Fine Day“).

The „Jour Fixe“ is scheduled for this Friday, the 24th of April, and takes place at „Hamburger Bahnhof“ in Berlin, Germany. As I’m going to attend the conference, I’ll try to report back with a short summary. Please note that there are more interesting talks besides Ubuntu/Tykwer – I just wanted to point out the interesting coincidence.

possibly scrambled qt4 apps in Ubuntu Jaunty Jackalope while using VRGB and LCD

In less than 48 hours Ubuntu Jaunty Jackalope will be released officially! Having used Jaunty for more than a month, I am pleased to see this great new release making its way to a huge user base. While preparing my small „new features in Ubuntu Jaunty Jackalope“ talk for the „Ubuntu Berlin“ release party at c-base, I’d like to point out one open bug that will make it into the release and might get some users into strange trouble:

When you have – for whatever reason (by chance, by clicking around or due to having a rotated display) – changed the display mode of Ubuntu to VRGB or VBGR (you can easily do this in the „System>Preferences>Appearance>Fonts“ menu) and additionally turned on the LCD/subpixel mode, probably all of your qt4 applications – even qtconfig – will be totally scrambled and unreadable. The bug might hit you under Ubuntu (the one without a blue „K“ at the beginning), as you probably run something like VirtualBox, Skype or even Amarok in your Gnome environment. It will probably look like this:

Screenshot showing the scrambled qt application

But: don’t panic! This bug only occurs when you have changed some defaults in the Appearance/Fonts settings to rather uncommon and often maybe useless values, and switching back is fairly easy. But as I ran into this and put some effort into the corresponding bug on Launchpad („Subpixel/Lcd mode with VRGB/VBGR makes qt4 applications on Jaunty unreadable“), I noticed that a remarkable bunch of users posted duplicate bug reports or commented on the report.

So if you are running into this error, all you have to do is:

  1. exit all qt applications,
  2. go into „System>Preferences>Appearance>Fonts“,
  3. change VRGB/VBGR back to RGB/BGR.

That’s all. If you want to examine the bug, do the following (don’t do this at home):

  1. exit all qt applications,
  2. go into „System>Preferences>Appearance>Fonts“,
  3. change RGB/BGR to VRGB/VBGR,
  4. switch to lcd/subpixel mode.

Afterwards your qt applications will look scrambled. If you don’t know which one to run for a test: you probably have „qtconfig“ installed.

(I had a little chat with some developers about the severity of this bug. Though I think it would definitely be better not to have it in a release, I understand the point that the environment must be set up in a rather special way to run into it, and that it popped up quite late. Therefore it did not make it onto the release critical list, but I am quite sure it will be fixed soon.)

That’s all about this. I wish you a smooth upgrade to Jaunty – see you there :)

[update 2009-04-03]

As you can read on https://bugs.launchpad.net/ubuntu/+source/qt4-x11/+bug/334657, a patch has been committed to the Ubuntu repositories. It was originally found here:

http://cvs.fedoraproject.org/viewvc/rpms/qt/devel/qt-x11-opensource-src-4.5.0-disable_ft_lcdfilter.patch?revision=1.1&view=markup

So the fix is coming closer as an update…

A quick note on MySQL troubleshooting and MySQL replication

PLEASE NOTE: I am currently reviewing and extending this document.

While caring for a remarkable number of MySQL server instances, troubleshooting becomes a common task. Some of the following notes might be of interest to you:

Recovering a crashed MySQL server

After a server crash (meaning the system itself or just the MySQL daemon), corrupted table files are quite common. You’ll see this when checking /var/log/syslog, as the MySQL daemon checks tables during its startup:

Apr 17 13:54:44 live1 mysqld[2613]: 090417 13:54:44 [ERROR]
  /usr/sbin/mysqld: Table './database1/table1' is marked as
  crashed and should be repaired

The MySQL daemon just told you that it found a broken MyISAM table. Now it’s up to you to fix it. You might already know that there is the „REPAIR“ statement, so a lot of people open their PhpMyAdmin, select database and table(s) and run the REPAIR statements from there. The problem with this is that in most cases your system is already back in production – the web server is up again and the MySQL server is already serving a bunch of requests. Therefore a REPAIR request gets slowed down dramatically. Consider taking your website down for the REPAIR – it will be faster, and it’s definitely smarter not to deliver web pages based on corrupted tables.

The other disadvantage of the above method is that when you shut down your web server, your PhpMyAdmin is probably down as well – or you have dozens of databases and tables, and cycling through them is just a hard task. The better choice in this case is the command line.

If you only have a small number of corrupted tables, you can use the „mysql“ client utility and do something like:

$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.0.75-0ubuntu10 (Ubuntu)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> REPAIR TABLE database1.table1;
+--------------------+--------+----------+----------+
| Table              | Op     | Msg_type | Msg_text |
+--------------------+--------+----------+----------+
| database1.table1   | repair | status   | OK       |
+--------------------+--------+----------+----------+
1 row in set (2.10 sec)

This works, but there is a better way: first, using OPTIMIZE in combination with REPAIR is suggested, and second, there is a command line tool just for jobs like this. Consider this call:

$ mysqlcheck -u root -p --auto-repair --check --optimize database1
Enter password:
database1.table1      OK
database1.table2      Table is already up to date

As you see, MySQL just checked the whole database and tried to repair and optimize it.

The great thing about using „mysqlcheck“ is that it can also be run against all databases in one go, without the need to get a list of them in advance:

$ mysqlcheck -u root -p --auto-repair --check --optimize \
  --all-databases

Of course you need to consider whether optimizing all your databases and tables might just take too long if you have huge tables. On the other hand, a complete run saves you from worrying about a missed table.

[update]

nobse pointed out in the comments that it’s worth having a look at the automatic MyISAM repair options in MySQL. So have a look at them if you want to automate recovery:

option_mysqld_myisam-recover
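For reference, such a setup could look like the following lines in my.cnf – just a sketch, not a universal recommendation; „BACKUP“ keeps a copy of the data file before repairing, „FORCE“ repairs even if more than one row would be lost:

[mysqld]
# check and repair MyISAM tables automatically when they are opened
myisam-recover = BACKUP,FORCE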

Recovering a broken replication

MySQL replication is an easy method of load balancing database queries across multiple servers or just continuously backing up data. Though it is not hard to set up, troubleshooting it might be a hard task. A common reason for a broken replication is a server crash – the replication partner notices that there are broken queries. Or even worse: the MySQL slave just assumes there is an error though there is none. I just ran into the latter when a developer executed a „DROP VIEW“ for a non-existing view on the master. The master just returns an error and carries on. But as this query got replicated to the MySQL slave, the slave thinks it cannot apply the query and immediately stops replication. This is just one example of a possible error (and a hint to use „IF EXISTS“ as often as possible).
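As an aside, the harmless variant of that statement would have looked like this (the view name is of course made up):

mysql> DROP VIEW IF EXISTS view1;

This succeeds on the master with a mere warning and replicates to the slave without stopping it.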

Back to the broken slave: actually all you want to do now is tell the slave to ignore just this one query. All you need to do is stop the slave, tell it to skip one query and start the slave again:

$ mysql -u root -p
mysql> STOP SLAVE;
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
mysql> START SLAVE;

That’s all about this.

Recreating databases and tables the right way

In the following sections you’ll have to recreate databases. A common mistake when dropping and recreating tables and databases is forgetting about all the settings they had – especially charsets, which can get you into trouble later on („Why do all these umlauts show up scrambled?“). Therefore the best way of recreating tables and databases, or creating them on other systems, is using the „SHOW CREATE“ statement: „SHOW CREATE DATABASE database1“ or „SHOW CREATE TABLE database1.table1“ provide you with a CREATE statement with all current settings applied.

mysql> show create database database1;
+-----------+--------------------------------------------------------------------+
| Database  | Create Database                                                    |
+-----------+--------------------------------------------------------------------+
| database1 | CREATE DATABASE `database1` /*!40100 DEFAULT CHARACTER SET utf8 */ |
+-----------+--------------------------------------------------------------------+
1 row in set (0.00 sec)

The important part in this case is the „comment“ after the actual CREATE statement. It is executed only on compatible MySQL server versions and makes sure you are running utf8 on the database.
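The same works on the table level. A sketch with a made-up table definition – the „\G“ terminator just prints the result vertically for readability:

mysql> SHOW CREATE TABLE database1.table1\G
*************************** 1. row ***************************
       Table: table1
Create Table: CREATE TABLE `table1` (
  `id` int(11) NOT NULL auto_increment,
  PRIMARY KEY  (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
1 row in set (0.00 sec)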

Keep this in mind and it might save you a lot of trouble.

Fixing replication when master binlog is broken

When your MySQL master crashes, there is a slight chance that your master binlog gets corrupted. This means that the slaves won’t receive updates anymore, stating:

[ERROR] Slave: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: 0
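As the message suggests, a quick way to test a log file for corruption is to run it through „mysqlbinlog“ and discard the output – a sketch, where the file name is a placeholder (take the real one from „SHOW SLAVE STATUS“ or „SHOW MASTER STATUS“):

$ mysqlbinlog /var/log/mysql/mysql-bin.000042 > /dev/null

If the log is broken, mysqlbinlog aborts with an error instead of exiting silently.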

You might be lucky if only the slave’s relay log is corrupted, as you can fix this with the steps mentioned above. But a corrupted binlog on the master might not be fixable, even though the databases themselves can be repaired. Depending on the time you have, you can try the „SQL_SLAVE_SKIP_COUNTER“ approach from above, but often the only way is to set up replication from scratch, as described in the next section.

Setting up replication from scratch

There are circumstances forcing you to start replication from scratch. For instance, you have a server going live for the first time, and all those test imports don’t need to be replicated to the slave anymore, as this might take hours. My quick note for this (consider backing up your master database before!):

slave: STOP SLAVE;
slave: RESET SLAVE;
slave: SHOW CREATE DATABASE datenbank;
slave: DROP DATABASE datenbank;
slave: CREATE DATABASE datenbank;

master: SHOW CREATE DATABASE datenbank;
master: DROP DATABASE datenbank;
master: CREATE DATABASE datenbank;
master: RESET MASTER;

slave: CHANGE MASTER TO MASTER_USER="slave-user", \
MASTER_PASSWORD="slave-password", MASTER_HOST="master.host";
slave: START SLAVE;

You just started replication from scratch. Check „SHOW SLAVE STATUS“ on the slave and „SHOW MASTER STATUS“ on the master.

Deleting unneeded binlog files

Replication needs binlog files – a MySQL file format for storing database changes in a binary format. Sometimes it is hard to decide how many of the binlog files you want to keep on the server, which can get you into disk space trouble. Therefore, deleting binlog files that have already been transferred to the slave might be a smart idea when running low on space.

First you need to know which binlog files the slave has already fetched. You can do this by having a look at „SHOW SLAVE STATUS;“ on the slave. Then log into the MySQL master and run something like:

mysql> PURGE BINARY LOGS TO 'mysql-bin.010';

You can even do this at a date level:

mysql> PURGE BINARY LOGS BEFORE '2008-04-02 22:46:26';

Conclusion

The above hints might save you some time when recovering or troubleshooting a MySQL server. Please note that these are just hints, and you have to make sure – at any time – that your data has an up-to-date backup. Nothing will help you more.

And your favorite new feature in Jaunty Jackalope is … ?

Again I’ll present a short overview of new features in the next Ubuntu release (this time „Jaunty Jackalope“) to an audience of about 150 to 200 visitors at the upcoming traditional „Ubuntu release party“ at c-base, organized by „Ubuntu Berlin“. It’s worth noting that this event will happen on Saturday, the 25th of April, while the release is still a hot topic, and that the very friendly guys from „Berliner Fenster“, the local metro television system, are going to support the release and the release party by sponsoring spots. I am curious about the new features in Jaunty Jackalope that you would like to emphasize – be it a little eye candy, a command line tool or a major innovation.

After using Jaunty for about a month now, I think the graphical changes really please my eyes. I like the new notification scheme introduced by Mark Shuttleworth some months ago (though it still feels young and there is a lot to do, like bringing it to common applications such as Thunderbird). Moreover, while not currently using Evolution myself, I am impressed by what I read about „evolution-mapi“, which enables Evolution to access Exchange 2000, 2003 and 2007 servers by talking the native Exchange MAPI protocol instead of using the neat but slow OWA (web) wrapper. Yes, that seems to be a techie feature, but it might improve the acceptance of Ubuntu/Gnome/Evolution in enterprise setups dramatically.

Your turn now: what is the new killer feature or just the tiny improvement you like most? Let me know, and I promise I’ll try to include it in my presentation.

Ubuntu developers visiting Ubuntu Berlin and c-base – plus interview with Mark Shuttleworth

A couple of months ago I started annoying people by telling them I’d like to show the Ubuntu Berlin community and c-base to Mark Shuttleworth, as he is personally interested in community on the one side and Ubuntu Berlin is a great example on the other. So this is more about inviting an important member of the community than about celebrating a „meet and greet“. Of course, telling people plans like this makes them smile, but when you raise your finger and say „It will happen“ with certainty, they get uncertain. The plan was actually to invite Mark to one of our great traditional release parties, which you shouldn’t miss when you are around Berlin at release time.

By chance, the Ubuntu Developer Sprint for the upcoming Jaunty release happened to take place in Berlin this week. If I got it right, the Canonical Ubuntu developers meet for five days around two weeks before feature freeze and work in groups on issues that need to be decided, designed or just fixed immediately. The incredible Daniel Holbach had the idea of inviting the bunch of developers right into the c-base after their work. So he did, and we scheduled it for an evening when the Ubuntu Berlin crew also meets at c-base for their monthly jour fixe.

We did not announce this meeting externally, as we tried to make the whole evening as comfortable as possible for everyone. And we did, I think. Right on time at 19:30 this Wednesday evening, about thirty Ubuntu developers entered the c-base, among them Mark, who seemed to like the whole c-base hackerspace, Ubuntu Berlin, community, space and future thing a lot. The Canonical crowd got several guided tours through the whole base by __t, while housetier provided the others with „German beer“ and Club Mate. We had a lot of chats in smaller groups – things you always wanted to ask the developer of your choice – and just relaxed small talk about space, canoeing, the c-base project „OpenMoon“ (trying to send a rocket to the moon), and more. Mark seemed to enjoy asking people why they joined the Ubuntu Berlin team, what they were currently doing and so on – so, what the community thing is about right here and right now.

We had no schedule for the evening. Therefore we spent about two and a half hours at c-base without any official part, followed by a cosy dinner in smaller groups. We only asked some people for an interview for the c-base statement studio channel, where people like Nokia’s Peter Schneider and Mozilla’s CEO John Lilly have already shown up. Mark took the time for an interview by jocognito. The results of these short talks are already online on YouTube:

Interview #1: (if the embedded video doesn’t show up, click here)

Interview #2: (if the embedded video doesn’t show up, click here)

Another interview, with Jorge Castro and James Westby, has also been taped and will be published soon. Funny guys talking about Ubuntu on the moon. I’ll post the YouTube links when the edited version is online.
So what’s next? I hope everybody liked (Ubuntu) Berlin and c-base and that we got a good start for possible events in the future.
Thanks again to Daniel, who initiated the visit!

From workshops via a Jour Fixe to free network games – one week in Berlin

This week seems to be a new highlight in the „Ubuntu Berlin“ history. I already mentioned that we are really busy with event organisation, but we continue to outperform ourselves: in the last weeks we noticed that we have reached a point where it is impossible to plan events so that a majority of us can participate, as we are just running so many of them – which has positive and negative side effects.

So what does this week’s schedule look like?

  • Monday: workshop with the incredible Sven Guckes about screen and irssi, part one,
  • Tuesday: workshop with Sven Guckes, part two,
  • Wednesday: monthly Jour Fixe at c-base with some very special guests – I’ll report back with interesting material,
  • Saturday: free game afternoon/evening/night – our first try at a free LAN party with free games like „Battle for Wesnoth“, „Sauerbraten“ and „Tetrinet“ – at least one game for every taste.

This feels really great on the one side, as you can participate in Ubuntu events on four of seven evenings this week. And it seems we won’t run out of content for the next months, as we still have a lot of ideas and volunteers for workshops, release parties, events, jams and technical projects.

On the other side, most members of our user group face the new situation of not being able to participate in all events. While a year ago the core team showed up at nearly every event, nearly nobody is able to attend an Ubuntu event every other day of the week.

We discussed this issue a lot: is it possible to offer too many events? The main argument for this is the risk of burning through our ideas within a short time. Personally, I don’t agree with this right now. When your possible audience is large enough – Berlin has roughly 3.5 million residents – you actually cannot run out of visitors. Of course millions of them aren’t interested in Ubuntu at all right now (are they?), but you have to decide whether you want to build a community for the sake of a community – which is okay, meaning meetings, chats and similar things in and for a closer core group – or whether you use the community as a kind of incubator, trying to reach out to all the different people out there: not to include them all in the community, but to spread knowledge, free software and, yes, fun.

So I guess we’ll continue to fire off events as they come in. Actually, we got to the comfortable problem of having „too many“ events by doing exactly that for the last months, even years, and we’ll see what this means for 2009. Think a lot of fun and a growing number of happy Ubuntu users. Promised.