A quick note on MySQL troubleshooting and MySQL replication

PLEASE NOTE: I am currently reviewing and extending this document.

While caring for a remarkable number of MySQL server instances, troubleshooting becomes a common task. Some of the situations that come up again and again might be of interest for you, too.

Recovering a crashed MySQL server

After a server crash (meaning the system itself or just the MySQL daemon), corrupted table files are quite common. You'll see this when checking /var/log/syslog, as the MySQL daemon checks tables during its startup.

Apr 17 13:54:44 live1 mysqld[2613]: 090417 13:54:44 [ERROR]
  /usr/sbin/mysqld: Table './database1/table1' is marked as
  crashed and should be repaired

The MySQL daemon just told you that it found a broken MyISAM table. Now it's up to you to fix it. You might already know that there is the "REPAIR" statement, so a lot of people open phpMyAdmin afterwards, select the database and table(s) and run REPAIR statements from there. The problem with this is that in most cases your system is already back in production: the server is up again and MySQL is already serving a bunch of requests, so a REPAIR run gets slowed down dramatically. Consider taking your website down for the REPAIR: it will be faster, and it's definitely smarter not to deliver web pages based on corrupted tables.

The other disadvantage of the above method is that you have probably just shut down your web server, so phpMyAdmin is down as well; or you have dozens of databases and tables, and cycling through them by hand is just a hard task. In this case the command line is the better choice.

If you only have a small number of corrupted tables, you can use the "mysql" client utility and do something like:

$ mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 10
Server version: 5.0.75-0ubuntu10 (Ubuntu)

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> REPAIR TABLE database1.table1;
+--------------------+--------+----------+----------+
| Table              | Op     | Msg_type | Msg_text |
+--------------------+--------+----------+----------+
| database1.table1   | repair | status   | OK       |
+--------------------+--------+----------+----------+
1 row in set (2.10 sec)

This works, but there is a better way: first, using OPTIMIZE in combination with REPAIR is suggested, and second, there is a command line tool made exactly for such jobs. Consider this call:

$ mysqlcheck -u root -p --auto-repair --check --optimize database1
Enter password:
database1.table1      OK
database1.table2      Table is already up to date

As you can see, MySQL just checked the whole database and tried to repair and optimize its tables.

The great thing about "mysqlcheck" is that it can also be run against all databases in one go, without needing a list of them in advance:

$ mysqlcheck -u root -p --auto-repair --check --optimize \
  --all-databases

Of course you need to consider whether optimizing all your databases and tables might just take too long if you have huge tables. On the other hand, a complete run saves you from worrying about a possibly missed table.
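If you want such a complete run to happen regularly, a cron entry is an option. A sketch (the schedule, the cron file name and the idea of reading credentials from a .my.cnf are assumptions; adapt them to your setup):

```
# /etc/cron.d/mysqlcheck (hypothetical): weekly check of all databases.
# Credentials are expected in root's ~/.my.cnf instead of on the command line.
30 4 * * 0   root   mysqlcheck --auto-repair --check --optimize --all-databases
```

Running it at night keeps the impact on production traffic low, but the warning above still applies: on huge tables even a nightly run can take a while.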

[update]

nobse pointed out in the comments that it is worth having a look at the automatic MyISAM repair options in MySQL. So have a look at the mysqld option "myisam-recover" if you want to automate recovery.
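Such an automatic recovery setup could be sketched in my.cnf like this (whether you want FORCE is a decision to make consciously, as a forced repair may drop rows):

```
[mysqld]
# Check and repair MyISAM tables automatically when they are opened.
# BACKUP keeps a copy of the data file if rows are changed during repair.
myisam-recover = BACKUP,FORCE
```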

Recovering a broken replication

MySQL replication is an easy method of load balancing database queries across multiple servers or just continuously backing up data. Though it is not hard to set up, troubleshooting it can be a hard task. A common reason for a broken replication is a server crash: the replication partner notices that there are broken queries. Or, even worse, the MySQL slave just assumes there is an error though there is none. I recently ran into the latter when a developer executed a "DROP VIEW" on a non-existing view on the master. The master just returns an error and carries on. But as this query got replicated to the slave, the slave concluded it could not execute the query and immediately stopped replication. This is just one example of a possible error (and a hint to use "IF EXISTS" as often as possible).
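To illustrate the "IF EXISTS" hint: the statement below succeeds on the master whether or not the view exists, and therefore replicates to the slave without stopping it (the view name is made up):

```
DROP VIEW IF EXISTS view1;
```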

Actually all you want to do now is tell the slave to ignore just one query. All you need to do is stop the slave, tell it to skip one query and start the slave again:

$ mysql -u root -p
mysql> STOP SLAVE;
mysql> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
mysql> START SLAVE;

That's all there is to it.

Recreating databases and tables the right way

The next topics involve recreating databases. A common mistake when dropping and recreating tables and databases is forgetting about all the settings they had, especially charsets, which can get you into trouble later on ("Why do all these umlauts show up scrambled?"). The best way of recreating tables and databases, or creating them on other systems, is therefore the "SHOW CREATE" statement. You can use "SHOW CREATE DATABASE database1" or "SHOW CREATE TABLE database1.table1" to get a CREATE statement with all current settings applied.

mysql> show create database database1;
+-----------+--------------------------------------------------------------------+
| Database  | Create Database                                                    |
+-----------+--------------------------------------------------------------------+
| database1 | CREATE DATABASE `database1` /*!40100 DEFAULT CHARACTER SET utf8 */ |
+-----------+--------------------------------------------------------------------+
1 row in set (0.00 sec)

The important part in this case is the "comment" after the actual CREATE statement. It is only executed on compatible MySQL server versions and makes sure you are running utf8 on the database.
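The same works on the table level. A sketch of what "SHOW CREATE TABLE" returns (the column definitions here are made up; your output will show your actual schema, engine and charset):

```
mysql> SHOW CREATE TABLE database1.table1\G
*************************** 1. row ***************************
       Table: table1
Create Table: CREATE TABLE `table1` (
  `id` int(11) NOT NULL auto_increment,
  PRIMARY KEY  (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8
1 row in set (0.00 sec)
```

The "\G" terminator just formats the output vertically, which is easier to read for long CREATE statements.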

Keep this in mind and it might save you a lot of trouble.

Fixing replication when master binlog is broken

When your MySQL master crashes there is a slight chance that your master binlog gets corrupted. This means that the slaves won’t receive updates anymore stating:

[ERROR] Slave: Could not parse relay log event entry. The possible reasons are: the master's binary log is corrupted (you can check this by running 'mysqlbinlog' on the binary log), the slave's relay log is corrupted (you can check this by running 'mysqlbinlog' on the relay log), a network problem, or a bug in the master's or slave's MySQL code. If you want to check the master's binary log or slave's relay log, you will be able to know their names by issuing 'SHOW SLAVE STATUS' on this slave. Error_code: 0

You might be lucky if only the slave's relay log is corrupted, as you can fix this with the steps mentioned above. But a corrupted binlog on the master might not be fixable, even though the databases themselves can be repaired. Depending on your time you can try the "SQL_SLAVE_SKIP_COUNTER" approach from above, but often the only way out is to set up the replication from scratch.

Setting up replication from scratch

There are circumstances forcing you to start replication from scratch. For instance, when a server goes live for the first time, all those test imports don't need to be replicated to the slave anymore, as this might take hours. My quick note for this (consider backing up your master database before!):

slave:  STOP SLAVE;
slave:  RESET SLAVE;
slave:  SHOW CREATE DATABASE datenbank;
slave:  DROP DATABASE datenbank;
slave:  CREATE DATABASE datenbank;

master: SHOW CREATE DATABASE datenbank;
master: DROP DATABASE datenbank;
master: CREATE DATABASE datenbank;
master: RESET MASTER;

slave:  CHANGE MASTER TO MASTER_USER='slave-user',
        MASTER_PASSWORD='slave-password', MASTER_HOST='master.host';
slave:  START SLAVE;

You have just restarted replication from scratch; check "SHOW SLAVE STATUS" on the slave and "SHOW MASTER STATUS" on the master to verify it.

Deleting unneeded binlog files

Replication needs binlog files: a MySQL file format for storing database changes in a binary format. Sometimes it is hard to decide how many of the binlog files you want to keep on the server, which can get you into disk space trouble. Therefore deleting binlog files that have already been transferred to the slave is a smart idea when running low on space.

First you need to know which binlog files the slave has already fetched. You can find this out by having a look at "SHOW SLAVE STATUS;" on the slave. Then log into the MySQL master and run something like:

mysql> PURGE BINARY LOGS TO 'mysql-bin.010';

You can even do this based on a date:

mysql> PURGE BINARY LOGS BEFORE '2008-04-02 22:46:26';
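If you keep running into this, MySQL can also expire old binlogs on its own via the expire_logs_days variable. A my.cnf sketch (seven days is an arbitrary choice; make sure the window is longer than the longest downtime you expect for any slave):

```
[mysqld]
# Automatically purge binlog files older than seven days.
expire_logs_days = 7
```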

Conclusion

The above hints might save you some time when recovering or troubleshooting a MySQL server. Please note that these are just hints and that you have to make sure, at any time, that your data has an up-to-date backup. Nothing will help you more.

My (unofficial) package of the day: 3ware-cli and 3dms for monitoring 3ware raid controllers

Having a real hardware RAID controller is a nice thing: especially in a server setup it helps you keep data safe across multiple disks. A common mistake, though, is having a RAID controller and not monitoring it. Why? Let's say you have a simple RAID 1 array (one disk mirrored to another) and one of the disks fails. If your RAID works, the system will just continue to run. But if you did not set up monitoring, you won't notice, and the chance of a total data loss increases as you are now running on a single disk.

So monitoring a RAID is actually the step that makes your RAID system as safe as you wanted it to be when setting it up. Some RAIDs are quite easy to monitor, like a Linux software RAID; some need special software. As I recently got a bunch of dedicated (Hetzner DS8000 and other) servers with 3ware RAID controllers, I checked the common software repositories for monitoring software and was surprised not to find anything suitable. Some web research showed that there are Linux tools from 3ware, but of course they don't provide .deb packages, so you need to take care of this yourself if you don't want to install the software manually.

But there exists an unofficial Debian repository by Jonas Genannt (thank you!), providing recent packages of the 3ware utilities under http://jonas.genannt.name/. Check the repository; it offers 3ware-3dms and 3ware-cli. 3ware-3dms is a web application for managing your RAID controller via browser, BUT: think twice if you want this. The application opens a privileged port (888), is not able to bind to the local interface only, and has a poor user authentication system. As I am not a friend of opening ports and closing them afterwards via firewall, I dropped the web solution.

The "3ware-cli" utility is just a command line interface to 3ware controllers. Grab the .deb from the repository above and install it via "dpkg -i xxx.deb". Afterwards you can start asking your controller about its status. The command is called "tw_cli", so let's give it a try with "info" as parameter:

# tw_cli info
Ctl   Model        (V)Ports  Drives   Units   NotOpt  RRate   VRate  BBU
------------------------------------------------------------------------
c0    8006-2LP     2         2        1       0       2       -      -

tw_cli just told us that there is one controller (meaning a real piece of RAID hardware) called "c0" with two drives. Now we want more detailed information about this controller:

# tw_cli info c0
 
Unit  UnitType  Status         %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
------------------------------------------------------------------------------
u0    RAID-1    OK             -       -       -       232.885   ON     -      
 
Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     232.88 GB   488397168     6RYBP4R9
p1     OK               u0     232.88 GB   488397168     6RYBSHJC

tw_cli reports that controller c0 has one unit "u0". A unit is the device your operating system works with: the "virtual" RAID drive provided by the controller. There are two ports/drives in this unit, called "p0" and "p1". Both of them report "OK" as status, meaning that the drives are running fine.

You can also query a drive directly by asking tw_cli for the port on the controller:

# tw_cli info c0 p0

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p0     OK               u0     232.88 GB   488397168     6RYBP4R9            

# tw_cli info c0 p1

Port   Status           Unit   Size        Blocks        Serial
---------------------------------------------------------------
p1     OK               u0     232.88 GB   488397168     6RYBSHJC

So you might already have gotten the idea: as tw_cli is just a command line tool, your task for an automated setup is a cronjob that regularly checks the status of the ports (not the unit! the ports – trust me) and sends a mail or Nagios alarm when necessary. I started writing a little shell script which, right now, just returns an exit status: 0 for a working RAID and 1 for a problem:

#!/bin/bash

# Configuration: the unit, controller and ports to check.
UNIT=u0
CONTROLLER=c0
PORTS=( p0 p1 )

# Check one line of "tw_cli info" output. Without an argument the unit
# line is checked (status in field 3); with a port as argument the port
# line is checked (status in field 2).
tw_check() {
  local regex=${1:-${UNIT}}
  local field=3
  if [ $# -gt 0 ]; then
    field=2
  fi
  local check=$(tw_cli info ${CONTROLLER} $1 \
    | awk "/^${regex}/ { print \$${field} }")
  [ "XOK" = "X${check}" ]
}

tw_check || exit 1
for PORT in "${PORTS[@]}"; do
  tw_check "${PORT}" || exit 1
done
exit 0

As you can see, you can configure unit, controller and ports. I have not checked this against systems with multiple controllers or units, as I don't have such a setup. But if needed, you could just move the configuration into a sourced configuration file.
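The awk part is the fragile bit of the script, and it can be exercised without a real controller by feeding it the sample lines from the tw_cli sessions above. A minimal sketch of that parsing logic:

```shell
#!/bin/sh
# Sample lines copied from the tw_cli output shown above.
unit_line='u0    RAID-1    OK             -       -       -       232.885   ON     -'
port_line='p0     OK               u0     232.88 GB   488397168     6RYBP4R9'

# Status is field 3 on unit lines and field 2 on port lines,
# just like in the script's tw_check function.
unit_status=$(printf '%s\n' "$unit_line" | awk '{ print $3 }')
port_status=$(printf '%s\n' "$port_line" | awk '{ print $2 }')

if [ "$unit_status" = "OK" ] && [ "$port_status" = "OK" ]; then
  echo "RAID OK"       # prints "RAID OK" for the sample lines
else
  echo "RAID DEGRADED"
fi
```

Replacing the sample lines with a failing drive's output (anything but "OK" in the status column) flips the result, which is a cheap way to test your alerting path before a disk actually dies.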

After writing this little summary I checked all the servers I am responsible for and noticed that nearly every one with a hardware RAID has a 3ware controller and can be checked with tw_cli. Fine…

Let me know how you manage your 3ware raid monitoring under GNU/Linux and Debian/Ubuntu based systems.

my package of the day – sash – the Stand Alone SHell for system recovery

Let me introduce you today to a package that is quite unknown, as you hopefully never need it. But when you need it and have not thought about it before, it is probably already too late. I am talking about "sash", the "Stand Alone SHell". Yet another shell? Yes and no. Yes, it is a shell, but no, I am not trying to show you something like the shiny friendly interactive shell or (my favorite) "zsh". Quite the contrary: you can give "sash" a lot of attributes, but not "shiny".

So what is it about? Imagine the following case: you are running a machine and suddenly something goes totally wrong. Partition errors, missing libraries, you have messed around with libc, whatever. This can get you into serious trouble. You are fine when you have the possibility to boot a recovery CD or something similar. But under some circumstances you might have to stick to the programs already installed, though they seem to be broken. Maybe it is a virtual server somewhere on the web and you are only allowed to boot into a recovery mode giving you a prompt to your server. So you try to log in as root, but it just does not work for some reason. Broken dependencies. Who knows.

The point is: when you log in to a machine for system recovery, you are already relying on a lot of tools and dependencies, though it only seems to be a shell. The shell might be linked against a couple of libraries, a lot of commands you want to run are not built in, and therefore a bunch of external dependencies can bar your way. So what you actually need in a situation of severe pain is a shell that provides as many essential tools as possible on its own, without relying on external code.

Installing sash

This is where "sash" comes into play. Sash is not a dynamically linked executable; it has all needed features built in. So as long as you can execute the sash binary, you have a working shell. Let's check it out! Install "sash" using "aptitude install sash" or your preferred package manager. Please note that sash will clone your current root account:

cloning root account entry to create sashroot account in /etc/passwd
Cloned sashroot from root in /etc/passwd
cloning root account entry to create sashroot account in /etc/shadow
Cloned sashroot from root in /etc/shadow

So you have this new line in your /etc/passwd:

sashroot:x:0:0:root:/root:/bin/sash

You should consider giving sashroot a password if you want to be able to log in with this account. But please check whether this fits your security needs.

See the difference

Now let's check how the sash binary differs from the standard shells, bash and zsh. We are using "ldd" for this, as it lists the libraries an executable is linked against:

[screenshot: ldd run against bash, zsh and sash]

Pretty impressive: all "normal" shells have at least three dependencies, sash apparently has none.

But getting rid of external libraries is not the only difference sash makes. Another major feature is the collection of built-in commands:

-ar, -chattr, -chgrp, -chmod, -chown, -cmp, -cp,
-dd, -echo, -ed, -grep, -file, -find, -gunzip,
-gzip, -kill, -ln, -ls, -lsattr, -mkdir, -mknod,
-more, -mount, -mv, -printenv, -pwd, -rm, -rmdir,
-sum, -sync, -tar, -touch, -umount, -where

Seems like a list of commands you yearn for when in recovery mode, doesn't it? Note the leading "-" at the beginning of those commands. This is the way sash handles your attempts to run internal and external commands: when using "mv", sash gives you the normal /bin/mv; when using "-mv", sash provides its own replacement. But sash also helps you when you don't want to type the "-" at the beginning of every command: enter "aliasall" in a sash session and it will create non-permanent aliases for all builtin commands:

[screenshot: "aliasall" in a sash session]

Emergency

In case of an emergency you might need to boot directly into sash, as maybe even your initrd is broken. How? Just append "init=/bin/sash" to your kernel command line, be it in lilo or grub. This way you will be dropped directly into a sash session.
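For grub (legacy) this could look like the following sketch of a /boot/grub/menu.lst entry; kernel version and root device are made up, so use the values from your existing default entry:

```
title  Debian GNU/Linux (recovery via sash)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/sda1 ro init=/bin/sash
```

Leaving out the initrd line is intentional here: if your initrd is the broken part, booting the kernel straight into the static sash binary sidesteps it (provided your root filesystem is mountable by the kernel itself).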

What’s missing?

Sadly, one essential command is missing: fsck. As the sash manual points out, fsck is just way too big to be included in a statically linked binary. Sad, but true. But hey: better to be able to at least act on the console than to have no console at all.

Sash as a standard shell?

… is not a good idea. It just lacks a lot of features you'll really want when working on the command line: a verbose prompt, a command history, tab completion and so on.

So it's time to install sash now, as you will miss it when it's too late :)
(And just if you’d like to ask: Yes, at least once I really needed sash and it helped me to save a lot of time.)