Author Archives: elenst

RQG: SlaveCrashRecovery Reporter

May 8th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

The reporter can be used to test crash-safety of replication.

It is a periodic reporter: every 30 seconds it kills the slave server with SIGKILL and immediately restarts it on the old data directory, with the same parameters as before. After the restart, the reporter checks that both the server itself and replication started all right.

The reporter itself does not check the consistency of the data, but it can be used together with the ReplicationConsistency reporter.

It is supposed to be used in a setup where the server is started without MTR involvement.
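For the record, the kill-and-restart cycle boils down to something like the following Python sketch. The actual reporter is Perl code inside RQG; this is only an illustration of the idea, and the replication check via the slave status is omitted:

```python
import os
import signal
import subprocess
import time

def crash_and_restart(proc, argv, wait=0.5):
    """Kill a server process with SIGKILL, restart it with the same
    arguments (and hence the same data directory), and verify it stays up."""
    os.kill(proc.pid, signal.SIGKILL)   # simulate a crash
    proc.wait()                         # reap the killed process
    restarted = subprocess.Popen(argv)  # same command line as before
    time.sleep(wait)                    # give it a moment to come up
    if restarted.poll() is not None:    # exited => crash recovery failed
        raise RuntimeError("server did not come back after SIGKILL")
    return restarted
```

The real reporter repeats this every 30 seconds and additionally verifies that replication resumed.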

Some more information at

RQG: LimitRowsExamined Transformer

May 8th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

There are many things in RQG for which it’s not obvious what they do by a quick look. It becomes especially embarrassing though when those are things that you developed yourself; so, I’ll try to keep track of those at least, and maybe add some records for legacy components when I can.

The LimitRowsExamined transformer checks whether the original query already contains a ROWS EXAMINED clause. If it does not, it adds the clause either after the LIMIT clause, or at the end of the query. In any case (even if ROWS EXAMINED was already there), the transformer returns the following sequence of statements:

* the query with ROWS EXAMINED
* a query which sums up status variables related to examined rows

The result of the main query is checked to be a subset of the original query’s result set. The sum of status variables is checked to be not greater than the limit provided in the ROWS EXAMINED clause, plus a margin. The margin is configured in the transformer.
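The clause-injection part can be illustrated with a small Python sketch. The real transformer is Perl, this version ignores corner cases such as subqueries, and the default limit value here is made up:

```python
import re

def add_rows_examined(query, limit=10000):
    """Append a ROWS EXAMINED clause unless the query already has one.
    The clause goes after the LIMIT clause if present, otherwise at the end."""
    if re.search(r'ROWS\s+EXAMINED', query, re.IGNORECASE):
        return query                      # already there, leave it alone
    clause = f" ROWS EXAMINED {limit}"
    m = re.search(r'\bLIMIT\s+\d+(\s*,\s*\d+)?', query, re.IGNORECASE)
    if m:
        end = m.end()                     # insert right after LIMIT
        return query[:end] + clause + query[end:]
    return query + clause                 # no LIMIT: append at the end
```

Either way the resulting statement is valid MariaDB syntax, since ROWS EXAMINED is an extension of the LIMIT clause.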

Some more information at

MySQL ODBC connector: build and run tests

January 29th, 2013 | Posted by elenst in Pensieve | Testing - (Comments disabled)

I spent some time on it, and I have a feeling I'll have to do it again some day, so here it goes. I'm assuming I've downloaded the source package of the connector (in my case it was 5.2.3) and have either installed the MySQL client library, or have the binaries and headers somewhere.

I will of course need the usual build tools, and in addition to that, unixODBC and unixODBC-devel (that's what they are called in Fedora's yum; I didn't check others yet).

Then, if we have MySQL client library installed system-wide, I run
cmake -G "Unix Makefiles" -DWITH_UNIXODBC=1

And if we have it in a secret place, I run
cmake -G "Unix Makefiles" -DWITH_UNIXODBC=1 -DMYSQL_INCLUDE_DIR=<secret folder with MySQL/MariaDB include files> -DMYSQL_LIB_DIR=<secret folder with MySQL/MariaDB client libs>

Either way, just hope it will find all it needs. What it needs is another story: I've heard today that the connector is *supposed* to link statically, only it does not. It links dynamically if it can, which is kind of supported by the connectors manual. It's rather unfortunate, since it doesn't seem to work all that well with the current MySQL client library:

$ ldd -r lib/
=> (0x00007fffb49ff000)
=> /lib64/ (0x00007f5d1fdb8000)
=> /mysql-5.5.29/lib/ (0x00007f5d1f959000)
=> /lib64/ (0x00007f5d1f73d000)
=> /lib64/ (0x00007f5d1f442000)
=> /lib64/ (0x00007f5d1f22f000)
=> /lib64/ (0x00007f5d1f02b000)
=> /lib64/ (0x00007f5d1ec74000)
=> /lib64/ (0x00007f5d1ea6a000)
=> /lib64/ (0x00007f5d1e862000)
/lib64/ (0x000000353c200000)
undefined symbol: dynstr_append_os_quoted (lib/
undefined symbol: dynstr_append (lib/
undefined symbol: strfill (lib/
undefined symbol: init_dynamic_string (lib/
undefined symbol: dynstr_realloc (lib/
undefined symbol: dynstr_free (lib/
undefined symbol: dynstr_append_mem (lib/

Anyway, as long as it builds, we can run at least some tests from the test folder. There is even a perl script, but it doesn't work for me, and neither does make test. I don't know why, and for now I don't care. I just need to get something running.

So, I set 2 variables just like the manual says:


The second one looks stupid, but I set it anyway. Whatever it takes.

In odbc.ini, I add


In odbcinst.ini, I add


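The actual entries didn't survive in this post; a plausible minimal pair, with the DSN name matching the test run below, and all paths, options and driver names being my assumptions, would be:

```ini
; odbc.ini -- the DSN used by the tests; everything here is an assumption
[elenstdsn]
Driver   = myodbc
SERVER   = localhost
UID      = root
DATABASE = test
SOCKET   = /tmp/mysql.sock

; odbcinst.ini -- points the driver name at the freshly built library
[myodbc]
Driver = /path/to/build/lib/libmyodbc5w.so
```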
Now, if I didn’t forget to start the server (importantly, on /tmp/mysql.sock, otherwise tests fail somewhere in the middle, even though they get the socket as a parameter), I can run

./test/my_basics elenstdsn root test /tmp/mysql.sock

I can't say it works, that would be an exaggeration, but at least it does something marginally sensible.

Setting static IP on Ubuntu 12.10

January 20th, 2013 | Posted by elenst in Pensieve - (Comments disabled)

Sound advice I found here after torturing myself and my newly installed VM for quite a while:

In /etc/network/interfaces

auto eth0
iface eth0 inet static

(dns-nameservers == gateway == router also works for me)
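The address lines got lost from the snippet above; a complete static stanza, with example private-network addresses substituted for my real ones, looks like:

```
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1
```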

Packages to get MariaDB and tests up and running

January 18th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)


It's often a pain to guess package names when you need to install stuff on, let's say, CentOS. So here is a list, though maybe not a complete one, of what I needed to get another VM to build and run the MariaDB server and to execute at least some tests on it (all done via yum install):


Same in one line, for lazy me:
sudo yum install cmake gcc ncurses-devel bison g++ gcc-c++ perl-DBD-MySQL gdb libaio-devel openssl-devel

To install bzr (if it’s not in the official repo):

su -c ‘rpm -Uvh’
(check the architecture)

and then bzr can be installed with yum install

Another story (taken from
su -c 'rpm -Uvh'

In newer distributions bzr seems to be already in place.

To install jemalloc:

Search for recent versions of

download them, install.

To install pcre-devel:

Search for recent versions of

download them, install.


Pretty much the same stuff on Debian/Ubuntu, but a few names are different:






The same in one line:
sudo apt-get install cmake gcc make ncurses-dev bison g++ gdb valgrind libaio-dev libssl-dev libdbi-perl libdbd-mysql-perl libjemalloc1 libjemalloc-dev libpcre3 libpcre3-dev zlib1g-dev

To build PAM plugin:




The same in one line:
sudo zypper install cmake gcc make ncurses-devel bison gcc-c++ gdb valgrind libaio-devel perl-DBD-mysql

Collecting coverage info for a patch in a human-readable form

December 12th, 2012 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

Usually we collect coverage information for a certain source file or directory. gcov/lcov and genhtml are good enough for that. But what I'm actually interested in is coverage for a patch, or, in other words, for the bzr diff between the base tree and the new code. A patch might affect dozens of files, a few lines in each, and it's a pain to go through all the HTML reports and compare them to the diff to find out which added/modified lines are hit and which aren't. I'm pretty sure I'm going to be inventing a wheel here, but sometimes it's easier and faster than to find one that fits.

So, what do I need to do?
For now I will hope that a developer rebases before pushing a patch — that is, the code change is one or several revisions over the base tree, rather than a mix of patch-related revisions and base tree revisions. In the latter case, I’m in trouble.

I’ll need gcov and lcov to be installed and be on the PATH.

First, clean up the source tree and build with

Or, if it’s already built, run
lcov --directory <basedir> --zerocounters
to remove all leftovers from previous tests. I want clean data.

Then, run the tests. If it’s RQG combinations, make sure not to use --gcov option, as it cleans up after each run and collects the data separately, while I want the total coverage. If it’s MTR, also no need to use --gcov option, I’ll process what I need by myself.

I think that the tests also need to shut down the server properly (rather than kill it in our usual savage fashion). Not that I'm completely sure it's important, but let's do it just to be safe.

When the tests are finished, run
lcov --quiet --directory <basedir> --capture --output-file <basedir>/

It will create a nice text file. Well, maybe it's not that nice to read, but it's a Perl coder's dream. And we have a description of its format. So it's all good.

I also need the actual patch file. It can be either taken from commit mails, or produced by
bzr diff -r<last base revision>

So, I have a patch file and an lcov info file.
From the patch file, I need the '+' lines with their numbers and the names of the source files; in the lcov file, I need to find coverage info for these code lines, using the source name and the line number. Probably the branch info too, while we're at it. It requires a bit of scripting, but it's not nuclear physics, is it?
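The matching itself can be sketched in a few lines of Python (the real script is Perl; the file names in the test data are made up, and branch records are ignored here):

```python
import re
from collections import defaultdict

def added_lines(diff_text):
    """Map each target file in a unified diff to the set of line numbers
    introduced by '+' lines."""
    added = defaultdict(set)
    fname, lineno = None, 0
    for line in diff_text.splitlines():
        if line.startswith('+++ '):
            fname = line[4:].split('\t')[0]
        elif line.startswith('@@'):
            # '@@ -a,b +c,d @@' -- c is the first line of the new hunk
            lineno = int(re.search(r'\+(\d+)', line).group(1)) - 1
        elif fname and line.startswith('+'):
            lineno += 1
            added[fname].add(lineno)
        elif fname and not line.startswith('-'):
            lineno += 1            # context lines advance the new file too
    return added

def lcov_hits(info_text):
    """Map each source file in an lcov info file to {line: hit count},
    taken from its 'DA:<line>,<count>' records."""
    hits, fname = defaultdict(dict), None
    for line in info_text.splitlines():
        if line.startswith('SF:'):
            fname = line[3:]
        elif line.startswith('DA:'):
            n, count = line[3:].split(',')[:2]
            hits[fname][int(n)] = int(count)
    return hits
```

Intersecting the two maps per file then tells which patched lines were hit and which were missed.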

perl ~/mariadb-toolbox/scripts/ --help

The script produces an lcov summary of gcov data stored in the basedir
(or uses already existing lcov info file), and extracts the coverage data related
to the code patch or diff

Usage: perl <options>

--basedir=<path>: source code work tree
needed if there is no lcov info file or patch/diff file yet;
it is also used to remove the prefix from absolute paths in

--diff-file=<path>: a patch or bzr diff file;
if there is no such file yet, it will be generated by bzr diff

--prev-revno=<bzr revision>: a revision to compare with the work tree;
-2 by default (meaning last but one)

--lcov-info=<path>: a coverage summary file produced by lcov;
if there is no such file yet, it will be generated

--branch-info: include branch coverage info into the report
(FALSE by default)

--debug: script debug output

--help: print this help and exit

Make sure basedir is correct and is not a symlink! In other words, it should be the same as in, otherwise the result will be very confusing.

Example of the command line:
perl ~/mariadb-toolbox/scripts/ --basedir=/data/repo/bzr/5.5 --diff-file=/home/elenst/bzr/5.5/3733.dif --lcov-info=/home/elenst/bzr/5.5/ --branch-info 1>3733.coverage 2>missings

If we are testing something other than the tip of the tree, the line numbers in the patch diff might be off, and that will produce a totally wrong coverage report.
We need to adjust them. Here is one way to do it; it's ugly, but it worked for me today:

1. run
bzr diff -c ${revno} > ${revno}.dif

It will produce the diff between the target revision and the previous one.

2. branch the tree locally, e.g.
bzr branch 5.5 5.5-temporary

3. go to the temporary branch and apply the created patch in the reverse mode:
patch -R -p0 < ${revno}.dif

Hopefully it will apply all right (with lines properly shifted). If it fails on test/result files, it can be ignored, we're only interested in the code.

4. If it worked, commit the temporary tree (don't push!):
bzr commit

Thus we'll have in 5.5 the tip of the tree (including the patch), and in 5.5-temporary the tip of the tree minus the patch. Now we just need to run the bzr diff.
Go back to the main tree.

5. bzr diff --old ../5.5-temporary > ${revno}.dif.adjusted

Now in ${revno}.dif.adjusted we should have the very same patch, but with the right line numbers. Use it instead of the original one in the script command line.

RQG: grammar keywords

December 11th, 2012 | Posted by elenst in Pensieve | Testing - (Comments disabled)

Since I'm totally lost in the github version of the RQG documentation, and can only find things there by pure chance, I will start adding here links to useful parts and pages I ran into; and if I don't find something and have to figure it out by myself, I will put it here too. Maybe later, if it has enough content, we will move it to the AskMonty KnowledgeBase, but so far let it be here.

There used to be a nice list of keywords (e.g. _table, _field etc.) that can be used in a grammar file, but I can't find it anymore. So, let's see what the generator accepts and returns.

letter || _letter :
$_ = $prng->letter();

digit || _digit :
$_ = $prng->digit();

_table :
$tables = $executors->[0]->metaTables($last_database);
$last_table = $prng->arrayElement($tables);
$_ = '`'.$last_table.'`';

_field :
$fields = $executors->[0]->metaColumns($last_table, $last_database);
$_ = '`'.$prng->arrayElement($fields).'`';

_hex :
$_ = $prng->hex();

_cwd :
$_ = "'".$cwd."'";

_tmpnam || tmpnam || _tmpfile :
# Create a new temporary file name and record it for unlinking at the next statement
$generator->[GENERATOR_TMPNAM] = tmpdir()."gentest".abs($$).".tmp" if not defined $generator->[GENERATOR_TMPNAM];
$_ = "'".$generator->[GENERATOR_TMPNAM]."'";
$_ =~ s{\\}{\\\\}sgio if osWindows();   # Backslash-escape backslashes on Windows

_tmptable :
$_ = "tmptable".abs($$);

_unix_timestamp :
$_ = time();

_pid :
$_ = abs($$);

_thread_id :
$_ = $generator->threadId();

_thread_count :

_database || _db || _schema :
my $databases = $executors->[0]->metaSchemas();
$last_database = $prng->arrayElement($databases);
$_ = '`'.$last_database.'`';

_field_list :
my $fields = $executors->[0]->metaColumns($last_table, $last_database);
$_ = '`'.join('`,`', @$fields).'`';

_field_count :
my $fields = $executors->[0]->metaColumns($last_table, $last_database);
$_ = $#$fields + 1;

_field_next :
# Pick the next field that has not been picked recently and increment the $field_pos counter
my $fields = $executors->[0]->metaColumns($last_table, $last_database);
$_ = '`'.$fields->[$field_pos++ % $#$fields].'`';

_field_no_pk :
my $fields = $executors->[0]->metaColumnsTypeNot('primary',$last_table, $last_database);
$_ = '`'.$prng->arrayElement($fields).'`';

_field_indexed || _field_key :
my $fields_indexed = $executors->[0]->metaColumnsType('indexed',$last_table, $last_database);
$_ = '`'.$prng->arrayElement($fields_indexed).'`';

_field_unindexed || _field_nokey :
my $fields_unindexed = $executors->[0]->metaColumnsTypeNot('indexed',$last_table, $last_database);
$_ = '`'.$prng->arrayElement($fields_unindexed).'`';

_collation :
my $collations = $executors->[0]->metaCollations();
$_ = '_'.$prng->arrayElement($collations);

_collation_name :
my $collations = $executors->[0]->metaCollations();
$_ = $prng->arrayElement($collations);

_charset :
my $charsets = $executors->[0]->metaCharactersets();
$_ = '_'.$prng->arrayElement($charsets);

_charset_name :
my $charsets = $executors->[0]->metaCharactersets();
$_ = $prng->arrayElement($charsets);

_data :
$_ = $prng->file($cwd."/data");
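As a quick illustration (my own toy example, not from any real grammar), a rule using some of these keywords might look like:

```
query:
	SELECT _field FROM _table WHERE _field_indexed > _digit LIMIT _digit ;
```

On generation, each keyword is replaced according to the code above, so one run might yield something like SELECT `a` FROM `t1` WHERE `pk` > 7 LIMIT 3.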

# Additional logic, another time...

} elsif (
($field_type == FIELD_TYPE_NUMERIC) ||
($field_type == FIELD_TYPE_BLOB)
) {
$_ = $prng->fieldType($_);
} elsif ($field_type) {
$_ = $prng->fieldType($_);
if (
(substr($orig_item, -1) eq '`') ||
(substr($orig_item, 0, 2) eq "b'") ||
(substr($orig_item, 0, 2) eq '0x')
) {
# Do not quote, quotes are already present
} elsif (index($_, "'") > -1) {
$_ = '"'.$_.'"';
} else {
$_ = "'".$_."'";
} elsif (substr($_, 0, 1) eq '_') {
$item_nodash = substr($_, 1);
if ($prng->isFieldType($item_nodash)) {
$_ = "'".$prng->fieldType($item_nodash)."'";
if (index($_, "'") > -1) {
$_ = '"'.$_.'"';
} else {
$_ = "'".$_."'";

# If the grammar initially contained a ` , restore it. This allows
# The generation of constructs such as `table _digit` => `table 5`

if (
(substr($orig_item, -1) eq '`') &&
(index($_, '`') == -1)
) {
$_ = $_.'`';

$invariants{$orig_item} = $_ if $modifier eq 'invariant';

Test installation of MariaDB-Galera RPM packages from buildbot using yum

November 30th, 2012 | Posted by elenst in Pensieve | Testing - (Comments disabled)

Buildbot only builds MariaDB-Galera-X.Y.Z-system-arch-server packages, and the rest is supposed to be taken from the standard repositories. In real life the package will live in the same repository, so everything should work smoothly. But I need to test it before it gets there…

- start a nice clean VM, preferably with the standard MariaDB repository already configured, but if not, then add
/etc/yum.repos.d/MariaDB.repo with the contents like

# MariaDB 5.5 repository list - created 2012-11-29 22:41 UTC
name = MariaDB
baseurl =

or take the right link in the repository tool;

- download MariaDB-Galera-X.Y.Z-system-arch-server.rpm from the buildbot archive;

- run under root or sudo
yum install ./MariaDB-Galera-X.Y.Z-system-arch-server.rpm --nogpgcheck

If it works, also run

sudo yum install MariaDB-client

The server should start as is, but won’t be a real cluster node (as of 5.5.28 at least), because it is not pre-configured. Add Galera-related options to /etc/my.cnf file, e.g.


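The option list was lost from this post; a plausible minimal set for 5.5-era Galera would be something like the following (the provider path and the empty cluster address are my assumptions, adjust them for a real cluster):

```ini
[mysqld]
# where the Galera provider library typically lands from the galera package
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
# empty gcomm:// bootstraps a new cluster; list other nodes to join one
wsrep_cluster_address=gcomm://
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
```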
/etc/init.d/mysql start

and pray to your tester’s supersomething.

MySQL type sizes

August 27th, 2012 | Posted by elenst in Pensieve - (Comments disabled)

Another piece of trivia which sometimes is not easy to find fast, and which easily gets demoted into the cold cache (if that) of my brain when it isn't used.

CHAR(M) 255 characters
VARCHAR(M) 65,535 characters
TINYTEXT 255 characters
TEXT 65,535 characters
MEDIUMTEXT 16,777,215 characters
LONGTEXT 4,294,967,295 characters
BINARY 255 bytes
VARBINARY 65,535 bytes
TINYBLOB 255 bytes
BLOB 65,535 bytes
MEDIUMBLOB 16,777,215 bytes
LONGBLOB 4,294,967,295 bytes
TINYINT -128 — 127
SMALLINT -32,768 — 32,767
MEDIUMINT -8,388,608 — 8,388,607
INT -2,147,483,648 — 2,147,483,647
INT UNSIGNED 4,294,967,295
BIGINT -9,223,372,036,854,775,808 — 9,223,372,036,854,775,807
BIGINT UNSIGNED 18,446,744,073,709,551,615
FLOAT precision 0..23
DOUBLE precision 24..53
DECIMAL 65 digits
ENUM 65,535 members
SET 64 members
BIT 1..64 bits
DATE 1000-01-01 — 9999-12-31
TIME -838:59:59 — 838:59:59
DATETIME 1000-01-01 00:00:00 — 9999-12-31 23:59:59
TIMESTAMP 1970-01-01 00:00:00, 1970-01-01 00:00:01 — 2038-01-19 03:14:07 UTC
YEAR(4) 1901 — 2155
YEAR(2) 1970 — 2069

Installing Java on Ubuntu — meant to be easy, right?

August 22nd, 2012 | Posted by elenst in Pensieve - (Comments disabled)

Actually, I was installing Cassandra. But that is another story, which, I'm afraid, will be coming soon.

So, Cassandra wants Java, specifically
Depends: openjdk-6-jre-headless (>= 6b11) but it is not going to be installed

JDK wants tzdata-java, no specifics:
Depends: tzdata-java but it is not going to be installed

tzdata-java wants tzdata, but…
Depends: tzdata (= 2011k-1) but 2012b-0ubuntu0.11.10 is to be installed

Fun, isn’t it? It’s not like I’m trying to pull Java from some obscure repository.

Some digging revealed that 2011k-1 lives in the Ubuntu Main repository, while 2012b (or even 2012e) in the Ubuntu Proposed Main repository. The Proposed Main repository wasn't in my sources list… I wonder where it was pulling it from. (Maybe it was from the Cassandra repo? I didn't think about it back then.) Anyway, even though my sources list is nearly pristine, I prefer to think that it was a problem with my configuration, rather than suspect that the Ubuntu main repository offers conflicting packages.

And 'anyway' again: since we're already playing with 2012e, I'd rather have that one anyway.
So, I added the Proposed Main repository. No luck, still the same incompatibility. I tried to remove the Main repository. No luck again. Well, there are many default Ubuntu repositories in sources.list, and I didn't feel like playing with them all, especially since apt-get update is annoyingly slow. So, I replaced /etc/apt/sources.list with a dummy one which contained only the Proposed Main repository:
deb oneiric-proposed main

And voila!

A stupid workaround, but a fast one if you know what you want but don't know exactly what you're doing.

I even remembered to restore the initial sources.list afterwards, but of course I forgot to ‘apt-get update’ it, and spent a couple of fancy minutes trying to figure out why now Cassandra refuses to install, pretending it had never been there…