External bug reports #2: Build your portfolio

March 3rd, 2014 | Posted by elenst in MariaDB | Testing - (Comments Off)

While user bug reports are the most important ones, there is a category of external reporters which I historically have a special interest in and great expectations for: entry-level testers. I was one, trained some, interviewed many, had a few hired, and have always wanted someone to wake them up and get going before it’s too late.

It is no secret that quality control is not as glamorous as other IT specialities, and there are no famous (or maybe any) student programs for testers. Usually people come into testing because it is deceptively open to newbies, planning to obtain a few points for a CV and switch either to development or to project management as soon as they can. Most do, a few stay.

This creates a vicious cycle. Since this is a well-known pattern, employers hire testers without experience not to teach them, but to let them do brainless mundane work. In turn, newcomers gain a distorted impression of the speciality, get discouraged, and move on long before the job has a chance to grow on them.

But it doesn’t have to be like that. Nowadays a newborn tester can earn good karma before starting their first job, and enter the market as an equal. The job is very different when you are hired as a specialist and not as a cog, the work won’t be nearly as frustrating, and you’ll be able to make a well-thought-out decision about your future.

First CV: skills and portfolio

Since you don’t have any work experience to put on the CV, you usually have to go with education and skills. You can add another important part though — a bug report portfolio.

Particular technical skills are mainly important for the initial screening, when HR staff simply match your CV against a wishlist they have. You still need to gain some to pass, since you mustn’t lie in your resume, ever. But once you get through to the technical interviewers, if they know the first thing about testing, they will look at two main points:

  • whether you know what it means to be a tester;
  • whether you show any potential for being a good one.

Testing is not easy, and your interviewer is painfully aware of that, so it’s important to show that you have already tried, somewhat succeeded, and made a conscious decision to keep doing it.

That’s something you can prove by providing the bug report portfolio.

It reveals a lot about you, much more than just your hunt-fu. While a CV is only something you can say about yourself, your public portfolio shows what the world can say about you as a tester, since your bug reports are evaluated by independent parties. Only a really dumb interviewer could ignore it.

So, it is worthwhile to make sure you create a good one.

How to choose the project(s)?

The common idea is simple: you choose a project with a public bug tracker and start testing it.

If you want to deal with the source code, obviously the project has to be open-source.

If there is a particular field you want to get familiar with, choose a project where you can focus on it. There is no need to worry about testing technologies: you can use whichever you like, or want to learn, and apply to the product of your choice.

Also, the product should be fairly known, so that the interviewer can easily understand the context of reports.

Main traits of a bug report portfolio

An interviewer will try to evaluate your portfolio quickly. Thus, it’s important that all bugs can be viewed, searched and filtered by a user who is not logged in, and that you can create a permanent link to a list of your reports only, preferably with some sorting.

The list should display a summary of each report, its severity/priority from the project’s point of view, and its status/resolution.
It should either contain all your reports, or come with an explanation of which filters were applied and how to remove them. Don’t hide “bad” reports behind obscure filters. Instead, make sure that most of your reports are really good.
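As an illustration, in a JIRA-based tracker such a permanent link can usually be built from a JQL search; the username and server below are made up:

```
# JQL query (unencoded):
reporter = jsmith ORDER BY priority DESC, created DESC

# The same query as a shareable issue-navigator URL:
https://jira.example.org/issues/?jql=reporter%20%3D%20jsmith%20ORDER%20BY%20priority%20DESC%2C%20created%20DESC
```

The ORDER BY part gives the interviewer the most important reports first without any extra clicking.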

Severity/priority shows your hunting skill. It is fine to report minor issues and cosmetic problems if you notice them, but if most of your reports are like that, it just won’t impress the interviewer. One killed bear is worth a dozen quail… Although, be smart about it. People like juicy crashes, but sometimes non-crashing functional bugs are much more important, and hopefully your future interviewers know that.

Status/resolution shows a lot more.

The more fixed bugs you have on your list, the better — it means the reports were valid, and were taken seriously. Confirmed but not fixed bugs are not bad either.

Many “Can’t reproduce” bugs are a very bad sign — it means you either cannot provide reproducible test cases, or cannot communicate them to the project people. Both skills are crucial for a good tester.

Many “Duplicate” bugs are also bad — it means you either don’t bother to search the bug base before filing a bug, or cannot do it effectively. Nobody likes those who don’t use the search.

Many “Not a bug” entries are suspicious — it looks like you don’t check the documentation with due diligence.

Many bugs which were just submitted but never got any feedback don’t say anything about you as a tester; they only show that you probably chose the wrong project to work with.

Watch for common flaws in bug tracking routine

Disclaimer: Any resemblance to bug trackers living or dead should be plainly apparent to those who know them, especially because the author has been grumbling about them for a really long time…

The criteria above are reasonable in theory. Unfortunately, it happens that your portfolio gets spoiled through no fault of your own.

To display the real severity ratio of your reports, the bug base should provide access to all of them. If the critical bugs remain hidden, the visible list will only contain mediocre bugs, and the better your real list was, the less impressive your portfolio will look.

It often happens that a report is initially confirmed, stays in the queue for a (long) while, and eventually gets closed because the bug is no longer reproducible in the current version.
Not only is it unfair towards the reporter, but it is also dangerous for the project. The project’s problem is another story, but for you it means that you’ll have an unfair share of bad “Can’t reproduce” reports on the list.

Wrong “Duplicate” status comes in different flavours.
First, it can be a consequence of hidden bugs. If something is not searchable, you cannot find it, so you reasonably file a new one, and it gets closed as a duplicate.
Second, the problem can be similar to the “Can’t reproduce” one. You file a bug in June, it is confirmed and stays in the queue; then 3 months later somebody else reports the same issue, it gets fixed, and the older bug is marked a duplicate. There is no harm to the product, so developers find this unimportant (but try to call them plagiarists after somebody copies their code, and see if you live through the day…).
Another, more subtle issue with “Duplicate” is that the status can be set after a deep analysis by a developer because, although the bugs look nothing alike, they (allegedly) have the same root cause. Again, doing so is unfair to the reporter and dangerous for the product.

Also, numerous “Not a bug” entries can be caused by a lack of documentation. If you can’t find out how something is supposed to work, you are likely to waste time reporting something that will be dismissed. So, make sure that the product is documented.

Most of these problems do not really concern users or senior testers, beyond maybe irritating them. For you, they are critical. While you’ll have very good excuses, keep in mind that you won’t be around when the interviewer first evaluates your list. So, try to avoid these problems in advance — browse the tracker of your choice, check the history of some reports, see how things are done there. Exceptions can and do happen, but they should remain rare.

Why MariaDB?

Of course, my goal is to convince new testers not only to start testing, but to choose MariaDB over other projects to perfect their skills on. While I’m somewhat biased, I still think that MariaDB is objectively a good choice:

  • It is a crazy open-source mix of various technologies — numerous engines, plugins, connectors, replication, Galera, optimizer… Whatever you want to focus on, unless it is seriously exotic, you will most likely find something related at MariaDB;
  • MariaDB Knowledge Base contains a lot of information about the products; also, for the most part the MySQL manual applies;
  • All our bug reports are public, so the visibility problems do not exist;
  • JIRA, which we use for bug reporting, is hardly the best system in the world, but it does provide enough searching/filtering to fulfill all the needs associated with building a portfolio;
  • We release early, and unstable early releases are a gold mine for eager testers;

And last but not least — it takes one to know one; you can expect that your interests will be taken seriously. We cut some corners when it comes to internal reports, but I’ve been trying to make sure that our external reporters are treated fairly, and I think our developers have started picking up the idea. So, if something was done wrong with your report, you can let us know without fear of being laughed at, and we’ll try to make it right if the request is reasonable.


If you indicate that you are a tester, I can provide you with some hints on how to improve your report, or some other community members might (we don’t bother users with improving reports as long as there is enough for us to go on). If there is demand, the most interesting cases can later be analyzed in blog posts; finally, if you are a regular reporter, and a really good one, we can also provide you with a reference for your first CV.

How to start

It is really this simple.

Obtaining and running oprofile

February 20th, 2014 | Posted by elenst in Pensieve | Testing - (Comments Off)

It’s not that easy to get oprofile nowadays… Shame, really.


For Precise, found it here: https://launchpad.net/ubuntu/+source/oprofile/0.9.6-1.3ubuntu1
Plain deb files, e.g. for amd64:

dpkg --install libopagent1_0.9.6-1.3ubuntu1_amd64.deb oprofile_0.9.6-1.3ubuntu1_amd64.deb

worked like a charm.

For Wheezy, used the Squeeze repo:

deb http://ftp.ru.debian.org/debian squeeze main

Also had to install keys

sudo apt-key adv --recv-keys --keyserver keyserver.ubuntu.com ...

and then apt-get install just worked.


Installing it in the Wheezy VM was useless, though. It turns out that oprofile does not work in a VM.
Well, technically it works if you run sudo modprobe oprofile timer=1, but then it returns something like this:

1043 75.8545 /data/repo/10.0/sql/mysqld my_timer_cycles
313 22.7636 /data/repo/10.0/storage/federatedx/ha_federatedx.so federatedx_io_mysql::mark_position(st_federatedx_result*, void*)
6 0.4364 /data/repo/10.0/sql/mysqld code_state
3 0.2182 /no-vmlinux /no-vmlinux
2 0.1455 /data/repo/10.0/sql/mysqld _db_enter_

Not much use.

On Precise, it required running

sudo bash -c "echo 0 > /proc/sys/kernel/nmi_watchdog"

and then it actually worked.

To run (for mysqld):

sudo opcontrol --init
sudo opcontrol --setup --separate=lib,kernel,thread --no-vmlinux
sudo opcontrol --start-daemon
sudo opcontrol --start

Then do stuff in mysqld that needs to be profiled. Then

sudo opcontrol --dump
sudo opreport --demangle=smart --symbols --long-filenames --merge \
tgid path-to-mysqld | head -n 20
sudo opcontrol --stop
sudo opcontrol --deinit
sudo opcontrol --reset

Building sysbench 0.4

February 17th, 2014 | Posted by elenst in Pensieve | Testing - (Comments Off)

Download from http://netcologne.dl.sourceforge.net/project/sysbench/sysbench/0.4.12/sysbench-0.4.12.tar.gz

Get correct library and include paths by running mysql_config from the MySQL/MariaDB package that you want to use for building sysbench.

If there are two paths returned, like in MariaDB 5.5, the one where mysql.h sits seems to work well.
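As a sketch of how to plug that into configure (the sample strings below are made up; in real use, capture the output of your own mysql_config --include and mysql_config --libs):

```shell
# Sample strings mimicking typical mysql_config output (illustrative paths):
includes="-I/usr/local/mariadb/include/mysql -I/usr/local/mariadb/include/mysql/.."
libs="-L/usr/local/mariadb/lib -lmysqlclient"

# Strip the -I/-L prefixes; when two include paths are returned,
# keep the first one -- the directory where mysql.h sits.
inc_dir=$(printf '%s\n' "$includes" | sed 's/ .*//; s/^-I//')
lib_dir=$(printf '%s\n' "$libs" | sed 's/ .*//; s/^-L//')

echo ./configure --with-mysql-libs="$lib_dir" --with-mysql-includes="$inc_dir"
```

The first include path is kept because, as noted above, the one where mysql.h sits seems to work well.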

To link dynamically, run
./configure --with-mysql-libs= --with-mysql-includes=

To link statically, run
./configure --with-mysql-libs= --with-mysql-includes= --with-extra-ldflags=-all-static

Copy libtool from the system into the sysbench dir, otherwise the build is likely to fail:

cp `which libtool` ./
ls -l ./libtool


After building, check the result:

sysbench/sysbench --version

Building TokuDB unit tests in MariaDB tree

February 16th, 2014 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments Off)

There are some TokuDB *.cc tests in the MariaDB tree, e.g. in /storage/tokudb/ft-index/portability/tests, but they are not built by default.

Generally, to build them, we need a couple of includes and one library:

g++ -c -I<basedir>/storage/tokudb/ft-index/toku_include -I<basedir>/storage/tokudb/ft-index/portability -std=c++11 test-cpu-freq.cc
g++ -o test-cpu-freq test-cpu-freq.o -L<basedir>/storage/tokudb/ft-index/portability -ltokuportability
LD_LIBRARY_PATH=<basedir>/storage/tokudb/ft-index/portability:$LD_LIBRARY_PATH ./test-cpu-freq

But on more conservative systems, there is an additional problem:

test-cpu-freq.cc: In function ‘int main()’:
test-cpu-freq.cc:103:13: error: expected ‘)’ before ‘PRIu64’

There are most certainly some smart solutions for that, but I haven’t found any that works on Wheezy, and to get it working quickly, this is enough:

--- storage/tokudb/ft-index/portability/tests/test-cpu-freq.cc 2013-10-04 20:49:53 +0000
+++ storage/tokudb/ft-index/portability/tests/test-cpu-freq.cc 2014-02-16 14:35:15 +0000
@@ -100,7 +100,7 @@
int r = toku_os_get_processor_frequency(&cpuhz);
assert(r == 0);
if (verbose) {
- printf("%" PRIu64 "\n", cpuhz);
+ printf("%li\n", cpuhz);
return 0;

Also, verbose here is not an option, it’s a constant. To make the test actually verbose, modify it in the code and recompile.

Creating a local deb repository with MariaDB packages

May 22nd, 2013 | Posted by elenst in Pensieve | Testing - (Comments Off)

Based on Daniel’s email and my own experience.

Variables to set:

DEBIAN="/path/to/mkrepo-debian.sh" # (/path/to/mariadb-tools/buildbot/mkrepo-debian.sh)
UBUNTU="/path/to/mkrepo-ubuntu.sh" # (/path/to/mariadb-tools/buildbot/mkrepo-ubuntu.sh)
version="mariadb-10.0.2" # change to whatever makes sense
tree="10.0" # change to whatever tree you are using
tarbuildnum="3544" # change to the correct tar build number
archive_dir="/media/backup/archive/pack" # this is on hasky, but it should be local, so I need to create some ${archive_dir}/${tree} folder locally, and copy build-${tarbuildnum} from hasky into it.

Modify mkrepo-debian.sh and mkrepo-ubuntu.sh scripts to use my GPG key.
If needed, modify the scripts to exclude distributions that aren’t needed, especially if they are not present on hasky.

Might need to install reprepro if it’s not there yet:
apt-get install reprepro

mkdir -v /path/to/where/i/want/${version}/repo
cd /path/to/where/i/want/${version}/repo
eval $(gpg-agent --daemon)
${DEBIAN} debian ${archive_dir}/${tree}/build-${tarbuildnum}
${UBUNTU} ubuntu ${archive_dir}/${tree}/build-${tarbuildnum}

I actually didn’t run eval $(gpg-agent --daemon), don’t even have it installed, and it still worked all right, or so it seemed.

Then, add

deb file:///path/to/where/i/want/${version}/repo/ubuntu precise main
deb-src file:///path/to/where/i/want/${version}/repo/ubuntu precise main

(and comment out the previous entries, if there were any).
Replace ubuntu and precise with whatever you’re on.


sudo apt-get update

If it complains about a GPG key, make sure the key is installed as described in http://elenst.ru/pensieve/it-bits/creating-and-installing-a-local-gpg-key-for-testing-purposes/

Then install as usual.

Creating and installing a local GPG key for testing purposes

May 22nd, 2013 | Posted by elenst in Pensieve | Testing - (Comments Off)

No fancy GUI is necessary. The base gpg is installed on Ubuntu by default.

gpg --gen-key

Use default type, size and duration.
Enter the name and the email address (I suppose it doesn’t matter which as long as it’s for testing only).
Type O to create the key.
Enter a passphrase twice.
Move your mouse like crazy, but try to spare the effort, it might take long and you’ll get tired. When it’s enough, it will tell you it’s enough. Until it does, keep moving. Don’t stop for long, or you’ll have to repeat the exercise.

The file pubring.gpg will appear in $HOME/.gnupg/


sudo apt-key add $HOME/.gnupg/pubring.gpg

Now I can run apt-get update with repos where something is signed with this key.

RQG: SlaveCrashRecovery Reporter

May 8th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments Off)

The reporter can be used to test crash-safety of replication.

It is a periodic reporter: every 30 seconds it kills the slave server with SIGKILL and immediately restarts it on the old data directory, with the same parameters as before. After the server restart, the reporter checks that the server itself and replication started up all right.

The reporter itself does not check the consistency of the data, but it can be used together with the ReplicationConsistency reporter.

It is supposed to be used with runall-new.pl, so that the server is started without MTR involvement.
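A hypothetical invocation could look like the following; the grammar file, duration and paths are made up for illustration, so check the RQG documentation for the exact option names:

```shell
perl ./runall-new.pl \
    --basedir=/path/to/mariadb \
    --grammar=conf/replication/replication.yy \
    --reporters=SlaveCrashRecovery,ReplicationConsistency \
    --duration=600
```

Combining the two reporters gives both crash-recovery checks and a data consistency check in one run.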

Some more information at https://kb.askmonty.org/en/rqg-extensions-for-mariadb-features/

RQG: LimitRowsExamined Transformer

May 8th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments Off)

There are many things in RQG whose purpose is not obvious at a quick look. It is especially embarrassing when those are things you developed yourself; so I’ll try to keep track of those at least, and maybe add some records for legacy components when I can.

The LimitRowsExamined transformer checks whether the original query already contains a ROWS EXAMINED clause. If it does not, it adds the clause either after the LIMIT clause, or at the end of the query. In any case (even if ROWS EXAMINED was already there), the transformer returns the following sequence of statements:

* the query with ROWS EXAMINED
* a query which sums up status variables related to examined rows

The result of the main query is checked to be a subset of the original query’s result set. The sum of the status variables is checked to be no greater than the limit provided in the ROWS EXAMINED clause, plus a margin. The margin is configured in the transformer.
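The transformer itself lives in RQG’s Perl code; purely to illustrate the rewriting rule described above, here is a toy shell sketch (the queries and the limit value are invented):

```shell
# Toy illustration of the rewriting: append ROWS EXAMINED after an
# existing LIMIT clause, or a bare LIMIT ROWS EXAMINED at the end.
add_rows_examined() {
    query="$1"; limit="$2"
    case "$query" in
        *"ROWS EXAMINED"*) printf '%s\n' "$query" ;;  # clause already present
        *LIMIT*)           printf '%s ROWS EXAMINED %s\n' "$query" "$limit" ;;
        *)                 printf '%s LIMIT ROWS EXAMINED %s\n' "$query" "$limit" ;;
    esac
}

add_rows_examined "SELECT * FROM t1 LIMIT 10" 1000
# SELECT * FROM t1 LIMIT 10 ROWS EXAMINED 1000
add_rows_examined "SELECT * FROM t1" 1000
# SELECT * FROM t1 LIMIT ROWS EXAMINED 1000
```

The real transformer of course operates on parsed queries and also emits the follow-up status-variable query described above.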

Some more information at https://kb.askmonty.org/en/rqg-extensions-for-mariadb-features/

MySQL ODBC connector: build and run tests

January 29th, 2013 | Posted by elenst in Pensieve | Testing - (Comments Off)

I spent some time on it, and I have a feeling I’ll have to do it again some day, so here it goes. I assume I’ve downloaded the source package of the connector (in my case it was 5.2.3) and either installed the MySQL client library, or have binaries and headers somewhere.

I will of course need the usual build stuff, and in addition to that, unixODBC and unixODBC-devel (those are the package names for Fedora’s yum; I didn’t check others yet).

Then, if we have the MySQL client library installed system-wide, I run
cmake -G "Unix Makefiles" -DWITH_UNIXODBC=1

And if we have it in a secret place, I run
cmake -G "Unix Makefiles" -DWITH_UNIXODBC=1 -DMYSQL_INCLUDE_DIR=<secret folder with MySQL/MariaDB include files> -DMYSQL_LIB_DIR=<secret folder with MySQL/MariaDB client libs>

Either way, just hope it will find all it needs. What it needs is another story: I’ve heard today that the connector is *supposed* to link statically, only it does not. It links dynamically if it can, which is kind of supported by the connector’s manual. It’s rather unfortunate, since it doesn’t seem to work all that well with the current MySQL client library:

$ ldd -r lib/libmyodbc5w.so
linux-vdso.so.1 => (0x00007fffb49ff000)
libodbc.so.2 => /lib64/libodbc.so.2 (0x00007f5d1fdb8000)
libmysqlclient.so.18 => /mysql-5.5.29/lib/libmysqlclient.so.18 (0x00007f5d1f959000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f5d1f73d000)
libm.so.6 => /lib64/libm.so.6 (0x00007f5d1f442000)
libodbcinst.so.2 => /lib64/libodbcinst.so.2 (0x00007f5d1f22f000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f5d1f02b000)
libc.so.6 => /lib64/libc.so.6 (0x00007f5d1ec74000)
libltdl.so.7 => /lib64/libltdl.so.7 (0x00007f5d1ea6a000)
librt.so.1 => /lib64/librt.so.1 (0x00007f5d1e862000)
/lib64/ld-linux-x86-64.so.2 (0x000000353c200000)
undefined symbol: dynstr_append_os_quoted (lib/libmyodbc5w.so)
undefined symbol: dynstr_append (lib/libmyodbc5w.so)
undefined symbol: strfill (lib/libmyodbc5w.so)
undefined symbol: init_dynamic_string (lib/libmyodbc5w.so)
undefined symbol: dynstr_realloc (lib/libmyodbc5w.so)
undefined symbol: dynstr_free (lib/libmyodbc5w.so)
undefined symbol: dynstr_append_mem (lib/libmyodbc5w.so)

Anyway, as long as it builds, we can run at least some tests from the test folder. There is even a perl script, but it doesn’t work for me, and neither does make test. I don’t know why, and for now I don’t care. I just need to get something running.

So, I set 2 variables just like the manual says:


The second one looks stupid, but I set it anyway. Whatever it takes.

In odbc.ini, I add


In odbcinst.ini, I add


Now, if I didn’t forget to start the server (importantly, on /tmp/mysql.sock, otherwise tests fail somewhere in the middle, even though they get the socket as a parameter), I can run

./test/my_basics elenstdsn root test /tmp/mysql.sock

I can’t say it works, that would be an exaggeration, but at least it does something marginally sensible.

Packages to get MariaDB and tests up and running

January 18th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments Off)


It’s often a pain to guess package names when you need to install stuff on, let’s say, CentOS. So here is a list, although maybe not a full one, of what I needed to get another VM to build and run the MariaDB server and to execute at least some tests on it (all done via yum install):


Same in one line, for lazy me:
sudo yum install cmake gcc ncurses-devel bison g++ gcc-c++ perl-DBD-MySQL gdb libaio-devel openssl-devel

To install bzr (if it’s not in the official repo):

su -c 'rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-2.noarch.rpm'
(check the architecture)

and then I can use yum install

Another story (taken from http://wiki.bazaar.canonical.com/Download):
su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm'

In newer distributions bzr seems to be already in place.

To install jemalloc:

Search for recent versions of

download them, install.

To install pcre-devel:

Search for recent versions of

download them, install.


For Debian/Ubuntu: pretty much the same stuff, but a few names are different:






The same in one line:
sudo apt-get install cmake gcc make ncurses-dev bison g++ gdb valgrind libaio-dev libssl-dev libdbi-perl libdbd-mysql-perl libjemalloc1 libjemalloc-dev libpcre3 libpcre3-dev zlib1g-dev

To build the PAM plugin:




The same in one line, for openSUSE’s zypper:
sudo zypper install cmake gcc make ncurses-devel bison gcc-c++ gdb valgrind libaio-devel perl-DBD-mysql