MariaDB 10.0 comes with ~50 engines and plugins, and it is distributed in 35 package sets (34 binary ones and a source tarball).

Every day people come asking on #maria IRC whether package X contains engine Y, or saying that it doesn’t, or wondering whether it should. Remembering all the combinations isn’t easy, and it became impractical to study build logs or package contents every time, so I ended up with a cheat sheet for 10.0.10 GA. At the very least it should help me answer those questions; even better if somebody else finds it useful.

The tables below refer to the contents of the packages provided at downloads.mariadb.org or on MariaDB repository mirrors. Packages built by distributions might have different contents and are not covered here.

Legend

— built-in (also known as static):
the plugin comes as part of the server binary. It can be disabled or enabled by default, but even when it is disabled, it is still known to the server and shown in SHOW ENGINES or SHOW PLUGINS with the corresponding status.

— dynamic library:
the plugin is installed as an .so or .dll file in the plugin_dir. To start using it, you might need to run INSTALL SONAME '<file name>' or add plugin-load-add='<file name>' to your configuration file. If you do neither, the engine/plugin will not be shown in SHOW ENGINES or SHOW PLUGINS. Currently this is the most common reason why people complain about “missing” engines.

Please note that you can run
SELECT PLUGIN_NAME, PLUGIN_STATUS, PLUGIN_LIBRARY, PLUGIN_DESCRIPTION
FROM INFORMATION_SCHEMA.ALL_PLUGINS
to see available plugins.

— separate package: the plugin is provided as a separate .rpm or .deb package in the repository, along with the server and client packages. To start using it, you need to install the package first (it will put the library into the plugin folder), and then possibly enable it as a usual dynamic library.
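For illustration, with the Connect engine the whole flow might look like this on a Debian-based system (the package name below is an assumption; check your repository for the exact one):

```shell
# Hypothetical example -- the package name varies between distributions
# and MariaDB versions; 'ha_connect' is the Connect engine library name.
sudo apt-get install mariadb-connect-engine-10.0   # drops ha_connect.so into plugin_dir
mysql -u root -p -e "INSTALL SONAME 'ha_connect'"  # then enable it as a usual dynamic library
mysql -u root -p -e "SHOW ENGINES" | grep -i connect   # verify it is now listed
```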

Engines

Some engines are built-in and enabled in all binary packages:

— Aria
— CSV
— Memory
— Merge
— MyISAM
— XtraDB (default engine)
— Performance schema

The rest can vary from package to package. The summary is below.

Single-file packages (bintar, zip, msi)
Generic bintar GLIBC_2.14 bintar Windows zip Windows msi
x86 amd64 x86 amd64 x86 amd64 x86 amd64
Archive built-in .dll
Blackhole built-in .dll
Cassandra
Connect .so .dll
Example .so .dll
Federated .dll
FederatedX built-in .dll
InnoDB .so .dll
Oqgraph .so
Sequence .so .dll
Sphinx .so .dll
Spider .so .dll
Test SQL Discovery .so .dll
TokuDB .so

Notes:

It might happen that an engine provided as a dynamic library in a bintar is not installable on some systems. The reason is that it is not feasible to have a bintar for each OS flavor. For example, the Connect engine from the Generic bintar will not be installable on CentOS 6, because it requires libodbc.so.1, while CentOS 6 has libodbc.so.2. In such cases, OS-specific packages should be used instead.

The absence of the Cassandra engine in the GLIBC_2.14 bintar should be resolved later.

Deb packages
Squeeze Wheezy Sid Lucid Precise Quantal Saucy Trusty
x86 amd64 x86 amd64 x86 amd64 x86 amd64 x86 amd64 x86 amd64 x86 amd64 x86 amd64
Archive built-in
Blackhole built-in
Cassandra .so .so .so
Connect separate package
Example
Federated
FederatedX built-in
InnoDB .so
Oqgraph separate package
Sequence .so
Sphinx .so
Spider .so
Test SQL Discovery .so (in test package)
TokuDB .so .so .so .so .so

Note: The absence of the Cassandra engine for Quantal (and probably for Sid) should be resolved later.

RPM packages
Centos5 RHEL5 Centos6 Fedora19 Fedora20
x86 amd64 x86 amd64 x86 amd64 x86 amd64 x86 amd64
Archive built-in
Blackhole built-in
Cassandra separate package
Connect separate package
Example .so (in test package)
Federated
FederatedX built-in
InnoDB .so
Oqgraph separate package
Sequence .so
Sphinx .so
Spider .so
Test SQL Discovery .so (in test package)
TokuDB .so .so .so

Other plugins

The situation with other plugins is less complicated and rarely causes questions.

There are, again, several plugins which are built into all binary packages:

— Binlog (pseudo SE to represent the binlog in a transaction)
— User feedback plugin (disabled by default)
— Native MySQL authentication
— Old MySQL-4.0 authentication
— Partition SE helper (formally it is an engine, but I find it more appropriate to put it here)

Some plugins are present as dynamic libraries in all packages:

— Locale list
— Metadata lock info
— Query cache info
— Query response time
— Semisync master
— Semisync slave
— Server audit
— SQL error log

Platform-specific plugins:

— HandlerSocket (in all Linux packages)
— PAM authentication plugin (in all Linux packages)
— Unix socket authentication plugin (in all Linux packages)

Client plugins are present everywhere as dynamic libraries (in bintars, zips, msi, 'libmariadbclient18' debs, 'shared' RPMs):

— Dialog
— Clear password auth plugin
— Windows authentication plugin (in all Windows packages)

Test plugins (in bintars, zips, ‘test’ debs and RPMs):

— NULL audit
— Test for API 0x0100 support (old API support)
— Dialog plugin demos
— Daemon example
— Full-text parser
— Plugin API tests

External bug reports #2: Build your portfolio

March 3rd, 2014 | Posted by elenst in MariaDB | Testing - (Comments disabled)

While user bug reports are the most important ones, there is a category of external reporters which I have historically had a special interest in and great expectations for: entry-level testers. I was one, trained some, interviewed many, had a few hired, and have always wanted someone to wake them up and get them going before it’s too late.

It is no secret that quality control is not as glamorous as other IT specialities, and there are no famous (or maybe any) student programs for testers. Usually people come into testing because it is deceptively open to newbies, planning to obtain a few points for a CV and to switch either to development or to project management as soon as they can. Most do; a few stay.

This creates a vicious circle. Since the pattern is well known, employers hire testers without experience not to teach them, but to let them do mindless, mundane work. In turn, newcomers gain a distorted impression of the speciality, get discouraged and move on long before the job has a chance to grow on them.

But it doesn’t have to be like that. Nowadays a newborn tester can earn good karma before starting the first job, and enter the market as an equal. The job is very different when you are hired as a specialist rather than as a cog: the work won’t be nearly as frustrating, and you’ll be able to make a well-thought-out decision about your future.

First CV: skills and portfolio

Since you don’t have any work experience to put on the CV, you usually have to go with education and skills. You can add another important part, though — a bug report portfolio.

Particular technical skills matter mainly for the initial screening, when some HR person simply matches your CV against a wishlist. You still need to gain some to pass, since you mustn’t lie in your resume, ever. But once you get through to the technical interviewers, if they know the first thing about testing, they will look at two main points:

  • whether you know what it means to be a tester;
  • whether you show any potential for being a good one.

Testing is not easy, and your interviewer is painfully aware of that, so it’s important to show that you have already tried, somewhat succeeded, and made a conscious decision to keep doing it.

That’s something you can prove by providing the bug report portfolio.

It reveals a lot about you, much more than just your hunt-fu. While a CV is only something you can say about yourself, your public portfolio shows what the world can say about you as a tester, since your bug reports are evaluated by independent parties. Only a really dumb interviewer could ignore it.

So, it is worthwhile to make sure you create a good one.

How to choose the project(s)?

The common idea is simple: you choose a project with a public bug tracker and start testing it.

If you want to deal with the source code, obviously the project has to be open-source.

If there is a particular field you want to get familiar with, choose a project where you can focus on it. There is no need to worry about testing technologies: you can use whichever you like, or want to learn, and apply to the product of your choice.

Also, the product should be fairly known, so that the interviewer can easily understand the context of reports.

Main traits of a bug report portfolio

An interviewer will try to evaluate your portfolio quickly. Thus, it’s important that all bugs can be viewed, searched and filtered by a non-logged-in user, and that you can create a permanent link to a list of your reports only, preferably with some sorting.

The list should display a summary of each report, its severity/priority from the project’s point of view, and its status/resolution.
It should either contain all your reports, or come with an explanation of which filters were applied and how to remove them. Don’t hide “bad” reports behind obscure filters. Instead, make sure that most of your reports are really good.

Severity/priority shows your hunting skill. It is fine to report minor issues and cosmetic problems if you notice them, but if most of your reports are like that, it just won’t impress the interviewer. One killed bear is worth a dozen quail… Still, be smart about it: people like juicy crashes, but sometimes non-crashing functional bugs are much more important, and hopefully your future interviewers know that.

Status/resolution shows a lot more.

The more fixed bugs you have on your list, the better — it means the reports were valid, and were taken seriously. Confirmed but not fixed bugs are not bad either.

Many “Can’t reproduce” bugs are a very bad sign — it means you either cannot provide reproducible test cases, or cannot communicate them to the project people. Both skills are crucial for a good tester.

Many “Duplicate” bugs are also bad — it means you either don’t bother to search the bug base before filing, or cannot. Nobody likes those who don’t use the search.

Many “Not a bug” entries are suspicious — it looks like you don’t check the documentation with due diligence.

Many bugs which were just submitted and never got any feedback don’t say anything about you as a tester; they only show that you probably chose the wrong project to work with.

Watch for common flaws in bug tracking routine

Disclaimer: Any resemblance to bug trackers living or dead should be plainly apparent to those who know them, especially because the author has been grumbling about them for a really long time…

The criteria above are reasonable in theory. Unfortunately, your portfolio can get spoiled through no fault of your own.

To display the real ratio of your report severity, the bug base should provide access to all reports. If the critical bugs remain hidden, then the visible list will only contain mediocre bugs, and the better your real list was, the less impressive your portfolio will look.

It often happens that a report is initially confirmed, stays in the queue for a (long) while, and eventually gets closed because the bug is no longer reproducible in the current version.
Not only is this unfair towards the reporter, but it is also dangerous for the project. The project’s problem is another story, but for you it means that you’ll have an unfair share of bad “Can’t reproduce” reports on the list.

Wrong “Duplicate” statuses come in different flavours.
First, they can be a consequence of hidden bugs. If something is not searchable, you cannot find it, so you reasonably file a new report, and it gets closed as a duplicate.
Secondly, the problem can be similar to the “Can’t reproduce” one. You file a bug in June, it is confirmed and stays in the queue; then 3 months later somebody else reports the same issue, it gets fixed, and the older bug is marked as a duplicate. There is no harm to the product, so developers find this unimportant (but try calling them plagiarists after somebody copies their code, and see if you live through the day…).
Another, more subtle issue with “Duplicate” is that it can be set after deep analysis by a developer because, although the bugs look nothing alike, they (allegedly) have the same root cause. Again, this is unfair to the reporter and dangerous for the product.

Also, numerous “Not a bug” resolutions can be caused by a lack of documentation. If you can’t find out how something is supposed to work, you are likely to waste time reporting something that will be dismissed. So, make sure that the product is documented.

Most of these problems do not really concern users or senior testers; at most they irritate them. For you, they are critical. While you’ll have very good excuses, keep in mind that you won’t be around when the interviewer first evaluates your list. So, try to avoid these problems in advance — browse the tracker of your choice, check the history of some reports, and see how things are done there. Exceptions can and do happen, but they should remain rare.

Why MariaDB?

Of course, my goal is to convince new testers not only to start testing, but to choose MariaDB over other projects to perfect their skills on. While I’m somewhat biased, I still think that MariaDB is objectively a good choice:

  • It is a crazy open-source mix of various technologies — numerous engines, plugins, connectors, replication, Galera, optimizer… Whatever you want to focus on, unless it is seriously exotic, you will most likely find something related at MariaDB;
  • MariaDB Knowledge Base contains a lot of information about the products; also, for the most part the MySQL manual applies;
  • All our bug reports are public, so the visibility problems do not exist;
  • JIRA, which we use for bug reporting, is hardly the best system in the world, but it does provide enough searching/filtering to fulfill all the needs associated with building a portfolio;
  • We release early, and unstable early releases are a gold mine for eager testers;

And last but not least — it takes one to know one; you can expect that your interests will be taken seriously. We cut some corners when it comes to internal reports, but I’ve been trying to make sure that our external reporters are treated fairly, and I think our developers have started picking up the idea. So, if something was done wrong with your report, you can let us know without fear of being laughed at, and we’ll try to make it right if the request is reasonable.

Perks

If you indicate that you are a tester, I can provide you with some hints on how to improve your report, or some other community members might (we don’t bother users with improving reports as long as there is enough for us to go on). If there is a demand, most interesting cases can be later analyzed in blog posts; finally, if you are a regular reporter, and a really good one, we can also provide you with a reference for your first CV.

How to start

It is really this simple.

External bug reports #1: Last call for early adopters

March 1st, 2014 | Posted by elenst in MariaDB - (Comments disabled)

MariaDB 10.0.9, tentatively the last RC, is almost out. MariaDB 10.0.10 GA is coming shortly after.

I envision all those who are planning to upgrade to 10.0 as soon as it becomes stable, waiting for the GA to throw the switch.
I know the feeling. As a user, I never take anything too new either.
But you are not just users, right? If you are in charge of the upgrade, you are superusers: smart and cautious, you won’t put it in production just like that, but will try it in staging first. So do it, do it now; it’s already time. You won’t gain much by waiting longer, but you’ll almost certainly lose something.

Of course you have every right to expect a stable reliable version once it’s called GA, and to be irritated if you encounter a problem, and to blame us for that. But it won’t help you get what you need.
See, there are always bugs, GA or not GA. Most of them won’t affect you, but some might. Nobody can evaluate this soon-to-be-GA product in regard to your tasks and goals better than you.

There won’t be huge changes between the RC and the GA. Install now, try it, report while it is still RC — and there is a good chance you’ll get the fix in time, and possibly even a better one than you’d get if you reported after GA: as everyone knows, the further past GA, the less intrusive (and sometimes less efficient) bugfixes become. Also, keep in mind that we always give higher priority to external bug reports than to internal ones of comparable severity.

Why do I care whether you install or not?

There is an everlasting argument about whether it’s right or wrong to release early and let the community try the product before it ripens. Ironically, quality control people are often the fiercest antagonists. Why? Is it not a good thing when somebody helps you “do your job”?

Well, most testers are still marginally human. While we know better than anybody that there are always bugs, it’s still excruciating when somebody finds one after you; and much more so if that somebody is an external user. Besides, for many testers the cool part of the job is hunting down fancy crashes, while working with external reports is a mundane routine.

So, all in all, we begin to rationalize. The argument QC often comes up with is that external bugs are mostly a useless waste of time: unimportant, badly reported, and so on. When higher authorities are involved, the rationale sounds more elegant, something like not wishing to burden the community with the dirty work. But underneath it’s all the same: keeping up appearances.

I think at the end it works against the quality of the product.

Maybe I’m just lucky, or maybe MariaDB users are the best, but I’m getting really great bug reports from the community all the time. Some are nearly perfect right away, many require additional work and clarification, and yes, it takes time to process them, and — oh yes — I feel very bad about every bug that we hadn’t caught before the users did; but anyway, this contribution is invaluable, because it represents the real users’ problems, and because, no matter how hard we work on testing the product, we can’t compete with the whole universe of users who have all imaginable (and often unimaginable) kinds of workflow, data, environments, combinations of the third-party software, you name it.

So, please install, try, report, and thanks to everyone who is already doing that.

Building TokuDB unit tests in MariaDB tree

February 16th, 2014 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

There are some TokuDB *.cc tests in the MariaDB tree, e.g. in /storage/tokudb/ft-index/portability/tests, but they are not built by default.

Generally, to build one, we need a couple of includes and one library:


g++ -c -I<basedir>/storage/tokudb/ft-index/toku_include -I<basedir>/storage/tokudb/ft-index/portability -std=c++11 test-cpu-freq.cc
g++ -o test-cpu-freq test-cpu-freq.o -L<basedir>/storage/tokudb/ft-index/portability -ltokuportability
LD_LIBRARY_PATH=<basedir>/storage/tokudb/ft-index/portability:$LD_LIBRARY_PATH ./test-cpu-freq

But on more conservative systems, there is an additional problem:

test-cpu-freq.cc: In function ‘int main()’:
test-cpu-freq.cc:103:13: error: expected ‘)’ before ‘PRIu64’

There are most certainly some smart solutions for that, but I haven’t found one that works on Wheezy, and to get things working quickly, this is enough:

--- storage/tokudb/ft-index/portability/tests/test-cpu-freq.cc 2013-10-04 20:49:53 +0000
+++ storage/tokudb/ft-index/portability/tests/test-cpu-freq.cc 2014-02-16 14:35:15 +0000
@@ -100,7 +100,7 @@
int r = toku_os_get_processor_frequency(&cpuhz);
assert(r == 0);
if (verbose) {
- printf("%" PRIu64 "\n", cpuhz);
+ printf("%li\n", cpuhz);
}
assert(cpuhz>100000000);
return 0;

Also, verbose here is not an option, it’s a constant. To make the test actually verbose, modify it in the code and recompile.

RQG: SlaveCrashRecovery Reporter

May 8th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

The reporter can be used to test crash-safety of replication.

It is a periodic reporter: every 30 seconds it kills the slave server with SIGKILL and immediately restarts it on the old data directory, with the same parameters as before. After the restart, the reporter checks that the server itself and replication started all right.
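In shell terms, one cycle of the reporter looks roughly like this (a sketch with a stand-in process; the real reporter is a Perl module inside RQG and restarts mysqld with its original options):

```shell
# One kill/restart cycle; a 'sleep' process stands in for the slave mysqld here.
sleep 300 &
SLAVE_PID=$!
kill -9 "$SLAVE_PID"            # the reporter SIGKILLs the slave every 30 seconds
wait "$SLAVE_PID" 2>/dev/null   # reap the killed process
# ...then it would restart the server on the old datadir with the same options,
# and check that both the server and replication come back up.
echo "slave killed, ready to restart"
```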

The reporter itself does not check the consistency of the data, but it can be used together with the ReplicationConsistency reporter.

It is supposed to be used with runall-new.pl, so that the server is started without MTR involvement.

Some more information at https://kb.askmonty.org/en/rqg-extensions-for-mariadb-features/

RQG: LimitRowsExamined Transformer

May 8th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

There are many things in RQG whose purpose is not obvious at a quick glance. It becomes especially embarrassing when those are things you developed yourself; so, I’ll try to keep track of those at least, and maybe add some records for legacy components when I can.

The LimitRowsExamined transformer checks whether the original query already contains a ROWS EXAMINED clause. If it does not, it adds the clause either after the LIMIT clause, or at the end of the query. In any case (even if ROWS EXAMINED was already there), the transformer returns the following sequence of statements:

* FLUSH STATUS
* the query with ROWS EXAMINED
* a query which sums up status variables related to examined rows

The result of the main query is checked to be a subset of the original query’s result set. The sum of status variables is checked to be not greater than the limit provided in the ROWS EXAMINED clause, plus a margin. The margin is configured in the transformer.
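The clause-injection step can be sketched like this (a toy shell rendition, purely illustrative; the actual transformer is Perl code inside RQG):

```shell
# Toy sketch: add a ROWS EXAMINED clause unless the query already has one.
add_rows_examined() {
  local query="$1" limit="$2"
  case "$query" in
    *"ROWS EXAMINED"*) printf '%s\n' "$query" ;;            # already present: keep as-is
    *) printf '%s ROWS EXAMINED %s\n' "$query" "$limit" ;;  # append (after LIMIT, if any)
  esac
}
add_rows_examined "SELECT * FROM t1 LIMIT 10" 1000
```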

Some more information at https://kb.askmonty.org/en/rqg-extensions-for-mariadb-features/

Packages to get MariaDB and tests up and running

January 18th, 2013 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

yum

It’s often a pain to guess package names when you need to install stuff on, let’s say, CentOS. So here is a list, though maybe not a complete one, of what I needed to get another VM to build and run the MariaDB server and to execute at least some tests on it (all done via yum install):

cmake
gcc
ncurses-devel
bison
g++
gcc-c++
aclocal
automake
libtool
perl-DBD-MySQL
gdb
libaio-devel
openssl-devel

Same in one line, for lazy me:
sudo yum install cmake gcc ncurses-devel bison g++ gcc-c++ aclocal automake libtool perl-DBD-MySQL gdb libaio-devel openssl-devel

To install bzr (if it’s not in the official repo):

su -c 'rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-2.noarch.rpm'
(check the architecture)

and then you can use yum install bzr

Another story (taken from http://wiki.bazaar.canonical.com/Download):
su -c 'rpm -Uvh http://dl.fedoraproject.org/pub/epel/5/i386/epel-release-5-4.noarch.rpm'

In newer distributions bzr seems to be already in place.

To install jemalloc:

Search for recent versions of
jemalloc-3.6.0-1.el6.x86_64
jemalloc-devel-static-3.6.0-23.1.x86_64
jemalloc-devel-3.6.0-1.el6.x86_64

download and install them.

To install pcre-devel:

Search for recent versions of
pcre
pcre-devel

download and install them.

apt-get

Pretty much the same stuff, but a few names are different:

bzr
cmake
gcc
make
ncurses-dev
bison
g++
automake
libtool
gdb
valgrind
libaio-dev
libssl-dev

libdbi-perl
libdbd-mysql-perl

libjemalloc-dev
libjemalloc1

libpcre3
libpcre3-dev

The same in one line:
sudo apt-get install bzr cmake gcc make ncurses-dev bison g++ automake libtool gdb valgrind libaio-dev libssl-dev libdbi-perl libdbd-mysql-perl libjemalloc1 libjemalloc-dev libpcre3 libpcre3-dev

To build PAM plugin:

libpam0g-dev

zypper

bzr
cmake
gcc
make
ncurses-devel
bison
gcc-c++
automake
libtool
gdb
valgrind
libaio-devel
perl-DBD-mysql

The same in one line:
sudo zypper install cmake gcc make ncurses-devel bison gcc-c++ automake libtool gdb valgrind libaio-devel perl-DBD-mysql

Collecting coverage info for a patch in a human-readable form

December 12th, 2012 | Posted by elenst in MariaDB | Pensieve | Testing - (Comments disabled)

Usually we collect coverage information for a certain source file or directory. gcov/lcov and genhtml are good enough for that. But what I’m actually interested in is coverage for a patch, or, in other words, for the bzr diff between the base tree and the new code. A patch might affect dozens of files, a few lines in each, and it’s a pain to go through all the HTML reports and compare them to the diff to find out which added/modified lines are hit and which aren’t. I’m pretty sure I’m going to be inventing a wheel here, but sometimes that’s easier and faster than finding one that fits.

So, what do I need to do?
For now I will hope that a developer rebases before pushing a patch — that is, the code change is one or several revisions over the base tree, rather than a mix of patch-related revisions and base tree revisions. In the latter case, I’m in trouble.

I’ll need gcov and lcov to be installed and be on the PATH.

First, clean up the source tree and build with
cmake . -DCMAKE_BUILD_TYPE=Debug -DENABLE_GCOV=ON && make

Or, if it’s already built, run
lcov --directory <basedir> --zerocounters
to remove all leftovers from previous tests. I want clean data.

Then, run the tests. If it’s RQG combinations, make sure not to use the --gcov option, as it cleans up after each run and collects the data separately, while I want the total coverage. If it’s MTR, there is also no need to use the --gcov option; I’ll process what I need myself.

I think the tests also need to shut down the server properly (rather than kill it in our usual savage fashion). Not that I’m completely sure it’s important, but let’s do it just to be safe.

When the tests are finished, run
lcov --quiet --directory <basedir> --capture --output-file <basedir>/lcov.info

It will create a nice text file, lcov.info. Well, maybe it’s not that nice to read, but it’s a Perl coder’s dream. And we have a description of its format. So it’s all good.

I also need the actual patch file. It can be either taken from commit mails, or produced by
bzr diff -r<last base revision>

So, I have a patch file and an lcov info file.
From the patch file, I need the ‘+’ lines with their numbers and the names of the source files; in the lcov file, I need to find coverage info for those code lines, using the source name and the line number. Probably the branch info too, while we are at it. It requires a bit of scripting, but it’s not nuclear physics, is it?
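The diff-parsing half can be sketched with a bit of awk (a toy version, purely illustrative; the real coverage_for_patch.pl below handles more cases):

```shell
# Toy sketch: print <file>:<line> for every added ('+') line of a unified diff.
extract_added_lines() {
  awk '
    /^\+\+\+ /   { file = $2; next }                    # target file header
    /^@@/        { split($3, a, ",")                    # hunk header, e.g. @@ -100,7 +100,7 @@
                   line = substr(a[1], 2) + 0; next }   # start line on the "+" side
    /^\+/        { print file ":" line; line++; next }  # an added line
    /^-/         { next }                               # removed line: target side stands still
                 { line++ }                             # context line
  ' "$1"
}
```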

perl ~/mariadb-toolbox/scripts/coverage_for_patch.pl --help

The script produces an lcov summary of gcov data stored in the basedir
(or uses already existing lcov info file), and extracts the coverage data related
to the code patch or diff

Usage: perl coverage_for_patch.pl <options>

--basedir=<path>: source code work tree
needed if there is no lcov info file or patch/diff file yet;
it is also used to remove the prefix from absolute paths in lcov.info

--diff-file=<path>: a patch or bzr diff file;
if there is no such file yet, it will be generated by bzr diff

--prev-revno=<bzr revision>: a revision to compare with the work tree;
-2 by default (meaning last but one)

--lcov-info=<path>: a coverage summary file produced by lcov;
if there is no such file yet, it will be generated

--branch-info: include branch coverage info into the report
(FALSE by default)

--debug: script debug output

--help: print this help and exit

Make sure basedir is correct and is not a symlink! In other words, it should be the same as in lcov.info; otherwise the result will be very confusing.

Example of the command line:
perl ~/mariadb-toolbox/scripts/coverage_for_patch.pl --basedir=/data/repo/bzr/5.5 --diff-file=/home/elenst/bzr/5.5/3733.dif --lcov-info=/home/elenst/bzr/5.5/lcov.info --branch-info 1>3733.coverage 2>missings

If we are testing something other than the tip of the tree, the line numbers in the patch diff might be off, which would produce a totally wrong coverage report.
We need to adjust them. Here is one way to do it; it’s ugly, but it worked for me today:

1. run
bzr diff -c ${revno} > ${revno}.dif

It will produce the diff between the target revision and the previous one.

2. branch the tree locally, e.g.
bzr branch 5.5 5.5-temporary

3. go to the temporary branch and apply the created patch in the reverse mode:
patch -R -p0 < ${revno}.dif

Hopefully it will apply cleanly (with lines properly shifted). If it fails on test/result files, that can be ignored; we're only interested in the code.
If it worked, commit the temporary tree (don't push!):

4. bzr commit

Thus we'll have in 5.5 the tip of the tree (including the patch), and in 5.5-temporary the tip of the tree minus the patch. Now we just need to run the bzr diff.
Go back to the main tree.

5. bzr diff --old ../5.5-temporary > ${revno}.dif.adjusted

Now in ${revno}.dif.adjusted we should have the very same patch, but with the right line numbers. Use it instead of the original one in the script command line.

aio-max-nr in general and “InnoDB: Error: io_setup() failed with EAGAIN” in particular

August 3rd, 2012 | Posted by self in MariaDB - (Comments disabled)

The problem many MySQL/MariaDB 5.5+ users are painfully aware of:

InnoDB: Using Linux native AIO
InnoDB: Warning: io_setup() failed with EAGAIN. Will make 5 attempts before giving up.
InnoDB: Warning: io_setup() attempt 1 failed.
InnoDB: Warning: io_setup() attempt 2 failed.
InnoDB: Warning: io_setup() attempt 3 failed.
InnoDB: Warning: io_setup() attempt 4 failed.
InnoDB: Warning: io_setup() attempt 5 failed.
InnoDB: Error: io_setup() failed with EAGAIN after 5 attempts.
InnoDB: You can disable Linux Native AIO by setting innodb_native_aio = off in my.cnf
InnoDB: Initializing buffer pool, size = 128.0M
InnoDB: Completed initialization of buffer pool
mysqld got signal 11 ;

It is no news that disabling InnoDB native AIO is not exactly the best possible option. It’s also not a secret that the alternative is increasing aio-max-nr, if possible. What is always news to me is how it’s done…
So,

to see the current value:
cat /proc/sys/fs/aio-max-nr
65536

to set the new one:
sudo sysctl fs.aio-max-nr=262144
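The sysctl command above changes the value only until reboot. To make it permanent, the conventional way is to put it into /etc/sysctl.conf (a sketch; the exact file location can differ per distribution):

```shell
# Persist the setting across reboots (assumes the stock /etc/sysctl.conf).
echo "fs.aio-max-nr = 262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p    # re-read the file so the value applies immediately
```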

And yeah, it has already been fixed in MariaDB: replication tests crash… But still.

Btw, while we are here — what is aio-max-nr, anyway?

aio-nr is the running total of the number of events specified on the io_setup system call for all currently active aio contexts. If aio-nr reaches aio-max-nr then io_setup will fail with EAGAIN. Note that raising aio-max-nr does not result in the pre-allocation or re-sizing of any kernel data structures.

Source: http://www.kernel.org/doc/Documentation/sysctl/fs.txt

Using Windows profiler with mysqld

June 10th, 2012 | Posted by self in MariaDB - (Comments disabled)

My colleague, who is a Windows guru, taught me how to do very basic things with the Windows profiler. While this stuff is primitive, the sad truth is that many people don’t know even this little about Windows, me included. I’ll keep the hints here, and hopefully will learn more with time.

  • Use RelWithDebInfo builds; a Debug build will skew the picture and tell you that _db_enter is the most expensive function;
  • Start mysqld;
  • Use x64 VS command prompt:

Start => All Programs => MS Visual Studio 2010 => Visual Studio Tools => Visual Studio x64 Win64 command prompt

  • Set _NT_SYMBOL_PATH to the package bin directory, so mysqld.pdb can be found, and start the IDE:

set _NT_SYMBOL_PATH=... && devenv.exe

  • In the VS command prompt, run

vsperfcmd /start:sample /output:mysample
vsperfcmd /attach:mysqld.exe

  • Run whatever flow you need on mysqld;
  • In the VS command prompt, run

vsperfcmd /shutdown
(It will start waiting till mysqld is stopped)

  • Stop mysqld.

The output of the above is mysample.vsp, in the directory where you ran the VS command prompt.

  • Open the file in Visual Studio.

Visual Studio provides a number of different views and slices.

  • In Functions, ‘exclusive samples’ is the time spent in this function only, ‘inclusive samples’ is the time spent in this function and its “children”.
  • Callstack is useful, especially with the “hot” button in the menu (it can be pushed more than once to see deeper data).
  • You can also create comparison reports.