Category: MySQL / MariaDB


MYSQL_HISTFILE and .mysql_history

The MySQL manual says:
“mysql Logging
On Unix, the mysql client logs statements executed interactively to a history file. By default, this file is named .mysql_history in your home directory. To specify a different file, set the value of the MYSQL_HISTFILE environment variable.”
The trouble with that is: it doesn’t tell you what you don’t need to know. So I’ll tell you.

Heritage

The history-file concept that MySQL and MariaDB are following is indeed “on Unix” and specifically is like the GNU History Library. There is a dependence on external libraries, Readline or EditLine/libedit, depending on the licence. The similarity to (say) the Bash shell history is clear when one compares some names and some actions.

Thing                              | Bash                | MySQL
Default file name                  | $HOME/.bash_history | $HOME/.mysql_history
Environment variable for name      | HISTFILE            | MYSQL_HISTFILE
Environment variable for ignoring  | HISTIGNORE          | MYSQL_HISTIGNORE

This explains why the history file is hidden, why the facility doesn’t work on Windows, and why there’s no orthogonality (that is, one can set an environment variable but one cannot set a configuration-file or command-line parameter).

Is it really a log?

The manual says .mysql_history is a log, but that’s a bit misleading, since it’s not much like the “logs” that the server produces (the transaction log, the slow query log, etc.). There are no timestamps; there is a large class of SQL statements that are ignored such as ones that come via file input; there is no way to know which statements succeeded and which ones ended with errors.

So the only thing .mysql_history is truly good for is making some edits easier. If you like to say “repeat last statements”, or some sophisticated variant that requires knowing what earlier statements were, then the mysql client — actually, the Readline or EditLine library — needs the history for that.

Also it’s occasionally useful for answering questions like “How did I get myself into this mess?” when odd things start to happen, and especially so when you have to write a bug report and your friendly MySQL or SkySQL or Percona support worker enquires: “Could you please show us the things you were doing just before the, er, anomaly occurred.” A tee file would be better, but sometimes a histfile might do.

Format

Not all versions of the mysql client will write the .mysql_history file in the same way. That is why sometimes you will see “X Y” if you print the file, but sometimes you will see “X\040Y”. Of course 040 is octal for the ASCII space character, but this has led to bug reports such as Bug#14634386 – “History written by Libedit is not readable by Readline” and Bug#16633124 – Compatibility issue with mysql history (“\040” instead of space). These are Oracle internal bug numbers, so don’t bother trying to find them on bugs.mysql.com. They’re unimportant bugs, since they should not cause problems for editing.

A more severe matter is that multi-line statements are logged twice. First there is a record of each line. Then there is a record of the statement as a whole.

Stopping

Although disk space is cheap, people do worry sometimes when they realize their computer might be writing secret stuff in a file that burglars could see. The easiest prevention is to start the mysql client with --batch or an option that implies --batch. However, those options all do something else, as well as stopping writing to .mysql_history, so they’re not “well targeted” solutions.

There’s a recommendation to set the MYSQL_HISTFILE environment variable to say “/dev/null”, or to link the .mysql_history file to /dev/null. Loosely one might think: oh, statements are being sent to /dev/null and therefore they won’t get saved. More accurately, they might not be sent anywhere at all, as one can see from these snippets of mysql.cc code (Copyright (c) 2000, 2014, Oracle and/or its affiliates):

  /* if .mysql_history is a symbolic link to /dev/null, don't use a history file at all */
  if (my_readlink(link_name, histfile, 0) == 0 &&
      strncmp(link_name, "/dev/null", 10) == 0)
  {
    my_free(histfile);
    histfile= 0;
  }
  ...
  /* likewise if the name itself (for example from MYSQL_HISTFILE) is /dev/null */
  if (histfile && strncmp(histfile, "/dev/null", 10) == 0)
    histfile= NULL;

Limiting

Despite the similarities to Bash, there is no MYSQL_HISTSIZE environment variable to be like Bash’s HISTSIZE. With some unreliable fiddling, one can try to tell the underlying library what the maximum history size should be, that is, how many lines are allowed in the history file before starting to discard. For example, instead of saying “/usr/bin/mysql”, say

rlwrap -a --histsize=5 /usr/bin/mysql

Usually this particular trick is best avoided because of its side effects.

MYSQL_HISTIGNORE

In MySQL’s mysql, there’s a newish environment variable:

export MYSQL_HISTIGNORE=pattern

It’s not in my copy of MariaDB’s mysql, but I suppose we can expect it soon. The idea is: if a statement matches the pattern, then it doesn’t get saved in the history file. But there are some flaws …

  • There are only two wildcards for the pattern: “?” meaning “one occurrence of any character”, and “*” meaning “zero or more occurrences of any character”. There is no regular-expression variation, and the choice is un-SQL-ish (in SQL one would use _ and % not ? and *).
  • A certain pattern is always in effect so that passwords won’t be saved. A fine default, but it’s regrettable that there’s no way to remove it.
  • It appears to be impossible to escape certain characters. For example, if I wanted the pattern to include a colon, I couldn’t say “export MYSQL_HISTIGNORE=a\:b”.

ocelotgui

The objective of Ocelot Computer Services Inc. is that the ocelotgui program will do anything that the mysql client can do, and one can dream that there are ways to do even more (porting to Windows, recognizing history size, matching more expressions, etc.) along with the advantages that a GUI automatically has over a command-line editor. But at the moment we don’t have a histfile; we only support tee. I think that’s the last big feature that ocelotgui needs before we call it ‘feature complete’, which is why I’ve been looking at .mysql_history details.

The latest ocelotgui release is 0.6.0, source and binaries are on github.com/ocelot-inc/ocelotgui.

Connecting to MySQL or MariaDB with sockets on Linux

The MySQL manual says

--socket=file_name, -S file_name … On Unix, the name of the Unix socket file to use, for connections made using a named pipe to a local server.
The default Unix socket file name is /tmp/mysql.sock.

which might surprise folks who’ve had to contend with the error message
“Can’t connect to local MySQL server through socket ‘[something-other-than-/tmp/mysql.sock]'”.

I’ll try to explain here why the name is often something quite different, how to know what the MySQL server is really listening for, what the fixes are for either users or application developers, and why it still matters.

Why the name is not always /tmp/mysql.sock

First, the Linux Foundation publishes a document “Filesystem Hierarchy Standard”. Version 2.3 says in the section about the /var/run directory: “Programs that maintain transient UNIX-domain sockets must place them in this directory.” Unfortunately Version 3 says something a bit different
in the section about the /run directory: “System programs that maintain transient UNIX-domain sockets must place them in this directory or an appropriate subdirectory as outlined above.” But version 3 also says: “In general, programs may continue to use /var/run to fulfill the requirements set out for /run for the purposes of backwards compatibility.” so /var/run is still standard.

Second, there’s a bit of fine print tucked away in an appendix of the MySQL manual: “For some distribution formats, the directory might be different, such as /var/lib/mysql for RPMs.” That’s a vague way of saying it’s determined at source-installation time by -DINSTALL_LAYOUT={STANDALONE|RPM|SVR4|DEB} which in effect causes this:

SET(INSTALL_UNIX_ADDRDIR_STANDALONE     "/tmp/mysql.sock")
SET(INSTALL_UNIX_ADDRDIR_RPM            "/var/lib/mysql/mysql.sock")
SET(INSTALL_UNIX_ADDRDIR_DEB            "/var/run/mysqld/mysqld.sock")
SET(INSTALL_UNIX_ADDRDIR_SVR4           "/tmp/mysql.sock")

but anybody can override that by setting MYSQL_UNIX_ADDR to something else.

And so different machines have different defaults. The following comes from notes I made long ago so may not be the latest information:

Non-Linux e.g. FreeBSD or Solaris: /tmp/mysql.sock
Debian-based e.g. Ubuntu, and archlinux: /var/run/mysqld/mysqld.sock
SUSE (after v11.2): /var/run/mysql/mysql.sock
Red Hat, and SUSE (before v11.2): /var/lib/mysql/mysql.sock
archlinux (very old versions): /tmp/mysqld.sock

Sometimes you can find out what the real default on your machine was by typing mysql_config --socket.

Finding what the server is really listening for

If you’re not the one who started the server, or the way it was started has disappeared in the mists of memory, there are various flawed ways to find what socket it’s really opened.

Possible Method #1: netstat -lpn | grep mysqld
Example:

$ netstat -lpn | grep mysqld
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp6 0 0 :::3306 :::*  LISTEN 4436/mysqld     
unix 2 [ ACC ] STREAM LISTENING 101567 4436/mysqld ./mysql_sock

… This method’s flaw is that, as the warning says, you won’t see everything unless you’re root. Also the “grep mysqld” filtering means it’s assumed the server’s name is mysqld.

Possible Method #2: find directory-list -name “mysql*.sock”
Example:

$ find /tmp /var/lib/mysql /var/run/mysqld -name "mysql*.sock"
find: '/var/lib/mysql': Permission denied
/var/run/mysqld/mysqld.sock

… This method’s flaw is that you have to guess in advance what directories the socket might be in.

Possible Method #3: ask via TCP
Example:

mysql -h 127.0.0.1 -e "select @@socket"
+-----------------------------+
| /var/run/mysqld/mysqld.sock |
+-----------------------------+

… This method’s flaw is in the assumption that the port is the default (3306), that the local host is accessible (I think it’s theoretically possible that it won’t be), and that everyone has the privilege to access MySQL this way.

Possible Method #4: look at the running process
Example (after finding with ps -A that mysqld process ID = 3938):

$ ps -fp 3938
UID   PID  PPID  C STIME TTY    TIME CMD
1000  3938 18201 0 09:58 pts/2  00:00:00 bin/mysqld --socket=./sock

… This method’s flaw is that it only works if --socket was specified explicitly on a mysqld command line, overriding the default configuration.

What a user can do

Once you know that the server is listening on socket X, you can redirect with ln, for example

ln -s /var/run/mysqld/mysqld.sock /tmp/mysql.sock

… The flaw this time is that it’s trying to solve a MySQL/MariaDB problem with a Linux workaround. The only reason that I mention it is that I’ve seen it recommended on Internet forums, so I guess some people see advantages here, which I don’t.

A better answer, then, would be: tell the client program what the socket is. On a client that follows the MySQL guidelines (such as the mysql client itself, or ocelotgui), this would mean setting the MYSQL_UNIX_PORT environment variable, or starting with --socket=X on the command line, or changing one of the configuration files such as ~/.my.cnf to add the line socket = X. Beware that the socket location might also be stored in other places, such as /etc/mysql/debian.cnf or php.ini.

What a client program should do

People who write client programs shouldn’t pass this socket hassle on to the user, if they don’t have to.

The mysql client tries to make things easier by, in effect, hard-coding a socket name so it’s the same as what the server got installed with. That’s a good try. My only criticism is that mysql --help will say that the socket parameter has “(No default value)” when, in a sense, it does.

I’ve been told that another client-GUI product tries to make things easier by automatically going with TCP. It’s difficult to criticize this — I’ve always thought that MySQL made trouble for itself by deciding that even when a user says “host=localhost” we should ignore the usual implication that the user is trying to use TCP and try to find a socket anyway — but here at Ocelot we try to behave the way the mysql client behaves, flawed or not.

So ocelotgui will try to make things easier by hard-coding the most likely paths. That is, if the user doesn’t specify a socket but the host is either default or ‘localhost’, then ocelotgui will try to connect via /tmp/mysql.sock, and if that fails then /var/lib/mysql/mysql.sock, and if that fails then /var/run/mysqld/mysqld.sock. That’s just the plan; currently ocelotgui isn’t doing it.

Is it worth caring about?

Couldn’t the Gordian socket be cut by saying protocol=TCP? What’s the point of finding the socket, anyway?

The answer usually is performance. The transfer rate with sockets is faster than with TCP, because there is no network-protocol overhead when the client is talking to a server on the same machine. I haven’t done a benchmark myself but Brandon D. Northcutt did, and it does appear that sockets are worth the trouble if huge quantities of data are coming through.

However, really the TCP-versus-socket difference is a Linux problem, so why isn’t it being solved by the Linux crowd rather than the MySQL or MariaDB crowds? Well, unsurprisingly, that question has been asked before. So there is a fix, or more precisely there is a Google patch, which turns a TCP/IP connection into a socket connection if the target is localhost. There was an article about it on lwn.net in 2012. What happened after that, I don’t know.

By the way

The current state of ocelotgui (our open-source GUI client for MySQL and MariaDB) is still alpha, but it’s stable and good-looking now. Some screenshots are here. The latest release is 0.7, here.

The ocelotgui debugger

I have merged a debugger for MySQL/MariaDB stored procedures and functions into our GUI client and posted the source and binaries on github. It allows breakpoint, clear, continue, next, skip, step, tbreakpoint, and variable displays. Features which are rare or missing in other debuggers include:
  • its current platform is Linux;
  • it allows breakpoints on functions which are invoked within SQL statements;
  • it never changes existing stored procedures or functions;
  • it is integrated with a general GUI client;
  • it allows commands like gdb and allows menu items / shortcut keys like ddd;
  • it is available on github as C++ source with GPL licence.

It’s alpha and it’s fragile but it works. Here is a demo.

Start the client and connect to a running server, as root. Actually the required privileges are merely for creation of certain objects and SUPER, but I’m making this simple.

Type the statement: DELIMITER // (For all typed-in instructions, execute by typing carriage-return after the delimiter or by clicking Run|Execute.)

Create a function fdemo which updates and returns a counter.

CREATE FUNCTION fdemo () RETURNS INT
  BEGIN
    IF @counter IS NULL THEN SET @counter = 0; END IF;
    SET @counter = @counter + 1;
    RETURN @counter;
    END
  //



Create a procedure pdemo which contains a loop of “INSERT INTO tdemo VALUES (fdemo())” statements.

CREATE PROCEDURE pdemo ()
  BEGIN
    DECLARE i INT;
    CREATE TABLE IF NOT EXISTS tdemo (s1 INT);
    SET i = 0;
    WHILE i < 100 DO
      INSERT INTO tdemo VALUES (fdemo());
      SET i = i + 1;
      END WHILE;
    SET i = i + 1;
    END
  //



Type the statement: DELIMITER ;

Type the statement: $install;

Type the statement: $setup fdemo, pdemo;

Type the statement: $debug pdemo;
After this statement is executed a tabbed widget appears. The first line of pdemo is highlighted. There is always a breakpoint before the first line.

Click on the Debug menu to see what options are available. Debugger instructions may be done via the menu, via Alt keys, or via command-line statements. For example to enter a Next instruction one may now click Debug|Next, or type Alt+3, or type the statement: "$next;".

Enter a Next instruction repeatedly, watching how the highlighted line changes, until the INSERT line is highlighted.

Enter a Step instruction. After this is executed, the function now comes into the foreground.

Enter three more Step (or Next) instructions. After these are executed, the procedure now is in the foreground again.

Set a breakpoint for the final executable line of the procedure. This can be done by clicking over the line number, or by moving the cursor to the line and then clicking Debug|Breakpoint, or by typing the statement "$breakpoint pdemo 9;". After this is executed, line 9 of pdemo has a small red mark showing that it has a breakpoint.

Enter a Continue instruction. After this is executed, watch the debugger highlight hop around 100 times as it moves through the loop, and finally settle on line 9.

Type the statements "$refresh variables;" and "select old_value, value from xxxmdbug.variables;". After these are executed, the result-set widget will contain the old value of variable i (99) and the current value (100). This is the way that one examines DECLAREd variables (there are other statements for user variables and server variables).

Enter an $exit instruction. This stops the debugging session, so the effects of the earlier $debug instruction are cancelled. The effects of the earlier $install and $setup instructions are not cancelled, so they will not have to be repeated for the next debugging session involving pdemo and fdemo.

Thus ends our demo. If you would like to confirm it: an introduction to how ocelotgui in general works is in the earlier blog post “An open-source MySQL/MariaDB client on Linux”, and the source + executable download files for version 0.3.0 are on github.com/ocelot-inc/ocelotgui.

If there is anything Ocelot can do to make your next debugging trip more enjoyable, please leave a comment. If you have private thoughts, please write to pgulutzan at ocelot.ca.

Stored Procedures: critiques and defences

I’ve gathered the main critiques of MySQL / MariaDB stored procedures, and will try some defences.

Monoglots

The critique:

SQL/PSM is the standard 4GL and it was the work of Andrew Eisenberg. Andy based it on ADA. Unless you are military, you have never seen ADA. Be grateful it is dead.
— Joe Celko, reminiscing about his days on the SQL standard committee

Actually I believe Mr Celko likes SQL/PSM, which is the standard that MySQL and MariaDB follow. Here at last is your chance to see some Ada code, and compare with MySQL code …

Ada:

declare a: integer;
begin
  a := 0;
  loop
    a := a + 100;
    exit when a = 200;
  end loop;
  if a /= 300 then
    a := 400;
  else
    a := 500;
  end if;
  case a is
    when 600 => a := 700;
    when others => a := 800;
  end case;
end;

MySQL:

begin
  declare a integer;
  set a = 0;
  x:loop
    set a = a + 100;
    if a = 200 then leave x; end if;
  end loop;
  if a <> 300 then
    set a = 400;
  else
    set a = 500;
  end if;
  case a
    when 600 then set a = 700;
    else set a = 800;
  end case;
end;

The trouble is, Ada isn’t even among the top ten programming languages according to O’Reilly’s count of book sales, and the choice of only 1.6% of programmers according to the Language Popularity Index. So programmers aren’t used to seeing syntax like the above. And I, who am just as good a psychologist as I am an Ada programmer, explain: they’re not familiar with it, so they call it ugly.

The defence:

Look at the first alternative: UDFs. Percona posts point out that UDFs can be faster, of course.
But look at the example in Managing & Using MySQL Second Edition. Do you understand it without reading the long commentary?

Now look at the second alternative: External stored procedures. There’s a worklog task for stored procedures in other languages, of course.
But it’s moribund. An implementation on launchpad, “External Language Stored Procedures for MySQL”, exists but doesn’t seem to have been updated since 2009. Are you going to go to the trouble of downloading and adjusting it?

If you answered yes to either of the above questions, please check the length of your nose now.

Migrants

The critique:

“Migrating a stored procedure is much more complex than rewriting one because the relevant standards of various vendors differ greatly. In this situation, users have no choice but stick to one database vendor rigidly. There is not any room left for users to beat down the price if database vendors overcharge them on upgrading their servers, storages, and user license agreements.”
— Couchbase, “Alternative to Difficult Stored Procedures in Big Data Computation”

The defence: That’s not true. There’s only one relevant standard, there are multiple migration paths, and there’s no charge.

The other DBMSs that follow the ANSI/ISO SQL/PSM standard are: DB2, Mimer, SolidDB, Sybase iAnywhere (Advantage Database Server).

Selective quotes from a case study about “experiences with porting stored procedures written in SQL PL (DB2) to MySQL”:

To find out how “standardized” the MySQL implementation of the SQL/PSM specification really is, we tried to port all our DB2 stored procedures to MySQL. First, we ran into DB2 non-standard extensions of SQL PL that were used in our existing procedures. Actually, there was only one such extension [SQLCODE] … So we first rewrote our DB2 procedures –in DB2–, making sure not to use SQLCODE anymore. Instead we had to introduce the corresponding continue handler(s), thereby introducing an additional “flag” variable. This worked out fine: the new procedures ran perfectly in DB2. … Now we observed some syntactic differences, luckily not in the body of the procedures but in the optional clauses specified just before the body … The only option clause which we could keep was the “LANGUAGE SQL”: required in DB2, optional with MySQL. After these small modifications, the CREATE PROCEDURE statements from DB2 worked on MySQL!
But did they run properly? To verify this, we had to create identical tables on both systems, have the same test data in both, and migrate then run the unit test programs from DB2 on MySQL. And indeed: it turned out that MySQL worked exactly as expected!
— Peter Vanroose (ABIS), MySQL: stored procedures and SQL/PSM
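For illustration, here is a hedged sketch (the table name t and the procedure name are invented) of the kind of rewrite the case study describes: a standard CONTINUE HANDLER plus a "flag" variable take the place of checking DB2's SQLCODE after each FETCH.

DELIMITER //
CREATE PROCEDURE count_rows (OUT how_many INT)
BEGIN
  DECLARE done INT DEFAULT 0;                           /* the "flag" variable */
  DECLARE v INT;
  DECLARE c CURSOR FOR SELECT s1 FROM t;
  DECLARE CONTINUE HANDLER FOR NOT FOUND SET done = 1;  /* instead of testing SQLCODE */
  SET how_many = 0;
  OPEN c;
  fetch_loop: LOOP
    FETCH c INTO v;
    IF done = 1 THEN LEAVE fetch_loop; END IF;
    SET how_many = how_many + 1;
  END LOOP;
  CLOSE c;
END//
DELIMITER ;

The same text should be acceptable to both DB2 and MySQL/MariaDB, which is the point of the exercise.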

As for Oracle, its PL/SQL language is not standard, but read the Oracle documentation: “PL/SQL is based on the programming language Ada.” Due to the common Ada heritage that PL/SQL and SQL/PSM share, I’ve been able to convert Oracle stored procedures to MySQL stored procedures at a rate of about a line per minute. I expect I’d achieve the same speed with NuoDB (NuoDB architect Jim Starkey once assured me that their stored procedures will follow PL/SQL), and with PostgreSQL (PostgreSQL’s stored procedures are deliberately Oracle-like). There’s also an SQL/PSM add-on for PostgreSQL, although I don’t think it’s popular.

Finally, there are some commercial tools that try to automate the migration process. I think the one from ispirer is the best known.

Toolmakers

The critique:

Your stored routines are not likely to integrate well with your IDE. … [SQL-oriented GUI tools] do not integrate (well?) with your source control … While engineers are keen on writing unit tests for every class and method they create, they are less keen on doing the same for stored routines. … MySQL routines have [no?] built in debugging capabilities …
— Shlomi Noach, “Why delegating code to MySQL Stored Routines is poor engineering practice”

The defence:

These are important considerations, but strictly speaking they’re about what people should have written to go with stored procedures, not about stored procedures themselves. It’s not the wiener’s fault if there’s no mustard.

Take versioning. For Oracle it can be done with a client GUI named Oracle SQL Developer. For MySQL there are open-source utilities like dbv. Either way, it’s not a server task.

Take unit tests. Hmm, okay, you can’t, but some people might be satisfied with MyTAP or maybe even STK/Unit. And did you notice in the quote above that one can make unit tests with DB2 tools and run them in MySQL?

Take debuggers. I’m aware of five, although I haven’t tried any of them, except the one that I wrote myself. I am going to integrate it with ocelotgui.

Could-do-betters

The critique:

Stored Procedures … do not perform very well
— Percona employee, “How can we bring query to the data”

The defence:

Supposedly there are more than 12 million installations and that means there are 12 million which are, as the old saying goes, “not Facebook”. I haven’t interviewed them all (only Monty Widenius can do that), but the ones that I’ve talked to are concerned more about the effects of badly-written queries or database design mistakes.

Grapevines

The critique:

It’s easy to find articles and blog posts with titles like “Why I hate stored procedures”, “Goodbye Stored procedures, It’s the time to move on”, “Stored Procedures – End of days”, “Why I avoid stored procedures and you should too”.

The defence:

Usually those articles are about SQL Server. They are by people who got fed up with Microsoft’s T-SQL language, or got enamoured of ORM (Object-Relational Mapping). Inevitably some of the bad vibes get picked up in the MySQL / MariaDB community due to morphic resonance. Filter out the Microsoft material, and it gets harder to find such articles or blog posts, and it gets easier to find articles with a more walking-in-the-sunshine outlook.

Besides, some of the alternatives might just be fads. As Winston Churchill said:

Many forms of Database Code have been tried, and will be tried in this world of sin and woe. No one pretends that Stored Procedures are perfect or all-wise. Indeed, it has been said that Stored Procedures are the worst form of Database Code except all those other forms that have been tried from time to time.
— Winston Churchill, Hansard, November 11 1947

Okay, he didn’t actually say Database Code or Stored Procedures, he said Government and Democracy. Close enough.

Sloths

The critique:

“The following features have been added to MySQL 5.6: … GET DIAGNOSTICS”
“The following features have been added to MySQL 5.7: … GET STACKED DIAGNOSTICS”
— MySQL user manual for 5.6 and 5.7

All right, you have to read between the lines a bit to see the critique here.
What MySQL/Oracle is implying, by omission, is: progress is glacial. As I said in a previous post I think GET DIAGNOSTICS is good to see. But the first MySQL 5.6 releases were four years ago. So, one significant new feature every two years.

What about MariaDB? Well, I did see a new trick recently in a blog post by Federico Razzoli: How MariaDB makes Stored Procedures usable, about using cursors even for SHOW statements. And I suppose that MariaDB’s “compound statements” feature could be looked on as at least a feature that’s closely related to stored procedures. Still, small potatoes.

The defence:

* The current implementation has pretty well all the significant matters required by the standard.
* There has been no official announcement that any significant stored procedure feature is deprecated.

GET DIAGNOSTICS

I know of seven DBMSs that support GET DIAGNOSTICS: DB2, Oracle Rdb, MySQL, MariaDB, PostgreSQL, Teradata, Mimer.

I regret that its clauses don’t resemble SELECT’s clauses and that the result isn’t a table, but it’s always good to see standard compliance.
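For readers who haven't used it, here is a minimal sketch of those clauses (the user-variable names are arbitrary): one statement asks for statement-level items, another asks for the items of a specific condition.

GET DIAGNOSTICS @conditions = NUMBER, @rows = ROW_COUNT;
GET DIAGNOSTICS CONDITION 1
  @sqlstate = RETURNED_SQLSTATE, @errno = MYSQL_ERRNO, @msg = MESSAGE_TEXT;
SELECT @conditions, @rows, @sqlstate, @errno, @msg;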

Now GET DIAGNOSTICS is getting better due to two new features, one in MySQL and the other in MariaDB.

STACKED

The problem: suppose a condition (an error or warning) happens. Execution goes to a condition handler (a series of statements that are preceded by DECLARE … HANDLER FOR condition-value). Now, within those statements, how do we know what was the condition that caused the execution to go to the handler?
For example:

CREATE PROCEDURE p ()
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLWARNING
  BEGIN
    statement-1;
    statement-2;
    statement-3;
    END;
  SIGNAL SQLSTATE '01000';
  END

Well, the answer looks easy: if you say GET DIAGNOSTICS before statement-1, you’ll get ‘01000’. But there are no guarantees after that, so you’d have to store the results of GET DIAGNOSTICS somewhere, in case you need to see them later.
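A hedged sketch of that "store the results somewhere" idea (the procedure name p2 and the user-variable names are invented), with the GET DIAGNOSTICS done as the very first thing inside the handler:

delimiter //
CREATE PROCEDURE p2 ()
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLWARNING
  BEGIN
    GET DIAGNOSTICS CONDITION 1
      @saved_sqlstate = RETURNED_SQLSTATE, @saved_message = MESSAGE_TEXT;
    /* statement-1, statement-2, statement-3 could go here; the saved values survive them */
    END;
  SIGNAL SQLSTATE '01000';
  END//
CALL p2()//
SELECT @saved_sqlstate, @saved_message//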

The solution: GET STACKED DIAGNOSTICS, a new feature in MySQL 5.7. There are now two diagnostics areas: diagnostics area #1 = the “current” one, and diagnostics area #2 = the “stacked” one. The stacked diagnostics area always has the results as they were upon entry. Here’s an illustration:

delimiter //
CREATE PROCEDURE p ()
BEGIN
  DECLARE CONTINUE HANDLER FOR SQLEXCEPTION BEGIN
    SIGNAL SQLSTATE '01000';
    GET CURRENT DIAGNOSTICS CONDITION 1 @current = RETURNED_SQLSTATE;
    GET STACKED DIAGNOSTICS CONDITION 1 @stacked = RETURNED_SQLSTATE;
    END;
  SIGNAL SQLSTATE '03000';
  END//
CALL p()//
SELECT @current, @stacked//

The result: @current = ‘01000’ but @stacked =’03000′. There’s been a known bug since February, but that’s okay in an unreleased version.

Multiple warnings

The problem: if there are two warnings, you want to see both of them with GET DIAGNOSTICS. This is especially desirable because SHOW WARNINGS has no trouble returning a set of warnings, and it would be great if we could eliminate SHOW WARNINGS usage (for reasons unrelated to this problem). However, we don’t know in advance that there will be two warnings, and we don’t have a direct way to check. If we put the check for warnings in a procedure, then that works with MariaDB but requires a privilege. If we ask for conditions that don’t exist, then a curious error/warning hybrid happens, which is workaroundable but confusing.

The solution: dynamic compound statements, a new feature in MariaDB 10.1. I mentioned it last week but didn’t think: hmm, there’s no reason that the diagnostics area would be cleared by a simple comparison, so can I put GET DIAGNOSTICS statements inside an IF … THEN … ELSE so that I get zero, or one, or two results just by typing appropriately on the mysql client?

I’ll start by creating a table and doing an INSERT which, with the default SQL mode, will cause two warnings. Then I’ll see what the new MariaDB feature does.

DELIMITER //
CREATE TABLE test (s1 INT)//
INSERT INTO test VALUES ('a'),('a')//
GET DIAGNOSTICS @condition_count = number//
IF @condition_count = 1 THEN
  GET DIAGNOSTICS CONDITION 1 @message_1 = message_text;
  SELECT @message_1;
ELSEIF @condition_count = 2 THEN
  GET DIAGNOSTICS CONDITION 1 @message_1 = message_text;
  GET DIAGNOSTICS CONDITION 2 @message_2 = message_text;
  SELECT @message_1, @message_2;
  END IF//

The result: both warnings are displayed. Whoopee.

By the way

The worklog task for GET DIAGNOSTICS was WL#2111 Stored Procedures: Implement GET DIAGNOSTICS. Since it got implemented long ago, it’s obsolete. But if you’re interested “what happens if GET DIAGNOSTICS itself causes an error?” and the reasons behind our decisions at that time, go over there and click the high level architecture box.

Temporary tables, standard SQL

The PostgreSQL manual says:

“The standard’s definition of the behavior of temporary tables is widely ignored. PostgreSQL’s behavior on this point is similar to that of several other SQL databases.”

The first sentence is false. The second sentence could be rephrased as “MySQL copied us”, although nobody else did, as far as I know.

The standard’s definition is widely followed

I base this claim on the documentation of DB2, Oracle Rdb, Oracle 12c, and Firebird.

I don’t say the standard is “universally followed” — for example SQL Server is out of step. I don’t say the whole standard is followed — only the subset related to global temporary tables. I don’t say the standard is the only thing that’s followed. But all these DBMSs support at least four, and sometimes all six, of these characteristics:

1. There is a syntax CREATE GLOBAL TEMPORARY TABLE <table-name> … The word GLOBAL means “the scope of the name is global”, that is, it’s recognized by all modules. Since most DBMSs don’t have multiple modules, the word GLOBAL is compulsory and the standard’s other type of temporary table, LOCAL, is usually not there. (I will mention, though, parenthetically, that LOCAL could optionally mean a temporary table is local to a stored procedure if it’s created in a stored procedure.)

2. If there is a non-temporary table already existing with the same name, then CREATE GLOBAL TEMPORARY TABLE fails.

3. There’s a distinction between the metadata — the definition of the table — and the data — the contents of the table.

4. The metadata is persistent; the data disappears when the session ends, or earlier.

5. The metadata is visible to all sessions in INFORMATION_SCHEMA or equivalent. The data is not visible to all sessions, and one sometimes calls the temporary table’s contents “session-specific data”.

6. There is an optional clause ON COMMIT {PRESERVE|DELETE} ROWS. When ON COMMIT DELETE ROWS is specified or is default, then the data disappears when COMMIT happens.
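To make that concrete, here is a hedged sketch of the standard syntax (this is not MySQL/MariaDB syntax; the table name is invented, and the exact form is the one accepted by the DB2-style implementations mentioned above):

CREATE GLOBAL TEMPORARY TABLE gtt_example
  (s1 INT, s2 VARCHAR(10))
  ON COMMIT DELETE ROWS;

The definition is persistent and visible to all sessions; each session that inserts into gtt_example sees only its own rows, and with ON COMMIT DELETE ROWS those rows disappear at COMMIT.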

PostgreSQL’s behaviour is copied by MySQL

For PostgreSQL, the CREATE TEMPORARY TABLE statement arrived in PostgreSQL 6.5 which was released in June 1999. For MySQL, CREATE TEMPORARY TABLE arrived in MySQL 3.23 in August 1999.

That was before my time, but I’m a post-hoc-ergo-propter-hoc kind of guy, and I suspect that somebody was seduced by the logical fallacy that PostgreSQL has some standard SQL, therefore copy it. So the thing that’s presently in MySQL and MariaDB is CREATE TEMPORARY TABLE table_name …;

But these DBMSs support zero (0) of the six characteristics that I described for the standard-compliant DBMSs.

1. The specification of scope is missing.

2. If there is a non-temporary table already existing with the same name, then CREATE TEMPORARY TABLE succeeds, and the existing table is hidden.

3. There’s no distinction between the metadata — the definition of the table — and the data — the contents of the table.

4. The metadata is not persistent; the data disappears when the session ends, or earlier.

5. The metadata is not visible to all sessions in INFORMATION_SCHEMA or equivalent … Yes, I’m refusing to acknowledge the existence of the new table INNODB_TEMP_TABLE_INFO, for quality reasons.

6. There is no optional clause ON COMMIT {PRESERVE|DELETE} ROWS.

One funny result of this is that simple statements like SELECT * FROM table_name; can’t be fully checked or optimized when a stored procedure containing them is created, because there’s no way to know what table_name will be — maybe at execution time it will be the table that’s defined and visible in information_schema, but maybe it will be a temporary table which is only temporarily visible.
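A hedged sketch of the shadowing that makes this so (the table name is arbitrary):

CREATE TABLE t (s1 INT);
INSERT INTO t VALUES (1);
CREATE TEMPORARY TABLE t (s1 INT);
SELECT * FROM t;               /* returns no rows: the temporary table hides the base table */
DROP TEMPORARY TABLE t;
SELECT * FROM t;               /* returns the row with 1: the base table is visible again */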

A less serious but irritating result is that, if you want to use a temporary table repeatedly, you have to re-create everything — find the original table definition somewhere, then add whatever indexes or views or triggers applied originally.

Is there any good news?

I wrote a worklog task about this long ago. It was WL#934 “Temporary tables in the standard way”. But it is a private Oracle spec, and is not in the Internet archive, so the general public can’t see it. There’s been no action on it that I know of. Some attention has been paid to feature request #58392 “Prevent TEMPORARY tables from shadowing regular tables” and feature request #20001 “Support for temp-tables in INFORMATION_SCHEMA”, but those are mere distractions.

However, it remains possible to implement standard global temporary tables without disrupting existing applications. The key is the syntax difference between

CREATE TEMPORARY TABLE ...;                    /* MySQL or MariaDB */

and

CREATE GLOBAL TEMPORARY TABLE ...;             /* standard */

They’re two different statements. So one could just say that “if and only if GLOBAL is specified, then the table has to have the standard rules”.

This isn’t due to luck. It happened because in olden days MySQL’s architects paid attention to the official ANSI/ISO framework requirements:

“In the Syntax Rules, the word shall defines conditions that are required to be true of syntactically-conforming SQL language … The treatment of language that does not conform to the SQL Formats and Syntax Rules is implementation-dependent.”

In other words, by allowing only the non-standard syntax CREATE TEMPORARY TABLE, MySQL is not violating the standard. On the other hand, the latest PostgreSQL release allows CREATE GLOBAL TEMPORARY TABLE, so PostgreSQL is violating the standard. To people who read my earlier blog post “Sometimes MySQL is more standards-compliant than PostgreSQL”, that won’t come as a shock.

An open-source MySQL/MariaDB GUI client on Linux

We’ve written a GUI application. Its command-line options are like those in the mysql client. Its graphic features are an SQL-statement text editor and a scrollable SQL result set. It runs on Linux now and we believe it could be ported to other platforms.

Here are four screenshots.



The startup is as non-GUI as can be — in fact it gets options from the command line, or from my.cnf, the same way that the mysql client does. Wherever it seemed reasonable, we asked: What would mysql do?



The statement (at the bottom of the screenshot) has the usual GUI features for editing, and has syntax highlighting — comments are green, reserved words are magenta, and so on.



Here the result is appearing in a grid. The second column is long so it’s on multiple lines with its own scroll bars.



Finally, here’s the same result after fooling around with cosmetics — the colours and fonts for each section of the screen are resettable via menu items, and the column widths can be changed by dragging.

The program is written in C++ and uses Qt. Qt is available on many operating systems including Mac and Windows but we only tried Linux. The license is GPL version 2. The status is alpha — there are bugs. Source and executable files can be downloaded from https://github.com/ocelot-inc/ocelotgui.

The BINARY and VARBINARY data types

MySQL’s support of the BINARY and VARBINARY data types is good, and the BINARY and VARBINARY data types are good things. And now the details. What applies here for MySQL applies for MariaDB as well.

Who supports VARBINARY

There’s an SQL:2008 standard optional feature T021 “BINARY and VARBINARY data types”. Our book has a bigger description, but here is one that is more up to date:

DBMS         | Standard-ish?              | Maximum length
DB2 for LUW  | No. Try CHAR FOR BIT DATA. | 254 for fixed-length, 32672 for variable-length
DB2 for z/OS | Yes.                       | 255 for fixed-length, 32704 for variable-length
Informix     | No. Try CHAR.              | 32767
MySQL        | Yes.                       | constrained by maximum row length = 65535
Oracle       | No. Try RAW.               | 2000 sometimes; 32767 sometimes
PostgreSQL   | No. Try BYTEA.             | theoretically 2**32 – 1
SQL Server   | Yes.                       | 8000 for fixed-length, 2**31 – 1 for variable-length

Standard Conformance

Provided that sql_mode=strict_all_tables, MySQL does the right (standard) thing most of the time, with two slight exceptions and one exception that isn’t there.

The first exception:

CREATE TABLE t1 (s1 VARBINARY(2));
INSERT INTO t1 VALUES (X'010200');

… This causes an error. MySQL is non-conformant. If one tries to store three bytes into a two-byte target, but the third byte is X’00’, there should be no error.

The second exception:

SET sql_mode='traditional,pipes_as_concat';
CREATE TABLE t2 (s1 VARBINARY(50000));
CREATE TABLE t3 AS SELECT s1 || s1 FROM t2;

… This does not cause an error. MySQL is non-conformant. If a concatenation results in a value that’s longer than the maximum length of VARBINARY, which is less than 65535, then there should be an error.

The exception that isn’t there:
The MySQL manual makes an odd claim that, for certain cases when there’s a UNIQUE index, “For example, if a table contains ‘a’, an attempt to store ‘a\0’ causes a duplicate-key error.” Ignore the manual. Attempting to insert ‘a\0’ will only cause a duplicate-key error if the table’s unique-key column contains ‘a\0’.
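A hedged sketch (the table name t6 is invented) that confirms it:

CREATE TABLE t6 (s1 VARBINARY(5), UNIQUE (s1));
INSERT INTO t6 VALUES ('a');
INSERT INTO t6 VALUES ('a\0');   /* succeeds: 'a' and 'a\0' are distinct VARBINARY values */
INSERT INTO t6 VALUES ('a\0');   /* this one is the duplicate-key error */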

The poem “Antigonish” described a similar case:

Yesterday, upon the stair,
I met a man who wasn’t there.
He wasn’t there again today,
I wish, I wish he’d go away…

The BINARY trap

Since BINARY columns are fixed-length, there has to be a padding rule. For example, suppose somebody enters zero bytes into a BINARY(2) target:

CREATE TABLE t4 (s1 BINARY(2));
INSERT INTO t4 VALUES (X'');
SELECT HEX(s1) FROM t4;

… The result is ‘0000’ — the padding byte for BINARY is X’00’ (0x00), not X’20’ (space).

There also has to be a rule about what to do for comparisons if comparands end with padding bytes.

CREATE TABLE t5 (s1 VARBINARY(2));
INSERT INTO t5 VALUES (X'0102');
SELECT * FROM t5 WHERE s1 = X'010200';

… This returns zero rows. It’s implementation-defined whether MySQL should
ignore trailing X’00’ during comparisons, so there was no chance of getting
it wrong.

The behaviour difference between BINARY and VARBINARY can cause fun:

CREATE TABLE t7 (s1 VARBINARY(2) PRIMARY KEY);
CREATE TABLE t8 (s1 BINARY(2), FOREIGN KEY (s1) REFERENCES t7 (s1));
INSERT INTO t7 VALUES (0x01);
INSERT INTO t8 SELECT s1 FROM t7;

… which fails on a foreign-key constraint error! It looks bizarre
that a value which is coming from the primary-key row can’t be put
in the foreign-key row, doesn’t it? But the zero-padding rule, combined
with the no-ignore-zero rule, means this is inevitable.

BINARY(x) is a fine data type whenever it’s certain that all the values
will be exactly x bytes long, and otherwise it’s a troublemaker.

When to use VARBINARY

VARBINARY is better than TINYBLOB or MEDIUMBLOB because it has a definite
maximum size, and that makes life easier for client programs that want to
know: how wide can the display be? In most DBMSs it’s more important that BLOBs can be stored separately from the rest of the row.

VARBINARY is better than VARCHAR if there should be no validity checking.
For example, if the default character set is UTF8 then this is illegal:

CREATE TABLE t9 (s1 VARCHAR(5));
INSERT INTO t9 VALUES (0xF4808283);

… but this is legal because character set doesn’t matter:

CREATE TABLE t10 (s1 VARBINARY(5));
INSERT INTO t10 VALUES (0xF4808283);

(I ran into this example on a SQL Server forum where the participants display woeful ignorance of Unicode).

And finally converting everything to VARBINARY is one way to avoid the annoying message “Illegal mix of collations”. In fact the wikimedia folks appear to have changed all VARCHARs to VARBINARYs back in 2011 just to avoid that error. I opine that the less drastic solution is to use collations consistently, but I wasn’t there.
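For the record, a hedged sketch of the situation (the table and column names are invented, and the two collations are just a pair that disagree):

CREATE TABLE c1 (s1 VARCHAR(5) CHARACTER SET utf8 COLLATE utf8_general_ci);
CREATE TABLE c2 (s1 VARCHAR(5) CHARACTER SET utf8 COLLATE utf8_unicode_ci);
SELECT * FROM c1, c2 WHERE c1.s1 = c2.s1;
/* fails: Illegal mix of collations ... for operation '=' */
/* the less drastic solution: make the collations consistent, for example */
ALTER TABLE c2 MODIFY s1 VARCHAR(5) CHARACTER SET utf8 COLLATE utf8_general_ci;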

Roles Review

A role is a bundle of GRANTed privileges which can be assigned to users or which can take the place of users. When there are hundreds or thousands of users, administration becomes a horrible burden if the DBMS doesn’t support roles. Our online ANSI-standard book has syntax descriptions. Other DBMSs do support roles, and MySQL has had a worklog task “WL#988 Roles” for many years. Earlier attempts to implement them included a Google Summer of Code project and a MySQL tool released by Google.
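To fix ideas, here is a minimal sketch of the intended workflow (the role, user, database, and table names are invented; the syntax is the MariaDB syntax reviewed below):

CREATE ROLE r_reader;
GRANT SELECT ON db1.* TO r_reader;
CREATE USER u1;
GRANT r_reader TO u1;
/* later, in u1's session: */
SET ROLE r_reader;
SELECT CURRENT_ROLE;     /* r_reader */
SELECT * FROM db1.t;     /* allowed via the role's SELECT privilege */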

Now another google-summer-coder, Vicențiu Ciorbaru, has put together something which will be in MariaDB. MariaDB’s official blog says this is “a big thing”. I’ll compare the MySQL specification to the MariaDB feature.

Unfortunately Oracle made WL#988 Roles a hidden task about a year and a half ago — you can’t find it by looking on dev.mysql.com/worklog. Fortunately the Wayback Machine had crawled the page in 2012 — you can find it by looking on http://web.archive.org/web/20120508072213/http://forge.mysql.com/worklog/task.php?id=988. Remember that worklog pages are tabbed, and you must click the “High Level Architecture” button to see most of the specification.

I’m using MariaDB 10.0.8 built from source. This is an early version and changes will happen before General Availability happens. Although MariaDB has its own worklog task which I’ll refer to as MDEV-4397, I prefer WL#988 for obvious reasons. For this review I’ll use the same order and same section names as WL#988 high-level architecture. It will be hard to follow unless you read WL#988 first.

CREATE ROLE

“CREATE ROLE role_1;” works. Good.

“CREATE ROLE IF NOT EXISTS role_2;” does not work. I’m ambivalent about this. I don’t like the IF NOT EXISTS clause myself, but I thought it was the way of the future, according to another MySQL Worklog task, WL#3129 Consistent clauses in CREATE and DROP.

Here’s the first bug: I get a crash if I use a bad name:

MariaDB [test]> CREATE ROLE ``;
ERROR 2013 (HY000): Lost connection to MySQL server during query

Notes about role names

MariaDB allows role names to be wildcards or privilege names, for example

MariaDB [test]> CREATE ROLE super;
Query OK, 0 rows affected (0.00 sec)
MariaDB [d]> create role '%';
Query OK, 0 rows affected (0.00 sec)

I regard that as a flaw. Somewhere, ages hence, somebody will get confused because these names have special meanings.

DROP ROLE

“DROP ROLE role_1;” works. Good.

“DROP ROLE IF EXISTS role_1;” does not work. Once again I’m ambivalent. WL#988 specifies that a NOT EXISTS clause should be permissible. On the other hand, “DROP USER IF EXISTS u;” doesn’t work either.

DROP ROLE does not take effect immediately. If a user has enabled role_x, then that user continues to have the privileges of role_x even after role_x is dropped. I regard that as a flaw. [UPDATE: it’s normal, see the comments.]

Here’s the second bug: DROP USER crashes if I specify a role name:

MariaDB [test]> CREATE ROLE role_55;
Query OK, 0 rows affected (0.00 sec)

MariaDB [test]> DROP USER role_55;
ERROR 2013 (HY000): Lost connection to MySQL server during query

GRANT privilege TO role_name

“GRANT SELECT ON t TO role_1;” works. Good.

“GRANT PROXY ON a TO role_1;” works too. I’m ambivalent yet again. As WL#988 says, it’s meaningless to grant a privilege to a role if it only can affect a user. There will always be an argument between people who think this must mean the user is making a mistake, and people who think it doesn’t do any harm.

The privilege change does not take effect immediately. Consider what WL#988 says:

“The privilege change takes effect for the next statement that any affected user executes. (A user is affected if he/she has an enabled role that is affected by the privilege change.)”

That seems reasonable — after all, if I say “GRANT … TO user_x” then user_x gets the privilege immediately. But it doesn’t happen for roles. If I say “GRANT … TO role_x”, then role_x gets the privilege — but users who have enabled role_x do not. I regard this as a flaw. [UPDATE: my test was bad, I was not granting the same type of privilege to both the role and the user, see the comments.]

Here’s the third bug: if a user and a role have the same name, GRANT only works on the role. If I say

CREATE USER u_1;
CREATE ROLE u_1;
GRANT INSERT ON *.* TO u_1;
SELECT host,user,insert_priv,is_role FROM mysql.user WHERE mysql.user.user='u_1';

I’d expect that INSERT would be granted to both the user and the role. It’s not, it’s granted only to the role. Moral: users and roles should not have the same names!

GRANT role_name TO user_name

“GRANT role_1 TO user_1;” works. Good. “GRANT role_1 ON *.* TO user_1;” would be good too but an [ON *.*] clause should never be necessary.

“GRANT role_1 TO user_1 WITH GRANT OPTION;” does not work. I’m ambivalent yet again. What we actually have is an ability to say “GRANT role_1 TO user_1 WITH ADMIN OPTION;” and that’s okay (WITH ADMIN OPTION is a standard clause), but I’m not sure it’s a good idea that it’s the default.

“GRANT role_1, role_2 to user_1;” does not work. I regard this as a flaw, but it might have been tough to implement. (UPDATE: 2016-03-20: It is indeed tough to implement. Two months after this blog comment was made, since the manual said that it would work, this flaw was reported as a bug: MDEV-5772 Granting multiple roles in single statement does not work. But two years later it was still unresolved.)

Who has GRANT role_name privileges?

As WL#988 puts it:

Some People will say that, if Peter said CREATE ROLE Role1, Peter should automatically have GRANT Role1 privilege. The analogy is with the way that we grant for routines. This is what would happen with Oracle.
Other People will say that, no, the only way for Peter to get GRANT Role1 privileges is if somebody grants to Peter. The analogy is with the way that we grant for tables.

The implementer decided to follow the advice of Some People for this one. Good.

GRANT role_name TO role_name

“GRANT role_1 TO role_2;” works. Good.

“GRANT role_1 TO role_1;” does not work — and that’s good too. This is a special instance of what WL#988 calls a “cyclical GRANT”. Nobody would want that.

GRANT CREATE / ALTER / DROP ROLE

“GRANT CREATE ROLE …” does not work. I regard that as a flaw.

WL#988 has quite a bit of verbiage about how there has to be a CREATE ROLE privilege, and why. MariaDB sidesteps — the decision was that all you need is a CREATE USER privilege. Although I cannot think of a way that this could lead to security breaches, I am fairly sure that cases exist where administrators who want to allow user-creation do not want to allow role-creation, and vice versa.

REVOKE

“REVOKE role_1 FROM user_1;” works. Good.

SET ROLE

“SET ROLE role_1;” works. And it only works if somebody previously said “GRANT role_1 TO user_1;” for the user that’s doing the SET ROLE. Good.

“SET role ‘role_1′;” works, that is, the role name can be a string literal. But “SET @x=’role_1’; SET ROLE @x;” does not work. I regard that as a flaw because it’s a lack-of-orthogonality thing. By that I mean: if a string literal is okay in a certain place in an SQL statement, then The User On The Clapham Omnibus will expect that a string variable would be okay in the same place.

“SET ROLE role_1,role_2;” does not work. That’s because MariaDB, unlike Oracle, doesn’t allow assignment of multiple roles. And that’s what WL#988 asks for, so no complaints here.

SET ROLE DEFAULT does not work. I regard that as a flaw. Yes, I recognize there is no non-null default, but that just means that SET ROLE DEFAULT should have the same effect as SET ROLE NONE.

Big Example

The big example didn’t work due to incompatible syntax, but I see that there is some evidence that testing has been done. Good.

CURRENT_ROLE

“SELECT CURRENT_ROLE;” works. Good. As expected, it’s possible to find out what the last “SET ROLE” statement did.

“SELECT CURRENT_ROLE();” also works. I regard this as a flaw but I suppose it was inevitable. After all, “SELECT CURRENT_USER();” works.

CURRENT_ROLE is not on the current reserved words list and this will cause a minor incompatibility with MySQL.

ALTER ROLE

“ALTER ROLE role_1 …” does not work. Good.

ALTER ROLE exists in Oracle for changing the IDENTIFIED BY clause, but MariaDB doesn’t support the IDENTIFIED BY clause, so ALTER ROLE would have nothing to do.

RENAME ROLE

“RENAME ROLE role_1 TO role_2;” does not work. Good.

Although “RENAME USER user_1 TO user_2;” works, WL#988 reminds us that renaming a role is going to have cascading effects that wouldn’t exist for renaming a user.

SHOW PRIVILEGES

“SHOW PRIVILEGES;” is the same as it used to be. Good.

Since MariaDB doesn’t have a CREATE ROLE privilege, it doesn’t have to list it.

SET DEFAULT ROLE

“SET DEFAULT ROLE role_1 TO user_1;” does not work. I regard this as a flaw.

It’s a good thing if, when user_1 connects, user_1 automatically gets the privileges associated with a default role. It’s less good if user_1 has to do her own SET ROLE whenever she connects, probably with mysql --init-command=’SET ROLE role_1′. The point of roles was supposed to be the elimination of administrator hassle, and --init-command won’t help there.

The Initial State

The initial state when a user connects is, effectively, SET ROLE NONE. That’ll have to do for now.

SHOW

“SHOW GRANTS FOR user_1;” works and shows roles too. Good.

“SHOW GRANTS FOR CURRENT_ROLE;” also works. WL#988 failed to suggest this obvious addition, so maybe I should call it: better than good.

INFORMATION_SCHEMA

“SELECT * FROM INFORMATION_SCHEMA.ENABLED_ROLES;” works. Good.

“SELECT * FROM INFORMATION_SCHEMA.APPLICABLE_ROLES;” also works. Also good.

And it does show contained roles. That is, “If role1 is contained in role2, then both roles appear.” However: It only shows the roles that have been granted for the current user, it does not show all roles. I regard that as a flaw.
[UPDATE: In an earlier version of this blog this observation was in the wrong place.] [UPDATE again: I missed what the cause was, the real problem was that I couldn’t grant to public — but that’s another bug. See comments.]

Command line options

“--role=role_name” doesn’t work (on the command line). Good.

The way to go for setting a role name at connect time is to support defaults. Any other “solution” would just get in the way.

For Replication

As WL#988 says, “The plan is to ignore replication till the last minute.” I can’t tell whether MariaDB is following such a plan.

Plan for Backup

Backup is simplified by the fact that users and roles come from the same table in the mysql database. I did not test it, so let’s say: assumed good.

Stored Procedures and Definer Privileges

After “SET ROLE role_1; CREATE PROCEDURE p () SQL SECURITY DEFINER SELECT CURRENT_ROLE; CALL p();” the result is NULL. Good.

The idea is supposed to be that, within an SQL SECURITY DEFINER routine, an implicit SET ROLE NONE takes place temporarily. It’s great that the implementer caught on to this rather obscure point.

Logging in with a Role Name

After “CREATE ROLE ‘role_5@localhost’;” or “CREATE ROLE role_5;” I was not able to log in as role_5. Good.

However, “CREATE DEFINER=role_5 PROCEDURE p2() SET @x=5;” works. I regard that as a flaw. There should at least be a warning that the DEFINER clause is being used with a nonexistent user. I get the impression from MDEV-4397 that this is deliberate behaviour, so don’t expect a fix.

DML statements and role privileges

The Oracle-style restriction for CREATE VIEW isn’t there. Good.

Other worklog tasks affected by roles

Well, WL#988 anticipated that roles would come first, and pluggable authentication support would come later. It didn’t happen that way. No problem.

Glossary

It doesn’t seem that the terminology differs from what’s expected. Good.

New columns in mysql.user

When I say “CREATE ROLE role_1;” the effect is that a new row goes into the mysql.user table. The mysql.user table definition had to be adjusted to take that into account. But I don’t think it’s all good.

WL#988 expected that the length of a role name would be 16 characters, but mysql.user.user is now CHAR(80). Well, that was a flaw, but just an old bug that wasn’t caused by introduction of roles.

The mysql.user.host column is blank (”). I’d expected it would be ‘%’. I suppose that this was intended as a way to disambiguate users and roles that had the same names, but that’s not working well anyway. The column should be ‘%’ because that means “any or all hosts”.

There is one new column at the end, named is_role, which can be ‘Y’ or ‘N’. I regard that as a flaw. The future-considerations way would have been to add a new column named user_type, which could be NULL or ‘Role’ — or something else that somebody will dream up in a future release. The way it is now, if somebody dreams up another type later, they’ll have to add yet another column in mysql.user.

Storing role names

The implementer picked “Alternative #2: (loosely based on a suggestion from Monty Widenius)”. So there’s a new table in the mysql database:
CREATE TABLE mysql.roles_mapping (Host CHAR(60), User CHAR(80), Role CHAR(80), Admin_option ENUM('N','Y'))
… And it works. Good. Although ROLE was a reserved word in SQL-99, and although WL#988 suggested an additional column and different names, it works.
However, the WITH ADMIN OPTION value should be stored in only one place and that place is mysql.user, so mysql.roles_mapping should not contain a column named Admin_option. I regard this as a flaw. [UPDATE: no, it is not a flaw, see the comments.]

The Decisions about Options

The early decisions of Monty Widenius were: we don’t want passwords, we do want defaults, we don’t want multiple current roles, and PUBLIC should not be a role. That’s what ended up happening, except that there are no defaults. Good.

Overall

For all of the bugs, and for some of the flaws, there’s no worry — they’ll probably disappear. The MariaDB advantage here is the release-early policy, because there’s lots of advance time if there’s a decision to make a few corrections or course changes.

The MariaDB disadvantage is illustrated by its worklog task for roles, MDEV-4397. I’m biased, but I think it’s fair to say that MySQL’s WL#988 “high level architecture” specification covered the necessary ground and MariaDB’s MDEV-4397 did not. This means Oracle made the right decision when it “hid” its worklog task. Too bad it didn’t work out this particular time.

The USA’s healthcare.gov site and LAMP

The USA’s health care exchange site, healthcare.gov, has had well-publicized initial woes.

The New York Times has said one of the problems was the government’s choice of DBMS, namely MarkLogic. A MarkLogic employee has said that “If the exact same processes and analysis were applied to a LAMP stack or an Oracle Exa-stack, the results would have likely been the same.”

I don’t know why he picked Exastack for comparison, but I too have wondered whether things would have been different if the American government had chosen a LAMP component (MySQL or MariaDB) as a DBMS, instead of MarkLogic.

What is MarkLogic?

The company is a software firm founded in 2001, based in San Carlos, California. It has 250 employees. The Gartner Magic Quadrant classes it as a “niche player” in the Operational DBMS category.

The product is a closed-source XML DBMS. The minimum price for a perpetual enterprise license is $32,000 but presumably one would also pay for support, just as one does with MySQL or MariaDB.

There are 250 customers. According to the Wall Street Journal “most of its sales come from dislodging Oracle Corp.”

One of the customers, since 2012 or before, is CMS (the Centers for Medicare and Medicaid), which is a branch of the United States Department of Health and Human Services. CMS is the agency that built the healthcare.gov online portal.

Is MarkLogic responsible for the woes?

Probably MarkLogic is not the bottleneck.

It’s not even the only DBMS that the application queries. There is certainly some contact with other repositories during a get-acquainted process, including Oracle Enterprise Identity management — so one could just as easily blame Oracle.

There are multiple other vendors. USA Today mentions Equifax, Serco, Optum/QSSI, and the main contractor, CGI Federal.

A particular focus for critics has been a web-hosting provider, Verizon Terremark. They have been blamed for some of the difficulties and will eventually be replaced by an HP solution. HP also has a fairly new contract for handling the replication.

Doubtless all the parties would like to blame the other parties, but “the Obama administration has requested that all government officials and contractors involved keep their work confidential”.

It’s clear, though, that the site was launched with insufficient hardware. Originally it was sharing machines with other government services. That’s changed. Now it has dedicated machines.

But the site cost $630 million, so one has to suppose there was money to buy hardware in the first place. That suggests something went awry with the planning, which makes it credible when a Forbes article says the government broke every rule of project management.

So we can’t be sure because of the government confidentiality requirement, but it seems unlikely that MarkLogic will get the blame when the dust settles.

Is MarkLogic actually fast?

One way to show that MarkLogic isn’t responsible for slowness would be to look for independent confirmation of its speed. The problem with that is MarkLogic’s evaluator-license agreement, from which I quote:


MarkLogic grants to You a limited, non-transferable, non-exclusive, internal use license in the United States of America

[You must not] disclose, without MarkLogic’s prior written consent, performance or capacity statistics or the results of any benchmark test performed on Software

[You must not] use the Product for production activity,

You acknowledge that the Software may electronically transmit to MarkLogic summary data relating to use of the Software

http://developer.marklogic.com/products

These conditions aren’t unheard of in the EULA world, but they do have the effect that I can’t look at the product at all (I’m not in the United States), and others can look at the product but can’t say what they find wrong with it.

So it doesn’t really matter that Facebook got 13 million transactions/second in 2011, or that the HandlerSocket extension for MySQL got 750,000 transactions/second with a lot less hardware. Possibly MarkLogic could do better. And I think we can dismiss the newspaper account that MarkLogic “continued to perform below expectations, according to one person who works in the command center.” Anonymous accounts don’t count.

So we can’t be sure because of the MarkLogic confidentiality requirements, but it seems possible that MarkLogic could outperform its SQL competitors.

Is MarkLogic responsible for absence of High Availability?

High Availability shouldn’t be an issue.

At first glance the reported uptime of the site — 43% initially, 90% now — looks bad. After all, Yves Trudeau surveyed MySQL High Availability solutions years ago and found even the laggards were doing 98%. Later the OpenQuery folks reported that some customers find “five nines” (99.999%) too fussily precise, so let’s just round it to a hundred.

At second glance, though, the reported uptime of the site is okay.

First: The product only has to work in 36 American states and Hawaii is not one of them. That’s only five time zones, then. So it can go down a few hours per night for scheduled maintenance. And uptime “exclusive of scheduled maintenance” is actually 95%.

Second: It’s okay to have debugging code and extra monitoring going on during the first few months. I’m not saying that’s what’s happening — indeed the fact that they didn’t do a 500-simulated-user test until late September suggests they aren’t worrywarts — but it is what others would have done, and therefore others would also be below 99% at this stage of the game.

So, without saying that 90 is the new 99, I think we can admit that it wouldn’t really be fair to make a big deal of the fact that some LAMP installation has higher availability than healthcare.gov.

Is it hard to use?

MarkLogic is an XML DBMS. So its principal query language is XQuery, although there’s a section in the manual about how you could use SQL in a limited way.

Well, of course, to me and to most readers of this blog, XQuery is murky gibberish and SQL is kindergartenly obvious. But we have to suppose that there are XML experts who would find the opposite.
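To illustrate with a toy example (nothing to do with healthcare.gov's real schema): the same question once in SQL and once as an XQuery FLWOR expression.

-- SQL
SELECT name FROM plans WHERE premium < 200;

(: XQuery :)
for $p in doc("plans.xml")//plan
where $p/premium < 200
return $p/name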

What, then, can we make out of the New York Times’s principal finding about the DBMS? It says:

“Another sore point was the Medicare agency’s decision to use database software, from a company called MarkLogic, that managed the data differently from systems by companies like IBM, Microsoft and Oracle. CGI officials argued that it would slow work because it was too unfamiliar. Government officials disagreed, and its configuration remains a serious problem.”

New York Times November 23 2013

Well, of course, to me and to most readers of this blog, the CGI officials were right because it really is unfamiliar — they obviously had people with experience in IBM DB2, Microsoft SQL Server, or Oracle (either Oracle 12c or Oracle MySQL). But we have to suppose that there are XML experts who would find the opposite.

And, though I think it’s a bit extreme, we have to allow that it’s possible the problems were due to sabotage by Oracle DBAs.

Yet again, it’s impossible to prove that MarkLogic is at fault, because we’re all starting off with biases.

Did the problem have something to do with IDs?

I suspect there was an issue with IDs (identifications).

It starts off with this observation of a MarkLogic feature: “Instead of storing strings as sequences of characters, each string gets stored as a sequence of numeric token IDs. The original string can be reconstructed using the dictionary as a lookup table.”
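In relational terms that feature is plain dictionary encoding. A rough SQL sketch of the idea (table and column names are made up):

CREATE TABLE dictionary (token_id INT PRIMARY KEY, token VARCHAR(100));
CREATE TABLE document_tokens (doc_id INT, pos INT, token_id INT);
-- a string is stored as a sequence of token_ids; reconstructing it is a join back to the dictionary:
SELECT GROUP_CONCAT(d.token ORDER BY t.pos SEPARATOR ' ')
FROM document_tokens t JOIN dictionary d USING (token_id)
WHERE t.doc_id = 1;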

It ends with this observation from an email written on September 27 2013 by a healthcare.gov worker: “The generation of identifiers within MarkLogic was inefficient. This was fixed and verified as part of the 500 user test.”

Of course it’s nice to see it was fixed, but isn’t it disturbing that a major structural piece was inefficient as late as September?

Hard to say. Too little detail. So the search for a smoking gun has so far led nowhere.

Is it less reliable?

Various stories — though none from the principals — suggest that MarkLogic was chosen because of its flexibility. Uh-oh.

The reported quality problems are “one in 10 enrollments through HealthCare.gov aren’t accurately being transmitted” and “duplicate files, lack of a file or a file with mistaken data, such as a child being listed as a spouse.”

I don’t see how the spousal problem could have been technical, but the duplications and the gone-missings point to: uh-oh, lack of strong rules about what can go in. And of course strong rules are something that the “relational” fuddy-duddies have worried about for decades. If the selling point of MarkLogic (its flexibility) is in fact leading to a situation that is less than acceptable, then we have found a flaw at last, and it would suggest that the main complaints so far have been trivia.

This is the only matter that I think looks significant at this stage.

How’s that hopey-changey stuff working out for your Database?

The expectation of an Obama aide was: “a consumer experience unmatched by anything in government, but also in the private sector.”

The result is: so far not a failure, and nothing that shows that MarkLogic will be primarily responsible if it is a failure.

However: most of the defence is along the lines of “we can’t be sure”. That cuts both ways — nobody can say it’s “likely” that LAMP would have been just as bad.
