Sunday, January 22, 2017

MySQL Group Replication vs. Multi Source

In my previous post, we saw the usage of MySQL Group Replication (MGR) in single-primary mode. We know that Oracle does not recommend using MGR in multi-primary mode, but there is so much in the documentation and in presentations about MGR behavior in multi-primary that I feel I should really give it a try, and especially compare this technology with the already existing multiple-master solution introduced in 5.7: multi-source replication.

Installation

To this end, I will set up two clusters using MySQL-Sandbox. The instructions for MGR in the manual use three nodes on the same host without MySQL-Sandbox. Here we can see that using MySQL-Sandbox simplifies operations considerably (the scripts are available on GitHub):

Group replication

# ----------------------------------------------------------------------------
#!/bin/bash
# mm_gr.sh : installs MySQL Group Replication
MYSQL_VERSION=$1
[ -z "$MYSQL_VERSION" ] && MYSQL_VERSION=5.7.17

make_multiple_sandbox --gtid --group_directory=GR $MYSQL_VERSION

if [ "$?" != "0" ] ; then exit 1 ; fi
multi_sb=$HOME/sandboxes/GR
baseport=$($multi_sb/n1 -BN -e 'select @@port')
baseport=$(($baseport+99))

port1=$(($baseport+1))
port2=$(($baseport+2))
port3=$(($baseport+3))
for N in 1 2 3
do
    myport=$(($baseport+N))
    options=(
        binlog_checksum=NONE
        log_slave_updates=ON
        plugin-load=group_replication.so
        group_replication=FORCE_PLUS_PERMANENT
        group_replication_start_on_boot=OFF
        group_replication_bootstrap_group=OFF
        transaction_write_set_extraction=XXHASH64
        loose-group_replication_group_name="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
        loose-group_replication_local_address="127.0.0.1:$myport"
        loose-group_replication_group_seeds="127.0.0.1:$port1,127.0.0.1:$port2,127.0.0.1:$port3"
        loose-group-replication-single-primary-mode=off
    )
    $multi_sb/node$N/add_option ${options[*]}

    user_cmd='reset master;'
    user_cmd="$user_cmd CHANGE MASTER TO MASTER_USER='rsandbox', MASTER_PASSWORD='rsandbox' FOR CHANNEL 'group_replication_recovery';"
    $multi_sb/node$N/use -v -u root -e "$user_cmd"
done

START_CMD="SET GLOBAL group_replication_bootstrap_group=ON;"
START_CMD="$START_CMD START GROUP_REPLICATION;"
START_CMD="$START_CMD SET GLOBAL group_replication_bootstrap_group=OFF;"
$multi_sb/n1 -v -e "$START_CMD"
sleep 1
$multi_sb/n2 -v -e 'START GROUP_REPLICATION;'
sleep 1
$multi_sb/n3 -v -e 'START GROUP_REPLICATION;'
sleep 1
$multi_sb/use_all 'select * from performance_schema.replication_group_members'
# ----------------------------------------------------------------------------

Using this script, we get a cluster with MGR up and running. Here's a trimmed-down sample of its output:

$ ./mm_gr.sh
installing node 1
installing node 2
installing node 3
group directory installed in $HOME/sandboxes/GR
# option 'binlog_checksum=NONE' added to configuration file
# option 'log_slave_updates=ON' added to configuration file
# option 'plugin-load=group_replication.so' added to configuration file
# option 'group_replication=FORCE_PLUS_PERMANENT' added to configuration file
# option 'group_replication_start_on_boot=OFF' added to configuration file
# option 'group_replication_bootstrap_group=OFF' added to configuration file
# option 'transaction_write_set_extraction=XXHASH64' added to configuration file
# option 'loose-group_replication_group_name=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee' added to configuration file
# option 'loose-group_replication_local_address=127.0.0.1:14518' added to configuration file
# option 'loose-group_replication_group_seeds=127.0.0.1:14518,127.0.0.1:14519,127.0.0.1:14520' added to configuration file
# option 'loose-group-replication-single-primary-mode=off' added to configuration file
.. sandbox server started
reset master
CHANGE MASTER TO MASTER_USER='rsandbox', MASTER_PASSWORD='rsandbox' FOR CHANNEL 'group_replication_recovery'

# [ ...]
.. sandbox server started
reset master
CHANGE MASTER TO MASTER_USER='rsandbox', MASTER_PASSWORD='rsandbox' FOR CHANNEL 'group_replication_recovery'

# [...]
.. sandbox server started
reset master
CHANGE MASTER TO MASTER_USER='rsandbox', MASTER_PASSWORD='rsandbox' FOR CHANNEL 'group_replication_recovery'

SET GLOBAL group_replication_bootstrap_group=ON
START GROUP_REPLICATION
SET GLOBAL group_replication_bootstrap_group=OFF
--------------

--------------
START GROUP_REPLICATION
--------------
START GROUP_REPLICATION
--------------

Multi-source replication

We have a similar (but much shorter) script to run multi-source replication in sandboxes.

# ----------------------------------------------------------------------------
#!/bin/bash
# mm_ms.sh : installs MySQL multi-source replication
MYSQL_VERSION=$1
[ -z "$MYSQL_VERSION" ] && MYSQL_VERSION=5.7.16

make_multiple_sandbox --gtid --group_directory=MS $MYSQL_VERSION

if [ "$?" != "0" ] ; then exit 1 ; fi
multi_sb=$HOME/sandboxes/MS

$multi_sb/use_all 'reset master'

for N in 1 2 3
do
    user_cmd=''
    for node in 1 2 3
    do
        if [ "$node" != "$N" ]
        then
            master_port=$($multi_sb/n$node -BN -e 'select @@port')
            user_cmd="$user_cmd CHANGE MASTER TO MASTER_USER='rsandbox', "
            user_cmd="$user_cmd MASTER_PASSWORD='rsandbox', master_host='127.0.0.1', "
            user_cmd="$user_cmd master_port=$master_port FOR CHANNEL 'node$node';"
            user_cmd="$user_cmd START SLAVE FOR CHANNEL 'node$node';"
        fi
    done
    $multi_sb/node$N/use -v -u root -e "$user_cmd"
done
# ----------------------------------------------------------------------------

Sample run:

$ ./mm_ms.sh
installing node 1
installing node 2
installing node 3
group directory installed in $HOME/sandboxes/MS
# server: 1:
# server: 2:
# server: 3:
--------------
CHANGE MASTER TO MASTER_USER='rsandbox',  MASTER_PASSWORD='rsandbox', master_host='127.0.0.1',  master_port=14318 FOR CHANNEL 'node2'
START SLAVE FOR CHANNEL 'node2'
CHANGE MASTER TO MASTER_USER='rsandbox',  MASTER_PASSWORD='rsandbox', master_host='127.0.0.1',  master_port=14319 FOR CHANNEL 'node3'
START SLAVE FOR CHANNEL 'node3'

--------------
CHANGE MASTER TO MASTER_USER='rsandbox',  MASTER_PASSWORD='rsandbox', master_host='127.0.0.1',  master_port=14317 FOR CHANNEL 'node1'
START SLAVE FOR CHANNEL 'node1'
CHANGE MASTER TO MASTER_USER='rsandbox',  MASTER_PASSWORD='rsandbox', master_host='127.0.0.1',  master_port=14319 FOR CHANNEL 'node3'
START SLAVE FOR CHANNEL 'node3'

--------------
CHANGE MASTER TO MASTER_USER='rsandbox',  MASTER_PASSWORD='rsandbox', master_host='127.0.0.1',  master_port=14317 FOR CHANNEL 'node1'
START SLAVE FOR CHANNEL 'node1'
CHANGE MASTER TO MASTER_USER='rsandbox',  MASTER_PASSWORD='rsandbox', master_host='127.0.0.1',  master_port=14318 FOR CHANNEL 'node2'
START SLAVE FOR CHANNEL 'node2'
--------------

Simple test data

Finally, we have a script that will create one table for each node and insert one record.


# ----------------------------------------------------------------------------     
#!/bin/bash
multi_sb=$1
if [ -z "$multi_sb" ]
then
    echo multiple sandbox path needed
    exit 1
fi
if [ ! -d $multi_sb ]
then
    echo directory $multi_sb not found
    exit 1
fi
if [ ! -d "$multi_sb/node3" ]
then
    echo directory $multi_sb/node3 not found
    exit 1
fi
cd $multi_sb

for N in  1 2 3 ; do
    ./n$N -e "create schema if not exists test"
    ./n$N -e "drop table if exists test.t$N"
    ./n$N -e "create table test.t$N(id int not null primary key, sid int)"
    ./n$N -e "insert into  test.t$N values ($N, @@server_id)"
done

./use_all 'select * from test.t1 union select * from test.t2 union select * from test.t3'
# ----------------------------------------------------------------------------

We run the script in both clusters, and at the end we'll have the test database with three tables, each one created and filled by a different node.
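
If everything works, the final union query should return the same three rows on every node of both clusters — something like this (server IDs 101 to 103 are the ones assigned by MySQL-Sandbox):

+----+------+
| id | sid  |
+----+------+
|  1 |  101 |
|  2 |  102 |
|  3 |  103 |
+----+------+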

Checking replication status

The old topology: multi-source

Let's start with the old technology, so we can easily compare it with the new one.

node1 [localhost] {msandbox} (performance_schema) > select * from replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: node2
               GROUP_NAME:
              SOURCE_UUID: 00014318-2222-2222-2222-222222222222   # ----
                THREAD_ID: 32
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 244
 LAST_HEARTBEAT_TIMESTAMP: 2017-01-22 13:31:54
 RECEIVED_TRANSACTION_SET: 00014318-2222-2222-2222-222222222222:1-4
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
             CHANNEL_NAME: node3
               GROUP_NAME:
              SOURCE_UUID: 00014319-3333-3333-3333-333333333333   # ----
                THREAD_ID: 34
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 244
 LAST_HEARTBEAT_TIMESTAMP: 2017-01-22 13:31:55
 RECEIVED_TRANSACTION_SET: 00014319-3333-3333-3333-333333333333:1-4
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
2 rows in set (0.00 sec)

Notice that we are benefiting from a feature of MySQL-Sandbox that creates a more readable version of the server UUID. This way we can easily identify the nodes. Here we see that each transaction set has a clearly defined origin.
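
For example, on node2 the first eight digits of the rewritten UUID encode the port, and the repeated digit marks the node number (a quick check; the actual ports depend on the sandbox):

node2 [localhost] {msandbox} ((none)) > select @@port, @@server_uuid;
+--------+--------------------------------------+
| @@port | @@server_uuid                        |
+--------+--------------------------------------+
|  14318 | 00014318-2222-2222-2222-222222222222 |
+--------+--------------------------------------+
1 row in set (0.00 sec)

We can see similar information in the replication tables from the mysql database: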

node1 [localhost] {msandbox} (mysql) > select * from slave_master_info\G
*************************** 1. row ***************************
       Number_of_lines: 25
       Master_log_name: mysql-bin.000001
        Master_log_pos: 154
                  Host: 127.0.0.1       # ----
             User_name: rsandbox
         User_password: rsandbox
                  Port: 14318           # ----
         Connect_retry: 60
           Enabled_ssl: 0
Ssl_verify_server_cert: 0
             Heartbeat: 30
                  Bind:
    Ignored_server_ids: 0
                  Uuid: 00014318-2222-2222-2222-222222222222  # ----
           Retry_count: 86400
           Ssl_crlpath:
 Enabled_auto_position: 0
          Channel_name: node2
           Tls_version:
*************************** 2. row ***************************
       Number_of_lines: 25
       Master_log_name: mysql-bin.000001
        Master_log_pos: 154
                  Host: 127.0.0.1    # ----
             User_name: rsandbox
         User_password: rsandbox
                  Port: 14319        # ----
         Connect_retry: 60
           Enabled_ssl: 0
Ssl_verify_server_cert: 0
             Heartbeat: 30
                  Bind:
    Ignored_server_ids: 0
                  Uuid: 00014319-3333-3333-3333-333333333333  # ----
           Retry_count: 86400
           Ssl_crlpath:
 Enabled_auto_position: 0
          Channel_name: node3
           Tls_version:
2 rows in set (0.00 sec)

Additionally, we have SHOW SLAVE STATUS, which, although not the ideal monitoring tool, is still the only place where we can see at once both the received and executed transactions, and the corresponding binary log and relay log records.

Here's an abridged version:

node1 [localhost] {msandbox} (performance_schema) > SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 127.0.0.1
                  Master_User: rsandbox
                  Master_Port: 14318
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 965
               Relay_Log_File: mysql-relay-node2.000002
                Relay_Log_Pos: 1178
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 965
              Relay_Log_Space: 1387
             Master_Server_Id: 102
                  Master_UUID: 00014318-2222-2222-2222-222222222222
             Master_Info_File: mysql.slave_master_info
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
           Retrieved_Gtid_Set: 00014318-2222-2222-2222-222222222222:1-4
            Executed_Gtid_Set: 00014317-1111-1111-1111-111111111111:1-4,
00014318-2222-2222-2222-222222222222:1-4,
00014319-3333-3333-3333-333333333333:1-4
                 Channel_Name: node2
*************************** 2. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 127.0.0.1
                  Master_User: rsandbox
                  Master_Port: 14319
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000001
          Read_Master_Log_Pos: 965
               Relay_Log_File: mysql-relay-node3.000002
                Relay_Log_Pos: 1178
        Relay_Master_Log_File: mysql-bin.000001
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
          Exec_Master_Log_Pos: 965
              Relay_Log_Space: 1387
              Until_Condition: None
             Master_Server_Id: 103
                  Master_UUID: 00014319-3333-3333-3333-333333333333
             Master_Info_File: mysql.slave_master_info
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
           Retrieved_Gtid_Set: 00014319-3333-3333-3333-333333333333:1-4
            Executed_Gtid_Set: 00014317-1111-1111-1111-111111111111:1-4,
00014318-2222-2222-2222-222222222222:1-4,
00014319-3333-3333-3333-333333333333:1-4
                 Channel_Name: node3
2 rows in set (0.00 sec)

Finally, we'll have a look at the data itself:

node1 [localhost] {msandbox} (mysql) > show binlog events;
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| Log_name         | Pos | Event_type     | Server_id | End_log_pos | Info                                                              |
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
| mysql-bin.000001 |   4 | Format_desc    |       101 |         123 | Server ver: 5.7.16-log, Binlog ver: 4                             |
| mysql-bin.000001 | 123 | Previous_gtids |       101 |         154 |                                                                   |
| mysql-bin.000001 | 154 | Gtid           |       101 |         219 | SET @@SESSION.GTID_NEXT= '00014317-1111-1111-1111-111111111111:1' |
| mysql-bin.000001 | 219 | Query          |       101 |         325 | create schema if not exists test                                  |
| mysql-bin.000001 | 325 | Gtid           |       101 |         390 | SET @@SESSION.GTID_NEXT= '00014317-1111-1111-1111-111111111111:2' |
| mysql-bin.000001 | 390 | Query          |       101 |         518 | DROP TABLE IF EXISTS `test`.`t1` /* generated by server */        |
| mysql-bin.000001 | 518 | Gtid           |       101 |         583 | SET @@SESSION.GTID_NEXT= '00014317-1111-1111-1111-111111111111:3' |
| mysql-bin.000001 | 583 | Query          |       101 |         711 | create table test.t1(id int not null primary key, sid int)        |
| mysql-bin.000001 | 711 | Gtid           |       101 |         776 | SET @@SESSION.GTID_NEXT= '00014317-1111-1111-1111-111111111111:4' |
| mysql-bin.000001 | 776 | Query          |       101 |         844 | BEGIN                                                             |
| mysql-bin.000001 | 844 | Table_map      |       101 |         890 | table_id: 108 (test.t1)                                           |
| mysql-bin.000001 | 890 | Write_rows     |       101 |         934 | table_id: 108 flags: STMT_END_F                                   |
| mysql-bin.000001 | 934 | Xid            |       101 |         965 | COMMIT /* xid=72 */                                               |
+------------------+-----+----------------+-----------+-------------+-------------------------------------------------------------------+
13 rows in set (0.00 sec)

The binary log contains only the data produced in this node.

The new topology: MGR

Turning to the new software, let's first check whether replication is working. An important note here: SHOW SLAVE STATUS is not available in MGR. That's not entirely true: the channel architecture introduced for multi-source replication has been repurposed to convey information about group problems. If something goes wrong during the setup, you will find the information in the group_replication_recovery channel.

node1 [localhost] {msandbox} (performance_schema) > SHOW SLAVE STATUS for channel 'group_replication_recovery';
Empty set (0.00 sec)

When things are fine, the tables in performance_schema report a satisfactory status:

node1 [localhost] {msandbox} (performance_schema) > select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 00014418-1111-1111-1111-111111111111 | gmini       |       14418 | ONLINE       |
| group_replication_applier | 00014419-2222-2222-2222-222222222222 | gmini       |       14419 | ONLINE       |
| group_replication_applier | 00014420-3333-3333-3333-333333333333 | gmini       |       14420 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+

The above command tells us that all nodes are online.

Next, we ask for the stats of the current member.

node1 [localhost] {msandbox} (performance_schema) > select * from replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14850806532423012:3
                         MEMBER_ID: 00014418-1111-1111-1111-111111111111
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 12
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-7:1000003-1000006:2000003-2000006
    LAST_CONFLICT_FREE_TRANSACTION: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2000006
1 row in set (0.00 sec)

The same operation from a different member will give a very similar result.

node2 [localhost] {msandbox} (performance_schema) > select * from replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14850806532423012:3
                         MEMBER_ID: 00014419-2222-2222-2222-222222222222
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 12
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-7:1000003-1000006:2000003-2000006
    LAST_CONFLICT_FREE_TRANSACTION: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2000006
1 row in set (0.00 sec)

Then, we check the more classical replication status:

node1 [localhost] {msandbox} (performance_schema) > select * from replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
              SOURCE_UUID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee    # ----
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-7:1000003-1000006:2000003-2000006
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

There are a few things that strike the observer immediately:

  • As we saw in the single-primary topology, all transactions bear the UUID of the group, not of the server that generated them. While in single-primary mode this could be considered an asset, as it simplifies a failover procedure, in multi-primary mode I consider it a loss: we lose the knowledge of the transaction provenance. As you can see, the SOURCE_UUID field shows the group ID instead of the node's.
  • The GTID numbers look odd. There is a set that starts at 1, another set that starts at 1 million, and a third one that starts at 2 million. What's going on? The answer is in the value of group_replication_gtid_assignment_block_size, which determines the block of values for each node (we can verify it directly, as shown in the query below). When the values in the block are exhausted, the node allocates another block. Someone could naively think that we could use these blocks to identify which node the data comes from, but this would ultimately be wrong, for two reasons:
    • The blocks are assigned on a first-come-first-served basis. If we start operations in node 2, its transactions will bear the lowest numbers.
    • When the blocks are exhausted, the node starts a new block, meaning that with a busy cluster we will have a hard time identifying which node uses which block.
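
For reference, we can check the block size on any node; the default, shown here, is one million, which matches the offsets seen above:

node1 [localhost] {msandbox} ((none)) > select @@group_replication_gtid_assignment_block_size;
+------------------------------------------------+
| @@group_replication_gtid_assignment_block_size |
+------------------------------------------------+
|                                        1000000 |
+------------------------------------------------+
1 row in set (0.00 sec)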

If someone thought that we could get some more information from the replication tables in mysql, they are in for a disappointment:

node2 [localhost] {msandbox} (mysql) > select * from slave_master_info\G
*************************** 1. row ***************************
       Number_of_lines: 25
       Master_log_name:
        Master_log_pos: 4
                  Host: <NULL>            # ----
             User_name:
         User_password:
                  Port: 0                 # ----
         Connect_retry: 60
           Enabled_ssl: 0
Ssl_verify_server_cert: 0
             Heartbeat: 30
                  Bind:
    Ignored_server_ids: 0
                  Uuid:                   # ----
           Retry_count: 86400
 Enabled_auto_position: 1
          Channel_name: group_replication_applier
           Tls_version:
*************************** 2. row ***************************
       Number_of_lines: 25
       Master_log_name:
        Master_log_pos: 4
                  Host: <NULL>
             User_name: rsandbox
         User_password: rsandbox
                  Port: 0
         Connect_retry: 60
           Enabled_ssl: 0
Ssl_verify_server_cert: 0
             Heartbeat: 30
                  Bind:
    Ignored_server_ids: 0
                  Uuid:
           Retry_count: 1
 Enabled_auto_position: 1
          Channel_name: group_replication_recovery
           Tls_version:
2 rows in set (0.00 sec)

The table shows group operations rather than individual host connections. There is no origin information here.

Looking at the events, we will notice immediately some more differences.

node2 [localhost] {msandbox} (mysql) > show binlog events;
+------------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------+
| Log_name         | Pos  | Event_type     | Server_id | End_log_pos | Info                                                                    |
+------------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------+
| mysql-bin.000001 |    4 | Format_desc    |       102 |         123 | Server ver: 5.7.17-log, Binlog ver: 4                                   |
| mysql-bin.000001 |  123 | Previous_gtids |       102 |         150 |                                                                         |
| mysql-bin.000001 |  150 | Gtid           |       101 |         211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1'       |
| mysql-bin.000001 |  211 | Query          |       101 |         270 | BEGIN                                                                   |
| mysql-bin.000001 |  270 | View_change    |       101 |         369 | view_id=14850806532423012:1                                             |
| mysql-bin.000001 |  369 | Query          |       101 |         434 | COMMIT                                                                  |
| mysql-bin.000001 |  434 | Gtid           |       101 |         495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2'       |
| mysql-bin.000001 |  495 | Query          |       101 |         554 | BEGIN                                                                   |
| mysql-bin.000001 |  554 | View_change    |       101 |         693 | view_id=14850806532423012:2                                             |
| mysql-bin.000001 |  693 | Query          |       101 |         758 | COMMIT                                                                  |
| mysql-bin.000001 |  758 | Gtid           |       102 |         819 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:3'       |
| mysql-bin.000001 |  819 | Query          |       102 |         878 | BEGIN                                                                   |
| mysql-bin.000001 |  878 | View_change    |       102 |        1017 | view_id=14850806532423012:3                                             |
| mysql-bin.000001 | 1017 | Query          |       102 |        1082 | COMMIT                                                                  |
| mysql-bin.000001 | 1082 | Gtid           |       101 |        1143 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:4'       |
| mysql-bin.000001 | 1143 | Query          |       101 |        1250 | create schema if not exists test                                        |
| mysql-bin.000001 | 1250 | Gtid           |       101 |        1311 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:5'       |
| mysql-bin.000001 | 1311 | Query          |       101 |        1440 | DROP TABLE IF EXISTS `test`.`t1` /* generated by server */              |
| mysql-bin.000001 | 1440 | Gtid           |       101 |        1501 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:6'       |
| mysql-bin.000001 | 1501 | Query          |       101 |        1630 | create table test.t1(id int not null primary key, sid int)              |
| mysql-bin.000001 | 1630 | Gtid           |       101 |        1691 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:7'       |
| mysql-bin.000001 | 1691 | Query          |       101 |        1755 | BEGIN                                                                   |
| mysql-bin.000001 | 1755 | Table_map      |       101 |        1797 | table_id: 219 (test.t1)                                                 |
| mysql-bin.000001 | 1797 | Write_rows     |       101 |        1837 | table_id: 219 flags: STMT_END_F                                         |
| mysql-bin.000001 | 1837 | Xid            |       101 |        1864 | COMMIT /* xid=51 */                                                     |
| mysql-bin.000001 | 1864 | Gtid           |       102 |        1925 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1000003' |
| mysql-bin.000001 | 1925 | Query          |       102 |        2032 | create schema if not exists test                                        |
| mysql-bin.000001 | 2032 | Gtid           |       102 |        2093 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1000004' |
| mysql-bin.000001 | 2093 | Query          |       102 |        2222 | DROP TABLE IF EXISTS `test`.`t2` /* generated by server */              |
| mysql-bin.000001 | 2222 | Gtid           |       102 |        2283 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1000005' |
| mysql-bin.000001 | 2283 | Query          |       102 |        2412 | create table test.t2(id int not null primary key, sid int)              |
| mysql-bin.000001 | 2412 | Gtid           |       102 |        2473 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1000006' |
| mysql-bin.000001 | 2473 | Query          |       102 |        2542 | BEGIN                                                                   |
| mysql-bin.000001 | 2542 | Table_map      |       102 |        2584 | table_id: 220 (test.t2)                                                 |
| mysql-bin.000001 | 2584 | Write_rows     |       102 |        2624 | table_id: 220 flags: STMT_END_F                                         |
| mysql-bin.000001 | 2624 | Xid            |       102 |        2651 | COMMIT /* xid=62 */                                                     |
| mysql-bin.000001 | 2651 | Gtid           |       103 |        2712 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2000003' |
| mysql-bin.000001 | 2712 | Query          |       103 |        2819 | create schema if not exists test                                        |
| mysql-bin.000001 | 2819 | Gtid           |       103 |        2880 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2000004' |
| mysql-bin.000001 | 2880 | Query          |       103 |        3009 | DROP TABLE IF EXISTS `test`.`t3` /* generated by server */              |
| mysql-bin.000001 | 3009 | Gtid           |       103 |        3070 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2000005' |
| mysql-bin.000001 | 3070 | Query          |       103 |        3199 | create table test.t3(id int not null primary key, sid int)              |
| mysql-bin.000001 | 3199 | Gtid           |       103 |        3260 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2000006' |
| mysql-bin.000001 | 3260 | Query          |       103 |        3324 | BEGIN                                                                   |
| mysql-bin.000001 | 3324 | Table_map      |       103 |        3366 | table_id: 221 (test.t3)                                                 |
| mysql-bin.000001 | 3366 | Write_rows     |       103 |        3406 | table_id: 221 flags: STMT_END_F                                         |
| mysql-bin.000001 | 3406 | Xid            |       103 |        3433 | COMMIT /* xid=68 */                                                     |
+------------------+------+----------------+-----------+-------------+-------------------------------------------------------------------------+
47 rows in set (0.00 sec)

Two important points:

  • All transaction IDs are assigned to the group, not to the node. The only way to see where the data is coming from is to look at the binary log itself and check the good old server-id (a rough way to do this in bulk is sketched after the excerpt below). One wonders why we have come all this way with the ugly UUIDs in the global transaction identifier, only to maim their usefulness by removing one of their most important features: tracking the data origin.

For example:

# at 434
#170122 11:24:11 server id 101  end_log_pos 495         GTID    last_committed=1        sequence_number=2
SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2'/*!*/;
# at 495
#170122 11:24:11 server id 101  end_log_pos 554         Query   thread_id=7     exec_time=6     error_code=0
SET TIMESTAMP=1485080651/*!*/;
BEGIN
/*!*/;
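
A rough way to recover the provenance in bulk is to parse the binary log and tally events by server id — a sketch, assuming mysqlbinlog is in the PATH and using the sandbox paths created above:

# count binlog events per originating server id
cd $HOME/sandboxes/GR
mysqlbinlog node2/data/mysql-bin.000001 | grep -o 'server id [0-9]*' | sort | uniq -c
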
  • Because log-slave-updates is mandatory, the binary log in every node will have all the transactions of every other node. This can have disagreeable side effects when dealing with large data. Here is an example where we load the sample employees database from node #1:

With Group Replication, the load takes 2 minutes and 16 seconds, and the binary logs have the same size in every node.

[GR]$ ls -lh node?/data/*bin*
-rw-r-----  1 gmax  staff   8.2K Jan 22 10:22 node1/data/mysql-bin.000001
-rw-r-----  1 gmax  staff    63M Jan 22 10:24 node1/data/mysql-bin.000002
-rw-r-----  1 gmax  staff    38B Jan 22 10:22 node1/data/mysql-bin.index

-rw-r-----  1 gmax  staff    63M Jan 22 10:24 node2/data/mysql-bin.000001
-rw-r-----  1 gmax  staff    19B Jan 22 10:12 node2/data/mysql-bin.index

-rw-r-----  1 gmax  staff    63M Jan 22 10:24 node3/data/mysql-bin.000001
-rw-r-----  1 gmax  staff    19B Jan 22 10:12 node3/data/mysql-bin.index

The same operation in multi-source replication takes 1 minute and 30 seconds. The binary logs are kept only in the origin.

[MS]$ ls -lh node?/data/*bin*
-rw-r-----  1 gmax  staff   4.9K Jan 22 10:26 node1/data/mysql-bin.000001
-rw-r-----  1 gmax  staff    63M Jan 22 10:27 node1/data/mysql-bin.000002
-rw-r-----  1 gmax  staff    38B Jan 22 10:26 node1/data/mysql-bin.index

-rw-r-----  1 gmax  staff   1.4K Jan 22 10:14 node2/data/mysql-bin.000001
-rw-r-----  1 gmax  staff    19B Jan 22 10:14 node2/data/mysql-bin.index

-rw-r-----  1 gmax  staff   1.4K Jan 22 10:14 node3/data/mysql-bin.000001
-rw-r-----  1 gmax  staff    19B Jan 22 10:14 node3/data/mysql-bin.index

Conflict resolution

One of the strong points of MGR is conflict resolution.

We can try conflicting operations in two nodes, inserting the same data at the same time:

use test;
set autocommit=0;
insert into t2 values (3, @@server_id);
commit;

In multi-source, we get a replication error on both nodes. It's an ugly result, but it tells the user immediately that something went wrong in a given node, and it doesn't let the error propagate to the other nodes.
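
The failure shows up in the status of the corresponding channel on each node — abridged, and the exact message may differ:

node1 [localhost] {msandbox} (test) > SHOW SLAVE STATUS FOR CHANNEL 'node2'\G
*************************** 1. row ***************************
            Slave_SQL_Running: No
               Last_SQL_Errno: 1062
               Last_SQL_Error: Could not execute Write_rows event on table
test.t2; Duplicate entry '3' for key 'PRIMARY' [...]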

In MGR, the situation varies. This is a possible outcome:

node1 [localhost] {msandbox} (test) > set autocommit=0;                        |   node2 [localhost] {msandbox} (test) > set autocommit=0;
Query OK, 0 rows affected (0.00 sec)                                           |   Query OK, 0 rows affected (0.00 sec)
                                                                               |
node1 [localhost] {msandbox} (test) > insert into t2 values (3, @@server_id);  |   node2 [localhost] {msandbox} (test) > insert into t2 values (3, @@server_id);
Query OK, 1 row affected (0.00 sec)                                            |   Query OK, 1 row affected (0.00 sec)
                                                                               |
node1 [localhost] {msandbox} (test) > select * from t2;                        |   node2 [localhost] {msandbox} (test) > select * from t2;
+----+------+                                                                  |   +----+------+
| id | sid  |                                                                  |   | id | sid  |
+----+------+                                                                  |   +----+------+
|  2 |  102 |                                                                  |   |  2 |  102 |
|  3 |  101 |                                                                  |   |  3 |  102 |
+----+------+                                                                  |   +----+------+
2 rows in set (0.00 sec)                                                       |   2 rows in set (0.00 sec)
                                                                               |
node1 [localhost] {msandbox} (test) > commit;                                  |   node2 [localhost] {msandbox} (test) > commit;
Query OK, 0 rows affected (0.01 sec)                                           |   ERROR 3101 (HY000): Plugin instructed the server to rollback the current transaction.
                                                                               |   node2 [localhost] {msandbox} (test) > select * from t2;
 node1 [localhost] {msandbox} (test) > select * from t2;                       |   +----+------+
 +----+------+                                                                 |   | id | sid  |
 | id | sid  |                                                                 |   +----+------+
 +----+------+                                                                 |   |  2 |  102 |
 |  2 |  102 |                                                                 |   |  3 |  101 |
 |  3 |  101 |                                                                 |   +----+------+
 +----+------+                                                                 |   2 rows in set (0.00 sec)
 2 rows in set (0.00 sec)                                                      |

Here node 2 got the transaction a fraction of a second later, and its transaction was rolled back. Thus the transaction that was ultimately kept in the database was the one from node1 (server-id 101.) However, this behavior is not predictable. If we try the same operation again, we get a different outcome:

node1 [localhost] {msandbox} (test) > insert into t2 values (4, @@server_id);  |   node2 [localhost] {msandbox} (test) > insert into t2 values (4, @@server_id);
Query OK, 1 row affected (0.00 sec)                                            |   Query OK, 1 row affected (0.00 sec)
                                                                               |
node1 [localhost] {msandbox} (test) > select * from t2;                        |   node2 [localhost] {msandbox} (test) > select * from t2;
+----+------+                                                                  |   +----+------+
| id | sid  |                                                                  |   | id | sid  |
+----+------+                                                                  |   +----+------+
|  2 |  102 |                                                                  |   |  2 |  102 |
|  3 |  101 |                                                                  |   |  3 |  101 |
|  4 |  101 |                                                                  |   |  4 |  102 |
+----+------+                                                                  |   +----+------+
3 rows in set (0.00 sec)                                                       |   3 rows in set (0.00 sec)
                                                                               |
node1 [localhost] {msandbox} (test) > commit;                                  |   node2 [localhost] {msandbox} (test) > commit;
Query OK, 0 rows affected (0.01 sec)                                           |
ERROR 3101 (HY000): Plugin instructed the server to rollback                   |
the current transaction.                                                       |
node1 [localhost] {msandbox} (test) > select * from t2;                        |   node2 [localhost] {msandbox} (test) > select * from t2;
+----+------+                                                                  |   +----+------+
| id | sid  |                                                                  |   | id | sid  |
+----+------+                                                                  |   +----+------+
|  2 |  102 |                                                                  |   |  2 |  102 |
|  3 |  101 |                                                                  |   |  3 |  101 |
|  4 |  102 |                                                                  |   |  4 |  102 |
+----+------+                                                                  |   +----+------+
 3 rows in set (0.00 sec)                                                      |   3 rows in set (0.00 sec)

In the second attempt, the transaction was rolled back by node 1, and the surviving one is the one that was inserted from node 2. This means that conflict resolution works, but it may not be what the user wants, as the outcome of the conflict is aleatory.

Summing up

On the plus side, MGR keeps what it promises. We can set up a cluster of peer nodes and replicate data between nodes with some advantages compared to older multi-source topologies.

On the minus side, the documentation could be vastly improved, especially for multi-primary setups. Moreover, users need to be aware of the limitations, such as the SERIALIZABLE isolation level and cascading foreign key constraints not being supported in multi-primary mode.

Most important from my standpoint is the reduction of monitoring information for this technology, namely the loss of information about the data origin.

Sunday, January 15, 2017

MySQL group replication: installation with Docker

Overview

MySQL Group Replication was released as GA with MySQL 5.7.17. It is essentially a plugin that, when enabled, allows users to set up replication in this new way.

There has been some confusion about the stability and usability of this release. Until recently, MySQL Group Replication (MGR) was only available in the Labs, which traditionally denotes a preview or a use-at-your-own-risk feature. Several months ago we saw the release of Group Replication as a Docker image, which allowed users to deploy a peer-to-peer cluster (every node is a master.) However, about one month after that release, word came from Oracle discouraging this setup, and inviting users to run Group Replication in single-primary mode, which is functionally equivalent to traditional replication, with just one more synchronous component. There hasn't been an update of MGR for Docker since.

BTW, some more confusion came from the use of "synchronous replication" to refer to Group Replication operations. In reality, what in many presentations was called synchronous replication is only a synchronous transfer of binary log data. The replication itself, i.e. the operation that makes a node able to retrieve the data inserted in the master, is completed asynchronously. Therefore, if you looked at MGR as a way of using multiple masters without conflicts, this is not the solution.

What we have is a way of replicating from a node that is the primary in the group, with some features designed to facilitate high-availability solutions. And all eyes are on the next product, based on MGR, named MySQL InnoDB Cluster, which is MGR + a hormone-pumped MySQL Shell (released with the same version number 1.0.5 in two different packages), and MySQL-Router.

MGR has several limitations, mostly related to multi-primary mode.

Another thing that users should know is that the performance of MGR is inferior to that of asynchronous replication, even in Single-Primary mode. As an example, loading the test employees database takes 92 seconds in MGR, against 49 seconds in asynchronous replication (same O.S., same MySQL version, same server setup.)

Installing MySQL Group Replication

One of the biggest issues with MGR has been the quality of its documentation, which for a while was simply absent. What we have now is a set of instructions that refers to installing group replication with three nodes on the same host. You know, sandboxes, although without the benefit of using a tool to simplify operations. It's just three servers on the same host, and you drive with a stick shift.

What we'll see in this post is how to set up group replication using three servers in Docker. The advantage of this approach is that the servers look and feel like real ones. Since the instructions assume that you are only playing with sandboxes (an odd assumption for a GA product), we lack the instructions for a real-world setup. The closest thing to a useful manual is the tutorial given by Frédéric Descamps and Kenny Gryp at Percona Live Amsterdam in October. The instructions, however, are muddled by the fact that they were using the still unreliable InnoDB Cluster instead of bare-bones Group Replication. What follows is my own expansion of the sandboxed rules as applied to distinct servers.

The environment:

I am using Docker 1.12.6 on Linux, and the image for mysql/mysql-server:5.7.17. I deploy three containers, with a customized my.cnf containing the bare minimum options to run Group Replication. Here's the template for the configuration files:

$ cat my-template.cnf
[mysqld]
user=mysql
server_id=_SERVER_ID_
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=mysql-bin
relay-log=relay
binlog_format=ROW
log-error=mysqld.err

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "172.19.0._IP_END_:6606"
loose-group_replication_group_seeds= "172.19.0.2:6606,172.19.0.3:6606,172.19.0.4:6606"
loose-group_replication_ip_whitelist="172.19.0.2,172.19.0.3,172.19.0.4,127.0.0.1"
loose-group_replication_bootstrap_group= off

Here I take a shortcut. Recent versions of Docker assign a predictable IP address to new containers. To make sure I get the right IPs, I use a private network to deploy the containers. In a perfect world, I should use the container names for this purpose, but the manual lacks the instructions to set up the cluster progressively. For now, this method requires full knowledge about the IPs of the nodes, and I play along with what I have.
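
Once the containers are up (using the script below), we can double-check which address each one actually received on the custom network — a quick, unrefined filter on the JSON output:

$ docker network inspect group1 | grep -E '"Name"|"IPv4Address"'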

This is the deployment script:

#!/bin/bash
exists_net=$(docker network ls | grep -w group1 )
if [ -z "$exists_net" ]
then
    docker network create group1
fi
docker network ls

for node in 1 2 3
do
    export SERVERID=$node
    export IPEND=$(($SERVERID+1))
    perl -pe 's/_SERVER_ID_/$ENV{SERVERID}/;s/_IP_END_/$ENV{IPEND}/' my-template.cnf > my${node}.cnf
    datadir=ddnode${node}
    if [ ! -d $datadir ]
    then
        mkdir $datadir
    fi
    unset SERVERID
    docker run -d --name=node$node --net=group1 --hostname=node$node \
        -v $PWD/my${node}.cnf:/etc/my.cnf \
        -v $PWD/data:/data \
        -v $PWD/$datadir:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=secret \
        mysql/mysql-server:5.7.17

    ip=$(docker inspect --format '{{ .NetworkSettings.Networks.group1.IPAddress}}' node${node})
    echo "${node} $ip"
done

This script deploys three nodes, called node1, node2, and node3. For each one, the template is modified to use a different server ID. They use an external data directory created in the current directory (see Customizing MySQL in Docker for more details on this technique.) Moreover, each node can access the folder /data, which contains this set of SQL commands:

reset master;
SET SQL_LOG_BIN=0;
CREATE USER rpl_user@'%';
GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%' IDENTIFIED BY 'rpl_pass';
SET SQL_LOG_BIN=1;
CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
INSTALL PLUGIN group_replication SONAME 'group_replication.so';

Operations

After deploying the containers using the above script, I wait a few seconds to give the servers time to be ready. I can peek at the error logs, which are in the directories ddnode1, ddnode2, and ddnode3, as defined in the installation command. Then I run the SQL code:

$ for N in 1 2 3; do docker exec -ti node$N bash -c 'mysql -psecret < /data/user.sql' ; done
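
Before bootstrapping, a quick check (not strictly necessary) confirms that the plugin is active on every node:

# confirm that group_replication shows up as ACTIVE on each node
$ for N in 1 2 3; do docker exec -ti node$N mysql -psecret \
    -e "select plugin_name, plugin_status from information_schema.plugins where plugin_name='group_replication'"; done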

At this stage, the plugin is installed in all three nodes. I can start the cluster:

$ docker exec -ti node1 mysql -psecret
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.17-log MySQL Community Server (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (1.14 sec)

mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | ecba1582-db68-11e6-a492-0242ac130002 | node1       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)

The above operations have started the replication with the bootstrap, an operation that must be executed only once, and that defines the primary node.
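
Since the configuration does not enable multi-primary mode, the group runs as single-primary, and we can ask which member is the primary; the query below should return node1's UUID (group_replication_primary_member is a status variable available in 5.7):

mysql> SELECT VARIABLE_VALUE FROM performance_schema.global_status
    ->     WHERE VARIABLE_NAME = 'group_replication_primary_member';
+--------------------------------------+
| VARIABLE_VALUE                       |
+--------------------------------------+
| ecba1582-db68-11e6-a492-0242ac130002 |
+--------------------------------------+
1 row in set (0.00 sec)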

After setting the replication, I can enter some data, and then see what happens in the other nodes:

mysql> create schema test;
Query OK, 1 row affected (0.01 sec)

mysql> use test
Database changed
mysql> create table t1 (id int not null primary key, msg varchar(20));
Query OK, 0 rows affected (0.06 sec)

mysql> insert into t1 values (1, 'hello from node1');
Query OK, 1 row affected (0.01 sec)

mysql> show binlog events;
+------------------+------+----------------+-----------+-------------+----------------------------------------------------------------------------+
| Log_name         | Pos  | Event_type     | Server_id | End_log_pos | Info                                                                       |
+------------------+------+----------------+-----------+-------------+----------------------------------------------------------------------------+
| mysql-bin.000001 |    4 | Format_desc    |         1 |         123 | Server ver: 5.7.17-log, Binlog ver: 4                                      |
| mysql-bin.000001 |  123 | Previous_gtids |         1 |         150 |                                                                            |
| mysql-bin.000001 |  150 | Gtid           |         1 |         211 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1'          |
| mysql-bin.000001 |  211 | Query          |         1 |         270 | BEGIN                                                                      |
| mysql-bin.000001 |  270 | View_change    |         1 |         369 | view_id=14845163185775300:1                                                |
| mysql-bin.000001 |  369 | Query          |         1 |         434 | COMMIT                                                                     |
| mysql-bin.000001 |  434 | Gtid           |         1 |         495 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:2'          |
| mysql-bin.000001 |  495 | Query          |         1 |         554 | BEGIN                                                                      |
| mysql-bin.000001 |  554 | View_change    |         1 |         693 | view_id=14845163185775300:2                                                |
| mysql-bin.000001 |  693 | Query          |         1 |         758 | COMMIT                                                                     |
| mysql-bin.000001 |  758 | Gtid           |         1 |         819 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:3'          |
| mysql-bin.000001 |  819 | Query          |         1 |         912 | create schema test                                                         |
| mysql-bin.000001 |  912 | Gtid           |         1 |         973 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:4'          |
| mysql-bin.000001 |  973 | Query          |         1 |        1110 | use `test`; create table t1 (id int not null primary key, msg varchar(20)) |
| mysql-bin.000001 | 1110 | Gtid           |         1 |        1171 | SET @@SESSION.GTID_NEXT= 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:5'          |
| mysql-bin.000001 | 1171 | Query          |         1 |        1244 | BEGIN                                                                      |
| mysql-bin.000001 | 1244 | Table_map      |         1 |        1288 | table_id: 219 (test.t1)                                                    |
| mysql-bin.000001 | 1288 | Write_rows     |         1 |        1341 | table_id: 219 flags: STMT_END_F                                            |
| mysql-bin.000001 | 1341 | Xid            |         1 |        1368 | COMMIT /* xid=144 */                                                       |
+------------------+------+----------------+-----------+-------------+----------------------------------------------------------------------------+
19 rows in set (0.00 sec)

The binary log events show that we are replicating using the ID of the group, instead of the ID of the single server.

In the other two nodes I run the operation a bit differently:

$ docker exec -ti node2 mysql -psecret
mysql> select * from performance_schema.global_variables where variable_name in ('read_only', 'super_read_only');
+-----------------+----------------+
| VARIABLE_NAME   | VARIABLE_VALUE |
+-----------------+----------------+
| read_only       | OFF            |
| super_read_only | OFF            |
+-----------------+----------------+
2 rows in set (0.01 sec)

mysql>  START GROUP_REPLICATION;
Query OK, 0 rows affected (5.62 sec)

mysql> select * from performance_schema.global_variables where variable_name in ('read_only', 'super_read_only');
+-----------------+----------------+
| VARIABLE_NAME   | VARIABLE_VALUE |
+-----------------+----------------+
| read_only       | ON             |
| super_read_only | ON             |
+-----------------+----------------+
2 rows in set (0.01 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | ecba1582-db68-11e6-a492-0242ac130002 | node1       |        3306 | ONLINE       |
| group_replication_applier | ecf2eae5-db68-11e6-a492-0242ac130003 | node2       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.01 sec)

Now the cluster has two nodes, and I've seen that joining nodes are automatically set to read-only. I can repeat the same operation in the third one.

$ docker exec -ti node3 mysql -psecret
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (2.35 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | ecba1582-db68-11e6-a492-0242ac130002 | node1       |        3306 | ONLINE       |
| group_replication_applier | ecf2eae5-db68-11e6-a492-0242ac130003 | node2       |        3306 | ONLINE       |
| group_replication_applier | ed259dfc-db68-11e6-a4a6-0242ac130004 | node3       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

What about the data? It's been replicated:

mysql> show schemas;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+
5 rows in set (0.00 sec)

mysql> show tables from test;
+----------------+
| Tables_in_test |
+----------------+
| t1             |
+----------------+
1 row in set (0.01 sec)

Monitoring

In this flavor of replication there is no SHOW SLAVE STATUS. Everything I've got is in performance_schema tables and in mysql.slave_master_info and mysql.slave_relay_log_info, and sadly it is not a lot.

mysql> select * from replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14845163185775300:3
                         MEMBER_ID: ecba1582-db68-11e6-a492-0242ac130002
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 3
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-6
    LAST_CONFLICT_FREE_TRANSACTION: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:5
1 row in set (0.00 sec)


mysql> select * from replication_connection_status\G
*************************** 1. row ***************************
             CHANNEL_NAME: group_replication_recovery
               GROUP_NAME:
              SOURCE_UUID:
                THREAD_ID: NULL
            SERVICE_STATE: OFF
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET:
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
             CHANNEL_NAME: group_replication_applier
               GROUP_NAME: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
              SOURCE_UUID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
                THREAD_ID: NULL
            SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
 RECEIVED_TRANSACTION_SET: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-6
        LAST_ERROR_NUMBER: 0
       LAST_ERROR_MESSAGE:
     LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
2 rows in set (0.00 sec)

Compared to regular replication, we lose the ID of the node where the data originated. Instead, we get the ID of the replication group (which we set in the configuration file). This makes it smoother to replace the primary node (a.k.a. the master) with another node, but we have lost valuable information that could have been added to the output rather than simply replaced. Another valuable piece of information that is missing is the set of transactions that were executed (we only see RECEIVED_TRANSACTION_SET). As in regular replication, we can get this information with "SHOW MASTER STATUS" or "SELECT @@global.gtid_executed", but, as mentioned in improving the design of MySQL replication, there are several flaws in this paradigm. What we see in MGR is a reduction of replication monitoring data, when we would have expected an improvement, given the complexity of the operations in this new technology.
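For example, to know which transactions a node has executed, we still have to fall back on the global GTID variable. A minimal check (the GTID set below is the one from the session above; it will differ in other deployments):

mysql> SELECT @@global.gtid_executed;
+------------------------------------------+
| @@global.gtid_executed                   |
+------------------------------------------+
| aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee:1-6 |
+------------------------------------------+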

Summing up

MySQL Group Replication is an interesting technology. If we consider it as one component of a high availability framework (which will be complete when the InnoDB Cluster is released), it might improve the workflow of many database users.

As it is now, however, it gives the impression of being a rushed piece of software that does not offer any noticeable advantage to users, especially considering that the documentation released with it is far below the standard of other MySQL products.

Thursday, November 03, 2016

MySQL-Sandbox 3.2.03 with customized initialization

MySQL-Sandbox installs the MySQL server in isolation, by rejecting existing option files using the option --no-defaults. This is usually a good thing, because you don't want the initialization to be influenced by options in your /etc/my.cnf or other options files in default positions.

However, such isolation is also a problem when you need to add options during the initialization. One example is innodb-page-size, which can be set to many values, but only if the server was initialized accordingly. Thus, you can't simply set innodb-page-size=64K in your configuration file after a default initialization: the server would fail to start, as InnoDB would detect a conflict with the page size used when the data files were created.


MySQL-Sandbox 3.2.03 introduces three options that allow flexibility during initialization.

  • --init_option='some options' adds 'some options' to the initialization command.
  • --init_my_cnf loads the sandbox configuration file during initialization. This is simple, but it may sometimes cause initialization issues, depending on what else is in the options file.
  • --init_use_cnf allows you to define a custom configuration file, which will be used during initialization.

The following three examples will all produce the desired result, i.e. install MySQL with a custom innodb-page-size of 64K.

make_sandbox 5.7.16 -- -c innodb-page-size=64K --init_option='--innodb-page-size=64K'

make_sandbox 5.7.16 -- -c innodb-page-size=64K --init_my_cnf

cat /tmp/my.cnf
[mysqld]
innodb-page-size=64K

make_sandbox 5.7.16 -- -c innodb-page-size=64K --init_use_cnf=/tmp/my.cnf
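If everything went well, the server should now report the custom page size. A quick check (the sandbox path below assumes the default directory naming used by MySQL-Sandbox):

~/sandboxes/msb_5_7_16/use -BN -e 'SELECT @@innodb_page_size'
# expected output: 65536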

Tuesday, October 11, 2016

OTN appreciation day : MySQL 8.0 data dictionary

About one month ago, the MySQL team at Oracle released MySQL 8.0, with a large list of changes. One of the most interesting features in the new release is also one of the least visible, partly because the team has gone to great lengths to keep most of its implementation hidden: the data dictionary.


What makes the data dictionary so interesting, despite its low visibility, is the effect it has on performance. Up to MySQL 5.7, searching the information_schema was an onerous operation, potentially crippling the system. In MySQL 8.0, the same operations are 100 times faster. This would be reason enough to be excited, as I know many users who have had terrible problems with the inefficiency of information_schema.
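For instance, a query like the following, which on a server with thousands of tables could take a very long time in 5.7, now runs at dictionary speed (an illustrative example: timings depend on the number of schemas and tables):

SELECT table_schema, COUNT(*) AS how_many, SUM(data_length + index_length) AS total_bytes
FROM information_schema.tables
GROUP BY table_schema;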

But there are several other visible changes: the various files that were needed to identify database objects (.frm for tables, .trg for triggers, .par for partitions) are now gone. When the database server wants metadata about such objects, it doesn't have to open files anymore: all the information is stored in the hidden InnoDB tables of the data dictionary. If the hidden tables bother you, you can use a not-so-difficult hack to discover them, although most operations can be performed without inner knowledge of how the dictionary is organized. The idea is that the public interface (the views in information_schema) should be considered reliable and used for future compatibility. Perhaps the hidden status will be lifted in one of the next releases: many in the community have given feedback in this direction.

What else is the data dictionary good for? It allows atomic DDL operations, which could not be guaranteed when dictionary data was stored in files and MyISAM tables. A nice complement to the data dictionary is the elimination of all MyISAM tables from the system: the grant tables are now stored in InnoDB and, probably for historical compatibility, were left visible.
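A quick way to verify the engine change (a sketch; any of the grant tables will do):

SELECT table_name, engine
FROM information_schema.tables
WHERE table_schema = 'mysql'
  AND table_name IN ('user', 'db', 'tables_priv');
-- all three rows should report InnoDB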

I am sure users will come up with other clever usages of the data dictionary once the system is more widely adopted and understood. For now, it's a wonderful toy to explore!


This post was suggested by Tim Hall, a well known community champion among Oracle users, who maintains a rich web site of news and free technical info. Following his suggestion, this post is a contribution to the OTN appreciation day, a distributed community effort to show something useful, or pleasant, or both, related to the Oracle world.


For those not familiar with it, the Oracle Technology Network (OTN) is the center of Oracle technology, the place where users can get all software (proprietary or open source) and other resources related to Oracle products. In the image below you may find several familiar names.

Monday, September 26, 2016

PerconaLive Amsterdam 2016 - Talks and shows

With Oracle Open World behind us, we are now getting ready for the next big event, i.e. the European edition of PerconaLive. I am going to be a presenter three times:

  • MySQL operations in Docker is a three-hour tutorial, and it will be an expansion of the talk by the same title presented at OOW. Attendees who want to play along can do so by coming prepared with Docker 1.11 or later and the following images already pulled (images with [+] are mandatory, while [-] are optional):

    • mysql/mysql-server [+]
    • mysql/mysql-gr [+]
    • mysql [-]
    • percona/percona-server [-]
    • mariadb [-]
    • mysql/shell [-]
    • datacharmer/mysql-minimal-5.5 [-]
    • datacharmer/mysql-minimal-5.6 [-]
    • datacharmer/mysql-minimal-5.7 [-]
    • datacharmer/mysql-minimal-8.0 [-]
    • datacharmer/my-ubuntu [-]
    • datacharmer/my-debian [-]
    • datacharmer/my-centos [-]
  • The fastest customized MySQL deployment ever is a presentation where I show two ways of deploying MySQL on a custom server, with MySQL Sandbox and Docker.

  • The lightning talks will be held during the Community dinner at Booking.com, diamond sponsor of the conference. If you want to attend, you need to register, and if you want a free ticket for that dinner, there is still ONE SLOT for the lightning talks. Contact me directly with a title and an abstract if you want to be considered for that slot (finding my email is part of the challenge, but it should not be that difficult).

UPDATE: here's the lightning talks program. Percona should eventually add it to the schedule.

  • 19:00: arrival at Booking.com by boat
  • 19:15: welcoming speech and beginning of the LT
  • 20:00: food buffet opens, LT are not finished yet
  • 20:30: LT are done, buffet still open
  • 21:15: buffet closes

The talks accepted are the following:

  • Jaime Crespo
    dbhell: a tiny Python framework for the administration and monitoring of farms of MySQL servers
  • Federico Razzoli
    How to write slow and buggy stored procedures
  • Art Van Scheppingen
    Simple household rules that keep MySQL running
  • Pavel Tru
    What internal statistics every self-respecting dbms should have!
  • Giuseppe Maxia
    Fastest, smallest, newest: download and deploy MySQL at high speed
  • Ronald Bradford
    An Awe-Inspiring Journey Through the World of Numbers

Sunday, September 25, 2016

Showing the hidden tables in MySQL 8 data dictionary

The freshly released MySQL 8.0 includes a data dictionary, which makes MySQL much more reliable. Thanks to this feature, we don't have any '.frm' files, and querying the information_schema is 30x to 100x faster than in previous versions.

One drawback of the implementation is that the data dictionary tables are hidden by design.

While the reason is fully understandable (they don't want to commit to an interface that may change in the future), many curious users are disappointed, because openness is the basis of good understanding and feedback.

The problem of accessing the dictionary tables can be split into three parts:

  • Finding the list of tables;
  • Finding the description of the tables;
  • Getting the contents of each table.

The first part is easily solved. We know that the data dictionary tables are accessed from some information_schema views (the views are defined during initialization, at which point the DD tables are readable).

For example:

 show create view information_schema.tables\G
*************************** 1. row ***************************
                View: TABLES
         Create View: CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost`
SQL SECURITY DEFINER VIEW `TABLES` AS select `cat`.`name` AS
`TABLE_CATALOG`,`sch`.`name` AS `TABLE_SCHEMA`,`tbl`.`name` AS
`TABLE_NAME`,`tbl`.`type` AS `TABLE_TYPE`,if((`tbl`.`type` = 'BASE
TABLE'),`tbl`.`engine`,NULL) AS `ENGINE`,if((`tbl`.`type` = 'VIEW'),NULL,10)
AS `VERSION`,`tbl`.`row_format` AS `ROW_FORMAT`,`stat`.`table_rows` AS
`TABLE_ROWS`,`stat`.`avg_row_length` AS `AVG_ROW_LENGTH`,`stat`.`data_length`
AS `DATA_LENGTH`,`stat`.`max_data_length` AS
`MAX_DATA_LENGTH`,`stat`.`index_length` AS `INDEX_LENGTH`,`stat`.`data_free`
AS `DATA_FREE`,`stat`.`auto_increment` AS `AUTO_INCREMENT`,`tbl`.`created` AS
`CREATE_TIME`,`stat`.`update_time` AS `UPDATE_TIME`,`stat`.`check_time` AS
`CHECK_TIME`,`col`.`name` AS `TABLE_COLLATION`,`stat`.`checksum` AS
`CHECKSUM`,if((`tbl`.`type` =
'VIEW'),NULL,get_dd_create_options(`tbl`.`options`,if((ifnull(`tbl`.`partition_expression`,'NOT_PART_TBL')
= 'NOT_PART_TBL'),0,1))) AS
`CREATE_OPTIONS`,internal_get_comment_or_error(`sch`.`name`,`tbl`.`name`,`tbl`.`type`,`tbl`.`options`,`tbl`.`comment`)
AS `TABLE_COMMENT` from ((((`mysql`.`tables` `tbl` join `mysql`.`schemata`
`sch` on((`tbl`.`schema_id` = `sch`.`id`))) join `mysql`.`catalogs` `cat`
on((`cat`.`id` = `sch`.`catalog_id`))) left join `mysql`.`collations` `col`
on((`tbl`.`collation_id` = `col`.`id`))) left join `mysql`.`table_stats`
`stat` on(((`tbl`.`name` = `stat`.`table_name`) and (`sch`.`name` =
`stat`.`schema_name`)))) where (can_access_table(`sch`.`name`,`tbl`.`name`)
and (not(`tbl`.`hidden`)))
character_set_client: latin1
collation_connection: latin1_swedish_ci
1 row in set (0.00 sec)

Here we see many tables (such as mysql.schemata or mysql.collations) that don't show up when we run SHOW TABLES in the mysql database.
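A quick check confirms that these tables are hidden from regular commands (the result should be an empty set):

mysql> show tables from mysql like 'schemata';
Empty set (0.00 sec)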

We can use a script like this to get the list of all tables:

# path to the sandbox's mysql client wrapper
MYSQL=~/sandboxes/msb_full_8_0_0/use
# all tables and views in information_schema
TABLES=$($MYSQL information_schema -BN -e 'show tables')

function show_tables
{
    for T in $TABLES
    do
        # only views contain the ALGORITHM keyword in their definition
        is_view=$($MYSQL information_schema -BN -e "show create table $T\G" | grep ALGORITHM)
        if [ -n "$is_view" ]
        then
            # extract every `mysql`.`table_name` referenced in the view body
            $MYSQL information_schema -e "show create table $T\G" \
               | perl -lne 'print $1 while /mysql.\..(\w+)/g'
        fi
    done
}
show_tables | sort | uniq

This script goes through all the information_schema tables, skips the ones that are not views, and then extracts from each view definition every table belonging to the mysql database. The result is this:

catalogs
character_sets
collations
columns
foreign_key_column_usage
foreign_keys
index_column_usage
indexes
index_stats
schemata
tables
table_stats

Good. Now we have the list of tables that we can't see. The second part is getting their descriptions.

So I looked at the source code, and I found where the prohibition originates. From there, I saw that the tables become accessible when skip_dd_table_access_check is set. Looking at the variables inside the server, I found no skip_dd_table_access_check, which was to be expected: it would not make sense to leave this switch in the open after going through the pains of making all DD tables unreachable.
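Indeed, a search among the system variables comes up empty, since skip_dd_table_access_check is a debug flag rather than a server variable:

mysql> show variables like '%skip_dd%';
Empty set (0.00 sec)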

I searched the code for the string skip_dd_table_access_check and I found out how it is used in the test suite. The key is using the debug build of the MySQL server.

Using MySQL Sandbox, with a sandbox made from the full tarball of MySQL 8.0, I ran:

 ~/sandboxes/msb_full_8_0_0/restart --mysqld=mysqld-debug

Now I have loaded the debug-enabled server. Let's try:

$ ~/sandboxes/msb_full_8_0_0/use mysql
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 91
Server version: 8.0.0-dmr-debug MySQL Community Server - Debug (GPL)

Copyright (c) 2000, 2016, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql [localhost] {msandbox} (mysql) > show create table schemata\G
ERROR 3554 (HY000): Access to system table 'mysql.schemata' is rejected.

Having the debug build is not enough. We need to use the magic spell.

mysql [localhost] {msandbox} (mysql) > SET SESSION debug='+d,skip_dd_table_access_check';
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (mysql) > show create table schemata\G
*************************** 1. row ***************************
       Table: schemata
Create Table: CREATE TABLE `schemata` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `catalog_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) COLLATE utf8_bin NOT NULL,
  `default_collation_id` bigint(20) unsigned NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `catalog_id` (`catalog_id`,`name`),
  KEY `default_collation_id` (`default_collation_id`),
  CONSTRAINT `schemata_ibfk_1` FOREIGN KEY (`catalog_id`) REFERENCES `catalogs` (`id`),
  CONSTRAINT `schemata_ibfk_2` FOREIGN KEY (`default_collation_id`) REFERENCES  `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
1 row in set (0.00 sec)

Yay! The table is now visible! We can modify the above script as follows:

# continuation of the previous script: $MYSQL and show_tables are defined above
mysql_tables=$(show_tables | sort | uniq)
for T in $mysql_tables
do
    echo "-- $T "
    $MYSQL -e "SET SESSION debug='+d,skip_dd_table_access_check'; show create table mysql.$T\G"
done

And we get the description of every table in the data dictionary. Here it goes:

-- catalogs
*************************** 1. row ***************************
       Table: catalogs
Create Table: CREATE TABLE `catalogs` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(64) COLLATE utf8_bin NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- character_sets
*************************** 1. row ***************************
       Table: character_sets
Create Table: CREATE TABLE `character_sets` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(64) CHARACTER SET utf8 NOT NULL,
  `default_collation_id` bigint(20) unsigned NOT NULL,
  `comment` varchar(2048) COLLATE utf8_bin NOT NULL,
  `mb_max_length` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`),
  KEY `default_collation_id` (`default_collation_id`),
  CONSTRAINT `character_sets_ibfk_1` FOREIGN KEY (`default_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=249 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- collations
*************************** 1. row ***************************
       Table: collations
Create Table: CREATE TABLE `collations` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(64) CHARACTER SET utf8 NOT NULL,
  `character_set_id` bigint(20) unsigned NOT NULL,
  `is_compiled` tinyint(1) NOT NULL,
  `sort_length` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`),
  KEY `character_set_id` (`character_set_id`),
  CONSTRAINT `collations_ibfk_1` FOREIGN KEY (`character_set_id`) REFERENCES `character_sets` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=278 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- columns
*************************** 1. row ***************************
       Table: columns
Create Table: CREATE TABLE `columns` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `table_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) CHARACTER SET utf8 COLLATE utf8_tolower_ci NOT NULL,
  `ordinal_position` int(10) unsigned NOT NULL,
  `type` enum(/* removed */) COLLATE utf8_bin NOT NULL,
  `is_nullable` tinyint(1) NOT NULL,
  `is_zerofill` tinyint(1) DEFAULT NULL,
  `is_unsigned` tinyint(1) DEFAULT NULL,
  `char_length` int(10) unsigned DEFAULT NULL,
  `numeric_precision` int(10) unsigned DEFAULT NULL,
  `numeric_scale` int(10) unsigned DEFAULT NULL,
  `datetime_precision` int(10) unsigned DEFAULT NULL,
  `collation_id` bigint(20) unsigned DEFAULT NULL,
  `has_no_default` tinyint(1) DEFAULT NULL,
  `default_value` blob,
  `default_value_utf8` text COLLATE utf8_bin,
  `default_option` blob,
  `update_option` varchar(32) COLLATE utf8_bin DEFAULT NULL,
  `is_auto_increment` tinyint(1) DEFAULT NULL,
  `is_virtual` tinyint(1) DEFAULT NULL,
  `generation_expression` longblob,
  `generation_expression_utf8` longtext COLLATE utf8_bin,
  `comment` varchar(2048) COLLATE utf8_bin NOT NULL,
  `hidden` tinyint(1) NOT NULL,
  `options` mediumtext COLLATE utf8_bin,
  `se_private_data` mediumtext COLLATE utf8_bin,
  `column_key` enum('','PRI','UNI','MUL') COLLATE utf8_bin NOT NULL,
  `column_type_utf8` mediumtext COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `table_id` (`table_id`,`name`),
  UNIQUE KEY `table_id_2` (`table_id`,`ordinal_position`),
  KEY `collation_id` (`collation_id`),
  CONSTRAINT `columns_ibfk_1` FOREIGN KEY (`table_id`) REFERENCES `tables` (`id`),
  CONSTRAINT `columns_ibfk_2` FOREIGN KEY (`collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3450 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- foreign_key_column_usage
*************************** 1. row ***************************
       Table: foreign_key_column_usage
Create Table: CREATE TABLE `foreign_key_column_usage` (
  `foreign_key_id` bigint(20) unsigned NOT NULL,
  `ordinal_position` int(10) unsigned NOT NULL,
  `column_id` bigint(20) unsigned NOT NULL,
  `referenced_column_name` varchar(64) CHARACTER SET utf8 COLLATE utf8_tolower_ci NOT NULL,
  PRIMARY KEY (`foreign_key_id`,`ordinal_position`),
  UNIQUE KEY `foreign_key_id` (`foreign_key_id`,`column_id`,`referenced_column_name`),
  KEY `column_id` (`column_id`),
  CONSTRAINT `foreign_key_column_usage_ibfk_1` FOREIGN KEY (`foreign_key_id`) REFERENCES `foreign_keys` (`id`),
  CONSTRAINT `foreign_key_column_usage_ibfk_2` FOREIGN KEY (`column_id`) REFERENCES `columns` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- foreign_keys
*************************** 1. row ***************************
       Table: foreign_keys
Create Table: CREATE TABLE `foreign_keys` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `schema_id` bigint(20) unsigned NOT NULL,
  `table_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) CHARACTER SET utf8 NOT NULL,
  `unique_constraint_id` bigint(20) unsigned NOT NULL,
  `match_option` enum('NONE','PARTIAL','FULL') COLLATE utf8_bin NOT NULL,
  `update_rule` enum('NO ACTION','RESTRICT','CASCADE','SET NULL','SET DEFAULT') COLLATE utf8_bin NOT NULL,
  `delete_rule` enum('NO ACTION','RESTRICT','CASCADE','SET NULL','SET DEFAULT') COLLATE utf8_bin NOT NULL,
  `referenced_table_catalog` varchar(64) COLLATE utf8_bin NOT NULL,
  `referenced_table_schema` varchar(64) COLLATE utf8_bin NOT NULL,
  `referenced_table_name` varchar(64) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `schema_id` (`schema_id`,`name`),
  UNIQUE KEY `table_id` (`table_id`,`name`),
  KEY `unique_constraint_id` (`unique_constraint_id`),
  CONSTRAINT `foreign_keys_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
  CONSTRAINT `foreign_keys_ibfk_2` FOREIGN KEY (`unique_constraint_id`) REFERENCES `indexes` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=46 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- index_column_usage
*************************** 1. row ***************************
       Table: index_column_usage
Create Table: CREATE TABLE `index_column_usage` (
  `index_id` bigint(20) unsigned NOT NULL,
  `ordinal_position` int(10) unsigned NOT NULL,
  `column_id` bigint(20) unsigned NOT NULL,
  `length` int(10) unsigned DEFAULT NULL,
  `order` enum('UNDEF','ASC','DESC') COLLATE utf8_bin NOT NULL,
  `hidden` tinyint(1) NOT NULL,
  UNIQUE KEY `index_id` (`index_id`,`ordinal_position`),
  UNIQUE KEY `index_id_2` (`index_id`,`column_id`,`hidden`),
  KEY `f2` (`column_id`),
  CONSTRAINT `index_column_usage_ibfk_1` FOREIGN KEY (`index_id`) REFERENCES `indexes` (`id`),
  CONSTRAINT `index_column_usage_ibfk_2` FOREIGN KEY (`column_id`) REFERENCES `columns` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- indexes
*************************** 1. row ***************************
       Table: indexes
Create Table: CREATE TABLE `indexes` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `table_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) CHARACTER SET utf8 COLLATE utf8_tolower_ci NOT NULL,
  `type` enum('PRIMARY','UNIQUE','MULTIPLE','FULLTEXT','SPATIAL') COLLATE utf8_bin NOT NULL,
  `algorithm` enum('SE_SPECIFIC','BTREE','RTREE','HASH','FULLTEXT') COLLATE utf8_bin NOT NULL,
  `is_algorithm_explicit` tinyint(1) NOT NULL,
  `is_visible` tinyint(1) NOT NULL,
  `is_generated` tinyint(1) NOT NULL,
  `hidden` tinyint(1) NOT NULL,
  `ordinal_position` int(10) unsigned NOT NULL,
  `comment` varchar(2048) COLLATE utf8_bin NOT NULL,
  `options` mediumtext COLLATE utf8_bin,
  `se_private_data` mediumtext COLLATE utf8_bin,
  `tablespace_id` bigint(20) unsigned DEFAULT NULL,
  `engine` varchar(64) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `table_id` (`table_id`,`name`),
  KEY `tablespace_id` (`tablespace_id`),
  CONSTRAINT `indexes_ibfk_1` FOREIGN KEY (`table_id`) REFERENCES `tables` (`id`),
  CONSTRAINT `indexes_ibfk_2` FOREIGN KEY (`tablespace_id`) REFERENCES `tablespaces` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=235 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- index_stats
*************************** 1. row ***************************
       Table: index_stats
Create Table: CREATE TABLE `index_stats` (
  `schema_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `table_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `index_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `column_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `cardinality` bigint(20) unsigned DEFAULT NULL,
  UNIQUE KEY `schema_name` (`schema_name`,`table_name`,`index_name`,`column_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- schemata
*************************** 1. row ***************************
       Table: schemata
Create Table: CREATE TABLE `schemata` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `catalog_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) COLLATE utf8_bin NOT NULL,
  `default_collation_id` bigint(20) unsigned NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `catalog_id` (`catalog_id`,`name`),
  KEY `default_collation_id` (`default_collation_id`),
  CONSTRAINT `schemata_ibfk_1` FOREIGN KEY (`catalog_id`) REFERENCES `catalogs` (`id`),
  CONSTRAINT `schemata_ibfk_2` FOREIGN KEY (`default_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- tables
*************************** 1. row ***************************
       Table: tables
Create Table: CREATE TABLE `tables` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `schema_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) COLLATE utf8_bin NOT NULL,
  `type` enum('BASE TABLE','VIEW','SYSTEM VIEW') COLLATE utf8_bin NOT NULL,
  `engine` varchar(64) CHARACTER SET utf8 NOT NULL,
  `mysql_version_id` int(10) unsigned NOT NULL,
  `row_format` enum('Fixed','Dynamic','Compressed','Redundant','Compact','Paged') COLLATE utf8_bin DEFAULT NULL,
  `collation_id` bigint(20) unsigned DEFAULT NULL,
  `comment` varchar(2048) COLLATE utf8_bin NOT NULL,
  `hidden` tinyint(1) NOT NULL,
  `options` mediumblob,
  `se_private_data` mediumtext COLLATE utf8_bin,
  `se_private_id` bigint(20) unsigned DEFAULT NULL,
  `tablespace_id` bigint(20) unsigned DEFAULT NULL,
  `partition_type` enum(/*removed*/) COLLATE utf8_bin DEFAULT NULL,
  `partition_expression` varchar(2048) COLLATE utf8_bin DEFAULT NULL,
  `default_partitioning` enum('NO','YES','NUMBER') COLLATE utf8_bin DEFAULT NULL,
  `subpartition_type` enum('HASH','KEY_51','KEY_55','LINEAR_HASH','LINEAR_KEY_51','LINEAR_KEY_55') COLLATE utf8_bin DEFAULT NULL,
  `subpartition_expression` varchar(2048) COLLATE utf8_bin DEFAULT NULL,
  `default_subpartitioning` enum('NO','YES','NUMBER') COLLATE utf8_bin DEFAULT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `view_definition` longblob,
  `view_definition_utf8` longtext COLLATE utf8_bin,
  `view_check_option` enum('NONE','LOCAL','CASCADED') COLLATE utf8_bin DEFAULT NULL,
  `view_is_updatable` enum('NO','YES') COLLATE utf8_bin DEFAULT NULL,
  `view_algorithm` enum('UNDEFINED','TEMPTABLE','MERGE') COLLATE utf8_bin DEFAULT NULL,
  `view_security_type` enum('DEFAULT','INVOKER','DEFINER') COLLATE utf8_bin DEFAULT NULL,
  `view_definer` varchar(93) COLLATE utf8_bin DEFAULT NULL,
  `view_client_collation_id` bigint(20) unsigned DEFAULT NULL,
  `view_connection_collation_id` bigint(20) unsigned DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `schema_id` (`schema_id`,`name`),
  UNIQUE KEY `engine` (`engine`,`se_private_id`),
  KEY `engine_2` (`engine`),
  KEY `collation_id` (`collation_id`),
  KEY `tablespace_id` (`tablespace_id`),
  CONSTRAINT `tables_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
  CONSTRAINT `tables_ibfk_2` FOREIGN KEY (`collation_id`) REFERENCES `collations` (`id`),
  CONSTRAINT `tables_ibfk_3` FOREIGN KEY (`tablespace_id`) REFERENCES `tablespaces` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=322 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
-- table_stats
*************************** 1. row ***************************
       Table: table_stats
Create Table: CREATE TABLE `table_stats` (
  `schema_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `table_name` varchar(64) COLLATE utf8_bin NOT NULL,
  `table_rows` bigint(20) unsigned DEFAULT NULL,
  `avg_row_length` bigint(20) unsigned DEFAULT NULL,
  `data_length` bigint(20) unsigned DEFAULT NULL,
  `max_data_length` bigint(20) unsigned DEFAULT NULL,
  `index_length` bigint(20) unsigned DEFAULT NULL,
  `data_free` bigint(20) unsigned DEFAULT NULL,
  `auto_increment` bigint(20) unsigned DEFAULT NULL,
  `checksum` bigint(20) unsigned DEFAULT NULL,
  `update_time` timestamp NULL DEFAULT NULL,
  `check_time` timestamp NULL DEFAULT NULL,
  PRIMARY KEY (`schema_name`,`table_name`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

In addition to the tables referenced in the views, there are three that are mentioned in the documentation but are not exposed through information_schema views: triggers, events, and routines.


show create table triggers\G
*************************** 1. row ***************************
       Table: triggers
Create Table: CREATE TABLE `triggers` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `schema_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) CHARACTER SET utf8 NOT NULL,
  `event_type` enum('INSERT','UPDATE','DELETE') COLLATE utf8_bin NOT NULL,
  `table_id` bigint(20) unsigned NOT NULL,
  `action_timing` enum('BEFORE','AFTER') COLLATE utf8_bin NOT NULL,
  `action_order` int(10) unsigned NOT NULL,
  `action_statement` longblob NOT NULL,
  `action_statement_utf8` longtext COLLATE utf8_bin NOT NULL,
  `created` timestamp(2) NOT NULL DEFAULT CURRENT_TIMESTAMP(2) ON UPDATE CURRENT_TIMESTAMP(2),
  `last_altered` timestamp(2) NOT NULL DEFAULT CURRENT_TIMESTAMP(2),
  `sql_mode` set(/*removed*/) COLLATE utf8_bin NOT NULL,
  `definer` varchar(93) COLLATE utf8_bin NOT NULL,
  `client_collation_id` bigint(20) unsigned NOT NULL,
  `connection_collation_id` bigint(20) unsigned NOT NULL,
  `schema_collation_id` bigint(20) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `schema_id` (`schema_id`,`name`),
  UNIQUE KEY `table_id` (`table_id`,`event_type`,`action_timing`,`action_order`),
  KEY `client_collation_id` (`client_collation_id`),
  KEY `connection_collation_id` (`connection_collation_id`),
  KEY `schema_collation_id` (`schema_collation_id`),
  CONSTRAINT `triggers_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
  CONSTRAINT `triggers_ibfk_2` FOREIGN KEY (`table_id`) REFERENCES `tables` (`id`),
  CONSTRAINT `triggers_ibfk_3` FOREIGN KEY (`client_collation_id`) REFERENCES `collations` (`id`),
  CONSTRAINT `triggers_ibfk_4` FOREIGN KEY (`connection_collation_id`) REFERENCES `collations` (`id`),
  CONSTRAINT `triggers_ibfk_5` FOREIGN KEY (`schema_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC
show create table events\G
*************************** 1. row ***************************
       Table: events
Create Table: CREATE TABLE `events` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `schema_id` bigint(20) unsigned NOT NULL,
  `name` varchar(64) CHARACTER SET utf8 NOT NULL,
  `definer` varchar(93) COLLATE utf8_bin NOT NULL,
  `time_zone` varchar(64) COLLATE utf8_bin NOT NULL,
  `definition` longblob NOT NULL,
  `definition_utf8` longtext COLLATE utf8_bin NOT NULL,
  `execute_at` datetime DEFAULT NULL,
  `interval_value` int(11) DEFAULT NULL,
  `interval_field` enum('YEAR','QUARTER','MONTH','DAY','HOUR','MINUTE','WEEK','SECOND','MICROSECOND','YEAR_MONTH','DAY_HOUR','DAY_MINUTE','DAY_SECOND','HOUR_MINUTE','HOUR_SECOND','MINUTE_SECOND','DAY_MICROSECOND','HOUR_MICROSECOND','MINUTE_MICROSECOND','SECOND_MICROSECOND') COLLATE utf8_bin DEFAULT NULL,
  `sql_mode` set('REAL_AS_FLOAT','PIPES_AS_CONCAT','ANSI_QUOTES','IGNORE_SPACE','NOT_USED','ONLY_FULL_GROUP_BY','NO_UNSIGNED_SUBTRACTION','NO_DIR_IN_CREATE','POSTGRESQL','ORACLE','MSSQL','DB2','MAXDB','NO_KEY_OPTIONS','NO_TABLE_OPTIONS','NO_FIELD_OPTIONS','MYSQL323','MYSQL40','ANSI','NO_AUTO_VALUE_ON_ZERO','NO_BACKSLASH_ESCAPES','STRICT_TRANS_TABLES','STRICT_ALL_TABLES','NO_ZERO_IN_DATE','NO_ZERO_DATE','INVALID_DATES','ERROR_FOR_DIVISION_BY_ZERO','TRADITIONAL','NO_AUTO_CREATE_USER','HIGH_NOT_PRECEDENCE','NO_ENGINE_SUBSTITUTION','PAD_CHAR_TO_FULL_LENGTH') COLLATE utf8_bin NOT NULL,
  `starts` datetime DEFAULT NULL,
  `ends` datetime DEFAULT NULL,
  `status` enum('ENABLED','DISABLED','SLAVESIDE_DISABLED') COLLATE utf8_bin NOT NULL,
  `on_completion` enum('DROP','PRESERVE') COLLATE utf8_bin NOT NULL,
  `created` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `last_altered` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `last_executed` datetime DEFAULT NULL,
  `comment` varchar(2048) COLLATE utf8_bin NOT NULL,
  `originator` int(10) unsigned NOT NULL,
  `client_collation_id` bigint(20) unsigned NOT NULL,
  `connection_collation_id` bigint(20) unsigned NOT NULL,
  `schema_collation_id` bigint(20) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `schema_id` (`schema_id`,`name`),
  KEY `client_collation_id` (`client_collation_id`),
  KEY `connection_collation_id` (`connection_collation_id`),
  KEY `schema_collation_id` (`schema_collation_id`),
  CONSTRAINT `events_ibfk_1` FOREIGN KEY (`schema_id`) REFERENCES `schemata` (`id`),
  CONSTRAINT `events_ibfk_2` FOREIGN KEY (`client_collation_id`) REFERENCES `collations` (`id`),
  CONSTRAINT `events_ibfk_3` FOREIGN KEY (`connection_collation_id`) REFERENCES `collations` (`id`),
  CONSTRAINT `events_ibfk_4` FOREIGN KEY (`schema_collation_id`) REFERENCES `collations` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

show create table routines\G
*************************** 1. row ***************************
       Table: ROUTINES
Create Table: CREATE TEMPORARY TABLE `ROUTINES` (
  `SPECIFIC_NAME` varchar(64) NOT NULL DEFAULT '',
  `ROUTINE_CATALOG` varchar(512) NOT NULL DEFAULT '',
  `ROUTINE_SCHEMA` varchar(64) NOT NULL DEFAULT '',
  `ROUTINE_NAME` varchar(64) NOT NULL DEFAULT '',
  `ROUTINE_TYPE` varchar(9) NOT NULL DEFAULT '',
  `DATA_TYPE` varchar(64) NOT NULL DEFAULT '',
  `CHARACTER_MAXIMUM_LENGTH` int(21) DEFAULT NULL,
  `CHARACTER_OCTET_LENGTH` int(21) DEFAULT NULL,
  `NUMERIC_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `NUMERIC_SCALE` int(21) DEFAULT NULL,
  `DATETIME_PRECISION` bigint(21) unsigned DEFAULT NULL,
  `CHARACTER_SET_NAME` varchar(64) DEFAULT NULL,
  `COLLATION_NAME` varchar(64) DEFAULT NULL,
  `DTD_IDENTIFIER` longtext,
  `ROUTINE_BODY` varchar(8) NOT NULL DEFAULT '',
  `ROUTINE_DEFINITION` longtext,
  `EXTERNAL_NAME` varchar(64) DEFAULT NULL,
  `EXTERNAL_LANGUAGE` varchar(64) DEFAULT NULL,
  `PARAMETER_STYLE` varchar(8) NOT NULL DEFAULT '',
  `IS_DETERMINISTIC` varchar(3) NOT NULL DEFAULT '',
  `SQL_DATA_ACCESS` varchar(64) NOT NULL DEFAULT '',
  `SQL_PATH` varchar(64) DEFAULT NULL,
  `SECURITY_TYPE` varchar(7) NOT NULL DEFAULT '',
  `CREATED` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `LAST_ALTERED` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  `SQL_MODE` varchar(8192) NOT NULL DEFAULT '',
  `ROUTINE_COMMENT` longtext NOT NULL,
  `DEFINER` varchar(93) NOT NULL DEFAULT '',
  `CHARACTER_SET_CLIENT` varchar(32) NOT NULL DEFAULT '',
  `COLLATION_CONNECTION` varchar(32) NOT NULL DEFAULT '',
  `DATABASE_COLLATION` varchar(32) NOT NULL DEFAULT ''
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Looking at the code again, I see that there are two more tables, tablespaces and version:

show create table tablespaces\G
*************************** 1. row ***************************
       Table: tablespaces
Create Table: CREATE TABLE `tablespaces` (
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(255) COLLATE utf8_bin NOT NULL,
  `options` mediumtext COLLATE utf8_bin,
  `se_private_data` mediumtext COLLATE utf8_bin,
  `comment` varchar(2048) COLLATE utf8_bin NOT NULL,
  `engine` varchar(64) COLLATE utf8_bin NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

show create table version\G
*************************** 1. row ***************************
       Table: version
Create Table: CREATE TABLE `version` (
  `version` int(10) unsigned NOT NULL,
  PRIMARY KEY (`version`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin STATS_PERSISTENT=0 ROW_FORMAT=DYNAMIC

Now we can try the last part of our task, i.e., querying the data dictionary for some specific info.

mysql [localhost] {msandbox} (mysql) > SET SESSION debug='+d,skip_dd_table_access_check';
mysql [localhost] {msandbox} (mysql) > select * from version;
+---------+
| version |
+---------+
|       1 |
+---------+

mysql [localhost] {msandbox} (mysql) > select id, name from schemata;
+----+--------------------+
| id | name               |
+----+--------------------+
|  2 | information_schema |
|  1 | mysql              |
|  3 | performance_schema |
|  4 | sys                |
|  5 | test               |
+----+--------------------+


mysql [localhost] {msandbox} (mysql) > select id, name, type, engine, mysql_version_id, comment from tables where name = 'user' and schema_id=1;
+----+------+------------+--------+------------------+-----------------------------+
| id | name | type       | engine | mysql_version_id | comment                     |
+----+------+------------+--------+------------------+-----------------------------+
| 84 | user | BASE TABLE | InnoDB |            80000 | Users and global privileges |
+----+------+------------+--------+------------------+-----------------------------+

Now the data dictionary is much more readable!
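With the access check disabled, we can even join the dictionary tables directly, as with any other InnoDB tables. A sketch based on the definitions listed above (the debug flag must be set in the same session):

SET SESSION debug='+d,skip_dd_table_access_check';
SELECT sch.name AS db_name, tbl.name AS table_name, tbl.engine
FROM mysql.tables tbl
JOIN mysql.schemata sch ON tbl.schema_id = sch.id
WHERE sch.name = 'test';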

DISCLAIMER: there may be a simpler or more elegant solution to this problem. The method shown here is what I came up with by researching the code. If there is a better method, short of recompiling the server, I'd like to know.

WARNING: Don't do what I do in the following paragraphs!

To complete the experiment, I am going to do what the MySQL team does not want me to do at all.

First, I create a directory inside the data directory. As shown in data dictionary limitations, this is not supported. But since we can access the data dictionary ...


$ mkdir ~/sandboxes/msb_full_8_0_0/data/db1

Now for the felony part:

mysql [localhost] {msandbox} (mysql) > SET SESSION debug= '+d,skip_dd_table_access_check';
Query OK, 0 rows affected (0.00 sec)

mysql [localhost] {msandbox} (mysql) > select * from schemata;
+----+------------+--------------------+----------------------+---------------------+---------------------+
| id | catalog_id | name               | default_collation_id | created             | last_altered        |
+----+------------+--------------------+----------------------+---------------------+---------------------+
|  1 |          1 | mysql              |                    8 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
|  2 |          1 | information_schema |                   33 | 2016-09-25 18:06:00 | 2016-09-25 18:06:00 |
|  3 |          1 | performance_schema |                   33 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
|  4 |          1 | sys                |                   33 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
|  5 |          1 | test               |                    8 | 0000-00-00 00:00:00 | 0000-00-00 00:00:00 |
+----+------------+--------------------+----------------------+---------------------+---------------------+
5 rows in set (0.00 sec)

mysql [localhost] {msandbox} (mysql) > insert into schemata values (6, 1, 'db1', 8, now(), now());
Query OK, 1 row affected (0.01 sec)

mysql [localhost] {msandbox} (mysql) > show schemas;
+--------------------+
| Database           |
+--------------------+
| db1                | ## TA-DA!
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+

Now, pay attention! This is why the MySQL team doesn't want anyone to mess with the data dictionary tables.

DOUBLE WARNING! Don't do the following!


mysql [localhost] {msandbox} (mysql) > insert into schemata values (7, 1, 'db2', 8, now(), now());
Query OK, 1 row affected (0.01 sec)

mysql [localhost] {msandbox} (mysql) > show schemas;
+--------------------+
| Database           |
+--------------------+
| db1                |
| db2                |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+
7 rows in set (0.00 sec)

mysql [localhost] {msandbox} (mysql) > use db2
ERROR 1049 (42000): Unknown database 'db2'

There! I broke the system. Lesson learned: read, but don't write, the data dictionary tables.
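For the record: if you ran this experiment in a disposable sandbox, the damage can be undone the same way it was done. An untested sketch, to be used only on a server you can afford to throw away:

SET SESSION debug='+d,skip_dd_table_access_check';
DELETE FROM mysql.schemata WHERE name IN ('db1', 'db2');
-- then remove the stray directory from the shell:
-- rmdir ~/sandboxes/msb_full_8_0_0/data/db1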
