Tag Archives: MySQL 5.6

Understanding InnoDB transportable tablespaces in MySQL 5.6

If you’re anything like me,  your initial reaction upon hearing about transportable tablespaces for InnoDB in 5.6 was to imagine it like MyISAM, where you can copy the .frm, .myi and .myd files around to your heart’s content, and everything will be great.  You might have read Sunny’s excellent blog, and realized that there’s a bit more to it than that – you have to explicitly prepare the tablespace for copying using FLUSH TABLES FOR EXPORT.  That’s perfectly acceptable for the bulk of use cases, such as single-table InnoDB backups, and opens up exciting new possibilities for moving or copying InnoDB data at the filesystem level.

But for situations where the need is a little different, you might really dig into it and start to wonder about the .cfg files produced during the FLUSH TABLES FOR EXPORT operation.  What happens if you don’t have – and can’t produce – that information?  What does that .cfg file contain that makes it critical to successfully importing the tablespace later?

New 5.6 password verification plugin (and impacts to PASSWORD() function)

The recent MySQL 5.6.6 release includes a new password verification plugin.  This is a great tool for enforcing passwords that meet certain strength thresholds.  Quoting the options from the manual, there are three different criteria levels that can be applied:

  • WEAK policy tests password length only. Passwords must be at least 8 characters long.
  • MEDIUM policy adds the conditions that passwords must contain at least 1 numeric character, 1 lowercase and uppercase character, and 1 special (nonalphanumeric) character.
  • STRONG policy adds the condition that password substrings of length 4 or longer must not match words in the dictionary file, if one has been specified.
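As a rough illustration of the MEDIUM rules above, a check along these lines would accept or reject candidate passwords (a hypothetical sketch, not the plugin’s actual implementation):

```python
import re

def meets_medium_policy(password):
    """Sketch of the MEDIUM policy described above: at least 8 characters,
    with at least one digit, one lowercase, one uppercase, and one
    special (non-alphanumeric) character."""
    return (len(password) >= 8
            and re.search(r"\d", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[^a-zA-Z0-9]", password) is not None)
```

The STRONG level would additionally reject passwords containing dictionary-word substrings of length 4 or more, which requires the configured dictionary file and is omitted here.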

Deprecated in MySQL 5.6 – ignore_builtin_innodb

Back when MySQL 5.1 was first released, Oracle (which didn’t at the time own Sun or MySQL) wanted to add new InnoDB functionality faster than MySQL could effectively incorporate it into the server, and the InnoDB plugin was introduced.  It provided 5.1 users the option of deploying a more advanced version of InnoDB, or using the more conservative built-in version baked into MySQL.  By the time the plugin reached GA status, though, Oracle had announced the acquisition of Sun (and MySQL), and things started to change.  The coordination between the InnoDB development team at Oracle and the MySQL development team increased once we were all “under one roof”.  The strategic directions of InnoDB and MySQL were aligned, and the results can be seen in the tremendous work done for MySQL 5.5 and 5.6.  In fact, the InnoDB plugin was removed with MySQL 5.5:

The built-in InnoDB storage engine within MySQL is the original form of distribution for the storage engine. Contrast with the InnoDB Plugin. Starting with MySQL 5.5, the InnoDB Plugin is merged back into the MySQL code base as the built-in InnoDB storage engine (known as InnoDB 1.1).

This distinction is important mainly in MySQL 5.1, where a feature or bug fix might apply to the InnoDB Plugin but not the built-in InnoDB, or vice versa.

One artifact of the “two different InnoDB engines” introduced with the plugin remains:  the ignore_builtin_innodb configuration option.  As of 5.6.5, that option is deprecated and has no effect:

D:\mysql-5.6.5-m8-win32>bin\mysqld --no-defaults --console --port=3307 --ignore-builtin-innodb
120531 10:54:30 [Warning] ignore-builtin-innodb is ignored and will be removed in future releases.

Since MySQL 5.5, we’ve focused on and invested in InnoDB as a core element and default storage engine for MySQL.  As such, in 5.5, we removed the InnoDB plugin.  And now with 5.6, we are eliminating the option to disable InnoDB.

 

Learn From MySQL Support Staff at MySQL Connect!

Members of the MySQL Support Team wear a number of different hats here at Oracle.  Obviously, our top priority is to provide amazing technical support that makes customers rave.  We also have a team dedicated to processing bug reports from the MySQL Community.  Some of us are active bloggers or assist on mailing lists or forums, while others find other ways to contribute to the MySQL Community.  We help out with QA and product planning and write books, and the consultative support aspects of MySQL subscriptions give us the opportunity to train and advise customers on best practices and successful MySQL-backed application architecture.  In short, we’re deeply invested in making sure that MySQL deployments are successful.

This year, we’re privileged to send some of these great MySQL experts to the inaugural MySQL Connect Conference this September 29-30, and it will be a great opportunity for you to experience first-hand the value that MySQL Support can provide.

Be sure to stop by the MySQL Support booth in the demo grounds at MySQL Connect and meet these MySQL experts – they’ll be happy to help you solve any of your tricky MySQL problems.

If you’re an Oracle DBA just getting started on MySQL, it’s worth the $100 fee to add MySQL Connect to your OOW registration, but Ben Krug will also be leading a session at OOW with the goal of Demystifying MySQL for Oracle DBAs and Developers.

And if California is just too far to travel, perhaps you can catch Uma Bhat’s presentation at PyCon India, Introduction to MySQL Connector/Python.

We hope to see you at one of these upcoming conferences!

Understanding mysql_config_editor’s security aspects

The recent release of 5.6.6 includes a new utility, mysql_config_editor, which makes it easier to interact with MySQL clients securely.  At the same time, it’s easy to overstate the security benefits of using this new tool, and unfortunately, I think a couple of statements in the release notes and documentation go a step too far (something we’re in the process of correcting).  Ronald quoted one of them in his blog:

MySQL now provides a method for storing authentication credentials securely in an option file named .mylogin.cnf.

This enhancement really isn’t about securing passwords at a file-system level.  Don’t assume that the encryption used will prevent attackers from getting passwords stored in the .mylogin.cnf file if they have read access to it – it absolutely will not.  There are multiple ways to extract the password out of this file, some more complex than others.  Don’t make the mistake of believing that the password storage is safe just because the contents are encrypted.

So, if this enhancement isn’t about secure storing of passwords, what is it about?  It makes secure access via MySQL client applications easier to use.  For example, it helps eliminate the need to include a password as an argument to mysql or other command-line clients:

shell>  mysql -hproduction -uroot -pmypass

becomes

shell> mysql --login-path=production

(assuming we used mysql_config_editor to set up a “production” configuration)

That’s obviously more secure if somebody can view the running processes – you don’t want to expose your password like that.  Of course, you could always just include the -p argument and enter the password interactively (that doesn’t echo it to the screen or display it in any process list).  But that doesn’t help with scripts or batch jobs where interactive password entry isn’t an option.  So, great – this enhancement eliminates the need to include the password where it can’t be entered interactively.  Sounds great – but this mechanism has long existed in MySQL through the use of configuration files (and you can have as many different configuration files as you want, dedicated to various connections).  You can easily put the password under the [client] header, and invoke clients like so:

shell> mysql --defaults-file=~/production.cnf

Again, this assumes there’s a ~/production.cnf file that defines the needed connection attributes in the [client] section.
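Setting up such a file by hand looks roughly like this (a sketch with made-up credentials; the permissions step is exactly what mysql_config_editor automates for you):

```shell
# Create a [client] option file with hypothetical connection details,
# then restrict it so only the current user can read or write it.
cat > production.cnf <<'EOF'
[client]
host=production
user=root
password=mypass
EOF
chmod 600 production.cnf

# Verify the file is user-only (should print 600).
stat -c '%a' production.cnf
```

With the file in place, `mysql --defaults-file=production.cnf` picks up the host, user, and password without exposing them on the command line.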

So, if it’s already possible to eliminate passwords for both interactive and batch use cases, why is this needed?  The most immediate answer is that this makes it easier to get password storage – when you do it – right.  In the above example with the production.cnf file, you have to create that file yourself and ensure that it has appropriate privileges assigned.  That’s not the case when you use mysql_config_editor.  Quoting from the manual:

The login file must be readable and writable to the current user, and inaccessible to other users. Otherwise, mysql_config_editor ignores it, and the file is not used by client programs, either.

So one of the principal benefits of mysql_config_editor is ease-of-use – you can rely on it to store your password in a file that is not readable by other users.  That’s not to say that you could not have done exactly the same thing yourself by setting appropriate file permissions on a .cnf file of your own making, but using the tool eliminates the possibility that it gets overlooked.  Note that I believe that this aspect of the feature may not be working properly for Windows at this stage, and I’m in the process of validating and reporting a bug on this.

Now you may be wondering:  If the file encryption doesn’t protect against attackers having file system access, what does it protect against?  It eliminates a password storage location that might be misused by a user in a fit of urgency.  If a user can’t get access (their own) plain-text passwords from .mylogin.cnf, it may throw just one more hurdle preventing passwords from being copied and pasted into other files (which would have the same need for file system restrictions) or command-line parameters.  In short, it reduces the possibility that users may unintentionally use their own passwords in an insecure manner.

All that said, the feature does have some nice ease-of-use aspects that aren’t explicitly security-related.  For example, you don’t even have to store passwords with the utility – you can use it as a collection of all your hard-to-remember server locations, and still enter the password interactively:

shell> bin\mysql_config_editor print --all
[slave1]
host = some.really.obscure.slave.host
[slave2]
host = another.really.obscure.slave.host

shell> bin\mysql --login-path=slave1 --user=something -p
Enter password: ****

The options you specify as client application arguments will override whatever is stored in .mylogin.cnf:

shell> bin\mysql --login-path=slave1 --host=localhost --user=root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5

So, if you do choose to store passwords in your .mylogin.cnf file, and you use the same password for multiple accounts (look at me advocating security), you can have a single entry that remembers your password, and choose the host as an argument to the client.

This can be a useful enhancement for very specific security requirements and helps make it easier to be more secure in handling passwords as well as connections to multiple servers.  At the same time, it should not be confused with a mechanism that prevents attackers from acquiring passwords if they have file-system level access.

 

Bye-bye .MSI (Hello MySQL Installer)

Windows users of MySQL may find something missing when they first download MySQL 5.6.6.  The .MSI package of MySQL for Windows has been eliminated in favor of the MySQL Installer for Windows.  You can still download MySQL in .ZIP format (as well as a source code .ZIP for the GPL-licensed Community edition), but the quickest way to get MySQL running on Windows is the MySQL Installer.  You can download the version of MySQL Installer that includes MySQL 5.6.6 on dev.mysql.com.

The MySQL Installer has a number of improvements over the old .MSI installer and configuration wizard.  Running the 5.6.6 installer on my machine, it detected that ports 3306 and 3307 were already used by other MySQL installations, and suggested port 3308 for the new installation.  It also recognized conflicts with existing Windows service names, and suggested variants that don’t conflict (e.g., suggesting “MySQL56” when “MySQL” is already taken, and “MySQL56_1” when both are taken).  It smartly helps you set an appropriately-strong password for the root account, and gives you the option to add new user accounts during initial configuration.  Additionally, you can install (or update) tools that Windows users will find helpful, including MySQL Workbench and the new MySQL Notifier and MySQL for Excel products, as well as various connectors and documentation – all from one interface.

The new installer is roughly the same size as MySQL Server .ZIP downloads (about 200MB).  The package includes MySQL Server, and downloads other products on-demand.  It will also help you check for and install recent updates to any of your installed products.  I ran into a few rough edges while testing (the Windows service created could not be started, and a datadir was created in two different locations), but on whole, it’s a vast improvement over the previous .MSI packages, and there are continuous improvements being made by the great engineering team here at Oracle.

Deprecated in MySQL 5.6 – YEAR(2)

Back in April, I issued a challenge to identify any compelling use case for the YEAR(2) data type.  Months later, I still don’t have such a use case, and the release of 5.6.6 brings deprecation of support for the YEAR(2) datatype.  Trying to create a table with a YEAR(2) column in 5.6.6 or greater will result in conversion to YEAR(4) and a warning:

mysql> SELECT VERSION();
+-----------+
| VERSION() |
+-----------+
| 5.6.6-m9  |
+-----------+
1 row in set (0.00 sec)

mysql> CREATE TABLE y (y YEAR(2));
Query OK, 0 rows affected, 1 warning (0.13 sec)

mysql> SHOW WARNINGS\G
*************************** 1. row ***************************
Level: Warning
Code: 1818
Message: YEAR(2) column type is deprecated. Creating YEAR(4) column instead.
1 row in set (0.00 sec)

The change logs for 5.6.6 highlight this change, and reference the excellent documentation of YEAR(2) limitations and migration strategies.  If you’re not sure whether you have YEAR(2) columns or not, the following command will identify any such columns:

SELECT table_schema, table_name, column_name, column_type
FROM information_schema.columns
WHERE data_type = 'YEAR'
AND column_type != 'year(4)';

Note that some queries (including this one) against INFORMATION_SCHEMA tables can be slow, and may not be suitable for running against a busy production database server.  In such instances, you may want to execute this against a slave, backup or development machine loaded with the current schema.

 

How will IPv6 changes in 5.6.6 affect you?

As stated in the 5.6.6 release notes, the default value of --bind-address has changed from the IPv4-specific “0.0.0.0” to “*”, a value that allows MySQL to bind to IPv6 interfaces by default.  There are a few implications to this change.

First, make sure you’ve hardened both your IPv4 and your IPv6 access points to your MySQL server.  Because MySQL now listens by default on both IPv4 and IPv6 interfaces, an installation that has only hardened IPv4 interfaces may find unaddressed vulnerabilities exposed via the IPv6 interface that is now used by default.  If you are running IPv4 interfaces only, you can also change the --bind-address value back to the previous default value of “0.0.0.0”.

Second, the change in defaults can have an impact on startup speed for deployments where there is no IPv6 support.  On my Windows XP laptop, startup stalls for 45 seconds while MySQL lets Windows determine that IPv6 is not supported:

120813 10:17:16 [Note] Server hostname (bind-address): '*'; port: 3307
120813 10:18:01 [Note] IPv6 is not available.
120813 10:18:01 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
120813 10:18:01 [Note] Server socket created on IP: '0.0.0.0'.

Why Windows takes 45 seconds to figure this out is anybody’s guess, but explicitly setting --bind-address to 0.0.0.0 eliminates this behavior, and server start-up speed is restored:

120813 10:27:39 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3307
120813 10:27:39 [Note]   - '0.0.0.0' resolves to '0.0.0.0';
120813 10:27:39 [Note] Server socket created on IP: '0.0.0.0'.
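In option-file form, pinning the server back to IPv4 looks like this (a minimal sketch of a my.cnf / my.ini fragment):

```ini
[mysqld]
# Restore the pre-5.6.6 default: listen on IPv4 interfaces only
bind-address = 0.0.0.0
```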

Another option is to make sure your OS has IPv6 support enabled (Microsoft documents this procedure for XP in KB article 2478747).

Whether you have IPv6 support enabled or not, make sure you consider how the change in behavior introduced in 5.6.6 will affect you.

Who’s leaking prepared statements?

In my last post, I described a specific problem with prepared statements into which PERFORMANCE_SCHEMA can give visibility.  That made me wonder whether PERFORMANCE_SCHEMA can also be used to identify other areas where prepared statements run into problems.  The most significant problem tends to be leakage of prepared statements.  This can inflate memory usage, both on the server and application side, and it’s not uncommon to find applications which fail to close prepared statements.

So the question is, what can PERFORMANCE_SCHEMA tell us about how connections close (or more importantly, fail to close) prepared statements?

At the most basic level, one can check the number of PREPARE statements executed compared to DEALLOCATE PREPARE, and you can do that using global status variables.  You’re shooting for general equality between (Com_prepare_sql + Com_stmt_prepare) and (Com_stmt_close + Com_dealloc_sql).  I’m using session status below to make it easy to follow, but obviously global status variables will be most interesting in any review of prepared statement leakage:

mysql> flush status;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW SESSION STATUS LIKE 'com_%prepare%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| Com_prepare_sql    | 0     |
| Com_stmt_prepare   | 0     |
| Com_stmt_reprepare | 0     |
| Com_xa_prepare     | 0     |
+--------------------+-------+
4 rows in set (0.00 sec)

mysql> PREPARE stmt FROM 'SELECT 1';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> SHOW SESSION STATUS LIKE 'com_%prepare%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| Com_prepare_sql    | 1     |
| Com_stmt_prepare   | 1     |
| Com_stmt_reprepare | 0     |
| Com_xa_prepare     | 0     |
+--------------------+-------+
4 rows in set (0.00 sec)

mysql> SHOW SESSION STATUS LIKE 'Com_stmt_close';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| Com_stmt_close | 0     |
+----------------+-------+
1 row in set (0.00 sec)

mysql> SHOW SESSION STATUS LIKE 'Com_dealloc_sql';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Com_dealloc_sql | 0     |
+-----------------+-------+
1 row in set (0.00 sec)

mysql> DEALLOCATE PREPARE stmt;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW SESSION STATUS LIKE 'Com_stmt_close';
+----------------+-------+
| Variable_name  | Value |
+----------------+-------+
| Com_stmt_close | 1     |
+----------------+-------+
1 row in set (0.00 sec)

mysql> SHOW SESSION STATUS LIKE 'Com_dealloc_sql';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Com_dealloc_sql | 1     |
+-----------------+-------+
1 row in set (0.00 sec)
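The balance check shown above can be sketched in a few lines (a rough illustration over a dict of counter values, e.g. scraped from SHOW GLOBAL STATUS; re-prepares and errors can skew the counters slightly, so treat the result as an indicator rather than an exact count):

```python
def prepared_statement_imbalance(status):
    """Compare prepares against closes, per the formula described above.

    `status` maps counter names to integer values.  A result that stays
    persistently above zero suggests prepared statements are leaking.
    Note both sums double-count SQL-level PREPARE/DEALLOCATE (each bumps
    two counters), so the comparison to zero still holds.
    """
    prepares = status["Com_prepare_sql"] + status["Com_stmt_prepare"]
    closes = status["Com_stmt_close"] + status["Com_dealloc_sql"]
    return prepares - closes
```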

That’s useful as a start to identify whether a problem exists, but it doesn’t help isolate the source very well. PERFORMANCE_SCHEMA can take it the next step.

The PERFORMANCE_SCHEMA counters are smarter than the status variables, and we can count the number of prepared statements that have been opened (the statement/sql/prepare_sql event for normal statements, the statement/com/Prepare event for connections using the COM_STMT_PREPARE protocol command) and the number closed (the statement/sql/dealloc_sql event for normal statements, the statement/com/Close stmt event for the COM_STMT_CLOSE command).  One can write a query which computes the difference between the prepares and closes:

SELECT sp.thread_id, t.processlist_user user, t.processlist_host host,
       sp.count_star - sd.count_star open_com_query_ps,
       cp.count_star - cc.count_star open_com_prepare_ps
FROM
  ( SELECT COUNT_STAR, THREAD_ID
    FROM events_statements_summary_by_thread_by_event_name
    WHERE event_name = 'statement/sql/prepare_sql' ) sp
  JOIN
  ( SELECT COUNT_STAR, THREAD_ID
    FROM events_statements_summary_by_thread_by_event_name
    WHERE event_name = 'statement/com/Prepare' ) cp
  ON (cp.THREAD_ID = sp.THREAD_ID)
  JOIN
  ( SELECT COUNT_STAR, THREAD_ID
    FROM events_statements_summary_by_thread_by_event_name
    WHERE event_name = 'statement/sql/dealloc_sql' ) sd
  ON (sd.THREAD_ID = sp.THREAD_ID)
  JOIN
  ( SELECT COUNT_STAR, THREAD_ID
    FROM events_statements_summary_by_thread_by_event_name
    WHERE event_name = 'statement/com/Close stmt' ) cc
  ON (cc.THREAD_ID = sp.THREAD_ID)
  JOIN threads t ON (t.thread_id = sp.thread_id)
ORDER BY GREATEST(open_com_query_ps, open_com_prepare_ps) DESC;

That’s nice, and it gives us output like the following:

+-----------+------+-----------+-------------------+---------------------+
| thread_id | user | host      | open_com_query_ps | open_com_prepare_ps |
+-----------+------+-----------+-------------------+---------------------+
|        22 | root | localhost |                 3 |                   0 |
+-----------+------+-----------+-------------------+---------------------+
1 row in set (0.08 sec)

Of course, there are still ways in which this isn’t 100% accurate. For example, one can do stuff like this:

mysql> PREPARE stmt FROM 'SELECT 1';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> PREPARE stmt FROM 'SELECT 2';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> SELECT sp.thread_id, t.processlist_user user, t.processlist_host host,
-> sp.count_star - sd.count_star open_com_query_ps,
-> cp.count_star - cc.count_star open_com_prepare_ps
-> FROM
->  ( SELECT COUNT_STAR,
->     THREAD_ID
->    FROM events_statements_summary_by_thread_by_event_name
->    WHERE event_name = 'statement/sql/prepare_sql' ) sp
->  JOIN
->  ( SELECT COUNT_STAR,
->     THREAD_ID
->    FROM events_statements_summary_by_thread_by_event_name
->    WHERE event_name = 'statement/com/Prepare' ) cp
->  ON (cp.THREAD_ID = sp.THREAD_ID)
->  JOIN
->  ( SELECT COUNT_STAR,
->     THREAD_ID
->    FROM events_statements_summary_by_thread_by_event_name
->    WHERE event_name = 'statement/sql/dealloc_sql' ) sd
->  ON (sd.THREAD_ID = sp.THREAD_ID)
->  JOIN
->  ( SELECT COUNT_STAR,
->     THREAD_ID
->    FROM events_statements_summary_by_thread_by_event_name
->    WHERE event_name = 'statement/com/Close stmt' ) cc
->  ON (cc.THREAD_ID = sp.THREAD_ID)
->  JOIN threads t ON (t.thread_id = sp.thread_id)
->  WHERE sp.count_star - sd.count_star <> 0
->   OR cp.count_star - cc.count_star <> 0
-> ORDER BY GREATEST(open_com_query_ps, open_com_prepare_ps) DESC;
+-----------+------+-----------+-------------------+---------------------+
| thread_id | user | host      | open_com_query_ps | open_com_prepare_ps |
+-----------+------+-----------+-------------------+---------------------+
|        22 | root | localhost |                 5 |                   0 |
+-----------+------+-----------+-------------------+---------------------+
1 row in set (0.09 sec)

In the above, I prepared the same statement handler twice without closing it.

I know Marc has some ideas for further PERFORMANCE_SCHEMA monitoring of prepared statements, but what’s implemented today is a good step forward. It’s easier to tie a potential prepared statement leak to specific accounts, and to isolate the leakage to protocol-based prepared statements (drivers using COM_PREPARE) or the more generic COM_QUERY interface used by the mysql command-line interface.

Prepared statement peculiarities (P_S to the rescue)

Prepared statements have been with MySQL since version 4.1, including the protocol plumbing that helps support it.  What I didn’t realize – until a recent expedition through a general query log – is that the mysql command-line interface doesn’t implement the protocol commands that support this explicitly.  I came to this realization after observing a byproduct of this behavior.

The initial observation that triggered this exploration was noting that PREPARE and EXECUTE statements, when issued from the mysql command-line interface, result in two entries per command in the general query log:

6 Query    PREPARE stmt FROM 'SELECT RAND()'
6 Prepare    SELECT RAND()
6 Query    EXECUTE stmt
6 Execute    SELECT RAND()

Contrast this behavior with what is seen when a client sends COM_PREPARE and COM_EXECUTE, such as below with Connector/J (and useServerPrepStmts=true):

14 Prepare    SELECT * FROM t1 WHERE a = ?
14 Execute    SELECT * FROM t1 WHERE a = 2

This was of interest to me because I had a script to take the contents of the general query log and separate them into per-connection files containing the SQL commands, to assist in diagnosing application behavior when developers can’t effectively isolate or describe problematic behavior.  By stripping out certain commands (connect/disconnect) and transforming others (Init DB to the original USE [db] command), this script could potentially help us build repeatable test cases faster.

Of course, that doesn’t work if prepared statement commands are duplicated sometimes.

The general query log helpfully records the type of command executed.  From the mysql cli, you can see “Query”, which corresponds to the COM_QUERY command in the protocol.  You also see the “Prepare” and “Execute”, the latter of which has the interpreted values.  Executed from Connector/J, though, you only see the “Prepare” and “Execute” because the driver is sending COM_STMT_PREPARE and COM_STMT_EXECUTE directly.  The protocol documentation describes these commands well.  That gives me two options, if I want to transform the general query log into a somewhat accurate series of SQL statements:

  1. Throw away all of the Prepare and Execute events, hoping that everybody executing prepared statements is using the cli (or COM_QUERY), and replay only the COM_QUERY.
  2. Parse and discard any COM_QUERY commands that invoke PREPARE or EXECUTE, and just use the generated SQL found in the “Execute” entries in the general query log.
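Option 2 might be sketched like this (a hypothetical helper; it assumes the log lines have already been trimmed to a `<connection-id> <Command> <argument>` shape, which simplifies the real general query log format):

```python
import re

def replayable_statements(general_log_lines):
    """Keep ordinary Query statements plus the interpreted SQL from
    Execute entries, dropping the Query lines that wrap PREPARE/EXECUTE
    (those are duplicated by the Prepare/Execute entries)."""
    out = []
    for line in general_log_lines:
        m = re.match(r"\s*\d+\s+(\w+)\s+(.*)", line)
        if not m:
            continue
        command, arg = m.groups()
        if command == "Query":
            if re.match(r"(?i)\s*(PREPARE|EXECUTE)\b", arg):
                continue  # duplicated by the Prepare/Execute entries
            out.append(arg)
        elif command == "Execute":
            out.append(arg)  # interpreted SQL with bound values
    return out
```

This keeps a single executable statement per logical command, whether the client used COM_QUERY or the protocol-level prepared statement commands.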

I kind of understand why the mysql cli doesn’t implement the protocol commands for prepared statements, even if it does parse and transform other commands (e.g., “USE db”).  The server-side support allows prepared statements to be used at the application level even if the driver doesn’t support it at the protocol level.  And from a debugging perspective, it’s very nice to see the generated SQL in the general query log.  Because the general query log includes every command it gets, before execution is started, I also understand why the original query received has to show there, unaltered.  I do wish, however, that there was a flag in the general query log to indicate that the Prepare or Execute being logged was internally generated from a COM_QUERY.

The duplication of commands in the general query log did make me wonder about performance.  It seems pretty clear that there’s some overhead added when COM_QUERY is used to send PREPARE or EXECUTE commands to the server – at the very minimum, the original statement gets logged to the general query log and some parsing done to redirect execution to code paths handling preparation or execution of prepared statements.  For the fun of it, I thought I would benchmark what the performance difference is on my (admittedly slow) laptop, using Java.  Here’s what the code looks like:

public static void testPSPerformance() throws Exception {
 Class.forName("com.mysql.jdbc.Driver");
 Properties props = new Properties();
 props.setProperty("user", "root");
 props.setProperty("useServerPrepStmts", "true");
 Connection conn =
  DriverManager.getConnection("jdbc:mysql://localhost:3306/test", props);
 System.out.println("Connected!");
 PreparedStatement ps = conn.prepareStatement("SELECT RAND()");
 long start = System.currentTimeMillis();
 for(int i = 0; i < 1000000; i++){ ps.execute(); }
 long end = System.currentTimeMillis();
 System.out.println("Using COM_PREPARE:  " + (end - start));

 Statement stmt = conn.createStatement();
 stmt.execute("PREPARE stmt FROM 'SELECT RAND()'");
 final String ex = "EXECUTE stmt";
 start = System.currentTimeMillis();
 for(int i = 0; i < 1000000; i++){ stmt.execute(ex); }
 end = System.currentTimeMillis();
 System.out.println("Using COM_QUERY:  " + (end - start));
}

The end results show about a 10% performance loss from using COM_QUERY, although the actual difference may be more or less depending on your driver and deployment environment.  For example, the Connector/J Java code path for using Statement objects is different from that for server-side PreparedStatements, and overhead may be added or removed there.  Here are the results from my testing:

Using COM_PREPARE:  80516
Using COM_QUERY:  90109

Using COM_PREPARE:  80547
Using COM_QUERY:  87594

Using COM_PREPARE:  81344
Using COM_QUERY:  89781

As you might expect, the overall execution time as well as the performance difference increases when the general query log is enabled:

Using COM_PREPARE:  100031
Using COM_QUERY:  126485

Again, you won’t want to draw any solid conclusions from the above about whether using prepared statements the way the mysql cli does represents a performance problem for you.  But it might be worth checking.  So, how can that be done?  Unfortunately, the relevant status variables don’t distinguish between when a statement is prepared using COM_PREPARE or COM_QUERY (UPDATE: You can evaluate whether COM_QUERY or COM_PREPARE is used by subtracting Com_prepare_sql from Com_stmt_prepare.  The latter is always incremented for PREPARE statements, regardless of whether they are issued as part of a COM_QUERY or COM_PREPARE command, while the former is only incremented when COM_QUERY is used):

mysql> FLUSH STATUS;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW SESSION STATUS LIKE '%prep%';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Com_prepare_sql     | 0     |
| Com_stmt_prepare    | 0     |
| Com_stmt_reprepare  | 0     |
| Com_xa_prepare      | 0     |
| Handler_prepare     | 0     |
| Prepared_stmt_count | 1     |
+---------------------+-------+
6 rows in set (0.00 sec)

mysql> prepare stmt from 'SELECT 1';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> SHOW SESSION STATUS LIKE '%prep%';
+---------------------+-------+
| Variable_name       | Value |
+---------------------+-------+
| Com_prepare_sql     | 1     |
| Com_stmt_prepare    | 1     |
| Com_stmt_reprepare  | 0     |
| Com_xa_prepare      | 0     |
| Handler_prepare     | 0     |
| Prepared_stmt_count | 1     |
+---------------------+-------+
6 rows in set (0.00 sec)
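The subtraction described in the update above can be sketched as follows (a rough helper over a dict of counter values):

```python
def prepares_via_protocol(status):
    """Com_stmt_prepare counts every PREPARE; Com_prepare_sql counts only
    those arriving inside a COM_QUERY.  The difference is the number of
    prepares issued via the protocol-level prepare command."""
    return status["Com_stmt_prepare"] - status["Com_prepare_sql"]
```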

So, how can you determine whether your use of prepared statements uses COM_PREPARE or COM_QUERY?  Using PERFORMANCE_SCHEMA in 5.6, it’s easy!  Here’s the query:

SELECT
user,
host,
count_star,
IF(event_name = 'statement/sql/prepare_sql',
'COM_QUERY', 'COM_PREPARE') command
FROM events_statements_summary_by_account_by_event_name
WHERE EVENT_NAME IN
('statement/sql/prepare_sql', 'statement/com/Prepare');

Here it is in action:

mysql> SELECT
->  user,
->  host,
->  count_star,
->  IF(event_name = 'statement/sql/prepare_sql',
->   'COM_QUERY', 'COM_PREPARE') command
-> FROM events_statements_summary_by_account_by_event_name
-> WHERE EVENT_NAME IN
-> ('statement/sql/prepare_sql', 'statement/com/Prepare');
+------+-----------+------------+-------------+
| user | host      | count_star | command     |
+------+-----------+------------+-------------+
| NULL | NULL      |          0 | COM_QUERY   |
| NULL | NULL      |          0 | COM_PREPARE |
| root | localhost |          4 | COM_QUERY   |
| root | localhost |          3 | COM_PREPARE |
+------+-----------+------------+-------------+
4 rows in set (0.00 sec)

mysql> prepare stmt from 'SELECT 1';
Query OK, 0 rows affected (0.00 sec)
Statement prepared

mysql> SELECT
->  user,
->  host,
->  count_star,
->  IF(event_name = 'statement/sql/prepare_sql',
->   'COM_QUERY', 'COM_PREPARE') command
-> FROM events_statements_summary_by_account_by_event_name
-> WHERE EVENT_NAME IN
-> ('statement/sql/prepare_sql', 'statement/com/Prepare');
+------+-----------+------------+-------------+
| user | host      | count_star | command     |
+------+-----------+------------+-------------+
| NULL | NULL      |          0 | COM_QUERY   |
| NULL | NULL      |          0 | COM_PREPARE |
| root | localhost |          5 | COM_QUERY   |
| root | localhost |          3 | COM_PREPARE |
+------+-----------+------------+-------------+
4 rows in set (0.00 sec)

mysql> -- Issue prepared statement from Connector/J:
mysql> SELECT
->  user,
->  host,
->  count_star,
->  IF(event_name = 'statement/sql/prepare_sql',
->   'COM_QUERY', 'COM_PREPARE') command
-> FROM events_statements_summary_by_account_by_event_name
-> WHERE EVENT_NAME IN
-> ('statement/sql/prepare_sql', 'statement/com/Prepare');
+------+-----------+------------+-------------+
| user | host      | count_star | command     |
+------+-----------+------------+-------------+
| NULL | NULL      |          0 | COM_QUERY   |
| NULL | NULL      |          0 | COM_PREPARE |
| root | localhost |          5 | COM_QUERY   |
| root | localhost |          4 | COM_PREPARE |
+------+-----------+------------+-------------+
4 rows in set (0.00 sec)

The original problem may not be of much concern to you, but the power of PERFORMANCE_SCHEMA to dig deeper into server behavior is something that is applicable regardless of whether prepared statements are a concern for you or not.