Tweaking MySQL Optimizer for performance

We can directly control the MySQL optimizer with two popular system variables: optimizer_prune_level and optimizer_search_depth. You must be very careful when changing these parameters in production; if they are not tested well in staging first, there is a high chance of undesired results and performance degradation. 
optimizer_prune_level 
The optimizer_prune_level variable influences whether the optimizer skips candidate execution plans based on heuristics about the total number of rows accessed for each table. The default value is "1", and this works well in most cases, but if you strongly believe (supporting evidence is always better) that the MySQL optimizer is missing a much better execution plan, then disable pruning by setting optimizer_prune_level to "0". In many cases, even after disabling optimizer_prune_level, MySQL ends up using the plan the optimizer would have recommended anyway.
optimizer_search_depth 
By setting the optimizer_search_depth system variable you are telling the optimizer how far into the "future" of each incomplete execution plan it should evaluate before expanding further; in simple terms, the maximum depth of the search performed by the query optimizer. The default value is "62", which is also the highest configurable value. Values larger than the number of relations in a query result in better query plans but take longer to generate an execution plan; values smaller than the number of relations return an execution plan quicker, but the resulting plan may be far from optimal. If set to 0, the system automatically picks a reasonable value.
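As a sketch of how you might experiment with the two variables above, both can be changed at session scope so the test affects only your own connection (the query here is a hypothetical placeholder; the values shown are for illustration, not recommendations):

```sql
-- Session-scope experiment: disable plan pruning and use full search depth
SET SESSION optimizer_prune_level = 0;
SET SESSION optimizer_search_depth = 62;

-- Confirm the current session settings
SELECT @@session.optimizer_prune_level, @@session.optimizer_search_depth;

-- Compare EXPLAIN output for your query before and after the change
-- (replace with your own multi-table join)
EXPLAIN SELECT * FROM t1 JOIN t2 ON t1.id = t2.t1_id;
```

Because both settings revert when the session ends, this is a safe way to gather the "supporting evidence" mentioned above before considering any global change.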

Benchmarking MySQL using Sysbench


  • Install Sysbench on CentOS / Red Hat 
                   yum install sysbench 

  • Install Sysbench on Debian / Ubuntu 
                  apt-get install sysbench 

Using sysbench to benchmark CPU 

This is a simple CPU benchmarking exercise using sysbench. In this test we use sysbench to measure the total time required to calculate prime numbers up to a given limit.


[root@localhost backup]# sysbench --test=cpu --cpu-max-prime=20000 run


Running the test with following options:
Number of threads: 1
Initializing random number generator from current time


Prime numbers limit: 20000

Initializing worker threads...

Threads started!

CPU speed:
    events per second:   259.27

General statistics:
    total time:                          10.0032s
    total number of events:              2594

Latency (ms):
         min:                                  3.45
         avg:                                  3.85
         max:                                 20.48
         95th percentile:                      5.67
         sum:                               9998.63

Threads fairness:
    events (avg/stddev):           2594.0000/0.00
    execution time (avg/stddev):   9.9986/0.00

[root@localhost backup]# 

What is really useful for us from this exercise is the total time, which is 10.0032s.

Using sysbench to benchmark I/O

Now let's see how the system behaves and performs under different kinds of I/O load. The prerequisite for this test is to prepare files for the benchmark. You must prepare more data than can fit in memory; if the data fits in memory, the operating system will cache most of it and you will not get accurate results. We will begin by creating the files:

[root@localhost backup]# sysbench --test=fileio --file-total-size=180G prepare  


The next step is to run the benchmark. There are multiple options to test different kinds of I/O performance:

  • seqwr - sequential write 
  • rndwr - random write 
  • seqrewr - sequential rewrite 
  • seqrd - sequential read 
  • rndrd - random read  
  • rndrw - combined random read / write 
In this example we are running the combined random read/write file I/O benchmark:


[root@localhost backup]# sysbench --test=fileio --file-total-size=180G --file-test-mode=rndrw --max-time=300 --max-requests=0 run 



File operations:
    reads/s:                      148.60
    writes/s:                     99.06
    fsyncs/s:                     316.59

Throughput:
    read, MiB/s:                  2.32
    written, MiB/s:               1.55

General statistics:
    total time:                          300.0026s
    total number of events:              169280

Latency (ms):
         min:                                  0.00
         avg:                                  1.77
         max:                                148.57
         95th percentile:                      4.10
         sum:                             299124.94

Threads fairness:
    events (avg/stddev):           169280.0000/0.00
    execution time (avg/stddev):   299.1249/0.00


Though we get an exhaustive report after benchmarking I/O, what I really look at is the Throughput and the total time.

** Please do not forget to clean up after this benchmarking exercise:

[root@localhost backup]# sysbench --test=fileio --file-total-size=180G cleanup

Removing test files...


OLTP (online transaction processing workload) benchmark using sysbench 


In this exercise we will see how to use sysbench to emulate an OLTP workload. I have used Sysbench 0.5 here, which is only available from the source tree at Launchpad. Building it is straightforward:

bzr branch lp:sysbench sysbench-trunk
cd sysbench-trunk
./autogen.sh
./configure
make

The Lua scripts can be found in the sysbench source directory under sysbench/tests/db. To run the OLTP benchmark you have to give the path to the Lua script to run:

[root@localhost db]# sysbench oltp.lua --mysql-user=root --mysql-password=MyMySQLPassword --time=60 --threads=10 prepare 

sysbench 1.0.8 (using bundled LuaJIT 2.1.0-beta2)


Creating table 'sbtest1'...
Inserting 10000 records into 'sbtest1'


[root@localhost db]# sysbench oltp.lua --mysql-user=root --mysql-password=MyMySQLPassword --time=60 --threads=10 run 

sysbench 1.0.8 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 10
Initializing random number generator from current time


Initializing worker threads...

Threads started!

SQL statistics:
    queries performed:
        read:                            232932
        write:                           66537
        other:                           33270
        total:                           332739
    transactions:                        16632  (277.06 per sec.)
    queries:                             332739 (5542.93 per sec.)
    ignored errors:                      6      (0.10 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0268s
    total number of events:              16632

Latency (ms):
         min:                                  4.64
         avg:                                 36.08
         max:                                390.05
         95th percentile:                     56.84
         sum:                             600078.60

Threads fairness:
    events (avg/stddev):           1663.2000/10.06
    execution time (avg/stddev):   60.0079/0.00




  I generally look at metrics like transactions ("277.06 per sec."), queries ("5542.93 per sec."), total time ("60.0268s"), and the latency sum ("600078.60" ms).

Setting BULK_INSERT_BUFFER_SIZE for loading data faster on MyISAM


How fast can you load data into MyISAM? You can't answer this question with a single number, but are there techniques to load data fast? Absolutely! We always need to consider simple things to load data faster. The following are quick wins for loading data faster into MyISAM:


  • if doing a bulk insert, consider disabling the indexes before loading the data
                   alter table table1 disable keys;


  • The BULK_INSERT_BUFFER_SIZE variable limits the size of the cache tree in bytes per thread. If data is being added to a non-empty table, tuning the bulk_insert_buffer_size variable can make data insertion faster. Normally it shows an improvement when more than about 10k rows are inserted, but it is hard to say what the right value is, so experiment with incrementally larger buffer sizes. Setting it to 0 disables this optimization. The default value is 8MB and the maximum value is 4GB. bulk_insert_buffer_size can be set at both the system and session levels.
  • Consider setting MYISAM_SORT_BUFFER_SIZE and KEY_BUFFER_SIZE optimally as well
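Putting the tips above together, a minimal sketch of a MyISAM bulk-load session might look like this (the table name, file path, and buffer sizes here are hypothetical illustrations, not recommendations):

```sql
-- Raise the bulk insert cache for this session only (default is 8MB)
SET SESSION bulk_insert_buffer_size = 256 * 1024 * 1024;
SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;

-- Disable non-unique index maintenance during the load
ALTER TABLE table1 DISABLE KEYS;

-- Load the data (hypothetical file and format)
LOAD DATA INFILE '/tmp/table1.csv' INTO TABLE table1
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- Rebuild the non-unique indexes in one pass
ALTER TABLE table1 ENABLE KEYS;
```

Rebuilding the indexes once after the load is typically much cheaper than maintaining them row by row during the insert.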

Things to worry about while considering optimal MySQL performance


  • Optimal table structure is very important. In particular, columns must have the right data types, and tables should be built to suit the data flow inside MySQL. Typically, tables with heavy UPDATE activity should be conservative about adding columns, while tables with heavy SELECT activity can carry more columns (hopping across tables for data is expensive)
  • Optimal indexing is good, but over-indexing is a long-term problem to solve 
  • What is your locking strategy ? Shared access guarantees smooth MySQL operations; use exclusive locks only for critical transactions with top priority. The InnoDB storage engine handles most locking issues without involvement from you, allowing better concurrency in the database and reducing the amount of experimentation and tuning needed for your code.
  • Use available memory optimally for caching schema and SQL; oversized memory allocations cause excessive paging. 
  • Distribute data across multiple disks to optimize disk seek time 
  • Modern disks with higher throughput rates can do magic, and throughput is easier to optimize than seeks because you can read in parallel from multiple disks.
  • When the most often accessed data is optimally placed in memory, you get maximum results from your CPU investment. Often, main memory bandwidth becomes the bottleneck when the CPU needs more data than can fit in the CPU cache
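To illustrate the shared-versus-exclusive locking point above (table and column names are hypothetical), InnoDB lets you take exclusive locks on just the rows you are changing, while a table-level lock serializes everyone:

```sql
-- InnoDB: lock only the rows about to change
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 42 FOR UPDATE;  -- exclusive row lock
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
COMMIT;  -- lock released; other rows stayed fully accessible throughout

-- Table-level lock: blocks all other writers (and, for WRITE, all readers)
LOCK TABLES accounts WRITE;
UPDATE accounts SET balance = balance - 100 WHERE id = 42;
UNLOCK TABLES;
```

Keeping exclusive locks this narrow and short-lived is what lets InnoDB sustain the concurrency described above.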

Which my.cnf file is being used ?

To find which my.cnf file is used, you can run:

 /usr/sbin/mysqld --verbose --help | grep -A 1 "Default options"


[root@localhost yum.repos.d]# /usr/sbin/mysqld --verbose --help | grep -A 1 "Default options"
2017-08-17 12:22:50 0 [Note] /usr/sbin/mysqld (mysqld 5.6.37) starting as process 5418 ...
2017-08-17 12:22:50 5418 [Note] Plugin 'FEDERATED' is disabled.
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf 
2017-08-17 12:22:50 5418 [Note] Binlog end
2017-08-17 12:22:50 5418 [Note] Shutting down plugin 'MyISAM'
2017-08-17 12:22:50 5418 [Note] Shutting down plugin 'CSV'
[root@localhost yum.repos.d]#