Tweaking the MySQL Optimizer for performance

We can directly control the MySQL optimizer with two popular system variables: optimizer_prune_level and optimizer_search_depth. You must be very careful while changing these parameters in production; if they are not tested well in staging first, there is a high chance of undesired results and performance degradation. 
optimizer_prune_level 
The optimizer_prune_level variable influences whether the optimizer skips candidate execution plans based on heuristics about the number of rows accessed for each table. The default value is 1, and this works well in most cases, but if you strongly believe (supporting evidence is always better) that the MySQL optimizer is missing a much better execution plan, then disable pruning by setting optimizer_prune_level to 0. In many cases, even after disabling optimizer_prune_level, MySQL ends up using the plans the heuristics would have recommended anyway.
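As a minimal sketch, this is how pruning could be disabled for a single session before re-testing a plan; the table and query below are illustrative only:

-- Hypothetical example: disable heuristic pruning for this session only,
-- then compare the chosen plan against the default behaviour.
SET SESSION optimizer_prune_level = 0;
EXPLAIN SELECT * FROM orders o JOIN customers c ON c.id = o.customer_id;
SET SESSION optimizer_prune_level = 1;  -- restore the default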
optimizer_search_depth 
By setting the optimizer_search_depth system variable you are telling the optimizer how far into the "future" of each incomplete execution plan it should look when deciding whether to expand it further; in simple terms, it is the maximum depth of the search performed by the query optimizer. The default value is 62, which is also the highest configurable value. Values larger than the number of relations in a query result in better query plans, but take longer to generate an execution plan. Values smaller than the number of relations in a query return an execution plan quicker, but the resulting plan may be far from optimal. If set to 0, the system automatically picks a reasonable value.
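A hedged sketch of letting MySQL pick the search depth itself, which can cut plan-generation time for sessions that join many tables; the value 0 simply delegates the choice to the server:

-- Let the optimizer pick a reasonable search depth automatically.
SET SESSION optimizer_search_depth = 0;
SHOW VARIABLES LIKE 'optimizer_search_depth';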

Benchmarking MySQL using Sysbench


  • Install Sysbench on CentOS / Redhat 
                   yum install sysbench 

  • Install Sysbench on Debian / Ubuntu 
                  apt-get install sysbench 

Using sysbench to benchmark CPU 

This is a simple CPU benchmarking exercise using sysbench. In this test we use sysbench to measure the total time required to calculate prime numbers up to the limit given by --cpu-max-prime.


[root@localhost backup]# sysbench --test=cpu --cpu-max-prime=20000 run


Running the test with following options:
Number of threads: 1
Initializing random number generator from current time


Prime numbers limit: 20000

Initializing worker threads...

Threads started!

CPU speed:
    events per second:   259.27

General statistics:
    total time:                          10.0032s
    total number of events:              2594

Latency (ms):
         min:                                  3.45
         avg:                                  3.85
         max:                                 20.48
         95th percentile:                      5.67
         sum:                               9998.63

Threads fairness:
    events (avg/stddev):           2594.0000/0.00
    execution time (avg/stddev):   9.9986/0.00

[root@localhost backup]# 

What is really useful for us from this exercise is the total time, which is 10.0032s.

Using sysbench to benchmark I/O

Now let's see how the system behaves and performs under different kinds of I/O load. The prerequisite for this test is to prepare files for the benchmark. You must prepare more data than can fit in memory; if the data fits in memory, the operating system will cache most of it and you will not get accurate results. We will begin by creating the files:

[root@localhost backup]# sysbench --test=fileio --file-total-size=180G prepare  


The next step is to run the benchmark. There are multiple test modes for measuring different kinds of I/O performance:

  • seqwr - sequential write 
  • rndwr - random write 
  • seqrewr - sequential rewrite 
  • seqrd - sequential read 
  • rndrd - random read  
  • rndrw - combined random read / write 
In this example we are running the combined random read/write file I/O benchmark.


[root@localhost backup]# sysbench --test=fileio --file-total-size=180G --file-test-mode=rndrw --max-time=300 --max-requests=0 run 



File operations:
    reads/s:                      148.60
    writes/s:                     99.06
    fsyncs/s:                     316.59

Throughput:
    read, MiB/s:                  2.32
    written, MiB/s:               1.55

General statistics:
    total time:                          300.0026s
    total number of events:              169280

Latency (ms):
         min:                                  0.00
         avg:                                  1.77
         max:                                148.57
         95th percentile:                      4.10
         sum:                             299124.94

Threads fairness:
    events (avg/stddev):           169280.0000/0.00
    execution time (avg/stddev):   299.1249/0.00


Though we get an exhaustive report after benchmarking I/O, what I really look at is the throughput and the total time.

** Please do not forget to clean up after this benchmarking exercise:

[root@localhost backup]# sysbench --test=fileio --file-total-size=180G cleanup

Removing test files...


OLTP (online transaction processing) workload benchmark using sysbench 


In this exercise we will see how to use sysbench to emulate an OLTP workload. Historically the OLTP test was driven by the Lua scripts shipped with sysbench 0.5, which was only available from the source tree at Launchpad. You can build it straightforwardly:

bzr branch lp:sysbench sysbench-trunk
cd sysbench-trunk
./autogen.sh
./configure
make

The Lua scripts can be found in the sysbench source directory under sysbench/tests/db. To run the OLTP benchmark you have to give the path to the Lua script to run. (The runs captured below were executed on sysbench 1.0.8, which can still run the legacy oltp.lua script.)

[root@localhost db]# sysbench oltp.lua --mysql-user=root --mysql-password=MyMySQLPassword --time=60 --threads=10 prepare 

sysbench 1.0.8 (using bundled LuaJIT 2.1.0-beta2)


Creating table 'sbtest1'...
Inserting 10000 records into 'sbtest1'


[root@localhost db]# sysbench oltp.lua --mysql-user=root --mysql-password=MyMySQLPassword --time=60 --threads=10 run 

sysbench 1.0.8 (using bundled LuaJIT 2.1.0-beta2)

Running the test with following options:
Number of threads: 10
Initializing random number generator from current time


Initializing worker threads...

Threads started!

SQL statistics:
    queries performed:
        read:                            232932
        write:                           66537
        other:                           33270
        total:                           332739
    transactions:                        16632  (277.06 per sec.)
    queries:                             332739 (5542.93 per sec.)
    ignored errors:                      6      (0.10 per sec.)
    reconnects:                          0      (0.00 per sec.)

General statistics:
    total time:                          60.0268s
    total number of events:              16632

Latency (ms):
         min:                                  4.64
         avg:                                 36.08
         max:                                390.05
         95th percentile:                     56.84
         sum:                             600078.60

Threads fairness:
    events (avg/stddev):           1663.2000/10.06
    execution time (avg/stddev):   60.0079/0.00




I generally look at metrics like transactions ("277.06 per sec."), queries ("5542.93 per sec."), total time ("60.0268s"), and the latency sum ("600078.60" ms).

Setting BULK_INSERT_BUFFER_SIZE for loading data faster on MyISAM


How fast can you load data into MyISAM? You can't answer this question with a single number, but are there techniques to load data faster? Absolutely! We just need to consider a few simple things. The following are quick wins for loading data faster into MyISAM:


  • If doing a bulk insert, consider disabling the indexes before loading the data, and re-enabling them afterwards so MySQL rebuilds them in a single pass:
                   alter table table1 disable keys;
                   -- load the data here
                   alter table table1 enable keys;


  • The bulk_insert_buffer_size variable limits the size of the MyISAM bulk-insert cache tree, in bytes per thread. If data is being added to a non-empty table, tuning bulk_insert_buffer_size can make data insertion faster. Normally it shows an improvement when the data to be inserted is more than about 10,000 rows, but it is hard to say what the right value is, so trial incrementally larger buffer sizes. Setting it to 0 disables this optimization. The default value is 8MB and the maximum is 4GB. bulk_insert_buffer_size can be set at both the global and session level.
  • Consider setting MYISAM_SORT_BUFFER_SIZE and KEY_BUFFER_SIZE optimally as well (a session-level sketch combining these settings follows below).
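As a minimal sketch of the points above, a bulk-load session might look like the following; the table name, file path, and buffer sizes are illustrative assumptions and must be tuned against your own data volume:

-- Hypothetical session-level setup for a faster MyISAM bulk load.
SET SESSION bulk_insert_buffer_size = 256 * 1024 * 1024;   -- 256MB cache tree
SET SESSION myisam_sort_buffer_size = 256 * 1024 * 1024;   -- used when rebuilding keys
ALTER TABLE table1 DISABLE KEYS;
LOAD DATA INFILE '/tmp/table1.csv' INTO TABLE table1;
ALTER TABLE table1 ENABLE KEYS;   -- rebuilds non-unique indexes in one pass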

Things to worry about while considering optimal MySQL performance


  • Optimal table structure is very important. In particular, the columns must have the right data types, and each table must have the appropriate columns for how data flows inside MySQL. Typically, tables with heavy UPDATE activity should be conservative about adding columns, while tables serving heavy SELECT workloads may carry more columns (hopping across tables for data is expensive).
  • Optimal indexing is good, but over-indexing is a long-term problem to solve. 
  • What is your locking strategy? Shared access keeps MySQL operations running smoothly; use exclusive locks only when you have critical transactions that need top priority. The InnoDB storage engine handles most locking issues without involvement from you, allowing for better concurrency in the database and reducing the amount of experimentation and tuning for your code.
  • Use the available memory optimally for caching schema and SQL; oversized memory areas cause excessive paging. 
  • Distribute data across multiple disks to optimize disk seek time. 
  • Modern disks with higher throughput rates can do magic, and throughput is easier to optimize than seeks because you can read in parallel from multiple disks.
  • When the most frequently accessed data is optimally placed in memory, you get maximum results from your CPU investment; otherwise, main memory bandwidth often becomes a bottleneck when the CPU needs more data than can fit in the CPU cache.

Which my.cnf file is being used?

To find which my.cnf files are read, you can use:

 /usr/sbin/mysqld --verbose --help | grep -A 1 "Default options"


[root@localhost yum.repos.d]# /usr/sbin/mysqld --verbose --help | grep -A 1 "Default options"
2017-08-17 12:22:50 0 [Note] /usr/sbin/mysqld (mysqld 5.6.37) starting as process 5418 ...
2017-08-17 12:22:50 5418 [Note] Plugin 'FEDERATED' is disabled.
Default options are read from the following files in the given order:
/etc/my.cnf /etc/mysql/my.cnf /usr/etc/my.cnf ~/.my.cnf 
2017-08-17 12:22:50 5418 [Note] Binlog end
2017-08-17 12:22:50 5418 [Note] Shutting down plugin 'MyISAM'
2017-08-17 12:22:50 5418 [Note] Shutting down plugin 'CSV'
[root@localhost yum.repos.d]# 


The Commandments for Optimal MySQL Performance

What do we have to keep in mind for optimal MySQL performance? There are many things; I am just writing down what I seriously consider for optimal and scalable MySQL performance. As usual, comments are most welcome. Thank you!! 
  1. Go generous while buying physical memory, InnoDB works great when files are accessed from memory rather than from disk. 
  2. Lean MySQL Schema and SQL works great. Trust me, The fat is not Optimal everywhere !!
  3. Carefully choose your Datatype, The smaller ones are better. 
  4. You need Indexes to search records faster, but index overuse / abuse has a negative impact on performance; remove unused indexes.  
  5. Occasionally MySQL goes wrong in choosing optimal indexes; here, help MySQL using hints like "USE INDEX" (see the sketch after this list).
  6. Think seriously about indexing columns used in WHERE, GROUP BY, and ORDER BY clauses. 
  7. Fewer rows examined is great!!!
  8. Reduce disk reads; compress TEXT and BLOB columns. 
  9. Make incremental schema changes; sometimes small changes cause significant performance improvements / degradation. 
  10. Look out for duplicate records; they seriously cause performance bottlenecks.
  11. Archive the old records; they make sort / search especially expensive.
  12. Profile your MySQL regularly to define the trending chart of performance. Nothing goes wrong overnight; in most cases performance degradation happens over a period of time, so think proactively. That's cool :) 
  13. Fat multi-core processors with hyper-threading enabled work great for high-performance MySQL operations.  
  14. Watch out for excessive pagination queries.
  15. Go conservative about MySQL Configs (sometimes less is more).
  16. Be cautious with the MySQL configuration parameters innodb_thread_concurrency and thread_concurrency; don't use them if you don't clearly understand how they work!!
  17. A high max_connections setting excessively consumes RAM. 
  18. Use the query cache only if you have repetitive queries; using the query cache on data that changes often will give you a performance hit. 
  19. Remember to warm your MySQL database after every restart; this gets schema and data back into memory. 
  20. Fetch only the data you need in SELECT queries. 
  21. Consider persistent connections instead of opening multiple new connections. 
  22. sort_buffer_size eats memory very fast because it is allocated per connection, so please be very cautious while increasing it.
  23. Use large redo logs
  24. Measure cost of query with Response Time, Use MySQL slow query log to diagnose query performance.
  25. I use SHOW PROCESSLIST while load increases; this works well for me during diagnostics / forensics of slow queries.
  26. Use Performance Schema wisely; it is a great MySQL performance orchestration platform.    
  27. Avoid using Triggers as much as possible. 
  28. Use ARCHIVE Storage engine for audit purposes. 
  29. You need Backups but do that on a secondary Replication Instance, This actually is a quick win to boost the Performance of your MySQL Primary Master Instance. 
  30. Split reads across several instances / disks; this helps distribute load efficiently. 
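As a hedged illustration of commandment 5, an index hint can nudge the optimizer toward a specific index; the table, index, and column names below are hypothetical:

-- Hypothetical example of steering the optimizer with USE INDEX.
SELECT order_id, total
FROM orders USE INDEX (idx_customer_created)
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 10;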

Life of MySQL DBA

Introduction
What is it like being a MySQL DBA? It is interesting and challenging! A MySQL DBA is accountable equally for technology and business operations. Everyone agrees on the accountability for technology operations, but few have asked me how a DBA is accountable for business operations, so in this post I explain how MySQL DBAs can equally create business value. Being a serious MySQL DBA is never the career for someone who is looking for a 40-hours-a-week job; it is a very demanding profession! A MySQL DBA is accountable for the entire MySQL infrastructure operations, which includes, but is not limited to, responsibilities like Architecture, Engineering, MySQL Operations Monitoring / Trending, Performance Health Check / Diagnostics / Forensics, Tuning, Scalability, High Availability, Disaster Recovery and Upgrades. Recently a college graduate who wants to pursue the career of a MySQL DBA approached me asking what the life of a MySQL DBA is like, so I thought, let's write a post about the life of a MySQL DBA!!
What is the profile of a MySQL DBA?
A true MySQL DBA is a courageous problem solver. At times he/she is expected to remain calm and address the problem rather than getting sensitive/emotional (many times I was called for emergency support during personal vacations; I am ok with it because it's the nature of my job!!), because if anything happens to the MySQL infrastructure you will be blamed (even when it's not your mistake). So be cool and never get into a panic/anxiety situation (practice meditation & yoga; it will support you at a personal level during turbulence) at any level of odd experience in your MySQL operations. Expect the worst and be prepared for it! Remember Murphy's law: "whatever can go wrong, will go wrong!". Now, on the technical competency front, every MySQL DBA is expected to know MySQL Architecture, MySQL Operations Management (Monitoring & Trending), MySQL Troubleshooting, MySQL Performance, MySQL High Availability and Disaster Recovery. Thankfully the MySQL ecosystem is rich when it comes to knowledge sharing and support. Percona's investments in MySQL research and innovation have made the MySQL DBA ecosystem stronger and healthier! The following are a few books that are a must-have in every MySQL DBA library:
  1. High Performance MySQL (Peter Zaitsev, Vadim Tkachenko and Baron Schwartz )
  2. MySQL Troubleshooting (Sveta Smirnova)
  3. Expert MySQL (Charles Bell)
 What makes you The Expert MySQL DBA? Practice, practice & more practice over multiple MySQL flavours (MySQL GA, Percona Server, WebScaleSQL and MariaDB), and continuous learning is key! MySQL DBAs need to automate their routine tasks to find more time for learning, community contributions and attending MySQL events (you can learn a lot at these events from peers & experts!). Being open to learning from every MySQL flavour available gets you to be an unbiased MySQL advocate. Take full ownership of your MySQL infrastructure: Performance, Scalability, High Availability, Disaster Recovery and Security, because strategically/economically it never makes sense to hire multiple MySQL DBAs to address different areas of MySQL operations! Last but not least, never limit your scope to just MySQL, because most MySQL implementations are on Linux; a good understanding of Linux operations comes in handy many times in your career, so please do invest some time in learning Linux!!!
Where do I start my career as MySQL DBA?
Once you have basic knowledge of MySQL DBA operations, start actively networking with MySQL DBA managers and database architects. In the initial days please don't expect challenging opportunities; there is nobody on this planet who risks their mission-critical MySQL database with junior DBA(s)! The early days of your career will sometimes be limited to assisting Principal / Sr. MySQL DBAs, but this gives you access to the experience/knowledge of someone who has worked with MySQL production infrastructure for many years. These early days as a Jr. MySQL DBA are very important because they provide both guidance and practice in governing MySQL operations successfully. You must never get bored doing repetitive tasks in the initial years!!
What does MySQL consultant career path look like?
The value of a MySQL consultant grows with more years of experience and with the volume / transaction rates of the MySQL databases he/she has managed. If you have entrepreneurial interests, being an independent MySQL contractor (I have been an independent MySQL contractor for many years) is very promising, exciting and rewarding!
How MySQL DBAs can equally create  business value
A fast, scalable & highly available MySQL infrastructure is very important for successfully running a planet-scale Internet / mobility application or SaaS. Think about the "cost-to-MySQL-outage" for an online commerce or advertisement network business. A seasoned MySQL DBA should proactively plan optimal, scalable, highly available and secured MySQL operations to protect customer experience & revenue!
So have you decided to be a MySQL DBA?     
Good luck!! Start your learning with a positive mind and spend all your productive hours reading the research & experience of the many professional MySQL consultants out there (a lot is available in the public domain already!)  

Storing numbers wisely in MySQL

Generally we have two kinds of numbers: whole numbers and real numbers (numbers with a fractional part). For whole numbers the natural choices are TINYINT, SMALLINT, MEDIUMINT, INT and BIGINT. These datatypes require 8, 16, 24, 32 and 64 bits of storage space respectively.
MySQL allows you to specify a "width" for integer types, like INT(25). This doesn't restrict the legal range of values; it only specifies the number of characters MySQL's interactive tools will reserve for display purposes. INT(1) is the same as INT(20) for storage and computational purposes. 
Real number types can store fractional values, as well as whole numbers so large that they don't fit in BIGINT. FLOAT and DOUBLE support approximate calculations with standard floating-point math.
The DECIMAL type is good for storing exact fractional numbers. In MySQL 5.0 and newer, the DECIMAL type supports exact math; in the MySQL 4.1 days, the lower-precision floating-point implementation used for DECIMAL could give weird results.
Floating-point math is much faster because the CPU performs the computations natively. Floating-point types also typically use less space than DECIMAL to store the same range of values. MySQL uses DOUBLE for its internal calculations on floating-point types due to its greater precision compared to FLOAT.

Considering the expensive computational cost, you should choose DECIMAL only when you need exact results for fractional numbers, for example, financial data. For high-volume data that is not so critical, it can make more sense to use BIGINT and store the values as an integer multiple of the smallest fraction of the currency! 
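As a minimal sketch of the two approaches, assuming hypothetical table and column names; DECIMAL keeps exact fractional math at extra CPU cost, while BIGINT stores the amount as a count of the smallest currency unit (e.g. cents):

-- Illustrative only: two ways to store monetary amounts.
CREATE TABLE payments_decimal (
    id     BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    amount DECIMAL(18,2) NOT NULL          -- exact math, higher CPU cost
);
CREATE TABLE payments_bigint (
    id           BIGINT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    amount_cents BIGINT NOT NULL           -- exact, cheap integer math
);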

Profiling MySQL by "Response Time"

Introduction
Being an independent MySQL consultant is fun and equally challenging; I very regularly get the opportunity to do low-level MySQL performance health checks, diagnostics and forensics. My customers are equally curious to know how optimal their MySQL infrastructure is. Many times I have heard people define "performance" as executing queries optimally using minimum system resources! I contradict this thought by asking in return: if you have bought expensive, high-processing-power infrastructure, why not use it to the maximum? At MinervaSQL we define performance as the total time consumed to complete a task, so for us Performance is Response Time. In real life this is a very scalable method too, because the MySQL server is busy executing queries, and how much time is consumed by each query leads us to Response Time; our goal remains reducing the Response Time, and here we measure where the time is consumed. This also brings in the most powerful principle of performance optimization: "Things which cannot be measured can never be maximised!". Measure where response time is spent and never aggregate here; target each query independently. Measure every slow query and monitor its impact on the server; calculate the percentage of time spent waiting and executing for each query. 

Profiling MySQL for Response Time   
Accuracy of profiling is key in calculating Response Time. During profiling we measure where MySQL is spending time; in effect we are measuring how much time is spent executing and how much time is spent waiting. When profiling MySQL, there are two interesting metrics to notice:
  1. Queries are continuously executing using the available infrastructure; measuring this is execution time profiling.
  2. Queries are waiting for tasks to get completed or for resources; measuring this is called wait time profiling.
Both of the above-mentioned metrics are very important for measuring the time spent by every query, and the results will guide us to conclude where we should be spending time during tuning. 
I always list down the queries that can have the biggest impact on MySQL performance: if a query consumes only 5% of the total execution time, then tuning that specific query can gain at most a 5% improvement (Amdahl's law). The "lost time" calculation during response time analysis is very important; it is the total wall-clock time that went unaccounted for during execution. Let's say the overall CPU time for executing a query is 60 seconds and the profiling tool's subtasks add up to 50 seconds; the "lost time" here is 10 seconds! When profiling MySQL, two of my favourite tools are SHOW PROFILE (contributed by Jeremy Cole) and pt-query-digest from Percona Toolkit, developed by Percona.
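As a hedged sketch of collecting response time data, the slow query log can be configured to capture every query; the threshold and log path below are illustrative assumptions:

-- Capture response times via the slow query log (values illustrative).
SET GLOBAL slow_query_log = ON;
SET GLOBAL long_query_time = 0;          -- log every query for full profiling
SET GLOBAL slow_query_log_file = '/var/lib/mysql/slow.log';

The resulting log can then be aggregated by response time with, for example, pt-query-digest /var/lib/mysql/slow.log.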

Conclusion 
I recommend my customers collect the response time of each query and measure every single activity that consumes time, including application-level response time. It is not MySQL that causes the performance bottleneck every time, and with this data it's very straightforward to diagnose whether MySQL is the problem. 

MySQL Performance Optimization Checklist

I have always maintained a checklist for MySQL performance optimization. It actually helps me draw the first line of action items and develop scope at more advanced stages.  

Optimizing at the Database Level

The most important factor in making a database application fast is its basic design:
  • Are the tables structured properly? In particular, do the columns have the right data types, and does each table have the appropriate columns for the type of work? For example, applications that perform frequent updates often have many tables with few columns, while applications that analyze large amounts of data often have few tables with many columns.
  • Are the right indexes in place to make queries efficient?
  • Are you using the appropriate storage engine for each table, and taking advantage of the strengths and features of each storage engine you use? In particular, the choice of a transactional storage engine such as InnoDB or a nontransactional one such as MyISAM can be very important for performance and scalability.
  • Does each table use an appropriate row format? This choice also depends on the storage engine used for the table. In particular, compressed tables use less disk space and so require less disk I/O to read and write the data. Compression is available for all kinds of workloads with InnoDB tables, and for read-only MyISAM tables.
  • Does the application use an appropriate locking strategy? For example, by allowing shared access when possible so that database operations can run concurrently, and requesting exclusive access when appropriate so that critical operations get top priority. Again, the choice of storage engine is significant. The InnoDB storage engine handles most locking issues without involvement from you, allowing for better concurrency in the database and reducing the amount of experimentation and tuning for your code.
  • Are all memory areas used for caching sized correctly? That is, large enough to hold frequently accessed data, but not so large that they overload physical memory and cause paging. The main memory areas to configure are the InnoDB buffer pool, the MyISAM key cache, and the MySQL query cache (a sizing sketch follows this list).
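As a hedged sketch of the cache sizing point above, the main memory areas can be set as follows; the sizes are illustrative and must be scaled to the host's physical memory (innodb_buffer_pool_size is dynamically resizable only in MySQL 5.7 and later; on older versions set it in my.cnf and restart):

-- Illustrative sizes only; tune against available RAM and workload.
SET GLOBAL innodb_buffer_pool_size = 8 * 1024 * 1024 * 1024;  -- 8GB buffer pool
SET GLOBAL key_buffer_size = 256 * 1024 * 1024;               -- MyISAM key cache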

Optimizing at the Hardware Level

Any database application eventually hits hardware limits as the database becomes more and more busy. A DBA must evaluate whether it is possible to tune the application or reconfigure the server to avoid these bottlenecks, or whether more hardware resources are required. System bottlenecks typically arise from these sources:
  • Disk seeks. It takes time for the disk to find a piece of data. With modern disks, the mean time for this is usually lower than 10ms, so we can in theory do about 100 seeks a second. This time improves slowly with new disks and is very hard to optimize for a single table. The way to optimize seek time is to distribute the data onto more than one disk.
  • Disk reading and writing. When the disk is at the correct position, we need to read or write the data. With modern disks, one disk delivers at least 10–20MB/s throughput. This is easier to optimize than seeks because you can read in parallel from multiple disks.
  • CPU cycles. When the data is in main memory, we must process it to get our result. Having large tables compared to the amount of memory is the most common limiting factor. But with small tables, speed is usually not the problem.
  • Memory bandwidth. When the CPU needs more data than can fit in the CPU cache, main memory bandwidth becomes a bottleneck. This is an uncommon bottleneck for most systems, but one to be aware of.

How is the DBA job evolving for the future?

Introduction 

Having been a database technology specialist for almost 16 years, I get to talk at several community conferences, colleges and startups. Recently I was talking about the career prospects of being a DBA at a college in Bangalore, India; I was talking about what it is like being a DBA. As usual, by the end of the talk I encouraged the audience to ask questions, and one very anxious attendee asked me a very interesting question: "What is it like being a DBA in the future, with rapid advancement in automation and DevOps?". The guy who asked me this question is an aspiring DBA and he was so worried about the future... so I thought, let me write about this talk in my personal blog.
Where are we headed to in Technology Operations space ?
The days of scaling technology operations with eyeballs are over!! Yes, automation and DevOps are definitely a big threat for commodity DBAs. Commodity DBAs are the ones who are engaged with typical, very planned / documented DBA activities like database software installation, configuration, upgrades, user account / password management, adding / deleting space and planned routine backup / recovery operations. To be brutally honest, there are no jobs for them in the future!!! This doesn't mean there are no DBA jobs in the future; specialist DBAs will be more valued in the days to come!! Who are these specialist DBAs, and how are they different from the commodity DBAs of olden days?   
Next is "The Specialist DBA!!!"In the past many folks learn basic Database software installation and configuration claim to be a DBA, These guys are so limited with installation and configuration with some limited involvement in user account creation, password management and space allocation for schema / users. Now these are can be automated with DevOps tools or if you are on Amazon RDS most of the limited scope DBA activities are so well taken care of but can you be a serious corporation without hiring professional DBA ? Hmm.... If yes, You must read this post and I am sure in the end you will contact your recruiter to hire a DBA full-time / contract immediately and if no, You will enjoy reading this post and will be so proud about your decision of having a full-time / consulting DBA in your company.
You can only automate the well-planned / scheduled activities with DevOps or RDS-like platforms, but what about performance optimization of your database system? Writing high-performance SQL, optimal indexing and managing locks / wait events, what about these, can you automate them? Surely not. Architecting and developing high-performance database applications is a very specialized expertise; it involves a deep understanding of database architecture, internals, optimizer knowledge, high-performance SQL patterns, efficient indexing methods and managing concurrency / locks / wait events, and you definitely need a seasoned database technologies professional to deliver optimal and scalable database platforms!!
The future of the specialist DBA is bright. Such DBAs are not limited to your operational functions; they are involved in your strategic decision making, like technology architecture and engineering. These specialist DBAs build / own optimal SQL, efficient indexing and concurrency / isolation governance across the database tier of your business. I have come across the database architecture and engineering of several high-profile Internet application platforms built without professional dedicated DBAs; they are often over- / under-indexed, with sub-optimal SQL and unaddressed concurrency / isolation levels. This results in serious performance and scalability problems across the entire platform, eventually causing negative business impact and affecting the experience of end customers (nobody wants to wait 10 seconds for every page load!!!).
What is recommended to become a successful Specialist DBA ?
  • Have deep understanding on Architecture and Internals of underlying Database Platform
  • Participate actively in every decision making because business is data and you are delivering infrastructure for data movement here, You are solely accountable for Optimal, Scalable, Highly Available and Secured Database Infrastructure 
  • Be conscious about cost of SQL and Indexes for the application, Over indexing and complex SQL in production is never cool !!!
  • Automate all non core activities of your day like installation, configuration, user management etc. !!!       
Conclusion  
Computers and robots can never replace human expertise or specialized skills; they all exist to simplify our activities and increase efficiency. Believe in automation and be adaptable, because it makes you more productive professionally. Think about how you can contribute to business success and customer satisfaction; a high-performance database system significantly influences end-customer satisfaction, and eventually customer success is the only goal for every business on this planet. Good luck!!  

MySQL 8 new features

MySQL 8 incorporates numerous features that excite all enthusiastic MySQL Database Architects / DBAs / Database Engineers / DBA Managers. I am attracted to a few of them which will make my day-to-day MySQL 8 DBA life even more interesting:
MySQL Data Dictionary
In the past the MySQL data dictionary was stored in metadata files and nontransactional tables, but from MySQL 8 we have a transactional data dictionary to store the information about database objects. The major difference between MySQL system tables and data dictionary tables is that system tables contain auxiliary data such as time zone and help information, whereas data dictionary tables contain data required to execute SQL queries. The dictionary data is now protected by the same commit, rollback, and crash-recovery capabilities that protect user data stored in InnoDB tables. This makes MySQL data dictionary based object management more interesting.
P.S – InnoDB storage engine continues to use its own storage engine-specific data dictionary.
MySQL 8 supports Roles
MySQL 8 supports roles, which are collections of privileges that can be granted to and revoked from user accounts. A much-awaited feature IMHO, especially when I am working for customers who are serious about building secured privileges / policies across MySQL users. Eureka, I have an answer for all those folks now 🙂 !!!
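A minimal sketch of the role workflow; the role, user, and schema names below are illustrative only:

-- Hypothetical example: grant read-only access through a role.
CREATE ROLE app_read;
GRANT SELECT ON appdb.* TO app_read;
CREATE USER 'report_user'@'%' IDENTIFIED BY 'S3cret!';
GRANT app_read TO 'report_user'@'%';
SET DEFAULT ROLE app_read TO 'report_user'@'%';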
InnoDB AUTO_INCREMENT
In MySQL 5.7 and earlier, the auto-increment counter is stored only in main memory, not on disk. To initialize an auto-increment counter after a server restart, InnoDB would execute the equivalent of the following statement on the first insert into a table containing an AUTO_INCREMENT column:
SELECT MAX(ai_col) FROM table_name FOR UPDATE;
From MySQL 8 this behaviour has changed: the current maximum auto-increment counter value is written to the redo log each time it changes and is saved to an engine-private system table on each checkpoint. These changes make the current maximum auto-increment counter value persistent across server restarts. On a server restart following a normal shutdown, InnoDB initializes the in-memory auto-increment counter using the current maximum auto-increment value stored in the data dictionary system table. On a server restart during crash recovery, InnoDB initializes the in-memory auto-increment counter using the current maximum auto-increment value stored in the data dictionary system table and scans the redo log for auto-increment counter values written since the last checkpoint. This feature will increase the performance of MySQL operations on schemas with many objects built on AUTO_INCREMENT counters.
Deadlock detection disabling
We can now dynamically set innodb_deadlock_detect in MySQL 8, which is used to disable deadlock detection. On high-concurrency systems, deadlock detection can cause a slowdown when numerous threads wait for the same lock. After carefully evaluating the transaction behaviour of your application, you can disable deadlock detection and rely on the innodb_lock_wait_timeout setting for transaction rollback when a deadlock occurs. I will personally be very careful using this feature; good, but very powerful too!
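A hedged sketch of the switch; the timeout value is an illustrative assumption and must match your application's transaction profile:

-- Disable deadlock detection and fall back to lock wait timeouts.
SET GLOBAL innodb_deadlock_detect = OFF;
SET GLOBAL innodb_lock_wait_timeout = 10;   -- seconds before rollback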
INNODB_CACHED_INDEXES
The INFORMATION_SCHEMA table INNODB_CACHED_INDEXES reports the number of index pages cached in the InnoDB buffer pool for each index. This really helps to measure index efficiency more proactively, and I am sure it will help me a lot going forward!
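A sketch of resolving the raw IDs to table and index names, along the lines of the joins suggested in the MySQL documentation; verify the column names against your server version:

-- Index pages cached in the buffer pool, per index.
SELECT t.NAME AS table_name, i.NAME AS index_name, c.N_CACHED_PAGES
FROM INFORMATION_SCHEMA.INNODB_CACHED_INDEXES c
JOIN INFORMATION_SCHEMA.INNODB_INDEXES i ON c.INDEX_ID = i.INDEX_ID
JOIN INFORMATION_SCHEMA.INNODB_TABLES t ON i.TABLE_ID = t.TABLE_ID
ORDER BY c.N_CACHED_PAGES DESC;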
Invisible indexes
MySQL 8 now supports invisible indexes. These indexes are not used by the optimizer at all but are maintained normally like any other index. This enables Database Architects / DBAs / Database Engineers / DBA Managers to benchmark MySQL performance and ask: what if this index were removed? I personally love this feature and will be using it most often in real life as a MySQL 8 DBA to decide the fate of indexes. Great feature to have!
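As a minimal sketch, an index can be toggled invisible, observed under the real workload, and then dropped or restored; the table and index names are hypothetical:

-- Hide the index from the optimizer without dropping it.
ALTER TABLE orders ALTER INDEX idx_customer INVISIBLE;
-- after observing the workload, restore it (or drop it for good):
ALTER TABLE orders ALTER INDEX idx_customer VISIBLE;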
Much better descending indexes
Yes, we have descending indexes in MySQL 5.7, but they are scanned in reverse order, which causes a performance bottleneck. MySQL 8 solves this problem: descending indexes can now be scanned in forward order, which is much more performant and optimal. Descending indexes also make it possible for the optimizer to use multiple-column indexes when the most efficient scan order mixes ascending order for some columns and descending order for others. I was waiting for this feature for several years; it makes sorting / searching with both ASC and DESC based queries equally efficient. A must-use feature in MySQL 8 to make your application more index efficient.
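A hedged sketch of a mixed-order index that MySQL 8 can honour natively; the table and column names are illustrative:

-- In MySQL 8 the DESC key part is stored descending, not ignored.
CREATE INDEX idx_events_user_time ON events (user_id ASC, created_at DESC);
-- serves this scan order without a filesort:
SELECT * FROM events WHERE user_id = 42 ORDER BY created_at DESC;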