December 17 Webinar: Use Your MySQL Knowledge to Become a MongoDB Guru
Use your MySQL expertise to analyze the strengths and weaknesses of MongoDB.
SPEAKER: Tim Callaghan, VP of Engineering at Tokutek
DATE: Tuesday, December 17th
TIME: 1pm ET
MongoDB is a popular NoSQL DBMS that shares the ease of use and quick setup that made MySQL famous. But is MongoDB really up to the job? Is it right for your applications? If you understand MySQL well, you already know a great deal about how database systems work.
Join Tim Callaghan, VP of Engineering at Tokutek, as he recaps the session he presented with Continuent CEO Robert Hodges at Percona Live London 2013. Learn how to lean on your knowledge of topics like schema design, query optimization, indexing, sharding, and high availability to analyze the strengths and weaknesses of MongoDB. System design is all about asking the right questions. Now is your second chance to learn how to use your MySQL expertise to unlock your inner MongoDB guru!
The webinar will last approximately 45 minutes and include a live Q&A session.
Q&A: Geographical disaster recovery with Percona Replication Manager
My December 4 webinar, “Geographical disaster recovery with Percona Replication Manager (PRM),” gave rise to a few questions. The recording of the webinar and the slides are available here, and I’ve answered the questions I didn’t have time to address below.

Q1: Hi, I was wondering if corosync will work in a cloud environment. As far as I know it is hard to implement because of no support for unicast or multicast.

A1: Corosync has supported the udpu transport since somewhere in the 1.3.0 branch. udpu stands for UDP unicast, and it works in AWS, for instance. Most recent distributions ship 1.4.x, so it is easy to find.

Q2: For token, wouldn’t it make sense to have a value higher than 3000 to account for any packet loss and the default retry of 3 seconds for TCP communication?

A2: Corosync uses UDP, not TCP, so the argument is somewhat irrelevant.

Q3: Is PRM supported with a Percona support contract?

A3: Yes, PRM is now supported with a Percona Gold or Platinum Support contract. It is not available with Silver support.

Q4: Is PRM working with GTIDs in 5.6?

A4: There’s a version in the testing phase, adapted by Frederic Descamps, that works with 5.6/GTID. As soon as it is tested properly, I’ll release it. So far, it is very clean in terms of logic.

Q5: Does Percona Replication Manager do anything different with replication over the built-in MySQL replication to combat its single-threaded nature?

A5: No.

Q6: We agree that fencing always has to be configured (even with no shared resources, in case of a mysql stop failure for example). How do MySQL and Percona’s PRM resource agent behave when fencing, and what are your recommendations concerning fencing? There was no fencing in the demo, and there is no fencing configured in the pacemaker crm snippet examples provided in Percona’s github repo.

A6: Fencing is independent of PRM and is well covered elsewhere on the web. I’ll look into adding an example configuration with stonith devices. On real hardware, the most common stonith devices are IPMI or iLO.
These technologies come with nearly all the server brands.

Q7: Is any other MySQL HA setup supported by Percona’s mysql_prm pacemaker resource agent besides MySQL master-slave replication, like multi-master setups? If yes, will you release some crm configuration snippet examples for other MySQL HA setups?

A7: No, only a single master is supported. The main argument here is that multiple masters don’t scale writes and are a big source of conflicts.

Q8: From a write-performance perspective, and also considering HA and cost, why use Percona Replication Manager over XtraDB Cluster (omitting ease of Geo-DR)?

A8: Percona XtraDB Cluster (PXC) is more advanced and capable, but some workloads, like large transactions, do not work well with PXC. Regular replication is well known, and many customers are not willing to try a newer technology like PXC and prefer to rely on regular replication. Also, PRM can be mixed with PXC without problems; an example of such a configuration will be published soon. In terms of cost, both are free; support is available from Percona (Gold and Platinum), but PXC support carries a premium.
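Returning to A1: on a corosync 1.4.x node, switching to unicast is a matter of setting the udpu transport and listing the cluster members explicitly in the totem interface section. A minimal corosync.conf sketch (the network and member addresses are placeholders for your own environment):

```
totem {
  version: 2
  secauth: off
  transport: udpu            # UDP unicast; no multicast needed, so it works in AWS
  interface {
    ringnumber: 0
    bindnetaddr: 10.0.0.0    # network of the local interface (placeholder)
    mcastport: 5405
    member { memberaddr: 10.0.0.11 }   # every cluster node listed explicitly
    member { memberaddr: 10.0.0.12 }
    member { memberaddr: 10.0.0.13 }
  }
}
```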
Range access: now in an IN predicate near you.
Several users have reported that certain queries with IN predicates can't use index scans even though all the columns in the query are indexed. What's worse, if you reformulate your query without IN, the indexes are used. Let's look at an example. Suppose we have a table with two indexed columns:

    CREATE TABLE t1 (
      col1 INTEGER,
      col2 INTEGER,
      ...
      KEY key1 (col1, col2)
    );

Let's take a look at some queries that could take advantage of the key1 index to read rows without accessing the table:

    SELECT col1, col2 FROM t1 WHERE col1 = 100;
    SELECT col1, col2 FROM t1 WHERE col1 > 100 AND col1 < 200;
    SELECT col1, col2 FROM t1
      WHERE col1 > 100 AND col1 < 200 OR col1 > 300 AND col1 < 400;
    SELECT col1, col2 FROM t1 WHERE col1 = 100 AND col2 > 100 AND col2 < 200;

These queries will use what MySQL calls Index Range Scans (although the first query could also use a Ref Scan). This access method fetches rows from the index tree given a start and end value. It's also possible to read multiple intervals, each with a start and end value, as we saw in the third query above.

A special case of intervals is when the endpoints are the same value. Range scans can be used for conditions such as col1 = 100 because it's equivalent to the interval 100 <= col1 <= 100. This way we can use range scans for a broader class of queries. Armed with multiple-interval scans, a.k.a. multi-range reads, or MRR for short, we can use range access for queries such as:

    SELECT col1, col2 FROM t1
    WHERE col1 = 100 OR col1 = 200 OR col1 = 300;

We can use all columns in the index, of course:

    SELECT col1, col2 FROM t1
    WHERE col1 = 100 AND col2 = 100
       OR col1 = 200 AND col2 = 200
       OR col1 = 300 AND col2 = 300;

At some point, this syntax becomes unwieldy. And this isn't just aesthetics: for really big queries, we get a combinatorial blowup which can cause parsing to take a long time.
This is the reason why SQL has IN predicates to say the same thing:

    SELECT col1, col2 FROM t1
    WHERE col1 = 100 OR col1 = 200 OR col1 = 300;

means the same as

    SELECT col1, col2 FROM t1
    WHERE col1 IN (100, 200, 300);

And for rows it gets even more convenient:

    SELECT col1, col2 FROM t1
    WHERE col1 = 100 AND col2 = 100
       OR col1 = 200 AND col2 = 200
       OR col1 = 300 AND col2 = 300;

can be written as

    SELECT col1, col2 FROM t1
    WHERE (col1, col2) IN ((100, 100), (200, 200), (300, 300));

The problem that users saw is that suddenly MySQL doesn't use MRR any more and resorts to scanning the entire index. This is because the range optimizer ignored IN conditions over rows. The range optimizer is the sub-optimizer that analyzes conditions and translates them into a multi-range structure that can be handed to the storage engine to fetch the rows from the index. It handled IN predicates as long as they were over scalars or just a single row, but completely ignored lists of rows.

As of 5.7.3, this hole in the net is stitched up. The range optimizer gladly opens the door for queries with IN predicates as long as:

- The predicate is IN, not NOT IN.
- The row on the predicate's left-hand side contains only indexed column references, all from the same index.
- The rows on the right-hand side contain only constants, or values coming from a previously read table in a nested-loop join.

Note that 'constants' is a pretty broad category. It consists of pre-evaluated expressions, even some subqueries, SQL variables and similar beings.
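The equivalences above are easy to sanity-check outside MySQL. Here is a small sketch using Python's sqlite3 module (SQLite, not MySQL, so it says nothing about which access method MySQL picks; it only confirms that the two query forms return the same rows):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t1 (col1 INTEGER, col2 INTEGER)")
cur.executemany("INSERT INTO t1 VALUES (?, ?)",
                [(v, v) for v in (50, 100, 200, 300, 400)])

# The unwieldy OR form ...
ors = cur.execute(
    "SELECT col1, col2 FROM t1 "
    "WHERE col1 = 100 OR col1 = 200 OR col1 = 300 "
    "ORDER BY col1").fetchall()

# ... and the equivalent IN form return exactly the same rows.
ins = cur.execute(
    "SELECT col1, col2 FROM t1 "
    "WHERE col1 IN (100, 200, 300) "
    "ORDER BY col1").fetchall()

print(ors == ins)  # True
print(ins)         # [(100, 100), (200, 200), (300, 300)]
```

The row-value form `(col1, col2) IN (...)` is the one whose range-access support arrived in MySQL 5.7.3; on a real server you can confirm which access method is chosen with EXPLAIN.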
Online Schema Upgrade in MySQL Galera Cluster using TOI Method
December 10, 2013
As a follow-up to the webinar on Zero Downtime Schema Changes in Galera Cluster, we’ll now walk you through the detailed steps of how to update your schema. The two methods (TOI and RSU) both have their pros and cons, and given parameters like table size, indexes, key_buffer_size, disk speed, etc., it is possible to estimate the time the schema upgrade will take. Also, please note that a schema change is non-transactional, so it is not possible to roll back the DDL if it fails midway. Therefore, it is always recommended to test schema changes and ensure you have recoverable backups before performing them on your production clusters.
This post examines the way DDL changes are propagated in Galera, and outlines the steps to upgrade the schema using the TOI method with Percona’s pt-online-schema-change.
Schema Changes in MySQL
MySQL handles table alteration by exporting the table data into a temporary table, changing the table structure, and then importing the data back into the table before finally rebuilding all the indexes. Thus, blocking writes is the only way to maintain data integrity: all writes are stalled from the moment the ALTER command is issued until it completes.
Percona has built a tool to overcome this problem, called pt-online-schema-change and bundled with the Percona Toolkit. It allows altering tables without locking them, and works in the following manner:
Creates a shadow copy of the target table
Installs triggers on the source table to forward updates to the target table
Copies the source table data to the target table in small chunks
Renames the target table to replace the source table
However, the tool will not work if any triggers are already defined on the table. For a more detailed explanation of the tool, please refer to the Percona Toolkit documentation page.
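The four steps above can be sketched in miniature. The toy below uses Python's sqlite3 module rather than MySQL, and it is in no way pt-online-schema-change itself, but it shows why the triggers matter: a write that lands mid-copy still reaches the shadow table (a real run would also need an UPDATE trigger, omitted here for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()

# A populated "production" table we want to alter without blocking it.
c.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
c.executemany("INSERT INTO t VALUES (?, ?)", [(i, "row%d" % i) for i in range(1, 101)])

# Step 1: shadow copy with the new structure (here: one extra column).
c.execute("CREATE TABLE t_new (id INTEGER PRIMARY KEY, val TEXT, note TEXT DEFAULT '')")

# Step 2: triggers forward ongoing writes from the source to the shadow table.
c.execute("""CREATE TRIGGER t_fwd_ins AFTER INSERT ON t BEGIN
               INSERT OR REPLACE INTO t_new (id, val) VALUES (NEW.id, NEW.val);
             END""")
c.execute("""CREATE TRIGGER t_fwd_del AFTER DELETE ON t BEGIN
               DELETE FROM t_new WHERE id = OLD.id;
             END""")

# Step 3: copy existing rows in small chunks (4 chunks of 25 here).
for lo in range(1, 101, 25):
    c.execute("INSERT OR REPLACE INTO t_new (id, val) "
              "SELECT id, val FROM t WHERE id BETWEEN ? AND ?", (lo, lo + 24))
    # A write arriving mid-copy is still captured, via the trigger.
    if lo == 51:
        c.execute("INSERT INTO t VALUES (101, 'late')")

# Step 4: swap the tables (pt-online-schema-change uses an atomic RENAME TABLE).
c.execute("DROP TRIGGER t_fwd_ins")
c.execute("DROP TRIGGER t_fwd_del")
c.execute("ALTER TABLE t RENAME TO t_old")
c.execute("ALTER TABLE t_new RENAME TO t")

print(c.execute("SELECT COUNT(*) FROM t").fetchone()[0])  # 101, nothing lost
```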
Schema Changes in Galera
DDL statements (ALTER, CREATE, RENAME, TRUNCATE, DROP) are replicated at the statement level. Galera has two built-in methods for handling these statements:
Total Order Isolation (TOI)
Rolling Schema Upgrade (RSU)
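Which of the two methods Galera uses is controlled by the wsrep_OSU_method variable (TOI is the default). A quick sketch; the ALTER statement and table name are placeholders:

```sql
-- Check the current method:
SHOW VARIABLES LIKE 'wsrep_OSU_method';

-- Switch to RSU for this session: the DDL then applies locally only,
-- and must be repeated on each node in turn.
SET SESSION wsrep_OSU_method = 'RSU';
ALTER TABLE t ADD COLUMN c INT;
SET SESSION wsrep_OSU_method = 'TOI';  -- back to total order isolation
```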
WITH RECURSIVE and MySQL
If you have been using certain DBMSs, or reading recent versions of the SQL standard, you are probably aware of the so-called "WITH clause" of SQL. Some call it Subquery Factoring. Others call it Common Table Expression. In its simplest form, this feature is a kind of "boosted derived table". Assume that a table T1 has three columns:

    CREATE TABLE T1 (
      YEAR INT,   # 2000, 2001, 2002 ...
      MONTH INT,  # January, February, ...
      SALES INT   # how much we sold on that month of that year
    );

Now I want to know the sales trend (increase/decrease), year after year:

    SELECT D1.YEAR,
           (CASE WHEN D1.S > D2.S THEN 'INCREASE' ELSE 'DECREASE' END) AS TREND
    FROM (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR) AS D1,
         (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR) AS D2
    WHERE D1.YEAR = D2.YEAR - 1;

Both derived tables are based on the same subquery text, but usually a DBMS is not smart enough to recognize it. Thus, it will evaluate "SELECT YEAR, SUM(SALES)... GROUP BY YEAR" twice! A first time to fill D1, a second time to fill D2. This limitation is sometimes stated as "it's not possible to refer to a derived table twice in the same query". Such double evaluation can lead to a serious performance problem. Using WITH, this limitation does not exist, and the following statement evaluates the subquery only once:

    WITH D AS (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR)
    SELECT D1.YEAR,
           (CASE WHEN D1.S > D2.S THEN 'INCREASE' ELSE 'DECREASE' END) AS TREND
    FROM D AS D1, D AS D2
    WHERE D1.YEAR = D2.YEAR - 1;

This already demonstrates one benefit of WITH. In MySQL, WITH is not yet supported. But it can be emulated with a view:

    CREATE VIEW D AS (SELECT YEAR, SUM(SALES) AS S FROM T1 GROUP BY YEAR);
    SELECT D1.YEAR,
           (CASE WHEN D1.S > D2.S THEN 'INCREASE' ELSE 'DECREASE' END) AS TREND
    FROM D AS D1, D AS D2
    WHERE D1.YEAR = D2.YEAR - 1;
    DROP VIEW D;

Instead of a view, I could as well create D as a normal table.
But not as a temporary table, because in MySQL a temporary table cannot be referred to twice in the same query, as mentioned in the manual.

After this short introduction, showing the simplest form of WITH, I would like to turn to the more complex form of WITH: the RECURSIVE form. According to the SQL standard, to use the recursive form, you should write WITH RECURSIVE. However, looking at some other DBMSs, they seem not to require the RECURSIVE word. WITH RECURSIVE is a powerful construct. For example, it can do the same job as Oracle's CONNECT BY clause (you can check out some example conversions between both constructs).

Let's walk through an example to understand what WITH RECURSIVE does. Assume you have a table of employees (this is a very classical example of WITH RECURSIVE):

    CREATE TABLE EMPLOYEES (
      ID INT PRIMARY KEY,
      NAME VARCHAR(100),
      MANAGER_ID INT,
      INDEX (MANAGER_ID),
      FOREIGN KEY (MANAGER_ID) REFERENCES EMPLOYEES(ID)
    );
    INSERT INTO EMPLOYEES VALUES
    (333, "Yasmina", NULL),
    (198, "John", 333),
    (29, "Pedro", 198),
    (4610, "Sarah", 29),
    (72, "Pierre", 29),
    (692, "Tarek", 333);

In other words, Yasmina is CEO; John and Tarek report to her. Pedro reports to John; Sarah and Pierre report to Pedro. In a big company, there would be thousands of rows in this table.

Now, let's say that you would like to know, for each employee, how many people are, directly and indirectly, reporting to him/her. Here is how I would do it. First, I would make a list of people who are not managers: with a subquery I get the list of all managers, and using NOT IN (subquery) I get the list of all non-managers:

    SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
    FROM EMPLOYEES
    WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL);

Then I would insert the results into a new table named EMPLOYEES_EXTENDED; EXTENDED stands for "extended with more information", the new information being the fourth column, REPORTS: a count of the people reporting directly or indirectly to the employee.
Because we have listed people who are not managers, they have a value of 0 in the REPORTS column. Then we can produce the rows for "first level" managers (the direct managers of non-managers):

    SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1 + E.REPORTS) AS REPORTS
    FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID = E.MANAGER_ID
    GROUP BY M.ID, M.NAME, M.MANAGER_ID;

Explanation: for a row of M (that is, for an employee), the JOIN will produce zero or more rows, one per non-manager directly reporting to the employee. Each such non-manager contributes to the value of REPORTS for his manager through two numbers: 1 (the non-manager himself), and the number of direct/indirect reports of the non-manager (i.e. the value of REPORTS for the non-manager).

Then I would empty EMPLOYEES_EXTENDED and fill it with the rows produced just above, which describe the first level managers. Then the same query should be run again, and it would produce information about the "second level" managers. And so on. Finally, at some point Yasmina will be the only row of EMPLOYEES_EXTENDED, and when we run the above SELECT again, the JOIN will produce no rows, because E.MANAGER_ID will be NULL (she's the CEO). We are done.

It's time for a recap: EMPLOYEES_EXTENDED has been a kind of "temporary buffer", which has successively held non-managers, first level managers, second level managers, etc. We have used recursion. The answer to the original problem is the union of all the successive contents of EMPLOYEES_EXTENDED. Non-managers have been the start of the recursion, which is usually called "the anchor member" or "the seed". The SELECT query which moves from one step of the recursion to the next is the "recursive member".
The complete statement looks like this:

    WITH RECURSIVE
    # The temporary buffer, also used as UNION result:
    EMPLOYEES_EXTENDED
    AS
    (
      # The seed:
      SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
      FROM EMPLOYEES
      WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL)
      UNION ALL
      # The recursive member:
      SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1 + E.REPORTS) AS REPORTS
      FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID = E.MANAGER_ID
      GROUP BY M.ID, M.NAME, M.MANAGER_ID
    )
    # what we want to do with the complete result (the UNION):
    SELECT * FROM EMPLOYEES_EXTENDED;

MySQL does not yet support WITH RECURSIVE, but it is possible to code a generic stored procedure which can easily emulate it. Here is how you would call it:

    CALL WITH_EMULATOR(
    "EMPLOYEES_EXTENDED",
    "
      SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
      FROM EMPLOYEES
      WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL)
    ",
    "
      SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1 + E.REPORTS) AS REPORTS
      FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID = E.MANAGER_ID
      GROUP BY M.ID, M.NAME, M.MANAGER_ID
    ",
    "SELECT * FROM EMPLOYEES_EXTENDED",
    0, "");

You can recognize, as arguments of the stored procedure, every member of the WITH standard syntax: the name of the temporary buffer, the query for the seed, the query for the recursive member, and what to do with the complete result. The last two arguments (0 and the empty string) are details which you can ignore for now. Here is the result returned by this stored procedure:

    +------+---------+------------+---------+
    | ID   | NAME    | MANAGER_ID | REPORTS |
    +------+---------+------------+---------+
    |   72 | Pierre  |         29 |       0 |
    |  692 | Tarek   |        333 |       0 |
    | 4610 | Sarah   |         29 |       0 |
    |   29 | Pedro   |        198 |       2 |
    |  333 | Yasmina |       NULL |       1 |
    |  198 | John    |        333 |       3 |
    |  333 | Yasmina |       NULL |       4 |
    +------+---------+------------+---------+
    7 rows in set

Notice how Pierre, Tarek and Sarah have zero reports, and Pedro has two, which looks correct... However, Yasmina appears in two rows! Odd? Yes and no.
Our algorithm starts from non-managers, the "leaves" of the tree (Yasmina being the root of the tree). Then our algorithm looks at first level managers, the direct parents of leaves. Then at second level managers. But Yasmina is both a first level manager (of the non-manager Tarek) and a third level manager (of the non-managers Pierre and Sarah). That's why she appears twice in the final result: once for the tree branch which ends at the leaf Tarek, once for the tree branch which ends at the leaves Pierre and Sarah. The first tree branch contributes 1 direct/indirect report. The second contributes 4 (John, Pedro, Pierre and Sarah). The right number, which we want, is the sum of the two: 5. Thus we just need to change the final query in the CALL:

    CALL WITH_EMULATOR(
    "EMPLOYEES_EXTENDED",
    "
      SELECT ID, NAME, MANAGER_ID, 0 AS REPORTS
      FROM EMPLOYEES
      WHERE ID NOT IN (SELECT MANAGER_ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL)
    ",
    "
      SELECT M.ID, M.NAME, M.MANAGER_ID, SUM(1 + E.REPORTS) AS REPORTS
      FROM EMPLOYEES M JOIN EMPLOYEES_EXTENDED E ON M.ID = E.MANAGER_ID
      GROUP BY M.ID, M.NAME, M.MANAGER_ID
    ",
    "
      SELECT ID, NAME, MANAGER_ID, SUM(REPORTS)
      FROM EMPLOYEES_EXTENDED
      GROUP BY ID, NAME, MANAGER_ID
    ",
    0, "");

And here is finally the proper result:

    +------+---------+------------+--------------+
    | ID   | NAME    | MANAGER_ID | SUM(REPORTS) |
    +------+---------+------------+--------------+
    |   29 | Pedro   |        198 |            2 |
    |   72 | Pierre  |         29 |            0 |
    |  198 | John    |        333 |            3 |
    |  333 | Yasmina |       NULL |            5 |
    |  692 | Tarek   |        333 |            0 |
    | 4610 | Sarah   |         29 |            0 |
    +------+---------+------------+--------------+
    6 rows in set

Let's finish by showing the body of the stored procedure. You will notice that it makes heavy use of dynamic SQL, thanks to prepared statements. Its body does not depend on the particular problem to solve; it's reusable as-is for other WITH RECURSIVE use cases. I have added comments inside the body, so it should be self-explanatory. If it's not, feel free to drop a comment on this post, and I will explain further.
    # Usage: the standard syntax:
    #   WITH RECURSIVE recursive_table AS
    #    (initial_SELECT
    #     UNION ALL
    #     recursive_SELECT)
    #   final_SELECT;
    # should be translated by you to
    #   CALL WITH_EMULATOR(recursive_table, initial_SELECT, recursive_SELECT,
    #                      final_SELECT, 0, "").

    # ALGORITHM:
    # 1) we have an initial table T0 (actual name is an argument
    #    "recursive_table"), we fill it with the result of initial_SELECT.
    # 2) We have a union table U, initially empty.
    # 3) Loop:
    #      add rows of T0 to U,
    #      run recursive_SELECT based on T0 and put the result into table T1,
    #      if T1 is empty
    #        then leave the loop,
    #        else swap T0 and T1 (renaming) and empty T1
    # 4) Drop T0, T1
    # 5) Rename U to T0
    # 6) run the final SELECT, send the result to the client

    # This is for *one* recursive table.
    # It would be possible to write a SP creating multiple recursive tables.

    delimiter |

    CREATE PROCEDURE WITH_EMULATOR(
    recursive_table varchar(100),        # name of the recursive table
    initial_SELECT varchar(65530),       # seed a.k.a. anchor
    recursive_SELECT varchar(65530),     # recursive member
    final_SELECT varchar(65530),         # final SELECT on the UNION result
    max_recursion int unsigned,          # safety against infinite loops, use 0 for default
    create_table_options varchar(65530)  # you can add CREATE-TABLE-time options
    # to your recursive_table, to speed up initial/recursive/final SELECTs;
    # example: "(KEY(some_column)) ENGINE=MEMORY"
    )
    BEGIN
      declare new_rows int unsigned;
      declare show_progress int default 0; # set to 1 to trace/debug execution
      declare recursive_table_next varchar(120);
      declare recursive_table_union varchar(120);
      declare recursive_table_tmp varchar(120);
      set recursive_table_next  = concat(recursive_table, "_next");
      set recursive_table_union = concat(recursive_table, "_union");
      set recursive_table_tmp   = concat(recursive_table, "_tmp");
      # If you need to reference recursive_table more than
      # once in recursive_SELECT, remove the TEMPORARY word.
      SET @str = # create and fill T0
        CONCAT("CREATE TEMPORARY TABLE ", recursive_table, " ",
               create_table_options, " AS ", initial_SELECT);
      PREPARE stmt FROM @str;
      EXECUTE stmt;
      SET @str = # create U
        CONCAT("CREATE TEMPORARY TABLE ", recursive_table_union, " LIKE ", recursive_table);
      PREPARE stmt FROM @str;
      EXECUTE stmt;
      SET @str = # create T1
        CONCAT("CREATE TEMPORARY TABLE ", recursive_table_next, " LIKE ", recursive_table);
      PREPARE stmt FROM @str;
      EXECUTE stmt;
      if max_recursion = 0 then
        set max_recursion = 100; # a default to protect the innocent
      end if;
      recursion: repeat
        # add T0 to U (this is always UNION ALL)
        SET @str =
          CONCAT("INSERT INTO ", recursive_table_union, " SELECT * FROM ", recursive_table);
        PREPARE stmt FROM @str;
        EXECUTE stmt;
        # we are done if max depth reached
        set max_recursion = max_recursion - 1;
        if not max_recursion then
          if show_progress then
            select concat("max recursion exceeded");
          end if;
          leave recursion;
        end if;
        # fill T1 by applying the recursive SELECT on T0
        SET @str =
          CONCAT("INSERT INTO ", recursive_table_next, " ", recursive_SELECT);
        PREPARE stmt FROM @str;
        EXECUTE stmt;
        # we are done if no rows in T1
        select row_count() into new_rows;
        if show_progress then
          select concat(new_rows, " new rows found");
        end if;
        if not new_rows then
          leave recursion;
        end if;
        # Prepare the next iteration:
        # T1 becomes T0, to be the source of the next run of recursive_SELECT,
        # T0 is recycled to be T1.
        SET @str =
          CONCAT("ALTER TABLE ", recursive_table, " RENAME ", recursive_table_tmp);
        PREPARE stmt FROM @str;
        EXECUTE stmt;
        # we use ALTER TABLE RENAME because RENAME TABLE does not support temp tables
        SET @str =
          CONCAT("ALTER TABLE ", recursive_table_next, " RENAME ", recursive_table);
        PREPARE stmt FROM @str;
        EXECUTE stmt;
        SET @str =
          CONCAT("ALTER TABLE ", recursive_table_tmp, " RENAME ", recursive_table_next);
        PREPARE stmt FROM @str;
        EXECUTE stmt;
        # empty T1
        SET @str =
          CONCAT("TRUNCATE TABLE ", recursive_table_next);
        PREPARE stmt FROM @str;
        EXECUTE stmt;
      until 0 end repeat;
      # eliminate T0 and T1
      SET @str =
        CONCAT("DROP TEMPORARY TABLE ", recursive_table_next, ", ", recursive_table);
      PREPARE stmt FROM @str;
      EXECUTE stmt;
      # The final (output) SELECT uses the recursive_table name
      SET @str =
        CONCAT("ALTER TABLE ", recursive_table_union, " RENAME ", recursive_table);
      PREPARE stmt FROM @str;
      EXECUTE stmt;
      # Run the final SELECT on the UNION
      SET @str = final_SELECT;
      PREPARE stmt FROM @str;
      EXECUTE stmt;
      # No temporary tables may survive:
      SET @str =
        CONCAT("DROP TEMPORARY TABLE ", recursive_table);
      PREPARE stmt FROM @str;
      EXECUTE stmt;
      # We are done :-)
    END|

    delimiter ;

In the SQL standard, WITH RECURSIVE allows some nice additional tweaks (depth-first or breadth-first ordering, cycle detection). In future posts I will show how to emulate them too.
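As a closing cross-check of the REPORTS numbers above: SQLite already ships WITH RECURSIVE, so the same org chart can be verified from Python's sqlite3 module. SQLite does not allow an aggregate in the recursive member, so this sketch uses a transitive-closure formulation (it enumerates every manager/subordinate pair, direct or indirect, and counts afterwards) rather than the emulator's summing approach:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE EMPLOYEES (ID INT PRIMARY KEY, NAME TEXT, MANAGER_ID INT)")
cur.executemany("INSERT INTO EMPLOYEES VALUES (?, ?, ?)",
                [(333, "Yasmina", None), (198, "John", 333), (29, "Pedro", 198),
                 (4610, "Sarah", 29), (72, "Pierre", 29), (692, "Tarek", 333)])

# CHAIN holds every (manager, subordinate) pair, direct or indirect.
rows = cur.execute("""
WITH RECURSIVE CHAIN(MANAGER, EMP) AS (
  SELECT MANAGER_ID, ID FROM EMPLOYEES WHERE MANAGER_ID IS NOT NULL
  UNION ALL
  SELECT E.MANAGER_ID, C.EMP
  FROM EMPLOYEES E JOIN CHAIN C ON E.ID = C.MANAGER
  WHERE E.MANAGER_ID IS NOT NULL
)
SELECT E.NAME, COUNT(C.EMP) AS REPORTS
FROM EMPLOYEES E LEFT JOIN CHAIN C ON C.MANAGER = E.ID
GROUP BY E.ID, E.NAME
ORDER BY REPORTS DESC, E.NAME
""").fetchall()

print(rows)
# [('Yasmina', 5), ('John', 3), ('Pedro', 2), ('Pierre', 0), ('Sarah', 0), ('Tarek', 0)]
```

The counts match the emulator's final result: Yasmina 5, John 3, Pedro 2, and zero for the three non-managers.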