
Liveblogging at OOW: State of the Dolphin

Tomas Ulin, VP of MySQL Engineering, speaks about "State of the Dolphin" at Oracle OpenWorld 2011.  There are some pretty cool new features in MySQL 5.6 development milestone release 2, and they are all quite stable, which is exciting.  They want to add more features before going GA with MySQL 5.6, but the ones already in are pretty much ready to go.

"The 15-minute rule [MySQL can be installed in 15 minutes] is now down to 3 minutes for the full MySQL stack."  Download one package, and a GUI helps you install and configure everything.

MySQL Enterprise HA: Windows Server Failover Clustering - uses Microsoft's Windows Server Failover Clustering, and the cluster is managed through the standard Windows tools.

Ulin talked a lot about how MySQL is good on Windows, and how it is better than Microsoft SQL Server with a lower TCO.  They are focusing on Visual Studio, MS Office integration, Entity Framework, Windows administration tooling, and more.

Talks about the parts of MySQL Enterprise Edition:

MySQL Workbench

MySQL Enterprise Backup - completely new from the ground up.  Ulin does not have slides, but there is a presentation on Wednesday that I am looking forward to attending.

MySQL Enterprise Security: new as of a few weeks ago - new authentication modules, either PAM (which can use LDAP, Kerberos, etc.) or Windows authentication.  A good slide showing how it works:

The application server sends the authentication information to MySQL, which passes it to the PAM library; the PAM library verifies the credentials and returns "yes/no" to the MySQL server.  "CREATE USER joe IDENTIFIED WITH pam_plugin AS 'joe';" - NOTE: specific privileges are still managed at the MySQL level (e.g. not everyone gets the SUPER privilege!)
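
To make the privileges note concrete, here is a minimal sketch in SQL.  The plugin name authentication_pam, the PAM service name 'mysqld', and the appdb schema are my assumptions, not from the talk:

-- Assumed plugin/service names; PAM only answers the yes/no question,
-- authorization still lives in MySQL's grant tables.
CREATE USER 'joe'@'%' IDENTIFIED WITH authentication_pam AS 'mysqld';

-- Grant only what joe needs; nobody gets SUPER just because PAM said yes.
GRANT SELECT, INSERT, UPDATE, DELETE ON appdb.* TO 'joe'@'%';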

MySQL Enterprise Scalability - thread pooling.  The pool contains a configurable number of thread groups (default = 16), and each group manages up to 4096 re-usable threads.  Each connection is assigned to a group via round robin.  Thread pooling gives 20x better read/write scalability and 3x better scalability for read-only workloads.  There is an API, so thread pooling can also be implemented manually.
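
As a rough sketch (the option and variable names below are my assumptions about the commercial thread pool plugin, not from the talk), enabling and inspecting the pool might look like:

-- In my.cnf (assumed option names):
--   plugin-load=thread_pool.so
--   thread_pool_size=16    -- number of thread groups, default 16 per the talk
-- Then, from a client, inspect the pool configuration:
SHOW GLOBAL VARIABLES LIKE 'thread_pool%';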

OVM template for MySQL - on Oracle Linux.

 

Announcements

5.6 DMR 2 (development milestone release)

It is close enough in quality to be an RC or GA, and they are very confident in it, but they are not releasing it as GA because they want to add more features.

New features - builds on MySQL 5.5 by improving:

optimizer - better performance, scalability

 - filesort optimizations with small limits

 - avoids creating intermediate sorted files by producing an ordered result set using a single table scan and sort on the fly.  

index condition pushdowns

 - was a 6.0 feature

 - good for composite indexes

 - why wouldn't you have this turned on, btw?  
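
A hedged example of where index condition pushdown helps, using a hypothetical table of my own (not from the talk):

CREATE TABLE people (
  id INT UNSIGNED NOT NULL PRIMARY KEY,
  last_name VARCHAR(50) NOT NULL,
  first_name VARCHAR(50) NOT NULL,
  KEY name_idx (last_name, first_name)
) ENGINE=InnoDB;

-- Only last_name can be used to navigate the composite index, but with ICP the
-- first_name LIKE filter is evaluated inside the storage engine instead of
-- fetching every 'Smith' row back to the server layer first:
EXPLAIN SELECT * FROM people
WHERE last_name = 'Smith' AND first_name LIKE '%ohn';

-- And, to answer my own question, it can be toggled via the optimizer_switch:
SET optimizer_switch = 'index_condition_pushdown=off';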

batched key access and multi-range read

 - Improves performance of disk-bound JOIN queries

 - handles batches of keys instead of the traditional nested-loop join, which looks up one key at a time, and takes advantage of sequential reads.
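
A hedged sketch of how these are enabled in 5.6; the orders and customers tables are hypothetical:

-- BKA builds on multi-range read; the cost-based MRR heuristic usually has to
-- be disabled for the batched plan to be chosen:
SET optimizer_switch = 'mrr=on,mrr_cost_based=off,batched_key_access=on';

-- A disk-bound join that can then gather keys in batches and read the inner
-- table in (mostly) sequential order:
EXPLAIN SELECT o.order_id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id;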

postponed materialization

 - of views/subqueries in the FROM clause (aka derived tables)

 - Allows fast EXPLAINs for views/subqueries

 - Avoids materialization when possible, for a faster bail-out

 - A key can be generated for derived tables.  

 - Unsurprisingly, this optimization is huge - 240x better execution times (e.g. drop from 8 minutes to about 2 seconds)
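
A sketch of the kind of query this helps; the orders table is hypothetical:

-- In 5.6 the derived table is not materialized just to produce the EXPLAIN,
-- and at execution time the optimizer can add an index on d(cust_id) so the
-- outer WHERE does not scan the whole materialized result:
EXPLAIN
SELECT d.cust_id, d.total
FROM (SELECT cust_id, SUM(amount) AS total
      FROM orders
      GROUP BY cust_id) AS d
WHERE d.cust_id = 42;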

EXPLAIN for INSERT, UPDATE, DELETE

 Persistent Optimizer Statistics

 Optimizer traces
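
A hedged sketch of EXPLAIN on DML and of the optimizer trace, again with a hypothetical orders table:

-- EXPLAIN now works on data-changing statements:
EXPLAIN UPDATE orders SET status = 'shipped' WHERE order_id = 12345;

-- A minimal optimizer-trace session:
SET optimizer_trace = 'enabled=on';
SELECT COUNT(*) FROM orders WHERE cust_id = 42;
SELECT TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE;
SET optimizer_trace = 'enabled=off';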

 

performance schema: better instrumentation

- up to about 500 instrumentation points

 - Statements/stages - where do my most resource-intensive queries spend the most time?

 - Table/Index I/O, Table Locks - which tables/indexes cause the most load/contention?

 - Users/Host/accounts - using the most resources

 - Network I/O

 - Summaries - aggregated by thread, user, host, account, or object (a sample query is sketched below)
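
For example, the statement and table I/O summaries can be queried directly.  This is a hedged sketch based on the 5.6 performance_schema as it later shipped; table names may differ in this DMR, and the relevant instruments/consumers must be enabled:

-- Which normalized statements spend the most time:
SELECT DIGEST_TEXT, COUNT_STAR, SUM_TIMER_WAIT
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;

-- Which tables cause the most I/O wait:
SELECT OBJECT_SCHEMA, OBJECT_NAME, SUM_TIMER_WAIT
FROM performance_schema.table_io_waits_summary_by_table
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;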

 

innodb - better transactional throughput

 - new I_S tables: Metrics, Systems, Buffer pool info

 - Dump/restore buffer pool

 - Limit insert buffer size (ibuf)
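
A hedged sketch of these, using the 5.6 names as I understand them (they may differ slightly in the DMR):

-- The new metrics table in INFORMATION_SCHEMA:
SELECT NAME, SUBSYSTEM, COUNT
FROM INFORMATION_SCHEMA.INNODB_METRICS
WHERE STATUS = 'enabled'
LIMIT 10;

-- Dump the buffer pool contents, and load them back to warm up after a restart:
SET GLOBAL innodb_buffer_pool_dump_now = ON;
SET GLOBAL innodb_buffer_pool_load_now = ON;

-- Cap the insert (change) buffer as a percentage of the buffer pool:
SET GLOBAL innodb_change_buffer_max_size = 25;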

 

replication for HA, data integrity

 - Better data integrity: crash-safe slaves, replication checksums, crash-safe binlog

 - Better performance: multi-threaded slaves, reduced binlog size for row-based binary logging

- Extra flexibility - time-delayed replication

- Simpler troubleshooting - row-based replication logging of original query

- Enhanced monitoring/management (some of these options are sketched below).
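
A hedged sketch of the corresponding 5.6 settings, as I understand them from the 5.6 documentation rather than from the slides:

-- Time-delayed replication: keep this slave one hour behind its master.
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;
START SLAVE;

-- Replication checksums, smaller row-based events, and logging of the original
-- statement alongside the row events:
SET GLOBAL binlog_checksum = 'CRC32';
SET GLOBAL binlog_row_image = 'MINIMAL';
SET GLOBAL binlog_rows_query_log_events = ON;

-- Crash-safe slaves store their position in tables instead of files; these are
-- normally set in my.cnf:
--   master_info_repository    = TABLE
--   relay_log_info_repository = TABLE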

"NotOnlySQL" (NoSQL) for more flexibility

Misc

 - IPv6 improvements

 - Unicode support for the Windows command-line client

 - import/export to/from partitioned tables

 - explicit partition selection (see the sketch after this list)

 - GIS/MyISAM: Precise spatial operations
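
For the partition-related items, a hedged sketch; the table t, its partitions, and t_archive are hypothetical:

-- Explicit partition selection:
SELECT * FROM t PARTITION (p0, p1) WHERE created < '2011-01-01';

-- Moving data in and out of a partitioned table by swapping a partition with a
-- plain table of the same structure:
ALTER TABLE t EXCHANGE PARTITION p0 WITH TABLE t_archive;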

 

MySQL Cluster 7.2: DMR 2

It is close enough in quality to be an RC or GA, and they are very confident in it, but they are not releasing it as GA because they want to add more features.

 - 70x higher complex query performance using Adaptive Query Localization (the example was an 11-way join)

 - native memcached API - with no application changes.  It reuses standard memcached clients & libraries, and also eliminates cache invalidations slamming MySQL.

 - MySQL 5.5 Server Integration - previous versions were integrated with MySQL 5.1, so in MySQL Cluster 7.2 you have access to the new features; upgrading can be done online with a rolling upgrade.

 - Multi-geographical site clustering - DR and data locality, no passive resources.  Can split data nodes and node groups across data centers and have auto-failover.

 - Simplified active/active replication - eliminates the requirement for application and schema changes.  There are transaction-level rollbacks.  This helps with conflict detection when more than one master is written to.

 - Consolidated privileges for easier provisioning and administration

MySQL Enterprise Oracle Certifications

 - All of Oracle Fusion Middleware ships with the MySQL 5.x JDBC driver.

MySQL is being put into or supported by a lot of Oracle products, including Fusion Middleware and the Oracle E-Business Suite.  This means you can use MySQL as a data source.

Coming soon: using the Auditing API in MySQL 5.5, Oracle will be adding an audit plugin that can (among other things) produce an audit stream that Oracle Audit Vault can then use.

What Community Resources should be at Oracle OpenWorld?

A short while ago I posted about the Oracle OpenWorld Schedule Matrix of MySQL sessions (in PDF and HTML formats).  We have printed up a (small) number of schedules to have on hand at the MySQL Community kiosk at the User Group Pavilion in Moscone West.

Yes, you read that correctly -- the User Group Pavilion will include a MySQL Community kiosk this year!  Sarah and I have been coordinating the effort to staff the kiosk and figure out what we need to provide.

Sadly, it's just a kiosk (the same as all the other user group organizations get), so we cannot have a ton of flyers there.  Instead, we have created a QR code that resolves to www.kimtag.com/MySQL, which is where we are putting many of the links.

To that end, we'd like your help figuring out what we have missed.  To keep the list of links as short and relevant as possible, we have used aggregate links wherever we could; for example, we link to planet.mysql.com instead of individual blogs, and we only list the major conferences with over 500 expected attendees.  The links at www.kimtag.com/MySQL as of the time of this writing are:

- MySQL sessions at OOW

- Planet MySQL

- dev.mysql.com (docs, etc)

- mysql.com

- MySQL User Groups (forge.mysql.com list) - so if you have a user group, make sure to update the forge page!

- Percona Live 2012 Conference & Expo

- MySQL videos on YouTube

- IOUG MySQL Council

- OurSQL Podcast Blog

- OurSQL iTunes link

- MySQL Experts podcast

- Book: MySQL Administrator's Bible*

- Book: High Performance MySQL

- Book: Expert PHP/MySQL

- Book: MySQL High Availability

 

If you think of a link we should put on there, please comment below.

 

For what it's worth, the paper we will have will be:

- The current day's schedule

- A flyer about Percona Live 2012 MySQL Conference & Expo

- A poster of the QR code and a few small paper slips with the QR code

- IOUG MySQL Council business cards

And even that is stretching it, as there will be a laptop at the kiosk provided by Oracle and the kiosk is 24 inches x 24 inches, about 61 centimeters x 61 centimeters.
* Note that I have ordered the books with the MySQL Administrator's Bible first because it's for beginner/intermediate users, whereas High Performance MySQL is for intermediate/advanced users.

Videos from OSCon Data and OSCon 2011

There are 28 videos, all linked below, on the OSCon and OSCon Data 2011 playlist that I have put online for free (with permission from the presenters and O'Reilly).  O'Reilly's own videos are available from the conference proceedings website.  Probably the best way to find all the videos in one place is to search for the 'oscon' tag on YouTube.

How do I choose which talks to film?  Well, to make it easiest on me, I choose which room to film, and then all I have to do is change the tapes every session.  This minimizes (but does not completely eliminate) technical issues.  For OSCon Data it was simple - there were 5 rooms: O'Reilly was professionally recording one room, another room was "Products and Services", which left 3 rooms -- and I had 3 video cameras*.
Following is the list of videos I took, in alphabetical order.  Each link takes you to the YouTube page, which shows the presenters, the description, and links to the slides (if available) and the official O'Reilly conference page:


* If there had been no technical difficulties whatsoever, there would be 38 videos on this list - 2 from OSCon (which are on the list) and 36 from OSCon Data.  Unfortunately, 10 videos did not come out - either I missed the tape change, the audio could not be heard, or permission was not given by the presenters.  Note that in the latter case, presenters simply never responded -- no presenter actively withheld permission, though a few have not responded to my request.

Beware: Default charset for mysqldump is utf8, regardless of server default charset

 

I ran into this issue a while ago, and was reminded of it again recently.  mysqldump uses a default charset of utf8, even when the default charset of the server is set differently.  Why does this matter?

The problem is compounded by the fact that if you have string data in a latin1 column, you are still allowed to put non-Latin characters into it.  This can lead to lost data, especially when upgrading across a major series (e.g. 5.0 to 5.1, or 5.1 to 5.5), because you are supposed to export and import the data.

Also, when importing a backup of an InnoDB table, if there is an error with any part of an INSERT, the whole INSERT statement rolls back.  I have experienced major data loss because garbled characters caused an error when INSERTed, which caused perfectly fine data *not* to import because it was in the same INSERT statement as the garbled data.

For example:

First, the character set variables are set as follows on a MySQL server (5.0 or 5.1; I haven't tested on 5.5):

mysql> show global variables like '%char%';
+--------------------------+----------------------------+
| Variable_name            | Value                      |
+--------------------------+----------------------------+
| character_set_client     | latin1                     |
| character_set_connection | latin1                     |
| character_set_database   | latin1                     |
| character_set_filesystem | binary                     |
| character_set_results    | latin1                     |
| character_set_server     | latin1                     |
| character_set_system     | utf8                       |
| character_sets_dir       | /usr/share/mysql/charsets/ |
+--------------------------+----------------------------+
8 rows in set (0.00 sec)

 

Then create these tables with data:

 

CREATE TABLE `test_utf8` (
  `kwid` int(10) unsigned NOT NULL default '0',
  `keyword` varchar(80) NOT NULL default ''
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `test_utf8` VALUES
(1,'watching'),(2,'poet'),(3,'просмотра'),(4,'Поэту');

CREATE TABLE `test_latin1` (
  `kwid` int(10) unsigned NOT NULL default '0',
  `keyword` varchar(80) NOT NULL default ''
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

INSERT INTO `test_latin1` VALUES
(1,'watching'),(2,'poet'),(3,'просмотра'),(4,'Поэту');

 

Now compare:

mysqldump test > test_export_utf8.sql

mysqldump --default-character-set=latin1 test > test_export_latin1.sql

 

Note that the test export with the default character set of utf8 has mojibake whereas the export with latin1 does not.

 

So be *extremely* careful when using mysqldump - whether for backups or while upgrading.  You can checksum your data before and after an export/import with mysqldump to be sure that your data is the same.
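
A minimal way to do that with the tables from this example (CHECKSUM TABLE reads every row, so it can be slow on big tables):

-- Run before the export and again after the import, and compare the output:
CHECKSUM TABLE test_utf8, test_latin1;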

 

MySQL Content at Oracle OpenWorld - Session Matrix

While the online content catalog and schedule builder are great tools to help plan which sessions I want to see at Oracle OpenWorld, what I really want is a matrix of only the MySQL content, preferably one that easily shows all the sessions in each time period.

So I decided to make the matrix myself - view the HTML online at http://technocation.org/files/doc/2011_OOW_MySQL_Content.html

Or download a PDF (one page per day) at http://technocation.org/files/doc/2011_OOW_MySQL_Content.pdf

If you have feedback, please let me know in the comments or via the e-mail address on the matrix.  These documents are for personal use unless other arrangements have been made.

To see full descriptions, click on a speaker's name to be sent to the content catalog's page for that speaker, then click on the session to get the full description.

Disclosure: Truth About MySQL 2012 Conference Planning

I love Percona Live.  I think it is a great meeting of the minds.  However, I do not think it is a good replacement for the big April MySQL conference.  In fact, neither does Baron Schwartz:

"The conference is organized and owned by MySQL, not the users. It isn’t a community event. It isn’t about you and me first and foremost. It’s about a company trying to successfully build a business, and other companies paying to be sponsors and show their products in the expo hall."
Baron Schwartz, April 23, 2008.
http://www.xaprb.com/blog/2008/04/23/like-it-or-not-it-is-the-mysql-conference-and-expo/

Switch "MySQL" for "Percona", and that is exactly what Baron said in today's announcement:

"Emphasis on business. We need a place where vendors, both open-source and closed-source, can showcase their products and services. This is the hand that feeds all of us. It’s good for Percona’s business, and it’s good for everyone else’s too."

...except in 2008, Baron was calling for a community conference because what was good for business was not good for the community.  

Let me pull out that last sentence from the quote.  Read it to yourself, and tell me if you feel warm fuzzies replacing "Percona" with "Oracle" or "MySQL":

"It’s good for _______’s business, and it’s good for everyone else’s too."

Here is a question for you:  Do you think Oracle will send lots of engineers to talk about current and future plans for MySQL to a conference that is open about saying "It is good for our business?"  

It does not matter if it is Percona, Blue Gecko, or PalominoDB; if Oracle has any business sense (and they make quite a bit of money, so signs are that they do), they will not send engineers to a competitor's company-branded conference.

What does Percona's founder, Peter Zaitsev, have to say about the conference?  He's not really happy about it either:

"I would like to see the conference which is focused on the product users interests rather than business interests of any particular company (or personal interests of small group of people), I would like it to be affordable so more people can attend and I’d like to see it open so everyone is invited to contribute and process is as open as possible. "
Peter Zaitsev, April 23, 2008
http://www.mysqlperformanceblog.com/2008/04/23/conference-for-mysql-users/

Peter, I would like to see that too.  In fact, a small group of folks, including Giuseppe and myself, tried to make that happen, and we had SkySQL, MariaDB, Oracle and IOUG all supporting us. 

Giuseppe called for disclosure about the conference, so I will disclose this:  Baron was not truthful when he said "To the best of our knowledge, no one else was planning one".

Giuseppe asked for full disclosure, so here is a copy and paste of a Skype conversation I had with Percona's Tom Basil on June 29th, 2011:

 

 

Sheeri K. Cabral 6/29/11 12:10 PM 

I think we should develop some ideas in case O'Reilly doesn't end up having a MySQL conference....last year the announcement was late, and it was in May, and I'm starting to think they might not be doing a conference this year, since we haven't heard anything yet.

6/29/11 12:10 PM

if that's the case, I'd like to have a conference anyway, and I'd like to explore options with you, because we need a community-run conference (not Collaborate, but maybe a co-located summit).  And obviously you've had success with Percona Live, but a multi-day conference is really different.

6/29/11 12:11 PM

(FWIW I told Colin Charles in May that I was willing to help co-chair the conference, so I'm still willing to give my support in that area).

Tom Basil 6/29/11 12:14 PM 

Sheeri, can't talk now

Sheeri K. Cabral 6/29/11 12:14 PM 

*nod*  can we schedule a conf call maybe?

Tom Basil 6/29/11 12:14 PM 

Headed out in just few min

6/29/11 12:14 PM

yes, next week

6/29/11 12:14 PM

We tried to schedule conference calls, but for almost 6 weeks we kept getting pushed back.
I had indeed offered to chair or co-chair a conference with Colin Charles, but I simply cannot lend a lot of logistical support to a Percona-branded conference.  I volunteer a lot, but it is for the benefit of the MySQL Community, not for the benefit of a company I do not work for.
I hope that my openness and candor do not blacklist me from Percona events; I am a popular and sought-after speaker at MySQL events, and it will be a loss to the community if that happens.
Percona should keep its Percona Live series.  However, the world's most popular open source database deserves and needs exactly what Peter, Percona's founder, said just a few years ago: "the conference which is focused on the product users interests rather than business interests of any particular company".
I believe Percona Live Santa Clara will be a successful event, and I will try to be a part of it.  I hope it will be as successful for the community as it will be for Percona's business.

 

Upcoming Free IOUG Webinar: Securing MySQL

Next week I will give a free IOUG webinar on Securing MySQL on Wednesday, August 10, 2011 from 11:00 AM - 12:00 PM CDT (17:00 GMT):

Securing MySQL is extremely important, but often is not done properly.  I will explain the different ways to secure MySQL.  In addition to securing users and privileges, file permissions and encrypted connectivity will be discussed.  The MySQL server options that contribute to MySQL security will be pointed out, along with tips for eliminating insecure external scripts.  For those who want more auditing capabilities, this session will explain how to see all login attempts (successful and not) and how to lock out accounts with repeated failed logins.  The session will conclude with guidelines on how to create security policies for your organization.

To register for this webinar, visit https://www1.gotomeeting.com/register/979260992.  

mydumper & myloader : fast backup and restore

At PalominoDB we do not normally use mysqldump for regular backups, only in some circumstances (for example, a MySQL upgrade).
Lately we gave mydumper a try as an alternative to mysqldump, and the results are quite promising.
We found that mydumper is very fast at exporting both small and large datasets!
We also found that with large datasets, a restore with myloader doesn't perform much better than a simple restore from a mysqldump SQL dump: this depends on the storage engine, not on the client used to restore.

On one box we ran 2 tests:
1) with a dataset that was fitting in the InnoDB buffer pool;
2) with a dataset larger than the InnoDB buffer pool.

TEST #1

We created 128 tables of 1M rows each, for a total dataset of 31GB on disk:
shell$ time ./sysbench --test=tests/db/parallel_prepare.lua --oltp-tables-count=128 --oltp-table-size=1000000 --mysql-table-engine=innodb --mysql-user=root --num-threads=12 run
real 22m0.013s
user 204m22.054s
sys 0m37.430s

Doing the backup with mydumper:
shell$ time ./mydumper -t 8 -B sbtest
real 0m29.807s
user 2m35.111s
sys 0m26.102s

... and with mysqldump:
shell$ time mysqldump --single-transaction sbtest > sbtest.sql
real 6m24.607s
user 5m19.355s
sys 0m46.761s

In this test, mydumper was around 13 times faster than mysqldump.

We also tried compression, but I/O was fast enough that compression was just unnecessary overhead: in other words, on this hardware and with this dataset, mydumper with compression was slower than mydumper without compression.

To complete the test, we measured recovery time after dropping and recreating an empty database:
shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ./myloader -t 8 -d export-20110720-090954
real 9m12.548s
user 0m55.193s
sys 0m28.316s

shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ( echo "SET SQL_LOG_BIN=0;" ; cat sbtest.sql ) | mysql sbtest
real 46m46.140s
user 9m3.604s
sys 0m48.256s

With this dataset, restore time using myloader was 5 times faster than using the SQL dump from mysqldump.

TEST #2

Test #2 is very similar to test #1, but with some differences in the dataset:
48 tables instead of 128 tables;
10M rows on each table instead of 1M rows;
a total dataset on disk of 114GB instead of 31GB.

First, we created the tables with sysbench:
shell$ time ./sysbench --test=tests/db/parallel_prepare.lua --oltp-tables-count=48 --oltp-table-size=10000000 --mysql-table-engine=innodb --mysql-user=root --num-threads=12 run
real 107m24.657s
user 689m2.852s
sys 2m11.980s

Backup with mydumper:
shell$ time ./mydumper -t 8 -B sbtest
real 7m42.703s
user 15m14.873s
sys 2m20.203s

The backup is quite big because it is not compressed: 91GB.
On average, mydumper was writing to disk at around 200MB/s.

Backup with mysqldump:
shell$ time mysqldump --single-transaction sbtest > sbtest.sql
real 32m53.972s
user 20m29.853s
sys 2m47.674s

mydumper was again faster than mysqldump, but not by as much as in the previous test: only 4 times faster.

It was now time to measure recovery time:
shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ./myloader -t 6 -d export-20110720-171706
real 130m58.403s
user 4m5.209s
sys 1m51.801s

shell$ mysql -e "drop database sbtest ; create database sbtest"
shell$ time ( echo "SET SQL_LOG_BIN=0;" ; cat sbtest.sql ) | mysql sbtest
real 204m18.121s
user 34m33.520s
sys 3m43.826s

myloader was just a bit more than 50% faster than importing the SQL dump from mysqldump.

Conclusion from second test:
a) With the larger dataset, mydumper slows down because the system does more I/O, as the dataset doesn't fit in memory, but it is still way faster than mysqldump.
b) With the larger dataset, load time with myloader slowed down a lot. However, the root cause of the performance drop isn't myloader itself, but:
- more I/O (dataset + dump don't fit in RAM);
- the InnoDB insert rate degrades with bigger tables.

Another blog post on InnoDB insert rate degradation with big tables will probably follow.

Notes on hardware and configuration:
CPU: 2 x 6cores with HT enabled
96 GB of RAM
FusionIO

innodb_buffer_pool_size=64G
innodb_log_file_size=2047M
innodb_io_capacity=4000
innodb_flush_log_at_trx_commit=2
(binlog disabled)

More Videos from Open DB Camp

I have gotten around to uploading more of the videos from Open DB Camp, held in Sardinia, Italy back in May:

Henrik Ingo speaks about Xtrabackup Manager - video

Linas Virbalas speaks about "Flexible Replication: MySQL -> PostgreSQL, PostgreSQL to MySQL, PostgreSQL to PostgreSQL" - video - slideshare slides

MySQL to MongoDB replication (hackfest results) - video 

Robert Hodges of Continuent speaks about Multi-Master Replication: Problems, Solutions and Arguments - video

There are a few more videos from Open DB Camp to put up, then I start to put up the content from OSCon Data!

Liveblogging at OSCON Data: Drizzle, Virtualizing and Scaling MySQL for the Future

Brian Aker presents "Drizzle, Virtualizing and Scaling MySQL for the Future" at OSCon Data 2011

http://drizzle.org

irc.freenode.net #drizzle

http://blog.krow.net

@brianaker

2005 MySQL 5.0 released - web developers wanted tons of features that were not in the release (making replication better for instance)

2008 Sun buys MySQL

2008 MySQL 6.0 is forked to become Drizzle

2009 Oracle buys Sun

2010 Drizzle developers leave Oracle

2011 First GA release, Drizzle7

MySQL's Architecture - monolithic kernel, not very modular, lots of interdependence.

Drizzle has a microkernel, which includes a listener, parser, optimizer, executioner, storage system, logging/error reporting.

Drizzle can accept SQL and HTTP blob streaming, and memcached and gearman can easily talk to Drizzle.

Drizzle has tried to have no "gotchas"

- If you make a new field with NOT NULL, MySQL makes new values NULL.  Drizzle does not do this.

- No hacky ALTER TABLE

- Real datetime (64 bit), including microseconds

- IPv6 support (apparently this is a strong reason for people switching)

- No updates that complete halfway

- Default character set is UTF-8 and the default collation is utf8_general_ci (in MySQL the default charset is latin1 and the default collation is latin1_swedish_ci, which is "case insensitive")

Replication

- In MySQL, replication is kind of hacky [this is my summary and opinion, but it's basically what Brian said]

- Drizzle is Google Protocol Buffer Based

- Replicates row transformations

- Integrates with RabbitMQ, Cassandra, Memcached, Gearman -- right now.

DML and MySQL binary logs analog:

- DML is stored transactionally by delta in Drizzle

- InnoDB is already logging, no need to add another log for the binary log.  So it just logs DML to the transaction log.

LibDrizzle

- supports Drizzle, MySQL, SQLite

- Asynchronous

- BSD, so Lawyer-free

What else?

- No cost authentication (pam, ldap, htaccess, ...)

- Table functions (new data dictionary, including performance and thread information).  INFORMATION_SCHEMA in Drizzle is *exactly* what's specified in the SQL standard.

- Data types - native type for UUID, boolean, all known types (except SET, because it's broken by design)

- Assertions are in Drizzle, you can ask what the type of the result of combining multiple data types will be.

- About 80 conflicts in the Drizzle parser as opposed to about 200 in the MySQL parser

Roadmap - Drizzle7+

- Replication - faster than MySQL and also allows multiple masters.

Virtualization:

Virtualizing a database gives you about a 40% performance hit.  How can costs be cut?  In MySQL 5.0 the Instance Manager was created to solve that but it hasn't really been worked on.  Drizzle has worked on virtualizing databases internally within Drizzle.

- So Drizzle now has catalogs.

- Each catalog has its own set of users, its own schemas with tables, etc.

- A catalog is its own sandbox; there is no syntax that allows you to connect from one catalog to another, so there are no security problems.

- Cuts the 30-40% hit from virtualizing

- Single instance maintenance - only 1 OS and 1 database to configure, unlike VMs

    - Currently only one database configuration so there's one global config for shared memory such as innodb buffer pool, but that will change in the future.

- Still allows for I/O spread on SAN/NAS

 

In Drizzle 7.1, Percona's xtrabackup supports Drizzle and ships with Drizzle.  xtrabackup supports full and partial backups with no locking, and is a single solution for point-in-time recovery.  Because the transaction log is stored in the database, replication is automatically consistent with the database.  It currently does not do incremental backups with the transaction logs, but that is planned for the future.

DBQP:

- consolidates standard testing tasks, server/test management, reporting, REGRESSION TESTING

- extended architecture allows for complex testing scenarios

- pluggable - supports new testing tools

- randgen, sql-bench, crashme, sysbench, standard drizzle-test-run suite

- Keeping tools and testing configurations in-tree facilitates testing for everyone

- supported by SkySQL

 

Dynamic SQL/execute()

- New UTF-8 parser

- Being extended to allow for plugging in application servers.

 

>120 developers since day 1

an average of 26-36 committers per month

 

Bugs database - http://bugs.launchpad.net/drizzle
