Using HAProxy for MySQL failovers

There are a number of solutions available for MySQL architectures that provide automatic failover, but the vast majority involve potentially significant and complex changes to an existing configuration.  Fortunately, HAProxy can be leveraged for this purpose with minimal impact on the current system.  Alex Williams posted this clever solution on his blog a couple of years ago.  Here we take a closer look at the details of implementing it in an existing system.

HAProxy is a free, open-source load balancer, proxy, and high-availability application for the TCP and HTTP protocols.  Since it was built mostly to handle web traffic, it has a robust rule language for checking the health of systems over HTTP.  The key to Alex's solution is creating xinetd services on the database servers that answer these checks with HTTP responses.  The HTTP check that determines database status works like this:

  1. The HAProxy server sends HTTP requests to the database servers at configured intervals to specified ports
  2. The /etc/services file on the database servers maps those ports to services in their /etc/xinetd.d directory
  3. The services can call any specified script, so we build scripts that connect to the databases and check for whatever conditions we choose.

The services then return a 200 OK or a 503 Service Unavailable response depending on those conditions.  Code for these scripts is included in Alex's article; a minimal sketch of the database-side pieces follows below.
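As a rough illustration only -- the port number, script path, and allowed source network here are placeholders rather than values from Alex's article -- the database-side plumbing might look something like this:

# /etc/services -- map the chosen port to a service name
mysqlchk        9200/tcp                # MySQL health check for HAProxy

# /etc/xinetd.d/mysqlchk -- xinetd runs the check script for each incoming connection
service mysqlchk
{
        disable         = no
        flags           = REUSE
        socket_type     = stream
        port            = 9200
        wait            = no
        user            = nobody
        server          = /usr/local/bin/mysqlchk
        log_on_failure  += USERID
        only_from       = 10.0.0.0/8
        per_source      = UNLIMITED
}

The script itself just performs whatever check you care about (a connection attempt, a replication-lag query, and so on) and prints a complete HTTP response -- "HTTP/1.1 200 OK" with a short body when healthy, "HTTP/1.1 503 Service Unavailable" when not -- which is what HAProxy's httpchk evaluates.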

Our database configuration for this implementation is two Master-Slave pairs in an Active-Passive relationship, with Master-Master replication between the two sets.  Siloing the passive Master-Slave pair provides a hot spare as well as continuous uptime during deployments, provided we have a means of swapping the active/passive roles of each pair.  To accomplish the latter, we built two HAProxy configuration files, haproxy_pair1.cfg and haproxy_pair2.cfg, the only difference between them being which Master-Slave pair is active.  Having two files also makes it immediately visible which pair is currently active.

Just as in Alex's sample configuration, our web application uses two DNS entries, appmaster and appslave, for writes and reads respectively.  The IPs for these names are attached to our HAProxy server, so HAProxy binds to them when it starts and routes connections to the appropriate database server.
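The relevant portion of haproxy_pair1.cfg might look roughly like the following (server names, IPs, check port, and timings are placeholders); haproxy_pair2.cfg would be identical except that the pair-2 servers are primary and the pair-1 servers carry the backup flag:

# haproxy_pair1.cfg (sketch) -- pair 1 active, pair 2 standing by as backup
listen mysql-writes
        bind appmaster:3306
        mode tcp
        option httpchk
        server pair1-master 10.0.1.10:3306 check port 9200 inter 5000 rise 2 fall 3
        server pair2-master 10.0.2.10:3306 check port 9200 inter 5000 rise 2 fall 3 backup

listen mysql-reads
        bind appslave:3306
        mode tcp
        option httpchk
        server pair1-slave 10.0.1.11:3306 check port 9200 inter 5000 rise 2 fall 3
        server pair2-slave 10.0.2.11:3306 check port 9200 inter 5000 rise 2 fall 3 backup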

On our HAProxy server we then customized the /etc/init.d/haproxy script to handle an additional "flipactive" parameter, which lets us swap the database pairs:

flipactive() {
        # first detect which cfg file haproxy is using
        ps_out=`ps -ef | grep haproxy | grep cfg`
        # if pair2 is active, switch to pair1
        if [[ "$ps_out" =~ pair2 ]]; then
             # -st starts a new haproxy process with the other config and tells
             # the old process (by pid) to stop once the new one is ready
             /usr/local/sbin/haproxy -f /etc/haproxy/haproxy_pair1.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)
        # if pair1 is active, switch to pair2
        else
             /usr/local/sbin/haproxy -f /etc/haproxy/haproxy_pair2.cfg -p /var/run/haproxy.pid -st $(cat /var/run/haproxy.pid)
        fi
}
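
With that in place, swapping the active pair (during a deployment, for example) is a single command on the HAProxy server:

/etc/init.d/haproxy flipactive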

The rest of the configuration details are covered pretty well in Alex's article as well as in the HAProxy documentation.

We successfully developed and implemented this technique with the devops team at SlideShare last month to build rolling DDL scripting to multiple databases using Capistrano.  It allowed them to have explicit control over which database was being updated, thereby giving them the means necessary to update one database while the other served the current application code.

Last Day at PalominoDB

I have been the Community Liaison and a Senior DBA at PalominoDB for 15 months, and doing remote DBA work for 4 years.  In that time I have learned that "consultant" need not be a dirty word, and that in the DBA world it is actually extremely valuable to have a remote DBA with lots of outside experience, and a team of remote DBAs for when your primary contact is sick or goes on holiday.

As with everything, there are downsides to remote database management.  Even though there is a lot of architecture experience among the remote DBAs I know, we are not often invited to architecture meetings.  This is because time is the unit of currency, and while sitting in an hour-long meeting to give 5 minutes of feedback can save hours down the road, it's hard to see that.  Many clients have gotten around this by having all DDL checked and performed by remote DBAs, and that helps a lot.

There is also no ownership - we can recommend solutions and technologies, but the client makes the actual decision about whether something needs to be done or not.  I look forward to actually owning architecture after 4 years of making "strong recommendations".

Since folks will ask, I have taken a job as a Senior DBA/Architect with Mozilla, starting Monday.  A former co-worker told me about the job; I was not particularly looking for anything, but I was intrigued.

I have said before that it is hard to find a good in-house DBA if you are not a huge company like Facebook or Google or Yahoo, and that is still true.  At Mozilla, they are 100% open and public about their ideas, and they do a lot of behind-the-scenes work.  Sound familiar?

They also allow their developers to develop on whatever platforms work best.  Their biggest database is their crash reporting database (and they do read it, so do submit your crashes).  They have MySQL, PostgreSQL, and MongoDB, and are starting to move some applications around, as developers are not always aware of what platforms will work best.  There is another DBA, so I will not be alone, but I expect to be just as challenged at Mozilla as I have been at PalominoDB and Pythian.

So, to keep up with PalominoDB, you can:

- like the PalominoDB page on Facebook

- follow @palominodb on Twitter

- connect with Laine on LinkedIn

- follow PalominoDB on LinkedIn

- continue to read PalominoDB's blog and Planet MySQL

 

To keep up with me, you can:

- follow @sheeri on Twitter

- read my blog and Planet MySQL

- connect with me on LinkedIn

- subscribe to the OurSQL podcast (details at http://www.oursql.com/?page_id=2)

Rails and Database Session Variables

Ruby's ActiveRecord provides a very robust means of manipulating database table structures, especially in the context of automated deployments with tools such as Capistrano.

In a master-slave database configuration, it is not always prudent to perform DDL statements on the master and let them propagate to the slave.  On high-volume sites, where Rails deployments may involve multiple migrations, the resulting replication lag can present significant problems.  In addition, when no downtime is scheduled, a DDL rollout may break the currently running application code, so a targeted, per-database deployment may be more prudent.

In MySQL, the solution is to set the session variable SQL_LOG_BIN = 0, which prevents subsequent statements in that session from being written to the binary log, so they are not replicated to the slaves.  Unfortunately, ActiveRecord's connection options do not accommodate setting session variables.  One way to set this variable would be to have developers set it explicitly in each migration file:

class AddAccounts < ActiveRecord::Migration
  def self.up
    execute("SET SESSION sql_log_bin=0")
    create_table :api_accounts do |t|
      t.column "user_id", :integer
      t.column "name", :text
      t.column "account", :text
      t.column "created_at", :datetime
      t.column "updated_at", :datetime
    end
  end

  def self.down
    execute("SET SESSION sql_log_bin=0")
    drop_table :api_accounts
  end
end

But since this would be a hassle for developers and is easily overlooked, we can instead monkey patch the ActiveRecord::Migration class so that the variable is set whenever migrations are invoked, and let our Capistrano-driven deployments pick it up automatically.  So we constructed a file, config/initializers/active_record.rb:

# opens the ActiveRecord::Migration class
# use alias_method to add functionality to the ActiveRecord::Migration.migrate method
class ActiveRecord::Migration
  class << self
    alias_method :migrate_conn, :migrate
    def migrate(direction)
      ActiveRecord::Base.connection_pool.with_connection do |conn|
        @connection = conn
        connection.execute("SET SQL_LOG_BIN=0")
        @connection = nil
      end
      migrate_conn(direction)
    end
  end
end

Note that the database user that invokes the migrations needs to have the SUPER privilege granted to it in order to be able to set this variable.
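
For example, assuming the migrations connect as a (hypothetical) deploy user, the grant would be along these lines; SUPER is a global privilege, so it has to be granted ON *.*:

GRANT SUPER ON *.* TO 'deploy'@'app.example.com';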

We successfully developed and implemented this technique with the devops team at SlideShare last month to build rolling DDL scripting to multiple databases using Capistrano.  It allowed them to have explicit control over which database was being updated, thereby giving them the means necessary to update one database while the other served the current application code.     

 

Non-deterministic Functions and the Binary Log

I wrote this post because I ran across this issue while debugging why tables written to by triggers/functions were not getting replicated to the slaves.  The problem appears when binlog_format is set to STATEMENT.  You can check the change log at the following link for more information about default values [1].

If you have non-deterministic functions [2] that insert or modify your data, like this:

DELIMITER $$
CREATE FUNCTION `insert_dynamic_data` (in_param VARCHAR(30)) RETURNS bigint(20)
BEGIN
  ...
  INSERT INTO `table_to_ins` (param) VALUES(in_param);
  ...
  /* You can also add an @@warning_count or row_count() check here
     and return -1 to catch errors */
  RETURN LAST_INSERT_ID();
END
$$

The problem starts when you plan to use replication through binary log events.  In that case, you must consider:

"If a stored program that modifies data is nondeterministic, it is not repeatable. This can result in different data on a master and slave, or cause restored data to differ from the original data."  [3]

When you create a function, it is NOT DETERMINISTIC by default, meaning MySQL assumes the same input parameters can produce different results on different executions.  Declaring it DETERMINISTIC, on the other hand, tells MySQL that the same parameters will always produce the same result, which is what makes the function safe to reproduce on a slave with statement-based replication.

If you try to create the function with binlog_format set to STATEMENT and log_bin enabled, but without log_bin_trust_function_creators, you will get the following error:

Version 5.1.41:
mysql> DELIMITER $$
mysql> CREATE FUNCTION fx_pru () RETURNS int
    -> BEGIN
    ->  insert into prueba select 'po',round(rand()*100), rand();
    -> RETURN 1;
    -> END
    -> $$
ERROR 1418 (HY000): This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

So what happened in our case?  The function had been created before log_bin was activated, so the error above was never raised.

You have two ways to fix it: set binlog_format to ROW, or declare log_bin_trust_function_creators in your /etc/my.cnf, which allows such functions to be created and written to the binary log without enforcing the deterministic property.  In our case, we cannot declare this function DETERMINISTIC because its parameters take dynamic values (we always expect different values and different results).
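
For reference, here is a minimal sketch of both options as they would appear in /etc/my.cnf (pick one; the values take effect on restart):

[mysqld]
# option 1: switch to row-based binary logging
binlog_format = ROW
# option 2: allow functions like this one to be created while binary logging is on
# log_bin_trust_function_creators = 1

Both variables can also be changed on a running server with SET GLOBAL (which requires the SUPER privilege), but the my.cnf entry is what makes the setting survive a restart.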

So what happens when the function is executed?  Basically, it creates lock contention (if you use Nagios, you will see a mysql-lock-contention alarm), but MySQL will not raise any error or alarm, which makes the problem confusing to track down.  Statements that run outside the function are executed normally; in the previous example the function only performed inserts, but no statement inside the function takes effect.

You will not see any errors; the only symptom you can observe is the lock contention.  The lock appears because the statement holds a lock on the table while waiting for a commit that never comes.

Is it a bug?  No, it is a feature: it prevents statements with unknown, and potentially slow, effects from being replicated and causing lag in the replication process.

For more about this and related problems with triggers, see [4].

1: http://www.filehippo.com/es/download_mysql/changelog/6962/
2: http://en.wikipedia.org/wiki/Nondeterministic_algorithm
3: http://dev.mysql.com/doc/refman/5.1/en/stored-programs-logging.html
4: http://dba.stackexchange.com/questions/321/dynamic-sql-in-mysql-stored-routines

How to recreate an InnoDB table after the tablespace has been removed

Does your error log ever get flooded with errors like this one?

 

[ERROR] MySQL is trying to open a table handle but the .ibd file for
table my_schema/my_logging_table does not exist.
Have you deleted the .ibd file from the database directory under
the MySQL datadir, or have you used DISCARD TABLESPACE?
See http://dev.mysql.com/doc/refman/5.0/en/innodb-troubleshooting.html how you can resolve the problem.

 

No? That is great!

 

We had a case where, in order to quickly solve a disk space issue, a SysAdmin decided to remove the biggest file in the filesystem, and of course this was an InnoDB table used for logging.

That is, he ran:

shell> rm /var/lib/mysql/my_schema/my_logging_table.ibd

He could have run TRUNCATE TABLE, but that's another story.

 

The results were not ideal:

  1. The table did not exist anymore.

  2. Errors in the application while trying to write to the table.

  3. MySQL flooding the error log.

 

The solution for this problem is to:

  • run DISCARD TABLESPACE ( InnoDB will remove insert buffer entries for that tablespace);

  • run DROP TABLE ( InnoDB will complain that the .ibd file doesn't exist, but it will remove the table from the internal data dictionary );

  • recover the CREATE TABLE statement from the latest backup ( you have backups, right? );

  • issue the CREATE TABLE statement to recreate the table.

 

Example:

mysql> ALTER TABLE my_logging_table DISCARD TABLESPACE;

Query OK, 0 rows affected (0.05 sec)

 

In the error log you will see something like:

InnoDB: Error: cannot delete tablespace 251
InnoDB: because it is not found in the tablespace memory cache.
InnoDB: Warning: cannot delete tablespace 251 in DISCARD TABLESPACE.
InnoDB: But let us remove the insert buffer entries for this tablespace.

 

mysql> DROP TABLE my_logging_table;

Query OK, 0 rows affected (0.16 sec)

 

In the error log you will see something like:

InnoDB: Error: table 'my_schema/my_logging_table'
InnoDB: in InnoDB data dictionary has tablespace id 251,
InnoDB: but tablespace with that id or name does not exist. Have
InnoDB: you deleted or moved .ibd files?
InnoDB: This may also be a table created with CREATE TEMPORARY TABLE
InnoDB: whose .ibd and .frm files MySQL automatically removed, but the
InnoDB: table still exists in the InnoDB internal data dictionary.
InnoDB: Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.0/en/innodb-troubleshooting.html
InnoDB: for how to resolve the issue.
InnoDB: We removed now the InnoDB internal data dictionary entry
InnoDB: of table `my_schema/my_logging_table`.

 

And finally:

mysql> CREATE TABLE `my_logging_table` (
(... omitted ...)
-> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Query OK, 0 rows affected (0.05 sec)

 

Of course, the final step is a stern talking-to with the SysAdmin.

Getting a List of Users From the MySQL General Log

From time to time, organizations want to know whether they have any database users that are no longer used.  For clients on MySQL 5.1 and up that can handle a 1% overhead for general logging, we enable the general log for a period of time and analyze the users that connect.

Note: we have some extremely busy clients, and we very rarely have a problem turning the general log on, other than making sure we're rotating and compressing logs so we do not run out of disk space.
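
In 5.1 and up the general log can be toggled at runtime, so no restart is needed; a minimal sketch (the log file path is just an example) looks like this:

SET GLOBAL general_log_file = '/var/log/mysql/general.log';
SET GLOBAL general_log = 'ON';
-- ...collect connections for a while, then turn it back off:
SET GLOBAL general_log = 'OFF';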

Once we have the logs, I run this little perl tool I made -- I call it genlog_users.pl:

#!/usr/bin/perl
use strict;
use warnings;

my $infile = $ARGV[0];
my %seen   = ();

open( INPUT, "<", $infile ) or die "Cannot open $infile: $!";
while (<INPUT>) {
  my $line = $_;
  # only look at connection lines, e.g. "... Connect  user@host on dbname"
  if ( $line =~ /Connect/ ) {
    # remember each user@host token the first time it is seen
    if ( $line =~ /(\S*@\S*)/ ) { $seen{$1}++; }
  } # end if line matches Connect
}
close INPUT;

open( OUTPUT, ">>", "users.txt" ) or die "Cannot open users.txt: $!";
$, = "\n";
print OUTPUT ( keys %seen );
print OUTPUT ("\n");
close OUTPUT;

 

----------

I hope it is useful for whoever stumbles on this; I know it has been useful for me in the past -- it's just doing some string matching, and I bet if I used Python it would be done in half the lines, but it's already fewer than 20 lines, so it's pretty small to begin with.

PalominoDB Percona Live: London Slides are up!

 

Percona Live: London was a rousing success for PalominoDB.  I was sad that I could not attend, but a few people sent "hellos" to me via my coworkers.  But on to the most important stuff -- slides from our presentations are online!

René Cannao spoke about MySQL Backup and Recovery Tools and Techniques (description) slides (PDF)

 

Jonathan delivered a 3-hour tutorial about Advanced MySQL Scaling Strategies for Developers (description) slides (PDF)

Enjoy!

Automatic Downtimes in Nagios Without Using Time Periods

Monitoring systems are a great thing, and we rely heavily on Nagios here at PalominoDB.  We also rely heavily on xtrabackup for (mostly) non-locking, "warm" backups of MySQL.  In order to get a consistent backup with a proper binlog position on a slave, xtrabackup stops replication for a short period of time.  If the monitoring system catches this, the pager will go off, usually in the middle of the night.

Nagios can be given specific time windows when checks are on or off for a particular host or service, but then you have to remember to update them whenever you change the time the backup runs.  I prefer to call a script from cron, just before I call the backup script, so that I can easily see that the two are related.

Note that this script works even if you have Nagios password protected with .htaccess, because wget allows for a username and password.  This script is not perfect - there is not a lot of error checking, and log files are hard-coded.  But it does what it needs to.

The script is called like this:

./scheduleDowntimeByService.pl service length host

Sample cron entries (note that the service names depend on what you are checking and on how the services themselves are named):

00 5 * * 0 /home/palominodb/bin/scheduleDowntimeByService.pl MySQL+Replication+Lag 3600 hostname

 

00 5 * * 0 /home/palominodb/bin/scheduleDowntimeByService.pl MySQL+Slave+IO+Thread 3600 hostname

 

00 5 * * 0 /home/palominodb/bin/scheduleDowntimeByService.pl MySQL+Slave+SQL+Thread 3600 hostname

Here is the script itself:

 

cat scheduleDowntimeByService.pl 

#!/usr/bin/perl
use strict;
use POSIX;

my $service = shift @ARGV;
my $length  = shift @ARGV;
my $host    = shift @ARGV;

# default to one hour of downtime if no length was given
unless ($length) {
        $length = 3600;
}

my $startTime = time();
my $endTime   = $startTime + $length;

my $nagios_start = POSIX::strftime("%m-%d-%Y %H:%M:00", localtime($startTime));
my $nagios_end   = POSIX::strftime("%m-%d-%Y %H:%M:00", localtime($endTime));

# URL-encode the timestamps for the Nagios CGI
$nagios_start =~ s@:@%3A@g;
$nagios_start =~ s@-@%2D@g;
$nagios_start =~ s@ @%20@g;
$nagios_end =~ s@:@%3A@g;
$nagios_end =~ s@-@%2D@g;
$nagios_end =~ s@ @%20@g;

# cmd_typ=56 is SCHEDULE_SVC_DOWNTIME in the Nagios CGI
my $URL = 'https://monitoring.company.com/nagios/cgi-bin/cmd.cgi?fixed=1&start_time=' . $nagios_start . '&end_time=' . $nagios_end . '&cmd_typ=56&cmd_mod=2&host=' . $host . '&service=' . $service . '&com_data=Backups&btnSubmit=Commit';

my $cmd = "/usr/bin/wget --quiet --user=user --password=PASS -O /tmp/nagios_downtime.html '$URL'";

open( L, ">>/tmp/nagios_downtime.log" );
print L $cmd . "\n";
print L `$cmd`;
close L;

Exporting the mysql.slow_log table into slow query log format

Using pt-query-digest is an excellent way to perform a SQL review. However, sometimes you don't have access to the slow_query_log_file. For example, when MySQL runs on Amazon RDS, the slow_query_log_file is unavailable (see the RDS FAQ). To get around this, export the mysql.slow_log table.

To export the data, run the following SQL command from an instance that has database access. The data is exported into a MySQL slow-log format, which pt-query-digest can analyze:

mysql -u user -p -h host.rds.amazonaws.com -D mysql -s -r -e "SELECT CONCAT( '# Time: ', DATE_FORMAT(start_time, '%y%m%d %H%i%s'), '\n', '# User@Host: ', user_host, '\n', '# Query_time: ', TIME_TO_SEC(query_time),  '  Lock_time: ', TIME_TO_SEC(lock_time), '  Rows_sent: ', rows_sent, '  Rows_examined: ', rows_examined, '\n', sql_text, ';' ) FROM mysql.slow_log" > /tmp/mysql.slow_log.log

The -s (--silent) option is used to suppress echoing of the query.

The -r (--raw) option disables character escaping, so that \n is written as an actual newline instead of the literal characters '\n'.
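
With those options, each exported row comes out looking like a regular slow-log entry; the values below are made up purely to illustrate the format:

# Time: 111115 103042
# User@Host: app_user[app_user] @ app-host [10.0.0.12]
# Query_time: 3  Lock_time: 0  Rows_sent: 25  Rows_examined: 182032
SELECT ...;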

Once the export is complete, run pt-query-digest to do the analysis. A simple review command is:

pt-query-digest --limit 100% /tmp/mysql.slow_log.log > /tmp/query-digest.txt

query-digest.txt is now ready for review.

I won USD$100 and you can, too

Last week, I won USD$100*.  Gazzang is giving away $100 every Friday in October, in honor of National Cyber Security Awareness Month.  There are only 2 more weeks left -- all you have to do is tweet @Gazzang_Inc one way to improve your own cyber security.

My winning tweet:

 one way is to secure MySQL! With Gazzang or the tips in  or .

The tip can be any kind of cyber security.  I believe you have to tweet on Friday, though.....so wait until tomorrow!

* I donated my winnings to http://www.technocation.org