Rails and Database Session Variables

Ruby's ActiveRecord provides a robust means of manipulating database table structures, especially in the context of automated deployments with tools such as Capistrano.

In a master-slave database configuration, it is not always prudent to run DDL statements on the master and let them propagate to the slaves. On high-volume sites, a Rails deployment may involve multiple migrations, and the resulting replication lag can present significant problems. In addition, when no downtime is scheduled, a DDL rollout may break the currently running application code, so a targeted deployment may be more prudent.

In MySQL, the solution is to set the session variable SQL_LOG_BIN = 0, which prevents subsequent statements on that connection from being written to the binary log, so they are not replicated to slaves. Unfortunately, ActiveRecord's connection options do not accommodate setting session variables. One way to set this variable is to have developers do so explicitly in each migration file:

class AddAccounts < ActiveRecord::Migration
  def self.up
    execute("SET SESSION sql_log_bin=0")
    create_table :api_accounts do |t|
      t.column "user_id", :integer
      t.column "name", :text
      t.column "account", :text
      t.column "created_at", :datetime
      t.column "updated_at", :datetime
    end
  end

  def self.down
    execute("SET SESSION sql_log_bin=0")
    drop_table :api_accounts
  end
end

But since this is a hassle for developers and easily overlooked, we can instead monkey patch the ActiveRecord::Migration class so that the variable is set whenever migrations are invoked. So we constructed a file, config/initializers/active_record.rb:

#opens the ActiveRecord::Migration class
#use alias_method to add functionality to the ActiveRecord::Migration.migrate method
class ActiveRecord::Migration
  class << self
    alias_method :migrate_conn, :migrate
    def migrate(direction)
      ActiveRecord::Base.connection_pool.with_connection do |conn|
        conn.execute("SET SESSION sql_log_bin=0")
        migrate_conn(direction)
      end
    end
  end
end

Note that the database user that invokes the migrations needs to have the SUPER privilege granted to it in order to be able to set this variable.

We developed and implemented this technique with the devops team at SlideShare last month to build rolling DDL deployments to multiple databases using Capistrano. It gave them explicit control over which database was being updated, so they could update one database while the other served the current application code.
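The alias_method wrapping pattern used in that initializer can be seen in isolation on a plain Ruby class, with no Rails involved. The Migration class and setup flag below are illustrative stand-ins, not part of ActiveRecord:

```ruby
# A toy class standing in for ActiveRecord::Migration.
class Migration
  def self.migrate(direction)
    "migrated #{direction}"
  end
end

# Reopen the class: the alias keeps a handle on the original
# class method, and the new definition runs extra setup before
# delegating to it -- the same shape as the initializer above.
class Migration
  class << self
    alias_method :migrate_conn, :migrate

    def migrate(direction)
      @setup_ran = true        # stands in for "SET SESSION sql_log_bin=0"
      migrate_conn(direction)  # call the original implementation
    end

    def setup_ran?
      @setup_ran
    end
  end
end

puts Migration.migrate(:up)  # prints "migrated up"
puts Migration.setup_ran?    # prints "true"
```

Because the alias is taken before the method is redefined, callers see the original behavior plus the extra setup, without any migration file having to change.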


New England Database Summit

The New England Database Summit is an all-day, conference-style event where participants from the research community and industry in the New England area come together to present ideas and discuss their research and experiences working on data-related problems. It is an academic conference with real-world applications, and it covers every type of database.

The 5th annual NEDB Summit will be held at MIT (room 32-123) in Cambridge, MA on Friday, February 3, 2012. Anyone who would like to is welcome to present a poster (registration required) or submit a short paper for review. We plan to accept 8-10 papers for 15-minute presentations at the meeting. All posters will be accepted.

For more details, and to register and/or upload a paper, see:

Non-deterministic Functions and the Binary Log

I wrote this post because I ran across this issue while debugging why tables written to by triggers and functions were not getting replicated to slaves. The problem appears when binlog_format is set to STATEMENT. Check the change log at the following link for more information about default values [1].

If you have non-deterministic functions [2] that insert or modify your data, like this one:

DELIMITER $$
CREATE FUNCTION `insert_dynamic_data` (in_param VARCHAR(30)) RETURNS bigint(20)
BEGIN
  INSERT INTO `table_to_ins` (param) VALUES (in_param);
  /* You could also check @@warning_count or ROW_COUNT() here
     and return -1 to catch errors */
  RETURN LAST_INSERT_ID();
END $$
DELIMITER ;

The problem starts when you plan to use replication through binary log events. In that case, you must consider:

"If a stored program that modifies data is nondeterministic, it is not repeatable. This can result in different data on a master and slave, or cause restored data to differ from the original data."  [3]

When you create a function, it is NOT DETERMINISTIC by default. DETERMINISTIC is a declaration that the function always produces the same result for the same input parameters; MySQL does not verify the claim, but it does use the declaration to decide whether the function is safe to write to a statement-based binary log.

If you try to create a function with binlog_format set to STATEMENT and log_bin enabled, but without log_bin_trust_function_creators, you will get the following error:

Version 5.1.41:
mysql> DELIMITER $$
mysql> CREATE FUNCTION fx_pru () RETURNS int
    -> BEGIN
    ->  insert into prueba select 'po',round(rand()*100), rand();
    -> RETURN 1;
    -> END
    -> $$
ERROR 1418 (HY000): This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

So what happened in my case? The problem was that the function had been created before log_bin was activated, so this check never ran.

You have two ways to fix it: set binlog_format to ROW, or declare log-bin-trust-function-creators in your /etc/my.cnf, which allows such functions to be written to the binary log without enforcing the deterministic property. In this particular case, we cannot declare the function DETERMINISTIC, because it takes dynamic values as parameters (we always expect different values and different results).
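Concretely, the two fixes look like this. The SET GLOBAL statements take effect immediately (they require the SUPER privilege); the my.cnf line makes the second fix survive a restart:

```sql
-- Fix 1: switch to row-based binary logging
SET GLOBAL binlog_format = 'ROW';

-- Fix 2: keep statement-based logging, but allow creation of
-- functions that are not declared DETERMINISTIC
SET GLOBAL log_bin_trust_function_creators = 1;
```

And in /etc/my.cnf:

```
[mysqld]
log-bin-trust-function-creators = 1
```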

What happens when the function is executed? Basically, it creates lock contention (if you use Nagios, you will see a mysql-lock-contention alarm), but MySQL will not raise any error or alarm, which causes confusion when trying to find the problem. Statements that run outside the function are executed normally, but the statements inside the function are not. In the example above, the function only performed inserts, and none of them took effect.

You won't see any errors; the only thing you may notice is the lock contention. The lock is generated because the statement holds a lock on the table while waiting for a commit that never comes.

Is it a bug? No, it is a feature: it prevents unknown and potentially unsafe executions from being replicated and lagging the replication process.

For this and related problems with triggers, I recommend the following link: [4]


How to recreate an InnoDB table after the tablespace has been removed

Does your error log ever get flooded with errors like this one?


[ERROR] MySQL is trying to open a table handle but the .ibd file for
table my_schema/my_logging_table does not exist.
Have you deleted the .ibd file from the database directory under
the MySQL datadir, or have you used DISCARD TABLESPACE?
See how you can resolve the problem.


No? That is great!


We had a case where, in order to quickly solve a disk space issue, a SysAdmin decided to remove the biggest file in the filesystem, and of course this was an InnoDB table used for logging.

That is, he ran:

shell> rm /var/lib/mysql/my_schema/my_logging_table.ibd

He could have run TRUNCATE TABLE, but that's another story.


The results were not ideal:

  1. The table did not exist anymore.

  2. Errors in the application while trying to write to the table.

  3. MySQL flooding the error log.


The solution for this problem is to:

  • run DISCARD TABLESPACE ( InnoDB will remove insert buffer entries for that tablespace);

  • run DROP TABLE ( InnoDB will complain that the .ibd file doesn't exist, but it will remove the table from the internal data dictionary );

  • recover the CREATE TABLE statement from the latest backup ( you have backups, right? );

  • issue the CREATE TABLE statement to recreate the table.



mysql> ALTER TABLE my_logging_table DISCARD TABLESPACE;

Query OK, 0 rows affected (0.05 sec)


In the error log you will see something like:

InnoDB: Error: cannot delete tablespace 251
InnoDB: because it is not found in the tablespace memory cache.
InnoDB: Warning: cannot delete tablespace 251 in DISCARD TABLESPACE.
InnoDB: But let us remove the insert buffer entries for this tablespace.


mysql> DROP TABLE my_logging_table;

Query OK, 0 rows affected (0.16 sec)


In the error log you will see something like:

InnoDB: Error: table 'my_schema/my_logging_table'
InnoDB: in InnoDB data dictionary has tablespace id 251,
InnoDB: but tablespace with that id or name does not exist. Have
InnoDB: you deleted or moved .ibd files?
InnoDB: This may also be a table created with CREATE TEMPORARY TABLE
InnoDB: whose .ibd and .frm files MySQL automatically removed, but the
InnoDB: table still exists in the InnoDB internal data dictionary.
InnoDB: Please refer to
InnoDB: for how to resolve the issue.
InnoDB: We removed now the InnoDB internal data dictionary entry
InnoDB: of table `my_schema/my_logging_table`.


And finally:

mysql> CREATE TABLE `my_logging_table` (
(... omitted ...)

Query OK, 0 rows affected (0.05 sec)


Of course, the final step is a stern talking-to with the SysAdmin.

Announcing PalominoDB's Non Profit Program

I've always had a dream of being able to use what we are doing at PalominoDB not only for our for-profit clients, but for those who go out day to day helping people in need.  We always try to work with clients who make people's lives better, but there are those whose entire purpose is to provide aid, education and empowerment to those who are disadvantaged or whose freedoms are at risk.  I am constantly inspired by companies such as Worldreader, a perfect example, who work to provide e-readers to those who have no access to books or libraries in places such as Kenya.  The challenge has always been how to provide support from our team when they are constantly busy.  In a growing organization, resources are tight, and we are blessed with non-stop work from world-class clients.


As we've grown to a sustainable size, I've had more time to think about issues such as this, and I would like to announce our newest program at PalominoDB: donation of hours.  A good portion of our clients are on retainer agreements with a monthly minimum.  Sometimes work is light, sometimes it is heavy, and some clients just keep us around as insurance and rarely use their hours.  Regardless, we have to staff for a certain workload and thus must enforce the minimum.  I've always found myself frustrated at having to charge for hours not worked, and I constantly brainstorm ways to provide maximum benefit to our regular clients.


PalominoDB would now like to announce our donation-of-hours program, whereby we are setting up relationships with non-profits who can use our resources, and the hours for this work will be donated by our clients who have unused hours and wish to see them go to a good cause rather than paying the 50% unused-hours rate.  Clients can also donate a fixed number of hours per month to the program.  We will start our program with one non-profit, Worldreader, and will solicit other organizations that can make effective use of our resources.  Should you know of any deserving organizations, please don't hesitate to let us know!


You can see a video of Worldreader's work here.  Please take a look at their donations page here, as there are great opportunities to donate for e-readers or books or to sponsor classes and schools.

Getting a List of Users From the MySQL General Log

From time to time, organizations want to know whether any of their database users are unused.  For clients on MySQL 5.1 and up that can handle the roughly 1% overhead of general logging, we enable the general log for a period of time and analyze the users that connect.

Note: we have some extremely busy clients, and we very rarely have a problem turning the general log on, other than making sure we're rotating and compressing logs so we do not run out of disk space.

Once we have the logs, I run a little Perl tool I wrote:



use strict;
use warnings;

my $infile = $ARGV[0];
my %seen   = ();
my @uniq   = ();

open( INPUT, "<", $infile ) or die "Cannot open $infile: $!";
while (<INPUT>) {
  my $line = $_;
  if ( $line =~ /Connect/ ) {
    # capture the user@host token from the Connect line
    if ( $line =~ /(\S*@\S*)/ ) { push( @uniq, $1 ) unless $seen{$1}++; }
  } # end if line matches Connect
}
close INPUT;

open( OUTPUT, ">>", "users.txt" ) or die "Cannot open users.txt: $!";
print OUTPUT join( "\n", @uniq );
print OUTPUT "\n";
close OUTPUT;



I hope it is useful for whoever stumbles on it; I know it has been useful for me in the past.  It's just doing some string matching, and I bet it would take half as many lines in Python, but it's already fewer than 20 lines, so it's pretty small to begin with.
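For comparison, the same extraction can be sketched in a few lines of Ruby; the general-log fragment below is made up for illustration:

```ruby
require 'set'

# A made-up fragment of a MySQL general log.
log = <<~LOG
  111015 10:00:01     1 Connect   app@10.0.0.5 on mydb
  111015 10:00:02     1 Query     SELECT 1
  111015 10:00:03     2 Connect   batch@10.0.0.9 on mydb
  111015 10:00:04     3 Connect   app@10.0.0.5 on mydb
LOG

# Collect each distinct user@host token seen on a Connect line.
users = Set.new
log.each_line do |line|
  next unless line.include?('Connect')
  m = line.match(/(\S+@\S+)/)
  users << m[1] if m
end

puts users.to_a.sort  # prints app@10.0.0.5 and batch@10.0.0.9
```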

PalominoDB Percona Live: London Slides are up!


Percona Live: London was a rousing success for PalominoDB.  I was sad that I could not attend, but a few people sent "hellos" to me via my coworkers.  But on to the most important stuff -- slides from our presentations are online!

René Cannao spoke about MySQL Backup and Recovery Tools and Techniques (description) slides (PDF)


Jonathan delivered a 3-hour tutorial about Advanced MySQL Scaling Strategies for Developers (description) slides (PDF)


Automatic Downtimes in Nagios Without Using Time Periods

Monitoring systems are a great thing, and we rely heavily on Nagios here at PalominoDB.  We also rely heavily on xtrabackup for (mostly) non-locking, "warm" backups of MySQL.  In order to get a consistent backup with a proper binlog position on a slave, xtrabackup stops replication for a short period of time.  If the monitoring system catches this, the pager will go off, and usually in the middle of the night.

Nagios can have specific windows when monitoring is on or off for a particular host or service, but then you have to remember to update those windows whenever you change the time the backup runs.  I prefer to call a script from cron, just before I call the backup script, so that I can easily see that the two are related.

Note that this script works even if you have Nagios password protected with .htaccess, because wget allows for a username and password.  This script is not perfect - there is not a lot of error checking, and log files are hard-coded.  But it does what it needs to.

The script is called like this:

./scheduleDowntimeByService service length host

Sample cron entries (note that the service name depends on what you are checking and on how your services are named):

00 5 * * 0 /home/palominodb/bin/scheduleDowntimeByService MySQL+Replication+Lag 3600 hostname

00 5 * * 0 /home/palominodb/bin/scheduleDowntimeByService MySQL+Slave+IO+Thread 3600 hostname

00 5 * * 0 /home/palominodb/bin/scheduleDowntimeByService MySQL+Slave+SQL+Thread 3600 hostname

Here is the script itself:




use strict;
use warnings;
use POSIX;

my $service = shift @ARGV;
my $length  = shift @ARGV;
my $host    = shift @ARGV;

# default to one hour of downtime
unless ($length) {
    $length = 3600;
}

my $startTime = time();
my $endTime   = $startTime + $length;

my $nagios_start = POSIX::strftime("%m-%d-%Y %H:%M:00", localtime($startTime));
my $nagios_end   = POSIX::strftime("%m-%d-%Y %H:%M:00", localtime($endTime));

# percent-encode the ':', '-' and ' ' characters in the timestamps
$nagios_start =~ s@:@%3A@g;
$nagios_start =~ s@-@%2D@g;
$nagios_start =~ s@ @%20@g;
$nagios_end   =~ s@:@%3A@g;
$nagios_end   =~ s@-@%2D@g;
$nagios_end   =~ s@ @%20@g;

# the empty string below should be your Nagios cmd.cgi URL,
# up to and including 'start_time='
my $URL = '' . $nagios_start . '&end_time=' . $nagios_end
        . '&cmd_typ=56&cmd_mod=2&host=' . $host . '&service=' . $service
        . '&com_data=Backups&btnSubmit=Commit';

my $cmd = "/usr/bin/wget --quiet --user=user --password=PASS -O /tmp/nagios_downtime.html '$URL'";

open( L, ">>/tmp/nagios_downtime.log" );
print L $cmd . "\n";
print L `$cmd`;
close L;
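Rather than hand-rolling the percent-encoding with substitutions, a standard URL-encoding routine can do it. Here is a sketch in Ruby (the function name and the fixed epoch are illustrative, not from the script above). Note that encode_www_form_component encodes spaces as '+', which CGI programs such as Nagios's cmd.cgi generally decode back to a space:

```ruby
require 'uri'

# Build the encoded start/end timestamps for a downtime window,
# in the same "MM-DD-YYYY HH:MM:00" shape the Perl script uses.
def downtime_params(start_epoch, length_seconds)
  start_s = Time.at(start_epoch).utc.strftime('%m-%d-%Y %H:%M:00')
  end_s   = Time.at(start_epoch + length_seconds).utc.strftime('%m-%d-%Y %H:%M:00')
  {
    'start_time' => URI.encode_www_form_component(start_s),
    'end_time'   => URI.encode_www_form_component(end_s),
  }
end

params = downtime_params(1_325_577_600, 3600)  # 2012-01-03 08:00 UTC, one hour
puts params['start_time']  # prints "01-03-2012+08%3A00%3A00"
puts params['end_time']    # prints "01-03-2012+09%3A00%3A00"
```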

Exporting the mysql.slow_log table into slow query log format

Using pt-query-digest is an excellent way to perform a SQL review. However, sometimes you don't have access to the slow_query_log_file. For example, when MySQL runs on Amazon RDS, the slow_query_log_file is unavailable (see the RDS FAQ). To get around this, export the mysql.slow_log table.

To export the data, run the following SQL command from an instance that has database access. The data is exported into a MySQL slow-log format, which pt-query-digest can analyze:

mysql -u user -p -h -D mysql -s -r -e "
  SELECT CONCAT(
    '# Time: ', DATE_FORMAT(start_time, '%y%m%d %H%i%s'), '\n',
    '# User@Host: ', user_host, '\n',
    '# Query_time: ', TIME_TO_SEC(query_time),
    '  Lock_time: ', TIME_TO_SEC(lock_time),
    '  Rows_sent: ', rows_sent,
    '  Rows_examined: ', rows_examined, '\n',
    sql_text, ';'
  ) FROM mysql.slow_log" > /tmp/mysql.slow_log.log

The -s (--silent) option is used to suppress echoing of the query.

The -r (--raw) option is used to disable character escaping making \n into an actual newline, otherwise it's echoed as '\n'.
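To make the expected output concrete, here is a sketch in Ruby of the same per-row formatting the CONCAT() expression performs (the sample row is made up):

```ruby
# Format one mysql.slow_log row as a slow-query-log entry,
# mirroring the CONCAT() expression in the export command.
def slow_log_entry(row)
  [
    "# Time: #{row[:start_time].strftime('%y%m%d %H%M%S')}",
    "# User@Host: #{row[:user_host]}",
    "# Query_time: #{row[:query_time]}  Lock_time: #{row[:lock_time]}  " \
    "Rows_sent: #{row[:rows_sent]}  Rows_examined: #{row[:rows_examined]}",
    "#{row[:sql_text]};",
  ].join("\n")
end

row = {
  start_time:    Time.utc(2011, 10, 15, 10, 0, 1),
  user_host:     "app[app] @ [10.0.0.5]",
  query_time:    2,
  lock_time:     0,
  rows_sent:     10,
  rows_examined: 20000,
  sql_text:      "SELECT * FROM big_table",
}
puts slow_log_entry(row)
```

Each entry in the exported file has this shape, which is exactly what pt-query-digest parses.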

Once the export is complete, run pt-query-digest to do the analysis. A simple review command is:

pt-query-digest --limit 100% /tmp/mysql.slow_log.log > /tmp/query-digest.txt

query-digest.txt is now ready for review.

I won USD$100 and you can, too

Last week, I won USD$100*.  Gazzang is giving away $100 every Friday in October, in honor of National Cyber Security Awareness Month.  There are only 2 more weeks left -- all you have to do is tweet @Gazzang_Inc one way to improve your own cyber security.

My winning tweet:

 one way is to secure MySQL! With Gazzang or the tips in  or .

The tip can be about any kind of cyber security.  I believe you have to tweet on a Friday, so wait until tomorrow!

* I donated my winnings to