Currently https://cryptfolio.com is running off a single MySQL server. I want to switch this to a master/slave setup, which I expect will:
- increase performance slightly (0.10% of queries are currently blocked by updates) - but not by much, since the database is very write-heavy (59% of queries are writes)
- increase visible performance greatly (often graphs are blocked by pending writes)
- improve reliability (if the slave goes down, I can point web servers to master; if the master goes down, I can enable read-only site mode)
- cost more money to run (since I’ll be running two nodes rather than one) but still much cheaper (and more interesting) than going through S3
- databases suitable for memcachedb/keystores are not yet used that heavily
I’ll be using the MySQL guide to set up replication and cover my steps here.
1. Setup master
- Add to `my.cnf`:

```
[mysqld]
log-bin=mysql-bin   # enable binary logging
server-id=1         # this ID cannot be used by any other server
```
- Create a new slave user
```sql
CREATE USER 'repl'@'%.mydomain.com' IDENTIFIED BY 'slavepass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.mydomain.com';
```
- Restart server
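Before moving on, it's worth confirming that binary logging actually came on after the restart - a quick sanity check of my own, not part of the original guide:

```sql
-- On the master, after restarting:
SHOW VARIABLES LIKE 'log_bin';  -- should report ON
SHOW MASTER STATUS;             -- should list the current binlog file and position
```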
2. Setup slave
- Purchase new Linode.
- Set up MySQL, iptables and SSH appropriately.
- Tune MySQL with `mysqltuner` and previous experience.
- Add to `my.cnf`:

```
[mysqld]
server-id=99   # this ID cannot be used by any other server
```
- Restart server
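As on the master, a quick check (my addition) that the new server ID took effect after the restart:

```sql
-- On the slave, after restarting:
SHOW VARIABLES LIKE 'server_id';  -- should report 99
```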
3. Locate master’s position in binary log
Make sure you have an existing MySQL session open. Also have another command line ready to export the current master database snapshot with `mysqldump`.
- Open up a new MySQL session and execute `FLUSH TABLES WITH READ LOCK;` to lock all tables and prevent database inconsistency
- In the other, existing MySQL session, execute `SHOW MASTER STATUS;` to display the current binary log position
- In another command line, execute `mysqldump --all-databases --master-data > dbdump.db` to create a master database snapshot
- Close the new MySQL session to release the database lock once the dump is complete
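One detail worth knowing: the `--master-data` flag writes the replication coordinates into the dump itself as a `CHANGE MASTER` statement. Near the top of `dbdump.db` you should find a line like this (the file name and position here are made-up examples):

```sql
-- Written automatically by mysqldump --master-data:
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=73;
```

This should match what `SHOW MASTER STATUS` reported while the tables were locked.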
4. Import the master snapshot and initialise the slave
- Copy over the snapshot from master to slave:
scp user@hostname:/path/to/dbdump.db ~/dbdump.db
- Shut down slave MySQL. Edit `my.cnf` and set `skip-slave-start=true` (so that the slave doesn’t start replicating while we’re importing data). Start up MySQL.
- Import the database into slave:
mysql -u user -p < dbdump.db
- Log into slave MySQL. Execute `SHOW SLAVE STATUS;` to make sure that `Slave_IO_Running=No` (master/slave binlog) and `Slave_SQL_Running=No` (importing SQL). Make sure that `Read_Master_Log_Pos` is the same value as reported by `SHOW MASTER STATUS` earlier.
- Configure the master connection data: `CHANGE MASTER TO master_host='...', master_user='...', master_password='...';` (the log position and file will already have been set by the `--master-data` option to mysqldump)
- Start the slave: `START SLAVE;`

`SHOW SLAVE STATUS;` is your friend and primary means to understand what’s going on. Interesting keys:
- `Slave_IO_Running` - if Yes, then the slave is connected to the master (should be Yes)
- `Slave_SQL_Running` - if Yes, then the slave is processing SQL queries from the binlog (should be Yes)
- `Last_Error` - if `Slave_SQL_Running` is No, this field will display the last SQL error that caused sync to fail (should be empty)
- `Seconds_Behind_Master` - an estimate of how far the slave is behind the master. Should be 0.
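If for some reason the dump was taken without `--master-data`, the coordinates can be supplied by hand - a sketch, where the hostname, log file and position are placeholders to be filled in from your own `SHOW MASTER STATUS` output:

```sql
CHANGE MASTER TO
  master_host='master.example.com',    -- placeholder hostname
  master_user='repl',
  master_password='slavepass',
  master_log_file='mysql-bin.000003',  -- file from SHOW MASTER STATUS
  master_log_pos=73;                   -- position from SHOW MASTER STATUS
START SLAVE;
```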
Once you’ve got everything set up and running, edit `my.cnf`, disable `skip-slave-start`, and restart the MySQL server to make sure everything works OK.
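As a final smoke test (my own addition, using a throwaway database name), you can push a row through replication end to end:

```sql
-- On the master (repltest is a throwaway database for this check):
CREATE DATABASE repltest;
CREATE TABLE repltest.ping (id INT PRIMARY KEY);
INSERT INTO repltest.ping VALUES (1);

-- On the slave, a moment later - the row should have arrived:
SELECT * FROM repltest.ping;

-- Clean up on the master (the DROP replicates too):
DROP DATABASE repltest;
```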
Lots of “Duplicate entry” errors
Make sure that you aren’t accidentally writing to your slave database! Your slave database will have all the same users and permissions, so bad writes won’t be noticed.
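One guard against this (an extra precaution, not part of the steps above) is MySQL’s `read_only` flag on the slave - though note it does not restrict users with the SUPER privilege, or the replication SQL thread itself:

```sql
-- On the slave (or put read_only = 1 under [mysqld] in my.cnf):
SET GLOBAL read_only = 1;
```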
- Check whether the duplicate entries are on auto_increment fields and primary keys, and whether the data is already in the slave database. If so, you might be able to skip the failing query with `STOP SLAVE; SET GLOBAL sql_slave_skip_counter = 2; START SLAVE;`
- If there are lots of duplicate primary keys, maybe the mysqldump failed. You can try temporarily setting `slave_skip_errors=1062` in `my.cnf` to get past the initial failing queries, and then turning it off once the slave catches up to the master.