- increase performance (0.10% of queries are currently blocked by updates), though not by much, since the database is very write-heavy (59% of queries)
- greatly increase visible performance (graphs are often blocked by pending writes, especially to the summary_instances and balances tables)
- improve reliability (if the slave goes down, I can point web servers to master; if the master goes down, I can enable read-only site mode)
- cost more money to run (since I'll be running two nodes rather than one) but still much cheaper (and more interesting) than going through S3
- databases suitable for memcachedb/keystores are not yet used that heavily
1. Setup master (see the MySQL manual section "Setting the Replication Master Configuration")
- Edit my.cnf and set:

```
[mysqld]
log-bin=mysql-bin   # enable binary logging
server-id=1         # this ID cannot be used by any other server
```

- Create a new slave user:

```
CREATE USER 'repl'@'%.mydomain.com' IDENTIFIED BY 'slavepass';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.mydomain.com';
```

- Restart the server.
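After the restart, it's worth confirming that binary logging is actually on; a quick sanity check from any MySQL session on the master:

```
SHOW VARIABLES LIKE 'log_bin';   -- should report ON
SHOW MASTER STATUS;              -- should show the current mysql-bin file and position
```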
2. Setup slave
- Purchase new Linode.
- Set up MySQL, iptables and SSH appropriately.
- Tune MySQL with mysqltuner and previous experience.
- Edit my.cnf and set:

```
[mysqld]
server-id=99   # this ID cannot be used by any other server
```

- Restart the server.
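As on the master, you can confirm the new server ID took effect after the restart:

```
SHOW VARIABLES LIKE 'server_id';   -- should report 99
```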
3. Locate master's position in binary log
Make sure you have an existing MySQL session open, and have another command line ready to export the current master database snapshot with mysqldump. (A consolidated sketch of the whole sequence follows this list.)
- Open up a new MySQL session and execute FLUSH TABLES WITH READ LOCK; to lock all tables to prevent database inconsistency
- In the other existing MySQL session, execute SHOW MASTER STATUS; to display the current binary log position
- In another command line, execute mysqldump --all-databases --master-data > dbdump.db to create a master database snapshot
- Close the new MySQL session to release the database lock once the dump is complete
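Putting step 3 together, a minimal sketch of the whole sequence (two MySQL sessions plus a shell; the shell command is shown as a comment to keep everything in one block):

```
-- Session A (the new MySQL session): hold the global read lock
FLUSH TABLES WITH READ LOCK;

-- Session B (the existing MySQL session): note the File and Position values
SHOW MASTER STATUS;

-- Shell, while the lock is still held:
--   mysqldump -u root -p --all-databases --master-data > dbdump.db

-- Session A, once the dump completes: release the lock
-- (UNLOCK TABLES; is equivalent to closing the session)
UNLOCK TABLES;
```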
4. Import the master snapshot and initialise the slave
- Copy over the snapshot from master to slave: scp user@hostname:/path/to/dbdump.db ~/dbdump.db
- Shut down slave MySQL. Edit /etc/mysql/my.cnf and set skip-slave-start=true (so that the slave doesn't start while we're importing data). Start up MySQL.
- Import the database into slave: mysql -u user -p < dbdump.db
- Log into slave MySQL.
- Execute SHOW SLAVE STATUS; to make sure that Slave_IO_Running=No (the thread that reads the master's binlog) and Slave_SQL_Running=No (the thread that applies the SQL). Make sure that Read_Master_Log_Pos is the same value as reported by SHOW MASTER STATUS earlier.
- Configure the master connection data: CHANGE MASTER TO master_host='...',master_user='...',master_password='...'; (the log pos and file will have been set with mysqldump --master-data)
- Start the slave: START SLAVE; then run SHOW SLAVE STATUS; again and check the following fields (a quick monitoring one-liner is sketched after this list):
- Slave_IO_Running - if Yes, the slave is connected to the master (should be Yes)
- Slave_SQL_Running - if Yes, the slave is processing SQL queries from the binlog (should be Yes)
- Last_Error - if Slave_SQL_Running is No, this field displays the last SQL error that caused sync to fail (should be empty)
- Seconds_Behind_Master - an estimate of how far the slave is behind the master (should be 0)
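For keeping an eye on those fields while the slave catches up, a one-liner like this works (assuming MySQL credentials are available, e.g. via ~/.my.cnf):

```
# Poll replication health every 2 seconds
watch -n 2 "mysql -e 'SHOW SLAVE STATUS\G' \
  | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master|Last_Error'"
```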
Lots of "Duplicate entry" errors
*Make sure that you aren't accidentally writing to your slave database!* Your slave database will have all the same users and permissions, so bad writes won't be noticed.
- Check whether the duplicate entries are auto_increment fields and primary keys, and whether the data is already in the slave database. If so, you might be able to skip the failing query with STOP SLAVE; SET GLOBAL sql_slave_skip_counter = 2; START SLAVE;
- If there are lots of duplicate primary keys, the mysqldump may have failed. You can try temporarily setting slave_skip_errors=1062 in my.cnf (error 1062 is "Duplicate entry") to get past the initial failing queries, then removing it once the slave has caught up to the master; a sketch follows.
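A minimal sketch of that temporary change, assuming the Debian-style /etc/mysql/my.cnf used above:

```
[mysqld]
# TEMPORARY: skip "Duplicate entry" errors (error code 1062) while re-syncing.
# Remove this line and restart mysqld once SHOW SLAVE STATUS reports the
# slave has caught up (Seconds_Behind_Master = 0).
slave-skip-errors = 1062
```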