Simple //naive// backup

Backing up all databases into one file at once is a bad idea.

  1. If there is a single problem (a corrupted table, an unreadable file, whatever), the rest of the databases will not be dumped
  2. Restoring one DB out of many from a single file is difficult and time consuming, and with slightly bigger databases (~10 GB) I wish you a lot of rum
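If you do end up with one combined dump, one way to pull a single database out of it is to cut on the `-- Current Database:` comment markers that mysqldump writes with --all-databases (here `mydb` is an example database name):

```shell
# extract only the `mydb` section from the combined dump; the trailing
# marker line of the next database is harmless (it is an SQL comment)
sed -n '/^-- Current Database: `mydb`/,/^-- Current Database: `/p' all_db_backup.sql > mydb.sql
```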

Here is an example anyway

mysqldump -u root -p --all-databases > all_db_backup.sql

In a script

mysqldump -u root --password="my_secret" --all-databases > all_db_backup.sql

If you are going to use this simple and not so clever backup, please at least make the file readable by root only.
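One way to do that is a chmod afterwards, but setting umask first means the dump never exists with looser permissions, even for a moment (a sketch reusing the command from above):

```shell
umask 077                       # files created from now on get mode 600 (owner only)
mysqldump -u root --password="my_secret" --all-databases > all_db_backup.sql
```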

Better way

The backup user needs only lock and read privileges. Create a backup user and grant it the LOCK TABLES and SELECT privileges with the following MariaDB/MySQL commands

GRANT LOCK TABLES, SELECT ON *.* TO 'backupuser'@'localhost' IDENTIFIED BY 'some_passwd';
FLUSH PRIVILEGES;

Then list all databases and back them up one after another. Use umask to keep your files readable by root only. Here is our script for 7-day rotating backups.

DAY=$(date +"%w")                      # 0-6, one directory per weekday = 7-day rotation
USR="backupuser"                       # the backup user created above
PASS="some_passwd"
BACKUP_TMP="/srv/backup/$DAY"          # adjust the path to taste
umask 077                              # new files readable by root only
mkdir -p $BACKUP_TMP
mysql -u $USR --password=$PASS -Bse "SHOW DATABASES" | while read DB ; do
        mysqldump --no-tablespaces --single-transaction -u $USR --password=$PASS ${DB} | bzip2 > $BACKUP_TMP/${DB}.sql.bz2
done
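Restoring a single database from such a backup is then just a pipe the other way (`mydb` is an example name; create the database first if it does not exist yet):

```shell
bunzip2 -c $BACKUP_TMP/mydb.sql.bz2 | mysql -u root -p mydb
```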


Correcting an error


phpMyAdmin could not something something something


stop slave;
start slave;

show slave status \G
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
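A quick way to check just the two thread states from a shell (the field names are the standard SHOW SLAVE STATUS output fields):

```shell
mysql -u root -p -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running'
```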

What replication is not

  • Replication is not a backup policy. A mistyped DELETE statement will be replicated on the slave too, and you could end up with two, perfectly synchronized, empty databases. Replication can help protect against hardware failure though.
  • Replication is not an answer to all performance problems. Although updates on the slave are more optimized than if you ran the updates normally, if you use MyISAM tables, table-locking will still occur, and databases under high-load could still struggle.
  • Replication is not a guarantee that the slave will be in sync with the master at any one point in time. Even assuming the connection is always up, a busy slave may not yet have caught up with the master, so you can't simply interchange SELECT queries across master and slave servers.


Moving to another server

First import the mysql tables with all the users

mysql -u root -p mysql <temp/mysql.sql 

Then you can do FLUSH PRIVILEGES or use the old password. Finally import all databases

cd /srv/temp
for i in *; do
   db=${i%.sql}                        # derive the database name from the file name
   echo $db;
   mysqladmin create $db --password=xxxxx;
   mysql -u root --password=xxxxxx $db <$i;
done

Unfortunately the loop failed in the middle (I caused it intentionally), so the already-imported files had to be marked and the rest imported in a second pass

X=0
for i in *; do
   test $i = 'gld.sql' && X=1          # gld.sql is where the first run failed
   test $X -eq 0 && mv $i $i.done      # already imported, just mark it; re-run the import for the rest
done

Repairing Tables

mysql> CHECK TABLE fixtures;
mysql> REPAIR TABLE fixtures;
mysql> REPAIR TABLE fixtures USE_FRM;


Regain root access


# /etc/init.d/mysql stop
# mysqld_safe --skip-grant-tables &
# mysql -u root

mysql> use mysql;
mysql> update user set password=PASSWORD("NewMySQLPassword") where User="root";
mysql> flush privileges;
mysql> quit 

# /etc/init.d/mysql stop
# /etc/init.d/mysql start


Tuning example

Example settings for a dedicated database server with plenty of RAM; sizes such as innodb_buffer_pool_size and tmp_table_size must be scaled to the machine, not copied blindly.

max_connections        = 500

innodb_buffer_pool_dump_at_shutdown = 0
innodb_buffer_pool_dump_pct = 100
innodb_buffer_pool_instances = 8
innodb_buffer_pool_load_at_startup = 0
innodb_buffer_pool_size = 220G
innodb_log_file_size = 5G
thread_cache_size = 500
thread_pool_size = 12
innodb_read_io_threads = 64
innodb_write_io_threads = 64

tmp_table_size = 12G
max_heap_table_size = 12G


query_cache_type = 0
query_cache_limit = 128M
query_cache_min_res_unit = 4k
query_cache_size = 0

max_allowed_packet = 1G

key_buffer_size = 64M

aria_pagecache_buffer_size = 1G

back_log = 500
sort_buffer_size = 2M

join_buffer_size = 4M
table_open_cache = 10000
performance_schema = ON

innodb_flush_log_at_trx_commit = 2
innodb_lru_scan_depth = 128
innodb_autoinc_lock_mode = 2

#skip-log-error = OFF

linux/mysql.txt · Last modified: 2019/10/21 09:53 by admin