[Tutorial][Research] Two-Host MySQL Cluster 7.0.9 Setup
2010/02/25
Lu
References:
MySQL 5.1 + MySQL Cluster NDB 6.X/7.X
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster.html
[Tutorial][Research] Four-Host MySQL Cluster Setup
http://forum.icst.org.tw/phpbb/viewtopic.php?f=10&t=17903
[Tutorial][Research] Two-Host MySQL Cluster Setup
http://forum.icst.org.tw/phpbb/viewtopic.php?f=10&t=17904
1. Concepts
The overall architecture of a MySQL Cluster looks roughly like this:
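A simplified sketch of the roles (clients talk to the SQL nodes, which store and fetch rows on the data nodes, while the management node supervises everything; the layout below is generic, not specific to this two-host setup):
Code:
        applications / clients
              |            |
         [SQL node]   [SQL node]         mysqld
              |            |
        +-----+------------+------+
        |                         |
   [Data node]              [Data node]  ndbd
        |                         |
        +------[MGM node]---------+      ndb_mgmd (port 1186)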

NDB stands for Network DataBase.
Hosts in a MySQL Cluster come in three kinds:
1. Management node: the daemon is called ndb_mgmd (NDB Management Daemon);
the management client is called ndb_mgm (NDB Management) and uses port 1186 by default.
2. Data Node: the host that actually stores the data (storage); the daemon is called ndbd (NDB Daemon).
3. SQL Node: provides access to the database contents; the daemon is called mysqld (MySQL Daemon).
Each kind can run on one or more hosts, and the management node and the other nodes normally use separate hosts.
At least 4 hosts are recommended: one for management, one SQL node, and two Data nodes. (If you only have one Data node, skip the cluster entirely and just run standalone MySQL.)
The two-host setup ran into all sorts of trouble during actual testing, so learn the four-host setup first.
(It is unclear whether this is because the nodes share hosts.)
---------------------------------------------------------------------------
2. Environment
Windows XP x86 + VMware Workstation 7.01 running two VMs, both installed with CentOS 5.4 x86:
centos1 eth0: 192.168.128.101 (management + SQL node + Data node)
centos2 eth0: 192.168.128.102 (management + SQL node + Data node)
VMware Workstation 7.01 gives a RHEL5 VM 1 GB of RAM by default; I lowered it to 512 MB.
---------------------------------------------------------------------------
3. Installation
To save trouble, turn the firewall off for now.
The MySQL that CentOS installs via yum cannot run a cluster and must be removed.
Then register at the official MySQL website and download the MySQL Cluster packages:
(The rpm and tar.gz installs differ in many paths; if you use something other than rpm, you will have to work those out yourself.)
MySQL-Cluster-gpl-client-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-debuginfo-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-devel-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-embedded-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-extra-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-management-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-server-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-shared-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-storage-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-test-7.0.9-0.rhel5.i386.rpm
MySQL-Cluster-gpl-tools-7.0.9-0.rhel5.i386.rpm
If you do not want to install everything, the minimum packages for each node type are:
SQL Node
rpm -Uhv MySQL-Cluster-gpl-server-7.0.9-0.rhel5.i386.rpm
rpm -Uhv MySQL-Cluster-gpl-client-7.0.9-0.rhel5.i386.rpm
Data Node
rpm -Uhv MySQL-Cluster-gpl-storage-7.0.9-0.rhel5.i386.rpm
MGM Node
rpm -Uhv MySQL-Cluster-gpl-management-7.0.9-0.rhel5.i386.rpm
rpm -Uhv MySQL-Cluster-gpl-tools-7.0.9-0.rhel5.i386.rpm
I downloaded the files on Windows XP, built an .iso from them, attached it to the VMware CD-ROM drive,
and then mounted it at /media in Linux.
The commands are as follows (run on both hosts):
Code:
service iptables stop
yum -y remove mysql*
mount /dev/cdrom /media
cd /media
rpm -ivh *.rpm
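As an optional sanity check, you can list what got installed (the package names match the rpm files above):
Code:
rpm -qa | grep -i mysql-cluster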
---------------------------------------------------------------------------
4. Configuration
(1) The MGM configuration file config.ini
A basic config.ini example can be found here:
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-config-example.html
Run the following on both hosts:
Code:
mkdir -p /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini
The contents are as follows:
Code:
[ndbd default]
Hostname =192.168.128.101
[ndb_mgmd default]
Hostname =192.168.128.101
[ndb_mgmd]
Hostname =192.168.128.101
[ndb_mgmd]
Hostname =192.168.128.102
[ndbd]
Hostname=192.168.128.101
[ndbd]
Hostname=192.168.128.102
[mysqld]
Hostname=192.168.128.101
[mysqld]
Hostname=192.168.128.102
[mysqld]
[mysqld]
This defines two MGM nodes, 2 Data nodes (ndbd), and 4 SQL nodes (mysqld).
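For anything beyond a throwaway test, the [ndbd default] section usually also sets the replica count and memory limits. A sketch using standard config.ini parameters (the values are the manual's example defaults, not tuned for these 512 MB VMs):
Code:
[ndbd default]
NoOfReplicas=2    # each table fragment is stored on 2 data nodes
DataMemory=80M    # memory reserved for row data
IndexMemory=18M   # memory reserved for hash indexes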
(2) my.cnf (shared by mysqld, ndbd, and ndb_mgmd)
A basic my.cnf example can be found here:
http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-config-example.html
Code:
vi /etc/my.cnf
The contents are as follows:
Code:
[mysqld]
ndbcluster
ndb-connectstring=192.168.128.101,192.168.128.102
# provide connectstring for management server host (default port: 1186)
[ndbd]
ndb-connectstring=192.168.128.101,192.168.128.102
# provide connectstring for management server host (default port: 1186)
[ndb_mgm]
ndb-connectstring=192.168.128.101,192.168.128.102
# provide location of cluster configuration file
[ndb_mgmd]
config-file=/var/lib/mysql-cluster/config.ini
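As an aside, the MySQL manual also documents a [mysql_cluster] section that supplies the connect string to all NDB processes in one place; an equivalent sketch:
Code:
[mysql_cluster]
# read by the data nodes and the other NDB programs
ndb-connectstring=192.168.128.101,192.168.128.102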
---------------------------------------------------------------------------
5. Startup
Start MySQL Cluster in this order: first the management node daemon (ndb_mgmd), then the storage node daemon (ndbd), and only then the SQL node daemon (service mysql start).
Stopping MySQL Cluster: run ndb_mgm -e shutdown; it stops ndb_mgmd and ndbd on all MGM nodes and all Data nodes. (mysqld keeps running.)
The command to stop mysqld on an SQL node is shown below (in practice ndb_mgm -e shutdown alone brings the cluster nearly to a standstill):
Code:
[root@centos1 ~]# mysqladmin -u root shutdown
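For reference, the whole startup sequence condenses to the following (a recap of the steps detailed below; each command is run on both centos1 and centos2):
Code:
ndb_mgmd              # 1. management nodes first (add --initial after config.ini changes)
ndbd                  # 2. then the data nodes (--initial only on the first start; it wipes data)
service mysql start   # 3. finally the SQL nodes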
(1) Start the management nodes
Code:
[root@centos1 mysql-cluster]# ndb_mgmd --initial --ndb-nodeid=1
2010-02-22 14:43:04 [MgmtSrvr] INFO -- NDB Cluster Management Server. mysql-5.1.39 ndb-7.0.9b
2010-02-22 14:43:04 [MgmtSrvr] INFO -- Reading cluster configuration from '/var/lib/mysql-cluster/config.ini'
2010-02-22 14:43:04 [MgmtSrvr] WARNING -- at line 20: Cluster configuration warning:
arbitrator with id 1 and db node with id 3 on same host 192.168.128.101
arbitrator with id 2 and db node with id 4 on same host 192.168.128.102
arbitrator with id 5 and db node with id 3 on same host 192.168.128.101
arbitrator with id 6 and db node with id 4 on same host 192.168.128.102
Running arbitrator on the same host as a database node may
cause complete cluster shutdown in case of host failure.
[root@centos1 mysql-cluster]# ndb_mgm -e show
Connected to Management Server at: 192.168.128.101:1186
ERROR Message: The cluster configuration is not yet confirmed by all defined management servers. This management server is still waiting for node 2 to connect.
Could not get configuration
* 4012: Failed to get configuration
* The cluster configuration is not yet confirmed by all defined management servers. This management server is still waiting for node 2 to connect.
--initial makes it re-read the config.ini settings before starting the management service; if config.ini has been modified, this flag is required, otherwise it can be omitted.
--ndb-nodeid=1 binds the daemon to the node with ID=1; normally you can leave it out, but it must be given if the daemon cannot work out its node ID at startup.
There is a warning saying the arbitrator found different nodes on the same host, which may bring the whole cluster down; ignore it.
With more than one MGM node, each ndb_mgmd tries to reach the others and reports an error until it can.
So start ndb_mgmd on the other host as well:
Code:
[root@centos2 mysql-cluster]# ndb_mgmd --initial
2010-02-05 04:06:42 [MgmtSrvr] INFO -- NDB Cluster Management Server. mysql-5.1.39 ndb-7.0.9b
2010-02-05 04:06:42 [MgmtSrvr] INFO -- Reading cluster configuration from '/var/lib/mysql-cluster/config.ini'
2010-02-05 04:06:42 [MgmtSrvr] WARNING -- at line 21: Cluster configuration warning:
arbitrator with id 1 and db node with id 3 on same host 192.168.128.101
arbitrator with id 2 and db node with id 4 on same host 192.168.128.102
arbitrator with id 5 and db node with id 3 on same host 192.168.128.101
arbitrator with id 6 and db node with id 4 on same host 192.168.128.102
Running arbitrator on the same host as a database node may
cause complete cluster shutdown in case of host failure.
The earlier ndb_mgm -e show failed because a management host had not yet started ndb_mgmd; with ndb_mgmd now running on that host as well, check the status again and you can see the layout and state of the entire cluster:
Code:
[root@centos1 mysql-cluster]# ndb_mgm -e show
Connected to Management Server at: 192.168.128.101:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 (not connected, accepting connect from 192.168.128.101)
id=4 (not connected, accepting connect from 192.168.128.102)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.128.101 (mysql-5.1.39 ndb-7.0.9)
id=2 (not connected, accepting connect from 192.168.128.102)
[mysqld(API)] 4 node(s)
id=5 (not connected, accepting connect from 192.168.128.101)
id=6 (not connected, accepting connect from 192.168.128.102)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
[root@centos1 media]# ps aux | grep ndb
root 16719 0.0 0.5 29752 2748 ? Rsl 10:11 0:00 ndb_mgmd --ndb-nodeid=1
root 16746 0.0 0.0 444 132 pts/2 R+ 10:14 0:00 grep ndb
PS: Only one MGM node was connected at this point; sometimes the second one connects after a few minutes, and sometimes it never does (this still needs investigation).
It is best to run ndb_mgm commands on a host whose ndb_mgmd is connected.
If startup goes wrong, or you need to modify config.ini, run ndb_mgm -e shutdown; it stops ndb_mgmd and ndbd on all MGM nodes and all Data nodes.
Run ps aux | grep ndb on every host to confirm they really stopped: when the management function misbehaves, ndb_mgmd may be left running on some MGM hosts.
In that case the only option is to kill it with kill -9, for example:
Code:
[root@centos1 mysql-cluster]# ps aux | grep ndb
root 3473 0.0 0.1 3916 660 pts/3 R+ 14:12 0:00 grep ndb
root 21315 0.0 0.5 31116 2868 ? Rsl 13:29 0:02 ndb_mgmd --ndb-nodeid=1
[root@centos1 mysql-cluster]# kill -9 21315
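Alternatively, assuming pkill from procps is available (it is on CentOS 5), the same cleanup can be done by process name on each host:
Code:
pkill -9 ndb_mgmd   # force-kill any leftover management daemons
pkill -9 ndbd       # and any leftover data node daemons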
(2) Start the Data nodes (ndbd)
On centos1 run:
Code:
[root@centos1 media]# ndbd --ndb-nodeid=3 --initial
2010-02-22 10:14:39 [ndbd] INFO -- Configuration fetched from '192.168.128.101:1186', generation: 1
On centos2 run:
Code:
[root@centos2 ~]# ndbd --ndb-nodeid=4 --initial
2010-02-04 23:33:27 [ndbd] INFO -- Configuration fetched from '192.168.128.101:1186', generation: 1
Check the status from any MGM host:
Code:
[root@centos2 ~]# ndb_mgm -e show
Connected to Management Server at: 192.168.128.101:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.128.101 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
id=4 @192.168.128.102 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.128.101 (mysql-5.1.39 ndb-7.0.9)
id=2 @192.168.128.102 (mysql-5.1.39 ndb-7.0.9)
[mysqld(API)] 4 node(s)
id=5 (not connected, accepting connect from any host)
id=6 (not connected, accepting connect from any host)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
You can see that a host is now connected as id=3 and another as id=4.
(3) Start the SQL nodes (mysqld)
On centos1 run:
Code:
[root@centos1 ~]# service mysql start
Starting MySQL... [ OK ]
Or start it like this:
Code:
[root@centos1 ~]# mysqld_safe --ndb-nodeid=5 --user=mysql &
On centos2 run:
Code:
[root@centos2 ~]# service mysql start
Starting MySQL... [ OK ]
Or start it like this:
Code:
[root@centos2 ~]# mysqld_safe --ndb-nodeid=6 --user=mysql &
Check the state with ps:
Code:
[root@centos1 ~]# ps aux | grep mysql
root 17115 0.0 0.2 4532 1208 pts/2 S 10:22 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --pid-file=/var/lib/mysql/centos1.pid
mysql 17185 0.1 3.3 118504 17512 pts/2 Sl 10:22 0:00 /usr/sbin/mysqld --basedir=/ --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/lib/mysql/centos1.pid
root 17274 0.0 0.1 3916 664 pts/2 R+ 10:24 0:00 grep mysql
Check the cluster state again:
Code:
[root@centos1 ~]# ndb_mgm -e show
Connected to Management Server at: 192.168.128.101:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=3 @192.168.128.101 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
id=4 @192.168.128.102 (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)
[ndb_mgmd(MGM)] 2 node(s)
id=1 @192.168.128.101 (mysql-5.1.39 ndb-7.0.9)
id=2 @192.168.128.102 (mysql-5.1.39 ndb-7.0.9)
[mysqld(API)] 4 node(s)
id=5 @192.168.128.101 (mysql-5.1.39 ndb-7.0.9)
id=6 @192.168.128.102 (mysql-5.1.39 ndb-7.0.9)
id=7 (not connected, accepting connect from any host)
id=8 (not connected, accepting connect from any host)
In actual testing the startup sometimes failed (as below), or mysqld_safe started fine (visible in ps) but ndb_mgm reported the node as not connected, so it is worth double-checking.
Code:
[root@centos1 media]# mysqld_safe --ndb-nodeid=5 --user=mysql &
[1] 16845
[root@centos1 media]# 100222 10:18:38 mysqld_safe Logging to '/var/log/mysqld.log'.
100222 10:18:38 mysqld_safe Starting mysqld daemon with databases from /usr/local/var
100222 10:18:42 mysqld_safe mysqld from pid file /usr/local/var/centos1.pid ended
[1]+ Done mysqld_safe --ndb-nodeid=5 --user=mysql
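When this happens, the reason is usually spelled out in the error log that mysqld_safe announced (here /var/log/mysqld.log; the path can differ per install):
Code:
tail -n 50 /var/log/mysqld.log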
---------------------------------------------------------------------------
6. Testing
(1) Data written on centos1 can be read on centos2
Create the db1 database and the table1 table on the centos1 host.
(The backup/restore test later needs them, so do not stop at just creating the database.)
Code:
[root@centos1 ~]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.39-ndb-7.0.9-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database db1;
Query OK, 1 row affected (0.16 sec)
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| db1 |
| mysql |
| test |
+--------------------+
4 rows in set (0.00 sec)
mysql> use db1
Database changed
mysql> create table table1 (name varchar(10));
Query OK, 0 rows affected (0.26 sec)
mysql> show tables;
+---------------+
| Tables_in_db1 |
+---------------+
| table1 |
+---------------+
1 row in set (0.04 sec)
mysql> \q
Bye
[root@centos1 ~]#
Check whether centos2 can read them:
Code:
[root@centos2 ~]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.39-ndb-7.0.9-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| db1 |
| mysql |
| test |
+--------------------+
4 rows in set (0.00 sec)
mysql> use db1
Database changed
mysql> show tables;
+---------------+
| Tables_in_db1 |
+---------------+
| table1 |
+---------------+
1 row in set (0.04 sec)
mysql> \q
Bye
[root@centos2 ~]#
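To also confirm that rows (not just the schema) are shared, insert on one SQL node and select on the other; a quick sketch (the value is arbitrary; note that only NDB tables live in the cluster, so if a table ever fails to show up on the other node, re-create it with ENGINE=NDBCLUSTER):
Code:
[root@centos1 ~]# mysql -e "insert into db1.table1 values ('written-on-centos1');"
[root@centos2 ~]# mysql -e "select * from db1.table1;"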
------------------------------------------------------
(2) Database backup test
Code:
[root@centos1 ~]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> start backup
Connected to Management Server at: 192.168.128.101:1186
Waiting for completed, this may take several minutes
Node 3: Backup 1 started from node 1
Node 3: Backup 1 started from node 1 completed
StartGCP: 535 StopGCP: 538
#Records: 2057 #LogRecords: 0
Data: 50928 bytes Log: 0 bytes
ndb_mgm>
The backup produces a set of files on every Data node; check both hosts with ll:
Code:
[root@centos1 ~]# ll /var/lib/mysql-cluster/BACKUP/BACKUP-1/
total 44
-rw-r--r-- 1 root root 26432 Feb 22 10:33 BACKUP-1-0.3.Data
-rw-r--r-- 1 root root 8712 Feb 22 10:33 BACKUP-1.3.ctl
-rw-r--r-- 1 root root 52 Feb 22 10:33 BACKUP-1.3.log
[root@centos2 ~]# ll /var/lib/mysql-cluster/BACKUP/BACKUP-1/
total 44
-rw-r--r-- 1 root root 24928 Feb 4 23:51 BACKUP-1-0.4.Data
-rw-r--r-- 1 root root 8712 Feb 4 23:51 BACKUP-1.4.ctl
-rw-r--r-- 1 root root 52 Feb 4 23:51 BACKUP-1.4.log
The file names follow the pattern BACKUP-backup_id.node_id.ctl;
BACKUP-1.3.ctl means Backup 1 of node 3.
The first backup can be taken without specifying a Backup ID, but leaving it out the second time produced the error below:
Code:
[root@centos1 ~]# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> start backup
Connected to Management Server at: 192.168.128.101:1186
Waiting for completed, this may take several minutes
Node 3: Backup 1 started from 1 has been aborted. Error: 1350
Backup failed
* 3001: Could not start backup
* Backup failed: file already exists (use 'START BACKUP <backup id>'): Temporary error: Temporary Resource error
After setting the Backup ID to 4, the backup succeeded.
Code:
ndb_mgm> start backup 4
Waiting for completed, this may take several minutes
Node 3: Backup 4 started from node 1
Node 3: Backup 4 started from node 1 completed
StartGCP: 103 StopGCP: 106
#Records: 2056 #LogRecords: 0
Data: 50788 bytes Log: 0 bytes
Check the backups taken so far:
Code:
[root@centos1 ~]# ll /var/lib/mysql-cluster/BACKUP/
total 8
drwxr-x--- 2 root root 4096 Feb 22 10:33 BACKUP-1
drwxr-x--- 2 root root 4096 Feb 22 11:37 BACKUP-4
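Each data node only holds its own slice of a backup, so archiving a complete backup means collecting the BACKUP-<id> directory from every data node; a sketch (destination paths arbitrary):
Code:
[root@centos1 ~]# tar czf /root/backup4-node3.tar.gz -C /var/lib/mysql-cluster/BACKUP BACKUP-4
[root@centos2 ~]# tar czf /root/backup4-node4.tar.gz -C /var/lib/mysql-cluster/BACKUP BACKUP-4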
--------------------------------------------------------
(3) Database restore test
Restore procedure:
a. Modify config.ini and append another [mysqld] section, otherwise the restore fails with a "No free node id found for mysqld(API)" error.
(The config.ini above defines 4 [mysqld] sections with two still unused, so step a can be skipped here.)
b. Run ndb_mgm -e shutdown to stop ndb_mgmd and ndbd.
c. Run ndb_mgmd --initial on every MGM host (from the directory containing config.ini) so that the added [mysqld] takes effect.
d. Run ndbd --initial (once per data node; it wipes the database contents).
e. On a Data node run ndb_restore -c mgmd -n node_id -m -b backup_id -r [backup_path=]/path/to/backup/files
for example ndb_restore -n 3 -b 1 -r /var/lib/mysql-cluster/BACKUP/BACKUP-1/
Code:
[root@centos1 ~]# ndb_mgm -e shutdown
Connected to Management Server at: 192.168.128.101:1186
2 NDB Cluster node(s) have shutdown.
Disconnecting to allow management server to shutdown.
[root@centos1 ~]# ndb_mgmd --initial
[root@centos1 ~]# mysql
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.39-ndb-7.0.9-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| db1 |
| mysql |
| test |
+--------------------+
4 rows in set (0.00 sec)
mysql> use db1
Database changed
mysql> show tables;
Empty set (0.00 sec)
As you can see, the --initial flag wiped the database contents, so the table is gone, but the empty database remains.
Start the restore:
Code:
[root@centos1 ~]# ndb_restore -n 3 -b 1 -m -r /var/lib/mysql-cluster/BACKUP/BACKUP-1
Nodeid = 3
Backup Id = 1
backup path = /var/lib/mysql-cluster/BACKUP/BACKUP-1
Opening file '/var/lib/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1.3.ctl'
Backup version in files: ndb-6.3.11 ndb version: mysql-5.1.39 ndb-7.0.9
Stop GCP of Backup: 0
Connected to ndb!!
Successfully restored table `db1/def/table1`
Successfully restored table event REPL$db1/table1
Opening file '/var/lib/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1-0.3.Data'
_____________________________________________________
Processing data in table: sys/def/NDB$EVENTS_0(3) fragment 0
_____________________________________________________
Processing data in table: mysql/def/NDB$BLOB_4_3(5) fragment 0
_____________________________________________________
Processing data in table: db1/def/table1(7) fragment 0
_____________________________________________________
Processing data in table: sys/def/SYSTAB_0(2) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_schema(4) fragment 0
_____________________________________________________
Processing data in table: mysql/def/ndb_apply_status(6) fragment 0
Opening file '/var/lib/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1.3.log'
Restored 0 tuples and 0 log entries
NDBT_ProgramExit: 0 - OK
Check the db1 database again now and you should see the restored table1.
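With two data nodes, node 4's slice still needs to be restored too. The usual ndb_restore procedure is one run per data node, passing -m (restore metadata) only on the first run; a sketch, run on centos2 against its own backup files:
Code:
[root@centos2 ~]# ndb_restore -n 4 -b 1 -r /var/lib/mysql-cluster/BACKUP/BACKUP-1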
If you do not first wipe the contents with --initial, the restore fails:
Code:
[root@centos1 ~]# ndb_restore -n 3 -b 1 -m -r /var/lib/mysql-cluster/BACKUP/BACKUP-1
Nodeid = 3
Backup Id = 1
backup path = /var/lib/mysql-cluster/BACKUP/BACKUP-1
Opening file '/var/lib/mysql-cluster/BACKUP/BACKUP-1/BACKUP-1.3.ctl'
Backup version in files: ndb-6.3.11 ndb version: mysql-5.1.39 ndb-7.0.9
Stop GCP of Backup: 0
Connected to ndb!!
Create table `db1/def/table1` failed: 721: Schema object with given name already exists
Restore: Failed to restore table: `db1/def/table1` ... Exiting
NDBT_ProgramExit: 1 - Failed
---------------------------------------------------------------------------
7. Firewall
Make the firewall start automatically at boot:
chkconfig iptables on
Edit the firewall rules file:
vim /etc/sysconfig/iptables
Add the following rule in a suitable place:
-A RH-Firewall-1-INPUT -s 192.168.128.102 -p tcp --dport 1186 -j ACCEPT
Restart the firewall to load the rules:
service iptables start
Or add the rules directly on the command line (they take effect immediately but disappear after a restart):
On 192.168.128.101 run:
iptables -A RH-Firewall-1-INPUT -s 192.168.128.102 -p tcp --dport 1186 -j ACCEPT
On 192.168.128.102 run:
iptables -A RH-Firewall-1-INPUT -s 192.168.128.101 -p tcp --dport 1186 -j ACCEPT
Or run this on both hosts (port 7789 is presumably meant for the data node interconnect; by default those ports are allocated dynamically unless ServerPort is set in config.ini, so adjust to your setup):
iptables -A RH-Firewall-1-INPUT -p tcp --dport 7789 -j ACCEPT
Run the following command to save the rules to /etc/sysconfig/iptables so they survive a firewall restart (iptables-save by itself only prints the rules to stdout):
service iptables save
(End)