====== Installation on Debian 8 Jessie ======
  - install necessary packages:<code bash>
apt-get install cman clvm gfs2-utils gfs2-cluster multipath-tools xmlstarlet ntp</code>
  - in case you are using an iSCSI-attached quorum disk, install iSCSI:<code bash>
apt-get install open-iscsi</code>
  - install fence agents either via apt or (better) from git:<code bash>
apt-get install fence-agents</code> or [[linux:netio-fencing | the newest version from git]]
    * in case you are using a Netio 230A, copy its agent from [[linux:netio-fencing#netio_230a_agent]] to /usr/sbin/fence_netio_230A
  - disable waiting for the quorum device:<code bash>echo "CMAN_QUORUM_TIMEOUT=0" >> /etc/default/cman</code>
  - edit /etc/lvm/lvm.conf, set:<code bash>
...</code>
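The exact lvm.conf settings are not shown in this revision. For a cman/clvm cluster the usual changes are cluster-aware locking and (on Jessie's lvm2) disabling lvmetad; the values below are standard clvm assumptions, not taken from this page, and the sketch works on a scratch copy rather than the live file:

```shell
# Scratch stand-in for /etc/lvm/lvm.conf (assumed pre-cluster values):
printf 'locking_type = 1\nuse_lvmetad = 1\n' > /tmp/lvm.conf
# clvmd requires cluster-aware locking (type 3); lvmetad is not cluster-aware:
sed -i -e 's/^locking_type = 1/locking_type = 3/' \
       -e 's/^use_lvmetad = 1/use_lvmetad = 0/' /tmp/lvm.conf
grep -E 'locking_type|use_lvmetad' /tmp/lvm.conf
```

Apply the same two changes to the real /etc/lvm/lvm.conf on every node before starting clvm.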
====== Adding node to the cluster ======
  - add the node to /etc/hosts on all nodes in the cluster (essential when the cluster is based on FQDNs instead of IPs), e.g.:<code bash>127.0.0.1 localhost
127.0.1.1 zemi.starlab.cz zemi
...
10.0.0.130 lemmy.starlab.cz
10.0.0.140 zemi.starlab.cz</code>
  - copy the /root/cluster_bin directory from an existing node to the new one
  - add the new node on one of the existing nodes:<code bash>ccs_tool addnode 10.0.0.140 -v 1 -f ac_node port=4 -n 4</code>
  - distribute the configuration in the cluster:<code bash>cluster_bin/distribute_config</code>
  - disable the services on startup (they will be handled manually):<code bash>
systemctl disable cman.service
systemctl disable clvm.service
systemctl disable gfs2-cluster.service</code>
  - (optional) if the quorum disk is connected via iSCSI, set it to connect automatically
  - if using a GFS2 filesystem, add a journal for the new node:<code bash>gfs2_jadd -j Number MountPoint</code>
  - run:<code bash>cluster_bin/start_cluster.sh</code>
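For the optional iSCSI step above, open-iscsi controls boot-time login via the ''node.startup'' setting. The config path and the iscsiadm alternative below are standard open-iscsi usage, not values from this page; the sketch edits a scratch copy rather than the live file:

```shell
# Scratch stand-in for /etc/iscsi/iscsid.conf (open-iscsi ships 'manual'):
printf 'node.startup = manual\n' > /tmp/iscsid.conf
# 'automatic' makes the initiator log in to the target at boot,
# so the quorum disk is present before cman starts:
sed -i 's/^node.startup = manual/node.startup = automatic/' /tmp/iscsid.conf
grep 'node.startup' /tmp/iscsid.conf
# For a target that was already discovered, update its node record instead:
#   iscsiadm -m node -o update -n node.startup -v automatic
```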
linux/cluster/install.1449841767.txt.gz · Last modified: 2015/12/11 14:49 by vondra