
Saturday 24 May 2014

CEPH (Software-Based Storage) Part 1 - Basic Installation


Server configuration: 

In Part 1, I used a group of three servers, named as follows:

  • Server 1, "mon1" : ceph monitor 1 (admin role)
  • Server 2, "node1" : ceph monitor 2; ceph node 1
  • Server 3, "node2" : ceph monitor 3; ceph node 2

Install Ubuntu 13 on each server, then perform the same tasks below (steps 1 to 8) on every server:

1_ Update Ubuntu
sudo apt-get update
sudo apt-get dist-upgrade
2_ Install ssh
sudo apt-get install ssh -y
3_ Configure SSH

Open /etc/ssh/sshd_config and change the values of the parameters below:
sudo vi /etc/ssh/sshd_config 
PermitRootLogin yes 
PermitEmptyPasswords yes 
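Changes in sshd_config only take effect after the SSH service restarts (there is a restart later, at the end of step 7, but you can also do it now). One way to verify the effective value, using sshd's -T test mode:
sudo service ssh restart
sudo sshd -T | grep -i permitrootlogin          <== should print "permitrootlogin yes"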
4_ Install NTP (and make sure NTP configuration is correct)
sudo apt-get install ntp -y
After installation, configure NTP:

Edit /etc/ntp.conf:
- On the main time server, add "iburst" to the end of the first server entry.
- On the secondary servers, comment out all the time-server entries and add:
server main.server.name.here iburst
(a sketch of both cases follows below)
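A minimal sketch of both cases, assuming the default Ubuntu pool entries and that mon1.domain.ext is the main time server (hypothetical name):

# /etc/ntp.conf on the main time server: append "iburst" to the first entry
server 0.ubuntu.pool.ntp.org iburst

# /etc/ntp.conf on a secondary server: comment out the default entries
# and point at the main time server instead
# server 0.ubuntu.pool.ntp.org
# server 1.ubuntu.pool.ntp.org
server mon1.domain.ext iburst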

When done, restart NTP service and verify NTP service is working properly:
sudo /etc/init.d/ntp reload
ntpq -p
5_ Update hosts file

In this step, edit /etc/hosts so that all servers can resolve each other by name (both the short name and the fully qualified name).
Ex: 192.168.1.1             mon1        mon1.domain.ext
    192.168.1.2             node1       node1.domain.ext
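A quick check that name resolution works from each server (using the hypothetical names from the example above):
ping -c 1 node1
ping -c 1 node1.domain.ext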

6_ Add ceph user
sudo useradd -d /home/ceph -m ceph 
sudo passwd ceph
7_ Add root priv to ceph user
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph 
sudo chmod 0440 /etc/sudoers.d/ceph 
sudo more /etc/sudoers.d/ceph          <== just to verify
sudo service ssh restart
8_ Install ceph 

In this example I install the "emperor" release of Ceph:

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list 

sudo apt-get -q update

sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes install -- ceph ceph-mds ceph-common ceph-fs-common gdisk
PLEASE NOTE: If you copy and paste the command above, make sure it runs as a single line.

After this step, the directory /etc/ceph will exist on each server; it will be used as the working directory from here on.

The reason I do this installation manually on each server: ceph-deploy (which will be installed on the admin monitor node in a later step) can install Ceph on the other nodes for you, but if your network is too slow, ceph-deploy gives up after about 5 minutes of running ("disconnect... the host takes too long to respond") and the installation fails. If your network is fast enough, feel free to skip this manual installation and use ceph-deploy instead.

If you want to install Ceph with ceph-deploy, it's quite simple: from the /etc/ceph directory on the admin monitor server, run:
ceph-deploy install mon1 node1 node2
where mon1, node1, and node2 are the servers Ceph will be installed on.

The tasks common to all nodes end here. The next steps are performed on specific nodes.
The following steps should be done on the admin monitor node: mon1
1_ Switch to the ceph user, create an SSH key pair, and copy it to the other nodes so they can be managed:
su ceph
ssh-keygen             <== keep pressing Enter until it's done
ssh-copy-id ceph@node1         <== copy the ssh key to node1, an osd node
ssh-copy-id ceph@node2
............
ssh-copy-id ceph@other-node-if-you-have
2_ Edit the local config for the ceph user
cd ~/.ssh/
vi config
And add:
Host *
    User ceph
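SSH may refuse to use a config file that is writable by anyone else, so it is worth tightening its permissions:
chmod 600 ~/.ssh/config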
3_ Test ssh connection to other node
ssh <node-name>
It shouldn't prompt for anything except perhaps to learn the host's ssh key; no passwords, though.

4_ Exit ceph user session
5_ Now, it's time to install ceph-deploy on the admin monitor node


wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

sudo apt-get update && sudo apt-get install ceph-deploy
Again, run each command above on a single line.

6_ Now, go back to the ceph user session with "su ceph"
7_ Make sure the ceph user has write permissions to the /etc/ceph directory (if it doesn't exist, create it first)
sudo chmod -R 777 /etc/ceph
sudo chown -R ceph /etc/ceph
PLEASE NOTE:
a) The step right above (step 7) should be done on each node
b) From now on, whenever you run a ceph-related command, do it from the /etc/ceph directory.

8_ Create cluster
ceph-deploy new mon1 node1 node2
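For reference, this should write the initial cluster files into the current directory (/etc/ceph here); with an emperor-era ceph-deploy I'd expect at least a ceph.conf and a monitor keyring:
ls /etc/ceph          <== expect something like ceph.conf, ceph.log, ceph.mon.keyring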
9_ Add monitor
ceph-deploy mon create mon1 node1 node2
10_ Gather key
ceph-deploy gatherkeys mon1
Verify that 3 new keyrings have just been created.
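With emperor-era ceph-deploy, the gathered keyrings are typically the admin keyring plus the two bootstrap keyrings; a quick way to check:
ls /etc/ceph/*.keyring          <== expect ceph.client.admin.keyring, ceph.bootstrap-osd.keyring, ceph.bootstrap-mds.keyring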

11_ Copy the configuration file and admin key to your admin node and your Ceph nodes, so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring every time you execute a command.

As the ceph user, from the /etc/ceph directory on the admin monitor node, run:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph-deploy admin mon1 node1 node2 
12_ Create osd
ceph-deploy --overwrite-conf osd prepare node1:/dev/sdb node1:/dev/sdc node2:/dev/sdb node2:/dev/sdc node2:/dev/sdd node2:/dev/sde

ceph-deploy --overwrite-conf osd activate node1:/dev/sdb node1:/dev/sdc node2:/dev/sdb node2:/dev/sdc node2:/dev/sdd node2:/dev/sde
This assumes node1 has 2 extra disks and node2 has 4 extra disks (whole disks here, though you can use partitions as well).
After this step, log in to node1 and node2 and the newly mounted partitions will show up.
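At this point you can check the cluster state from the admin monitor node (standing in /etc/ceph, as noted earlier); once all OSDs are up and in, the cluster should eventually report HEALTH_OK:
ceph -s          <== overall cluster status and health
ceph osd tree          <== should list the 6 OSDs across node1 and node2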


Now, let's test using a block device:
Create a Linux client (called the "ceph client" below) that will use the OSD storage. Perform these tasks:
- Install ceph: you can use the ceph-deploy script (run from the admin monitor node) or install Ceph manually as in step 8.
- Do steps 1 to 7 and make sure the ceph user has write permissions to the /etc/ceph directory.
- Run the following commands:
rbd create nhut_block_dev --size 10240
sudo rbd map nhut_block_dev --pool rbd --name client.admin
sudo mkfs.ext4 -m0 /dev/rbd/rbd/nhut_block_dev
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/nhut_block_dev /mnt/ceph-block-device
cd /mnt/ceph-block-device
nhut_block_dev : the name of the new block device on the ceph client.
10240 = 10 GB (the --size argument is in MB)
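To see what is mapped on the client, and to cleanly unmount and unmap the block device when you are done (same names as above):
rbd showmapped          <== list rbd images currently mapped on this client
sudo umount /mnt/ceph-block-device
sudo rbd unmap /dev/rbd/rbd/nhut_block_dev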

Now you have your storage in your hands.