Server configuration:
In part 1, I used a group of 3 servers, named as follows:
Server 1, named "Mon1": ceph monitor 1 (admin role)
Server 2, named "Node1": ceph monitor 2; ceph node 1
Server 3, named "Node2": ceph monitor 3; ceph node 2
Install Ubuntu 13 on each server and perform the same tasks below (1 to 8) on each server:
1_ Update Ubuntu
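A minimal sketch of this step with apt:

```shell
sudo apt-get update
sudo apt-get upgrade -y
```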
2_ Install ssh
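For example, openssh-server provides the SSH daemon on Ubuntu:

```shell
sudo apt-get install -y openssh-server
```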
3_ Configure SSH
Open /etc/ssh/sshd_config and change the value of the parameter below as follows:
4_ Install NTP (and make sure the NTP configuration is correct)
After installing, configure NTP:
Edit the file /etc/ntp.conf
- Add "iburst" to the end of the first server entry on the main time server
- On the secondary servers, comment out all time servers and add:
server main.server.name.here iburst
When done, restart the NTP service and verify that it is working properly:
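A sketch of the whole NTP step, assuming mon1 acts as the main time server (the upstream pool entry is illustrative):

```shell
# Main time server (mon1), first entry in /etc/ntp.conf:
#   server 0.ubuntu.pool.ntp.org iburst
# Secondary servers (node1, node2): comment out all "server" lines, then add:
#   server mon1 iburst
sudo service ntp restart
ntpq -p     # the configured time server should appear in the peer list
```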
5_ Update hosts file
In this step, edit /etc/hosts so that all servers can resolve each other by name (both short and long names).
Ex: 192.168.1.1 mon1 mon1.domain.ext
192.168.1.2 node1 node1.domain.ext
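A complete /etc/hosts fragment for this 3-server setup might look like this (the IP addresses and domain are illustrative):

```
192.168.1.1 mon1  mon1.domain.ext
192.168.1.2 node1 node1.domain.ext
192.168.1.3 node2 node2.domain.ext
```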
6_ Add ceph user
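A sketch of the user creation, matching the Ceph quick start of that era:

```shell
sudo useradd -d /home/ceph -m ceph
sudo passwd ceph
```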
7_ Add root priv to ceph user
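A sketch using a sudoers drop-in file (passwordless sudo, which ceph-deploy expects):

```shell
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
sudo chmod 0440 /etc/sudoers.d/ceph
```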
8_ Install ceph
As an example, here I install the "emperor" release of Ceph.
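The original command is not preserved in this post; for the emperor release the usual apt sequence was roughly the following (the ceph.com URLs were valid at the time of writing):

```shell
wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -
echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install -y ceph ceph-common
```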
PLEASE NOTE: If you copy and paste the command above, please make sure to run it on a single line.
After this step, the directory /etc/ceph will be created on each server; that directory will be used as the working home directory.
The reason I do this step manually on each server is that ceph-deploy (which will be installed on the admin monitor node in a later step) can fail on slow networks: after about 5 minutes the ceph-deploy script gives up (with the unhelpful reason "disconnect... the host took too long to respond") and the installation fails. If your network is fast, that's OK; you can skip this manual installation and use ceph-deploy instead.
If you want to install Ceph with ceph-deploy, it's quite simple: just change to the /etc/ceph directory on the admin monitor server and run:
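A sketch of that command, with the node list matching this setup:

```shell
ceph-deploy install mon1 node1 node2
```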
where mon1, node1, and node2 are the servers on which Ceph will be installed.
The tasks common to all nodes end here. The next steps are performed on specific nodes.
These steps should be done on the admin monitor node: mon1
1_ Switch to the ceph user, create an SSH keypair, and copy it to the other nodes for management:
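A sketch of those commands (the key type and defaults are assumptions):

```shell
su - ceph
ssh-keygen -t rsa        # accept the defaults; an empty passphrase keeps ceph-deploy non-interactive
ssh-copy-id ceph@node1
ssh-copy-id ceph@node2
```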
2_ Edit the local SSH config (~/.ssh/config) for the ceph user
And add:
Host *
    User ceph
3_ Test the SSH connection to the other nodes
It shouldn't prompt for anything, except possibly asking to accept the SSH key of the host; no passwords though.
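For example:

```shell
ssh node1 hostname
ssh node2 hostname
```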
4_ Exit ceph user session
5_ Now, it's time to install ceph-deploy on the admin monitor node
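The command itself was lost from this post; assuming the Ceph emperor apt repository from step 8 is already in place, it was roughly:

```shell
echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install -y ceph-deploy
```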
On a single line, please.
6_ Now, go back to the ceph user session with "su ceph"
7_ Make sure the ceph user has write permissions to the /etc/ceph directory (if it does not exist, just create it)
PLEASE NOTE:
a) The step right above (step 7) should be done on each node
b) After this step, any time you run a command involving Ceph, do it from the /etc/ceph directory.
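A sketch of step 7, to be repeated on every node:

```shell
sudo mkdir -p /etc/ceph
sudo chown -R ceph /etc/ceph
```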
8_ Create cluster
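A sketch, listing the three monitors as the initial members:

```shell
cd /etc/ceph
ceph-deploy new mon1 node1 node2
```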
9_ Add monitor
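A sketch:

```shell
ceph-deploy mon create mon1 node1 node2
```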
10_ Gather key
Verify that 3 new keyrings have just been created.
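A sketch; gatherkeys pulls the keyrings from a monitor:

```shell
ceph-deploy gatherkeys mon1
ls -l *.keyring   # expect ceph.client.admin.keyring, ceph.bootstrap-osd.keyring, ceph.bootstrap-mds.keyring
```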
11_ Copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.
As the ceph user, in the /etc/ceph directory on the admin monitor node, run:
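A sketch (the chmod makes the admin keyring readable for the ceph user on each node):

```shell
ceph-deploy admin mon1 node1 node2
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
```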
12_ Create osd
Assume that node1 has 2 extra disks and node2 has 4 extra disks (whole disks here, though you can use partitions as well).
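A sketch with illustrative device names (sdb, sdc, ...):

```shell
ceph-deploy osd create node1:sdb node1:sdc
ceph-deploy osd create node2:sdb node2:sdc node2:sdd node2:sde
```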
After this step, log in to node1 and node2 and you will see some newly mounted partitions.
Now, let's try using a block device:
Create a Linux client (it will be called the ceph client) for using the OSD storage. Perform these tasks:
- Install Ceph: you can use the ceph-deploy script (run from the admin monitor node) or install Ceph manually as in step 8.
- Do steps 1 to 7 and make sure the ceph user has write permissions to the /etc/ceph directory.
- Run the following command:
nhut_block_dev: the name of the new block device on the ceph client.
10240 = 10GB (the size is given in megabytes).
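The command being described was likely an rbd create followed by mapping, formatting, and mounting on the client; a sketch (the default pool "rbd" and the mount point are assumptions):

```shell
rbd create nhut_block_dev --size 10240        # size in MB, so 10240 = 10 GB
sudo modprobe rbd
sudo rbd map nhut_block_dev --pool rbd
sudo mkfs.ext4 /dev/rbd/rbd/nhut_block_dev
sudo mkdir -p /mnt/nhut_block_dev
sudo mount /dev/rbd/rbd/nhut_block_dev /mnt/nhut_block_dev
```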
Now you have your storage at hand.