Saturday 24 May 2014

CEPH (Software based storage) Part 4 - Ceph and OpenStack

This guide walks through how to configure Ceph block storage for use by OpenStack.

Assumptions:

- A Ceph cluster (built in Part 1).
- An OpenStack controller server (already built at your site). This configuration is demonstrated on OpenStack Havana with Cinder block storage.
- OpenStack compute built on a separate server.
- The Nova, Glance, and Cinder services run under their usual service user names (nova, glance, cinder).


1) Do this step on the Ceph admin monitor node

  • # su ceph
    # cd /etc/ceph
    
    # ceph osd pool create volumes 128
    # ceph osd pool create images 128
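
To verify that the pools were created, you can optionally list them and check their placement-group count (a quick sanity check, not strictly required):

  • # ceph osd lspools
    # ceph osd pool get volumes pg_num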

2) Do this step on each compute node and on the controller

  • # mkdir /etc/ceph
    # useradd -d /home/ceph -m ceph
    # passwd ceph
    # chown -R ceph /etc/ceph/
    # echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    # chmod 0440 /etc/sudoers.d/ceph
    # service ssh restart


3) Do this step on the Ceph admin monitor node, to copy ceph.conf to the compute node and the controller node

  • # ssh <compute_node> sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
    # ssh <controller node> sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf

4) Now, install Ceph on the controller node and the compute node (use the guidance in Part 1)


5) Do this step on the Ceph admin monitor node

Create new Ceph users for Cinder (also used by Nova) and Glance. Execute the following:

  • # ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'

Example Result:

[client.cinder]
        key = AQAkcp5TEMvwCxAAbYtpVMiPhMcVOmIH4vbEdw==


  • # ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

Example Result:

[client.glance]
        key = AQA2cp5TQDwaKBAAnN7vTmJ8ChOKDbmYQt58mA==
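
If you want to double-check the capabilities that were just granted, you can optionally dump the two users back out:

  • # ceph auth get client.cinder
    # ceph auth get client.glance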


6) Add the keyrings for client.cinder, client.glance to the appropriate nodes and change their ownership:

  • # ceph auth get-or-create client.glance | ssh <controller node> sudo tee /etc/ceph/ceph.client.glance.keyring
    # ssh <controller node> sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
    
    # ceph auth get-or-create client.cinder | ssh <compute node> sudo tee /etc/ceph/ceph.client.cinder.keyring
    # ssh <compute node> sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring


Nodes running nova-compute need the keyring file for the nova-compute process. They also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder.

Create a temporary copy of the secret key on the nodes running nova-compute.
This step runs on the Ceph admin monitor node:

  • # ceph auth get-key client.cinder | ssh <compute node> tee client.cinder.key

Example result :
AQAkcp5TEMvwCxAAbYtpVMiPhMcVOmIH4vbEdw==

7) Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:

# su ceph
# cd /home/ceph

# uuidgen

90af0017-5503-4419-bc27-1ca58553cf9c


And run:

  • cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>90af0017-5503-4419-bc27-1ca58553cf9c</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF

# sudo virsh secret-define --file secret.xml

Result:

Secret 90af0017-5503-4419-bc27-1ca58553cf9c created

And Run:
  • # sudo virsh secret-set-value --secret 90af0017-5503-4419-bc27-1ca58553cf9c --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Secret value set
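
You can optionally confirm that libvirt has registered the secret:

# sudo virsh secret-list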

Save the UUID of this secret; you will need it when configuring Cinder and nova-compute later.

8) CONFIGURING GLANCE
This step runs on the controller node.
Glance can use multiple back ends to store images. To use Ceph block devices by default, edit /etc/glance/glance-api.conf and add:

  • default_store=rbd
    rbd_store_user=glance
    rbd_store_pool=images
    show_image_direct_url=True

Copy the keyring file to the Glance directory. The ceph.client.glance.keyring file was created and copied to the controller in the earlier steps (when integrating Ceph with Glance).


  • # cp /etc/ceph/ceph.client.glance.keyring /etc/glance/
    # cd /etc/glance
    # chown glance:glance /etc/glance/ceph.client.glance.keyring

9) CONFIGURING CINDER
This step runs on the compute node (where cinder-volume runs).
OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:

  • volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=volumes
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot=false
    rbd_max_clone_depth=5
    glance_api_version=2
    rbd_user=cinder
    rbd_secret_uuid=90af0017-5503-4419-bc27-1ca58553cf9c

<==If you’re using cephx authentication, also configure the user and uuid of the secret you added to libvirt as documented earlier.


10) CONFIGURING NOVA

On every compute node, edit /etc/nova/nova.conf and add:

  • libvirt_images_type=rbd
    libvirt_images_rbd_pool=volumes
    libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    rbd_secret_uuid=90af0017-5503-4419-bc27-1ca58553cf9c

It is also good practice to disable file injection. Usually, while booting an instance, Nova attempts to open the rootfs of the virtual machine and then injects things like passwords and SSH keys directly into the filesystem. It is better to rely on the metadata service and cloud-init instead. On every compute node, edit /etc/nova/nova.conf and add:

  • libvirt_inject_password=false
    libvirt_inject_key=false
    libvirt_inject_partition=-2

Restart the services:

  • # glance-control api restart <== should be done on controller
    # service nova-compute restart <== should be done on compute node
    # service cinder-volume restart <== should be done on compute node
    # service glance-registry restart <== should be done on controller
    # service glance-api restart <== should be done on controller



======= Checking whether Ceph is already working with OpenStack Cinder and Glance===========

On the controller node, run:

$ cinder create 1

Then you can check both status in Cinder and Ceph:

For Cinder run:

$ cinder list 

For Ceph run:

$ rbd -p <cinder-pool> ls 

Ex: $ rbd -p volumes ls


If the volume is listed in the pool, you're good.
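
You can check the Glance side in a similar way. This is an optional sketch; test.img below is just a placeholder for any small image file you have at hand:

$ glance image-create --name test-rbd --disk-format raw --container-format bare --file test.img
$ rbd -p images ls

If the new image shows up in the images pool, Glance is writing to Ceph as well.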

CEPH (Software based storage) Part 3 - Ceph advanced configuration

(Writing in progress.)

CEPH (Software based storage) Part 2 - Object Storage - Install Rados Gateway and Experience

(Writing in progress.)

CEPH (Software based storage) Part 1 - Basic Installation


Server configuration: 

In Part 1, I use a group of three servers:

  • Server 1, named "mon1": ceph monitor 1 (admin role)
    Server 2, named "node1": ceph monitor 2; ceph node 1
    Server 3, named "node2": ceph monitor 3; ceph node 2

Install Ubuntu 13 on each server and perform the same tasks below (1 to 8) on each server:

1_ Update Ubuntu
sudo apt-get update
sudo apt-get dist-upgrade
2_ Install ssh
sudo apt-get install ssh -y
3_ Configure SSH

Open /etc/ssh/sshd_config and change the values of the parameters below:
sudo vi /etc/ssh/sshd_config 
PermitRootLogin yes 
PermitEmptyPasswords yes 
4_ Install NTP (and make sure the NTP configuration is correct)
sudo apt-get install ntp -y
After installing, configure NTP:

Edit /etc/ntp.conf:
- On the main time server, add "iburst" to the end of the first server entry.
- On the secondary servers, comment out all time servers and add:
server main.server.name.here iburst

When done, restart the NTP service and verify that it is working properly:
sudo /etc/init.d/ntp reload
ntpq -p
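
For reference, on a secondary server the relevant part of /etc/ntp.conf might look like the sketch below (main.server.name.here is the placeholder used above; the commented lines are the Ubuntu defaults being disabled):

# server 0.ubuntu.pool.ntp.org
# server 1.ubuntu.pool.ntp.org
server main.server.name.here iburst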
5_ Update hosts file

In this step, edit /etc/hosts so that all servers can resolve each other by name (short name and long name).
Ex: 192.168.1.1             mon1        mon1.domain.ext
      192.168.1.2             node1       node1.domain.ext

6_ Add ceph user
sudo useradd -d /home/ceph -m ceph 
sudo passwd ceph
7_ Add root priv to ceph user
echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph 
sudo chmod 0440 /etc/sudoers.d/ceph 
sudo more /etc/sudoers.d/ceph          <== this step is just to verify
sudo service ssh restart
8_ Install ceph 

In this example I install the "emperor" release of Ceph:

wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list 

sudo apt-get -q update

sudo env DEBIAN_FRONTEND=noninteractive DEBIAN_PRIORITY=critical apt-get -q -o Dpkg::Options::=--force-confnew --no-install-recommends --assume-yes install -- ceph ceph-mds ceph-common ceph-fs-common gdisk
PLEASE NOTE: If you copy and paste the command above, make sure it all runs as a single line.

After this step, the directory /etc/ceph will exist on each server; it will be used as the working directory.

The reason I do this step manually on each server: ceph-deploy (which will be installed on the admin monitor node in a later step) can also install Ceph on the other nodes, but if your network is slow, ceph-deploy gives up after about 5 minutes with a "disconnect... the host took too long to respond" style error and the installation fails. If your network is fast, feel free to skip this manual installation and use ceph-deploy instead.

If you want to install Ceph with ceph-deploy, it's quite simple: just go to the /etc/ceph directory on the admin monitor server and run:
ceph-deploy install mon1 node1 node2
where mon1, node1 and node2 are the servers on which Ceph will be installed.

The tasks common to all nodes end here. The next steps are performed on particular nodes.
These steps should be done on the admin monitor node: mon1
1_ Switch to the ceph user, create an SSH key, and copy it to the other nodes for management:
su ceph
ssh-keygen             <== keep pressing Enter till it's done
ssh-copy-id ceph@node1         <== copy the ssh key to node1, an OSD node
ssh-copy-id ceph@node2
............
ssh-copy-id ceph@other-node-if-you-have
2_ Edit the local config for the ceph user
cd ~/.ssh/
vi config
And add:
  • Host *
    User ceph
3_ Test the SSH connection to the other nodes
ssh <node-name>
It shouldn’t prompt for anything except maybe to learn the ssh key of the host, no passwords though.

4_ Exit ceph user session
5_ Now, it's time to install ceph-deploy on admin monitor node


wget -q -O- 'https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc' | sudo apt-key add -

echo deb http://ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list

sudo apt-get update && sudo apt-get install ceph-deploy
Again, make sure each command runs on a single line.

6_ Now, go back to the ceph user session with "su ceph"
7_ Make sure the ceph user has write permissions to the /etc/ceph directory (if it does not exist, create it)
sudo chmod -R 777 /etc/ceph
sudo chown -R ceph /etc/ceph
PLEASE NOTE:
a) The step right above (step 7) should be done on each node.
b) From now on, whenever you run a ceph-related command, do it from within the /etc/ceph directory.

8_ Create cluster
ceph-deploy new mon1 node1 node2
9_ Add monitor
ceph-deploy mon create mon1 node1 node2
10_ Gather key
ceph-deploy gatherkeys mon1
Verify that the three new keyrings have been created.
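For example (ceph-deploy gatherkeys writes the keyrings into the directory where it was run, so on this setup they should be under /etc/ceph):
ls -l *.keyring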

11_ Copy the configuration file and admin key to your admin node and your Ceph Nodes so that you can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time you execute a command.

As the ceph user, in the /etc/ceph directory on the admin monitor node, run:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
ceph-deploy admin mon1 node1 node2 
12_ Create osd
ceph-deploy --overwrite-conf osd prepare node1:/dev/sdb node1:/dev/sdc node2:/dev/sdb node2:/dev/sdc node2:/dev/sdd node2:/dev/sde

ceph-deploy --overwrite-conf osd activate node1:/dev/sdb node1:/dev/sdc node2:/dev/sdb node2:/dev/sdc node2:/dev/sdd node2:/dev/sde
Assume that node1 has 2 extra disks and node2 has 4 extra disks (whole disks here, but you can use partitions as well).
After this step, log in to node1 and node2 and you will see some newly mounted partitions.
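
Before moving on, it is worth a quick (optional) look at the cluster state; the OSDs should show as "up" and "in":
ceph -s
ceph osd tree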


Now, let's test using a block device:
Create a Linux client (which will be called the ceph client) to use the OSD storage. Perform these tasks:
- Install Ceph: you can use the ceph-deploy script (run from the admin monitor node) or install Ceph manually as in step 8.
- Do steps 1 to 7 and make sure the ceph user has write permissions to the /etc/ceph directory.
- Run the following commands:
rbd create nhut_block_dev --size 10240
sudo rbd map nhut_block_dev --pool rbd --name client.admin
sudo mkfs.ext4 -m0 /dev/rbd/rbd/nhut_block_dev
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/nhut_block_dev /mnt/ceph-block-device
cd /mnt/ceph-block-device
nhut_block_dev : the name of the new block device image on the ceph client.
10240 = 10240 MB = 10 GB

Now you have your storage in your hands.
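
When you are done testing, you can clean up the test block device (optional sketch):
sudo umount /mnt/ceph-block-device
sudo rbd unmap /dev/rbd/rbd/nhut_block_dev
rbd rm nhut_block_dev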

Monday 19 May 2014

Network throughput checking with IPERF / NETPERF / VNSTAT

IP server: 10.0.0.1
IP client: 10.0.0.2

The two devices are connected to each other via an Ethernet switch.

1) IPERF

On Server:
#sudo apt-get install iperf
# iperf -s


On Client
#sudo apt-get install iperf
#iperf -c 10.0.0.1 -i 1 -t 20

See the results on both the server and the client.
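
If you also want to measure UDP throughput (optional; -b sets the target bandwidth), start the server with "iperf -s -u" instead, then on the client run:
#iperf -c 10.0.0.1 -u -b 100M -i 1 -t 20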

2) NETPERF and VNSTAT

On the server, install vnstat:
#sudo apt-get install vnstat
Run vnstat to monitor network traffic (change the limit configuration in /etc/vnstat.conf to suit your needs):

#vnstat -i <interface> -l


On Client
#sudo apt-get install netperf

Run netperf
#netperf  -H 10.0.0.1

(or run multiple processes if you want to generate heavy traffic:
netperf  -H 10.0.0.1 &
netperf  -H 10.0.0.1 &
netperf  -H 10.0.0.1 &
netperf  -H 10.0.0.1 &
netperf  -H 10.0.0.1
)