Saturday 24 May 2014

CEPH (Software based storage) Part 5 - Ceph and Open Stack

This guide walks through how to configure Ceph block storage for use by OpenStack.

Assumptions:

- A Ceph cluster is already running (built in Part 1).
- An OpenStack controller server is already built on your site. This configuration is demonstrated on OpenStack Havana with Cinder block storage.
- OpenStack Compute (Nova) is installed on a separate server.
- The Nova, Glance, and Cinder services run under users of the same names (nova, glance, cinder).


1) Do this step on the Ceph admin/monitor node

  • # su ceph
    # cd /etc/ceph
    
    # ceph osd pool create volumes 128
    # ceph osd pool create images 128
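
The 128 above is the placement-group (pg_num) count for each pool; you may want to tune it for your cluster size. As an optional sanity check, list the pools and read the value back from the same admin node:

  • # ceph osd lspools
    # ceph osd pool get volumes pg_num
    # ceph osd pool get images pg_num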

2) Do this step on each compute node and on the controller

  • # mkdir /etc/ceph
    # useradd -d /home/ceph -m ceph
    # passwd ceph
    # chown -R ceph /etc/ceph/
    # echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
    # chmod 0440 /etc/sudoers.d/ceph
    # service ssh restart
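
Before step 3, the admin node needs SSH access to the ceph user you just created on each node. One common approach is ssh-copy-id run as the ceph user on the admin node (optional if you are happy to type the ceph password for every ssh command in step 3):

  • # su ceph
    $ ssh-copy-id ceph@<compute_node>
    $ ssh-copy-id ceph@<controller node>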


3) Do this step on the Ceph admin/monitor node to copy ceph.conf to the compute node(s) and the controller node

  • # ssh <compute_node> sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
    # ssh <controller node> sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
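
If you have several compute nodes, a small loop saves repetition; compute01, compute02 and controller01 below are hypothetical hostnames used only for illustration:

  • # for node in compute01 compute02 controller01; do ssh $node sudo tee /etc/ceph/ceph.conf < /etc/ceph/ceph.conf; done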

4) Now install Ceph on the controller node and the compute node (follow the guidance in Part 1)


5) Do this step on the Ceph admin/monitor node

Create new Ceph users for Nova/Cinder and for Glance. Execute the following:

  • # ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'

Example Result:

[client.cinder]
        key = AQAkcp5TEMvwCxAAbYtpVMiPhMcVOmIH4vbEdw==


  • # ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'

Example Result:

[client.glance]
        key = AQA2cp5TQDwaKBAAnN7vTmJ8ChOKDbmYQt58mA==
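
You can review the keys and capabilities that were just granted at any time from the admin node:

  • # ceph auth get client.cinder
    # ceph auth get client.glance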


6) Add the keyrings for client.cinder and client.glance to the appropriate nodes and change their ownership:

  • # ceph auth get-or-create client.glance | ssh <controller node> sudo tee /etc/ceph/ceph.client.glance.keyring
    # ssh <controller node> sudo chown glance:glance /etc/ceph/ceph.client.glance.keyring
    
    # ceph auth get-or-create client.cinder | ssh <compute node> sudo tee /etc/ceph/ceph.client.cinder.keyring
    # ssh <compute node> sudo chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring


Nodes running nova-compute need the keyring file for the nova-compute process. They also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder.

Create a temporary copy of the secret key on the nodes running nova-compute. Run this step on the Ceph admin/monitor node:

  • # ceph auth get-key client.cinder | ssh <compute node> tee client.cinder.key

Example result:
AQAkcp5TEMvwCxAAbYtpVMiPhMcVOmIH4vbEdw==

7) Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:

# su ceph
# cd /home/ceph

# uuidgen

90af0017-5503-4419-bc27-1ca58553cf9c


And run:

  • cat > secret.xml <<EOF
    <secret ephemeral='no' private='no'>
      <uuid>90af0017-5503-4419-bc27-1ca58553cf9c</uuid>
      <usage type='ceph'>
        <name>client.cinder secret</name>
      </usage>
    </secret>
    EOF

# sudo virsh secret-define --file secret.xml

Result:

Secret 90af0017-5503-4419-bc27-1ca58553cf9c created

And run:
  • # sudo virsh secret-set-value --secret 90af0017-5503-4419-bc27-1ca58553cf9c --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
Secret value set
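
To confirm libvirt stored the secret, list the defined secrets on the compute node; the UUID from above should appear:

  • # sudo virsh secret-list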

Save this UUID of the secret; we will need it when configuring Cinder and nova-compute later.

8) CONFIGURING GLANCE
This step runs on the controller node.
Glance can use multiple back ends to store images. To use Ceph block devices by default, edit /etc/glance/glance-api.conf and add:

  • default_store=rbd
    rbd_store_user=glance
    rbd_store_pool=images
    show_image_direct_url=True
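
On Havana these options typically live in the [DEFAULT] section of /etc/glance/glance-api.conf; a minimal sketch of that block, with a comment on what each line does:

  • [DEFAULT]
    # store new images as RBD objects instead of local files
    default_store=rbd
    # the Ceph client name (and keyring) Glance authenticates with
    rbd_store_user=glance
    # the Ceph pool that holds the images
    rbd_store_pool=images
    # expose image locations so volumes can be cloned copy-on-write from images
    show_image_direct_url=True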

Copy the keyring file to the Glance configuration directory. This is the ceph.client.glance.keyring we created in step 6 above:


  • # cp /etc/ceph/ceph.client.glance.keyring /etc/glance/
    # cd /etc/glance
    # chown glance:glance /etc/glance/ceph.client.glance.keyring

9) CONFIGURING CINDER
This step runs on the compute node (where cinder-volume runs in this setup).
OpenStack requires a driver to interact with Ceph block devices. You must also specify the pool name for the block device. On your OpenStack node, edit /etc/cinder/cinder.conf by adding:

  • volume_driver=cinder.volume.drivers.rbd.RBDDriver
    rbd_pool=volumes
    rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_flatten_volume_from_snapshot=false
    rbd_max_clone_depth=5
    glance_api_version=2
    rbd_user=cinder
    rbd_secret_uuid=90af0017-5503-4419-bc27-1ca58553cf9c

<== If you’re using cephx authentication, also configure the user and the UUID of the secret you added to libvirt, as documented earlier.
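
Once the client.cinder keyring from step 6 is in /etc/ceph/ on this node, you can optionally confirm that the cinder user can reach the volumes pool before restarting anything:

  • # rbd --id cinder -p volumes ls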


10) CONFIGURING NOVA

On every compute node, edit /etc/nova/nova.conf and add:

  • libvirt_images_type=rbd
    libvirt_images_rbd_pool=volumes
    libvirt_images_rbd_ceph_conf=/etc/ceph/ceph.conf
    rbd_user=cinder
    rbd_secret_uuid=90af0017-5503-4419-bc27-1ca58553cf9c

It is also good practice to disable file injection. Usually, while booting an instance, Nova attempts to open the rootfs of the virtual machine and inject things such as passwords and SSH keys directly into the filesystem. It is better to rely on the metadata service and cloud-init instead. On every compute node, edit /etc/nova/nova.conf and add:

  • libvirt_inject_password=false
    libvirt_inject_key=false
    libvirt_inject_partition=-2

Restart the services:

  • # glance-control api restart <== should be done on controller
    # service nova-compute restart <== should be done on compute node
    # service cinder-volume restart <== should be done on compute node
    # service glance-registry restart <== should be done on controller
    # service glance-api restart <== should be done on controller
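
After the restarts, a quick way to confirm the services came back up is to query them from the controller. The admin credentials must be sourced first; admin-openrc.sh is just a placeholder for your own credentials file, and service-list requires reasonably recent nova/cinder clients:

  • $ source admin-openrc.sh
    $ nova service-list
    $ cinder service-list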



======= Checking whether Ceph is already working with OpenStack Cinder and Glance =======

On the controller node, create a 1 GB test volume:

$ cinder create 1

Then you can check the volume status in both Cinder and Ceph:

For Cinder run:

$ cinder list 

For Ceph run:

$ rbd -p <cinder-pool> ls 

Example: $ rbd -p volumes ls


If an RBD image for the new volume appears in the listing, Cinder and Ceph are working together.
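
You can run a similar check for Glance: upload a small test image and confirm it lands in the images pool. The file cirros-0.3.1-x86_64-disk.img below is just an example image you would have downloaded beforehand:

  • $ glance image-create --name cirros-test --is-public True --disk-format qcow2 --container-format bare --file cirros-0.3.1-x86_64-disk.img
    $ rbd -p images ls

The listing should include an RBD image named after the new Glance image ID.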
