Installing ceph with ceph-deploy

A ceph-deploy package is created for Ubuntu raring and installed with

dpkg -i ceph-deploy_0.0.1-1_all.deb

An ssh key is generated without a passphrase and copied to the root .ssh/authorized_keys file of each host on which ceph-deploy will act:

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/
The key fingerprint is:
The key's randomart image is:
+--[ RSA 2048]----+
|            .o.  |
|            oo.o |
|           . oo.+|
|          . o o o|
|        SE o   o |
|     . o. .      |
|      o +.       |
|       + =o .    |
|       .*..o     |
+-----------------+

# for i in 12 14 15 ; do
>  ssh bm00$i cat \>\> .ssh/authorized_keys < .ssh/id_rsa.pub
> done

Each host is installed with Ubuntu raring and has a spare, unused disk at /dev/sdb. The ceph packages are installed with:

ceph-deploy install

The short version of each FQDN is added to /etc/hosts on each host, because ceph-deploy will assume that it exists:

for host in bm0012 bm0014 bm0015 ; do
  getent hosts bm0012.the.re bm0014.the.re bm0015.the.re | \
    sed -e 's/\.the\.re//' | ssh $host cat \>\> /etc/hosts
done
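The shortening done by the sed expression can be checked locally. A minimal sketch, using a made-up IP address for illustration:

```shell
# Strip the .the.re domain suffix, keeping only the short host name,
# exactly as the loop above does before appending to /etc/hosts.
echo "10.0.0.12 bm0012.the.re" | sed -e 's/\.the\.re//'
# prints: 10.0.0.12 bm0012
```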

The ceph cluster configuration is created with:

# ceph-deploy new

and the corresponding mons are deployed with

ceph-deploy mon create

Even after the command returns, it takes a few seconds for the keys to be generated on each host: the appearance of the ceph-mon process shows when it is complete. Before creating the osds, the keys are obtained from a mon with:
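Watching for the ceph-mon process can be scripted instead of checked by hand. The wait_for_process helper below is an illustrative sketch, not part of ceph-deploy; it would be run on (or via ssh to) each mon host:

```shell
# Poll until a process with the given name shows up, or give up after
# the timeout (in seconds): returns 0 once seen, 1 otherwise.
wait_for_process() {
    name=$1
    timeout=$2
    while [ "$timeout" -gt 0 ] ; do
        if pgrep -x "$name" > /dev/null ; then
            return 0
        fi
        sleep 1
        timeout=$((timeout - 1))
    done
    return 1
}

# For instance: wait_for_process ceph-mon 30
```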

ceph-deploy gatherkeys

The osds are then created with:

ceph-deploy osd create

After a few seconds the cluster stabilizes, as shown with

# ceph -s
   health HEALTH_OK
   monmap e1: 3 mons at {bm0012=188.165:6789/0,bm0014=188.165:6789/0,bm0015=188.165:6789/0}, election epoch 24, quorum 0,1,2 bm0012,bm0014,bm0015
   osdmap e14: 3 osds: 3 up, 3 in
    pgmap v106: 192 pgs: 192 active+clean; 0 bytes data, 118 MB used, 5583 GB / 5583 GB avail
   mdsmap e1: 0/0/1 up
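Rather than eyeballing the output, the HEALTH_OK line can be tested for in a script. The is_healthy function below is an illustrative helper, checking a status text such as the one above:

```shell
# Succeeds when the given "ceph -s" output reports HEALTH_OK.
is_healthy() {
    echo "$1" | grep -q HEALTH_OK
}

# Illustrative polling loop (requires a working admin keyring):
#   until is_healthy "$(ceph -s)" ; do sleep 5 ; done
```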

A 10GB RBD is created, mounted and destroyed with:

# rbd create --size 10240 test1
# rbd map test1 --pool rbd
# mkfs.ext4 /dev/rbd/rbd/test1
# mount /dev/rbd/rbd/test1 /mnt
# df -h /mnt
Filesystem      Size  Used Avail Use% Mounted on
/dev/rbd1       9.8G   23M  9.2G   1% /mnt
# umount /mnt
# rbd unmap /dev/rbd/rbd/test1
# rbd rm test1
Removing image: 100% complete...done.
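The --size argument of rbd create is expressed in megabytes, which is why a 10GB image is requested as 10240 above:

```shell
# rbd create --size takes megabytes: 10 GB * 1024 = 10240 MB.
size_gb=10
size_mb=$((size_gb * 1024))
echo $size_mb    # prints 10240
```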

Ubuntu raring package

A series of patches fix minor build and deploy problems for the ceph-deploy package:

  • The debian packages need python-setuptools as a build dependency
  • Add python-pushy to the list of packages required to run ceph-deploy when installed on debian
  • The list of paths added by ceph-deploy does not cover all the deployment scenarios. In particular, when installed from a package it will end up in /usr/lib/python2.7/dist-packages/ceph_deploy. The error message is removed: the import will fail if it does not find the module.
  • Add the missing python-setuptools runtime dependency to debian/control

Resetting the installation

To restart from scratch (i.e. discarding all data and all installation parameters), uninstall the software with

ceph-deploy uninstall

and purge any leftovers with

for host in bm0012 bm0014 bm0015 ; do
  ssh $host apt-get remove --purge ceph ceph-common ceph-mds
done

Remove the configuration files and data files with

for host in bm0012 bm0014 bm0015 ; do
  ssh $host rm -fr /etc/ceph /var/lib/ceph
done

Reset the disk with

for host in bm0012 bm0014 bm0015 ; do
  ssh $host <<EOF
umount /dev/sdb1
dd if=/dev/zero of=/dev/sdb bs=1024k count=100
sgdisk -g --clear /dev/sdb
EOF
done
This entry was posted in Raring, Ubuntu, ceph.

9 Responses to Installing ceph with ceph-deploy

  1. majianpeng says:

    There are some questions about create osd.
    A:ceph-deploy osd create
    On bm0012, Was sdb mounted?

    B:I wanted to set many osd on bm0012.How to do?


    • Loic Dachary says:

/dev/sdb was not mounted and only had an empty GPT partition table on it. If you had another disk (say /dev/sdc) on that host, you would add it and have another OSD for that disk.

  2. Raphael Lehmann says:

Good walkthrough, but for me it does not work.
ceph-deploy always hangs while creating the osds:
    “ceph-deploy -v osd create”
    The output is:
    “Preparing cluster ceph disks
    Deploying osd to
    Host is now ready for osd use.
    Preparing host disk /dev/sdb journal None activate True”
    and after waiting 5 minutes and pressing Ctrl+C:
    “^CTraceback (most recent call last):
    File “/usr/bin/ceph-deploy”, line 22, in
    File “/usr/lib/pymodules/python2.7/ceph_deploy/”, line 112, in main
    return args.func(args)
    File “/usr/lib/pymodules/python2.7/ceph_deploy/”, line 293, in osd
    prepare(args, cfg, activate_prepared_disk=True)
    File “/usr/lib/pymodules/python2.7/ceph_deploy/”, line 177, in prepare
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 255, in
    (conn.operator(type_, self, args, kwargs))
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 66, in operator
    return self.send_request(type_, (object, args, kwargs))
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 315, in send_request
    m = self.__waitForResponse(handler)
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 420, in __waitForResponse
    m = self.__recv()
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 601, in __recv
    m = self.__istream.receive_message()
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 104, in receive_message
    return Message.unpack(self.__file)
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 96, in unpack
    header = read(file, Message.PACKING_SIZE)
    File “/usr/lib/python2.7/dist-packages/pushy/protocol/”, line 56, in read
    partial = – len(data))
I installed ceph-deploy from the ceph repository on a new Ubuntu raring and I use IPv6.

  3. Josh says:

    How would you place the journal on another disk? I have a SSD (sda) and a bigger disk (sdb).

I have tried create node1:sdb:/dev/sda1 and the logs say it is created and activated. The osd however never comes up; it remains down. Any thoughts? Both disks are gpt. Am I doing anything wrong? Thanks

    • Loic Dachary says:

      Hi Josh,

      The syntax seems right. Would you like to chat about that on ? With a little more information I’m sure the problem will be fixed.


  4. nabil says:

    Hi Sir,
I followed your steps. When I try to execute ceph-deploy gatherkeys ceph-server01 it gives me:
    [ceph_deploy.cli][INFO ] Invoked (1.2.7): /usr/bin/ceph-deploy gatherkeys ceph-server01
    [ceph_deploy.gatherkeys][DEBUG ] Checking ceph-server01 for /etc/ceph/ceph.client.admin.keyring
    [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
    [ceph_deploy.gatherkeys][WARNIN] Unable to find /etc/ceph/ceph.client.admin.keyring on ['ceph-server01']
    [ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
    [ceph_deploy.gatherkeys][DEBUG ] Checking ceph-server01 for /var/lib/ceph/bootstrap-osd/ceph.keyring
    [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
    [ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-osd/ceph.keyring on ['ceph-server01']
    [ceph_deploy.gatherkeys][DEBUG ] Checking ceph-server01 for /var/lib/ceph/bootstrap-mds/ceph.keyring
    [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
    [ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on ['ceph-server01']

    any advice ?
