Author Archives: Loic Dachary

Ceph disaster recovery scenario

A datacenter containing three hosts of a non-profit Ceph and OpenStack cluster suddenly lost connectivity, and it could not be restored within 24 hours. The corresponding OSDs were marked out manually. The Ceph pool dedicated to this datacenter became unavailable … Continue reading
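For context, marking the unreachable OSDs out so that Ceph rebalances the data onto the surviving hosts looks roughly like this (a minimal sketch; the OSD numbers are made up and depend on the cluster):

# check which OSDs are reported down
$ ceph osd tree
# mark each unreachable OSD out so its placement groups get remapped
$ ceph osd out osd.12
$ ceph osd out osd.13
$ ceph osd out osd.14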

Posted in ceph | Leave a comment

puppet-ceph update

At the end of last year, a new puppet-ceph module was bootstrapped with the ambitious goal of reuniting the dozens of individual efforts. I’m very happy with what we’ve accomplished. We are making progress although our community is mixed, but more importantly, … Continue reading

Posted in ceph, puppet | Leave a comment

Ceph erasure code jerasure plugin benchmarks (Highbank ARMv7)

The benchmark described for the Intel Xeon is run on a Highbank ARMv7 Processor rev 0 (v7l) processor (made by Calxeda), using the same codebase. The encoding speed is ~450MB/s for K=2, M=1 (i.e. a RAID5 equivalent) … Continue reading

Posted in ceph | Leave a comment

workaround DNSError when running teuthology-suite

Note: this is only useful for people with access to the Ceph lab. When running Ceph integration tests using teuthology, they may fail because of a DNS resolution problem with: $ ./virtualenv/bin/teuthology-suite --base ~/software/ceph/ceph-qa-suite \ --suite upgrade/firefly-x \ --ceph … Continue reading
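A generic way to work around a failing lookup (not necessarily the fix described in the post) is to verify the resolution and, if needed, pin the name locally; the hostname and address below are placeholders:

# check whether the name teuthology is trying to resolve actually resolves
$ host lab.example.com
# if it does not, pin it in /etc/hosts (name and address are placeholders)
$ echo '10.0.0.1 lab.example.com' | sudo tee -a /etc/hosts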

Posted in ceph | Leave a comment

Locally repairable codes and implied parity

When a Ceph OSD is lost in an erasure-coded pool, it can be recovered using the others. For instance, if OSD X3 is lost, OSDs X1, X2, X4 to X10 and P1 to P4 are retrieved by the primary … Continue reading

Posted in ceph | Leave a comment

Ceph erasure code jerasure plugin benchmarks

On an Intel(R) Xeon(R) CPU E5-2630 0 @ 2.30GHz processor (and all SIMD-capable Intel processors), the Reed Solomon Vandermonde technique of the jerasure plugin, which is the default in Ceph Firefly, performs better. The chart is for decoding erasure … Continue reading
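Measurements of this kind are typically produced with the ceph_erasure_code_benchmark tool from the Ceph sources; a minimal invocation probably looks like the sketch below (the object size and iteration count are assumptions, not the values used for the chart):

# encode with the jerasure Reed-Solomon Vandermonde technique,
# K=2 data chunks and M=1 coding chunk (values below are illustrative)
$ ceph_erasure_code_benchmark \
    --plugin jerasure \
    --parameter technique=reed_sol_van \
    --parameter k=2 --parameter m=1 \
    --workload encode \
    --size 4194304 --iterations 1000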

Posted in ceph | Leave a comment

Create a partition and make it an OSD

Note: it is similar to Creating a Ceph OSD from a designated disk partition but simpler. In a nutshell, to use the remaining space from /dev/sda, and assuming Ceph is already configured in /etc/ceph/ceph.conf, it is enough to: $ sgdisk … Continue reading
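A sketch of what the full sequence might look like, assuming the new partition ends up as /dev/sda2 (the partition number and the ceph-disk calls are illustrative, not copied from the post):

# create a new partition from the largest remaining free space on /dev/sda
$ sudo sgdisk --largest-new=2 --change-name="2:ceph data" /dev/sda
$ sudo partprobe /dev/sda
# prepare and activate the partition as an OSD, using the cluster
# configuration already present in /etc/ceph/ceph.conf
$ sudo ceph-disk prepare /dev/sda2
$ sudo ceph-disk activate /dev/sda2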

Posted in ceph | Leave a comment

enable secondary network interface and ignore the default route

When two network interfaces are associated with an OpenStack instance, the Ubuntu precise guest only configures the first one. Assuming the second can be configured via DHCP, it can be added with: cat > /etc/network/interfaces.d/eth1.cfg <<EOF auto eth1 iface … Continue reading
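A sketch of what the complete snippet might look like, together with one way to keep the eth1 lease from installing a second default route (the dhclient.conf stanza is an assumption, not necessarily the method used in the post):

# configure eth1 via DHCP on the Ubuntu precise guest
$ cat > /etc/network/interfaces.d/eth1.cfg <<EOF
auto eth1
iface eth1 inet dhcp
EOF
# optionally stop dhclient from requesting a router (default route) for eth1
$ cat >> /etc/dhcp/dhclient.conf <<EOF
interface "eth1" {
  request subnet-mask, broadcast-address, domain-name-servers;
}
EOF
$ ifup eth1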

Posted in Havana, openstack | Leave a comment

Recovering from a cinder RBD host failure

OpenStack Havana Cinder volumes associated with an RBD Ceph pool are bound to a host.

cinder service-list --host bm0014.the.re@rbd-ovh
+---------------+-----------------------+------+---------+-------+
|     Binary    |          Host         | Zone |  Status | State |
+---------------+-----------------------+------+---------+-------+
| cinder-volume | bm0014.the.re@rbd-ovh | ovh  | enabled … Continue reading
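Since the volumes live in the RBD pool rather than on the failed host itself, one possible recovery (not necessarily the exact procedure from the post) is to re-point them at a surviving cinder-volume host, for instance directly in the cinder database; the surviving host name below is a placeholder:

# list the cinder-volume services and spot the one that is down
$ cinder service-list
# re-point the volumes bound to the dead host at a surviving one
# (back up the database first; host names are placeholders)
$ mysql -u root -p cinder \
    -e "update volumes set host='bm0015.the.re@rbd-ovh' where host='bm0014.the.re@rbd-ovh';"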

Posted in Havana, ceph, openstack | 3 Comments

Non profit OpenStack & Ceph cluster distributed over five datacenters

A few non-profit organizations (April, FSF France, tetaneutral.net…) and volunteers constantly research how to get compute, storage and bandwidth that are: 100% Free Software, content neutral, low maintenance, reliable, and cheap. The latest setup, in use since October 2013, is … Continue reading

Posted in Havana, ceph, openstack | 2 Comments