Monthly Archives: May 2017

A tool to rebalance uneven Ceph pools

The algorithm to fix uneven CRUSH distributions in Ceph was implemented as the crush optimize subcommand. Given the output of ceph report, crush analyze can show which buckets are over- or under-filled:

    $ ceph report > ceph_report.json
    $ crush analyze --crushmap …
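Conceptually, what crush analyze reports is the gap between each bucket's actual share of replicas and the share its weight entitles it to. Here is a minimal Python sketch of that comparison; the analyze helper and its inputs are illustrative stand-ins, not the python-crush API:

    from collections import Counter

    def analyze(pg_to_osds, weights):
        # Compare each OSD's actual replica count with the target
        # implied by its share of the total weight.
        total_weight = sum(weights.values())
        replicas = Counter(osd for osds in pg_to_osds.values() for osd in osds)
        total = sum(replicas.values())
        for osd in sorted(weights):
            expected = total * weights[osd] / total_weight
            actual = replicas.get(osd, 0)
            print("osd.%d expected %.0f got %d (%+.2f%%)" % (
                osd, expected, actual, 100.0 * (actual - expected) / expected))

    # Toy input: osd.0 has twice the weight of the others.
    analyze(pg_to_osds={0: [0, 1], 1: [0, 2], 2: [1, 2], 3: [0, 1]},
            weights={0: 2.0, 1: 1.0, 2: 1.0})

Fed with the PG mappings extracted from ceph_report.json, a listing of this kind is what exposes the over/under-filled buckets that crush optimize then corrects.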

Posted in ceph, crush

An algorithm to fix uneven CRUSH distributions in Ceph

The current CRUSH implementation in Ceph does not always provide an even distribution. The most common cause of unevenness is when only a few thousand PGs, or fewer, are mapped. That is not enough samples, and the variations can be …
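To see why the number of samples matters, here is a small simulation assuming uniform pseudo-random placement over ten OSDs (which is what CRUSH approximates); the numbers are illustrative, not taken from the post:

    import random

    def max_deviation(num_pgs, num_osds, seed=42):
        # Place num_pgs PGs uniformly at random and return the worst
        # relative deviation from the ideal even share.
        rng = random.Random(seed)
        counts = [0] * num_osds
        for _ in range(num_pgs):
            counts[rng.randrange(num_osds)] += 1
        expected = num_pgs / num_osds
        return max(abs(c - expected) / expected for c in counts)

    for pgs in (1024, 16384, 262144):
        print("%7d PGs: worst OSD is %.1f%% off target"
              % (pgs, 100 * max_deviation(pgs, 10)))

With only about a thousand PGs the worst OSD can easily be ten percent or more away from its target; the deviation shrinks steadily as the sample grows.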

Posted in ceph, crush, libcrush

Ceph space lost due to overweight CRUSH items

When a CRUSH bucket contains five Ceph OSDs with the following weights:

    osd.0  5
    osd.1  1
    osd.2  1
    osd.3  1
    osd.4  1

20% of the space in osd.0 will never be used by a pool with two replicas. The …
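As a back-of-the-envelope check of that 20% figure (the arithmetic is spelled out here, not quoted from the post): with two replicas per PG placed on distinct OSDs, osd.0 holds at most one replica of each PG, so every byte it stores must be paired with a byte on osd.1 to osd.4, whose combined weight is only 4 against osd.0's 5.

    weights = {"osd.0": 5, "osd.1": 1, "osd.2": 1, "osd.3": 1, "osd.4": 1}
    # The second replica of anything on osd.0 must live on the others,
    # so their combined weight caps how much of osd.0 is ever usable.
    others = sum(w for name, w in weights.items() if name != "osd.0")  # 4
    usable = min(weights["osd.0"], others)                             # 4
    print("%.0f%% of osd.0 can never be used"
          % (100 * (1 - usable / weights["osd.0"])))                   # 20%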

Posted in ceph, crush

Understanding liquid democracy

I have a hard time explaining the idea of liquid democracy, and it is not for lack of trying. Perhaps the recursive side of vote delegation does not come naturally to non-computer-scientists. On the occasion of the period between the two rounds of the …
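For readers who think in code rather than prose, here is a toy sketch of that recursive delegation (entirely illustrative, not taken from the post):

    # Each voter either votes directly or delegates to another voter;
    # a delegated vote follows the chain until it reaches a direct vote.
    votes = {"alice": ("vote", "yes"), "bob": ("delegate", "alice"),
             "carol": ("delegate", "bob"), "dave": ("vote", "no")}

    def resolve(voter, seen=()):
        kind, target = votes[voter]
        if kind == "vote":
            return target
        if voter in seen:  # delegation cycle: the vote is lost
            return None
        return resolve(target, seen + (voter,))

    for voter in votes:
        print(voter, "->", resolve(voter))

carol's vote flows through bob to alice: exactly the recursion that is hard to convey in words.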

Posted in Liquid Democracy

Ceph full ratio and uneven CRUSH distributions

A common CRUSH rule in Ceph is step chooseleaf firstn 0 type host, meaning Placement Groups (PGs) place their replicas on different hosts so the cluster can sustain the failure of any host without losing data. The missing replicas are …
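For context, that step normally sits inside a complete rule; a decompiled crushmap from a Ceph cluster of that era typically contains something like this (the stock replicated rule, shown here for illustration):

    rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default                   # start from the root bucket
        step chooseleaf firstn 0 type host  # one leaf (OSD) per distinct host
        step emit
    }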

Posted in ceph, crush, libcrush