Author Archives: Loic Dachary

Improving PGs distribution with CRUSH weight sets

In a Ceph cluster with a single pool of 1024 Placement Groups (PGs), the PGs will not be evenly distributed among the devices (see Predicting Ceph PG placement for details about this uneven distribution). In the following, the difference between …
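
To get an intuition for why the counts diverge, here is a minimal sketch (not the CRUSH implementation, just a stand-in hash) that places 1024 PG ids onto ten OSDs of equal weight and prints the resulting counts; the per-OSD totals rarely match the 102.4 average one might expect.

import hashlib
from collections import Counter

pgs = 1024
osds = 10

def pick_osd(pg):
    # Stand-in for CRUSH: a stable hash of the PG id, reduced modulo
    # the number of OSDs; it only illustrates the statistical spread.
    h = int(hashlib.sha256(str(pg).encode()).hexdigest(), 16)
    return h % osds

counts = Counter(pick_osd(pg) for pg in range(pgs))
for osd, count in sorted(counts.items()):
    print("osd.%d gets %d PGs (ideal would be %.1f)" % (osd, count, pgs / osds))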

Posted in ceph, crush, libcrush

Faster Ceph CRUSH computation with smaller buckets

The CRUSH function maps Ceph placement groups (PGs) and objects to OSDs. It is used extensively in Ceph clients and daemons as well as in the Linux kernel modules, and its CPU cost should be kept to a minimum. It …
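
A rough way to see why smaller buckets help: with straw2 buckets, choosing an item computes one hash-derived draw per item in the bucket, so the work per mapping grows with the bucket size. The sketch below is only an assumed cost model (not a measurement); it compares a flat bucket of 1024 devices with a two-level hierarchy of 32 hosts holding 32 devices each.

# Assumed cost model: straw2 computes one hash-derived draw for every
# item in each bucket it visits, then keeps the largest draw.
devices = 1024

flat_cost = devices                           # one bucket holding every device

hosts = 32
devices_per_host = devices // hosts
hierarchical_cost = hosts + devices_per_host  # pick a host, then a device in it

print("flat bucket: %d draws per mapping" % flat_cost)
print("32 hosts x 32 devices: %d draws per mapping" % hierarchical_cost)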

Posted in ceph, libcrush

Predicting Ceph PG placement

When creating a new Ceph pool, deciding on the number of PGs requires some thinking to ensure there are a few hundred PGs per OSD. The distribution can be verified with crush analyze as follows: $ crush analyze --rule data …
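
Before reaching for crush analyze, a back-of-the-envelope check helps: the expected number of PG replicas per OSD is pg_num * pool_size / number_of_osds. A small sketch with made-up numbers:

pg_num = 4096     # PGs in the pool (hypothetical)
pool_size = 3     # replication factor
num_osds = 40     # OSDs selected by the rule

expected = pg_num * pool_size / num_osds
print("expected PGs per OSD: %.1f" % expected)  # aim for a few hundred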

Posted in ceph

How many objects will move when changing a crushmap?

After a crushmap is changed (e.g. addition/removal of devices, modification of weights or tunables), objects may move from one device to another. The crush compare command can be used to show what would happen for a given rule and replication …
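
The idea behind such a comparison can be sketched without the crush compare command itself: map every PG with the old and the new layout and count how many end up on a different device. The mapping function below is a hypothetical placeholder, not CRUSH.

import hashlib

def mapping(pg, osds):
    # Placeholder for a CRUSH mapping: choose one OSD per PG via a hash.
    h = int(hashlib.sha256(str(pg).encode()).hexdigest(), 16)
    return osds[h % len(osds)]

old_osds = list(range(10))   # devices before the crushmap change
new_osds = list(range(11))   # one device added

moved = sum(1 for pg in range(1024)
            if mapping(pg, old_osds) != mapping(pg, new_osds))
print("%d out of 1024 PGs would map to a different OSD" % moved)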

Posted in ceph

Predicting which Ceph OSD will fill up first

When a device is added to Ceph, it is assigned a weight that reflects its capacity. For instance, if osd.1 is a 1TB disk, its weight will be 1.0, and if osd.2 is a 4TB disk, its weight will be …
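
Weights only set the target share: the OSD that actually fills up first is the one carrying the most data per unit of capacity. A minimal sketch, with invented weights and PG counts, divides each OSD's PG count by its weight to find the first candidate to fill up.

# Hypothetical weights (in TB) and PG counts for three OSDs.
osds = {
    "osd.1": {"weight": 1.0, "pgs": 130},
    "osd.2": {"weight": 4.0, "pgs": 480},
    "osd.3": {"weight": 1.0, "pgs": 110},
}

def pgs_per_tb(item):
    _, info = item
    return info["pgs"] / info["weight"]

name, info = max(osds.items(), key=pgs_per_tb)
print("%s is expected to fill up first (%.1f PGs per TB)"
      % (name, info["pgs"] / info["weight"]))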

Posted in ceph

logging udev events at boot time

Adapted from Peter Rajnoha's post: create a special systemd unit to monitor udev during boot:

cat > /etc/systemd/system/systemd-udev-monitor.service <<EOF
[Unit]
Description=udev Monitoring
DefaultDependencies=no
Wants=systemd-udevd.service
After=systemd-udevd-control.socket systemd-udevd-kernel.socket
Before=sysinit.target systemd-udev-trigger.service
[Service]
Type=simple
ExecStart=/usr/bin/sh -c "/usr/sbin/udevadm monitor --udev --env > /udev_monitor.log"
[Install]
WantedBy=sysinit.target
…

Posted in Uncategorized

Testing Ceph with ARMv8 OpenStack instances

The Ceph integration tests can be run on ARMv8 (aka arm64 or aarch64) OpenStack instances on CloudLab or Runabove. Once logged in to CloudLab, an OpenStack cluster suitable for teuthology must be created. To start an experiment, click Change Profile to …

Posted in ceph, openstack

Semi-reliable GitHub scripting

The githubpy Python library provides a thin layer on top of the GitHub V3 API, which is convenient because the official GitHub documentation can be used. The undocumented behavior of GitHub is outside the scope of this library and …
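
Because calls to the GitHub API occasionally fail for transient reasons, a retry wrapper is one way to make scripting semi-reliable. The sketch below is generic Python and does not show the githubpy API itself; the call being retried is a placeholder.

import time

def retry(call, attempts=5, delay=1.0):
    # Retry a flaky call with exponential backoff, re-raising on the
    # last attempt so genuine errors are not silently swallowed.
    for attempt in range(attempts):
        try:
            return call()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(delay * 2 ** attempt)

# usage (placeholder call): pulls = retry(lambda: fetch_pull_requests())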

Posted in Uncategorized

teuthology forensics with git, shell and paddles

When a teuthology integration test for Ceph fails, the results are analyzed to find the source of the problem. For instance, the upgrade suite: pool_create failed with error -4 EINTR issue was reported in early October 2015, with multiple integration job …
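
Part of such forensics is plain git: listing the commits that landed in the suspect window and looking for candidates. A sketch, run inside a clone of the Ceph repository, with placeholder branch and dates:

import subprocess

# Placeholder branch and dates: the window just before the first failing job.
log = subprocess.check_output([
    "git", "log", "--oneline",
    "--since=2015-09-25", "--until=2015-10-05",
    "origin/master",
]).decode()
print(log)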

Posted in ceph

On demand Ceph packages for teuthology

When a teuthology job installs Ceph, it uses packages created by gitbuilder. These packages are built every time a branch is pushed to the official repository. Contributors who do not have write access to the official repository can either ask …

Posted in ceph