Welcome to the Free Software contributions diary of Loïc Dachary. Although the posts look like blog entries, they really are technical reports about the work done during the day. They are meant to be used as a reference by co-developers and managers.

Lowering Ceph scrub I/O priority

Note: the following does not currently work in Firefly because of http://tracker.ceph.com/issues/9677. The fix has been backported to Firefly and will likely be included in 0.80.8.

By default, the disk I/O of a Ceph OSD scrubbing thread has the same priority as all other threads. It can be lowered for all OSDs with the ioprio options:

ceph tell osd.* injectargs '--osd_disk_thread_ioprio_priority 7'
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle'

All other threads in the OSD will be best effort (be/4) with priority 4, which is the default for daemons. The disk thread will show as idle:

$ sudo iotop --batch --iter 1 | grep 'ceph-osd -i 0' | grep -v be/4
 4156 idle loic        0.00 B/s    0.00 B/s  0.00 %  0.00 % ./ceph-osd -i 0 ..
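The same settings can be made persistent across OSD restarts by adding them to the [osd] section of ceph.conf (a sketch; the option names are assumed to match the injectargs arguments above):

```
[osd]
osd disk thread ioprio class = idle
osd disk thread ioprio priority = 7
```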


Posted in ceph | 3 Comments

Running Ceph with the tcmalloc heap profiler

When running a Ceph cluster from sources, the tcmalloc heap profiler can be started for all daemons with:

CEPH_HEAP_PROFILER_INIT=true \
  CEPH_NUM_MON=1 CEPH_NUM_OSD=3 \
  ./vstart.sh -n -X -l mon osd

The osd.0 stats can be displayed with

$ ceph tell osd.0 heap stats
*** DEVELOPER MODE: setting PATH, PYTHONPATH and LD_LIBRARY_PATH ***
osd.0tcmalloc heap stats:------------------------------------------------
MALLOC:        6084984 (    5.8 MiB) Bytes in use by application
MALLOC: +       180224 (    0.2 MiB) Bytes in page heap freelist
MALLOC: +      1430776 (    1.4 MiB) Bytes in central cache freelist
MALLOC: +      7402112 (    7.1 MiB) Bytes in transfer cache freelist
MALLOC: +      5873424 (    5.6 MiB) Bytes in thread cache freelists
MALLOC: +      1290392 (    1.2 MiB) Bytes in malloc metadata
MALLOC:   ------------
MALLOC: =     22261912 (   21.2 MiB) Actual memory used (physical + swap)
MALLOC: +            0 (    0.0 MiB) Bytes released to OS (aka unmapped)
MALLOC:   ------------
MALLOC: =     22261912 (   21.2 MiB) Virtual address space used
MALLOC:
MALLOC:           1212              Spans in use
MALLOC:             65              Thread heaps in use
MALLOC:           8192              Tcmalloc page size
------------------------------------------------
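As a sanity check on the report above, the bytes in use by the application plus the freelist and metadata lines should add up to the "Actual memory used" total:

```shell
# Sum the components reported by the tcmalloc heap stats above;
# the result should match the "Actual memory used" line (22261912 bytes).
echo $(( 6084984 + 180224 + 1430776 + 7402112 + 5873424 + 1290392 ))
```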

See the Ceph memory profiling documentation for more information.

Posted in ceph | Leave a comment

Ceph development environment in Docker

The Docker package is installed with

sudo apt-get install docker.io

and the loic user is made part of the docker group to allow it to run containers.

$ grep docker /etc/group
docker:x:142:loic

The most popular ubuntu image, as reported by

$ docker search ubuntu | head -2
NAME    DESCRIPTION                 STARS ...
ubuntu  Official Ubuntu base image  715   ...

is pulled locally with

docker pull ubuntu

A container is created from the desired image (as listed by docker images) with:

docker run -v /home/loic:/home/loic -t -i ubuntu:14.04

The home directory is mounted into the container because it contains the local Ceph clone used for development. The user loic is recreated in the container with

adduser loic

and the necessary development packages are installed with

apt-get build-dep ceph
apt-get install libudev-dev git-core python-virtualenv emacs24-nox ccache

The state of the container is saved for re-use with

$ docker ps
CONTAINER ID        IMAGE               ...
2c694d6d5f90        ubuntu:14.04        ...
$ docker commit 2c694d6d5f90 ubuntu-14.04-ceph-devel

Ceph is then compiled and tested locally with

cd ~/software/ceph/ceph
./autogen.sh
./configure --disable-static --with-debug \
   CC='ccache gcc' CFLAGS="-Wall -g" \
   CXX='ccache g++' CXXFLAGS="-Wall -g"
make -j4
make check


Posted in ceph | 1 Comment

OpenStack Upstream Training challenges

The OpenStack Upstream Training scheduled November 1st, 2014 in Paris will have an unprecedented number of participants, and for the first time there is a shortage of Lego. In addition to the 80 pounds of spare parts (picture foreground), six new buildings have been acquired today (Tower Bridge, Sydney Opera House, Parisian Restaurant, Pet Shop, Palace Cinema and Grand Emporium). They will be at Lawomatic for assembly from October 1st to October 31st. Anyone willing to participate please send me an email.

Once this first challenge is complete, the buildings will have to be transported to the Hyatt conference rooms where the training will take place. The rendez-vous point is Lawomatic at 8am on Saturday, November 1st, 2014. Each of us will carefully transport a building (or part of one, in the case of the Tower Bridge) in the subway. There will be coffee and croissants upon arrival :-)

Posted in Upstream University, openstack | Leave a comment

List the versions of OSDs in a Ceph cluster

The command below lists the version that each OSD in a Ceph cluster is running. It is handy to find out how mixed the cluster is.

# ceph tell osd.* version
osd.0: { "version": "ceph version 0.67.4 (ad85ba8b6e8252fa0c7)"}
osd.1: { "version": "ceph version 0.67.5 (a60acafad6096c69bd1)"}
osd.3: Error ENXIO: problem getting command descriptions from osd.3
osd.6: { "version": "ceph version 0.72.2 (a913ded64099cfd60)"}
osd.7: { "version": "ceph version 0.72.1 (4d923874997322de)"}
osd.8: { "version": "ceph version 0.72.1 (4d923874997322de)"}
...
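To summarize how mixed the cluster is, the output can be piped through grep, sort and uniq. A minimal sketch, where the sample lines above stand in for the output of ceph tell on a live cluster:

```shell
# Count OSDs per running version; the here-document stands in for
# the output of `ceph tell osd.* version` on a live cluster.
grep -o 'ceph version [0-9.]*' <<'EOF' | sort | uniq -c | sort -rn
osd.0: { "version": "ceph version 0.67.4 (ad85ba8b6e8252fa0c7)"}
osd.1: { "version": "ceph version 0.67.5 (a60acafad6096c69bd1)"}
osd.6: { "version": "ceph version 0.72.2 (a913ded64099cfd60)"}
osd.7: { "version": "ceph version 0.72.1 (4d923874997322de)"}
osd.8: { "version": "ceph version 0.72.1 (4d923874997322de)"}
EOF
```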
Posted in ceph | Leave a comment

HOWTO extract a stack trace from teuthology

When a teuthology test suite fails on Ceph, it shows in pulpito. For instance there is one failure in the monthrash test suite with details and a link to the logs. By removing the teuthology.log part of the link, a directory listing shows all the information archived for this run.
In the example above the logs show:

client.0.plana34.stderr:+ ceph_test_rados_api_io
client.0.plana34.stdout:Running main() from gtest_main.cc
client.0.plana34.stdout:[==========] Running 43 tests from 4 test cases.
client.0.plana34.stdout:[----------] Global test environment set-up.
client.0.plana34.stdout:[----------] 11 tests from LibRadosIo
client.0.plana34.stdout:[ RUN      ] LibRadosIo.SimpleWrite
client.0.plana34.stdout:[       OK ] LibRadosIo.SimpleWrite (1509 ms)
client.0.plana34.stdout:[ RUN      ] LibRadosIo.ReadTimeout
client.0.plana34.stderr:Segmentation fault (core dumped)

That shows ceph_test_rados_api_io was running on the plana34 machine and dumped core, and the remote/plana34/coredump subdirectory contains the corresponding core dump.
The teuthology logs show the repository from which the binary was downloaded (it was produced by gitbuilder):

echo deb http://gitbuilder.ceph.com/ceph-deb-precise-x86_64-basic/sha1/f5c1d3b6988bae5ffb914d2ac0b2858caeffe12c precise main | sudo tee /etc/apt/sources.list.d/ceph.list

and running this line on Ubuntu 12.04 (precise) 64-bit, as suggested by the name of the precise-x86_64 subdirectory, will make the corresponding binary packages available. It is also possible to download them directly from the pool/main/c/ceph subdirectory. The packages that are suffixed with -dbg retain the debug symbols that are necessary for gdb to display an informative stack trace.
The ceph_test_rados_api_io binary is part of the ceph-test package and can be extracted with

$ dpkg --fsys-tarfile ceph-test_0.85-726-gf5c1d3b-1precise_amd64.deb | \
  tar xOf -  ./usr/bin/ceph_test_rados_api_io \
  > ceph_test_rados_api_io

and the stack trace displayed with

$ gdb /usr/bin/ceph_test_rados_api_io 1411176209.8835.core
(gdb) bt
#0  0x00007f541b95750a in pthread_rwlock_wrlock () from /lib/x86_64-linux-gnu/libpthread.so.0
#1  0x00007f541bd41341 in RWLock::get_write(bool) () from /usr/lib/librados.so.2
#2  0x00007f541bd2bbc9 in Objecter::op_cancel(Objecter::OSDSession*, unsigned long, int) () from /usr/lib/librados.so.2
#3  0x00007f541bcf1349 in Context::complete(int) () from /usr/lib/librados.so.2
#4  0x00007f541bdad5ea in RWTimer::timer_thread() () from /usr/lib/librados.so.2
#5  0x00007f541bdb149d in RWTimerThread::entry() () from /usr/lib/librados.so.2
#6  0x00007f541b953e9a in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#7  0x00007f541b16a3fd in clone () from /lib/x86_64-linux-gnu/libc.so.6
#8  0x0000000000000000 in ?? ()
Posted in ceph | Leave a comment

Running python rados tests in Ceph

When Ceph is built from sources, make check will not run the test_rados.py tests.
A minimal cluster is required and can be run from the src directory with:

CEPH_NUM_MON=1 CEPH_NUM_OSD=3 ./vstart.sh -d -n -X -l mon osd

The test can then be run with

$ PYTHONPATH=pybind nosetests -v \
   test/pybind/test_rados.py

and if only TestIoctx.test_aio_read is of interest, it can be appended to the filename:

$ PYTHONPATH=pybind nosetests -v \
   test/pybind/test_rados.py:TestIoctx.test_aio_read
test_rados.TestIoctx.test_aio_read ... ok

-------------------------------
Ran 1 test in 4.227s

OK
Posted in ceph | Leave a comment

t540p touchpad disable mouse, keep buttons

To use the touchpad to click (but not to move the pointer) and keep using the trackpoint for pointer movement:

synclient AreaBottomEdge=1
Posted in Uncategorized | Leave a comment

Ceph placement group memory footprint, in debug mode

A Ceph cluster is run from sources with

CEPH_NUM_MON=1 CEPH_NUM_OSD=5 ./vstart.sh -d -n -X -l mon osd

and each ceph-osd uses approximately 50MB of resident memory

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
loic      7489  1.7  0.2 586080 43676 ?        Ssl  17:55   0:01  ceph-osd
loic      7667  1.6  0.2 586080 43672 ?        Ssl  17:55   0:01  ceph-osd

A pool is created with 10,000 placement groups

$ ceph osd pool create manypg 10000
pool 'manypg' created

the creation completes within half an hour

$ ceph -w
...
2014-09-19 17:57:35.193706 mon.0 [INF] pgmap v40: 10152
   pgs: 10000 creating, 152 active+clean; 0 bytes data, 808 GB used, 102 GB / 911 GB avail
...
2014-09-19 18:35:08.668877 mon.0 [INF] pgmap v583: 10152
   pgs: 46 active, 10106 active+clean; 0 bytes data, 815 GB used, 98440 MB / 911 GB avail
2014-09-19 18:35:13.505841 mon.0 [INF] pgmap v584: 10152
   pgs: 10152 active+clean; 0 bytes data, 815 GB used, 98435 MB / 911 GB avail

The OSDs now use approximately 150MB each, which suggests that each additional placement group uses ~10KB of resident memory.

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
loic      7489  0.7  1.0 725952 166144 ?       Ssl  17:55   2:02 ceph-osd
loic      7667  0.7  0.9 720808 160440 ?       Ssl  17:55   2:03 ceph-osd
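The ~10KB estimate follows from the numbers above: roughly 100MB of additional resident memory per OSD, divided by the 10,000 new placement groups:

```shell
# (150 MB - 50 MB) of additional RSS spread over 10000 placement
# groups, expressed in KB per placement group.
echo "$(( (150 - 50) * 1024 / 10000 )) KB"
```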
Posted in ceph | Leave a comment

Running node-rados from sources

The nodejs rados module comes with an example that requires a Ceph cluster.
If Ceph was compiled from source, a cluster can be run from the source tree with

rm -fr dev out ;  mkdir -p dev
CEPH_NUM_MON=1 CEPH_NUM_OSD=3 \
 ./vstart.sh -d -n -X -l mon osd

It can be used by changing the /etc/ceph/ceph.conf path in the example to the one from the sources: $CEPHSOURCE/src/ceph.conf. The expected output is

$ node exemple.js
fsid : c041968a-a895-4a5c-a0a7-6621e08a4f07
ls pools : rbd
 --- RUN Sync Write / Read ---
Read data : 01234567ABCDEF
 --- RUN ASync Write / Read ---
 --- RUN Attributes Write / Read ---
testfile3 xattr = {"attr1":"first attr","attr2":"second attr","attr3":"last attr value"}
Posted in ceph | Leave a comment