Welcome to the Free Software contributions diary of Loïc Dachary. Although the posts look like blog entries, they really are technical reports about the work done during the day. They are meant to be used as a reference by co-developers and managers.

Semi-reliable GitHub scripting

The githubpy Python library provides a thin layer on top of the GitHub V3 API, which is convenient because the official GitHub documentation applies directly. The undocumented behavior of GitHub is outside the scope of this library and must be handled by the caller.
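
For instance, listing the repositories of the authenticated user is a direct translation of the GET /user/repos endpoint (a minimal sketch; the access token is a placeholder):

    import github

    # the attribute/call chain mirrors the GitHub V3 URL:
    # user('repos').get() performs GET /user/repos
    g = github.GitHub(access_token='...')
    for repo in g.user('repos').get():
        print(repo['full_name'])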

For instance, creating a repository is asynchronous: checking for its existence immediately afterwards may fail. Something similar to the following function should be used to wait until it exists:

    def project_exists(self, name):
        retry = 10
        while retry > 0:
            try:
                for repo in self.github.g.user('repos').get():
                    if repo['name'] == name:
                        return True
                return False
            except github.ApiError:
                retry -= 1
        raise Exception('error getting the list of repos')

    def add_project(self):
        # the arguments of the POST were truncated in the original
        # excerpt; the name payload is an assumption
        r = self.github.g.user('repos').post(name=GITHUB['repo'])
        assert r['full_name'] == GITHUB['username'] + '/' + GITHUB['repo']
        # poll until the asynchronous creation completes
        while not self.project_exists(GITHUB['repo']):
            pass

Another example is merging a pull request. It sometimes fails (with a 503, cannot be merged error) although it succeeds in the background. To cope with that, the state of the pull request should be checked immediately after the merge fails: it can show up as either merged or closed (although the GitHub web interface shows it as merged). The following function can be used to cope with that behavior:

    def merge(self, pr, message):
        retry = 10
        while retry > 0:
            try:
                current = self.github.repos().pulls(pr).get()
                if current['state'] in ('merged', 'closed'):
                    return
                logging.info('state = ' + current['state'])
                # the actual merge call was elided from the original
                # excerpt; a PUT on the pull request merge endpoint,
                # as below, is an assumption consistent with the V3 API
                self.github.repos().pulls(pr).merge().put(
                    commit_message=message)
            except github.ApiError:
                logging.exception('merging ' + str(pr) + ' ' + message)
            retry -= 1
        assert retry > 0

These two examples have been implemented as part of the ceph-workbench integration tests. The behavior described above can be reproduced by running the tests in a loop for a few hours.


teuthology forensics with git, shell and paddles

When a teuthology integration test for Ceph fails, the results are analyzed to find the source of the problem. For instance, the "upgrade suite: pool_create failed with error -4 EINTR" issue was reported early October 2015, with multiple integration job failures.
The first step is to look into the teuthology log, which revealed that pools could not be created:

failed: error rados_pool_create(test-rados-api-vpm049-15238-1) \
  failed with error -4"

The 4 stands for EINTR. The paddles database is used by teuthology to store test results and can be queried via HTTP. For instance:

curl --silent http://paddles.front.sepia.ceph.com/runs/ |
  jq '.[] |
      select(.name | contains("upgrade:firefly-hammer-x")) |
      select(.branch == "infernalis") |
      select(.status | contains("finished")) |
      .name' |
  while read run ; do eval run=$run ;
    curl --silent http://paddles.front.sepia.ceph.com/runs/$run/jobs/ |
      jq '.[] | "http://paddles.front.sepia.ceph.com/runs/\(.name)/jobs/\(.job_id)/"' ;
  done |
  while read url ; do eval url=$url ;
    curl --silent $url |
      jq 'if((.description != null) and
             (.description | contains("parallel")) and
             (.success == true)) then "'$url'" else null end' ;
  done | grep -v null

lists the successful jobs from the upgrade:firefly-hammer-x runs against the infernalis branch (the first jq expression) that involved a parallel test (parallel being the name of a subdirectory of the suite). This was not sufficient to figure out the root cause of the problem because:

  • it only provides access to the last 100 runs
  • it does not allow grepping the teuthology log files for a string

With the teuthology logs in the /a directory (it’s actually a 100TB CephFS mount, half full), the following shell snippet can be used to find the upgrade tests that failed with the error -4 message in their logs.

for run in *2015-{07,08,09,10}*upgrade* ; do for job in $run/* ; do \
  test -d $job || continue ; \
  config=$job/config.yaml ;   test -f $config || continue ; \
  summary=$job/summary.yaml ; test -f $summary || continue ; \
  if shyaml get-value branch < $config | grep -q hammer && \
     shyaml get-value success < $summary | grep -qi false && \
     grep -q 'error -4' $job/teuthology.log  ; then
       echo $job ;
   fi ; \
done ; done

It looks for all upgrade runs back to July 2015. shyaml is used to query the branch from the job configuration and only keep the jobs targeting hammer. If a job failed (according to the success value found in the summary file), the error is looked up in its teuthology.log file. The first failed job found dates back to early September.
The failure happened on a regular basis after that date but was only reported early October. The commits merged into the hammer branch around that date can be displayed with:

git log --merges --since 2015-09-01 --until 2015-09-11 --format='%H' ceph/hammer | \
while read sha1 ; do \
  echo ; git log --format='** %aD "%s":https://github.com/ceph/ceph/commit/%H' ${sha1}^1..${sha1} ; \
done | perl -p -e 'print "* \"PR $1\":https://github.com/ceph/ceph/pull/$1\n" if(/Merge pull request #(\d+)/)'

The output can be copy-pasted into a Redmine issue. It turns out that a pull request merged September 6th was responsible for the failure.


On demand Ceph packages for teuthology

When a teuthology job installs Ceph, it uses packages created by gitbuilder. These packages are built every time a branch is pushed to the official repository.

Contributors who do not have write access to the official repository can either ask a developer with access to push a branch for them, or set up a gitbuilder repository using autobuild-ceph. Asking a developer is inconvenient because it takes time, and because it creates packages for every supported operating system even when only one of them would be enough. In addition there often is a long wait queue, because the gitbuilder of the sepia lab is very busy. Setting up a gitbuilder repository reduces the wait time, but it has proven too time- and resource-consuming for most contributors.

The buildpackages task can be used to resolve that problem and create the packages required for a particular job on demand. When added to a job that has an install task, it will:

  • always run before the install task regardless of its position in the list of tasks (see the buildpackages_prep function in the teuthology internal tasks for more information).
  • create an http server, unless it already exists
  • set gitbuilder_host in ~/.teuthology.yaml to the http server
  • find the SHA1 of the commit that the install task needs
  • check out the ceph repository at that SHA1 and build the packages on a dedicated server
  • upload the packages to the http server, using directory names that mimic the gitbuilder conventions used in the lab, and destroy the server used to build them

When the install task looks for packages, it uses the http server populated by the buildpackages task. The teuthology cluster keeps track of which packages were built for which architecture (via makefile timestamp files). When another job needs the same packages, the buildpackages task will notice they already have been built and uploaded to the http server and do nothing.
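
For instance, a minimal sketch of a job combining both tasks might look as follows (the branch and the final exec step are illustrative, and the buildpackages task is assumed to need no options):

tasks:
- buildpackages:
- install:
    branch: hammer
- exec:
    client.0:
      - ceph --version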

A test suite verifies the buildpackages task works as expected and can be run with:

teuthology-openstack --verbose \
   --key-name myself --key-filename ~/Downloads/myself \
   --ceph-git-url http://workbench.dachary.org/ceph/ceph.git \
   --ceph hammer --suite teuthology/buildpackages

The --ceph-git-url option designates the repository from which the branch specified with --ceph is cloned. It defaults to http://github.com/ceph/ceph, which would require write access to the official Ceph repository to push the branch to be tested.


Gitlab CI runner installation

The instructions to install the GitLab CI runner are adapted to Ubuntu 14.04 below, to connect the runner to GitLab CI and run jobs when a commit is pushed to a branch.

A runner token must be obtained from GitLab CI, at the http://cong.dachary.org:8080/projects/1/runners URL for instance.

The gitlab-ci-multi-runner package is installed as follows:

$ curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash
$ sudo apt-get install gitlab-ci-multi-runner
$ sudo gitlab-ci-multi-runner register
Please enter the gitlab-ci coordinator URL (e.g. http://gitlab-ci.org:3000/):


Please enter the gitlab-ci token for this runner:
Please enter the gitlab-ci description for this runner:
[cong]: runner1
INFO[0156] 4418775e Registering runner... succeeded
Please enter the executor: shell, parallels, docker, docker-ssh, ssh:
[shell]: docker
Please enter the Docker image (eg. ruby:2.1):
If you want to enable mysql please enter version (X.Y) or enter latest?

If you want to enable postgres please enter version (X.Y) or enter latest?

If you want to enable redis please enter version (X.Y) or enter latest?

If you want to enable mongo please enter version (X.Y) or enter latest?

INFO[0281] Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

It is configured to run each job in a golang docker container. The project git repository is expected to have a .gitlab-ci.yml file at the root. For instance if .gitlab-ci.yml was:

  script: "type go"

the GitLab runner would run the script in a golang container and succeed, because the go binary is available in that image.


faster debugging of a teuthology workunit

The Ceph integration tests run via teuthology rely on workunits found in the Ceph repository. For instance:

  • the cephtool/test.sh workunit is modified
  • it is pushed to a wip- branch in the official Ceph git repository
  • the gitbuilder will automatically build packages for all supported distributions for this wip- branch
  • the rados/singleton/all/cephtool suite can be run with teuthology-suite --suite rados/singleton
  • the workunit task fetches the workunits directory from the Ceph git repository and runs it

There is no need for Ceph to be packaged each time the workunit script is modified. Instead it can be fetched from a pull request:

  • the cephtool/test.sh workunit is modified
  • the pull request number 2043 is created or updated with the modified workunit
  • the workunit.yaml file is created with
          branch: refs/pull/2043/head
  • the rados/singleton/all/cephtool suite can be run with teuthology-suite --suite rados/singleton $(pwd)/workunit.yaml
  • the workunit task fetches the workunits directory from the branch refs/pull/2043/head of the Ceph git repository and runs it

For each pull request, GitHub implicitly creates a reference in the target git repository. This reference is mirrored to git.ceph.com, where the workunit task can fetch it. The teuthology-suite command accepts yaml files as arguments, and they are assumed to be relative to the root of a clone of the ceph-qa-suite repository. By providing an absolute path ($(pwd)/workunit.yaml) the file is read from the current directory instead, and there is no need to commit it to the ceph-qa-suite repository.
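
These references can be inspected and fetched with stock git commands, for instance:

# list the references github created for pull request 2043
git ls-remote https://github.com/ceph/ceph.git 'refs/pull/2043/*'
# fetch the head of pull request 2043 into a local branch
git fetch https://github.com/ceph/ceph.git refs/pull/2043/head:pr-2043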


write-only ssh based rsync server

A write-only rsync server can be used by anyone to upload content, with no risk of deleting existing files. Assuming access to the rsync server is handled via ssh, the following line can be added to the ~/.ssh/authorized_keys file:

command="rrsync /usr/share/nginx/html" ssh-rsa AAAAB3NzaC1y...

The rrsync script is found in the rsync package documentation and installed with:

gzip -d < /usr/share/doc/rsync/scripts/rrsync.gz > /usr/bin/rrsync
chmod +x /usr/bin/rrsync
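
With that in place, the holder of the matching key can upload with a plain rsync invocation (hypothetical host and directory; the destination path is interpreted relative to /usr/share/nginx/html):

rsync -av public/ upload@example.org:mirror/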

Scaling out the Ceph community lab

Ceph integration tests are vital and expensive. Contrary to unit tests that can be run on a laptop, they require multiple machines to deploy an actual Ceph cluster. As the community of Ceph developers expands, the community lab needs to expand.

The current development workflow and its challenges

When a developer contributes to Ceph, it goes like this:

  • The Developer submits a pull request
  • After the Reviewer is satisfied with the pull request, it is scheduled for integration testing (by adding the needs-qa label)
  • A Tester merges the pull request into an integration branch, together with other pull requests labeled needs-qa, and sets a label recording who did so (for instance, if Kefu Chai did it, he would set the wip-kefu-testing label)
  • The Tester waits for the packages to be built for the integration branch
  • The Tester schedules a suite of integration tests in the community lab
  • When the suite finishes, the Tester analyzes the integration test results and finds the pull request responsible for each failure (which can be challenging when there are more than a handful of pull requests in the integration branch)
  • For each failure the Tester adds a comment to the faulty pull request with a link to the integration test logs, kindly asking the developer to address the issue
  • When the integration tests are clean, the Tester merges the pull requests

As the number of contributors to Ceph increases, running the integration tests and analyzing their results becomes the bottleneck, because:

  • getting the integration tests results usually takes a few days
  • only people with access to the community lab can run integration tests
  • analyzing test results is time consuming

Increasing the number of machines in the community lab would run integration tests faster. But acquiring, hosting and monitoring hardware not only takes months, it also requires significant system administration work. The community of Ceph developers is growing faster than the community lab can grow. And to make things even more complicated, as Ceph evolves the number of integration tests increases and requires even more resources.

When a developer frequently contributes to Ceph, (s)he is granted access to the VPN that allows her/him to schedule integration tests. For instance Abhishek Lekshmanan and Nathan Cutler, who routinely run and analyze integration tests for backports, now have access to the community lab and can do that on their own. But the process to get access to the VPN takes weeks, and the learning curve to use it properly is significant.

Although it is mostly invisible to the community lab user, the system administration workload to keep it running is significant. Dan Mick, Zack Cerza and others fix problems on a daily basis. As the size of the community lab grows, this workload increases and requires skills that are difficult to acquire.

Simplifying the workflow with public OpenStack clouds

As of July 2015, it became possible to run integration tests on public OpenStack clouds. More importantly, it takes less than one hour for a new developer to register and schedule an integration test. This new facility can be leveraged to simplify the workflow as follows:

  • The Developer submits a pull request
  • The Developer is required to attach a successful run of integration tests demonstrating the feature or the bug fix
  • After the Reviewer is satisfied with the pull request, it is merged.

There is no need for a Tester because the Developer now has the ability to run integration tests and interpret the results.

The interpretation of the test results is simpler because there is only one pull request for a run. The Developer can compare her/his run to a recent run from the community lab to verify that the unmodified code passes. (S)He can also debug a failed test in interactive mode.

Contrary to the community lab, the test cluster has a short life span and requires no system administration skills. It is created in the cloud, on demand, and can be destroyed as soon as the results have been analyzed.

The learning curve to schedule and interpret integration tests is reduced. The Developer needs to know about the teuthology-openstack command and how to interpret a test failure. But (s)he does not need the other teuthology-* commands nor does (s)he have to get access to the VPN of the community lab.


Sorting Ceph backport branches

When there are many backports in flight, they are more likely to overlap and conflict with each other. When a conflict can be trivially resolved because it comes from the context of a hunk, it is often enough to just swap the order of the two commits to avoid the conflict entirely. For instance, let's say a commit on

void foo() { }
void bar() {}

adds an argument to the foo function:

void foo(int a) { }
void bar() {}

and the second commit adds an argument to the bar function:

void foo(int a) { }
void bar(bool b) {}

If the second commit is backported before the first, it will conflict, because the context of its hunk expects the foo function to already have an argument while the target branch still has foo without one.

When there are dozens of backport branches, they can be sorted so that the first to merge is the one that cherry-picks the oldest ancestor in the master branch. In other words, given the example above, a cherry-pick of the first commit must be merged before the second commit, because it is older in the commit history.

Sorting the branches also gracefully handles interdependent backports. For instance, let's say a first branch contains a few backported commits and a second branch contains a backported commit that can't be applied unless the first branch is merged. Since each Ceph branch proposed for backports is required to pass make check, the most commonly used strategy is to include all the commits from the first branch in the second branch. This second branch is not intended to be merged and its title is usually prefixed with DNM (Do Not Merge). When the first branch is merged, the second is rebased against the target and the redundant commits disappear from the second branch.

Here is a three-step shell script that implements the sorting:

# Make a file with the hash of all commits found in master
# but discard those that already are in the hammer release.
git log --no-merges \
  --pretty='%H' ceph/hammer..ceph/master \
  > /tmp/master-commits
# Match each pull request with the commit from which it was
# cherry-picked. Just use the first commit: we expect the other to be
# immediate ancestors. If that's not the case we don't know how to
# use that information so we just ignore it.
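# PRS is assumed to be set beforehand to the space separated list of
# pull request numbers to sort, e.g. PRS="123 456" (hypothetical numbers).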
for pr in $PRS ; do
  git log -1 --pretty=%b ceph/pull/$pr/merge^1..ceph/pull/$pr/merge^2 | \
   perl -ne 'print "$1 '$pr'\n" if(/cherry picked from commit (\w+)/)'
done > /tmp/pr-and-first-commit
# For each pull request, grep the cherry-picked commit and display its
# line number. Sort the result in reverse order to get the pull
# request sorted in the same way the cherry-picked commits are found
# in the master history.
SORTED_PRS=$(while read commit pr ; do
  grep --line-number $commit < /tmp/master-commits | \
  sed -e "s/\$/ $pr/" ; done  < /tmp/pr-and-first-commit | \
  sort -rn | \
  perl -p -e 's/.* (.*)\n/$1 /')

Ceph integration tests made simple with OpenStack

If an OpenStack tenant (account, in the OpenStack parlance) is available, the Ceph integration tests can be run with the teuthology-openstack command, which will create the necessary virtual machines automatically (see the detailed instructions to get started). To do its work, it uses the teuthology OpenStack backend behind the scenes, so the user does not need to know about it.
The teuthology-openstack command has the same options as teuthology-suite and can be run as follows:

$ teuthology-openstack \
  --simultaneous-jobs 70 --key-name myself \
  --subset 10/18 --suite rados \
  --suite-branch next --ceph next
Scheduling rados/thrash/{0-size-min-size-overrides/...
Suite rados in suites/rados scheduled 248 jobs.

web interface:
ssh access   : ssh ubuntu@ # logs in /usr/share/nginx/html

As the suite progresses, its status can be monitored by visiting the web interface, and the horizon OpenStack dashboard shows the resource usage for the run.



HOWTO setup a postgresql server on Ubuntu 14.04

In the context of teuthology (the integration test framework for Ceph), there needs to be a PostgreSQL server available, locally only, with a single user dedicated to teuthology. It can be set up on a fresh Ubuntu 14.04 install with:

    sudo apt-get -qq install -y postgresql postgresql-contrib

    if ! sudo /etc/init.d/postgresql status ; then
        sudo mkdir -p /etc/postgresql
        sudo chown postgres /etc/postgresql
        sudo -u postgres pg_createcluster 9.3 paddles
        sudo /etc/init.d/postgresql start
    fi
    if ! psql --command 'select 1' \
          'postgresql://paddles:paddles@localhost/paddles' > /dev/null ; then
        sudo -u postgres psql \
            -c "CREATE USER paddles with PASSWORD 'paddles';"
        sudo -u postgres createdb -O paddles paddles
    fi
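
Once done, the setup can be verified by connecting with the same credentials:

    psql --command 'select 1' 'postgresql://paddles:paddles@localhost/paddles'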

If anyone knows of a simpler way to do the same thing, I’d be very interested to know about it.
