ReleaseEngineering/Applications/BuildbotBridge

== Buildbot Bridge (BBB) ==
The Buildbot Bridge is a set of services that allow us to schedule jobs in Taskcluster but run them in Buildbot. The Taskcluster Listener listens for pending Taskcluster Tasks and creates BuildRequests for them. The Buildbot Listener listens for Buildbot events for those jobs, and updates the Taskcluster Tasks accordingly. The Reflector reclaims running Taskcluster Tasks and polls for changes that can't be detected by listening for Taskcluster or Buildbot events.
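To make the Reflector's role concrete, here is a minimal sketch of one polling pass. The function and field names (<code>reflect_once</code>, <code>buildbot_state</code>) are hypothetical, not the bridge's actual API; the point is only the shape of the loop: keep claims alive, and resolve state changes that never produce a Pulse event.

```python
def reflect_once(active_tasks, reclaim, cancel):
    """Hypothetical single pass of a Reflector-style loop: reclaim every
    active task so its Taskcluster claim does not expire, and cancel any
    task whose BuildRequest was cancelled on the Buildbot side (a change
    the event listeners never see)."""
    for task in active_tasks:
        reclaim(task)  # keep the Taskcluster claim alive
        if task.get("buildbot_state") == "cancelled":
            cancel(task)  # resolve a change invisible to the listeners
```

In the real service this runs on a fixed interval (see the <code>reflector.interval</code> setting in the config example below).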


The source code is located at https://github.com/mozilla/buildbot-bridge
== Interactions with other systems ==
The Bridge interacts with and has credentials for many different systems:
* [https://tools.taskcluster.net/ Taskcluster] - To claim and resolve Tasks
* Buildbot Scheduler DB - To create BuildRequests
* Buildbot Bridge Database - To track ongoing jobs
* [https://pulse.mozilla.org Pulse] - To subscribe to Taskcluster and Buildbot exchanges
* [https://secure.pub.build.mozilla.org/buildapi/self-serve Self Serve] - To cancel Builds and BuildRequests
== Development ==
When working on low- or medium-risk patches to the Buildbot Bridge it's easiest to write unit tests, get review, and then use the supported development environment for testing. For more major changes, it's better to set up your own instances of each component, pointed at your own Buildbot master. Details on each of these are below:
=== Official Development Instance ===
Working with the official development environment is just like working with production. It is managed by Puppet, and must be deployed through it. Once your new code is deployed to it (see below for how to do that), you can create Tasks for builders on the "alder" project branch and dev will pay attention to them.
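A task aimed at the bridge might look something like the sketch below. The <code>provisionerId</code>/<code>workerType</code> values and the <code>payload</code> fields are assumptions drawn from the config example later on this page, not a confirmed schema; substitute the values the dev instance actually uses.

```python
# Hypothetical task definition for the dev bridge. Field values are
# illustrative assumptions, not the bridge's documented schema.
task = {
    "provisionerId": "buildbot-bridge",   # assumption: dev bridge's provisioner id
    "workerType": "buildbot-bridge",      # assumption: dev bridge's worker type
    "payload": {
        # Assumption: the bridge maps the task to a Buildbot builder by name.
        "buildername": "Linux alder build",
    },
    "metadata": {
        "name": "BBB dev smoke test",
        "description": "Exercise the dev bridge against the alder branch",
        "owner": "example@mozilla.com",
        "source": "https://wiki.mozilla.org/ReleaseEngineering/Applications/BuildbotBridge",
    },
}
```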
=== DIY ===
You will need a few things for a fully functioning Buildbot Bridge setup:
* [https://tools.taskcluster.net/auth/ Taskcluster credentials]
* [[ReleaseEngineering/How_To/Setup_Personal_Development_Master | A Build and/or Test Buildbot master]]
** You must be running command_runner.py and pulse_publisher.py as well.
** If you need both, they must be pointed at the same database
* [https://pulse.mozilla.org/profile A Pulse account]
Once you have those, adjust the config to use them as well as your own provisioner id, worker group, and worker id. For example, here is the config that bhearsum uses:
<pre>
{
    "taskcluster_queue_config": {
        "credentials": {
            "clientId": "<redacted>",
            "accessToken": "<redacted>"
        }
    },
    "buildbot_scheduler_db": "sqlite:////builds/buildbot/bhearsum/build1/master/state.sqlite",
    "bbb_db": "sqlite:///bbb.db",
    "selfserve_url": "http://foo.com/junk",
    "pulse_user": "bhearsum-publisher",
    "pulse_password": "<redacted>",
    "pulse_queue_basename": "queue/bhearsum-publisher",
    "restricted_builders": [],
    "ignored_builders": [],
    "tclistener": {
        "pulse_exchange_basename": "exchange/taskcluster-queue/v1",
        "worker_type": "buildbot-bridge-bhearsum",
        "provisioner_id": "buildbot-bridge-bhearsum",
        "logfile": "tclistener.log"
    },
    "bblistener": {
        "pulse_exchange": "exchange/bhearsum-publisher/buildbot",
        "tc_worker_group": "buildbot-bridge-bhearsum",
        "tc_worker_id": "buildbot-bridge-bhearsum",
        "logfile": "bblistener.log"
    },
    "reflector": {
        "interval": 60,
        "logfile": "reflector.log"
    }
}
</pre>
Note that the selfserve_url above is junk. This is because there is no staging version of BuildAPI (as with most of the rest of our CI infrastructure), so you'll be unable to test changes that depend on it in dev unless you set up your own instance.
== Deployment ==
The Buildbot Bridge services run on multiple machines for redundancy and increased throughput. The installations are fully deployed and managed by Puppet. The running services are managed by supervisord. You can find them in "/builds/bbb" on the following Buildbot masters.
* Production:
** buildbot-master70.bb.releng.use1.mozilla.com [no longer used for bbb, was replaced with bm86]
** buildbot-master86.bb.releng.scl3.mozilla.com
** buildbot-master72.bb.releng.usw2.mozilla.com
** buildbot-master82.bb.releng.scl3.mozilla.com
* Dev:
** buildbot-master84.bb.releng.scl3.mozilla.com
=== How to update ===
To deploy new Buildbot Bridge code you must generate a new Python package and have Puppet deploy it. Once your code has been reviewed and landed, do the following to deploy it:
# Bump the version in setup.py
# Run "python setup.py sdist" to generate a new tarball.
# [[ReleaseEngineering/PuppetAgain/Data#.._add_a_data_file | Copy the tarball to the puppet server]]
# Update the dev or prod [https://github.com/mozilla/build-puppet/blob/master/manifests/moco-config.pp version in Puppet].
# Wait for Puppet to update the installations and restart the instances
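The steps above can be sketched as a shell session. The version number and file names are illustrative (the sketch operates on a stand-in file rather than the real setup.py), and steps 3–5 are manual:

```shell
# Sketch of the deploy steps above; versions and paths are illustrative.
printf 'version="1.0.0"\n' > setup.py.example   # stand-in for the real setup.py
sed -i 's/1\.0\.0/1.0.1/' setup.py.example      # 1. bump the version
grep '1\.0\.1' setup.py.example                 # confirm the bump
# 2. python setup.py sdist   -> produces dist/<package>-<version>.tar.gz
# 3. copy the tarball to the puppet server (see the data-file docs linked above)
# 4. bump the pinned version in moco-config.pp (dev or prod)
# 5. wait for the Puppet run to update the installs and restart the services
```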
=== How to restart the services ===
'''with ansible (much easier!)'''
pip install ansible and check out the releng ansible repo as described [[ReleaseEngineering/How_To/Use_Ansible_for_AdHoc_Updates#Set_it_up_locally|here]], then run:
  ansible-playbook -v -i bbb-inventory.ini supervisord-action.yml -e desired_state=restarted -l all
'''manually'''
Run the following as root (don't forget that the services run on multiple machines):
  for i in bblistener tclistener reflector; do supervisorctl restart buildbot_bridge_$i; done
=== What to expect in /builds/bbb ===
<pre>
[root@buildbot-master70.bb.releng.use1.mozilla.com bbb]# ls -tlr
total 2756
drwxr-xr-x 3 cltbld cltbld    4096 May  5 08:18 lib
lrwxrwxrwx 1 cltbld cltbld      15 May  5 08:18 lib64 -> /builds/bbb/lib
drwxr-xr-x 2 cltbld cltbld    4096 May  5 08:18 include
-rw------- 1 cltbld cltbld    1241 May 21 05:41 config.json
drwxr-xr-x 2 cltbld cltbld    4096 May 25 07:07 bin
-rw-r--r-- 1 cltbld cltbld 2372446 May 25 10:55 reflector.log
-rw-r--r-- 1 cltbld cltbld  359635 May 25 11:48 bblistener.log
-rw-r--r-- 1 cltbld cltbld   56047 May 26 07:12 tclistener.log
</pre>
As you can see, each service has its own log file. The supervisord logs sometimes have additional information in error cases, and can be found in /var/log/supervisord.

Latest revision as of 21:49, 19 November 2018