Balrog

From MozillaWiki

If you are looking for the general documentation that used to live here, it has been moved into the Balrog repository, and a built version of it is available on Read The Docs.

This page will continue to host information about Balrog that doesn't make sense to put into the repository, such as meeting notes and things related to our hosted versions of Balrog.

Balrog is the software that runs the server side component of the update system used by Firefox and other Mozilla products. It is the successor to AUS (Application Update Service), which did not scale to our current needs nor allow us to adapt to more recent business requirements. Balrog helps us ship updates faster and with much more flexibility than we’ve had in the past.

Infrastructure

Environments

We have a number of different Balrog environments with different purposes:

  • Production — deployed manually by CloudOps; manages and serves production updates
    • Admin: https://aus4-admin.mozilla.org (VPN required)
    • Public: https://aus5.mozilla.org and others (see the Client Domains page for details)
  • Stage — deployed when version tags (eg: v2.40) are created; a place to submit staging Releases and verify new Balrog code with automation
    • Admin: https://balrog-admin.stage.mozaws.net/ (VPN required)
    • Public: https://aus4.stage.mozaws.net/
  • Dev — deployed whenever new code is pushed to Balrog's master branch; used for manual verification of Balrog code changes in a deployed environment
    • Admin: https://balrog-admin.dev.mozaws.net (VPN required)
    • Public: https://aus5.dev.mozaws.net/

Support & Escalation

RelEng is the first point of contact for issues. To contact them, follow the standard RelEng escalation path.

If RelEng is unable to correct the issue, they may escalate to CloudOps.

If the issue may be visible to users, please make sure #moc is also notified. They can also assist with the notifications above.

Monitoring & Metrics

Metrics from RDS, EC2, and Nginx are available in the Datadog Dashboard.

We aggregate exceptions from both the public apps and admin app to CloudOps' Sentry instance.

ELB Logs

The production instance of Balrog publishes logs to two different S3 buckets:

  • The nginx access logs (that contain all of the update requests we receive) are published to balrog-us-west-2-elb-logs. These logs are very large, and you're unlikely to be able to download them for local querying. The best way to work with them is through Athena.
  • The rest of the logs are published to net-mozaws-prod-us-west-2-logging-balrog, in the "firehose/s3" directory. Within that there are subdirectories for different parts of Balrog:
    • balrog.admin.syslog.admin contains the admin wsgi app output.
    • balrog.admin.nginx.{access,error} contain the admin access & error logs from nginx. The access logs are generally a subset of the wsgi app output (which logs requests with a bit of extra detail).
    • balrog.admin.syslog.agent contains the agent app output.
    • balrog.admin.syslog.cron contains cronjob output (eg: the history cleanup and production database dump)
    • balrog.web.syslog.web contains the public wsgi app output. Note that this app does _not_ log requests, so this is largely warning/exception output. If you care about requests to the public app, use the nginx access logs (see above).
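Once a slice of the nginx access logs has been pulled down (or exported from an Athena query), quick ad-hoc analysis can be done with standard shell tools. The log lines below are a made-up sample in common log format; the real ELB/nginx field layout may differ, so adjust the field number accordingly.

```shell
# Create a small sample access log (illustrative data only).
cat > sample-access.log <<'EOF'
1.2.3.4 - - [01/Jan/2024:00:00:01 +0000] "GET /update/3/Firefox HTTP/1.1" 200 512
5.6.7.8 - - [01/Jan/2024:00:00:02 +0000] "GET /update/3/Firefox HTTP/1.1" 404 128
9.9.9.9 - - [01/Jan/2024:00:00:03 +0000] "GET /update/3/Firefox HTTP/1.1" 200 512
EOF

# Count requests per HTTP status code (field 9 in common log format).
awk '{print $9}' sample-access.log | sort | uniq -c | sort -rn
```

For the full production volume, this kind of aggregation is better done in Athena, since the raw logs are too large to download.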

Backups

Balrog uses the built-in RDS backups. The database is snapshotted nightly, and incremental backups are done throughout the day. If necessary, we have the ability to recover to within a 5 minute window. Database restoration is done by CloudOps, and they should be contacted immediately if needed.

Deploying Changes

Balrog's stage and production infrastructure is managed by the Cloud Operations team. This section describes how to go from a reviewed patch to deploying it in production. You should generally begin this process at least 24 hours before you want the new code live in production. This gives the new code a chance to bake in stage.

At a high level, the deployment process looks like this:

  • Verify the new code in dev
  • Bake the new code in stage
  • Deploy to prod

Each part of this process is described in more detail below.

Is now a good time?

Before you deploy, consider whether or not it's an appropriate time to do so. Some factors to consider:

  • Are we in the middle of an important release such as a chemspill? If so, it's probably not a good time to deploy.
  • Is it Friday? You probably don't want to deploy on a Friday except in extreme circumstances.
  • Do you have enough time to safely do a push? Most pushes take at most 60 minutes to complete once the production push has begun.

Schema Upgrades

If you need to do a schema change you must ensure that either the current production code can run with your schema change applied, or that your new code can run with the old schema. Code and schema changes cannot be done at the same instant, so you must be able to support one of these scenarios. Generally, additive changes (column or table additions) should do the schema change first, while destructive changes (column or table deletions) should do the schema change second. You can simulate the upgrade with your local Docker containers to verify which is right for you.

When you file the deployment bug (see below), include a note about the schema change in it. Something like:

This push requires a schema change that needs to be done _prior_ to the new code going out. That can be performed by running the Docker image with the "upgrade-db" command, with DBURI set.

bug 1295678 is an example of a push with a schema change.
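As a rough sketch, the "upgrade-db" invocation from the bug note above might look like the following. The image name, tag, and DBURI here are placeholders, not the real production values, which come from the deployment environment managed by CloudOps.

```shell
# Placeholder database URI -- the real one is environment-specific.
DBURI="mysql://user:pass@db.example.com:3306/balrog"

# Run the schema upgrade against that database. Image name and tag
# are hypothetical; substitute the version being deployed.
docker run --rm -e DBURI="$DBURI" mozilla/balrog:v2.41 upgrade-db
```

The same command can be pointed at a local Docker database first to simulate the upgrade, as described in the section above.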

Verification in dev

The dev environment automatically deploys new code from the master branch of the Balrog repository (including any necessary schema changes). Before beginning the deployment procedure, you should do some functional testing there. At the very least, you should do explicit testing of all the new code that would be included in the push. Eg: if you're changing the format of a blob, make sure that you can add a new blob of that type, and that the XML response looks correct.

If you have schema changes you must also ensure that the existing deployed code will work with the new schema. To do this, CloudOps will downgrade the dev apps. You should do some routine testing (make some changes to some objects, try some update requests) to ensure that everything works. If you have any issues you CANNOT proceed to production.

Baking in stage

To get the new code into stage, you must create a new Release on GitHub as follows:

  1. Tag the repository with a "vX.Y" tag. Eg: "git tag -s vX.Y && git push --tags"
  2. Diff against the previous release tag. Eg: "git diff v2.24 v2.25"
    • Look for anything unexpected, or any schema changes. If schema changes are present, see the above section for instructions on handling them.
  3. Create a new Release on GitHub. This creates new Docker images tagged with your version, and deploys them to stage. It may take upwards of 30 minutes for the deployment to happen.
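The tag-and-diff flow from steps 1 and 2 can be illustrated end-to-end in a throwaway repository, using hypothetical version numbers (v2.40 to v2.41). In the real Balrog checkout you would run only the tag, push, and diff commands against the actual release tags.

```shell
# Set up a scratch repository standing in for the Balrog checkout.
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q .
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "state at v2.40"
git tag v2.40
git -c user.name=dev -c user.email=dev@example.com commit -q --allow-empty -m "changes for v2.41"

# For the real release this would be: git tag -s v2.41 && git push --tags
git tag v2.41

# Review what the new tag will ship, and diff against the previous tag,
# looking for anything unexpected (especially schema changes).
git log --oneline v2.40..v2.41
git diff v2.40 v2.41
```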

Once the changes are deployed to stage, let them bake for at least 24 hours. You can do additional targeted testing here if you wish, or simply wait for nightlies/releases to prod things along. It's a good idea to watch Sentry for new exceptions that may show up, and Datadog for any notable changes in the shape of the traffic.

Pushing to production

Pushing live requires CloudOps. For non-urgent pushes, you should begin this procedure a few hours in advance to give CloudOps time to notice and respond. For urgent pushes, file the bug immediately and escalate if no action is taken quickly. Either way, you must follow this procedure to push:

  1. File a bug to have the new version pushed to production.
    • Wednesdays around 11am Pacific are usually the best time to push to production, because they are generally free of release events, nightlies, and cronjobs. Unless you have a specific need to deploy on a different day, you should request the prod push for that window.
    • You should link any bugs being deployed in the "Blocks" field.
    • Make sure you substitute the version number and choose the correct options from the bug template.
  2. Once the push has happened, verify that the code was pushed to production by checking the __version__ endpoints on the Admin and Public apps.
  3. Bump the in-repo version to the next available one to ensure the next push gets a new version.
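For step 2, the __version__ endpoints return Dockerflow-style JSON that can be checked directly (the admin app requires VPN), eg: curl -s https://aus5.mozilla.org/__version__ and curl -s https://aus4-admin.mozilla.org/__version__. The response below is a made-up sample with a hypothetical version number; after a push, the "version" field should match the tag you deployed.

```shell
# Illustrative sample of a __version__ response (not real output).
response='{"source": "https://github.com/mozilla-releng/balrog", "version": "2.41", "commit": "abcdef0"}'

# Pull out the "version" field to compare against the deployed tag.
echo "$response" | sed -n 's/.*"version": "\([^"]*\)".*/\1/p'
```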

Meeting Notes