CloudServices/Sagrada/Metlog

Overview

The Metrics project is part of Project Sagrada, providing a service that applications can use to capture and inject arbitrary data into back-end storage suitable for out-of-band analytics and processing.

Project

Engineers

  • Rob Miller
  • Victor Ng

User Requirements

The first version of the Metrics system will focus on providing an easy mechanism for the Sync and BrowserID projects (and any other internal Mozilla services) to efficiently send profiling data and any other arbitrary metrics information into one or more back-end storage locations. Once the data has reached its final destination, those with appropriate access should be able to run analytics queries and generate reports on the accumulated data.

Requirements:

  • Services apps should be provided an easy-to-use API that will allow them to send arbitrary text data into the metrics and reporting infrastructure.
  • Processing and I/O load generated by the API calls made by the services apps must be extremely small, so that the impact on app performance is minimal even when a very high volume of messages is being passed.
  • API should provide a mechanism for arbitrary metadata to be attached to every message payload.
  • Overall system should provide a sensible set of message categories so that commonly generated types of messages can be labeled as such, and so that the processing and reporting functionality can easily distinguish between the various types of message payloads.
  • Message taxonomy must be easily extendable to support message types that are not defined up front.
  • Message processing system must be able to distinguish between different message types, so the various types can be routed to the appropriate back end(s) for effective analysis and reporting.
  • Service app owners must have access to an interface (or interfaces) that will provide reporting and querying capabilities appropriate to the various types of messages that have been sent into the system.

Proposed Architecture

The proposed Services Metrics architecture will consist of 3 layers:

generator
The generator portion of the system is the actual service application that is generating the data that is to be sent into the system. We will provide libraries (described below) that app authors can use to easily plug in. The libraries will take messages generated by the applications, serialize them, and then send them out (using ZeroMQ as the transport, by default). The metrics generating apps that need to be supported initially are based on the following platforms:
  • Mozilla Services team's Python app framework (sync, reg, sreg, message queue, etc.)
  • Node.js (BrowserID)
router
The router is what will be listening for the messages sent out by the provided libraries. It will deserialize these messages and examine the metadata to determine the appropriate back end(s) to which each message should be delivered. The format and protocol for delivering these messages to the endpoints will vary from back end to back end. We plan on using logstash as the message router, because it is already slated to be installed on every services server machine and it is built specifically for this type of event-based message routing (a sketch of the routing logic follows the endpoint list below).
endpoints
Different types of messages lend themselves to different types of presentation, processing, and analytics. We will start with a small selection of back end destinations, but we will be able to add to this over time as we generate more types of metrics data and we spin up more presentation and query layers. Proposed back ends are as follows:
  • statsd: (Phase 1) statsd is already in the pipeline to be running on every Services machine.
  • Bagheera: (Phase 1) Bagheera is a REST service provided by the Mozilla Metrics team that will insert data into the Metrics team's Hadoop infrastructure, where it is available for later processing.
  • Sentry: (Phase 1) Sentry is an exception logging infrastructure that provides useful debugging tools to service app developers. Sentry is not yet planned to be provided by any Mozilla operations team; using it would require buy-in from, and coordination with, a Mozilla internal service provider (probably the Services Ops team).
  • Esper: (Phase 2) A system for "complex event processing", i.e. one that watches various statistic streams in real time, looking for anomalous behavior.
  • ArcSight ESM: (Phase 2) A security risk analysis engine.
  • OpenTSDB: (Phase 2) A "Time Series Database" providing fine-grained real-time monitoring and graphing.
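
In production this routing will be handled by logstash; purely to illustrate the routing logic described above, a minimal Python sketch (assuming pyzmq and JSON-serialized messages, with hypothetical endpoint handlers) might look like this:

    import json
    import zmq

    # Hypothetical endpoint handlers; in production, logstash outputs fill this role.
    def send_to_statsd(msg): print("statsd:", msg["type"])
    def send_to_sentry(msg): print("sentry:", msg["type"])
    def send_to_bagheera(msg): print("bagheera:", msg["type"])

    ROUTES = {
        "timer": send_to_statsd,
        "counter": send_to_statsd,
        "exception": send_to_sentry,
    }

    def run_router(bind="tcp://*:5565"):
        context = zmq.Context()
        socket = context.socket(zmq.PULL)   # pairs with a PUSH socket on the client side
        socket.bind(bind)
        while True:
            envelope = json.loads(socket.recv())                      # deserialize the message
            handler = ROUTES.get(envelope["type"], send_to_bagheera)  # unrecognized types go to Hadoop
            handler(envelope)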

Proposed API

The atomic unit for the Services Metrics system is the "message". The structure of a message is inspired by that of the well-known syslog message standard, with some slight extensions to allow for richer metadata. Each message will consist of the following fields (an example envelope follows the field list):

  • timestamp: Time at which the message is generated.
  • logger: String token identifying the message generator, such as the name of the service application in question.
  • type: String token identifying the type of message payload.
  • severity: Numerical code from 0-7 indicating the severity of the message, as defined by RFC 5424.
  • payload: Actual message contents.
  • tags: Arbitrary set of key/value pairs that includes any additional data that may be useful for back end reporting or analysis.
  • env_version: API version number of the "message envelope", i.e. any changes to the message data structure (exclusive of message-type-specific changes that may be embedded within the tags or the payload) must increment the env_version value. The structure described in this document is envelope version 0.8.
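
For illustration, a message (with hypothetical values) might be represented as the following Python dictionary before serialization:

    message = {
        "timestamp": "2011-10-17T20:34:00Z",       # time of generation
        "logger": "sync",                          # generating application
        "type": "timer",                           # message payload type
        "severity": 6,                             # RFC 5424 "Informational"
        "payload": "127",                          # e.g. elapsed milliseconds for a timer
        "tags": {"name": "auth_time", "rate": 1},  # arbitrary additional metadata
        "env_version": "0.8",                      # message envelope version
    }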

We will provide a "metlog" library that will both ease generation of these messages and handle packaging them up and delivering them into the message processing infrastructure. Implementations of this library will likely be available in both Python and JavaScript, but the Python library will be available first, and this document will, for now, only describe the Python API. The JavaScript API will be similar, modulo syntactic sugar that is available in Python but not in JS (e.g. decorators, context managers), and will be documented in detail in the future. The proposed Python API is as follows:

MetlogClient(bindstrs, logger="", severity=6)
Primary metlog client class, which accepts metlog messages and delivers them to the message processor.
  • bindstrs: A string (or a sequence of strings) representing the location of the upstream message processor. By default these should be ZeroMQ bind strings.
  • logger: Default for all subsequent metlog calls which do not explicitly pass this value.
  • severity: Default for all subsequent metlog calls which do not explicitly pass this value.
MetlogClient.metlog(type, timestamp=None, logger=None, severity=None, message="", tags=None)
Sends a single log message along to the metlog listener(s). Most of the arguments correspond to the message fields described above. Only type is strictly required; the rest will be populated with reasonable defaults if they aren't provided (an example call follows the argument list):
  • timestamp: Defaults to current system time
  • logger: Defaults to the current value of MetlogClient.logger
  • severity: Defaults to the current value of MetlogClient.severity
  • message: Defaults to an empty string
  • tags: Defaults to an empty dictionary
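For example (the import path and ZeroMQ bind string below are assumptions for illustration only):

    from metlog.client import MetlogClient   # assumed import path

    client = MetlogClient("tcp://127.0.0.1:5565", logger="sync")
    client.metlog("oauth_failure",
                  message="token validation failed",
                  tags={"user_agent": "Firefox/7.0"})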
MetlogClient.timer(name, timestamp=None, logger=None, severity=None, tags=None, rate=1)
Can be used as either a context manager or a decorator. Will calculate the time required to execute the enclosed code, and will generate and send a metlog message (of type "timer") containing the timing information upon completion.
  • name: A required string label for the timer that will be added to the message tags
  • timestamp: Defaults to current system time
  • logger: Defaults to the current value of MetlogClient.logger
  • severity: Defaults to the current value of MetlogClient.severity
  • tags: Defaults to an empty dictionary
  • rate: Represents what fraction of these invocations should actually be timed; a value of 0.3 would mean that the code is timed and the results sent off for approximately 30% of executions (see the example below)
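For example, the timer can be used either as a decorator or as a context manager (the function and variable names below are hypothetical):

    # As a decorator: time every call to process_request, sampling ~10% of invocations.
    @client.timer("process_request", rate=0.1)
    def process_request(request):
        ...

    # As a context manager: time only the enclosed block.
    with client.timer("db_write"):
        store.save(record)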
MetlogClient.incr(name, timestamp=None, logger=None, severity=None, tags=None)
Sends an "increment counter" message to metlog. name is a required string label for the counter that will be added to the message metadata.
  • name: A required string label for the counter that will be added to the message tags
  • timestamp: Defaults to current system time
  • logger: Defaults to the current value of MetlogClient.logger
  • severity: Defaults to the current value of MetlogClient.severity
  • tags: Defaults to an empty dictionary
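For example:

    # Each call sends an "increment counter" message for the "logins" counter.
    client.incr("logins", tags={"app": "sync"})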

Use Cases

Python App Framework performance metrics

The Python framework that underlies the Services Apps will be annotated with timer calls to automatically generate performance metrics for such key activities as authentication and execution of the actual view callable. The sample rate of these calls will be configurable in the app configuration, where a value of 0 can be entered to turn off the timers altogether. These metrics will ultimately feed into a statsd / graphite back end provided by Services Ops, where app owners will be able to see graphs of the captured data.
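
A minimal sketch of how the framework glue might look (the function and configuration names are hypothetical, not part of any existing framework API):

    def call_view(client, config, view_callable, request):
        # Hypothetical framework glue: wrap the view callable in a metlog timer.
        rate = float(config.get("metlog.timer_rate", 1.0))   # 0 disables timing entirely
        if rate > 0:
            view_callable = client.timer("view_execution", rate=rate)(view_callable)
        return view_callable(request)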

Python App Framework exception logging

In addition to timing information, the Python framework for services apps can automatically capture exceptions, sending a full traceback and some amount of local variable information as part of the message payload. This can ultimately be delivered to a Sentry installation for developer introspection and debugging.
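
A minimal sketch of such a wrapper, assuming the MetlogClient API described above (the wrapper itself is hypothetical):

    import traceback

    def call_view_logged(client, view_callable, request):
        # Hypothetical wrapper: report uncaught view exceptions via metlog, then re-raise.
        try:
            return view_callable(request)
        except Exception:
            client.metlog("exception",
                          severity=3,                       # RFC 5424 "Error"
                          message=traceback.format_exc(),   # full traceback as the payload
                          tags={"path": getattr(request, "path", "")})
            raise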

Ad-Hoc service app metrics gathering

Any service app will have the ability to easily generate arbitrary message data and metadata for delivery into the services metrics system. Any messages not specifically recognized as being intended for statsd or Sentry will be delivered to a Hadoop cluster provided by the Metrics team, allowing for later analysis via custom map-reduce jobs or Hive queries.
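
For example, an app might send an ad-hoc message with a custom type (the type, field names, and values below are hypothetical):

    import json

    # The "sync_collection_stats" type is not recognized as statsd- or Sentry-bound,
    # so the router would deliver it to Hadoop via Bagheera for later analysis.
    client.metlog("sync_collection_stats",
                  message=json.dumps({"collection": "bookmarks", "items": 1250}),
                  tags={"node": "sync1.example.com"})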