Telemetry/Custom Analysis with Spark
This page is a work in progress.
Introduction
Spark is a data processing engine designed to be fast and easy to use. We have set up Jupyter notebooks that use Spark to analyze our Telemetry data. Jupyter notebooks can be easily shared and updated among colleagues, enabling richer analysis than SQL alone.
Spark clusters can be spun up on analysis.telemetry.mozilla.org, abbreviated as ATMO. The Spark Python API is called pyspark.
Setting Up a Spark Cluster On atmo
- Go to analysis.telemetry.mozilla.org
- Click “Launch an ad-hoc Spark cluster”.
- Enter some details:
- The “Cluster Name” field should be a short descriptive name, like “chromehangs analysis”.
- Set the number of workers for the cluster. Please use resources sparingly: a single worker is enough while writing and debugging your job.
- Upload your SSH public key.
- Click “Submit”.
- A cluster will be launched on AWS preconfigured with Spark, IPython and some handy data analysis libraries like pandas and matplotlib.
Once the cluster is ready, you can tunnel IPython through SSH by following the instructions on the dashboard, and running the ssh shell command. For example:
ssh -i ~/.ssh/id_rsa -L 8888:localhost:8888 hadoop@ec2-54-70-129-221.us-west-2.compute.amazonaws.com
Finally, you can launch IPython in Firefox by visiting http://localhost:8888.
The Python Jupyter Notebook
When you access http://localhost:8888, two example Jupyter notebooks are available to peruse. To create a new notebook, select New -> Python 2.
Starting out, we recommend looking through the "Telemetry Hello World" notebook. It gives a nice overview of Jupyter and analyzing telemetry data using pyspark and plotly.
Using Jupyter
Jupyter notebooks contain a series of cells. Each cell contains code or Markdown; to switch between the two, use the dropdown at the top. To run a cell, press Shift-Enter; this renders the Markdown or runs the code. To create a new cell, select Insert -> Insert Cell Below.
A cell can output text or plots. To display plots inline with the cell output, run the following command, usually just below your import statements:
%pylab inline
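For example, once inline plotting is enabled, a cell can render a plot directly beneath itself. The sketch below uses synthetic data purely for illustration, with explicit imports (in a notebook, `%pylab inline` already pulls most of these names into scope):

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic data standing in for a real Telemetry measure.
values = np.random.normal(loc=100, scale=15, size=1000)

# With inline plotting enabled, this figure appears directly
# below the cell that produced it.
fig, ax = plt.subplots()
ax.hist(values, bins=30)
ax.set_xlabel("simulated measurement")
ax.set_ylabel("count")
```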
MozTelemetry
stub - add info about python moztelemetry functions
Detailed documentation for the library can be found at the Python MozTelemetry Documentation.
FAQ
Please add more FAQs as questions are answered, by you or for you.
How can I load parquet datasets in a Jupyter notebook?
Use sqlContext.read.load, e.g.:
dataset = sqlContext.read.load("s3://the_bucket/the_prefix/the_version", "parquet")