FirefoxOS/New security model


Goals

  • Enable exposing "sensitive APIs" to 3rd party developers.
  • Use the same update and security model for gaia and for 3rd party content.
  • Don't require content which uses "sensitive APIs" to be installed. Users should be able to simply browse to it.
  • Don't have separate cookie jars for separate apps. At least for normal content which doesn't use "sensitive APIs".
  • Ensure that content which uses "sensitive APIs" always runs in a separate process. Enforce in the parent process that only these separate processes can trigger "sensitive APIs". I.e. hacking a child process should not permit access to more sensitive APIs.
  • Enable content which uses "sensitive APIs" to have normal http(s) URLs such that they can use OAuth providers like facebook.
  • Enable content which uses "sensitive APIs" to use service workers.

"Sensitive APIs" here means APIs that we have not figured out how to safely expose to normal web pages. About 5-10% of the content in our marketplace falls into this category, and none of the content on the rest of the web fall into this category. I.e. most content does not use sensitive APIs, and can and should remain as normal websites.


Implementation

Signing

We will require that all content which uses "sensitive APIs" is signed. For now only the Firefox Marketplace will be allowed to do the signing. Possibly this will be changed in the future, but that's likely more a policy change than a code change.

Signing is done by having the developer bundle the content into a package and submit it to the Mozilla Marketplace. The marketplace will review the app and then add a signature to the package. The developer can then download the signed package and upload it to the developer's own website.

Issue: Should we allow other forms of review than a manual review of each app? Can the marketplace "review a developer" and give that developer access to automatic signing?

Issue: Is there a reason to restrict signed packages to only be hosted on the marketplace?

Issue: Should we enable the marketplace to host signed packages for developers who don't want to run a web server?

The format used for the packaging will be the one defined in the W3C packaging spec draft. A header is added to the package to indicate that it's a signed package. The advantage of this packaging format, compared to zip, is that it's streamable.

The format used for the signature is still to be determined, but hopefully we can use the same file formats and file names as used today. However it's important that the signatures also cover the header data for each resource, as well as the header data for the package itself.
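
As a rough illustration of what it means for the signature to cover header data as well as file contents, the sketch below verifies a per-resource signature over the resource's header bytes followed by its body bytes, using a marketplace public key. The helper name, the choice of RSA-SHA256, and the overall shape are assumptions for illustration only; the actual signature format is still to be decided.

  import { createVerify } from "crypto";

  // Hypothetical sketch only: the real signature format is undecided. The key
  // property is that the signed data includes the per-resource header bytes
  // (and, for the package signature, the package header), not just the bodies.
  function verifyResourceSignature(
    headerBytes: Buffer,            // header data for this resource inside the package
    bodyBytes: Buffer,              // the resource contents
    signature: Buffer,              // signature entry taken from the package
    marketplacePublicKeyPem: string // public key of whoever signed the package
  ): boolean {
    const verifier = createVerify("RSA-SHA256");
    verifier.update(headerBytes);
    verifier.update(bodyBytes);
    return verifier.verify(marketplacePublicKeyPem, signature);
  }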

Issue: Decide on exact signature format


Verifying signatures

To load a webpage in a signed package, the user navigates to a URL like "https://website.com/RSSReader2000/package.pak!//index.html". The part before the "!//" is the URL to the package itself. The part after the "!//" is the resource path inside the package.
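
A minimal sketch of how such a URL could be split into its two parts (the helper name is hypothetical; it simply assumes "!//" does not appear elsewhere in the URL):

  // Split a signed-package URL into the package URL and the path inside it.
  function splitPackageUrl(url: string): { packageUrl: string; resourcePath: string } | null {
    const separator = "!//";
    const index = url.indexOf(separator);
    if (index === -1) {
      return null; // not a packaged-content URL
    }
    return {
      packageUrl: url.slice(0, index),                   // "https://website.com/RSSReader2000/package.pak"
      resourcePath: url.slice(index + separator.length), // "index.html"
    };
  }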

So loading signed content does not require an installation to happen. Simply navigating to a URL like the above is enough.

When the user navigates to such a page, Gecko will download the package from the webserver. Gecko will then see in the header of the package that the package is signed.

Before serving any resources from the package to the rest of Gecko, the network layer will first wait for the signatures to be loaded from the package. It will also verify that the resource that is currently being loaded is covered by, and matches, the signature.
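
A rough sketch of the ordering the network layer has to enforce; the types and helpers here are hypothetical stand-ins, and the real logic would live in Gecko's networking code:

  // Hypothetical types for illustration only.
  interface Resource { path: string; headers: Uint8Array; body: Uint8Array; }
  interface SignatureSet { covers(path: string): boolean; matches(resource: Resource): boolean; }
  interface PackageReader {
    readSignatures(): Promise<SignatureSet>;       // resolves once the signature entries are available
    readResource(path: string): Promise<Resource>; // headers + body of one resource
  }

  // Serve a resource from a signed package only after its signature has been
  // loaded and checked.
  async function serveFromSignedPackage(pkg: PackageReader, path: string): Promise<Resource> {
    const signatures = await pkg.readSignatures(); // 1. wait for the signatures
    const resource = await pkg.readResource(path);
    if (!signatures.covers(path) || !signatures.matches(resource)) {
      throw new Error(`Signature check failed for ${path}`); // 2. never serve unverified content
    }
    return resource; // 3. only now hand the data to the rest of Gecko
  }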

Issue: Should we require that the signature files live at the start of the package? That way we'd always have the signature available before the file contents covered by it.

We should likely cache which resources in the package we have already signature-checked, so that we don't have to recheck when a resource is loaded multiple times.

Another thing that needs to be done before any content is served by the network layer is to look in the manifest and populate the nsIPermissionManager database with any permissions enumerated in the manifest, after having checked, of course, that the manifest matches the signature.
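
For illustration, that step could look roughly like this; the manifest shape and the addPermission helper are hypothetical stand-ins for the real manifest format and the nsIPermissionManager calls:

  // Hypothetical manifest shape, loosely modeled on app manifests.
  interface SignedPackageManifest {
    permissions?: { [name: string]: { description?: string } };
  }

  // Stand-in for the real nsIPermissionManager API.
  declare function addPermission(origin: string, permission: string): void;

  // After the manifest's signature has been verified, and before any content
  // from the package is served, register its permissions for the package.
  function populatePermissions(packageOrigin: string, manifest: SignedPackageManifest): void {
    for (const name of Object.keys(manifest.permissions ?? {})) {
      addPermission(packageOrigin, name); // e.g. ("https://website.com/RSSReader2000/package.pak", "tcp-socket")
    }
  }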


CSP

We need to make sure that signed content can't load scripts from outside of the signed package, and that it can't use inline scripts.

The plan is to use the CSP code to accomplish this. We can mainly leverage existing code which enables applying a default CSP policy to certain content. We'll use this to apply a default CSP to all signed content similarly to how we currently apply a default CSP to all privileged apps.

We'll also need to extend it so that it can enforce that loads happen "from the same package", rather than just "from the same origin".
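
For illustration only, the default policy might look something along the lines of what privileged apps get today; the exact policy for signed content is not decided here, and standard CSP has no way of expressing "same package", which is why the extension above is needed:

  // Hypothetical default CSP for signed content, modeled on the current
  // privileged-app CSP; "'self'" would additionally have to mean "from the
  // same signed package", which standard CSP cannot express.
  const DEFAULT_SIGNED_CONTENT_CSP =
    "default-src *; script-src 'self'; object-src 'none'; style-src 'self' 'unsafe-inline'";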

Issue: Does CSP allow putting limits on where ServiceWorkers can be loaded from? We need to restrict ServiceWorker scopes, as well as script URLs, to be from inside the package.

We also can't allow signed content to be opened in an <iframe>, other than by pages from the same signed package. This is partially to prevent signed content from getting clickjacked. However, it's also because we want to always open signed content in a separate OS process, and currently Gecko does not support out-of-process plain <iframe>s.

Hopefully this is a restriction we can eventually relax, for example by allowing pages in a signed package to opt in to being iframe-able. But this will require out-of-process <iframe>s and so will have to wait.
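
A small sketch of the embedding rule described above; the helper is hypothetical, and in a real implementation this check would sit next to Gecko's existing frame-embedding checks:

  // Allow framing a document from a signed package only when the embedding
  // document comes from the very same signed package.
  function canEmbedInIframe(embedderPackageUrl: string | null, targetPackageUrl: string | null): boolean {
    if (targetPackageUrl === null) {
      return true; // target is not signed content: normal rules apply
    }
    return embedderPackageUrl === targetPackageUrl; // same signed package only
  }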


Process isolation

In order to ensure that only signed content can access the APIs that it has been signed for, we want to always use separate child processes to run such content.

This means that when a user navigates from an unsigned page to a signed page, we need to switch which process renders the page. Right now this can only be done by creating a new <iframe mozbrowser>.

However, only Gecko knows that a particular URL is signed. Gaia cannot simply look at a URL to know whether it will return signed content or not. And Gecko only knows that it's signed content once response data starts arriving.

Even if we add some way for Gecko to signal to the <iframe mozbrowser> embedder that a new <iframe mozbrowser> needs to be created, this will make going "back"/"forward" between the two very messy.
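
Purely for illustration, one shape such a signal could take on the embedder side; the event name is made up, and this sketch does nothing to solve the back/forward problem just mentioned:

  // Hypothetical: "mozbrowsersignedcontent" does not exist; it stands in for
  // whatever signal Gecko would send to the <iframe mozbrowser> embedder.
  const browserFrame = document.querySelector<HTMLIFrameElement>("iframe[mozbrowser]");
  if (browserFrame) {
    browserFrame.addEventListener("mozbrowsersignedcontent", (event: Event) => {
      const url = (event as CustomEvent<{ url: string }>).detail.url;
      // Create a fresh <iframe mozbrowser> so the signed content gets its own
      // child process, then swap it in for the old frame.
      const isolated = document.createElement("iframe");
      isolated.setAttribute("mozbrowser", "true");
      isolated.setAttribute("remote", "true");
      isolated.src = url;
      browserFrame.replaceWith(isolated);
    });
  }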

Issue: We need to figure out how to make navigation work. This will likely require very tricky <iframe mozbrowser> work.

We also need to look at the security checks that are currently done in the parent process. Many of them are heavily based on app IDs and installed apps, and may need to be changed.

Issue: We need to figure out if changes are needed to the security checks of sensitive APIs.


Installing and updating

Signed packages follow normal HTTP semantics. I.e. if the package still exists in our HTTP cache when the user revisits a signed page, but the cache headers indicate that the content needs to be revalidated, we do a normal GET request to see if a new version needs to be downloaded.

If a new version of the package is being sent, we follow the same behavior as when visiting a package for the first time. I.e. we need to reverify signatures as well as update any permissions in the nsIPermissionManager database.

However, we want to avoid having to download a whole package if just part of it has changed. In order to support that we hope to enable the server to respond to the GET request for an updated package with just a "diff" of what's changed between the previous and current version.

One possible way to do this would be to have Gecko indicate that it supports a new type of content encoding, as well as send the ETag of the current package file. The server can then look at the ETag and, if it has (or can generate) a diff between the client's version and the latest version, respond with that special content encoding as well as the package diff.

Gecko can then use the diff to patch the existing package.
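
For illustration, the exchange could look something like the following; the header value and encoding name are made up, and none of the details here are decided:

  // Hypothetical request/response for updating an already-cached package:
  //
  //   GET /RSSReader2000/package.pak HTTP/1.1
  //   Accept-Encoding: package-diff       <- made-up content encoding signalling diff support
  //   If-None-Match: "v41"                <- ETag of the package version we already have
  //
  //   HTTP/1.1 200 OK
  //   Content-Encoding: package-diff      <- body is a diff against "v41", not a full package
  //   ETag: "v42"
  //
  // Gecko-side sketch: apply the diff to the cached package if the server sent
  // one, otherwise treat the body as a complete new package.
  declare function applyPackageDiff(oldPackage: Uint8Array, diff: Uint8Array): Uint8Array;

  function updateCachedPackage(
    oldPackage: Uint8Array,
    body: Uint8Array,
    contentEncoding: string | null
  ): Uint8Array {
    if (contentEncoding === "package-diff") {
      return applyPackageDiff(oldPackage, body); // the server chose to send a diff
    }
    return body; // full package (old server, or no diff available for the client's version)
  }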

Issue: Decide on details of diff mechanism and format

Note that sending a diff is entirely the server's choice. If the server doesn't support this newly created diff mechanism, then it will simply serve a full package. Likewise if the user is on a very old version which the server doesn't have a diff for, the server can simply serve a full package.

In the case when a diff is received, it is probably fine not to support streaming of the package content. I.e. in that case we can wait for the full diff to be downloaded and applied before returning any data from the Gecko network layer.

We do in that case still need to verify signatures of the new package version.

Issue: Decide on how to verify signatures when a diff is downloaded. Also decide if we can verify signatures incrementally or not.

Installing a signed package mainly consists of pinning it in the HTTP cache such that it doesn't get evicted. We still need to check for updates according to the normal "app update" scheduling.


Service Workers

One of the central pieces of the new Gaia architecture is the use of service workers. This isn't just to support offline for Gaia apps, but also to support dynamic generation of page markup, and the ability to run logic in order to decide what resource to return for a given URL.

In order to make service workers work with the package update logic, we should couple package updates with ServiceWorker updates. When the ServiceWorker spec requires the browser to check for updates of, or download updates of, the ServiceWorker script, we instead update the full signed package.

This means that both when we do an "automatic" ServiceWorker update check, such as when the user visits a page which uses the ServiceWorker, and when the ServiceWorkerRegistration.update() function is called, we update the full package rather than just the ServiceWorker script.

Once a new package has been downloaded, we go through the normal ServiceWorker update cycle. I.e. Gecko fires both "install" and "activate" events on the ServiceWorker. This will happen any time a package is updated, even if the contents of the ServiceWorker script haven't changed.

Gecko still needs to serve the previous package content until the "activate" event for the new ServiceWorker version fires. I.e. until the new version has been activated, the old version of the package needs to be served for any network requests.
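
From the content's point of view nothing new is required; a hedged sketch of the pieces involved, assuming a ServiceWorker script at "sw.js" inside the package:

  // In a page inside the signed package: registering the ServiceWorker and
  // explicitly asking for an update, which under this proposal triggers an
  // update check for the whole signed package rather than just the script.
  async function registerAndUpdate(): Promise<void> {
    const registration = await navigator.serviceWorker.register("sw.js"); // script must live inside the package
    await registration.update(); // couples to a full package update check
  }

  // In sw.js: the normal install/activate cycle runs on every package update,
  // even if this script itself did not change.
  self.addEventListener("install", (event: any) => {
    event.waitUntil(Promise.resolve()); // e.g. pre-populate caches from the new package version
  });
  self.addEventListener("activate", (event: any) => {
    event.waitUntil(Promise.resolve()); // from here on, the new package version is served
  });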

Issue: How do we enable the newly installing ServiceWorker to load content from the new package version, even though the previous package version is the one pinned in the cache?

Issue: Does CSP allow putting limits on where ServiceWorkers can be loaded from? We need to restrict ServiceWorker scopes, as well as script URLs, to be from inside the package.


Origins and cookie jars

The biggest change here is that we should stop always using different cookie jars for different apps. In particular normal unsigned content should always use the same cookie jar no matter which app it belongs to.

However, signed packages will get their own cookie jars. So a signed package will not share cookies, IndexedDB data, etc. with unsigned content from the same domain. It will also not share data with other signed packages from the same domain. This is to ensure that unsigned content from the same domain can't read, for example, sensitive data that the signed content has cached in IndexedDB.

Issue: Do requests from a signed package to unsigned content use the package's cookie jar? Or the normal cookie jar. I.e. if a signed package does an XHR request to a normal website, does that use the website's cookies?

Issue: Figure out how to give signed content its own cookie jar. One potential solution here is to remove our close tie between cookie jar and app ID. Another possible solution would be to make the various APIs use the full package path instead of the domain as key.

However, when we are loading the package itself, we don't use the cookie jar of the signed app. Instead we use the cookie jar of the unsigned content for the origin on which the package lives. We have to do it this way since, when we fetch a signed package for the first time, we don't know that the package is signed, and so we use the normal cookie jar for that domain.

Issue: What happens if unsigned content does an XHR request to a URL inside a signed package? There doesn't seem to be any security issue involved in allowing that.

Issue: Does this mean that the *cookies* used for signed content are the same as the cookies used for unsigned content? I.e. that only IDB/localStorage/permissions are separate for signed content. That seems to be the case if network requests to normal websites from signed content use the normal cookie jar. What does document.cookie return? Should we make it return null?

Issue: One potential solution here is to do security checks in the parent process only for protecting the storage data and permissions of the signed content, and to make sure to flag the principals used for documents loaded from signed packages as belonging to the appropriate package. But for anything related to networking, just treat signed content like normal content belonging to the normal cookie jar.

Issue: Would it be simpler to make signed content use an entirely separate cookie jar, including for XHR requests and <iframe>s to content outside of the signed package? That might allow us to use a more generic cookie jar feature.

Signed content must never be considered same-origin with unsigned content, or with content from another signed package. This is to ensure that unsigned content from the same https domain can't open the signed content in an <iframe> and then reach into the opened page and use its privileges.

The mechanism which is used to ensure that signed packages get a unique cookie jar should also be used to make sure that principals from signed and unsigned pages are never considered same-origin.
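
One way to picture that mechanism (a hypothetical shape, not an existing Gecko structure): extend the key that identifies an origin with the package URL, so that storage lookups and same-origin checks both take the package into account.

  // Hypothetical origin key: unsigned content carries packageUrl = null, while
  // signed content carries the URL of the package it was loaded from.
  interface OriginKey {
    scheme: string;
    host: string;
    port: number;
    packageUrl: string | null;
  }

  // Two principals are same-origin (and share a cookie jar) only if every
  // field matches, including the package they came from.
  function sameOrigin(a: OriginKey, b: OriginKey): boolean {
    return a.scheme === b.scheme &&
           a.host === b.host &&
           a.port === b.port &&
           a.packageUrl === b.packageUrl;
  }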

Issue: Figure out exactly what field to use to indicate which signed package a principal belongs to.