Electrolysis

Revision as of 11:42, 15 May 2009

The Mozilla platform will use separate processes to display the browser UI and web content.

Goals

Initial goals:

  • Provide better application UI responsiveness
  • Improve stability by isolating slow rendering and crashes caused by content
  • Improve performance, especially on multi-core machines

Potential future goals:

Core Team

  • Benjamin Smedberg (coordinator)
  • Joe Drew
  • Jason Duell
  • Chris Jones (cjones)
  • Ben Turner
  • Boris Zbarsky

Volunteers welcome! Please email benjamin@smedbergs.us

Implementation

Phase I: Bootstrap

Get something hacked together as quickly as possible. This will probably not be the Firefox chrome, but a very simple page with just a URL bar.

  • Run a single content process at startup
  • The content process will manage and draw its own native widgets (HWND)
    • Focus handling may be completely awry
  • The content process will run its own native event loop
  • The content process will not use a profile
    • This means there will be no usable global history
  • The content process will perform its own networking
    • This means that secure websites will be completely broken
  • The chrome process will use the content process to render <xul:browser type="content|content-primary">
  • Targeted links will not work
  • Session history may not work, although it would be nice to at least get back/forward buttons
  • Tackle a single platform at first: still deciding between Linux/Windows
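The Phase I split above can be illustrated with a minimal sketch (plain Python, not Mozilla code): a "chrome" side hands a URL to a separate OS "content" process over pipes and reads back the rendered result. The message format and the `chrome_navigate` helper are invented purely for illustration.

```python
import subprocess
import sys

# Stand-in program for a content process: it reads a URL on stdin,
# pretends to fetch and lay out the page, and writes the "rendered"
# result to stdout. A real content process would do its own networking
# and drawing, per the Phase I plan above.
CONTENT_PROCESS = """\
import sys
url = sys.stdin.read().strip()
sys.stdout.write("<page for %s>" % url)
"""

def chrome_navigate(url):
    """Chrome side: hosts the URL bar and delegates rendering to a
    separate content process, talking only over stdin/stdout pipes."""
    result = subprocess.run([sys.executable, "-c", CONTENT_PROCESS],
                            input=url, capture_output=True, text=True)
    return result.stdout
```

Note the chrome side blocks on the reply here only to keep the sketch short; keeping the chrome UI responsive while content works is exactly the point of the project, so a real implementation would read the reply asynchronously.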

Schedule goal: 15-July-2009

Phase II: Parallel Improvements

Once the initial bootstrap is complete, many of the following tasks can be completed in parallel:

  • Move all networking to the parent process
  • Proxy any necessary profile access to the parent process
    • history (bz suggests making content process history checks purely async and kicking these off eagerly during parsing)
    • preferences
    • ?
  • Hook up docshell-y stuff
    • session history
    • link targets
  • Identify and fix operations where chrome tries to touch content:
    • Find in page
    • context menus
    • snapshots of content (tab preview/fennec)
    • session restore
  • Widget issues
    • Focus
    • Drag and drop
    • Get all the tier-1 platforms working
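bz's suggestion of purely async history checks can be sketched roughly as follows. Threads and queues stand in for the two processes and their IPC channel, and the `parse_page` helper is invented for illustration: the content side fires off visited-link queries eagerly while parsing and applies the answers whenever they arrive, never blocking on the parent.

```python
import queue
import threading

VISITED = {"http://example.org/"}  # stand-in for the parent's history store

def parent_history_service(requests, replies):
    """Parent-process side: answers visited-link queries as they arrive."""
    while True:
        url = requests.get()
        if url is None:  # shutdown sentinel
            break
        replies.put((url, url in VISITED))

def parse_page(links):
    """Content side: kick off queries eagerly during parsing, then apply
    the answers asynchronously as they trickle back."""
    requests, replies = queue.Queue(), queue.Queue()
    parent = threading.Thread(target=parent_history_service,
                              args=(requests, replies))
    parent.start()
    for url in links:       # fire queries while "parsing"
        requests.put(url)
    visited = {}
    for _ in links:         # apply answers as they come back
        url, seen = replies.get()
        visited[url] = seen
    requests.put(None)
    parent.join()
    return visited
```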

After phase II, we should be able to run the Firefox UI and pass all tests except for accessibility.

Note: bz suggests that Firefox chrome does a lot of extension-y stuff like touching the content DOM, and it might make sense to leave some set of Firefox UI semi-broken until phase III. Let's see how bad it gets.

SWAG schedule goal: 1-November-2009

Phase III: extensions, compatibility, and performance

Lots of tuning and polish.

  • Make extensions useful again. roc has proposed proxying arbitrary scripts across the chrome->content boundary; bsmedberg would prefer to alter the API so that callers must explicitly cross the boundary with shared-nothing scripts. We'll decide when we get there.
  • Fix up accessibility (will need help!)
  • Measure and tune performance
    • Tune chrome and content startup time
      • Especially make sure we're not doing unnecessary work on global observers
      • And that NSS/necko are not initialized in the content processes
    • Hook up crash reporting and other process monitoring tools
      • Should use Breakpad's out-of-process exception handling for content processes on Windows/Mac
  • Hunt all regressions
  • Polish

After this phase, we should be ready for a release. UI responsiveness should be noticeably better, and this would help mobile. A single misbehaving website could still take down all the user's content, however.

There is no good way to even SWAG the schedule for this phase.

Phase IV: Multiple content processes

  • Run multiple content processes on-demand per tab/tabgroup/domain/whatever
  • Graphics: figure out shared/separate image caches
  • Graphics: figure out shared/separate font-metrics caches
  • Hook up multiple process monitoring for users

Frankly, once phase III is done, I expect this will be fairly quick and painless: the major risks are startup time and memory use regressions.
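The per-tab/tabgroup/domain grouping mentioned above could be sketched as a simple pool that reuses one content process per site domain. `ContentProcessPool` and the integer process ids are invented for illustration; the real policy (and whether grouping is by tab, tab group, or domain) is still an open question.

```python
from urllib.parse import urlparse

class ContentProcessPool:
    """Hands out one (fake) content process per site domain, so pages
    from the same site share a process and unrelated sites are isolated."""
    def __init__(self):
        self.by_domain = {}
        self.next_id = 0

    def process_for(self, url):
        domain = urlparse(url).hostname
        if domain not in self.by_domain:
            # In a real browser this is where a new OS process would be
            # launched on demand; here we just mint an id.
            self.by_domain[domain] = self.next_id
            self.next_id += 1
        return self.by_domain[domain]
```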

Future Phases

  • Security sandboxing (involves removing the native widgets and event loop from the content processes)

Risks

There are lots of potential long-pole items in each phase. There are a fair number of unknowns.

The current plan doesn't implement arbitrary cross-process proxies of JS or XPCOM objects, and we'd really like to avoid them if possible (the risk of deadlocks and other strange behavior is high). But the ways in which chrome touches content are not well known. We need input from frontend hackers and some design work.
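The shared-nothing alternative to arbitrary proxies might look roughly like this sketch: chrome never holds a live reference into content, and only serialized messages cross the boundary, so there is no object graph to deadlock over. All class names and message fields here are hypothetical.

```python
import json

class ContentSide:
    """Stands in for the content process: it only ever sees serialized
    messages, never live chrome objects."""
    def __init__(self, dom_text):
        self.dom_text = dom_text

    def handle(self, message):
        msg = json.loads(message)  # shared-nothing: data crosses, not objects
        if msg["type"] == "find-in-page":
            return json.dumps({"found": msg["term"] in self.dom_text})

class ChromeSide:
    """Stands in for chrome: it posts a message and gets the answer back
    through a callback instead of poking at the content DOM directly."""
    def __init__(self, content):
        self.content = content

    def find_in_page(self, term, callback):
        request = json.dumps({"type": "find-in-page", "term": term})
        reply = json.loads(self.content.handle(request))
        callback(reply["found"])
```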

There have been mutterings about taking the chromium network stack wholesale (replacing necko, basically). This may or may not be the fastest path to success: it really depends on how hard it is to map the APIs together and how much we're willing to change callers versus reimplementing the XPCOM API on top of the chromium stack. This needs discussion and a decision within a month or so. If we take chromium networking, we should probably do it on mozilla-central in parallel with phase I.

Testing

The performance testing infrastructure is going to require some new features to deal with processes. In particular, we'll want to measure memory usage and Tbeachball for the chrome and any content processes separately.

Subprojects

IPC Protocols for safe communication (cjones)
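As a toy illustration of what a checked IPC protocol buys (this is not the actual IPC Protocols design), each message type and the states it may be sent from can be declared up front, so undeclared or out-of-order sends fail loudly instead of silently corrupting the channel. The `LoadURL`/`DocumentDone` messages below are invented examples.

```python
class ProtocolError(Exception):
    pass

class Protocol:
    """Declares the legal messages and which state each may be sent from;
    any other send is rejected before it reaches the wire."""
    def __init__(self, transitions, start):
        self.transitions = transitions  # {(state, message): next_state}
        self.state = start

    def send(self, message):
        key = (self.state, message)
        if key not in self.transitions:
            raise ProtocolError(
                f"{message!r} is illegal in state {self.state!r}")
        self.state = self.transitions[key]

# A tiny browser-flavored protocol: you may only report a document done
# while a load is in flight.
browser = Protocol({("idle", "LoadURL"): "loading",
                    ("loading", "DocumentDone"): "idle"},
                   start="idle")
```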