Performance/Fenix/Performance reviews

Whenever you submit a PR for Fenix or Focus and believe that the changed code could have a positive (or negative) impact on performance, there are a few things you can do to test the impact of the modified code. Before testing, the ideal setup is as follows:

  1. Use the current reference low-end device. As of September 2021, the reference phone is (ideally) a Motorola Moto G5 or anything close to it.
  2. Ensure your network connection is good if your code depends on network requests. If you need to test on a throttled connection, there are ways to emulate that.
  3. Make sure your phone's battery is not too low and that battery saver mode is off (if your phone has one).
  4. Ensure your phone is not too hot: a heated phone could lead to the CPU being throttled to let the phone cool down.

Before testing, clone the perf-tools repository from mozilla-mobile (https://github.com/mozilla-mobile/perf-tools), which contains the scripts used below.
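For example (assuming you want the default branch):

  git clone https://github.com/mozilla-mobile/perf-tools.git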

Testing start-up code

To test start-up code, the approach is usually straightforward:

  1. Using the repository cloned from mozilla-mobile, run measure_start_up.py.

The arguments passed to the script should include your target (Fenix or Focus).

  1. Determine the start-up path that your code affects; it could be one of:
    1. Cold main first frame (cold_main_first_frame in the script). This is the first frame drawn by the application. This path is taken by all types of start-up.
    2. Cold view nav start (cold_view_nav_start in the script). This path is taken when the browser is opened through an outside link (e.g., a link opened through Gmail).
    3. Cold main session restore (cold_main_session_restore in the script). This path is taken when the browser was closed with an open tab; when it is reopened, the application automatically restores that session.
  2. After determining the path your changes affect, these are the steps that you should follow:
  • Run measure_start_up.py located in perf-tools. Note:
    • The usual iteration count used is 25. Running fewer iterations might affect the results due to noise.
    • Make sure the application you're testing is a fresh install. If testing the Main intent (which lands the browser on its homepage), make sure to clear the onboarding process before testing.
 python3 measure_start_up.py {path_changes_affect} {path_to_repo} {release_channel} -p fenix -c {how_many_iterations_to_test} --no_start_up_cache
  • Once you have gathered your results, you can analyze them using analyze_durations.py in perf-tools (https://github.com/mozilla-mobile/perf-tools/blob/main/analyze_durations.py).
  python3 analyze_durations.py {path_to_output_of_measure_start_up.py}
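
For example, a concrete run for the cold main first frame path over 25 iterations might look like this (the fenix checkout path, the nightly channel, and the output filename are hypothetical values; substitute your own):

  python3 measure_start_up.py cold_main_first_frame ~/fenix nightly -p fenix -c 25 --no_start_up_cache
  python3 analyze_durations.py durations.txt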


NOTE: To compare Fenix before and after your changes, repeat these steps for the code without the changes. To do this, you can check out the parent commit (e.g., using git rev-parse ${SHA}^ where ${SHA} is the first commit on the branch containing the changes).
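
For example, a minimal way to check out that baseline commit, assuming ${SHA} is the first commit of your branch:

  git checkout $(git rev-parse ${SHA}^)

After gathering the baseline numbers, you can return to your branch with git checkout - and compare the two sets of results.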

An example of using these steps to review a PR can be found here: https://github.com/mozilla-mobile/fenix/pull/20642#pullrequestreview-748204153

Testing non-start-up changes

Testing non-start-up changes is a bit different from the steps above, since the performance team doesn't currently have tools to test other parts of the browser.

  1. The first step is to instrument the code to take manual timings (see the Kotlin sketch after this list). Comparing timings gathered before and after the changes can indicate a change in performance.
  2. Use profiles and markers.
    1. Profiles can be a good visual representation of performance changes. A simple way to find your code and its changes is through the call tree, the flame graph, or the stack graph. NOTE: some code may be missing from the stack, either because ProGuard inlined it or because it ran faster than the profiler's sampling interval.
    2. Another useful tool for finding changes in performance is markers. Markers are good for showing the time elapsed between point A and point B, or for pinpointing when a certain action happens.
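
As an illustration of item 1, here is a minimal Kotlin sketch of taking manual timings; the helper name, the log tag, and loadHomeScreen are all hypothetical, not Fenix APIs:

  import android.os.SystemClock
  import android.util.Log

  // Hypothetical helper: wraps the code under test with manual timings.
  // SystemClock.elapsedRealtime() is monotonic, so it is safe for measuring durations.
  inline fun <T> measureBlock(label: String, block: () -> T): T {
      val start = SystemClock.elapsedRealtime()
      val result = block()
      val durationMs = SystemClock.elapsedRealtime() - start
      // Compare this log output before and after your changes; as with the
      // start-up scripts, repeat the measurement to reduce noise.
      Log.d("PerfTiming", "$label took $durationMs ms")
      return result
  }

  // Hypothetical usage: time an operation your PR might affect.
  fun loadHomeScreen() {
      measureBlock("loadHomeScreen") {
          // ... the code whose performance you want to measure ...
      }
  }

The same helper can surround any block you suspect your PR affects; the durations logged on each revision can then be compared in aggregate.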