Performance/Fenix/Performance reviews
Revision as of 18:06, 14 October 2021
When submitting a PR for Fenix or Focus, if you believe the changed code could have a positive (or negative) impact on performance, there are a few things you can do to test the impact of the modified code.
Testing Start Up code
To test start up code, the approach is usually simple:
- From the mozilla-mobile/perf-tools repository, use measure_start_up.py. The arguments for start-up should include your target (Fenix or Focus).
- Determine the start-up path that your code affects; this could be:
  - Cold main first frame (cold_main_first_frame in the script). This is the first frame drawn by the application. This path is taken by all types of start-up.
  - Cold view nav start (cold_view_nav_start in the script). This path is taken when the browser is opened through an outside link (e.g. a link opened through Gmail).
  - Cold main session restore (cold_main_session_restore in the script). This path is taken when the browser was closed with an open tab; when reopened, the application will automatically restore that session.
- After determining the path your changes affect, these are the steps that you should follow:
Example:
- Run measure_start_up.py located in perf-tools. Note:
  - The usual iteration count used is 25. Running fewer iterations might affect the results due to noise.
  - Make sure the application you're testing is a fresh install. If testing the Main intent (which is where the browser ends up on its homepage), make sure to clear the onboarding process before testing.

  python3 measure_start_up.py cold_view_nav_start /Users/computername/repositories/fenix/ nightly -p fenix -c 50 --no_start_up_cache

  where -p is the product and -c is the iteration count
- Once you have gathered your results, you can analyze them using analyze_durations.py in perf-tools.
python3 analyze_durations.py /Users/computername/output/measure_start_up_results.txt
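analyze_durations.py is the authoritative tool, but for a quick sanity check the kind of summary it produces can be sketched in a few lines of Python. This sketch assumes the results file holds one duration in milliseconds per line; the actual format written by measure_start_up.py may differ.

```python
import statistics

def summarize(path):
    # Assumes one duration (in ms) per line; the real file written by
    # measure_start_up.py may use a different layout.
    with open(path) as f:
        durations = [float(line) for line in f if line.strip()]
    return {
        "iterations": len(durations),
        "mean_ms": statistics.mean(durations),
        "median_ms": statistics.median(durations),
        "stdev_ms": statistics.stdev(durations) if len(durations) > 1 else 0.0,
    }
```

With 25+ iterations, the median and standard deviation give a rough sense of whether a before/after difference is larger than the run-to-run noise.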
NOTE: To compare before and after changes made to Fenix, repeat these steps for the code before the changes. To do so, you can check out the parent commit (e.g. found with git rev-parse ${SHA}^, where ${SHA} is the first commit on the branch containing the changes).
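The parent-commit lookup can be wrapped in a small helper; this is just a convenience sketch around the git command above, not part of perf-tools.

```python
import subprocess

def parent_of(sha, repo_dir="."):
    # `git rev-parse <sha>^` prints the hash of the commit's first parent,
    # i.e. the state of the code just before the branch's changes.
    result = subprocess.run(
        ["git", "rev-parse", f"{sha}^"],
        cwd=repo_dir, capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```

Check out the hash it returns, rebuild, and rerun measure_start_up.py with the same arguments to produce the "before" numbers.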
An example of using these steps to review a PR can be found here: https://github.com/mozilla-mobile/fenix/pull/20642#pullrequestreview-748204153
Testing non start-up changes
Testing non-start-up changes is a bit different from the steps above, since the performance team doesn't currently have tools to test other parts of the browser.
- The first step here would be to instrument the code to take manual timings. By getting timings before and after the changes, you can potentially spot any change in performance.
- Using profiles and markers.
- Profiles can be a good visual representation of performance changes. A simple way to find your code and its changes is through the call tree, the flame graph, or the stack chart. NOTE: some code may be missing from the stack, either because ProGuard may inline it or because the profiler's sampling interval is longer than the time the code takes to run.
- Another useful tool for finding changes in performance is markers. Markers can show the time elapsed between point A and point B, or pinpoint when a certain action happens.
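Fenix itself is Kotlin, but the manual-timing and marker ideas above can be sketched in a few lines of Python for illustration; the function and marker names here are made up for the example, not a Fenix or Firefox Profiler API.

```python
import time

_markers = []

def marker(name):
    # Record a named timestamp ("marker") to pinpoint when an action happened.
    _markers.append((name, time.perf_counter()))

def elapsed_ms(start_name, end_name):
    # Time elapsed between two markers (point A to point B), in milliseconds.
    times = dict(_markers)
    return (times[end_name] - times[start_name]) * 1000.0

marker("settings_load_start")
time.sleep(0.05)  # stand-in for the code being measured
marker("settings_load_end")
print(f"{elapsed_ms('settings_load_start', 'settings_load_end'):.1f} ms")
```

Comparing such timings taken before and after a change gives a first, rough signal; a profile confirms where the time actually went.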