Mobile/Testing/05 09 18

= Previous Action Items =

= Status reports =

== Autophone ==
* Autophone.legacy is still puttering along.
* Next stage
** Review bc's Autophone.next patches
** android-emu vs. android-em?
*** gbrown likes android-emu
** Once android_hardware_unittest.py lands, merge android_{emulator,hardware}_unittests?
*** gbrown likes that idea
** mozharness versions of Autophone.legacy's S1S2 and Talos tests?
*** probably tier 2 / lower priority


== Dev team ==

== Product Integrity ==
* {{bug|1445716}} geckoview junit tests are running
** https://develope...
* working on x86 emulator setup on packet.net
** limited taskcluster support now available
* Maja working on {{bug|1323620}} for wpt
= Round Table =
* is there a webgl benchmark that we want to run on bitbar?
* how can we measure input latency?
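On the input-latency question above, one illustrative approach: once input-event and screen-response timestamps can be captured (e.g. from a device log or instrumentation), the per-event latency can be summarized as percentiles. This is a hypothetical sketch, not an agreed-on method; the function name and the sample data are invented for illustration.

```python
import math

def latency_percentiles(samples, percentiles=(50, 95, 99)):
    """Summarize input latency from (input_ts_ms, response_ts_ms) pairs.

    Hypothetical helper: per-event latency is response minus input time,
    reported at the requested percentiles using the nearest-rank method.
    """
    latencies = sorted(resp - inp for inp, resp in samples)
    n = len(latencies)
    return {p: latencies[min(n - 1, math.ceil(p / 100 * n) - 1)]
            for p in percentiles}

# Five made-up touch events: (input timestamp, observed response timestamp) in ms
samples = [(0, 48), (100, 132), (200, 260), (300, 345), (400, 470)]
print(latency_percentiles(samples))  # → {50: 48, 95: 70, 99: 70}
```

Percentiles (especially p95/p99) tend to matter more than the mean here, since occasional long-latency events dominate how janky input feels.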


= Action Items =
* davidb - consider glterrain or glvideo as a useful benchmark for webgl on GeckoView, or propose other benchmarks.
* all - get stakeholders (UX, Stuart, etc.) to determine which input latency items we should measure; figure out how to measure them