Community:SummerOfCode15

| Python, AngularJS, SQL, Javascript
| [[https://mozillians.org/en-US/u/wlach/ Will Lachance]], [[https://mozillians.org/en-US/u/jmaher/ Joel Maher]]
| The impact here is that developers and release managers can see the performance impact of their changes, while helping us track performance over time.
|-
| Of the thousands of unit tests run for each platform and each push, we find many intermittent failures.  This is a pain point for developers when they test their code on try server.  Now that we have TreeHerder, it isn't much work to automatically annotate jobs as intermittent or as a regression/failure.  In mochitest we have --bisect-chunk, which retries the given test and determines whether it is an intermittent or a real regression.  The goal here is to do this automatically for all jobs on try server.  Jobs will still turn orange, so with the outcome of this project failures would need a different view in the UI.
| Python, Javascript
| [[https://mozillians.org/en-US/u/jmaher/ Joel Maher]]
| This will build on an existing set of tools while helping us bridge the gap toward a much better system for reviewing and automatically landing patches.  In the short term, this will aid developers who see failures and either do multiple pushes, many retriggers, or just ignore them; in summary, we will not need to worry as much about wasting resources on intermittents.
|-
| With our thousands of test files, there are hundreds with dangerous API calls that result in leftover preferences, permissions, and timing issues.  A lot of work has been done here; we need to fix the remaining tests and expand this work to all our tests.  In addition to cleaning up dangerous test code, we need to understand our tests and how reliable they are.  We need to build tools that determine how safe and reliable our tests are, individually and as part of a suite.  Upon completion of this project we should have the majority of tests cleaned up, and a toolchain that can be easily run to generate a report on how stable each test is.
| Python, Javascript
| [[https://mozillians.org/en-US/u/mwargers/ Martijn Wargers]], [[https://mozillians.org/en-US/u/jmaher/ Joel Maher]]
| This helps us clean up our tests to reduce intermittents, and gives us tools to write better tests and understand our options for running tests in different configurations.
|}
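The retry-based classification described in the intermittent-failures row above can be sketched as follows. This is a minimal illustration of the idea, assuming a `run_test` callable that reports pass/fail; the function name and retry count are hypothetical, not the real mochitest `--bisect-chunk` implementation.

```python
def classify_failure(run_test, test_name, retries=5):
    """Rerun a failing test several times.

    Returns 'regression' if it fails on every retry, or
    'intermittent' if it passes at least once.
    """
    results = [run_test(test_name) for _ in range(retries)]
    if not any(results):
        return "regression"   # failed every time: likely a real failure
    return "intermittent"     # passed at least once: flaky
```

A tool like this could run automatically after any orange job on try server, feeding the classification back into the job annotation.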
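The per-test stability report mentioned in the last row could work along these lines: run each test repeatedly and record its pass rate so unstable tests stand out. A sketch under stated assumptions; `run_test`, the iteration count, and the report shape are illustrative, not an existing toolchain.

```python
def stability_report(run_test, test_names, iterations=10):
    """Map each test name to its pass rate over `iterations` runs."""
    report = {}
    for name in test_names:
        passes = sum(1 for _ in range(iterations) if run_test(name))
        report[name] = passes / iterations
    return report

def unstable_tests(report, threshold=1.0):
    """List tests whose pass rate falls below the threshold (flaky candidates)."""
    return sorted(name for name, rate in report.items() if rate < threshold)
```

Running this per suite, and again per test in isolation, would show which tests are only unstable in the presence of others, which is exactly the leftover-state problem the project describes.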