QA/TCM/Meeting Notes

Revision as of 16:54, 25 January 2011

.1

1-25-2011

Things Done:

  • carljm
    • finished up user accounts
  • ericam
    • corresponding on naming/branding
  • camd
    • servlet's up and running
  • aakashd
    • filed bug for .6
    • finished up basic naming/branding

Things to accomplish and when:

  • carljm:
    • put login/user account on staging
  • ericam
    • corresponding on naming/branding
  • camd
    • set up Hudson
    • run it against the latest tcmplatform pull and fix the errors found
  • aakashd
    • send e-mail on feedback from the wireframes


Flags:

  • Hitting errors on the latest database pull
  • Agile like spiderman. We can bob and weave. Sidenote: buy spiderman outfits for team.

1-20-2011

Things Done:

  • carljm
    • worked on django core
    • continued work on login
  • ericam
  • camd
    • rebuilt test server
  • aakashd
    • filed bugs for .3
    • started work on naming/branding with names and personality

Things to accomplish and when:

  • carljm:
    • login/user account
  • ericam
  • camd
    • how to get test servlet up and running
  • aakashd
    • file bugs for .6
    • finish up basic naming/branding
    • send e-mail on feedback from the wireframes


1-18-2011

Things Done:

  • carljm
    • user registration and login started; Python code in place to get API data
    • got prototype server up
  • ericam
    • finished wireframes for run tests, frontpage, and registration
  • camd
    • issues with calling platform on a testserver and my automation
  • aakashd
    • laid out testcases for .3 and .6

Things to accomplish and when:

  • carljm:
    • finish user registration and login
    • finish product and cycles list
  • ericam
    • make adjustments per carljm's recommendation for the wireframes
  • camd
    • finish a servlet representing the platform that'll allow the automation to run without hiccups
  • aakashd
    • file bugs for .3 and .6
    • follow-up with roy on branding and naming

Flags:

  • move code freeze to 1/27
  • create testcase
  • questions about tabs per user
    • not logged-in: run tests, results
    • tester: run tests, create testcase, results
    • admin: run tests, manage, results
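
The per-role tab question above could be captured as a simple role-to-tabs mapping. This is only an illustrative sketch; the role keys and the `visible_tabs` helper are hypothetical, while the tab names come straight from the bullets above.

```python
# Hypothetical sketch of the per-role tab sets listed in the notes above;
# role names and this mapping are illustrative, not shipped code.
TABS_BY_ROLE = {
    "anonymous": ["run tests", "results"],
    "tester": ["run tests", "create testcase", "results"],
    "admin": ["run tests", "manage", "results"],
}

def visible_tabs(role):
    """Return the tabs a user with the given role should see;
    unknown roles fall back to the not-logged-in set."""
    return TABS_BY_ROLE.get(role, TABS_BY_ROLE["anonymous"])
```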

1-13-2011

Things Done:

  • carljm
    • worked on registrations and logins
  • ericam
    • worked on the test cycle chooser and run-tests wireframes
  • camd
    • succeeded in getting a create object test for user and company
    • refactoring URI mapping to accommodate changes uTest has made for my tests
    • more smoketests now run against the qa test server
  • aakashd
    • filed bugs for .2 and .5 release

Things to accomplish and when:

  • carljm:
    • continue working on registrations and logins
  • ericam
    • finish up test cycle chooser and run-tests wireframes
  • camd
    • get more lettuce tests to pass against camd.mv.mozilla.com TCM server
    • create more lettuce smoketests against TCM apis
  • aakashd
    • get .3 and .6 bugs filed
    • work on branding and naming

Flags:


1-11-2011

Things Done:

  • carljm
    • Got the majority of the site work done
    • triaged through bugs needed for .1
  • camd
    • Worked on automation and ran some smoketests on some APIs
  • aakashd
    • got a 1st try at a project schedule up

Things to accomplish and when:

  • carljm:
    • Get HTML-styles for wireframes completed and pushed
  • camd
    • get automation running
    • host it in a public place for uTest to see
    • file bugs as necessary
  • aakashd
    • .2, .3, .4 bugs filed

Flags:

  • camd's testruns have a lot of dependencies (fyi in the future when creating tests)

1-6-2011

Things Done:

  • carljm
    • Set up basic structure of the code (handle dependencies)
    • Hook into
  • camd
    • Setting up a dedicated server for running tests
  • aakashd
    • got a 1st try at a project schedule up

Things to accomplish and when:

  • carljm:
    • HTML versions of the wireframes for .1, 1/
    • Requirements offered from uTest for .1
  • camd
    • get automation running
    • host it in a public place for uTest to see
    • file bugs as necessary
  • aakashd
    • .2, .3, .4 bugs filed

Flags:

  • Calls advertised for .1 are all there
  • Environments
    • user profile default environment variables:
      • no default when users are managing their profiles or beginning to run tests
      • talk to utest about adding that feature
      • file a bug in TCM:Future for default env vars

12-13-2010

test suite

  • added priority and order to testcases

management

  • manage products?
    • admin options
  • manage companies?
    • companies get associated to the urls
  • testsuites
    • clone 1.0+
  • testruns
    • clone 1.0+
  • testcases
    • step model for adding testcases
    • checkbox: leave older versions alone (e.g. when fixing a minor typo) or change all existing versions of this testcase (WARNING)
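
The checkbox described above could behave roughly like this. This is a hypothetical sketch, not the real data model: a testcase is represented here as a plain list of version dicts, newest last.

```python
# Hypothetical sketch of the versioned-edit checkbox described above.
# `versions` is a list of version dicts, newest last; not the real schema.
def apply_edit(versions, new_text, change_all=False):
    """Apply an edit to the latest version only, or, with the WARNING'd
    checkbox checked, rewrite every existing version in place."""
    if change_all:
        for v in versions:
            v["text"] = new_text
    else:
        versions[-1]["text"] = new_text
    return versions
```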

run tests

  • step number failed
  • categorize by test cycle and then a list view

12-9-2010

Testcase Edit

  • if a test run is in draft, then update testcase;
  • only add a new version or update the testcase (edit testcase)
  • "update the testcase" or "create a new version"

Test Case Bulk Edit

  • no api availability for bulk editing
    • when adding a testcase to a test suite, there are external variables that need to be dealt with, so we can't do a bulk edit that adds test suites
  • take out bulk edit for 1.0 and move to future

Testcase Manage

  • lots of possible api calls with test suite
  • scale with hundreds
    • can fetch details on hover (file a bug)

Testcase Import

    • steps for a testcase are not a free-form text field; each step has its own identity
  • should each step have its own object?
  • what does a test case look like in the data model? (matt, emily)
  • import feature would need to be UI-side
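
Since each step has its own identity rather than living in one free-form text field, the data model might look roughly like this. A sketch only; the class and field names (`number`, `action`, `expected_result`) are assumptions, not the real schema.

```python
from dataclasses import dataclass

@dataclass
class TestCaseStep:
    """One step of a testcase: a separate object, not part of a free-form
    string. Field names are illustrative assumptions."""
    number: int
    action: str
    expected_result: str

@dataclass
class TestCase:
    name: str
    steps: list

# Example testcase built from step objects.
case = TestCase("Login works", [
    TestCaseStep(1, "Open the login page", "Login form is shown"),
    TestCaseStep(2, "Submit valid credentials", "User is logged in"),
])
```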


carljm's Notes

Platform wishlist

  • Manage
    • Backend API can currently only support one-at-a-time actions, not bulk.
    • Deletion: may want to consider asking the backend to only put deleted items into "deleted" state, not actually remove; this would allow for user-friendly "undo" on delete. Only worth it for things where a deletion actually loses significant amounts of data.
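
The "deleted state instead of real removal" idea could be sketched like this (a hypothetical class, not the platform's actual API): deletion only flips a flag, so a user-friendly undo can restore the item.

```python
class SoftDeletable:
    """Hypothetical sketch of soft deletion per the wishlist above:
    delete() only marks the item deleted, so undo() can restore it
    without any data loss."""
    def __init__(self, name):
        self.name = name
        self.deleted = False

    def delete(self):
        self.deleted = True   # hide the item, keep its data

    def undo(self):
        self.deleted = False  # user-friendly undo
```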

UI Implementation Notes

  • Manage
    • Don't allow deletion of active test cycle or test run, or test case version referenced in active test run, etc. Backend will throw error anyway.
    • Removing test cycle also removes all the test runs for that cycle.
    • Test Run states: uTest uses the terminology Draft, Activated, Closed. We may as well use the same terms.
    • Test Run management screen is missing Test Cycle selection, start/end date, a few other attributes.
    • Test Runs can't be assigned directly, so listing a TestRun as assigned to a particular tester doesn't make sense. They just have a self-assignable or not (boolean) attribute, which determines whether the TestCases in that TestRun can be self-assigned. Only test cases as part of a test run can be assigned to someone; though the UI will probably want to provide bulk-assignment features.
    • In general, important to have filters in management interface. Ability to filter things by status, boolean attributes, etc. Wireframes are missing this; will also need platform support. (For example, searching for TestCases when creating a TestSuite need to be able to filter on non-textual attributes of a TestCase).
    • TestSuite and TestCase creation need to be within context of a product: this doesn't seem to be reflected in the wireframes.
    • Environments should be associatable with anything (test case, test suite, product, etc) via a UI that is consistent throughout the management interface.
    • When adding TestCases to a TestSuite or TestRun, need to be able to specify order and priority for each TestCase, also blocking (does Mozilla care about this attribute?).
    • TestCase creation shouldn't include adding to TestSuite (how do you specify priority, order, etc, when you can't see the rest of the TestCases in the TestSuite?) TestCase editing should include a read-only view of what test suites the test case is a part of, though.
    • What about management screens for products and components and companies?
    • Is the assumption that each company will have a company-specific separate site and URL? Or that there may be a single site for multiple companies?
    • Need management screens for setting up EnvironmentTypes and EnvironmentGroups, in order to populate the environment dropdowns with options.
    • Test case editing "edit an older version" needs to be clearer that you are effectively superseding the current latest version, since we don't support branching.
    • TestCase steps are actual entities, not just a freeform string field. Each step has separate action and expected result.
    • TestCase editing should make versioning explicit and optional. So you can edit an existing version in-place (for minor edits, typos etc), or create a new version. This replaces the "Update Test Runs associated" checkbox: if you create a new version, TestRuns referencing the old version will remain unchanged. If you edit a version in-place, TestRuns referencing it will see the change.
    • Bulk TestCase edit: remove "add to testsuite"; it doesn't work well with the need to specify the order of test cases in a test suite.
    • TestCase list: may need to make test-suite list a one-by-one ajax call on hover or on button click for more details.
  • Run Tests
    • How is user profile default environment supposed to work, given that the relevant environment factors are product-dependent and a user might be testing multiple products in totally different spaces?
    • RunTests environment selection needs to be filters on the testcase list, not a prior step. Otherwise it's confusing if nothing shows up. Not all environment stuff needs to be fixed for an entire testing session.
    • RunTests-choose: Test runs, not cycles. Is it needed to select multiple test runs? Seems like one at a time makes more sense and keeps things simpler.
    • RunTests-choose: Progress bar is just percentage of tests assigned to me that are completed?
    • RunTests-choose: need drill-down to list of test cases for claiming self-assigned, and viewing ones that have already been assigned, before moving on to execution.
    • Missing the UI for assigning test cases: need to be able to assign environments.
  • RunTests - executing
    • When failing a TestCase, need to be able to describe what failed (and possibly also identify a particular step as having failed, since steps are separated).
    • Just show environment details, don't make them link to it. And it should be read-only.
    • Sort test-results on these screens by environment, because that's the most sensible ordering for a tester. Within that, by order.
    • Show priority on test-result, also give option to sort by priority?
    • Need not just attachments from test case, but also way for tester to upload attachment related to their result.
    • Need start button and then succeeded/failed button for each test (because backend wants to record time taken to perform test). Also buttons provide clearer UI path than dropdowns.
    • Is there any point to displaying the full controls for multiple tests at once? A tester only cares about one at a time. Maybe just display titles and such for the previous and next few to give some context if that's important, but not all the controls.
    • Maybe need a company and/or product-level configuration to toggle the availability of the "make it better" feature: some users won't want that at all.
  • Results
    • Test cycle
      • Doesn't make sense to list a tester on a test cycle; there will almost surely be multiple testers involved.
      • Does it even make sense to be able to approve or reject an entire test cycle at once? Don't you need to look at the individual results?
      • "Recreate with failures only" should be test-run level, within the test cycle. Not an option on a test cycle.
    • Test run
      • Need to be able to "re-test" (optionally failures only) the entire test run. By default assigned to same tester, can be reassigned.
    • Test case
      • Really needs to be results, not test cases: can have multiple results per test case (one for each environment group).
      • Need re-test option for individual test case, with way to assign it to a new tester.
      • Don't need both Status and Result columns; they are the same thing.
    • Test suite
      • At the execution level, test suites are no longer really relevant. They are just a convenience for adding test cases to a test run. So this screen probably shouldn't exist.
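
The assignment rules described in the notes (test runs are never assigned directly; only testcases within a run are assigned, one result per environment, and self-assignment depends on the run's boolean) could be sketched as follows. All names here are hypothetical illustrations, not the real platform API.

```python
class TestRun:
    """Hypothetical sketch of the assignment rules in the notes: a TestRun
    only carries a self_assignable flag governing whether testers may claim
    its testcases; assignments are per (testcase, environment) pair."""
    def __init__(self, self_assignable):
        self.self_assignable = self_assignable
        self.assignments = {}  # (case, environment) -> tester

    def assign(self, case, environment, tester, by_admin=False):
        # Self-assignment is only allowed when the run permits it;
        # an admin (bulk-assignment UI) can always assign.
        if not by_admin and not self.self_assignable:
            raise PermissionError("run does not allow self-assignment")
        self.assignments[(case, environment)] = tester
```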

Features to postpone

  • Import/export.
  • Bulk-editing of TestCases.