Labs/Ubiquity/Usability/Usability Testing/Fall 08 1.2 Tests/Tester 08a

== Tester 008 ==

''Embed [http://www.viddler.com/explore/indolering/videos/11/ video] here.''

== Highlights ==

# Mistaking the Awesome Bar for Ubiquity 04:20, specifically Google's "I'm Feeling Lucky" function 05:00
# Random guessing of commands 29:30

== Preliminary Recommendations ==

This tester highlights a deficit of statistical UI testing: remote click-through testing cannot show when a user clicks on something expecting it to do one thing and it does another. We can't very well log all of a user's keystrokes. Is there a way to monitor this behavior?
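
''One possible answer, sketched below: with an opt-in logging hook, tag every text entry as a resolved command, a failed command, or a command-like string typed outside Ubiquity. The verb list, event shape, and site names are invented for illustration; they are not Ubiquity internals.''

<pre>
// Hypothetical instrumentation sketch (TypeScript). None of these names
// come from the Ubiquity codebase; they illustrate the kind of event
// we would want to log, with user consent.

type InputSite = "ubiquity" | "awesome-bar" | "page-field";

interface CommandEvent {
  site: InputSite;            // where the text was typed
  text: string;               // raw input (keystroke-adjacent, so opt-in only)
  matchedVerb: string | null; // verb the parser resolved, if any
  timestampMs: number;
}

// Stand-in for the verb list a real Ubiquity install would expose.
const KNOWN_VERBS = ["translate", "map", "email", "calculate", "wikipedia"];

// Classify one input: did it look like a command, and did it resolve?
function classify(site: InputSite, text: string): CommandEvent {
  const verb = text.trim().split(/\s+/)[0]?.toLowerCase() ?? "";
  const matchedVerb = KNOWN_VERBS.includes(verb) ? verb : null;
  return { site, text, matchedVerb, timestampMs: Date.now() };
}

// A "mistargeted" event is a known verb typed outside Ubiquity --
// exactly the behavior this tester showed and click-through stats miss.
function isMistargeted(ev: CommandEvent): boolean {
  return ev.site !== "ubiquity" && ev.matchedVerb !== null;
}

// Example: the tester types a command into the Awesome Bar.
console.log(isMistargeted(classify("awesome-bar", "translate hola"))); // true
</pre>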

=== Ubiquity Core ===

# Merge Ubiquity with the Awesome Bar
# Use data gathering to capture failed commands and increase the intelligence of the thesaurus (see the sketch after this list)
# Consider inserting iframes (as opposed to JPEG screen captures), working with providers to support commands directly
# Make Google a fallback
# Make help non-linear
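
''As noted in item 2 above, a toy sketch of how failed commands could feed the thesaurus: map each failed verb to its closest known verb by edit distance and treat frequent, close pairs as synonym candidates. The verb list and distance threshold are assumptions, not Ubiquity internals.''

<pre>
// Sketch (TypeScript): propose thesaurus entries from logged failed verbs.

const KNOWN_VERBS = ["translate", "map", "email", "calculate", "wikipedia"];

// Classic Levenshtein edit distance between two strings.
function editDistance(a: string, b: string): number {
  const d: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      d[i][j] = Math.min(
        d[i - 1][j] + 1,       // deletion
        d[i][j - 1] + 1,       // insertion
        d[i - 1][j - 1] + cost // substitution
      );
    }
  }
  return d[a.length][b.length];
}

// Map each failed verb to its nearest known verb; keep only close matches.
function suggestSynonyms(failedVerbs: string[]): Map<string, string> {
  const suggestions = new Map<string, string>();
  for (const failed of failedVerbs) {
    let best = KNOWN_VERBS[0];
    for (const verb of KNOWN_VERBS) {
      if (editDistance(failed, verb) < editDistance(failed, best)) best = verb;
    }
    if (editDistance(failed, best) <= 2) suggestions.set(failed, best);
  }
  return suggestions;
}

console.log(suggestSynonyms(["translte", "mail", "lookup"]));
// translte -> translate, mail -> email; "lookup" is too far and stays unmapped
</pre>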

=== Translate Command ===

Raskin's first law of interface design: "A computer shall not harm your work or, through inaction, allow your work to come to harm." 22:15. I believe a single user has guessed to reload the page, after three previous failed attempts.

== Metrics ==

{| class="wikitable"
! Research Questions !! Performance Benchmarks
|-
| How do users try and access Ubiquity?
# Number of things they tried before launching Ubiquity
# Time before launching Ubiquity
# Did the instructor have to directly show them how?
|
# 3
# 20 minutes
# No
|-
| How do they learn the command syntax?
|
|-
| Do users value Ubiquity?
* Feedback
* Follow-up studies
|
|-
| How would we identify problematic commands via statistical analysis? (see the sketch after this table)
* Look at failed commands and commonalities
** Lack of completion
** High error rates
* User feedback on poor commands, correlated with data
| Tester put commands in places where they did not belong; can we monitor that?
|}
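
''A minimal sketch of the statistical screen from the last row above: aggregate per-command completion and error rates from logged invocations, then flag outliers. The log schema and thresholds are invented for illustration.''

<pre>
// Sketch (TypeScript): flag problematic commands by completion/error rate.

interface CommandLog {
  verb: string;
  completed: boolean; // user actually executed the command
  errored: boolean;   // the command raised an error
}

interface CommandStats {
  verb: string;
  completionRate: number;
  errorRate: number;
}

// Aggregate per-verb completion and error rates from raw logs.
function commandStats(logs: CommandLog[]): CommandStats[] {
  const byVerb = new Map<string, { total: number; done: number; errs: number }>();
  for (const { verb, completed, errored } of logs) {
    const s = byVerb.get(verb) ?? { total: 0, done: 0, errs: 0 };
    s.total += 1;
    if (completed) s.done += 1;
    if (errored) s.errs += 1;
    byVerb.set(verb, s);
  }
  return [...byVerb.entries()].map(([verb, s]) => ({
    verb,
    completionRate: s.done / s.total,
    errorRate: s.errs / s.total,
  }));
}

// "Lack of completion" and "high error rates", per the table; the
// cutoffs here are arbitrary placeholders.
function problematic(stats: CommandStats[]): CommandStats[] {
  return stats.filter((s) => s.completionRate < 0.5 || s.errorRate > 0.25);
}

const sample: CommandLog[] = [
  { verb: "translate", completed: false, errored: true },
  { verb: "translate", completed: false, errored: false },
  { verb: "map", completed: true, errored: false },
];
console.log(problematic(commandStats(sample))); // flags "translate"
</pre>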

== Timeline ==

  • "Take the Ubiquity Tutorial, that sounds boring" 00:50
  • Reads everything but skips over hot-key.
  • Decides to try tutorial 2:15, immediately hates visual presentation.
  • Immediately skips past the hot key explanation
  • Tries typing in command and hitting enter without trying hotkey. 04:00
  • Mistakes the Awesome bar for Ub 4:20
  • Mistakes Google's "feeling lucky" function for Ub 05:00
  • 12:08 "My idea is that the interface should be so intuitive that one doesn't even have to try, it should just do what you think it should do."
  • Gives up on Tutorial after almost 10 minutes 13:00
  • Tries video 13:30
  • F*ng loves the demo 14:00
  • Randomly guesses commands 29:30