Labs/Ubiquity/Usability/Usability Testing/Fall 08 1.2 Tests/Tester 08a
Latest revision as of 00:36, 12 May 2015
Tester 008
Embed video (http://www.viddler.com/explore/indolering/videos/11/) here.
Highlights
- Mistaking the awesome bar for Ub 04:20; specifically, the Google "feeling lucky" function 05:00
- Random guessing of commands 29:30
Preliminary Recommendations
This tester highlights a deficit of statistical UI testing: remote click-through tracking on a page cannot show when the user clicks on something expecting it to do something it doesn't. We can't very well log all of the user's keystrokes. Is there a way to monitor this behavior?
Ubiquity Core
- Merge Ub with the awesome bar
- Use data gathering to capture failed commands to increase intelligence of the thesaurus
- Consider inserting iframes (as opposed to JPEG screen captures), working with providers to support commands directly.
- Make Google a fallback
- Make help non-linear
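Recommendation 2 above could be prototyped as a thin wrapper around the command parser: whenever an input matches no known verb, record it for later statistical analysis, and surface the most frequent failures as thesaurus candidates. A minimal sketch, assuming a simplified verb-matching parser; all names here are hypothetical, not Ubiquity's actual API:

```typescript
// Hypothetical failed-command logger; names and shapes are illustrative,
// not Ubiquity's real parser interface.
type ParseResult = { matched: boolean; verb?: string };

const failedInputs: Map<string, number> = new Map();

// Simplified stand-in parser: match the first word against known verbs.
function parse(input: string, knownVerbs: string[]): ParseResult {
  const verb = input.trim().split(/\s+/)[0].toLowerCase();
  return knownVerbs.includes(verb)
    ? { matched: true, verb }
    : { matched: false };
}

// Wrap the parser: count every input that matched no known command.
function parseAndLog(input: string, knownVerbs: string[]): ParseResult {
  const result = parse(input, knownVerbs);
  if (!result.matched) {
    const key = input.trim().toLowerCase();
    failedInputs.set(key, (failedInputs.get(key) ?? 0) + 1);
  }
  return result;
}

// The most frequent failures are the best candidates for new
// thesaurus entries or command aliases.
function topFailures(n: number): [string, number][] {
  return [...failedInputs.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n);
}
```

In a deployed extension the counts would be batched and reported to a server rather than kept in memory, but the aggregation idea is the same.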
Translate Command
Raskin's First Law of Interface Design: "A computer shall not harm your work or, through inaction, allow your work to come to harm." 22:15
- I believe only a single user has guessed to reload the page, after three previous failed attempts.
Metrics
Research Questions | Performance Benchmarks
How do users try and access Ubiquity? |
How do they learn the command syntax? |
Do users value Ubiquity? |
How would we identify problematic commands via statistical analysis? | Tester put in commands elsewhere that they did not belong; can we monitor that?
Timeline
- "Take the Ubiquity Tutorial, that sounds boring" 00:50
- Reads everything but skips over the hot-key
- Decides to try the tutorial 02:15; immediately hates the visual presentation
- Immediately skips past the hot-key explanation
- Tries typing in a command and hitting enter without trying the hot-key 04:00
- Mistakes the Awesome bar for Ub 04:20
- Mistakes Google's "feeling lucky" function for Ub 05:00
- 12:08 "My idea is that the interface should be so intuitive that one doesn't even have to try, it should just do what you think it should do."
- Gives up on Tutorial after almost 10 minutes 13:00
- Tries video 13:30
- F*ng loves the demo 14:00
- Randomly guesses commands 29:30