* Automatic retrain
** We should also build scripts to automatically adapt the acoustic model per user with their own voice, to constantly auto-improve the service both for that user individually and for the service as a whole.
* Privacy
** Some have raised privacy concerns about online services with me. In the ideal scenario, online recognition is only required for LVCSR, while FSG decoding can be handled offline if the system is architected correctly. Letting users choose whether or not to let us use their voice to improve the models is how other OSes handle this issue.
* Offline and online
** The same speech server can be designed to run both online and offline, leaving transmission handling to the middleware that manages the connections with the front end.
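The offline/online split above could be sketched as a small dispatcher that routes grammar-constrained (FSG) requests to an on-device decoder and open-vocabulary (LVCSR) requests to the online server. This is only an illustration of the routing idea; the recognizer callables below are hypothetical stand-ins, not a real engine API.

```python
# Minimal sketch, assuming two engine callables are available:
#   offline_fsg(audio, grammar) -> transcript  (on-device, grammar-constrained)
#   online_lvcsr(audio)         -> transcript  (server-side, open vocabulary)
# Both names are illustrative placeholders, not part of any real API.

def make_dispatcher(offline_fsg, online_lvcsr):
    """Return a recognize(audio, grammar=None) function that decodes
    locally when a finite-state grammar is supplied, and forwards to
    the online service otherwise."""
    def recognize(audio, grammar=None):
        if grammar is not None:
            # FSG decoding can run entirely offline, which also
            # addresses the privacy concern for command-style input.
            return offline_fsg(audio, grammar)
        # Open-vocabulary dictation needs the server-side models.
        return online_lvcsr(audio)
    return recognize

# Stub engines standing in for a real decoder and a real network call.
recognize = make_dispatcher(
    offline_fsg=lambda audio, grammar: ("offline", grammar),
    online_lvcsr=lambda audio: ("online", None),
)

print(recognize(b"...", grammar="yes | no"))  # routed to the offline engine
print(recognize(b"..."))                      # routed to the online engine
```

The middleware mentioned above would sit around this dispatcher, deciding per request whether a connection to the front end carries a grammar or a dictation job.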
== Web Speech API ==