Monday 31 December 2012

XVoice: speech control of Linux desktop applications


XVoice

An open source speech control project up for adoption

The Early Days

Around the end of the 1990s, IBM released a Linux version of their ViaVoice speech recognition engine. It was always a beta product and never had the full feature set of the original Windows program, but at the time it was the only good recognition engine for Linux, so I started playing around with it.

Soon I discovered an open source project, XVoice. XVoice was an application that used the ViaVoice engine for speech-to-text and then used the resulting text to control the Linux desktop. It was a hack that used a bunch of programs in ways they were never designed for, and it achieved something rather exciting: you could speak to Linux and it would do your bidding.

One of the great features of the ViaVoice engine was that it allowed you to define a grammar, and the engine would then match whatever speech was input against that grammar. This meant that, without training, recognition rates were near-perfect for a domain-specific grammar (nicely defined in BNF).
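To make that concrete, a domain-specific grammar for, say, a simple music player might look something like this (an illustrative sketch only, not ViaVoice's exact grammar syntax):

    <command>   ::= <transport> | volume <direction>
    <transport> ::= play | pause | next track | previous track
    <direction> ::= up | down

With the engine constrained to these few phrases, there is very little for it to mishear, which is why recognition could be near-perfect without any per-user training.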

Progress and Success

After a few months of regular development in early 2000, XVoice had proper support for user-defined command grammars. These grammars mapped spoken commands to keystrokes, and you could have multiple grammars, one for each application. XVoice had some (hacky) heuristics: you could specify a regex to match against the window title, and the matching grammar file would be loaded automatically. You could control the mouse too; XVoice split the screen into ever-smaller 3x3 grids that you navigated until the pointer was where you wanted it. The grammars were hierarchical, so you could include the grammar for spelling out numbers in your Emacs control grammar, and they supported pattern substitution, so the command sent to an application could include some of the words you said.
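For the curious, the 3x3 mouse grid is a simple idea to implement. Here's a minimal sketch of the refinement step using Xlib (my own reconstruction for illustration, not the original XVoice code):

    #include <X11/Xlib.h>

    struct Rect { int x, y, w, h; };

    // One refinement step: the user speaks a cell number 1..9
    // (numbered left-to-right, top-to-bottom), and the current
    // rectangle shrinks to that cell.
    Rect refine(Rect r, int cell) {
        int row = (cell - 1) / 3, col = (cell - 1) % 3;
        return Rect{r.x + col * r.w / 3, r.y + row * r.h / 3, r.w / 3, r.h / 3};
    }

    // Warp the pointer to the centre of the current rectangle.
    void warp_to_centre(Display* dpy, Rect r) {
        XWarpPointer(dpy, None, DefaultRootWindow(dpy), 0, 0, 0, 0,
                     r.x + r.w / 2, r.y + r.h / 2);
        XFlush(dpy);
    }

Starting from the full screen, each spoken cell shrinks the target area by a factor of nine, so a handful of steps is enough to place the pointer precisely anywhere on screen.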

There were some quite motivated users who contributed a lot to the development. One was a programmer who used Vi but had severe RSI that made it difficult for him to work. He defined a comprehensive Vi grammar that allowed him to program and, interestingly, claimed he was more efficient because he was using higher-level Vi commands than he normally would. Some Emacs users had huge grammars that let them read news, send email, program in Lisp, and who knows what else.

As an aside, my experience working on XVoice left me in no doubt that for regular people, voice control of a computer is a fun trick; the only people who will use it on an ongoing basis are those who have no other choice. Talking for several hours a day is physically difficult, and even with the clever grammars some people designed, it's not something you'd choose to do unless you had to. We had at least a few quadriplegic users who were starting to use the system with some success; for them, XVoice was the only way they could operate a Unix machine. This realization made the future of the project clearer: we'd focus on features that helped people who couldn't type at all, or could do so only with great difficulty.

As a programming project, working on XVoice was just great, and I learned a lot from the other programmers, who were much more experienced and capable than I was. By the end (version 0.9.6, I think...) it was a really cool program, used by people who really needed it to work. We encountered and solved some of the problems of voice control of graphical user interfaces. The command grammar was a pretty elegant and extensible system, and I recall that the sense at the time was that most of the interesting work ahead lay in defining bigger and better libraries of grammars.

An Abrupt End

Unfortunately, IBM didn't seem very interested in ViaVoice on Linux. We had some contact with the developers, who were quite helpful, but the "official" IBM people would never tell anyone what the plans for continuing Linux support were. The only contact we had with them was when they demanded that we make it clear on the XVoice-related websites that we didn't distribute ViaVoice with XVoice and that people had to buy it separately (which we had always done anyway).

Then one day IBM discontinued ViaVoice for Linux. It just disappeared from their website. At the time, CMU Sphinx was the only plausible candidate for an open-source speech-to-text engine that we could use instead of ViaVoice, but it wasn't very mature and had some issues that would have been tough to work with. The main coders on the project had personal or work issues that meant they couldn't work on XVoice for a while, and so the project lost momentum. 

Wistful Thinking

Every once in a while, when I read an article about speech-to-text, particularly about command-and-control systems, I wish that we'd had the time to rework XVoice to work against an open source engine. It's disappointing that there's still no clear solution (open source or otherwise) for people who can only interact using speech. Many big tech companies pay lip service to accessibility, but in this case the big boys didn't do anyone any favors.

The Future

I think it's important to distinguish between projects that focus on building better general recognition accuracy for dictation and accessibility-oriented command-and-control systems. Complete control with speech is a difficult problem; I don't think you solve it as part of a larger generic command-and-control platform. You have to focus on accessibility, talk to users who have real accessibility issues, and get them to work with you to overcome them. If you are working on command and control, it's worth remembering that these people are the only ones who will still be using your software when the novelty wears off and their throat is sore. In the final few months, that's where XVoice was focusing; there were a bunch of awkward problems we'd have needed to fix, but it was pretty exciting.

The key feature that XVoice or any other command-and-control system relies on is the ability to feed context-specific grammars to the recognition engine on the fly. The underlying accuracy of the engine isn't very critical if the grammar is sufficiently constrained; all modern engines are likely good enough. But as far as I know, the current HTML5 implementations of speech input don't yet support setting grammars. CMU Sphinx appears to, but it's not clear how well it works in practice; their configuration files seem quite complex.
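For illustration, here's roughly what feeding a grammar to pocketsphinx on the fly might look like with its C API (a sketch only, assuming a pocketsphinx version that provides ps_set_jsgf_string and ps_set_search; the model paths and the "editor" grammar are placeholders of mine):

    #include <pocketsphinx.h>

    // A tiny JSGF grammar for an "editor" context.
    static const char* kEditorGrammar =
        "#JSGF V1.0;\n"
        "grammar editor;\n"
        "public <command> = save file | close buffer | go to line;\n";

    int main() {
        // Placeholder paths: point these at a real acoustic model
        // and pronunciation dictionary.
        cmd_ln_t* config = cmd_ln_init(NULL, ps_args(), TRUE,
                                       "-hmm", "/path/to/acoustic-model",
                                       "-dict", "/path/to/dictionary.dict",
                                       NULL);
        ps_decoder_t* ps = ps_init(config);

        // Register the grammar and make it the active search: the
        // engine will now only match these phrases.
        ps_set_jsgf_string(ps, "editor", kEditorGrammar);
        ps_set_search(ps, "editor");

        // ... feed audio via ps_start_utt / ps_process_raw /
        // ps_end_utt, and read the result with ps_get_hyp ...

        ps_free(ps);
        cmd_ln_free_r(config);
        return 0;
    }

Switching contexts when the active window changes would then be just another ps_set_search call, which maps naturally onto the per-application grammar switching XVoice did.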

The XVoice code is all GPL; it's on SourceForge and now on GitHub, so please feel free to go nuts. Before today it had been about 10 years since I last looked at it, but the docs are actually pretty good and the code isn't as bad to read as I expected. It's mostly pre-RAII C++, so it would need a cleanup and a dose of smart pointers to bring it up to modern standards. Even if the project isn't resurrected, the ideas around how command grammars are structured and used might be useful to another project, or the code for generating X events could be reused. There's a sample set of grammars in the modules subdirectory that makes for interesting reading - there's even one for "netscape", how quaint.
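On the X event generation point: sending fake keystrokes under X is typically done with the XTest extension. A minimal sketch of that standard approach (my own illustration; the XVoice code itself may do it differently):

    #include <X11/Xlib.h>
    #include <X11/extensions/XTest.h>
    #include <X11/keysym.h>

    // Press and release the key for a given keysym in whatever
    // window currently has focus.
    void send_key(Display* dpy, KeySym sym) {
        KeyCode code = XKeysymToKeycode(dpy, sym);
        if (code == 0) return;  // keysym not mapped on this keyboard
        XTestFakeKeyEvent(dpy, code, True, CurrentTime);   // press
        XTestFakeKeyEvent(dpy, code, False, CurrentTime);  // release
        XFlush(dpy);
    }

    int main() {
        Display* dpy = XOpenDisplay(nullptr);
        if (!dpy) return 1;
        send_key(dpy, XK_a);  // types an 'a' into the focused window
        XCloseDisplay(dpy);
        return 0;
    }

Compile with g++ fakekey.cpp -lX11 -lXtst. Modifier keys and whole command sequences are just more of the same.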


Comments:

  1. Have you heard about Simon? I have no idea how well it works; I've just recently seen it on HN:
    http://simon-listens.org/index.php?L=1&id=122

  2. Interestingly, Canonical just announced the Ubuntu Phone, which includes speech control to access menu-bar items; see 12:30 in the video here:
    http://www.youtube.com/watch?v=cpWHJDLsqTU#!
    Might be interesting for the XVoice people.
