Out-Law / Your Daily Need-To-Know


EMMA prepared to harmonise web interfaces


The ability to communicate over the web by voice, pen, keyboard or all three at the same time came a step closer on Monday when the World Wide Web Consortium, known as W3C, published its first working draft of EMMA, or the Extensible MultiModal Annotation language.

EMMA forms part of a set of standards being developed by a W3C working group whose ultimate goal, according to the W3C web site, is to create a "new class of mobile devices that support multiple modes of interaction" with the internet.

At present there are many different 'modes of interaction' across many different devices, but few, if any, allow the user to communicate over the internet or interact with web sites using every available type of interface.

The W3C is therefore seeking to create a language and a framework that will allow data entered by voice, pen, keyboard, joystick or mouse to be interpreted into one globally recognisable format. The hope is that users will soon be able to speak a request into their mobile phone, see the result appear on a PDA, alter it with an electronic pen, and send it off with a touch of the keypad.

According to the W3C, EMMA comprises the "data exchange format for the interface between input processors and interaction management systems", and is based on the better-known Extensible Markup Language (XML). Other standards currently being developed will deal with the general framework of the system and with recognising input from an electronic pen or stylus.
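To give a sense of what such an annotation might look like, the sketch below shows a hypothetical EMMA document describing one interpretation of a spoken request. The element names, attributes and namespace follow conventions from the W3C's EMMA drafts, but the values and the `destination` element are illustrative assumptions, not taken from the working draft itself:

```xml
<!-- Illustrative only: a voice input processor annotating its best guess
     at what the user said, with a confidence score attached -->
<emma:emma version="1.0" xmlns:emma="http://www.w3.org/2003/04/emma">
  <emma:interpretation id="interp1" emma:mode="voice" emma:confidence="0.75">
    <destination>Boston</destination>
  </emma:interpretation>
</emma:emma>
```

An interaction manager receiving this document need not know whether the input came from speech, a stylus or a keyboard; it simply reads the interpretation and its annotations, which is the "data exchange" role the W3C describes.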

The W3C is interested in hearing comments on the working draft. They should be sent to the discussion board at [email protected].
