My fourth artefact looked deeper into navigation and the effect of voice control. Speech recognition provides an additional input method that can help improve accessibility. My original plan was to test the navigation on users with a prototype; instead, I built voice-controlled navigation as the prototype.
The command words were taken directly from the names of the navigation links, so the speech recognition simply listened for each link's own text; this made the links easy to test later on.
As I have limited experience with the Android SDK, I built the application using just HTML5, CSS3 and JavaScript. The speech API used Flash to access the microphone and capture the voice audio; the recognised text was then passed into the website through JavaScript, which issued commands to the relevant HTML elements (the navigation links), giving a response speed close to that of a native application.
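A minimal sketch of how this can work, assuming the speech API hands each recognised phrase to a JavaScript callback as a string (the callback name and the nav markup here are illustrative assumptions, not the prototype's exact code):

function onSpeechResult(phrase) {
  // Hypothetical callback invoked by the Flash-based speech API
  // with the phrase it has just recognised.
  var spoken = phrase.trim().toLowerCase();
  var links = document.querySelectorAll('nav a');

  for (var i = 0; i < links.length; i++) {
    // The command word is simply the link's own text,
    // so saying "about" activates the "About" link, and so on.
    if (links[i].textContent.trim().toLowerCase() === spoken) {
      links[i].click();
      return;
    }
  }
  // Unrecognised commands are ignored; the user can repeat them.
}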
I hosted the website on a local WAMP server so that testing would not be slowed by the network, and tested it in Firefox. I planned several actions to perform, such as testing the links, each driven by specific 'words' defined in the code for navigating around the website. The test sessions were recorded as several videos using CamStudio.
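The post does not list the exact command words, but actions beyond plain link selection (such as the scrolling tested below) could be driven by a small command table like this sketch; the words and handlers here are assumptions for illustration only:

// Illustrative command table for actions that are not navigation links.
// The actual words and behaviour in the prototype may have differed.
var commands = {
  'scroll down':  function () { window.scrollBy(0, 200); },
  'scroll up':    function () { window.scrollBy(0, -200); },
  'top of page':  function () { window.scrollTo(0, 0); }
};

function runCommand(phrase) {
  var action = commands[phrase.trim().toLowerCase()];
  if (action) { action(); return true; }
  return false; // otherwise fall back to matching the link names
}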
Analysing the videos showed that commands sometimes had to be repeated before the navigation responded. Other problems were that the scroll feature didn't work and zooming in didn't increase the content size (e.g. the text size). Overall, navigating by voice was successful, but I am not going to continue using speech recognition as an additional navigation element in my artefact, because there is currently too little data available to support developing the application around it.