The BBC yesterday posted an article about blind people who regain rudimentary vision through a system that turns the input from a head-mounted camera into sound. The result has enabled a blind woman to distinguish between similar objects, roughly make out obstacles in her environment and detect whether the lights in a room are on or off.
These remarkable results are thanks to a system called vOICe (where the three middle letters stand for “Oh I See”), developed by Dr Peter Meijer, a senior scientist at Philips Research Laboratories in the Netherlands.
The method sounds simple enough. Currently, “brighter areas sound louder [and] height is indicated by pitch”, turning each camera pixel into a specific sound. A stream of such pixel sounds then makes up the whole image, at a total resolution of up to several thousand pixels. That is not much compared to even the most rudimentary digital cameras, which have a few hundred thousand pixels, but as you can see in the examples below, it is clearly enough to make out the content of the image.
Color is not represented in the vOICe soundscape, so the “soundscapes” are effectively grayscale images. A user can, however, activate a color identifier that speaks color names aloud.
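Just to make the mapping a bit more concrete, here is a minimal Python sketch of the idea as I understand it: scan a grayscale image column by column (I am assuming a left-to-right sweep here), give each row its own sine tone with higher rows at higher pitch, and let each pixel’s brightness set that tone’s loudness. This is of course not the actual vOICe algorithm, and the frequency range and timing are made-up values, but it illustrates the principle.

```python
# Toy illustration of the brightness-to-loudness, height-to-pitch idea.
# NOT the real vOICe algorithm -- just a rough sketch of the mapping
# described above, with assumed parameters (frequency range, sweep time).
import numpy as np
import wave

def image_to_soundscape(image, duration=1.0, sample_rate=22050,
                        f_low=500.0, f_high=5000.0):
    """image: 2D array of brightness values in [0, 1], row 0 = top of image."""
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    # Higher rows (smaller row index) get higher pitch.
    freqs = np.linspace(f_high, f_low, rows)
    out = []
    for c in range(cols):                      # sweep left to right
        column = np.zeros(samples_per_col)
        for r in range(rows):
            # Brightness of each pixel controls the loudness of its tone.
            column += image[r, c] * np.sin(2 * np.pi * freqs[r] * t)
        out.append(column)
    signal = np.concatenate(out)
    signal /= max(np.abs(signal).max(), 1e-9)  # normalise to [-1, 1]
    return (signal * 32767).astype(np.int16)

# Example: a bright diagonal line on a dark background.
img = np.zeros((16, 16))
np.fill_diagonal(img, 1.0)
pcm = image_to_soundscape(img)
with wave.open("soundscape.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(22050)
    w.writeframes(pcm.tobytes())
```

Writing the result out as a WAV file gives you something you can compare, at least roughly, to the example soundscapes below.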
Here are two examples of images turned into vOICe soundscapes. Even with an “untrained ear”, the sounds make perfect sense when compared to the images, although I would not claim I could make my way to the bathroom using this information alone. Try them:
[Two example images with their soundscapes: a 44K WAV soundscape and an 88K WAV soundscape. Images and sound courtesy of the Seeing With Sound website.]
On the project website, you can even try loading your own images and turning them into soundscapes. You MUST try some of the examples there; they are quite amazing.
What vOICe effectively does is use the user’s auditory input, instead of the damaged optical input, to get visual information to the brain. I wonder whether it is then processed in the same parts of the brain as it would have been had it been “normal” vision?
This method has some obvious advantages over retinal and brain implants in that it is non-intrusive. An article in Wired a couple of years ago discussed another method, in which the nerves in a user’s tongue were used to receive visual information. It is, however, not likely that the resolution of either of these methods will ever get close to the “real” thing. Bionic eyes (the implants) are much more likely to reach that level at some point, even though they certainly are not there yet. This in no way undermines the importance and brilliance of the vOICe system and other “optical nerve bypassing” methods.
Thanks for the tip, Magga.