As Artificial Intelligence advances and seeps into everyday technology, it is breaking down barriers for people with disabilities. From smart home technologies that make living environments safer and more accessible to features such as predictive text and speech-to-text transcription, AI can help create a future free of barriers for people with disabilities.
With a billion people worldwide living with disabilities, the need for inclusive technology keeps growing. AI can make a real difference for them, provided these systems are designed to be inclusive from the very beginning and take advantage of AI's ability to learn from its users in order to become even more helpful.
How AI Is Already Providing Greater Accessibility
While some Artificial Intelligence technology is still fairly new, we are already seeing how AI can create a future free of barriers for people with disabilities. Image recognition paired with text-to-speech, for example, can describe emojis and pictures seen on social media aloud, and smart glasses equipped with such technology could serve blind users. Likewise, as self-driving cars become more of a reality, they could be a major improvement for blind and mobility-impaired people who want to move around the city freely, as the AI installed in them helps navigate the streets, determine location and get passengers there safely.
While some of this still lies in the future, there are many ways in which AI is already improving accessibility. As voice-enabled technology becomes increasingly mainstream, people with sight or mobility limitations are taking advantage of it to better access the web and even navigate around their homes.
AI devices using voice commands, such as Amazon's Echo and Google Home, are easing the lives of people with mobility impairments who depend on them to control home technologies, such as thermostats, blinds and curtains, lighting and appliances, while voice search makes browsing the web easier than ever for those who are unable to tap their queries into a screen.
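To make the idea concrete, here is a minimal sketch of the intent-matching step behind voice-controlled home devices. The device names, command patterns and function names are hypothetical illustrations; real assistants such as Echo and Google Home rely on trained language models, not simple pattern matching.

```python
import re

# Hypothetical set of controllable home devices, as examples only.
DEVICES = {"lights", "thermostat", "blinds", "curtains"}

def parse_command(utterance: str):
    """Map a spoken command to a (device, action) pair, or None.

    A stand-in for the natural-language-understanding stage of a
    voice assistant: real systems use trained models, not regexes.
    """
    match = re.search(
        r"turn (?P<action>on|off) the (?P<device>\w+)", utterance.lower()
    )
    if match and match.group("device") in DEVICES:
        return match.group("device"), match.group("action")
    return None

print(parse_command("Turn on the lights"))   # ('lights', 'on')
print(parse_command("Turn off the blinds"))  # ('blinds', 'off')
```

Once a command is resolved to a device and an action, the assistant forwards it to the relevant smart-home integration, which is what lets a user with limited mobility adjust lighting or heating entirely by voice.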
And progress continues. For example, the Elderly Care pilot is being developed by Age UK and Accenture to run on the new Amazon Echo Show, an Alexa-powered assistant with a built-in screen. Once complete, it will use voice technology, on-screen prompts and cloud-based AI to help seniors access reading and learning materials, communicate with their loved ones, find local events and even answer the phone and the door.
As we mentioned earlier, thanks to AI, visually impaired people are able to interpret images as well as text. Facebook already provides automatic image-description tools that describe photos to screen-reader users, while Google's Cloud Vision API can understand the context of objects in photos.
Microsoft's Seeing AI app uses visually impaired people's mobile phone cameras to narrate the visual world for them, helping them identify currency, read handwriting, recognise colours and even identify people and their emotions, providing independence for the more than 200,000 users who have already downloaded it.
Additionally, thanks to AI-powered speech-to-text technology, real-time captioning is now possible: advanced automatic speech recognition converts spoken language into text, allowing students and other hearing-impaired people to receive information at the same time as everyone else during lectures.
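The final step of such a pipeline is simple enough to sketch: once the recogniser emits a transcript, it has to be broken into short lines that fit on a caption display. The recogniser itself is assumed here; only the line-wrapping stage is shown.

```python
import textwrap

def to_caption_lines(transcript: str, width: int = 32):
    """Wrap a recognised transcript into caption-sized lines.

    `width` is a hypothetical on-screen line length; real captioning
    systems also segment by timing and sentence boundaries.
    """
    return textwrap.wrap(transcript, width=width)

lines = to_caption_lines(
    "Artificial intelligence converts spoken language into text in real time"
)
for line in lines:
    print(line)
```

Streaming these lines to a display as the speaker talks is what lets a hearing-impaired student follow a lecture live rather than waiting for a transcript afterwards.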
Last year, Microsoft announced its AI for Accessibility program, a five-year, $25 million commitment to accelerate the development of accessible AI solutions. Comprising grants, technology investments and expertise, along with innovations integrated into Microsoft Cloud services, the program aims to help disabled users achieve more in three scenarios: employment, modern life and human connection.
As AI continues to advance, it will keep bringing tangible changes to everyone's lives, including those of people with disabilities.