This story is part of our CES 2021 coverage, where our editors will bring you the latest news and the hottest gadgets of the entirely virtual CES 2021.
The products at CES that generate the most buzz are typically TVs, computers and phones, but there’s also plenty of important and groundbreaking tech geared toward improving the everyday lives of users that doesn’t receive as much attention. This includes a range of products primarily focused on accessibility, from hearing aids to braille keyboards to apps that guide visually impaired users.
The pandemic has highlighted just how critical many of these products are. Tech has played a crucial role in keeping people connected for tasks like remote learning, work and hangouts, but the needs of people with disabilities, who make up 15% of the global population, are often overlooked.
Thankfully, it’s an issue that’s slowly getting more attention as companies not only realize the importance of inclusivity, but also see the financial value of making their products accessible to more people. A 2016 report by Nielsen found that consumers with disabilities, along with their families, friends and associates, make up a trillion-dollar market segment. Additionally, a 2018 Accenture report found that if companies engaged in greater disability inclusion, they’d gain access to a talent pool of more than 10.7 million people. More-diverse hiring would let companies do a better job of making their products accessible.
“Inclusive design is now a topic that’s talked about in the mainstream,” said Greg Stilson, senior director of global innovation at American Printing House (APH), which makes a hybrid QWERTY and braille keyboard. “Even five years ago, inclusive design was not top of mind for UX designers,” he added, referring to user experience design.
Here are some of the accessibility and assistive-technology offerings showcased at this year’s all-digital CES that aim to level the playing field.
APH’s Mantis Q40 is a Bluetooth QWERTY keyboard that includes a refreshable braille display. This frees users who are blind or visually impaired from having to choose between a traditional keyboard or braille device. Instead, as they type, the braille display at the bottom presents written information to complement a screen reader, which speaks descriptions aloud to users.
The device can connect to up to five different gadgets at once via Bluetooth, and includes one USB connection. It works with Mac, PC and iOS devices, with Android and Chromebook support coming soon. It sells for a hefty $2,495.
The company also sells a smaller, lower-cost device, the Chameleon 20, at $1,595, which features traditional braille keyboard entry rather than QWERTY. It offers the same functionality and connectivity with external devices as the Mantis Q40.
A key component of these tools is that they build on the accessibility initiatives that more big tech companies are increasingly adopting, Stilson said. He noted that though it’s exciting to see companies like Samsung, Apple, Google and Microsoft place a stronger emphasis on inclusive design, “accessibility doesn’t always equal efficiency. You can make a computer talk, but are there interaction tools that make it an efficient experience?”
Products like the Mantis Q40 and Chameleon 20, he says, add that layer of efficiency, while letting users keep braille at their fingertips.
The high price tag on the keyboards is due to the cost of producing specialized devices with refreshable braille technology, Stilson said. But APH is developing a Dynamic Tactile Device that would create multiple lines of braille, so a blind student, for instance, could have immediate access to an image, diagram or shape that’s being discussed in class. APH hopes to be able to create lower-cost refreshable braille tech with the device.
Hearing devices have started commanding more attention at CES in recent years, especially as they’ve been getting a boost from artificial intelligence. There’s a serious need for these tools, given nearly 500 million people globally have disabling hearing loss, according to the World Health Organization.
Hearing aid manufacturer Oticon on Tuesday launched Oticon More, which is designed to help users with hearing loss better understand speech and pick up on more of the sounds they need to hear.
Embedded on the hearing aid’s chip is a deep neural network, something that uses mathematical modeling to process data in a complex way. The DNN is trained on 12 million real-life sounds, the company says. When a sound goes through the hearing aid, it’s compared to results from the DNN’s learning phase. This lets the device offer a more natural and balanced representation of sounds, according to Oticon, and it’s able to process speech in a noisy environment more like a human brain does.
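That compare-and-rebalance step can be sketched in code. This is a deliberately simplified illustration of the general idea, not Oticon's actual pipeline: the class names, gain values and "classifier" below are all invented for the example.

```python
# Toy sketch: a trained network associates sound classes with gain
# profiles during its learning phase, then rebalances incoming audio.
# All classes and gains here are illustrative assumptions.

LEARNED_PROFILES = {
    "speech":  1.4,   # boost speech the wearer needs to hear
    "traffic": 0.6,   # attenuate steady background noise
    "music":   1.0,   # pass through unchanged
}

def classify(scores):
    """Stand-in for the DNN forward pass: pick the highest-scoring class."""
    return max(scores, key=scores.get)

def rebalance(samples, scores):
    """Apply the gain the learning phase associated with this sound class."""
    label = classify(scores)
    gain = LEARNED_PROFILES[label]
    return [s * gain for s in samples], label

samples = [0.1, -0.2, 0.3]
out, label = rebalance(samples, {"speech": 0.9, "traffic": 0.4, "music": 0.2})
# A speech-dominated scene is classified as "speech" and boosted.
```

The point of the sketch is the division of labor: the expensive learning happens offline on millions of examples, while the device only runs the cheap compare-and-apply step in real time.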
Oticon More supports streaming from the iPhone and from some Android devices. It comes in eight colors, and costs are set by individual hearing care professionals.
Other smart hearing devices on the virtual CES floor this year include the Kite personal sound amplifier, which features noise reduction technology and resembles a pair of earbuds attached to a neckband. It has three listening modes: focus mode, which homes in on the person in front of you and is designed for one-on-one conversations; environment mode, which provides more overall awareness while also scaling back unwanted noise; and group mode for social settings, which enhances speech 180 degrees in front of the wearer and reduces background noise.
Signia also offers a range of smart hearing aids, as well as a Face Mask Mode in its app that helps wearers better understand speech through face masks. Users can tap a button in the app, which then tells hearing aids to focus on a person’s speech signals, making words sound cleaner and reducing background noise. After Face Mask Mode is deactivated, the hearing aids go back to relaying surrounding noises in a natural-sounding balance. The app is available on iOS and Android.
Hearing aid wearers looking to better separate speech from surrounding noise may find HeardThat useful. The smartphone app, which launched late last year and is available on iOS and Android, uses machine learning to accomplish that goal.
To create the app, neural networks were trained using thousands of hours of recorded speech to distinguish useful speech from other noise. Whereas other speech-assistive devices tend to either amplify or reduce all sounds, including what’s useful, HeardThat separates and discards noise, the company says. This makes it easier for users to understand speech.
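The separate-and-discard approach can be illustrated with a much simpler technique than the app's proprietary neural network: basic spectral gating, where anything below an estimated noise floor is subtracted out rather than amplified along with everything else. The frame values below are invented for the example.

```python
# Illustrative sketch of "separate and discard" noise handling,
# using simple spectral gating rather than HeardThat's actual model.

def estimate_noise(frames):
    """Estimate the noise level per frequency bin as the minimum over frames.
    Quiet moments reveal how much energy is noise alone."""
    return [min(bin_vals) for bin_vals in zip(*frames)]

def suppress_noise(frames, noise):
    """Keep only the energy that rises above the noise estimate.
    Amplifying everything would raise the noise too; subtraction discards it."""
    return [[max(v - n, 0.0) for v, n in zip(frame, noise)]
            for frame in frames]

frames = [
    [0.2, 0.2, 0.2],          # noise-only frame
    [0.2, 1.0, 0.2],          # speech energy in the middle bin
]
noise = estimate_noise(frames)         # [0.2, 0.2, 0.2]
clean = suppress_noise(frames, noise)  # speech bin keeps 0.8; noise bins go to 0
```

A learned model does this far better than a fixed threshold, because it can tell speech from noise even when both are loud at once, which is exactly the situation a minimum-based noise floor gets wrong.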
The app isn’t designed to replace hearing aids or serve as an alternative to them, says Bruce Sharpe, CEO of Singular Hearing, which makes HeardThat. Rather, it’s meant to be an accessory to hearing aids.
To use the app, connect your hearing device or headphones to your phone and place the phone in front of you, pointed toward the person you’re talking with.
HeardThat is free, but the company plans to eventually roll out a subscription service.
Smartphones also have the capacity to help users who are blind or visually impaired navigate their environment. The Aware app, from Sensible Innovations, offers turn-by-turn descriptive navigation for users, who can place their phone in their pocket and listen as the app announces places they pass.
Users can tell the app where they want to go, and it’ll tell them when they’ve arrived at their destination. Aware also provides an audio description of locations, such as a store’s layout.
Aware is available on iOS and will launch soon for Android.
Liopa, a company that’s developed AI-based lip-reading technology, created an app called Sravi that’s designed to recognize specific phrases by analyzing lip movements. That can be helpful for people with speech difficulties, or patients in critical care with ailments that render them incapable of speaking.
The app is in trials within the UK’s National Health Service, and is slated to launch commercially around early spring.
Sravi has been helpful in intensive care units within the NHS, given the flood of patients who are on ventilators, says Liopa CEO Liam McQuillan. ICU clinicians tend to use tracheostomies to wean patients off of ventilators, which prevents the patients from talking. Those patients can benefit from an app like Sravi, McQuillan says, and the company has seen a spike in demand for its technology.
Clients use the app by downloading it to their phone or tablet and then holding the device up toward a patient. Sravi captures video of the patient speaking, and a deep neural network maps lip movements to figure out what someone is attempting to say. That information can be sent back to the health care provider’s phone or tablet in textual form or as a synthetic voice.
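Because Sravi recognizes specific phrases rather than open-ended speech, the final step of such a pipeline can be pictured as matching an observed lip-movement sequence against a small trained phrase set. The phrase list, feature encoding and distance measure below are all invented for illustration; the real system uses a deep neural network over video.

```python
# Hypothetical closed-vocabulary matching step for a lip-reading app.
# Each phrase is represented here by a made-up movement-feature sequence.

TRAINED_PHRASES = {
    "I'm in pain":    [2, 5, 1, 4],
    "I'm thirsty":    [3, 1, 4, 2],
    "Call my family": [5, 2, 2, 1],
}

def distance(a, b):
    """Total element-wise difference between two movement sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def recognize(observed):
    """Return the trained phrase whose movement sequence is closest."""
    return min(TRAINED_PHRASES, key=lambda p: distance(TRAINED_PHRASES[p], observed))

print(recognize([2, 5, 1, 5]))  # closest to "I'm in pain"
```

Restricting recognition to a closed set of clinically useful phrases is what makes the problem tractable for nonverbal ICU patients: the system only has to decide *which* trained phrase was attempted, not transcribe arbitrary speech.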
The range of accessible products showcased at CES highlights a growing awareness of the need for inclusive tech design and product offerings, which Stilson doesn’t expect to see slow down in the coming years.
“In fact,” he says, “I see that amping up more and more.”
The information contained in this article is for educational and informational purposes only and is not intended as health or medical advice. Always consult a physician or other qualified health provider regarding any questions you may have about a medical condition or health objectives.