From smartwatches that don't fit women's wrists to racist chatbots, technology is awash with stories of disasters that could have been avoided. The common denominator is usually the lack of research and the – often unintentional – exclusion of diverse voices from the product design process.
When people in positions of privilege create solutions, they often unintentionally fail to account for their own explicit and implicit personal biases. Nancy calls this concept of not recognizing our own blind spots the "nobility complex." If teams are homogeneous, and marginalized groups in an organization aren't empowered to speak up, then the technology being created will be limited and self-serving by default.
The tech industry's obsession with launches poses another challenge. The main objective is to get something out of the door, often at the expense of in-depth research. [...]
Making research more representative also means delivering value beyond empathy. Although empathy is a popular buzzword among product teams, it should be seen as just the baseline. Trying to put yourself into the users' shoes is a great starting point, but not quite sufficient when you build for scale.
To avoid the nobility complex, Nancy recommends turning assumptions into questions. Instead of trying to validate your stakeholders or your own ideas, stay curious and ask yourself what you may not know about the experience. Additional context will change your perspective, and when you stop designing for Western conventions, products can become more scalable. Understanding global perspectives tends to create better, more inclusive products.
[...] Over the years I have seen and used a lot of different patterns to add additional descriptive content to SVGs. But it was unclear which of these options was the best to use for the most coverage of browsers and screen readers. There are articles that touch on the subject, but many are dated or do not cover all of the patterns available, so I decided to do my own high-level browser/screen reader testing. [...]
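For context, two of the patterns commonly recommended for describing SVGs look like this (the ids and text here are illustrative examples, not taken from the author's test suite):

```html
<!-- Pattern 1: <title> and <desc> elements, referenced via aria-labelledby -->
<svg role="img" aria-labelledby="svgTitle svgDesc" viewBox="0 0 100 100">
  <title id="svgTitle">Quarterly sales</title>
  <desc id="svgDesc">Bar chart showing sales rising each quarter.</desc>
  <!-- …shapes… -->
</svg>

<!-- Pattern 2: aria-label directly on the <svg> element -->
<svg role="img" aria-label="Quarterly sales" viewBox="0 0 100 100">
  <!-- …shapes… -->
</svg>
```

In both patterns, `role="img"` encourages screen readers to treat the SVG as a single image rather than traversing its child elements; which combination performs best across browser/screen-reader pairings is exactly what the author's testing set out to establish.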
When undertaking a multistep process – such as creating a user profile or checking out an online shopping basket – the steps in the process are usually listed above the content, with the active step indicated in some way (such as a color change or bold text). But there's often no structural way provided for a screen-reader user to know which step is the current one.

```html
<ol id="steps">
  <li>Contact details</li>
  <li aria-current="step">Payment details</li>
  <li>Authorize payment</li>
</ol>
```
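Because `aria-current` is an ordinary attribute, it can also drive the visual indication via an attribute selector, keeping the visual cue and the structural cue in sync. A minimal sketch, assuming the `#steps` markup above:

```css
/* Style the active step off the same attribute screen readers announce */
#steps [aria-current="step"] {
  font-weight: bold;
}
```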
There are a few very visual tests that just aren't feasible for me to tackle. It wouldn't be easy for someone who can't see the screen to, for example, test whether content is cropped or overlapped at different screen sizes, resolutions and orientations. Taking a screenshot and running it through the JAWS OCR function can provide some insight into whether content that should be on the screen is missing or incomplete, but I can't be precise enough to write a robust description of the problem and a recommendation for fixing it.
I also tend to avoid issues involving colors, fonts and text spacing, as well as images containing complex content such as graphs, charts and diagrams. Where possible, I don't test for keyboard focus either, because JAWS is very good at plugging gaps by, for example, adding elements to the tab order when they may have inadvertently been excluded by the developer.
[...] Working with a colleague at TPGi, we created a spreadsheet listing all the WCAG 2.1 levels A and AA success criteria, and we split them into issues that I can reliably test for and those where I can't provide reliable results (Word docx file). Between us, we worked out that I can test for around two thirds of the 2.1 success criteria. Naturally, in my ideal world, I would be able to carry out a complete audit, and I'll continue to push boundaries and come up with solutions. But even I have my limitations.