Published by Ben Allen
Let's say your Agile software team(s) have started to incorporate accessibility requirements; in other words, you want to be WCAG 2.0 compliant. Where do you start? How do you break down the problem? How do you train your people?
Understanding the types of tools available can be a useful lens to look through when considering this problem. While there are many accessibility testing tools, in my humble opinion, there are only 3 types of testing tool. Let's explore each type, the options within it, and the consequences for accessibility training.
The tools discussed here have a developer and Quality Assurance (QA) focus, and they target web accessibility rather than native mobile or document testing. Design and requirements-writing tools are also out of scope.
Agile teams love automated testing. It helps you move fast and not break things. Parts of accessibility testing can be completed in a purely automated fashion, i.e. you point the tool at a web page and get concrete results back.
For WCAG 2.0, depending on the content within the page under test, you can cover 20-30% of WCAG requirements with automation. Every accessibility tester I know will start with an automated test.
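To make this concrete, here's a minimal sketch of an automated scan using axe-core, the open-source engine behind Deque's axe tools. It assumes the axe-core script is already loaded on the page under test:

```js
// Run the WCAG 2.0 A/AA rule set against the whole document and log each
// violation together with the elements it affects.
axe.run(document, { runOnly: ['wcag2a', 'wcag2aa'] }).then((results) => {
  results.violations.forEach((violation) => {
    console.log(`${violation.id} (${violation.impact}): ${violation.help}`);
    violation.nodes.forEach((node) => {
      console.log('  affected element:', node.target.join(' '));
    });
  });
});
```

Point it at a page and you get concrete, reproducible results back, exactly the kind of test that fits an automated pipeline.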
Accessibility automation, assuming you have zero false positives, is extremely desirable because:
Hopefully it's clear that accessibility automation is very much like other forms of software automation. Assuming you get reliable results, you should want to automate all the things, fix everything that automation discovers, and move on to harder accessibility problems.
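One way to act on that, as a sketch, is to wire an accessibility scan into the test suite your team already runs on every build. This example assumes a Jest setup with jsdom and the jest-axe package:

```js
const { axe, toHaveNoViolations } = require('jest-axe');

expect.extend(toHaveNoViolations);

test('rendered markup has no detectable accessibility violations', async () => {
  // In a real suite this markup would come from rendering one of your components.
  document.body.innerHTML = '<main><img src="logo.png" alt="Acme Corp"></main>';

  const results = await axe(document.body);
  expect(results).toHaveNoViolations();
});
```

A failing build then surfaces the automatable issues early, leaving humans free for the harder problems.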
Tools that incorporate accessibility automation include:
Great tools will:
So automation is great, but it can't do everything. When you discover the limits of accessibility automation, you might be tempted to pick up different types of assistive technology and do different types of "experience testing". If you've been evaluating websites for accessibility for a long time, this might be the preferred option, but if you or your team are just starting out, assistive technology will seem scary and intimidating. Can we do better?
Manual testing with a tool assist ("tool assist" for short) is a critical part of testing. I would define these tools as those that can analyse HTML/CSS/JavaScript code and highlight the interesting parts of the web page for manual review.
The classic example of this might be something like reviewing alternative text ("alt text") of images. A purely automated test can tell you whether alt text is present or not, but it can't tell you whether that alt text is appropriate in the context of the page. This is where a "tool assist" can help! A tool can analyse the page, find all the images, calculate the accessible name, and present them to you in a nice list. You can then evaluate all the images by comparing the list in the tool with what you see on the page.
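As a heavily simplified sketch of the idea, the snippet below lists every image on a page for human review. A real tool computes the full accessible name (including aria-label and aria-labelledby); this version only reads the alt attribute:

```js
// Run in the browser console: list each image with its alt text so a human
// can judge whether the text is appropriate in context.
document.querySelectorAll('img').forEach((img) => {
  const alt = img.hasAttribute('alt')
    ? `alt="${img.getAttribute('alt')}"` // empty alt marks the image as decorative
    : '(no alt attribute)';
  console.log(img.currentSrc || img.src, '→', alt);
});
```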
Compare this technique with other methods:
Put another way, in addition to learning the WCAG 2.0 requirements, your team would need more training so that:
I'd argue that, for most teams, this kind of training is unnecessarily complex and expensive. "Tool assist" testing is very desirable because:
Great tools will do all the things that "great" automation tools do (see above), plus:
If we've completed automated testing and "tool assist" testing, then we've managed to test a lot of WCAG 2.0, so what's left? Now it's time to dive into manual testing. It's important to realise that not all manual tests are "hard". It's easy to equate manual testing with "screen reader testing", but that's not accurate. There are plenty of manual tests you can do with just visual checks.
Manual testing has its own spectrum of difficulty. Some tests require a visual check, some require a tool and a little training, and some require a whole new way of interacting with a web page. Let's review some examples (not an exhaustive list, and level of difficulty assumes you are sighted):
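As one illustration from the middle of that spectrum, checking keyboard focus order needs only the browser and a tiny script. A minimal sketch, run from the browser console while you Tab through the page:

```js
// Log each element as it receives keyboard focus, so you can manually verify
// that the focus order matches the visual reading order of the page.
document.addEventListener('focusin', (event) => {
  const el = event.target;
  const label = (el.getAttribute('aria-label') || el.textContent || '').trim().slice(0, 40);
  console.log(el.tagName.toLowerCase(), el.id ? `#${el.id}` : '', label);
});
```

The judgement (does this order make sense?) is still yours; the script just removes the bookkeeping.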
Manual testing is hard, relative to automated and "tool assist" testing, because you're asking your team to interpret WCAG 2.0, come up with a testing methodology, pick a tool, report results, and offer solutions. At every decision point, you have room for inconsistency. If you're trying to take accessibility practices to more than a handful of teams, inconsistency is your enemy because it will erode confidence in your program.
How do you battle inconsistency? Let's take it point by point:
If you have a small budget, and cannot afford an off-the-shelf product, consider building your own documentation with a focus on these problems.
If you want a jump start on "interpretation", "testing methodology", and "tools" then using Trusted Tester could be a good way to go. Accessibility Insights for Web, which is heavily influenced by Trusted Tester, also provides a good start for these problems.
"Live instructor led training" is also a good solution to inconsistency.
When it comes to choosing assistive technology, I don't think it's about features. I think it's more about what your users make use of, and what you can afford. In other words, do you have the cash for a JAWS license, and do you have a smartphone you can use for accessibility testing? My recommendation, especially if your budget is small: use NVDA, VoiceOver for iOS, and the Colour Contrast Analyser.
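Colour contrast is a good example of a check where the underlying rule is pure arithmetic. For reference, here's the WCAG 2.0 contrast-ratio formula as a sketch; the Colour Contrast Analyser implements the same maths:

```js
// Relative luminance of an sRGB colour (channel values 0-255), per WCAG 2.0.
function luminance([r, g, b]) {
  const [rs, gs, bs] = [r, g, b].map((channel) => {
    const s = channel / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * rs + 0.7152 * gs + 0.0722 * bs;
}

// Contrast ratio = (L1 + 0.05) / (L2 + 0.05), with the lighter colour as L1.
function contrastRatio(foreground, background) {
  const [l1, l2] = [luminance(foreground), luminance(background)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// Grey text (#767676) on white: ~4.54:1, which just passes AA for normal text (4.5:1).
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2));
```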
One other important consideration when choosing screen readers is "browser combinations". In theory you could use NVDA with any browser available for Windows. The problem is that you will get very different results if some of your team are using NVDA with Internet Explorer while others are using NVDA with Chrome. My recommendation would be as follows:
Documenting methodology is table stakes. Great tools will include:
To recap, I'm suggesting we have 3 types of accessibility testing:
- Automated testing
- Manual testing with a tool assist ("tool assist")
- Manual testing
Viewing your testing in this way means you can build a training program around this methodology.
Instructor-led training must be supported by some form of knowledge base or on-demand training too. You must have good reference material in addition to great training. Your team needs a way to recap the training and get more context whenever necessary.
Here is 1 example of how you could build training around the "testing types" identified, but you could choose to put different points into different buckets:
In this example, you're gradually introducing your team to accessibility testing and building confidence at each stage. You are avoiding the classic "drinking from the fire hose" problem. After each session, your training attendees should be able to test with confidence, add value to their Agile team, and find issues with a low number of false positives.
Most of your training efforts will focus on manual testing. When approaching this task, consider which testing tasks are left over after automated and "tool assist" testing are complete. In addition, work out which problems are common within your product. This will keep your training focused and make it easier to build and deliver.
I'd advise that good accessibility training should leave out the very hardest parts of WCAG and the parts that are not relevant to your product. For example, if your product doesn't contain videos, don't waste any time talking about video accessibility. Instead, make sure your team know where to go when they get stuck on an accessibility problem. For example, a Slack channel or "accessibility office hours" might be useful support forums.
Understanding the types of tool available is critical to building out an accessibility testing program which can scale. While a lot of focus is placed on automated and manual testing, it is important to consider the "tool assist" category too. A good "tool assist" tool, like axe beta, offers many of the benefits of automated testing, and is much easier to learn when compared to common alternatives.
In addition to testing tools, you need a knowledge base which can help document your methodology, offer context, and make issue description and remediation advice a copy-and-paste exercise.
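As an illustration of that "copy-and-paste exercise", a team could pair its knowledge base with a small helper that turns a finding into ready-to-file issue text. A hypothetical sketch, using the violation shape that axe-core reports:

```js
// Turn one axe-core violation into issue text ready to paste into a tracker.
// The headings below are hypothetical; align them with your own knowledge base.
function formatIssue(violation) {
  return [
    `### ${violation.help} (${violation.id})`,
    `**Impact:** ${violation.impact}`,
    `**Remediation guidance:** ${violation.helpUrl}`,
    '**Affected elements:**',
    ...violation.nodes.map((node) => `- \`${node.target.join(' ')}\``),
  ].join('\n');
}
```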
Assuming you've selected your tools, including your knowledge base, and your methodology is in a good place, you can build a good training program that covers how to use these tools and where to go for help.
How do you know if you've got the right tools for your team? How do you know if your training is working? That's probably a blog post for another day.
All of the ideas discussed here help you think about accessibility at scale. If you're in the business of figuring out these types of problems, then I recommend you pick up a copy of Agile Accessibility Explained by Dylan Barrell. It's the only book I know of that tackles problems of scale and offers very practical advice.
Full disclosure: I know Dylan and offered feedback on the book.
Feedback, questions, or comments? Ping me on Twitter @benjaminallen.