As someone who works in this industry, I knew that an AI tool would eventually appear to tackle this challenge. It's fair to acknowledge that writing accessible code is challenging, especially when many developers and tech professionals aren't trained or educated in the topic.
However, this is not it. I'd call on UserWay to publish their training data and assessment criteria so we can understand why this tool performs the way it does. For someone who already understands accessibility, it might save some time, but every output from my tests required significant rework to meet even basic accessibility requirements.
In AI, everything depends on what the model is trained on. Roughly 96% of the web has accessibility issues, or is completely inaccessible. If UserWay trained this tool on standard websites, or simply built it on top of ChatGPT, we have a case of "garbage in, garbage out": we can't expect reliable output from a model trained on inaccessible content.
I did three tests:
1. Generate a text entry for a 2-factor authentication component that accepts only numbers, with a maximum of 6 digits
2. Generate a login component where the password has standard requirements that I provided
3. Generate a "sign up for our newsletter" popup with an option to close and skip
All three had issues, with test 2 being the most worrisome: it used a `title` attribute to convey essential information to users (but only to screen reader users, for some reason). All three applied unnecessary `aria-label` attributes, which would make the components nearly unusable for voice control users.
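For context, here is a rough sketch of what I would consider a reasonable baseline for test 1 (the 2FA entry): a visible, programmatically associated label instead of a `title` or redundant `aria-label`. This is an illustrative example, not the tool's actual output; the `otp` id and label wording are placeholders.

```html
<form>
  <!-- Visible label, associated via for/id; no aria-label needed,
       so the accessible name matches the visible text and voice
       control users can target the field by its label -->
  <label for="otp">6-digit verification code</label>

  <!-- inputmode="numeric" brings up the numeric keypad on mobile;
       pattern constrains the value to exactly six digits;
       autocomplete="one-time-code" lets the browser or OS
       offer to fill in a received code -->
  <input id="otp" name="otp" type="text"
         inputmode="numeric" autocomplete="one-time-code"
         pattern="[0-9]{6}" maxlength="6" required>
</form>
```

Note that constraints like "numbers only" and "6 digits" are conveyed in visible text and enforced by the input itself, rather than hidden in a `title` attribute where most users will never encounter them.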