
I tested the application with just one simple request: create an HTML page with a PWA. It started writing the code but froze partway through, then refreshed out of nowhere, and I couldn't see the rest of the completed code. In short, my experience was terrible.
This is not a good tool in its current state. Because OpenAI has scraped the web indiscriminately, its output includes many of the errors and bad advice that are so common across the web and make accessibility so difficult for developers to master. A better tool would have been trained on vetted resources; instead, this one just regurgitates bad information. For example, a button used to toggle visibility should be marked up using the button tag instead of a link, but many front end frameworks incorrectly use links for this purpose, and this tool recommends that you do, too. (Google "Button Cheat Sheet" for a pithy explanation of why this is a problem.) This is just one example; the tool also adds incorrect ARIA in ways that will actually prevent users of assistive technologies from operating interactive controls. Bad accessibility advice makes the web even less accessible than it already is.
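To make the button-versus-link point above concrete, here is a minimal sketch of the two patterns the reviewer contrasts. This is an illustration of the general technique, not actual output from the tool; the `id`s and text are made up:

```html
<!-- Problematic: a link repurposed as a visibility toggle. It announces
     as "link" to screen readers, does not respond to the Space key, and
     activating it tries to navigate. -->
<a href="#" onclick="toggleDetails()">Show details</a>

<!-- Better: a real button, with aria-expanded reflecting the state. -->
<button type="button" aria-expanded="false" aria-controls="details">
  Show details
</button>
<div id="details" hidden>…details…</div>
<script>
  const btn = document.querySelector('button[aria-controls="details"]');
  const panel = document.getElementById('details');
  btn.addEventListener('click', () => {
    const expanded = btn.getAttribute('aria-expanded') === 'true';
    btn.setAttribute('aria-expanded', String(!expanded));
    panel.hidden = expanded; // hide when collapsing, show when expanding
  });
</script>
```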
Hi, good product and well done. But I don't see any use case for this. ChatGPT can do the same thing, and more accurately; since it uses GPT-4 and browsing, it's better to just use ChatGPT. What are the use cases? Can you define some of them?
This is super cool! I am not a full-time developer, but rather work in product with a basic understanding of code. For people like me, having a helping hand that can solve these specific problems is super helpful! All the best wishes.
As someone who works in this industry, I knew an AI tool would eventually appear to tackle this challenge. It's fair to acknowledge that writing accessible code is hard, especially when many developers and tech professionals aren't trained or educated in the topic. However, this is not it. I'd call on UserWay to publish their training data and assessment criteria so we can understand why this tool performs the way it does. For someone who already understands accessibility, it might save some time, but every output in my tests required significant work just to meet basic accessibility requirements. In AI, everything depends on what the engine is trained on. 96% of the web has accessibility issues or is completely inaccessible. If UserWay trained this on standard websites, or simply derived it from ChatGPT, we have a "garbage in, garbage out" problem: we can't expect reliable information from a model trained on inaccessible content.

I did three tests:

1. Generate a text entry for a two-factor authentication component that accepts only numbers, with a maximum of 6 digits
2. Generate a login component where the password has standard requirements that I provided
3. Generate a "sign up for our newsletter" popup with an option to close and skip

All three had issues, with test 2 being the most worrisome: it used a `title` attribute to convey essential information to users (but, for some reason, only to screen reader users). All three had unnecessary `aria-label` attributes applied, which would make the components nearly unusable for voice control users.
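For context on the first test above, here is a rough sketch of what an accessible single-field 2FA input typically looks like: a visible label associated via `for`/`id`, rather than `title` or a redundant `aria-label`. The field names are assumptions for illustration, not the tool's actual output:

```html
<!-- Visible label, programmatically associated with the input, so the
     accessible name matches what sighted and voice-control users see. -->
<label for="otp">6-digit verification code</label>
<input id="otp" name="otp" type="text"
       inputmode="numeric"            <!-- numeric keyboard on mobile -->
       pattern="[0-9]{6}" maxlength="6"
       autocomplete="one-time-code">  <!-- lets browsers autofill SMS codes -->
<!-- No title, no aria-label: the <label> already provides the name. -->
```

An `aria-label` that differs from the visible text is exactly what breaks voice control: the user says "click verification code," but the control's accessible name is something else entirely.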