- This topic has 5 replies, 4 voices, and was last updated 4 months, 1 week ago by
Jeff Bullas.
Nov 9, 2025 at 10:49 am #128449
Fiona Freelance Financier
Spectator
Hi everyone — I run a small website and want to make sure it’s accessible to more visitors. I’m not a developer, but I’ve heard AI can help find accessibility issues and even suggest fixes. I’m looking for a simple, trustworthy way to get started.
Can you share:
- Which AI tools or services are good for accessibility audits (beginner-friendly)?
- How to run an audit — do I just paste a URL, upload a page, or give HTML?
- What the output looks like — will it give plain-language fixes or code examples?
- How to turn suggestions into action — can AI produce step-by-step tasks I can hand to a developer or try myself?
If you have a short checklist, a recommended workflow, or links to easy guides (for example to WCAG), I’d appreciate that. Thanks — I’d love to hear real experiences and beginner tips.
Nov 9, 2025 at 11:15 am #128457
aaron
Participant
Start simple: run an AI-assisted accessibility audit that surfaces real fixes you can ship this week.
Problem: most sites pass generic scans but fail real-world accessibility — keyboard focus issues, missing labels, poor color contrast. AI can analyze automated results plus your page HTML/screenshots and return prioritized, developer-ready fixes. But you need a repeatable process and acceptance criteria.
Why it matters: accessibility reduces legal risk, improves conversion, and opens your product to more customers. Fixes are measurable: better Lighthouse/axe scores, fewer support tickets, and higher engagement from assistive-tech users.
Lesson: use AI to translate diagnostics into tasks. Automated tools find issues quickly; AI turns those into step-by-step fixes, code examples, and estimated dev time so a non-technical PM can prioritize work.
- What you’ll need
- Access to an automated scanner (Lighthouse/axe output or HTML report).
- 1–3 representative page URLs or HTML snippets and screenshots.
- AI access (ChatGPT-style) to convert reports into fixes.
- Basic QA: keyboard test and a screen reader (or checklist for a contractor).
- How to do it — step-by-step
- Run an automated scan across key pages; export results (JSON/HTML).
- Gather 3 screenshots and the HTML of the top-converting page(s).
- Feed the scan + HTML + screenshots to the AI prompt below to get prioritized fixes and code snippets.
- Create tickets with acceptance criteria and estimated dev time from the AI output.
- Implement quick wins (labels, alt text, tabindex order, contrast) first; re-scan and repeat.
- Do manual keyboard and screen-reader checks for the top 10 flows.
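If you'd rather pre-sort the scan yourself before handing it to the AI, step 3 can be sketched in a few lines of JavaScript. This assumes axe-core's standard JSON shape (a `violations` array with `impact`, `help`, and `nodes`); the `quickWin` flag is my own convention, not axe's:

```javascript
// Sketch: turn an axe-core JSON export into a prioritized fix list.
// Assumes axe's standard report shape: { violations: [{ id, impact, help, nodes }] }.
// The quickWin field is this example's convention, not part of axe output.
const IMPACT_RANK = { critical: 0, serious: 1, moderate: 2, minor: 3 };

function prioritize(axeReport, quickWinLimit = 3) {
  const items = (axeReport.violations || []).map((v) => ({
    issue: v.help,
    severity: v.impact,
    locations: v.nodes.map((n) => n.target.join(' ')), // CSS selectors
    count: v.nodes.length,
  }));
  // Highest impact first; break ties by how many elements are affected.
  items.sort(
    (a, b) =>
      IMPACT_RANK[a.severity] - IMPACT_RANK[b.severity] || b.count - a.count
  );
  return items.map((item, i) => ({ ...item, quickWin: i < quickWinLimit }));
}
```

Feeding the sorted list (rather than the raw report) into the prompt means the AI spends its 12-item budget on the highest-impact issues.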
Copy-paste AI prompt (use with ChatGPT or your AI):
Here is an automated accessibility scan result (paste JSON) and the HTML for this page (paste HTML). Provide a prioritized list of fixes, each with: issue title, severity (High/Medium/Low), exact location (CSS selector or HTML snippet), a short explanation in plain English, step-by-step implementation instructions, one copyable code fix (HTML/CSS/JS) where possible, estimated developer time (low/med/high in hours), and an acceptance test to confirm the fix. Limit to 12 items and mark the top 3 quick wins.
Metrics to track
- Accessibility issues found (total and by severity).
- Pages remediated / percentage complete.
- Developer hours spent vs estimated.
- Lighthouse/axe score improvement.
- Support tickets related to accessibility.
Common mistakes & fixes
- Relying only on automated tools — fix: add manual keyboard & screen-reader tests.
- Vague tickets — fix: include selector + acceptance test + code snippet from AI.
- Trying to change everything at once — fix: prioritize quick wins and top user flows.
1-week action plan
- Day 1: Inventory top 10 pages; run automated scans and capture screenshots/HTML.
- Day 2: Run the AI prompt on each page; collect prioritized fixes and code snippets.
- Day 3–4: Implement 5 quick wins (labels, alt text, keyboard focus, contrast, form errors).
- Day 5: Manual QA on fixed pages; update tickets for remaining items with estimates.
- Day 6–7: Re-run scans, report metrics, and plan next sprint based on remaining high-severity items.
Your move.
Aaron
Nov 9, 2025 at 11:37 am #128469
Becky Budgeter
Spectator
Short checklist — Do / Do not
- Do: run an automated scanner (Lighthouse/axe), then use AI to turn the results into prioritized, developer-ready tasks with selectors, acceptance criteria, and estimated time.
- Do: pair AI output with manual checks (keyboard-only navigation and a quick screen-reader run-through) before closing tickets.
- Do: prioritize quick wins first (missing labels, alt text, tabindex order, contrast) and protect core user flows (checkout, sign-up, search).
- Do not: rely only on automated results — they miss real-world interaction problems.
- Do not: create vague tickets — always include the selector/snippet, an acceptance test, and an estimate.
- Do not: paste sensitive production data into public AI tools; use sanitized HTML/screenshots or an internal AI instance.
Worked example — product page (step-by-step)
What you’ll need
- One automated scan export (Lighthouse or axe JSON) for the product page.
- 3 screenshots covering: top of page, product image + buy area, checkout start.
- HTML snippet of the buy area (or the page URL if you trust your tooling).
- Access to an AI assistant to translate scan output into tasks, and a developer or contractor to implement fixes.
- A basic QA checklist for keyboard and screen-reader checks.
- Run the scan: Export the scanner report for that product page (JSON/HTML) and save screenshots.
- Ask the AI: Give the AI the scan summary + the HTML snippet and ask for a prioritized list (limit 10) with: issue title, severity, CSS selector or snippet, short plain-English explanation, step-by-step fix, a small code example, estimated dev time, and one acceptance test per item. (Don’t paste secrets.)
- Create tickets: Turn the top 3 quick wins into tickets with the AI’s acceptance tests and time estimates.
- Implement quick wins: Fix labels/alt text, ensure focus order, and adjust color contrast. Re-scan after each fix to see score changes.
- Manual QA: Keyboard-test the buy flow (Tab through everything), and do a short screen-reader pass on the fixed page.
- Re-run scans and report: Capture before/after metrics (issues by severity, Lighthouse/axe score, pages remediated, hours spent vs estimated).
- Iterate: Schedule remaining medium/high items into the next sprint, using the same AI-generated acceptance criteria to keep tickets clear.
What to expect: a short prioritized backlog of actionable fixes, small copyable code snippets for devs, realistic time estimates, clearer tickets, and measurable improvement in automated scores — but don’t expect perfect results until you add manual testing.
Simple tip: run a keyboard-only pass right after quick fixes — it catches many issues that scanners miss.
Which page would you like to start with (homepage, product page, or checkout)?
Nov 9, 2025 at 12:40 pm #128475
aaron
Participant
Good point — keyboard-only checks catch what scanners miss. That’s the quick win that proves impact fast.
Problem: automated scanners give a list of issues but not prioritized, implementable fixes with selectors, code snippets, and acceptance tests. That gap leaves product teams with vague tickets and uncertain ROI.
Why it matters: fixing prioritized accessibility issues reduces legal risk, increases conversions for marginal users, and produces measurable score gains (Lighthouse/axe) you can report to stakeholders.
Experience takeaway: run scanner + AI + manual checks as a single workflow. The scanner finds the surface issues, AI converts them into developer-ready tickets, and manual QA verifies user-facing behavior.
- What you’ll need
- Automated scan export (Lighthouse or axe JSON).
- 3 representative page screenshots and the HTML snippet or URL for each page.
- Access to an AI assistant (internal or sanitized public model).
- Basic QA tools: keyboard-only checklist and a screen-reader (NVDA/VoiceOver).
- Step-by-step (one pass)
- Run scanner on top 5 pages; export JSON.
- Collect screenshots + the HTML of the highest-traffic page areas.
- Feed scan + HTML + screenshots to the AI prompt below to get a prioritized list (selectors, code fixes, estimates, acceptance tests).
- Create tickets for top 3 quick wins with acceptance tests from AI and assign dev time.
- Deploy quick wins, perform keyboard-only and screen-reader checks, then re-scan and record metrics.
Copy-paste AI prompt (use as-is)
Here is an accessibility scan JSON (paste after this line). Also attached: three screenshots and the HTML snippet for the key interaction area. Provide up to 12 prioritized fixes. For each fix include: issue title, severity (High/Med/Low), exact location (CSS selector or HTML snippet), plain-English explanation, step-by-step implementation instructions, one copyable code fix (HTML/CSS/JS) if applicable, estimated developer time (hours: low/med/high), and one acceptance test that a QA person can run. Mark the top 3 quick wins. Sanitize any sensitive data before pasting.
Metrics to track
- Issues found (total / High / Medium / Low).
- Pages remediated (% of top 10 pages).
- Lighthouse/axe score delta (before → after).
- Developer hours spent vs AI estimates.
- Accessibility-related support tickets.
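If you want the before and after numbers in one place for stakeholder reporting, a tiny sketch (the input shape, counts of open issues per severity, is just a convention for this example):

```javascript
// Sketch: diff issue counts by severity between two scan runs.
// Input shape is this example's convention: { High: n, Medium: n, Low: n }.
function severityDelta(before, after) {
  const severities = ['High', 'Medium', 'Low'];
  return severities.map((s) => ({
    severity: s,
    before: before[s] || 0,
    after: after[s] || 0,
    fixed: (before[s] || 0) - (after[s] || 0), // negative = regressions
  }));
}
```

A negative `fixed` value flags a regression worth investigating before the next sprint.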
Common mistakes & fixes
- Relying only on automated reports — fix: mandatory keyboard + screen-reader pass for each ticket.
- Vague tickets — fix: include selector, acceptance test, and code snippet from AI before dev starts.
- Pasting production secrets into public AI — fix: sanitize or use internal AI instance.
1-week action plan
- Day 1: Run scans for top 10 pages; capture screenshots and HTML snippets.
- Day 2: Run the AI prompt per page; export prioritized fixes and code snippets.
- Day 3–4: Implement 5 quick wins (labels, alt text, focus order, contrast, form errors).
- Day 5: Manual QA on fixed flows; update tickets for remaining items with estimates.
- Day 6–7: Re-scan, report metrics, and schedule next sprint for high-severity items.
Your move.
Aaron
Nov 9, 2025 at 1:46 pm #128487
Jeff Bullas
Keymaster
Yes — keyboard-only checks expose the real experience fast. Let’s turn that into a repeatable, AI-assisted workflow that delivers shippable fixes, not just reports.
Do / Do not
- Do: pair an automated scan with AI to produce developer-ready tickets (selector, code snippet, estimate, acceptance test).
- Do: validate every fix with a quick keyboard pass and a short screen-reader check.
- Do: ship site-wide “baseline” fixes first (focus ring, skip link, landmarks) for immediate wins.
- Do not: chase every warning. Fix high-severity issues on your top user flows first.
- Do not: paste secrets into public AI tools. Sanitize HTML and screenshots.
What you’ll need
- An automated scan export (axe/Lighthouse JSON or HTML report).
- 3 screenshots and the HTML snippet of a key interaction area (e.g., header + menu or checkout form).
- Access to an AI assistant.
- A simple QA checklist: Tab through the page; verify visible focus, headings order, landmarks, labels, and error messaging via a screen reader.
AI-assisted accessibility micro-sprint (90 minutes)
- Pick one high-traffic page or flow (e.g., product → add to cart).
- Run the scanner; export JSON.
- Capture 3 screenshots and the relevant HTML snippet.
- Use the prompt below to get a prioritized “Fix Pack” (10–12 items with code and tests). Create tickets for the top 3 quick wins.
- Implement site-wide baselines (focus ring, skip link, landmarks) — these lift the whole site.
- Ship the quick wins, then do a keyboard-only pass and a short screen-reader pass.
- Re-scan and record: issues by severity, score change, hours spent vs estimate.
- Schedule medium/high items for the next sprint with the AI’s acceptance tests.
Insider shortcuts you can ship site‑wide today
- Baseline focus ring (1 file change). Ensures every interactive element shows a clear focus state.
:where(button,[role="button"],a,input,select,textarea,[tabindex]):focus{outline:none}
:where(button,[role="button"],a,input,select,textarea,[tabindex]):focus-visible{outline:3px solid #1a73e8;outline-offset:2px;border-radius:4px}
- Skip link (adds instant keyboard efficiency). Place at the top of the body.
<a class="skip-link" href="#main">Skip to main content</a>
.skip-link{position:absolute;left:-9999px;top:auto;width:1px;height:1px;overflow:hidden}
.skip-link:focus{left:16px;top:16px;width:auto;height:auto;background:#000;color:#fff;padding:8px 12px;z-index:1000}
- Landmarks + headings (improves screen-reader navigation). Ensure one <main id="main"> per page, meaningful <h1>–<h3>, and label navs.
<header role="banner">…</header> <nav aria-label="Primary">…</nav> <main id="main" role="main">…</main> <footer role="contentinfo">…</footer>
Copy‑paste AI prompt — Fix Pack + Ticket Writer
You are an accessibility engineer. I will paste an automated scan JSON, 2–3 screenshots, and a small HTML snippet. Return a prioritized Fix Pack (max 12). For each item include: 1) issue title, 2) severity (High/Med/Low), 3) exact location (CSS selector or HTML snippet), 4) plain-English explanation, 5) step-by-step implementation, 6) one copyable code fix (HTML/CSS/JS), 7) estimated dev time (Low ~<1h / Med ~1–3h / High ~>3h), and 8) one acceptance test a QA person can run. Mark the top 3 quick wins. Then output 3 ready-to-paste Jira ticket summaries with acceptance criteria. Do not include or expose any sensitive data. If screenshots contradict the HTML, prefer the HTML.
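If the AI returns the Fix Pack as structured JSON, you can stamp out tickets mechanically instead of copying by hand. A sketch; the field names mirror what the prompt above asks for, but they are my guess at a schema, not a standard:

```javascript
// Sketch: format one Fix Pack item as a paste-ready ticket, following
// the ticket template later in this post. Field names (component, title,
// severity, selector, explanation, estimate, acceptanceTest) are assumed.
function toTicket(fix) {
  return [
    `Title: ${fix.component} ${fix.title} — ${fix.severity}`,
    `Location: ${fix.selector}`,
    `Summary: ${fix.explanation}`,
    `Estimate: ${fix.estimate}`,
    `Acceptance test: ${fix.acceptanceTest}`,
  ].join('\n');
}
```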
Worked example — header navigation (realistic quick wins)
- Missing menu label (High). Button lacks a name.
Fix: <button aria-expanded="false" aria-controls="site-menu" aria-label="Open menu">Menu</button>
Acceptance: With a screen reader, button is announced as “Open menu, button.”
- Keyboard trap in mobile menu (High). Focus doesn’t cycle inside the open menu.
Fix (JS sketch): on open, focus first link; on Tab from last item, move to close button; on Escape, close and return focus to trigger. Toggle aria-expanded accordingly.
Acceptance: Using only Tab/Shift+Tab, you can enter, move within, and exit the menu; Escape closes it and returns focus to the menu button.
- Invisible focus on links (Med). No visible focus style.
Fix: Apply the baseline focus CSS above.
Acceptance: Tabbing through header shows a 3px visible outline on each interactive element.
- Low contrast in header links (Med). Text over hero image at ~3:1.
Fix: Use a semi-opaque background or switch to a darker token that meets 4.5:1. Example: color: #0b1a28; or add background: rgba(255,255,255,.9) behind text.
Acceptance: Automated contrast check passes; text remains readable on hover/focus.
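The keyboard-trap fix above can be reduced to a pure "where does focus go next" helper, which keeps the logic unit-testable. This is a simple wrap-around variant (the sketch in the fix routes Tab-from-last to the close button instead; adjust to taste), and the DOM wiring (keydown listener, querying focusable items, calling `.focus()`) is deliberately left out:

```javascript
// Sketch: decide the next focus target inside an open menu, given the key
// pressed and the index of the currently focused item. Returning null
// means "let the browser handle the key normally".
function nextMenuFocus(key, shiftKey, index, itemCount) {
  if (key === 'Escape') return { close: true, focus: 'trigger' };
  if (key !== 'Tab') return null; // pass other keys through
  if (!shiftKey && index === itemCount - 1) return { focus: 0 }; // wrap forward
  if (shiftKey && index === 0) return { focus: itemCount - 1 }; // wrap backward
  return { focus: shiftKey ? index - 1 : index + 1 };
}
```

In a real handler you would call `event.preventDefault()` whenever this returns a focus target, then move focus to the item at that index (or back to the trigger on close).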
Ticket template you can reuse
Title: [Page/Component] [Issue] — [Severity]
Location: [CSS selector/HTML snippet]
Summary: [Plain-English explanation]
Implementation steps: [1–3 steps] Code: [snippet]
Estimate: [Low/Med/High]
Acceptance test: 1) [Keyboard or SR behavior], 2) [Automated rule passes], 3) [Visual check]
Common mistakes & fixes
- Only fixing what the scanner flags → Add manual checks for focus order, traps, and modal behavior.
- Fixing color contrast per element → Standardize a small set of accessible color tokens and apply them globally.
- Unclear tickets → Always include selector/snippet, code, estimate, and a one-line acceptance test.
1‑week plan (lightweight)
- Day 1: Choose top 10 pages. Run scans. Save screenshots + key HTML.
- Day 2: Feed each page to the Fix Pack prompt. Create tickets for top 3 quick wins per page.
- Day 3–4: Ship site-wide baselines (focus ring, skip link, landmarks). Implement quick wins on your two most valuable flows.
- Day 5: Keyboard + short screen-reader pass. Re-scan. Capture metrics: issues by severity, score change, hours vs estimate.
- Day 6–7: Schedule remaining high/medium items with acceptance tests. Repeat the micro-sprint next week.
What to expect: clearer tickets, shippable fixes within hours, visible focus improvements across the site, and measurable score gains after you re-scan. Perfection comes from the loop: scan → AI → ship → manual QA → re-scan.
Pick one page now. I’ll help you run the first micro-sprint and generate your Fix Pack.
Nov 9, 2025 at 3:12 pm #128497
Jeff Bullas
Keymaster
Let’s turn one page into proof. In 60–90 minutes you can run a form-focused audit, ship 3 fixes, and see score and usability gains. Here’s the playbook, prompts, and example code you can copy today.
What you’ll need (10 minutes)
- One high-value page with a form (sign-up, checkout, contact).
- Automated scan export (axe/Lighthouse JSON) and a short HTML snippet of the form.
- 3 screenshots (top of page, form fields, submit area).
- Access to an AI assistant. Use sanitized HTML/screenshots only.
- QA: keyboard-only pass, plus a quick screen-reader check (NVDA or VoiceOver).
The Form Fix Pack micro-sprint (6 steps)
- Run the scanner on your chosen page; export JSON and save screenshots.
- Copy the form’s HTML (form tag to submit button). Remove user data and unique IDs.
- Paste your JSON + HTML + screenshots into the Form Fix Pack prompt below.
- Create tickets for the 3 quick wins the AI marks. Add acceptance tests.
- Implement and re-scan. Run a keyboard-only pass and a short screen-reader pass.
- If color fails appear, run the Color Tokenizer prompt to get a safe palette update.
Copy-paste AI prompt — Form Fix Pack (prioritized tasks + code)
You are an accessibility engineer. I will paste 1) an automated scan JSON, 2) 2–3 screenshots, and 3) the HTML of a form. Create a prioritized Form Fix Pack (max 12 items). For each item include: issue title, severity (High/Med/Low), exact location (CSS selector or HTML snippet), plain-English explanation, step-by-step fix, one copyable code example (HTML/CSS/JS), estimated dev time (Low <1h, Med 1–3h, High >3h), and a single acceptance test a QA person can run. Mark the top 3 quick wins. Also produce: a) one site-wide baseline suggestion if relevant, b) 3 ready-to-paste Jira ticket summaries with acceptance criteria. Prefer HTML truth when screenshots conflict. Sanitize any sensitive data.
Copy-paste AI prompt — Color Tokenizer (fast, safe contrast)
Act as a design systems accessibility specialist. Input: a list of brand colors and any contrast failures from a scan. Output: 1) an accessible color token set that meets WCAG 2.1 AA (4.5:1 for body text, 3:1 for UI components), 2) a CSS variables patch, 3) a migration map from old colors to new tokens, 4) 3 visual checks to confirm legibility in light and dark areas. Keep tokens under 12. Example output format: tokens, CSS variables, mapping, test steps.
Copy-paste AI prompt — Interactive Widgets (menus, modals, tabs)
You are an accessibility engineer. I will paste HTML/JS for one widget (menu, modal, tabs, tooltip). Return: correct roles/attributes, full keyboard map (Tab/Shift+Tab, Arrow keys, Home/End, Escape), minimal JS needed (focus management, aria-expanded toggling, trapping/returning focus), and one acceptance test per behavior. Include a copyable code block for the final widget markup and JS sketch.
Worked example — checkout form (3 high-impact fixes)
- Missing labels and programmatic names (High)
Fix (HTML): <label for="email">Email</label> <input id="email" name="email" type="email" autocomplete="email" required aria-describedby="email-help"> <small id="email-help">We'll send the receipt here.</small>
Acceptance: Screen reader announces “Email, edit, required. We’ll send the receipt here.”
- Error messaging not announced (High)
Fix (HTML + JS): Add a live region and link errors to fields. Example container: <div id="form-status" aria-live="polite" role="status"></div> On validation fail: set aria-invalid="true" on the input and point aria-describedby to the error text ID. Example error: <p id="email-error" class="error">Enter a valid email.</p>
Acceptance: After submitting invalid input, the error is read out automatically and focus moves to the first invalid field.
- Weak focus visibility and order (Med)
Fix (CSS baseline): :where(button,[role="button"],a,input,select,textarea,[tabindex]):focus{outline:none} :where(button,[role="button"],a,input,select,textarea,[tabindex]):focus-visible{outline:3px solid #1a73e8;outline-offset:2px;border-radius:4px} Ensure DOM order matches visual order; remove tabindex values >=1 unless required for components.
Acceptance: Tabbing moves logically through the form; each focusable shows a thick visible outline.
Insider trick — ship a site-wide “error pattern” once
- One status region: <div id="form-status" role="status" aria-live="polite"></div>
- One error class: .error{color:#b00020;font-weight:600}
- Validation script sets: aria-invalid, adds/removes aria-describedby for the error ID, sends a brief message to #form-status.
- Result: every form on the site announces errors without per-page custom code.
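One way to structure that shared validation script is to compute the attribute changes as data first and apply them in a thin DOM layer, which keeps the pattern testable without a browser. A sketch; the input shape (`[{ id, valid, message }]`) is my own convention:

```javascript
// Sketch: given per-field validity, return the ARIA attribute updates,
// the live-region message, and the field to focus. Apply the updates with
// setAttribute/removeAttribute in your real script (null = remove).
function errorUpdates(fields) {
  const invalid = fields.filter((f) => !f.valid);
  return {
    attrs: fields.map((f) => ({
      id: f.id,
      'aria-invalid': f.valid ? null : 'true',
      'aria-describedby': f.valid ? null : `${f.id}-error`,
    })),
    // Message for the #form-status live region; empty string clears it.
    status: invalid.length
      ? `${invalid.length} field(s) need attention: ` +
        invalid.map((f) => f.message).join(' ')
      : '',
    focusFirst: invalid.length ? invalid[0].id : null, // first invalid field
  };
}
```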
Acceptance test template you can reuse
- Keyboard: Starting at the address bar, press Tab through the page. Focus is always visible and moves in a logical order. Shift+Tab works backwards.
- Forms: Submitting empty required fields announces errors via the screen reader; focus lands on the first invalid field.
- Color: Body text contrast ≥ 4.5:1; UI text and icons ≥ 3:1. Hover/focus states remain ≥ target contrast.
- Widgets: Escape closes modals/menus and returns focus to the trigger. Arrow keys navigate tablists or menus.
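The contrast thresholds above are easy to verify numerically. This implements the WCAG 2.1 relative-luminance formula; it assumes 6-digit #rrggbb hex values and ignores alpha:

```javascript
// Sketch: WCAG 2.1 contrast ratio between two colors.
// Ratio ranges from 1 (identical) to 21 (black on white).
function contrastRatio(hexA, hexB) {
  const luminance = (hex) => {
    // Parse #rrggbb into 0..1 channels.
    const [r, g, b] = [1, 3, 5].map(
      (i) => parseInt(hex.slice(i, i + 2), 16) / 255
    );
    // Linearize sRGB channels per the WCAG definition.
    const lin = (c) =>
      c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
  };
  const [hi, lo] = [luminance(hexA), luminance(hexB)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Aim for ≥ 4.5 on body text and ≥ 3 on UI components, per the thresholds above.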
Common mistakes and quick fixes
- Using placeholder as label → Always use a <label for> and keep placeholder as a hint only.
- Forgetting aria-expanded on toggles → Sync the attribute on open/close and update focus.
- Color fixes per element → Standardize tokens once; map old colors to new variables.
- Over-ARIA → Prefer native HTML elements first (button, a, input) before adding roles.
- No sanitization → Strip emails, names, and IDs before pasting into AI. Consider a “Sanitize this HTML” AI step first.
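On the sanitization point: even a crude scrubber beats pasting raw HTML. A minimal sketch; the regexes catch only the obvious cases (email addresses and pre-filled value attributes), so still eyeball the output before sharing:

```javascript
// Sketch: redact obvious personal data from an HTML snippet before
// pasting it into a public AI tool. Intentionally incomplete; review
// the result by hand.
function sanitizeHtml(html) {
  return html
    // Redact email addresses anywhere in the markup.
    .replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, '[email-redacted]')
    // Blank out pre-filled form values, which often hold user data.
    .replace(/value="[^"]*"/g, 'value="[redacted]"');
}
```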
What to expect
- A short, prioritized backlog with exact locations and code snippets.
- 3 shippable fixes in under 90 minutes on a single page.
- Measurable score lift after re-scan, and fewer user friction points in forms.
5-day plan (lightweight, repeatable)
- Mon: Pick top 10 pages. Run scans. Save screenshots + key HTML.
- Tue: Run the Form Fix Pack prompt on the top 3 pages. Create tickets.
- Wed–Thu: Ship site-wide baselines (focus, skip link, landmarks) and form error pattern. Implement quick wins on your highest-revenue flow.
- Fri: Keyboard + short screen-reader check. Re-scan. Capture metrics: issues by severity, score delta, hours vs estimate.
- Next week: Run the Widget prompt on your menu/modal, and the Color Tokenizer if contrast is still failing.
Your next move: pick the form page (sign-up, contact, or checkout). Paste your scan JSON + 3 screenshots + form HTML into the Form Fix Pack prompt above. I’ll help you turn it into 3 quick wins you can ship this week.
