Accessibility is the largest unfunded mandate in software development. Everyone agrees it matters. Almost no one budgets enough time for it. The result is a familiar pattern: an app ships, someone runs an audit, a dispiriting list of WCAG violations lands on the backlog, and the team spends weeks retrofitting fixes into code that was never designed to accommodate them.

AI coding assistants change the economics of this equation in the same way they've changed the economics of testing and architecture -- by making the right way fast enough that there's no reason to skip it. Every accessibility label, every contrast check, every semantic role annotation, every keyboard navigation handler is exactly the kind of structured, pattern-following, specification-driven code that AI generates fluently. The human judgment -- deciding what the experience should feel like for a VoiceOver user navigating your checkout flow, or whether your color system works for someone with deuteranopia -- still belongs to you. But the implementation, the boilerplate, the platform-specific API dance? That's where the AI earns its keep.

This article dissects WCAG 2.2 principle by principle, maps each to Apple's Human Interface Guidelines and Google's Material Design accessibility guidance, and shows how AI assistants can help you build compliance into your app from the ground up -- or migrate an existing codebase toward it systematically.


The Four Pillars: WCAG's POUR Framework

WCAG 2.2 organizes its guidance around four principles, commonly abbreviated as POUR: Perceivable, Operable, Understandable, and Robust. Every success criterion in the specification falls under one of these. Understanding them as architectural concerns -- not just checklist items -- is the key to building accessibility that doesn't feel bolted on.

Apple's Human Interface Guidelines and Google's Material Design guidelines both align with POUR, though neither explicitly uses the acronym. Apple frames accessibility as a foundational design concern alongside color, typography, and layout. Google integrates accessibility into Material Design as a cross-cutting requirement that touches every component. Both ecosystems provide platform-specific APIs that map directly to WCAG success criteria.

What follows is a deep walk through each principle, its constituent guidelines, what Apple and Google say about them, and precisely how AI coding assistants help you satisfy each one.


Principle 1: Perceivable

Information and user interface components must be presentable to users in ways they can perceive. If a user can't see, hear, or otherwise detect your content, it doesn't exist for them.

1.1 Text Alternatives

WCAG requires that all non-text content -- images, icons, charts, decorative graphics -- has a text alternative that serves an equivalent purpose. This is the single most common accessibility failure in mobile apps, and one of the easiest for AI to address systematically.

Apple's HIG mandates that every meaningful image and icon includes an accessibilityLabel. SwiftUI makes this straightforward with the .accessibilityLabel() modifier, but the challenge is coverage: in a large app, it's easy to forget one image, one custom icon, one decorative graphic that should be marked as such. Google's guidance is equivalent -- every ImageView needs a contentDescription, every Compose Image needs a contentDescription parameter or an explicit semantics { } block.

This is where AI-assisted development shines brightest. Ask your AI assistant to audit a screen's code for missing accessibility labels, and it will scan every image, icon, and custom view, flagging each one that lacks a text alternative. Better yet, adopt the practice of generating screens with labels included from the start. When you prompt the AI with "create a product card component showing a product image, name, price, and add-to-cart button," specify that all elements must include accessibility labels. The AI will generate accessibilityLabel("Product image: \(product.name)") on the image, mark decorative separators as .accessibilityHidden(true), and annotate the button with an action-oriented label like "Add \(product.name) to cart" rather than a generic "Add."
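In SwiftUI, the generated component might look like the following sketch (the Product type, property names, and label wording are illustrative, not a prescribed API):

```swift
import SwiftUI

struct Product {
    let name: String
    let price: String
    let imageName: String
}

struct ProductCard: View {
    let product: Product
    let addToCart: () -> Void

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            Image(product.imageName)
                .resizable()
                .scaledToFit()
                // Meaningful image: give VoiceOver a description.
                .accessibilityLabel("Product image: \(product.name)")

            Divider()
                // Purely decorative: hide it from assistive technology.
                .accessibilityHidden(true)

            Text(product.name)
            Text(product.price)

            Button(action: addToCart) {
                Image(systemName: "cart.badge.plus")
            }
            // Action-oriented label instead of a generic "Add".
            .accessibilityLabel("Add \(product.name) to cart")
        }
    }
}
```

The point is not the layout but the pairing: every visual element ships with its accessibility annotation in the same diff.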

For charts, graphs, and data visualizations -- where text alternatives require summarizing visual information -- the AI can generate descriptive summaries. Provide the underlying data and ask: "Write an accessibility description for a bar chart showing monthly revenue from January through June, with a notable spike in March." The AI produces a concise, informative description that a screen reader user can understand without seeing the visual.

1.2 Time-Based Media

Audio and video content requires captions, transcripts, and audio descriptions. While generating accurate captions for arbitrary media is outside the scope of a coding assistant, the AI helps enormously with the infrastructure: building a captioning overlay system, integrating with caption file formats (WebVTT, SRT), creating a media player component that surfaces caption controls prominently, and ensuring that auto-play is disabled by default (a requirement under both WCAG and Apple's HIG).

Ask the AI to:

"Create a video player component that loads WebVTT captions, shows a visible caption toggle, respects the system's caption styling preferences, and pauses on load until the user explicitly plays."

The platform-specific caption preference APIs (AVPlayer's appliesMediaSelectionCriteriaAutomatically on Apple, CaptioningManager on Android) are exactly the kind of obscure-but-critical integration the AI handles well.

1.3 Adaptable Content

Content must be presentable in different ways -- assistive technologies must be able to parse your UI's structure and meaning without losing information. This means using semantic markup: headings should be headings, lists should be lists, form fields should have associated labels, and the reading order should match the visual order.

Apple implements this through the accessibility hierarchy -- the tree of elements that VoiceOver traverses. SwiftUI views automatically participate, but custom views need explicit annotation with .accessibilityElement(), .accessibilityAddTraits(), and grouping with .accessibilityElement(children: .combine) or .accessibilityElement(children: .contain). Google's equivalent is the AccessibilityNodeInfo tree that TalkBack reads, with Compose providing semantics { } blocks and Modifier.semantics { heading() } for structural annotation.

AI assistants excel at generating semantically rich UI code because the patterns are well-defined. When you describe a screen, the AI can produce not just the visual layout but the semantic structure: headers annotated with heading traits, grouped form fields with their labels associated programmatically, list items with their position announced ("Item 3 of 12"), and custom controls with appropriate roles. The key prompt pattern is:

"Generate this screen with full VoiceOver/TalkBack semantic structure, including headings, groupings, and reading order annotations."

1.4 Distinguishable

Users must be able to see and hear content, including separating foreground from background. This guideline encompasses color contrast, text resizing, text spacing, and the requirement that color alone is never the only means of conveying information.

Color contrast is the most precisely measurable accessibility criterion. WCAG 2.2 requires a minimum contrast ratio of 4.5:1 for normal text and 3:1 for large text (Level AA). Apple's HIG specifies the same 4.5:1 ratio and encourages the use of semantic system colors that automatically adapt to light and dark modes. Google's Material Design 3 builds contrast compliance into its dynamic color system, where algorithmically generated palettes are designed to maintain sufficient contrast across tonal variations.

AI assistants can validate contrast ratios at code-generation time. When you define a color palette, ask the AI to:

"Verify that every foreground/background combination in this theme meets WCAG AA contrast ratios and flag any that fall below 4.5:1 for text or 3:1 for non-text elements."

The AI computes the ratios and identifies violations before a single pixel renders on screen.
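The computation is simple enough to embed directly in a theme validator or a unit test. A minimal Swift sketch of the WCAG 2.x definitions (linearized sRGB channels, weighted relative luminance, ratio offset by 0.05):

```swift
import Foundation

/// WCAG 2.x relative luminance for an sRGB color with channels in 0...1.
func relativeLuminance(r: Double, g: Double, b: Double) -> Double {
    func linearize(_ c: Double) -> Double {
        c <= 0.03928 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
    }
    return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b)
}

/// Contrast ratio between two colors, always >= 1.
/// WCAG AA requires 4.5:1 for normal text, 3:1 for large text and non-text.
func contrastRatio(_ c1: (r: Double, g: Double, b: Double),
                   _ c2: (r: Double, g: Double, b: Double)) -> Double {
    let l1 = relativeLuminance(r: c1.r, g: c1.g, b: c1.b)
    let l2 = relativeLuminance(r: c2.r, g: c2.g, b: c2.b)
    let (lighter, darker) = l1 >= l2 ? (l1, l2) : (l2, l1)
    return (lighter + 0.05) / (darker + 0.05)
}
```

Black on white yields the maximum ratio of 21:1; identical colors yield 1:1.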

Dynamic Type and text scaling are where Apple's ecosystem excels. The HIG strongly recommends supporting Dynamic Type across all text styles, allowing users to scale text from extra small to the accessibility sizes that can reach 300% of the default. Google's equivalent is the sp (scale-independent pixel) unit and the font scale setting in Android's accessibility options. WCAG 2.2 requires that text can be resized up to 200% without loss of content or functionality (Success Criterion 1.4.4).

When generating UI code, always instruct the AI to use scalable text units. For SwiftUI: "Use .font(.body) and Dynamic Type-compatible text styles, never fixed point sizes." For Compose: "Use MaterialTheme.typography text styles with sp units, never fixed dp for text." For Flutter: "Use Theme.of(context).textTheme and respect MediaQuery.textScaleFactorOf(context)." The AI applies these conventions consistently, and a follow-up prompt can verify:

"Audit this file for any hardcoded text sizes that don't respect the system's text scaling preference."

Color as sole indicator is a subtler requirement. Red for errors, green for success -- these work for most users but fail for the 8% of men and 0.5% of women with color vision deficiency. WCAG requires a secondary indicator (an icon, a text label, a pattern) alongside color. Apple's HIG explicitly recommends using symbols and labels alongside color cues. Material Design similarly advises pairing color with icons or text.

Ask the AI to review your error states, success confirmations, and status indicators:

"Does this UI rely on color alone to convey any state? If so, add a secondary indicator -- an icon, a text label, or a shape change -- for each one."

The AI identifies every instance where color is the only differentiator and generates the supplementary indicator.
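A typical generated fix pairs the color with an icon and a text label, as in this SwiftUI sketch (the UploadState enum and wording are illustrative):

```swift
import SwiftUI

enum UploadState { case success, failure }

struct StatusBadge: View {
    let state: UploadState

    var body: some View {
        // Color is reinforced by an icon and a text label, so the state
        // remains distinguishable without color perception.
        Label(
            state == .success ? "Upload complete" : "Upload failed",
            systemImage: state == .success ? "checkmark.circle" : "exclamationmark.triangle"
        )
        .foregroundStyle(state == .success ? .green : .red)
    }
}
```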


Principle 2: Operable

All users must be able to operate the interface. This means keyboard accessibility, sufficient time to complete tasks, no seizure-inducing content, clear navigation, and -- new in WCAG 2.2 -- reduced reliance on complex gestures.

2.1 Keyboard Accessible

Every function available through a touchscreen must also be available through alternative input methods: keyboard, switch control, voice control, or other assistive devices. On iOS, this means supporting Full Keyboard Access and Switch Control. On Android, it means supporting external keyboards and Switch Access.

Apple's HIG emphasizes that all interactive elements should be reachable through VoiceOver's swipe navigation and Full Keyboard Access's tab navigation. Google's guidelines require that every user flow is completable through TalkBack navigation and that custom views properly report their accessibility actions.

AI assistants generate keyboard-accessible code by default when prompted correctly. The key is to use native components wherever possible -- native buttons, text fields, toggles, and sliders already have keyboard and assistive technology support built in. When custom components are necessary, tell the AI:

"Create this custom slider control with full VoiceOver/TalkBack support, including adjustable value announcements, increment/decrement actions, and keyboard arrow key handling."

The AI generates the platform-specific accessibility action implementations that make custom controls behave like native ones to assistive technology.
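On iOS, the core of such an implementation is the adjustable accessibility action, which lets VoiceOver users change the value with vertical swipes. A SwiftUI sketch of the pattern (visual styling omitted; names illustrative):

```swift
import SwiftUI

struct VolumeSlider: View {
    @Binding var volume: Int  // 0...10

    var body: some View {
        // Visual track omitted; the point is the accessibility contract.
        Rectangle()
            .frame(height: 4)
            .accessibilityElement()
            .accessibilityLabel("Volume")
            // Announced value updates as the user adjusts.
            .accessibilityValue("\(volume) of 10")
            // Adds the adjustable behavior: swipe up/down to change.
            .accessibilityAdjustableAction { direction in
                switch direction {
                case .increment: volume = min(volume + 1, 10)
                case .decrement: volume = max(volume - 1, 0)
                @unknown default: break
                }
            }
    }
}
```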

2.2 Enough Time

If your app includes timeouts -- session expiration, timed forms, auto-advancing carousels -- users must be able to extend or disable the timeout. This is critical for users with motor or cognitive disabilities who need more time to complete tasks.

WCAG requires that time limits can be turned off, adjusted, or extended, with at least 20 seconds to request an extension. When building any timed feature, ask the AI to include the accessibility safeguards:

"Add a timeout warning dialog that appears 30 seconds before session expiration, with a button to extend the session, and respect the system's accessibility preference to disable auto-timeout where available."

2.3 Seizures and Physical Reactions

Content must not flash more than three times per second. This is both a WCAG requirement and an Apple App Store guideline. WCAG 2.2 extends this to physical reactions -- vestibular motion sensitivity triggered by parallax scrolling, zooming animations, or moving backgrounds.

Apple's HIG explicitly calls for respecting the "Reduce Motion" accessibility setting (UIAccessibility.isReduceMotionEnabled in UIKit, accessibilityReduceMotion in SwiftUI's @Environment). Google provides Settings.Global.ANIMATOR_DURATION_SCALE, which users can set to zero to disable animations.

When generating any animation, prompt the AI:

"Implement this transition with a reduced-motion alternative. When the user has Reduce Motion enabled, replace the animation with a simple crossfade or instant transition."

The AI generates the conditional logic that checks the system preference and provides the alternative, a pattern that should be applied to every animated transition in your app.
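In SwiftUI, the conditional typically reads the environment value directly. A sketch under those assumptions (view content illustrative):

```swift
import SwiftUI

struct DetailReveal: View {
    @Environment(\.accessibilityReduceMotion) private var reduceMotion
    @State private var showDetail = false

    var body: some View {
        VStack {
            Button("Show detail") {
                if reduceMotion {
                    // Instant state change: no spring, no movement.
                    showDetail = true
                } else {
                    withAnimation(.spring()) { showDetail = true }
                }
            }
            if showDetail {
                Text("Detail content")
                    // Crossfade when Reduce Motion is on; slide otherwise.
                    .transition(reduceMotion ? .opacity : .move(edge: .bottom))
            }
        }
    }
}
```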

2.4 Navigable

Users must be able to find content and know where they are. This encompasses page titles, focus order, link purpose, multiple ways to find content, headings, and visible focus indicators.

Focus management is one of the most commonly neglected accessibility concerns in mobile apps. When a modal appears, focus should move to it. When it dismisses, focus should return to the trigger. When a screen loads, focus should land on a logical starting point. Both Apple and Google provide APIs for programmatic focus control, but they're rarely used correctly.

Ask the AI to:

"Implement focus management for this modal dialog: move VoiceOver/TalkBack focus to the dialog title on presentation, trap focus within the dialog while it's visible, and return focus to the triggering button on dismissal."

The AI generates the platform-specific implementation -- UIAccessibility.post(notification: .screenChanged, argument: dialogTitle) on iOS, AccessibilityEvent.TYPE_WINDOW_STATE_CHANGED on Android -- that makes the experience coherent for assistive technology users.
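A UIKit sketch of that contract (class and property names are illustrative):

```swift
import UIKit

final class ConfirmDialogController: UIViewController {
    private let dialogTitleLabel = UILabel()
    /// The control that presented the dialog, kept so focus can return to it.
    weak var triggerButton: UIButton?

    override func viewDidLoad() {
        super.viewDidLoad()
        // Trap assistive-technology focus inside the dialog while visible.
        view.accessibilityViewIsModal = true
    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Move VoiceOver focus to the dialog title on presentation.
        UIAccessibility.post(notification: .screenChanged, argument: dialogTitleLabel)
    }

    override func viewDidDisappear(_ animated: Bool) {
        super.viewDidDisappear(animated)
        // Return focus to the triggering button on dismissal.
        UIAccessibility.post(notification: .screenChanged, argument: triggerButton)
    }
}
```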

Focus indicators received significant attention in WCAG 2.2 with two new success criteria: Focus Not Obscured (2.4.11, AA) requires that the focused element isn't fully hidden by other content, and Focus Appearance (2.4.13, AAA) specifies a minimum visible focus indicator. When building custom components, tell the AI to:

"Ensure all interactive elements show a clearly visible focus ring when focused via keyboard or switch control, with a minimum 2px outline that contrasts at 3:1 against both the component and the background."

2.5 Input Modalities

Users interact through various input methods beyond traditional touch: voice, switch, stylus, head tracking. WCAG 2.5 covers pointer gestures, pointer cancellation, label in name, motion actuation, and -- new in 2.2 -- target size and dragging movements.

Target size (2.5.8, AA in WCAG 2.2) requires that interactive targets are at least 24x24 CSS pixels, with Apple's HIG recommending 44x44 points and Google specifying 48x48 dp as the minimum touch target. This is a measurable, enforceable standard that AI can validate automatically.

Ask the AI to:

"Audit all interactive elements in this screen for minimum touch target size. Flag any button, link, toggle, or interactive area smaller than 44x44pt (iOS) or 48x48dp (Android). For undersized elements, suggest a hit area expansion using .frame(minWidth: 44, minHeight: 44) or Modifier.sizeIn(min = 48.dp)."

The AI scans the layout and identifies every violation, generating the fix inline.
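The size check itself is trivial to express as a pure function that a generated audit or unit test can reuse. A Swift sketch (the helper names are invented for illustration):

```swift
import Foundation

/// Minimum touch target side in points, per Apple's HIG (Material
/// specifies 48dp on Android).
let minimumTargetSide: CGFloat = 44

/// True when a frame meets the minimum touch target size.
func meetsMinimumTouchTarget(_ frame: CGRect,
                             minimumSide: CGFloat = minimumTargetSide) -> Bool {
    frame.width >= minimumSide && frame.height >= minimumSide
}

/// Audit helper: returns the frames that fail the check.
func undersizedTargets(in frames: [CGRect]) -> [CGRect] {
    frames.filter { !meetsMinimumTouchTarget($0) }
}
```

A layout audit then reduces to collecting interactive elements' frames and asserting that undersizedTargets is empty.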

Dragging alternatives (2.5.7, AA in WCAG 2.2) requires that any action achievable through dragging can also be achieved through a single pointer action. If your app has drag-to-reorder lists, drag-and-drop interfaces, or slider-based inputs, each needs an alternative. Ask the AI to:

"Add a non-dragging alternative for this reorder list -- a context menu with 'Move Up' and 'Move Down' options on each item, accessible via long press and through VoiceOver's custom actions."


Principle 3: Understandable

Information and UI operation must be understandable. This covers readable text, predictable behavior, and input assistance.

3.1 Readable

The language of the page and any changes in language must be programmatically determinable. This allows screen readers to switch pronunciation rules automatically. On iOS, set accessibilityLanguage on elements with foreign-language text. On Android, use LocaleSpan in text or set the locale on accessibility nodes.

AI assistants handle this well because it's a mechanical annotation task. When your app contains mixed-language content -- a recipe app with French dish names, a travel app with local place names -- ask the AI to:

"Annotate all foreign-language text elements with their correct language code for screen reader pronunciation."

The AI generates the appropriate accessibilityLanguage or LocaleSpan for each element.
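In SwiftUI, one way to express this is through AttributedString's languageIdentifier attribute, which VoiceOver uses to select pronunciation rules. A small sketch (the dish name is illustrative):

```swift
import SwiftUI

struct DishRow: View {
    // Tag the French dish name so VoiceOver pronounces it with
    // French rules while the rest of the app stays in English.
    private var dishName: AttributedString {
        var name = AttributedString("Coq au vin")
        name.languageIdentifier = "fr"  // BCP 47 language code
        return name
    }

    var body: some View {
        Text(dishName)
    }
}
```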

3.2 Predictable

Interfaces should behave consistently. Navigation should be consistent across screens. Focus changes should not trigger unexpected context changes. Form inputs should not submit or navigate automatically when a selection is made.

Both Apple and Google enforce this through their design guidelines. Apple's HIG recommends consistent placement of navigation elements and predictable responses to gestures. Material Design's principles emphasize that actions should have clear, expected outcomes.

When generating navigation and form code, instruct the AI:

"Never auto-submit on selection. Never navigate on focus change. Always require an explicit user action (tap, press, submit) to trigger state changes or navigation."

The AI builds these safeguards into the interaction logic, preventing the kind of surprise context changes that disorient all users and devastate assistive technology users.

3.3 Input Assistance

When users make errors, the error must be identified and described in text. Where possible, the app should suggest corrections. Where input has legal or financial consequences, submissions should be reversible, verifiable, or confirmable.

WCAG 2.2 adds two important criteria here. Redundant Entry (3.3.7, A) requires that information the user has already provided is either auto-populated or available for selection, reducing repetitive data entry. Accessible Authentication (3.3.8, AA) requires that authentication doesn't depend on cognitive function tests -- no CAPTCHA puzzles, no memory-dependent password requirements -- with alternatives like biometric login, passkeys, or email-based verification.

Apple's ecosystem strongly supports accessible authentication through Face ID, Touch ID, and passkeys. Google provides Credential Manager and biometric authentication APIs. When building login flows, ask the AI to:

"Implement authentication with biometric primary, passkey fallback, and email magic link as the final alternative -- no CAPTCHA, no cognitive tests, with clear error messages that describe what went wrong and how to fix it."

For form validation broadly, the AI generates accessible error handling fluently. Prompt:

"Add inline validation to this form. When a field fails validation, display the error message directly below the field, associate it programmatically with the field using accessibilityValue or Modifier.semantics { error() }, and move VoiceOver/TalkBack focus to the first error when the user attempts to submit."

The result is an error experience that works equally well for sighted and non-sighted users.
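A SwiftUI sketch of the inline-validation pattern (the field and validation logic are illustrative; a production version would also move assistive-technology focus to the first error on submit):

```swift
import SwiftUI

struct EmailField: View {
    @State private var email = ""
    @State private var errorMessage: String?

    var body: some View {
        VStack(alignment: .leading, spacing: 4) {
            TextField("Email address", text: $email)
                // Surface the error through VoiceOver along with the value,
                // so the field itself announces what's wrong.
                .accessibilityValue(errorMessage.map { "Error: \($0)" } ?? email)

            if let errorMessage {
                // Visible error text directly below the field.
                Text(errorMessage)
                    .font(.footnote)
                    .foregroundStyle(.red)
            }
        }
    }

    func validate() {
        errorMessage = email.contains("@") ? nil : "Enter a valid email address"
    }
}
```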


Principle 4: Robust

Content must be robust enough to be interpreted reliably by a wide variety of user agents, including assistive technologies. In practice, this means your UI components must correctly expose their roles, names, values, and states to the platform's accessibility API.

4.1 Compatible

Every custom component must have a correct accessibility role, a meaningful name, and dynamically updated state information. A custom toggle must announce itself as a toggle, report whether it's on or off, and announce its state change when activated. A custom dropdown must announce itself as a popup, report the currently selected value, and describe how to interact with it.

Apple provides the UIAccessibilityTraits system (.button, .header, .adjustable, .selected, etc.) and SwiftUI's .accessibilityAddTraits() modifier. Google provides AccessibilityNodeInfo.setClassName() and Compose's semantics { role = Role.Switch } for role mapping, with stateDescription for custom state announcements.

This is perhaps the accessibility area where AI assistance has the highest leverage. Every custom component needs a handful of accessibility annotations, and getting them wrong means the component is invisible or confusing to assistive technology users. When building custom components, make the prompt explicit:

"Create this custom star-rating control. It must announce as 'Rating: 3 out of 5 stars' to VoiceOver, support increment/decrement with swipe gestures, update its announcement dynamically when the value changes, and include a hint explaining how to adjust the rating."

The AI generates the full accessibility implementation alongside the visual implementation, treating them as inseparable. This is the mindset shift that makes WCAG compliance sustainable: accessibility semantics are not added later -- they're part of the component's definition.
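A SwiftUI sketch of such a star-rating control, carrying the accessibility contract the prompt describes (visuals kept minimal):

```swift
import SwiftUI

struct StarRating: View {
    @Binding var rating: Int  // 1...5

    var body: some View {
        HStack {
            ForEach(1...5, id: \.self) { star in
                Image(systemName: star <= rating ? "star.fill" : "star")
                    .onTapGesture { rating = star }
            }
        }
        // Present the five stars as one adjustable control.
        .accessibilityElement(children: .ignore)
        .accessibilityLabel("Rating")
        // Announcement updates dynamically as the value changes.
        .accessibilityValue("\(rating) out of 5 stars")
        .accessibilityHint("Swipe up or down to adjust the rating")
        .accessibilityAdjustableAction { direction in
            switch direction {
            case .increment: rating = min(rating + 1, 5)
            case .decrement: rating = max(rating - 1, 1)
            @unknown default: break
            }
        }
    }
}
```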

The Status Messages criterion (4.1.3, AA, introduced in WCAG 2.1 and carried forward in 2.2) requires that status updates -- success confirmations, loading indicators, error counts, search result counts -- are announced to screen readers without receiving focus. On iOS, use UIAccessibility.post(notification: .announcement, argument: message). On Android, use live regions with ViewCompat.setAccessibilityLiveRegion(). In Compose, use Modifier.semantics { liveRegion = LiveRegionMode.Polite }.

Prompt the AI:

"Whenever this list finishes loading, announce the result count to VoiceOver/TalkBack without moving focus. Use a polite announcement so it doesn't interrupt the user's current context."

The AI generates the appropriate platform-specific live region or announcement call.
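On iOS, the UIKit announcement call is a one-liner that can also be invoked from SwiftUI code. A sketch (function name and message wording are illustrative):

```swift
import UIKit

/// Announce a status update to VoiceOver without moving focus.
/// The message is spoken after the user's current announcement finishes.
func announceResultCount(_ count: Int) {
    UIAccessibility.post(
        notification: .announcement,
        argument: "\(count) results loaded"
    )
}
```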


Beyond WCAG: Platform-Specific Accessibility Features

WCAG provides the floor. Apple and Google each build significantly above it with platform-specific features that your app should support.

Apple's Accessibility Ecosystem

Apple's accessibility toolkit goes deep, and the HIG provides specific guidance for each feature.

VoiceOver is the screen reader, and it's the primary way blind and low-vision users interact with iOS apps. Beyond basic labeling, VoiceOver supports custom actions (.accessibilityAction), custom rotor items (.accessibilityRotor) for navigating between specific elements like headings, links, or custom categories, and custom content descriptions (.accessibilityCustomContent) for providing additional detail without cluttering the primary label.

Dynamic Type goes beyond WCAG's 200% text resize requirement -- Apple's accessibility sizes can reach 300% or more. Your layouts must accommodate this without truncation, overlap, or loss of functionality. Ask the AI to stress-test:

"How does this layout behave at the largest Dynamic Type accessibility size? Identify any text that would truncate, any layouts that would overlap, and any scrollable areas that might become unreachable."

Reduce Motion, Reduce Transparency, Increase Contrast, Differentiate Without Color, Bold Text -- Apple provides a suite of display preferences that users can enable individually. Each has a corresponding API check, and your app should respect all of them. The AI can generate a centralized accessibility preferences manager:

"Create a utility that observes all of Apple's accessibility display preferences and exposes them as reactive properties that my views can bind to."

Assistive Access, introduced in iOS 17, simplifies the entire device interface for users with cognitive disabilities. Apps that follow accessibility standards generally work in this mode, but the AI can help you verify:

"Review this app's navigation structure for Assistive Access compatibility. Are the primary functions accessible within two taps? Are labels clear and concise? Are there any interaction patterns that require complex gestures?"

Google's Accessibility Ecosystem

TalkBack is Android's screen reader equivalent. It shares the same semantic requirements as VoiceOver -- labels, roles, states, traversal order -- but uses Android-specific APIs. The AI generates TalkBack-compatible code by default when using standard Compose or View components, but custom components need explicit annotation. Google's guidelines specifically recommend testing every user flow end-to-end with TalkBack enabled and adjusting the speech speed to catch issues with announcement verbosity.

Switch Access allows interaction through one or more physical switches, and the AI can help you verify that all interactive elements are reachable through switch scanning:

"Audit this screen for Switch Access compatibility. Ensure every interactive element is focusable, that the focus order is logical, and that no actions require gestures unavailable through switch scanning."

Live Captions, Sound Amplifier, Select to Speak -- Android provides system-level accessibility features that your app should not interfere with. The AI helps by generating code that respects system accessibility service states and avoids overriding system accessibility behaviors.

Material Design's accessibility audit checklist specifically recommends testing with TalkBack at 2x speed, verifying touch targets, checking color contrast with the Accessibility Scanner app, and using Layout Inspector to verify the accessibility tree. The AI can generate automated test scripts that replicate these manual checks.


Other Accessibility Resources and Standards

The European Accessibility Act (EAA)

Effective June 2025, the EAA requires that products and services sold in EU member states meet accessibility standards based on EN 301 549, which references WCAG 2.1 and is expected to adopt WCAG 2.2. If your app serves European users, WCAG AA compliance is not optional -- it's a legal requirement. AI assistants can help you map your app's current state against EN 301 549's requirements and generate a remediation plan.

Section 508 (United States)

Section 508 of the Rehabilitation Act requires federal agencies and their contractors to make electronic and information technology accessible. It references WCAG 2.0 Level AA, with movement toward WCAG 2.1/2.2 adoption. If your app targets government users or receives federal funding, the AI can generate the compliance documentation alongside the code fixes.

WAI-ARIA for Hybrid and Web-Based Apps

If your app uses web views, hybrid rendering (React Native Web, Flutter Web), or embedded HTML content, WAI-ARIA (Accessible Rich Internet Applications) roles and attributes become critical. The AI generates ARIA-compliant markup when producing web content: semantic HTML elements, role attributes for custom widgets, aria-label and aria-describedby for labeling, aria-live for dynamic content announcements, and aria-expanded/aria-controls for interactive disclosure patterns.

The BBC Mobile Accessibility Guidelines

The BBC publishes one of the most thorough mobile-specific accessibility guideline sets, covering areas where WCAG's web-centric language requires interpretation for native apps. It's an excellent supplementary resource, and the AI can help you cross-reference:

"Compare my app's current accessibility implementation against the BBC Mobile Accessibility Guidelines and identify any gaps not already covered by my WCAG AA compliance work."

The Inclusive Design Principles

Microsoft's Inclusive Design framework -- Recognize Exclusion, Learn from Diversity, Solve for One, Extend to Many -- provides a philosophical complement to WCAG's technical specifications. While WCAG tells you what to build, Inclusive Design tells you why and for whom. AI assistants can help operationalize these principles by generating persona-driven test scenarios:

"Create a test plan that walks through the checkout flow from the perspective of a user with low vision using magnification, a user with motor impairment using Switch Control, and a user with cognitive disability who needs simple clear language."


Building Accessibility From the Ground Up

The AI-Assisted Accessibility Architecture

If you're starting a new project, accessibility should be a first-class architectural concern, not a layer added after visual design is complete.

Step 1: Define your semantic component library. Before writing any feature code, ask the AI to generate a base component library where every component includes accessibility semantics by default:

"Create a ButtonComponent, TextFieldComponent, CardComponent, and ListItemComponent. Each must include configurable accessibility labels, correct roles, state announcements for dynamic changes, and minimum touch target enforcement. Make it impossible to instantiate a ButtonComponent without providing an accessibility label."

The "impossible without a label" constraint is powerful. By making the accessibility label a required parameter (not optional with a default), you eliminate the most common category of violation: forgotten labels. The AI generates the API, and the compiler enforces it.
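A SwiftUI sketch of the idea (component name and API are illustrative): because the label parameter has no default value, omitting it is a compile error rather than a runtime accessibility gap.

```swift
import SwiftUI

struct LabeledIconButton: View {
    let systemImage: String
    let accessibilityLabel: String   // required -- no default value
    let action: () -> Void

    var body: some View {
        Button(action: action) {
            Image(systemName: systemImage)
        }
        .accessibilityLabel(accessibilityLabel)
        // Enforce the minimum touch target at the component level.
        .frame(minWidth: 44, minHeight: 44)
    }
}

// Compiles:
//   LabeledIconButton(systemImage: "trash",
//                     accessibilityLabel: "Delete note") { }
// Does not compile -- the label was forgotten:
//   LabeledIconButton(systemImage: "trash") { }
```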

Step 2: Build your color system with contrast validation. Ask the AI to generate a theme system where every color token pair (text on surface, icon on background, etc.) is validated against WCAG contrast ratios at definition time:

"Create a color theme system that validates contrast at initialization. If any foreground/background pair fails the 4.5:1 text ratio or 3:1 non-text ratio, log a warning in debug builds and throw an assertion failure in test builds."

This makes contrast violations impossible to ship without deliberately suppressing the check.
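A Swift sketch of such a validator (type and function names are invented; the contrast math follows the WCAG 2.x formula):

```swift
import Foundation

struct ColorPair {
    let name: String
    let foreground: (r: Double, g: Double, b: Double)  // sRGB, 0...1
    let background: (r: Double, g: Double, b: Double)
    let isText: Bool  // text pairs need 4.5:1; non-text needs 3:1
}

/// Checks every pair and returns human-readable failure descriptions,
/// so debug builds can log them and test builds can assert the list is empty.
func contrastFailures(in pairs: [ColorPair]) -> [String] {
    func luminance(_ c: (r: Double, g: Double, b: Double)) -> Double {
        func lin(_ v: Double) -> Double {
            v <= 0.03928 ? v / 12.92 : pow((v + 0.055) / 1.055, 2.4)
        }
        return 0.2126 * lin(c.r) + 0.7152 * lin(c.g) + 0.0722 * lin(c.b)
    }
    return pairs.compactMap { pair in
        let l1 = luminance(pair.foreground)
        let l2 = luminance(pair.background)
        let ratio = (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)
        let required = pair.isText ? 4.5 : 3.0
        return ratio < required
            ? "\(pair.name): \(String(format: "%.2f", ratio)):1, needs \(required):1"
            : nil
    }
}
```

Calling contrastFailures in the theme initializer, behind an assert in test builds, turns every contrast regression into a failing build.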

Step 3: Integrate accessibility testing into CI. The AI can generate automated accessibility test suites that run alongside your unit tests. For iOS, this means XCTest's accessibility audits (XCUIApplication's performAccessibilityAudit(), available from Xcode 15) or third-party tools like AccessibilitySnapshot. For Android, this means Espresso's accessibility checks or Compose's semantics testing. For Flutter, this means the Semantics widget assertions in widget tests.

Prompt:

"Generate a test suite that verifies every screen in my app has no missing accessibility labels, no touch targets smaller than 44x44pt, no insufficient contrast ratios in the current theme, and that the VoiceOver traversal order matches the visual reading order."

The AI generates the tests, and CI catches regressions before they reach users.

Step 4: Create an accessibility overlay for development. Drawing from the first article in this series, build a debug overlay that visualizes accessibility information during development: element labels, touch target boundaries, contrast ratios, focus order numbers, and semantic roles. The AI generates this overlay as a diagnostic tool that makes accessibility visible to every developer on the team, not just those who remember to test with VoiceOver.

Automated Accessibility Auditing With AI

Beyond generating accessible code, AI assistants can serve as continuous auditors.

Code review for accessibility. When reviewing any PR, paste the code and ask:

"Audit this code for WCAG 2.2 AA compliance. Check for missing accessibility labels, incorrect or missing roles, hardcoded text sizes, color-only state indicators, missing focus management in modal presentations, and touch targets below minimum size. For each violation, explain the WCAG criterion, the platform guideline it violates, and provide the fix."

Screen-by-screen audit. For existing apps, take screenshots or describe screens and ask the AI to identify potential violations:

"This screen shows a product grid with images, titles, prices, and a filter button. The filter uses a slider for price range. What WCAG 2.2 AA violations are likely present, and how should each be addressed?"

Accessibility test data generation. The AI generates test strings that stress accessibility edge cases: extremely long labels (to test truncation), right-to-left text (to test BiDi support), strings with special characters, and localized text at maximum length (German and Finnish strings that test layout expansion):

"Generate a set of test strings in 10 languages for this product name field, including the longest reasonable translation, to verify my layout doesn't break with Dynamic Type at maximum size."
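A generator for these stress cases can be a few lines of code. The sample strings below are illustrative placeholders, not real translations:

```python
# Stress-test string generator for layout and accessibility edge cases.
# The sample strings are illustrative placeholders, not real translations.

def stress_strings(base_label: str) -> dict[str, str]:
    return {
        # German-style compound expansion to test truncation and wrapping
        "long_expansion": base_label + " " + "Lieferadressenbestaetigung" * 3,
        # Arabic sample wrapped in explicit RTL embedding marks (BiDi test)
        "rtl": "\u202b" + "\u0645\u0646\u062a\u062c \u062a\u062c\u0631\u064a\u0628\u064a" + "\u202c",
        # Markup-sensitive and typographic characters
        "special_chars": base_label + " <&\"'> \u2013 \u2026 \u00a9",
        # Combining accents: each 'e' + combining acute is two code points
        "combining": "e\u0301" * 10,
        # Missing-translation fallback path
        "empty": "",
    }
```

Each entry exercises a different failure mode: truncation, bidirectional layout, escaping, grapheme-cluster handling, and empty-string fallbacks.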


Migrating an Existing App to WCAG Compliance

For established apps, the path to compliance is a structured migration, not a single sprint. AI assistants make each phase faster and more systematic.

Phase 1: Audit and Triage

Run automated scanning tools (Xcode Accessibility Inspector, Android Accessibility Scanner, axe for web views) to establish a baseline. Then feed the results to the AI:

"Here are 47 accessibility violations from our automated scan. Categorize them by WCAG principle, conformance level (A vs AA vs AAA), severity, estimated effort to fix (small/medium/large), and suggest an order of remediation that maximizes user impact per hour of work."

The AI produces a prioritized backlog that puts high-impact, low-effort fixes first (missing labels on primary buttons, insufficient contrast on key text) and sequences the larger structural work (keyboard navigation, focus management, semantic restructuring) appropriately.
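The ordering logic the AI applies reduces to impact per unit of effort. A sketch with hypothetical field names and scores:

```python
# Triage sketch: order violations by user impact per hour of effort.
# Field names, scores, and hour estimates are hypothetical placeholders.

EFFORT_HOURS = {"small": 1, "medium": 4, "large": 16}

def prioritize(violations: list[dict]) -> list[dict]:
    """Sort so high-impact, low-effort fixes come first."""
    return sorted(
        violations,
        key=lambda v: v["impact"] / EFFORT_HOURS[v["effort"]],
        reverse=True,
    )

backlog = [
    {"id": "missing-label-buy-button", "impact": 9, "effort": "small"},
    {"id": "keyboard-nav-settings", "impact": 7, "effort": "large"},
    {"id": "low-contrast-footer", "impact": 3, "effort": "small"},
]
```

With these numbers, the missing label on the buy button (impact 9, one hour) outranks the multi-day keyboard navigation work, which is exactly the sequencing a remediation plan should produce.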

Phase 2: Foundation Fixes

Address the violations that affect the entire app: color contrast across the theme, text scaling support, missing language declarations, and the semantic component library. These are foundational because they propagate to every screen.

Ask the AI to generate the fixes at the system level:

"Update my color theme to meet WCAG AA contrast ratios. Here are my current tokens -- for any pair that fails, suggest the minimum adjustment to the lighter or darker color that achieves compliance while preserving the brand identity."

The AI makes mathematically precise adjustments, not guesses.

Phase 3: Screen-by-Screen Remediation

Work through each screen with the AI as a pair programmer. For each screen, describe its purpose and paste its code. Ask the AI to add complete accessibility annotations: labels, roles, traits, groupings, headings, focus order, custom actions, and live region announcements. Review the output, test with VoiceOver/TalkBack, and refine.

This is where the conversational workflow is most valuable:

"VoiceOver is reading this card's elements in the wrong order -- it reads the price before the product name. Reorder the accessibility elements so the name comes first, then the price, then the rating."

The AI adjusts the semantic ordering without changing the visual layout.

Phase 4: Automated Regression Prevention

Once compliance is achieved, it must be maintained. The AI generates the CI tests, linting rules, and code review checklists that prevent regression. A custom lint rule that flags missing accessibility labels on new components catches violations at the developer's desk, not in a quarterly audit.


AI Automation Workflows for Ongoing Compliance

Pre-Commit Accessibility Linting

Ask the AI to create a pre-commit hook or CI step that runs static analysis for common accessibility violations. On iOS, this might parse SwiftUI files for Image() calls missing accessibility modifiers. On Android, it might check Compose code for Image() composables without contentDescription. On Flutter, it might verify that Image and Icon widgets include a semanticLabel or are wrapped in ExcludeSemantics.
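As a concrete example, here is a sketch of the iOS variant: a heuristic scan for SwiftUI Image() calls with no accessibility modifier on the same statement. A regex pass, not a full Swift parser, which is typically good enough for a pre-commit gate:

```python
# Pre-commit sketch: flag SwiftUI Image(...) calls with no accessibility
# modifier nearby. Heuristic regex scan, not a full Swift parser.
import re

IMAGE_CALL = re.compile(r'\bImage\s*\(')
A11Y_MODIFIER = re.compile(r'\.accessibility(Label|Hidden)\s*\(')

def lint_swiftui(source: str) -> list[int]:
    """Return 1-based line numbers of Image() calls whose statement (this
    line or the next two, where chained modifiers usually land) lacks an
    accessibility modifier."""
    lines = source.splitlines()
    flagged = []
    for i, line in enumerate(lines):
        if IMAGE_CALL.search(line):
            window = "\n".join(lines[i:i + 3])
            if not A11Y_MODIFIER.search(window):
                flagged.append(i + 1)
    return flagged
```

A three-line window will miss modifiers chained further down and can be fooled by comments; the point of the sketch is the shape of the check, which the AI can extend into a syntax-aware version (e.g., via SwiftSyntax) if the heuristic proves too noisy.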

Accessibility Snapshot Testing

Visual regression testing catches unintended layout changes. Accessibility snapshot testing catches unintended semantic changes. The AI can help you build a snapshot testing infrastructure that captures the accessibility tree (not the visual rendering) of each screen and compares it against a baseline. If a label changes, a trait is removed, or the traversal order shifts, the test fails.
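The mechanism is straightforward: serialize the semantic tree to a stable text form and compare it against a committed baseline. The node shape below is a hypothetical simplification of what platform tools actually expose:

```python
# Accessibility-tree snapshot sketch: serialize the semantic tree (not the
# pixels) and diff it against a stored baseline. The node shape is a
# hypothetical simplification of what platform tools expose.
import json

def snapshot(tree: dict) -> str:
    """Stable serialization: dict keys are sorted, and traversal order is
    preserved by list order, so reordering elements changes the snapshot."""
    return json.dumps(tree, sort_keys=True, indent=2)

def matches_baseline(baseline: str, current: str) -> bool:
    """True when the semantic tree is unchanged."""
    return baseline == current

checkout = {
    "screen": "Checkout",
    "elements": [
        {"label": "Product name", "role": "header"},
        {"label": "Price, $12.99", "role": "staticText"},
        {"label": "Buy", "role": "button"},
    ],
}
```

Because element order is part of the serialized form, a change in VoiceOver traversal order fails the comparison even when every label and role is individually intact.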

Continuous Monitoring in Production

For production apps, the AI can help integrate lightweight accessibility telemetry: tracking which screens users access via VoiceOver/TalkBack (via platform checks such as UIAccessibility.isVoiceOverRunning on iOS or AccessibilityManager on Android), monitoring crash rates segmented by assistive technology usage, and flagging screens where assistive technology users show significantly higher abandonment rates. This data drives an informed, ongoing improvement cycle.

Automated Documentation Generation

Compliance documentation -- VPAT (Voluntary Product Accessibility Template) reports, conformance statements, remediation logs -- is required by many enterprise customers and government procurement processes. The AI generates these documents from your test results:

"Based on our accessibility test suite output, generate a VPAT 2.5 (WCAG edition) report documenting our conformance level for each WCAG 2.2 success criterion, with explanations for any partial conformance items."


Advanced Considerations

Cognitive Accessibility

WCAG 2.2 includes several criteria that address cognitive accessibility -- Consistent Help (3.2.6, A), Redundant Entry (3.3.7, A), and Accessible Authentication (3.3.8, AA) -- but the broader field of cognitive accessibility goes further. Clear language, simple navigation, consistent layout, forgiving error handling, and predictable behavior all contribute to an experience that works for users with cognitive disabilities, learning disabilities, and neurodivergent users.

AI assistants can evaluate your UI text for clarity:

"Review all user-facing strings in this app for readability. Flag any instructions that use jargon, double negatives, or complex sentence structures. Suggest simplified alternatives that maintain the same meaning."

The AI produces plain-language rewrites that benefit all users, not just those with cognitive disabilities.

Haptic and Multi-Sensory Feedback

Both Apple and Google encourage multi-sensory feedback -- haptics for confirmations, sounds for alerts, visual animations for state changes. The key principle is that no single sensory channel should be the only way information is conveyed. The AI can audit your feedback patterns:

"Review all user feedback in this app -- haptics, sounds, visual indicators, text messages. For each feedback event, verify that information is conveyed through at least two sensory channels."
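The audit itself is a small invariant check once feedback events are cataloged. The event names and channel sets below are hypothetical:

```python
# Feedback-channel audit sketch: verify every feedback event reaches the
# user through at least two sensory channels. Event names are hypothetical.

FEEDBACK = {
    "payment_success": {"haptic", "visual", "announcement"},
    "item_added": {"visual", "sound"},
    "sync_error": {"sound"},  # single-channel: flagged below
}

def single_channel_events(feedback: dict[str, set]) -> list[str]:
    """Return event names conveyed through fewer than two channels."""
    return sorted(name for name, channels in feedback.items()
                  if len(channels) < 2)
```

Keeping the catalog in code means the check runs in CI: adding a new feedback event without a second channel fails the build rather than surfacing in an audit months later.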

Localization and Accessibility Intersection

Accessibility and internationalization intersect more than most teams realize. Screen readers need correct language attributes to pronounce text properly. Right-to-left languages require not just mirrored layouts but mirrored reading order in the accessibility tree. Currency, date, and number formatting must be both visually correct and correctly announced by assistive technology.

The AI handles these intersections by generating code that is simultaneously localization-aware and accessibility-aware:

"Create a price display component that formats the amount according to the user's locale, announces the full amount with currency name (not symbol) to VoiceOver, and reads correctly in both LTR and RTL layouts."
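The core of such a component is producing two parallel strings: one visual, one spoken. The currency table and formatting below are hypothetical simplifications; real code would use the platform's locale and number-format APIs:

```python
# Locale-aware price sketch: one visual string, one spoken string that
# names the currency instead of using its symbol. The currency table and
# formatting rules are hypothetical simplifications; real code would use
# the platform's locale/number-format APIs.

CURRENCY_NAMES = {"USD": ("dollar", "dollars"), "EUR": ("euro", "euros")}

def visual_price(amount: float, symbol: str) -> str:
    """What the user sees, e.g. '$12.99'."""
    return f"{symbol}{amount:,.2f}"

def spoken_price(amount: float, code: str) -> str:
    """What the screen reader announces, e.g. '12.99 dollars' --
    the currency name, never the symbol."""
    singular, plural = CURRENCY_NAMES[code]
    name = singular if amount == 1 else plural
    return f"{amount:,.2f} {name}"
```

Splitting the visual and accessible representations at the component level is the design choice that matters: the screen reader string is derived alongside the display string, not reverse-engineered from it.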

Accessibility in Emerging Interaction Paradigms

If you're building for visionOS (spatial computing), watchOS (glanceable interfaces), or Android Automotive (driving contexts), accessibility requirements adapt to the medium. Apple's HIG provides visionOS-specific accessibility guidance around spatial audio, gaze-based interaction, and hand tracking alternatives. Google's Automotive design guidelines address voice-first interaction patterns.

The AI helps you translate WCAG principles to these new paradigms:

"How do the WCAG 2.2 perceivable and operable principles apply to a visionOS app where the primary interaction is gaze and pinch? What accessibility alternatives should I provide for users who can't use gaze tracking?"


A Practical AI Prompt Library for Accessibility

Here are prompt patterns you can use immediately with any AI coding assistant:

For new component creation:

"Create [component] with full accessibility support: labels, roles, traits, minimum touch targets, Dynamic Type support, and Reduce Motion alternatives. Make the accessibility label a required parameter."

For screen auditing:

"Audit this screen's code for WCAG 2.2 AA compliance. Check labels, contrast, touch targets, focus order, keyboard accessibility, error identification, and status announcements. List each violation with its WCAG criterion number and a code fix."

For migration planning:

"Given this list of accessibility violations, create a prioritized remediation plan ordered by user impact per engineering hour. Group fixes by type (theme-level, component-level, screen-level) and estimate effort for each."

For test generation:

"Generate accessibility tests for this screen: verify all images have labels, all interactive elements meet minimum touch target size, the focus order matches visual reading order, and all error states are announced to screen readers."

For documentation:

"Generate a VPAT 2.5 conformance report for this app based on the following accessibility test results. For each WCAG 2.2 AA criterion, report the conformance level and provide an explanation."


Conclusion

WCAG compliance has always been the right thing to do. It has increasingly become the legally required thing to do. And now, with AI coding assistants, it has become the easy thing to do.

The pattern is consistent across every WCAG principle. The requirements are well-specified. The platform APIs exist. The implementation is structured, repetitive, and pattern-driven -- exactly the kind of work AI handles best. What remained was the economic gap: the time and expertise required to apply the specification consistently, across every component, every screen, every interaction, in every app.

That gap is closed. An AI assistant that generates accessible components by default, audits existing code for violations, produces automated tests for regression prevention, and generates compliance documentation from test results gives a solo developer the same accessibility capability that previously required a dedicated specialist.

The question is no longer whether you can afford to make your app accessible. It's whether you can justify not doing it, when the cost has dropped to nearly zero and the tooling has never been better.

Build it accessible from the start. Your AI assistant is ready when you are.