Core library for running web accessibility audits with ta11y.
## Install

```bash
npm install --save @ta11y/core
```
## Usage

The easiest way to use this package is via the CLI.
```js
const { Ta11y } = require('@ta11y/core')

const ta11y = new Ta11y()

ta11y.audit('https://en.wikipedia.org')
  .then((results) => {
    console.log(JSON.stringify(results, null, 2))
  })
```
Alternatively, you can tell ta11y to crawl additional pages starting from the root page.
```js
ta11y.audit('https://en.wikipedia.org', {
  crawl: true,
  maxDepth: 1,
  maxVisit: 64
})
```
If you want to crawl non-public pages, pass an instance of Puppeteer via the `browser` option. This is useful for testing sites in development or behind corporate firewalls.
```js
const puppeteer = require('puppeteer')

const browser = await puppeteer.launch()

ta11y.audit('http://localhost:3000', {
  browser,
  crawl: true,
  maxDepth: 0
})
```
You can also pass HTML directly to audit (whole pages or fragments).
```js
ta11y.audit('<!doctype html><html><body><h1>I ❤ accessibility</h1></body></html>')
```
The free tier is subject to rate limits as well as a 60-second timeout, so if you're crawling a larger site, you're better off running content extraction locally.

If you're processing a website that isn't publicly accessible (like `localhost`), then you must perform content extraction locally.
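Here's a minimal sketch of that two-step flow, using the `extract` and `auditExtractResults` methods documented below (the localhost URL is a placeholder):

```js
const puppeteer = require('puppeteer')
const { Ta11y } = require('@ta11y/core')

const ta11y = new Ta11y()

async function auditLocalSite() {
  // Run content extraction locally via your own Puppeteer instance,
  // so the remote API never needs to reach the site directly.
  const browser = await puppeteer.launch()
  const extractResults = await ta11y.extract('http://localhost:3000', {
    browser,
    crawl: true
  })
  await browser.close()

  // Only the extracted content is sent to the remote API for auditing.
  return ta11y.auditExtractResults(extractResults)
}

auditLocalSite()
  .then((results) => {
    console.log(JSON.stringify(results, null, 2))
  })
```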
You can bypass rate limiting by signing up for an API key and passing it either via the `apiKey` option of the `Ta11y` constructor or via the `TA11Y_API_KEY` environment variable.
```js
const ta11y = new Ta11y({
  apiKey: '<your-api-key>'
})
```
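Or, equivalently, set the environment variable before running your script (`audit.js` here is just a placeholder name):

```bash
TA11Y_API_KEY='<your-api-key>' node audit.js
```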
Visit the ta11y website once you're ready to sign up for an API key.
## API

### Ta11y

Class to run web accessibility audits via the ta11y API.

Type: `function (opts)`

- `opts` **object?** Config options, including the `apiKey` described above.
#### audit

Runs an accessibility audit against the given URL or raw HTML, optionally crawling the site to discover additional pages and auditing those too.

To audit local or private websites, pass an instance of Puppeteer as `opts.browser`.
The default behavior is to perform both content extraction and auditing remotely, which works best for auditing publicly accessible websites.
Type: `function (urlOrHtml, opts): Promise`

- `urlOrHtml` **string** URL or raw HTML to process.
- `opts` **object** Config options.
  - `opts.suites` `Array<string>?` Optional array of audit suites to run. Possible values:
    - `section508`
    - `wcag2a`
    - `wcag2aa`
    - `wcag2aaa`
    - `best-practice`
    - `html`

    Defaults to running all audit suites.
  - `opts.browser` **object?** Optional Puppeteer browser instance to use for auditing websites that aren't publicly reachable.
  - `opts.crawl` **boolean** Whether or not to crawl additional pages. (optional, default `false`)
  - `opts.maxDepth` **number** Maximum crawl depth while crawling. (optional, default `16`)
  - `opts.maxVisit` **number?** Maximum number of pages to visit while crawling.
  - `opts.sameOrigin` **boolean** Whether or not to only consider crawling links with the same origin as the root URL. (optional, default `true`)
  - `opts.blacklist` `Array<string>?` Optional blacklist of URL glob patterns to ignore.
  - `opts.whitelist` `Array<string>?` Optional whitelist of URL glob patterns to include.
  - `opts.gotoOptions` **object?** Customize the `Page.goto` navigation options.
  - `opts.viewport` **object?** Set the browser window's viewport dimensions and/or resolution.
  - `opts.userAgent` **string?** Set the browser's user agent.
  - `opts.emulateDevice` **string?** Emulate a specific device type; use the `name` property from one of Puppeteer's built-in devices. Overrides `viewport` and `userAgent`.
  - `opts.onNewPage` **function?** Optional async function called every time a new page is initialized, before proceeding with extraction.
  - `opts.file` **string?** Write results to a file (output format determined by the file type). See the docs for more info on supported file formats (xls, xlsx, csv, json, html, txt, etc.).
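For example, a crawl that runs only the `wcag2aa` and `section508` suites and writes the results to a spreadsheet might look like this (the URL and file name are placeholders):

```js
ta11y.audit('https://example.com', {
  suites: ['wcag2aa', 'section508'],
  crawl: true,
  maxDepth: 2,
  sameOrigin: true,
  file: 'audit-results.xlsx'
}).then((results) => {
  console.log(JSON.stringify(results, null, 2))
})
```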
#### extract

Extracts the content from a given URL or raw HTML, optionally crawling the site to discover additional pages and extracting content from those too.

To extract content from local or private websites, pass an instance of Puppeteer as `opts.browser`.
Type: `function (urlOrHtml, opts): Promise`

- `urlOrHtml` **string** URL or raw HTML to process.
- `opts` **object** Config options.
  - `opts.browser` **object?** Optional Puppeteer browser instance to use for websites that aren't publicly reachable.
  - `opts.crawl` **boolean** Whether or not to crawl additional pages. (optional, default `false`)
  - `opts.maxDepth` **number** Maximum crawl depth while crawling. (optional, default `16`)
  - `opts.maxVisit` **number?** Maximum number of pages to visit while crawling.
  - `opts.sameOrigin` **boolean** Whether or not to only consider crawling links with the same origin as the root URL. (optional, default `true`)
  - `opts.blacklist` `Array<string>?` Optional blacklist of URL glob patterns to ignore.
  - `opts.whitelist` `Array<string>?` Optional whitelist of URL glob patterns to include.
  - `opts.gotoOptions` **object?** Customize the `Page.goto` navigation options.
  - `opts.viewport` **object?** Set the browser window's viewport dimensions and/or resolution.
  - `opts.userAgent` **string?** Set the browser's user agent.
  - `opts.emulateDevice` **string?** Emulate a specific device type; use the `name` property from one of Puppeteer's built-in devices. Overrides `viewport` and `userAgent`.
  - `opts.onNewPage` **function?** Optional async function called every time a new page is initialized, before proceeding with extraction.
  - `opts.file` **string?** Write results to a file (output format determined by the file type). See the docs for more info on supported file formats (xls, xlsx, csv, json, html, txt, etc.).
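As an illustrative sketch, `onNewPage` could be used to prepare each page before extraction, e.g. to set an auth cookie. This assumes the hook receives the Puppeteer `Page` instance; the URL and cookie values are placeholders:

```js
const puppeteer = require('puppeteer')

const browser = await puppeteer.launch()

const extractResults = await ta11y.extract('http://localhost:3000', {
  browser,
  crawl: true,
  onNewPage: async (page) => {
    // Assumption: the hook is called with the Puppeteer Page instance.
    // Hypothetical auth cookie so protected pages can be extracted.
    await page.setCookie({
      name: 'session',
      value: '<your-session-token>',
      url: 'http://localhost:3000'
    })
  }
})
```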
#### auditExtractResults

Runs an accessibility audit against previously collected extraction results from `@ta11y/extract`.

Type: `function (extractResults, opts): Promise`
- `extractResults` **object** Extraction results conforming to the output format from `@ta11y/extract`.
- `opts` **object** Config options. (optional, default `{}`)
  - `opts.suites` `Array<string>?` Optional array of audit suites to run. Possible values:
    - `section508`
    - `wcag2a`
    - `wcag2aa`
    - `wcag2aaa`
    - `best-practice`
    - `html`

    Defaults to running all audit suites.
  - `opts.file` **string?** Write results to a file (output format determined by the file type). See the docs for more info on supported file formats (xls, xlsx, csv, json, html, txt, etc.).
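For example, to audit previously saved extraction results from disk (`extract-results.json` is a placeholder file name):

```js
const fs = require('fs')

const extractResults = JSON.parse(fs.readFileSync('extract-results.json', 'utf8'))

ta11y.auditExtractResults(extractResults, {
  suites: ['wcag2a', 'html'],
  file: 'audit-results.csv'
}).then((results) => {
  console.log(JSON.stringify(results, null, 2))
})
```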
## License

MIT © Saasify