Merge pull request #92 from JamesBrill/continuous-supported-flag
Flag for continuous listening support
JamesBrill authored Apr 15, 2021
2 parents d771228 + 3fc6265 commit 98b14bf
Showing 7 changed files with 87 additions and 8 deletions.
23 changes: 22 additions & 1 deletion README.md
@@ -22,6 +22,7 @@ This version requires React 16.8 so that React hooks can be used. If you're used
* [Supported browsers](#supported-browsers)
* [Polyfills](docs/POLYFILLS.md)
* [API docs](docs/API.md)
* [Troubleshooting](#troubleshooting)
* [Version 3 migration guide](docs/V3-MIGRATION.md)
* [TypeScript declaration file in DefinitelyTyped](https://github.com/OleksandrYehorov/DefinitelyTyped/blob/master/types/react-speech-recognition/index.d.ts)

@@ -239,6 +240,18 @@ If you want to listen continuously, set the `continuous` property to `true` when
SpeechRecognition.startListening({ continuous: true })
```

Be warned that not all browsers have good support for continuous listening. Chrome on Android in particular constantly restarts the microphone, leading to a frustrating and noisy experience (the microphone beeps on each restart). To avoid enabling continuous listening on these browsers, you can use the `browserSupportsContinuousListening` state from `useSpeechRecognition` to detect support for this feature.

```
if (browserSupportsContinuousListening) {
SpeechRecognition.startListening({ continuous: true })
} else {
// Fallback behaviour
}
```

Alternatively, you can try one of the [polyfills](docs/POLYFILLS.md) to enable continuous listening on these browsers.
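The support check above boils down to a small piece of option-building logic. As a framework-free sketch (the helper name `listeningOptions` and its boolean parameter are illustrative, not part of the library's API; in a real component the flag would come from `useSpeechRecognition`):

```javascript
// Sketch: build the options object for SpeechRecognition.startListening
// based on whether the current browser supports continuous listening.
// `supportsContinuous` stands in for the `browserSupportsContinuousListening`
// state returned by useSpeechRecognition().
function listeningOptions(supportsContinuous, language) {
  const options = { language }
  if (supportsContinuous) {
    options.continuous = true
  }
  return options
}

console.log(listeningOptions(true, 'en-US'))  // { language: 'en-US', continuous: true }
console.log(listeningOptions(false, 'en-US')) // { language: 'en-US' }
```

The result can then be passed straight to `SpeechRecognition.startListening(...)`, so the fallback path simply omits `continuous`.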

## Changing language

To listen for a specific language, you can pass a language tag (e.g. `'zh-CN'` for Chinese) when calling `startListening`. See [here](docs/API.md#language-string) for a list of supported languages.
@@ -247,7 +260,15 @@ To listen for a specific language, you can pass a language tag (e.g. `'zh-CN'` f
SpeechRecognition.startListening({ language: 'zh-CN' })
```

## How to use `react-speech-recognition` offline?
## Troubleshooting

### `regeneratorRuntime is not defined`

If you see the error `regeneratorRuntime is not defined` when using this library, you need to install `regenerator-runtime` in your web app:
* `npm i --save regenerator-runtime`
* If you are using Next.js, put `import 'regenerator-runtime/runtime'` at the top of your `_app.js` file. For any other framework, put it at the top of your `index.js` file

### How to use `react-speech-recognition` offline?

Unfortunately, speech recognition will not function in Chrome when offline. According to the [Web Speech API docs](https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API/Using_the_Web_Speech_API): "On Chrome, using Speech Recognition on a web page involves a server-based recognition engine. Your audio is sent to a web service for recognition processing, so it won't work offline."

12 changes: 12 additions & 0 deletions docs/API.md
@@ -91,6 +91,18 @@ if (!browserSupportsSpeechRecognition) {

It is recommended that you use this state to decide when to render fallback content, rather than `SpeechRecognition.browserSupportsSpeechRecognition()`, as the state will correctly re-render your component if browser support changes at run-time (e.g. due to a polyfill being applied).
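The re-render guarantee comes from a subscriber mechanism: each hook instance registers a callback that updates its local state when support changes, whereas the static getter is only read once. A minimal framework-free sketch of that pattern (names here are illustrative, not the library's internals):

```javascript
// Sketch of why the hook state is preferable to the static getter: the
// manager notifies every subscriber when support changes at run-time,
// e.g. when a polyfill is applied, so subscribed components update.
const subscribers = new Set()
let supported = false

// Static getter: callers read it once and are never told about changes
const browserSupportsSpeechRecognition = () => supported

function applyPolyfill() {
  supported = true
  // Analogous to calling each hook instance's setState, triggering re-renders
  subscribers.forEach((notify) => notify(supported))
}

// A "component" subscribing the way useSpeechRecognition does
let componentState = browserSupportsSpeechRecognition() // false at first render
subscribers.add((value) => { componentState = value })

applyPolyfill()
console.log(componentState) // true: the subscriber received the new value
```

A component that instead cached the getter's first return value would keep rendering the fallback even after the polyfill was applied.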

#### browserSupportsContinuousListening [bool]

Continuous listening is not supported on all browsers, so it is recommended that you apply some fallback behaviour if your web app uses continuous listening and is running on a browser that doesn't support it:

```
if (browserSupportsContinuousListening) {
SpeechRecognition.startListening({ continuous: true })
} else {
// Fallback behaviour
}
```

## SpeechRecognition

Object providing functions to manage the global state of the microphone. Import with:
2 changes: 1 addition & 1 deletion package-lock.json

Some generated files are not rendered by default.

2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "react-speech-recognition",
"version": "3.7.0",
"version": "3.8.0",
"description": "💬Speech recognition for your React app",
"main": "lib/index.js",
"scripts": {
3 changes: 2 additions & 1 deletion src/RecognitionManager.js
@@ -68,8 +68,9 @@ export default class RecognitionManager {

emitBrowserSupportsSpeechRecognitionChange(browserSupportsSpeechRecognitionChange) {
Object.keys(this.subscribers).forEach((id) => {
const { onBrowserSupportsSpeechRecognitionChange } = this.subscribers[id]
const { onBrowserSupportsSpeechRecognitionChange, onBrowserSupportsContinuousListeningChange } = this.subscribers[id]
onBrowserSupportsSpeechRecognitionChange(browserSupportsSpeechRecognitionChange)
onBrowserSupportsContinuousListeningChange(browserSupportsSpeechRecognitionChange)
})
}

17 changes: 13 additions & 4 deletions src/SpeechRecognition.js
@@ -3,6 +3,7 @@ import { concatTranscripts, commandToRegExp, compareTwoStringsUsingDiceCoefficie
import { clearTrancript, appendTrancript } from './actions'
import { transcriptReducer } from './reducers'
import RecognitionManager from './RecognitionManager'
import isAndroid from './isAndroid'

const DefaultSpeechRecognition =
typeof window !== 'undefined' &&
@@ -12,6 +13,7 @@ const DefaultSpeechRecognition =
window.msSpeechRecognition ||
window.oSpeechRecognition)
let _browserSupportsSpeechRecognition = !!DefaultSpeechRecognition
let _browserSupportsContinuousListening = _browserSupportsSpeechRecognition && !isAndroid()
let recognitionManager

const useSpeechRecognition = ({
@@ -20,7 +22,10 @@
commands = []
} = {}) => {
const [recognitionManager] = useState(SpeechRecognition.getRecognitionManager())
const [browserSupportsSpeechRecognition, setBrowserSupportsSpeechRecognition] = useState(_browserSupportsSpeechRecognition)
const [browserSupportsSpeechRecognition, setBrowserSupportsSpeechRecognition] =
useState(_browserSupportsSpeechRecognition)
const [browserSupportsContinuousListening, setBrowserSupportsContinuousListening] =
useState(_browserSupportsContinuousListening)
const [{ interimTranscript, finalTranscript }, dispatch] = useReducer(transcriptReducer, {
interimTranscript: recognitionManager.interimTranscript,
finalTranscript: ''
@@ -131,7 +136,8 @@
onListeningChange: setListening,
onTranscriptChange: handleTranscriptChange,
onClearTranscript: handleClearTranscript,
onBrowserSupportsSpeechRecognitionChange: setBrowserSupportsSpeechRecognition
onBrowserSupportsSpeechRecognitionChange: setBrowserSupportsSpeechRecognition,
onBrowserSupportsContinuousListeningChange: setBrowserSupportsContinuousListening
}
recognitionManager.subscribe(id, callbacks)

@@ -153,7 +159,8 @@
finalTranscript,
listening,
resetTranscript,
browserSupportsSpeechRecognition
browserSupportsSpeechRecognition,
browserSupportsContinuousListening
}
}
const SpeechRecognition = {
@@ -165,6 +172,7 @@
recognitionManager = new RecognitionManager(PolyfillSpeechRecognition)
}
_browserSupportsSpeechRecognition = true
_browserSupportsContinuousListening = true
},
getRecognitionManager: () => {
if (!recognitionManager) {
@@ -188,7 +196,8 @@
const recognitionManager = SpeechRecognition.getRecognitionManager()
await recognitionManager.abortListening()
},
browserSupportsSpeechRecognition: () => _browserSupportsSpeechRecognition
browserSupportsSpeechRecognition: () => _browserSupportsSpeechRecognition,
browserSupportsContinuousListening: () => _browserSupportsContinuousListening
}

export { useSpeechRecognition }
36 changes: 36 additions & 0 deletions src/android.test.js
@@ -0,0 +1,36 @@
/* eslint-disable import/first */
jest.mock('./isAndroid', () => () => true)

import { renderHook } from '@testing-library/react-hooks'
import '../tests/vendor/corti'
import SpeechRecognition, { useSpeechRecognition } from './SpeechRecognition'
import RecognitionManager from './RecognitionManager'

const mockRecognitionManager = () => {
const recognitionManager = new RecognitionManager(window.SpeechRecognition)
SpeechRecognition.getRecognitionManager = () => recognitionManager
return recognitionManager
}

describe('SpeechRecognition (Android)', () => {
test('sets browserSupportsContinuousListening to false on Android', async () => {
mockRecognitionManager()

const { result } = renderHook(() => useSpeechRecognition())
const { browserSupportsContinuousListening } = result.current

expect(browserSupportsContinuousListening).toEqual(false)
expect(SpeechRecognition.browserSupportsContinuousListening()).toEqual(false)
})

test('sets browserSupportsContinuousListening to true when using polyfill', () => {
const MockSpeechRecognition = class {}
SpeechRecognition.applyPolyfill(MockSpeechRecognition)

const { result } = renderHook(() => useSpeechRecognition())
const { browserSupportsContinuousListening } = result.current

expect(browserSupportsContinuousListening).toEqual(true)
expect(SpeechRecognition.browserSupportsContinuousListening()).toEqual(true)
})
})
