- Paging: We abstract away page handling, allowing you to consume data from beginning to end
- Backpressure: We won't read faster than you can process the data, keeping memory requirements low
```sh
npm i survey-monkey-streams
```
```js
import { Reader } from 'survey-monkey-streams';

const reader = new Reader({
  url: `collectors/${id}/responses`,
  headers: { authorization: `bearer ${token}` }
})

reader.on('data', response => {
  console.log(response)
})
```
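You can also manage the flow yourself when doing slow work per response. A minimal sketch, assuming `Reader` behaves like any Node.js Readable stream; `saveResponse` is a hypothetical async function of your own:

```js
reader.on('data', response => {
  reader.pause() // stop reading while slow work is in flight
  saveResponse(response)
    .then(() => reader.resume()) // carry on once the work is done
    .catch(err => reader.destroy(err)) // tear down the stream on failure
})

reader.on('error', err => console.error(err))
reader.on('end', () => console.log('done'))
```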
```js
import { SurveysResponsesBulkReader } from 'survey-monkey-streams';

const reader = new SurveysResponsesBulkReader({
  headers: { authorization: `bearer ${token}` }
})

reader.pipe(myDbWriter)
```
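A sketch of what `myDbWriter` might look like: any Node.js Writable will do, assuming the reader emits response objects (object mode). `db.insert` is a hypothetical call to your own database client that returns a promise:

```js
import { Writable } from 'stream';

const myDbWriter = new Writable({
  objectMode: true,
  write(response, _encoding, callback) {
    db.insert(response) // hypothetical database client
      .then(() => callback()) // signal readiness for the next response
      .catch(callback) // surface failures to the pipeline
  }
})
```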
Check the Documentation for more information
Rather than defining a new API, this module exposes the APIs of the underlying technologies, i.e. `request` and the Node.js Stream API. Common use cases are more verbose than they might otherwise be, but minimal tampering means fewer bugs, less documentation and less frustration.
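For instance, a sketch under the assumption that constructor options other than `url` and `headers` are forwarded straight to `request` (a pass-through implied by the design above, not documented behaviour), in which case standard `request` options such as `qs` should work unchanged:

```js
import { Reader } from 'survey-monkey-streams';

// Assumption: extra options are handed through to request, so
// request's own qs option can shape the query string.
const reader = new Reader({
  url: `collectors/${id}/responses`,
  headers: { authorization: `bearer ${token}` },
  qs: { per_page: 100 } // a standard request option
})
```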
Question? Bug? Feature request? Not sure? Open an issue!
See the code on GitHub
I wrote this to fix a problem I was facing, so it might need stretching to meet your needs. Notably, there is no `Writable` yet.
Pull requests welcome, but please get in touch first. I don't want to waste your time.
Release like:

```sh
npm run v
git push --follow-tags
```