- Made JSONWrapper class - generic JSON helper utilities for configuration objects
- Added back docs, updated links to point to GitHub hosted version
- First usable version incorporated into @bespoken-sdk
- Removed support for SQLite
- Updated JSDoc tags throughout project
- Updated client-store to use the new version of bespoken-api-store
- Added timestamp to job and MySQL by default
- Using classes from shared package where appropriate (logger, env, util, etc.)
- Working on usable batch package
- Updates to collector SDK for better object serialization
- First release of collector SDK into real environment
- First version of collector SDK
- Env requiredVariable now uses a default value if supplied
- Added support for overriding the top result on the recognition object
- Added support for dialog state in the input settings
- Added reply history to conversation
- Validate shouldRetry when there is an error
- Add ability to send local audio files as utterances for Alexa, Google Assistant and Test Robot
- Change twilio platform to phone platform
- Add ability to change backoff time after attempt, defaults to 10
- Add `interceptError` to fill in the result columns if the process has an error
- Fix output path name extension bug
- Use locale from source data, defaults to 'en-US'
- Use voiceID from source data, defaults to 'en-US-Wavenet-D'
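As a rough illustration of the `interceptError` entry above, a handler could populate the result columns when a record fails. The class and method signature below are assumptions for the sketch, not the SDK's documented API (the real hook may also be async):

```javascript
// Hypothetical sketch of an interceptError handler - the real
// batch-tester interceptor signature may differ.
class Interceptor {
  // Called when processing a record fails; fill the result columns
  // so the error shows up in the output CSV instead of a blank row.
  interceptError(record, result, error) {
    result.error = error.message
    result.success = false
    return result
  }
}
```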
- `interceptRequest` now accepts a `Record` object as a first argument. This is a breaking change.
- Send notification emails after job is done
- Get locale and voiceID from virtualDeviceConfig
- Use voiceID from record by default
- Add ability to change the locale, by default is en-US
- Enhancements on docs and behavior for fuzzyMatch
- Changed cache behavior so that it does NOT clone objects
- Add memory and CPU to avoid crashes on server
- Add shouldRetry property to Result class to retry a record using the interceptResult method
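A minimal sketch of how the `shouldRetry` property might be set from an `interceptResult` hook. The hook signature and the error shape here are assumptions for illustration only:

```javascript
// Hypothetical interceptResult hook - flags a result for retry by
// setting the shouldRetry property (names assumed for illustration).
function interceptResult(record, result) {
  // Retry transient failures such as timeouts, but not hard errors
  if (result.error && result.error.includes('timeout')) {
    result.shouldRetry = true
  }
  return result
}
```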
- Add `fuzzyMatch` to search for a string in an array of phrases
- Add `sanitizedOcrLines` property to the `Result` class
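For illustration only - the SDK's actual `fuzzyMatch` algorithm is not documented here, so this sketch substitutes a simple case-insensitive substring check to show the shape of the call:

```javascript
// Simplified stand-in for fuzzyMatch: returns true if the string
// appears (case-insensitively) in any of the candidate phrases.
function fuzzyMatch(text, phrases) {
  const needle = text.toLowerCase().trim()
  return phrases.some((phrase) => phrase.toLowerCase().includes(needle))
}
```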
- Fix issue with loading data on reprocess
- Better support for reprint
- SQL printer can now override table name with MYSQL_TABLE environment variable
- Clean up boolean values in SQL printer
- More fixes to prevent oversaving of data
- Fix oversaving of data on reruns when there is an error
- Better progress info on file downloads
- Forced pino to version 6.3.x as we saw weird `originalMsg` output in logs
- Created Metabase ECS service
- The limit of virtual devices can be set in the env variable `VIRTUAL_DEVICE_LIMIT`. If present, it takes the first "n" devices from the config file; otherwise it will use all of them.
- Fix for improperly outputting results.csv file (file was being created as undefined.csv by default)
- Fix for improper error-handling when the configuration file is not set properly
- Added outputFields property to record, so outputFields can be defined in the source classes
- Added support for printing to MySQL by default if environment variables are set
- Only printing at the end of runs as opposed to with each record
- Added support for MySQL
- Added support for list command - lists jobs that match run name
- IMPORTANT Using streams to fetch and store S3 files - all clients must upgrade once server is deployed
- Removed lastResponse instance variable (still preserved get property though)
- Added SQLite printer for printing output to SQLite database
- Added `rerun` flag to records as a convenience method for simpler code in interceptors
- Added `reprocessAll` capability for rerunning multiple jobs based on filters
- The limit of records can be set in the env variable `LIMIT`. If present, it takes priority over the config file.
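The precedence of the `LIMIT` env variable over the config file can be sketched as follows (the `recordLimit` helper is illustrative, not part of the SDK):

```javascript
// Illustrative helper: the LIMIT env variable, when set, wins over
// the limit from the configuration file.
function recordLimit(config) {
  if (process.env.LIMIT) {
    return parseInt(process.env.LIMIT, 10)
  }
  return config.limit
}
```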
- Increase CPU for batch-tester server to 512
- Adds the option to save the logs to a `batch-tester.log` file in the output folder by setting the `SAVE_LOG_FILE` env variable. It uses the pino default format.
- Adds a conversationId getter and setter to the record object.
- Prevent saving of data on re-runs
- Implements pino.js for logging. Default log level is set to 'info', except for the server where it's still 'debug'.
- Removes several console.logs calls
- Removes request log to reduce clutter
- Adds maxAttempts to the config
- Adds the conversation id to the timeout messages
- Intercept request now has the device as a parameter
- Saves all the results from a response in the record object
- Added conversationId to the lastResponse object
- Added timestamp to each result record
- Added request interceptor
- Added timeouts to get results from virtual device
- Added batch runner instance for saving job after errors
- Fixed RAW DATA URL column values with batch job key for the `reprocess` and `reprint` commands
- Added `date` property to the Job object. This stores the UTC date on which the job was created, in ISO-8601 format, e.g. `2020-05-21T18:59:55Z`. The job name is also created with the UTC date, e.g. `job_name_2020-05-21T18-59-55`
- Added the methods `interceptPreProcess` and `interceptPostProcess` to add custom code before and after a batch tester execution. For `interceptPreProcess`, the user has to be careful if resuming a job.
- Added the `Synchronizer` class for saving the batch job according to the `saveInterval` property from the configuration file; it is set to 300 seconds by default.
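For example, the save interval might be lowered in the configuration file like this (other keys omitted; the value of 60 seconds is illustrative):

```json
{
  "saveInterval": 60
}
```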
- Adds the `virtualDeviceConfig` property to the configuration JSON. This allows sending parameters that will be used on all virtual devices.
- Allows for using Twilio virtual devices with text utterances or prerecorded audio. The latter can only be passed as URLs.
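A sketch of what `virtualDeviceConfig` could look like in the configuration JSON - the keys inside it are examples drawn from elsewhere in this changelog (locale, voiceID), not an exhaustive or authoritative list:

```json
{
  "virtualDeviceConfig": {
    "locale": "en-US",
    "voiceID": "en-US-Wavenet-D"
  }
}
```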
- Allow the tag property on a virtual device to be undefined
- IMPORTANT Added compression on fetching files - to avoid timeouts when dealing with very large runs
- Requires upgrading to new version of batch-tester client to interact with server
- Fetch is used by the reprocess, rerun and resume commands
- Added compression on saving files - to reduce file size and improve transfer speed
- Added merge feature - described here
- Added --output_file flag - described here
- Improved our command-line interface using commander
- Handled multiple matches when JSON path expression returns multiple values
- Added support for output only fields to test definitions
- Fixed rerunner command, it was calling source file
- IMPORTANT Added `settings` property on virtualDevices - read here - the previous virtualDevices that took an array of tags will still work but have been deprecated
- Added builtin `device` column to csv-source. Automatically filters for devices that match the tag specified in this column, if present
- Added `sequential` mode to force records to be processed one-by-one - read about it here
- Switched to using async endpoint for virtual devices
- Added improved formatting on log messages
- Better error-handling when an error is returned from the Virtual Device service
- Convenience methods for accessing outputFields and actualFields on the result object
- Fixed issue with configuration values that are booleans which have a default value
- Added the `rerunner` - allows for reprocessing previous jobs - read here
- Automatically adds a tag to every result for the platform used by the device ('amazon-alexa' and 'google-assistant' for now)
- IMPORTANT Automatically add output fields to tags - no need to define both output field and tag
- Increased max-file-size allowed to be sent by bespoken-store
- Print out log URL to console after each record is processed
- Added bespoken-store - new and improved persistent results storage!
- IMPORTANT: file-store and s3-store are now deprecated. Existing projects should remove the `store` key from the input json. The runner will just default to bespoken-store from now on.
- IMPORTANT: No need to set AWS keys in environment variables - these should be removed in existing projects
- IMPORTANT: Resuming jobs now takes a RUN_KEY as opposed to RUN_NAME
- Added link to detailed logs in CSV results - look at raw input and output directly
- Automatically publish expected and actual field values to DataDog - NOTE this may be redundant with existing tags
- Handles errors on interceptRecord gracefully - shows the full message and does not stop processing
- Implemented limit feature in configuration file. Now it is possible to run just some utterances for testing purposes.
- Added Node.js version requirements to package.json and README
- Better documentation on DataDog chart creation
- More work to ensure module resolution works well - very tough issue to replicate
- Added reprint feature - described here. Allows for retrieving CSV results after the run is completed.
- Fixed issue with module resolution identified with Node V12 and greater
- Default location for CSV input files is now used - `input/records.csv`
- For DataDog metrics, publish a 0 for what did NOT happen (failure, success and error) - this makes math easier in the reports (as otherwise we have N/As, which cannot be used in formulas)
- Tokens are now kept in the config file - property: `virtualDevices`. See README.
- virtualDeviceBaseURL is now set in the config file - property: `virtualDeviceBaseURL`. Defaults to https://virtual-device.bespoken.io.
- All output files are now kept in the /output directory - such as /output/results.csv.
- Reporting uses new `customer` property for the configuration file - this is a REQUIRED field
- Ability to tag tokens to restrict processing of records to certain tokens - read more
TO BE FILLED IN