1.4.0
New Features
Community Rules
To encourage collaboration and contribution of StreamAlert rules from the community, the rules directory has been reorganized:
|------- rules/
| |------- community/
| |------- default/
When contributing public rules, rule files should be placed within a named subdirectory under the community folder. An example is the cloudtrail rules in rules/community/cloudtrail.
For rules internal to your organization, the default folder is a great starting point. Any number of subdirectories can be created under this directory. Remember to always place a blank __init__.py in new subdirectories so they are picked up by rule processor imports.
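For example, a new internal rule directory could be created as follows (the directory name my_team_rules is illustrative):
$ mkdir rules/default/my_team_rules
$ touch rules/default/my_team_rules/__init__.py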
Matchers and helpers have also been reorganized into their own respective directories:
|------- conf/
|------- docs/
|------- helpers/
|------- matchers/
|------- rules/
|------- stream_alert/
|------- stream_alert_cli/
|------- terraform/
|------- test/
Be sure to update rules and matchers referencing helpers based on this new structure.
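As a minimal sketch, a rule importing a helper would now reference the top-level helpers package; the module and function names below (helpers/base.py, in_set) are illustrative and should be adjusted to match your own helper modules:
# rules/default/example_rule.py (illustrative path)
# Helpers now live in the top-level helpers/ directory
from helpers.base import in_set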
JSON Cluster Templates
StreamAlert’s supporting AWS infrastructure is managed by a set of Terraform modules. Each module controls a piece of StreamAlert. An example is the monitoring module, used to create metric alarms and alert administrators when Lambda errors or throttles occur.
To give users full control over which modules and settings they would like, clusters have been refactored into independent JSON files:
# conf/clusters/production.json
{
  "id": "production",
  "region": "us-west-2",
  "modules": {
    "stream_alert": {
      "alert_processor": {
        "timeout": 25,
        "memory": 128,
        "current_version": "$LATEST"
      },
      "rule_processor": {
        "timeout": 10,
        "memory": 256,
        "current_version": "$LATEST"
      }
    },
    "cloudwatch_monitoring": {
      "enabled": true
    },
    "kinesis": {
      "streams": {
        "shards": 1,
        "retention": 24
      },
      "firehose": {
        "enabled": true,
        "s3_bucket_suffix": "streamalert.results"
      }
    },
    "kinesis_events": {
      "enabled": true
    }
  },
  "outputs": {
    "kinesis": [
      "username",
      "access_key_id",
      "secret_key"
    ]
  }
}
For more information on setup, check out https://www.streamalert.io/clusters.html
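After adding or editing a cluster file such as the one above, the supporting infrastructure can be applied with the CLI's terraform build command (the same command referenced below for the VPC change):
$ python stream_alert_cli.py terraform build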
Alert Processor VPC Support
AWS VPC (Virtual Private Cloud) allows users or organizations to run virtual machines in a logically segmented environment. To support delivery of StreamAlerts to internal resources (such as EC2 instances), the alert processor may now be configured to access resources inside a VPC:
# conf/clusters/<cluster-name>.json
{
  "alert_processor": {
    "vpc_config": {
      "subnet_ids": ["subnet-id-1"],
      "security_group_ids": ["security-group-id-1"]
    }
  }
}
Note: When making this change, you must explicitly destroy and then re-create the alert processor:
$ cd terraform
$ terraform destroy -target=module.stream_alert_<cluster-name>.aws_lambda_function.streamalert_alert_processor
Then, run:
$ python stream_alert_cli.py terraform build
Alert Live Testing
To better validate StreamAlert’s end-to-end functionality, testing has been reworked to support sending alerts from a local StreamAlert repo. With a local set of valid AWS credentials, it is possible to use configured rule tests to dispatch alerts to configured outputs (such as Slack or PagerDuty).
This functionality is provided through the StreamAlertCLI tool, with the new command-line argument live-test:
$ python stream_alert_cli.py live-test --cluster <cluster_name>
For normal use cases, you are unlikely to want (or need) to test the full ruleset, as this could result in a high volume of alerts to outputs. To test specific rules, use the --rules argument followed by a space-delimited list of rule names to test:
$ python stream_alert_cli.py live-test --cluster <cluster_name> --rules <rule_name_01> <rule_name_02>
Bug Fixes
#129 - Cluster-aware SNS inputs
#166 - Apply optional top level keys to nested JSON records
#168 - Fix the handler import path for the alert_processor
#183 - Lambda traceback due to PagerDuty errors
#201 - Updated IAM permissions for streamalert user
#202 - Handle errors when Terraform is not installed
#206, #209 - Schema updates to osquery and carbonblack:watchlist.hit.binary