Merge pull request #174 from heitorlessa/improv/docs-logger-metrics-testing

docs: add testing tips, increase content width, and improve log sampling wording
heitorlessa authored Sep 22, 2020
2 parents b8d15c3 + 0ac09d6 commit 7311c5d
Showing 4 changed files with 82 additions and 10 deletions.
56 changes: 49 additions & 7 deletions docs/content/core/logger.mdx
Original file line number Diff line number Diff line change
@@ -222,21 +222,24 @@ If you ever forget to use `child` param, we will return an existing `Logger` wit

## Sampling debug logs

You can dynamically set a percentage of your logs to **DEBUG** level using `sample_rate` param or via env var `POWERTOOLS_LOGGER_SAMPLE_RATE`.
Sampling allows you to set your Logger log level to DEBUG for a percentage of your concurrent/cold start invocations. You can set a sampling value from `0.0` (0%) to `1.0` (100%) using either the `sample_rate` parameter or the `POWERTOOLS_LOGGER_SAMPLE_RATE` env var.

Sampling calculation happens at the Logger class initialization. This means, when configured it, sampling it's more likely to happen during concurrent requests, or infrequent invocations as [new Lambda execution contexts are created](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html), not reused.
This is useful when you want to troubleshoot an issue, say a sudden increase in concurrency, but your logs don't have enough information because the Logger log level was understandably set to INFO.

The sampling decision happens at Logger class initialization, which only occurs during a cold start. This means sampling may happen significantly more or less often than you expect if you have a steady, low number of invocations and thus few cold starts.

<Note type="info">
If you want this logic to happen on every invocation regardless whether Lambda reuses the execution environment or not, then create your Logger inside your Lambda handler.
If you want Logger to calculate sampling on every invocation, then please open a feature request.
</Note><br/>
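To make the behaviour concrete, the once-per-initialization decision can be sketched roughly as follows. This is an illustration with a hypothetical `choose_log_level` helper, not the library's actual code:

```python
import random

def choose_log_level(base_level: str = "INFO", sample_rate: float = 0.1) -> str:
    """Pick DEBUG for roughly `sample_rate` of initializations (hypothetical sketch)."""
    # random.random() returns a float in [0.0, 1.0), so this branch is taken
    # for approximately sample_rate of all cold starts
    if random.random() < sample_rate:
        return "DEBUG"
    return base_level

# The choice is made once at init; with a rate of 0.1,
# about 10% of cold starts will log at DEBUG level
level = choose_log_level(sample_rate=0.1)
```

Because the draw happens once per execution environment, a single sampled environment emits DEBUG logs for every invocation it serves until it is recycled.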

```python:title=collect.py
from aws_lambda_powertools import Logger

# Sample 10% of debug logs e.g. 0.1
logger = Logger(sample_rate=0.1) # highlight-line
logger = Logger(sample_rate=0.1, level="INFO") # highlight-line

def handler(event, context):
    logger.debug("Verifying whether order_id is present")
    if "order_id" in event:
        logger.info("Collecting payment")
        ...
@@ -245,7 +248,21 @@ def handler(event, context):
<details>
<summary><strong>Excerpt output in CloudWatch Logs</strong></summary>

```json:title=cloudwatch_logs.json
```json:title=sampled_log_request_as_debug.json
{
"timestamp": "2020-05-24 18:17:33,774",
"level": "DEBUG", // highlight-line
"location": "collect.handler:1",
"service": "payment",
"lambda_function_name": "test",
"lambda_function_memory_size": 128,
"lambda_function_arn": "arn:aws:lambda:eu-west-1:12345678910:function:test",
"lambda_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
"cold_start": true,
"sampling_rate": 0.1, // highlight-line
"message": "Verifying whether order_id is present"
}

{
"timestamp": "2020-05-24 18:17:33,774",
"level": "INFO",
@@ -260,6 +277,7 @@ def handler(event, context):
"message": "Collecting payment"
}
```

</details>


@@ -305,7 +323,7 @@ This can be fixed by either ensuring both has the `service` value as `payment`,

You might want to continue to use the same date formatting style, or override `location` to display the `package.function_name:line_number` as you previously had.

Logger allows you to either change the format or suppress the following keys altogether at the initialization: `location`, `timestamp`, `level`, and `datefmt`
Logger allows you to either change the format or suppress the following keys altogether at the initialization: `location`, `timestamp`, `level`, `xray_trace_id`, and `datefmt`

```python
from aws_lambda_powertools import Logger
@@ -317,7 +335,7 @@ logger = Logger(stream=stdout, location="[%(funcName)s] %(module)s", datefmt="fa
logger = Logger(stream=stdout, location=None) # highlight-line
```

Alternatively, you can also change the order of the following log record keys via the `log_record_order` parameter: `level`, `location`, `message`, and `timestamp`
Alternatively, you can also change the order of the following log record keys via the `log_record_order` parameter: `level`, `location`, `message`, `xray_trace_id`, and `timestamp`

```python
from aws_lambda_powertools import Logger
@@ -358,3 +376,27 @@ except Exception:
}
```
</details>


## Testing your code

When unit testing code that uses the `inject_lambda_context` decorator, you need to pass a dummy Lambda context, or else Logger will fail.

This is a Pytest sample that provides the minimum information necessary for Logger to succeed:

```python:title=fake_lambda_context_for_logger.py
import pytest
from collections import namedtuple

@pytest.fixture
def lambda_context():
    lambda_context = {
        "function_name": "test",
        "memory_limit_in_mb": 128,
        "invoked_function_arn": "arn:aws:lambda:eu-west-1:809313241:function:test",
        "aws_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
    }

    return namedtuple("LambdaContext", lambda_context.keys())(*lambda_context.values())

def test_lambda_handler(lambda_handler, lambda_context):
    test_event = {"test": "event"}
    lambda_handler(test_event, lambda_context)  # this will now have a Context object populated
```
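The `namedtuple` trick above works because Logger reads these values as plain attributes on the context object, so any object exposing the right field names is enough. A standalone illustration:

```python
from collections import namedtuple

context_data = {
    "function_name": "test",
    "memory_limit_in_mb": 128,
    "invoked_function_arn": "arn:aws:lambda:eu-west-1:809313241:function:test",
    "aws_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
}

# namedtuple builds a class whose instances expose each key as an attribute,
# mimicking the real LambdaContext object's interface
LambdaContext = namedtuple("LambdaContext", context_data.keys())
context = LambdaContext(*context_data.values())

print(context.function_name)       # -> test
print(context.memory_limit_in_mb)  # -> 128
```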
30 changes: 28 additions & 2 deletions docs/content/core/metrics.mdx
Expand Up @@ -251,11 +251,37 @@ This has the advantage of keeping cold start metric separate from your applicati

## Testing your code

### Environment variables

Use `POWERTOOLS_METRICS_NAMESPACE` and `POWERTOOLS_SERVICE_NAME` env vars when unit testing your code to ensure metric namespace and dimension objects are created, and your code doesn't fail validation.

```bash:title=pytest_metric_namespace.sh
POWERTOOLS_SERVICE_NAME="Example" POWERTOOLS_METRICS_NAMESPACE="Application" python -m pytest
```

You can ignore this if you are explicitly setting namespace/default dimension by passing the `namespace` and `service` parameters when initializing Metrics: `metrics = Metrics(namespace=ApplicationName, service=ServiceName)`.
If you prefer setting the environment variable for specific tests, and are using Pytest, you can use the [monkeypatch](https://docs.pytest.org/en/latest/monkeypatch.html) fixture:

```python:title=pytest_env_var.py
def test_namespace_env_var(monkeypatch):
    # Set POWERTOOLS_METRICS_NAMESPACE before initializing Metrics
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "Application")

    metrics = Metrics()
    ...
```

> Ignore this if you are explicitly setting the namespace/default dimension via the `namespace` and `service` parameters: `metrics = Metrics(namespace=ApplicationName, service=ServiceName)`

### Clearing metrics

`Metrics` keeps metrics in memory across multiple instances. If you need to test this behaviour, you can use the following Pytest fixture to ensure metrics are reset, including the cold start flag:

```python:title=pytest_metrics_reset_fixture.py
import pytest
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import metrics as metrics_global  # module holding the cold start flag

@pytest.fixture(scope="function", autouse=True)
def reset_metric_set():
    # Clear out every metric data prior to every test
    metrics = Metrics()
    metrics.clear_metrics()
    metrics_global.is_cold_start = True  # ensure each test has cold start
    yield
```
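The fixture is needed because metric data lives at class level, so it survives across `Metrics()` instances. A minimal sketch of that pattern with a hypothetical `SharedMetrics` class, not the library's internals:

```python
class SharedMetrics:
    # class-level dict shared by every instance, mimicking how
    # Metrics accumulates metric data in memory between tests
    _metric_set: dict = {}

    def add_metric(self, name: str, value: float) -> None:
        self._metric_set[name] = value

    def clear_metrics(self) -> None:
        type(self)._metric_set.clear()

a = SharedMetrics()
a.add_metric("SuccessfulBooking", 1)

b = SharedMetrics()                          # a brand new instance...
print("SuccessfulBooking" in b._metric_set)  # ...still sees the metric: True

b.clear_metrics()
print(a._metric_set)                         # cleared for every instance: {}
```

Without clearing between tests, a metric recorded in one test would leak into the next, which is exactly what the `autouse=True` fixture above prevents.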
Expand Up @@ -3,7 +3,7 @@ import styled from '@emotion/styled';
const FlexWrapper = styled.div({
display: 'flex',
minHeight: '100vh',
maxWidth: 1600,
maxWidth: '87vw',
margin: '0 auto'
});

Expand Down
4 changes: 4 additions & 0 deletions docs/src/styles/global.css
Expand Up @@ -25,3 +25,7 @@ tr > td {
.token.property {
color: darkmagenta !important
}

blockquote {
font-size: 1.15em
}
