docs/core/event_handler/appsync.md (+9 -9)
@@ -9,15 +9,15 @@ Event handler for AWS AppSync Direct Lambda Resolver and Amplify GraphQL Transfo
 * Automatically parse API arguments to function arguments
 * Choose between strictly match a GraphQL field name or all of them to a function
-* Integrates with [Data classes utilities](../../utilities/data_classes.md){target="_blank"} to access resolver and identity information
+* Integrates with [Data classes utilities](../../utilities/data_classes.md){target="_blank" rel="nofollow"} to access resolver and identity information
 * Works with both Direct Lambda Resolver and Amplify GraphQL Transformer `@function` directive
 * Support async Python 3.8+ functions, and generators

 ## Terminology

-**[Direct Lambda Resolver](https://docs.aws.amazon.com/appsync/latest/devguide/direct-lambda-reference.html){target="_blank"}**. A custom AppSync Resolver to bypass the use of Apache Velocity Template (VTL) and automatically map your function's response to a GraphQL field.
+**[Direct Lambda Resolver](https://docs.aws.amazon.com/appsync/latest/devguide/direct-lambda-reference.html){target="_blank" rel="nofollow"}**. A custom AppSync Resolver to bypass the use of Apache Velocity Template (VTL) and automatically map your function's response to a GraphQL field.

-**[Amplify GraphQL Transformer](https://docs.amplify.aws/cli/graphql-transformer/function){target="_blank"}**. Custom GraphQL directives to define your application's data model using Schema Definition Language (SDL). Amplify CLI uses these directives to convert GraphQL SDL into full descriptive AWS CloudFormation templates.
+**[Amplify GraphQL Transformer](https://docs.amplify.aws/cli/graphql-transformer/function){target="_blank" rel="nofollow"}**. Custom GraphQL directives to define your application's data model using Schema Definition Language (SDL). Amplify CLI uses these directives to convert GraphQL SDL into full descriptive AWS CloudFormation templates.

 ## Getting started
@@ -28,7 +28,7 @@ You must have an existing AppSync GraphQL API and IAM permissions to invoke your
 This is the sample infrastructure we are using for the initial examples with a AppSync Direct Lambda Resolver.

 ???+ tip "Tip: Designing GraphQL Schemas for the first time?"
-    Visit [AWS AppSync schema documentation](https://docs.aws.amazon.com/appsync/latest/devguide/designing-your-schema.html){target="_blank"} for understanding how to define types, nesting, and pagination.
+    Visit [AWS AppSync schema documentation](https://docs.aws.amazon.com/appsync/latest/devguide/designing-your-schema.html){target="_blank" rel="nofollow"} for understanding how to define types, nesting, and pagination.

 === "getting_started_schema.graphql"
@@ -93,7 +93,7 @@ Here's an example with two separate functions to resolve `getTodo` and `listTodo
 ### Scalar functions

-When working with [AWS AppSync Scalar types](https://docs.aws.amazon.com/appsync/latest/devguide/scalars.html){target="_blank"}, you might want to generate the same values for data validation purposes.
+When working with [AWS AppSync Scalar types](https://docs.aws.amazon.com/appsync/latest/devguide/scalars.html){target="_blank" rel="nofollow"}, you might want to generate the same values for data validation purposes.

 For convenience, the most commonly used values are available as functions within `scalar_types_utils` module.
@@ -143,15 +143,15 @@ For Lambda Python3.8+ runtime, this utility supports async functions when you us
 ### Amplify GraphQL Transformer

-Assuming you have [Amplify CLI installed](https://docs.amplify.aws/cli/start/install){target="_blank"}, create a new API using `amplify add api` and use the following GraphQL Schema.
+Assuming you have [Amplify CLI installed](https://docs.amplify.aws/cli/start/install){target="_blank" rel="nofollow"}, create a new API using `amplify add api` and use the following GraphQL Schema.

 <!-- AppSync resolver decorator is a concise way to create lambda functions to handle AppSync resolvers for multiple `typeName` and `fieldName` declarations. -->
-[Create two new basic Python functions](https://docs.amplify.aws/cli/function#set-up-a-function){target="_blank"} via `amplify add function`.
+[Create two new basic Python functions](https://docs.amplify.aws/cli/function#set-up-a-function){target="_blank" rel="nofollow"} via `amplify add function`.

 ???+ note
     Amplify CLI generated functions use `Pipenv` as a dependency manager. Your function source code is located at **`amplify/backend/function/your-function-name`**.
@@ -192,7 +192,7 @@ Use the following code for `merchantInfo` and `searchMerchant` functions respect
 ### Custom data models

-You can subclass [AppSyncResolverEvent](../../utilities/data_classes.md#appsync-resolver){target="_blank"} to bring your own set of methods to handle incoming events, by using `data_model` param in the `resolve` method.
+You can subclass [AppSyncResolverEvent](../../utilities/data_classes.md#appsync-resolver){target="_blank" rel="nofollow"} to bring your own set of methods to handle incoming events, by using `data_model` param in the `resolve` method.

 === "custom_models.py.py"
@@ -215,7 +215,7 @@ You can subclass [AppSyncResolverEvent](../../utilities/data_classes.md#appsync-
 ### Split operations with Router

 ???+ tip
-    Read the **[considerations section for trade-offs between monolithic and micro functions](./api_gateway.md#considerations){target="_blank"}**, as it's also applicable here.
+    Read the **[considerations section for trade-offs between monolithic and micro functions](./api_gateway.md#considerations){target="_blank" rel="nofollow"}**, as it's also applicable here.

 As you grow the number of related GraphQL operations a given Lambda function should handle, it is natural to split them into separate files to ease maintenance - That's when the `Router` feature comes handy.
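The routing this file's event handler documents, matching an AppSync `typeName`/`fieldName` pair to a registered function and unpacking GraphQL arguments, can be sketched without the library. This is a minimal illustrative sketch of the idea, not the Powertools implementation; the `resolver`/`resolve` names here are hypothetical:

```python
# Minimal sketch of typeName/fieldName routing for an AppSync
# Direct Lambda Resolver payload. Illustrative only; not the
# aws_lambda_powertools AppSyncResolver implementation.
from typing import Any, Callable, Dict, Tuple

_resolvers: Dict[Tuple[str, str], Callable[..., Any]] = {}


def resolver(type_name: str, field_name: str):
    """Register a function for a given GraphQL type/field pair."""
    def decorator(func: Callable[..., Any]) -> Callable[..., Any]:
        _resolvers[(type_name, field_name)] = func
        return func
    return decorator


@resolver(type_name="Query", field_name="getTodo")
def get_todo(id: str) -> dict:
    return {"id": id, "title": "example"}


def resolve(event: dict) -> Any:
    """Dispatch based on the event's GraphQL field metadata."""
    info = event["info"]
    func = _resolvers[(info["parentTypeName"], info["fieldName"])]
    # Direct Lambda Resolver events carry GraphQL arguments
    # under the "arguments" key
    return func(**event["arguments"])
```

Registering a second function for `Query.listTodos` would follow the same decorator pattern, which is why a single Lambda can serve several fields.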
docs/core/logger.md (+17 -17)
@@ -15,7 +15,7 @@ Logger provides an opinionated logger with output structured as JSON.
 ## Getting started

 ???+ tip
-    All examples shared in this documentation are available within the [project repository](https://github.com/aws-powertools/powertools-lambda-python/tree/develop/examples){target="_blank"}.
+    All examples shared in this documentation are available within the [project repository](https://github.com/aws-powertools/powertools-lambda-python/tree/develop/examples){target="_blank" rel="nofollow"}.

 Logger requires two settings:
@@ -39,7 +39,7 @@ Your Logger will include the following keys to your structured logging:
 | **message**: `Any` | `Collecting payment` | Unserializable JSON values are casted as `str` |
 | **timestamp**: `str` | `2021-05-03 10:20:19,650+0200` | Timestamp with milliseconds, by default uses local timezone |
 | **service**: `str` | `payment` | Service name defined, by default `service_undefined` |
-| **xray_trace_id**: `str` | `1-5759e988-bd862e3fe1be46a994272793` | When [tracing is enabled](https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html){target="_blank"}, it shows X-Ray Trace ID |
+| **xray_trace_id**: `str` | `1-5759e988-bd862e3fe1be46a994272793` | When [tracing is enabled](https://docs.aws.amazon.com/lambda/latest/dg/services-xray.html){target="_blank" rel="nofollow"}, it shows X-Ray Trace ID |
 | **sampling_rate**: `float` | `0.1` | When enabled, it shows sampling rate in percentage e.g. 10% |
 | **exception_name**: `str` | `ValueError` | When `logger.exception` is used and there is an exception |
 | **exception**: `str` | `Traceback (most recent call last)..` | When `logger.exception` is used and there is an exception |
@@ -83,7 +83,7 @@ When debugging in non-production environments, you can instruct Logger to log th
 ### Setting a Correlation ID

-You can set a Correlation ID using `correlation_id_path` param by passing a [JMESPath expression](https://jmespath.org/tutorial.html){target="_blank"}.
+You can set a Correlation ID using `correlation_id_path` param by passing a [JMESPath expression](https://jmespath.org/tutorial.html){target="_blank" rel="nofollow"}.

 ???+ tip
     You can retrieve correlation IDs via `get_correlation_id` method
@@ -108,7 +108,7 @@ You can set a Correlation ID using `correlation_id_path` param by passing a [JME
 #### set_correlation_id method

-You can also use `set_correlation_id` method to inject it anywhere else in your code. Example below uses [Event Source Data Classes utility](../utilities/data_classes.md){target="_blank"} to easily access events properties.
+You can also use `set_correlation_id` method to inject it anywhere else in your code. Example below uses [Event Source Data Classes utility](../utilities/data_classes.md){target="_blank" rel="nofollow"} to easily access events properties.

 === "set_correlation_id_method.py"
@@ -163,7 +163,7 @@ You can append additional keys using either mechanism:
 #### append_keys method

 ???+ warning
-    `append_keys` is not thread-safe, please see [RFC](https://github.com/aws-powertools/powertools-lambda-python/issues/991){target="_blank"}.
+    `append_keys` is not thread-safe, please see [RFC](https://github.com/aws-powertools/powertools-lambda-python/issues/991){target="_blank" rel="nofollow"}.

 You can append your own keys to your existing Logger via `append_keys(**additional_key_values)` method.
@@ -242,7 +242,7 @@ You can remove any additional key from Logger state using `remove_keys`.
 #### Clearing all state

-Logger is commonly initialized in the global scope. Due to [Lambda Execution Context reuse](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html){target="_blank"}, this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use `clear_state=True` param in `inject_lambda_context` decorator.
+Logger is commonly initialized in the global scope. Due to [Lambda Execution Context reuse](https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html){target="_blank" rel="nofollow"}, this means that custom keys can be persisted across invocations. If you want all custom keys to be deleted, you can use `clear_state=True` param in `inject_lambda_context` decorator.

 ???+ tip "Tip: When is this useful?"
     It is useful when you add multiple custom keys conditionally, instead of setting a default `None` value if not present. Any key with `None` value is automatically removed by Logger.
@@ -301,7 +301,7 @@ Logger can optionally log uncaught exceptions by setting `log_uncaught_exception
 ??? question "What are uncaught exceptions?"

-    It's any raised exception that wasn't handled by the [`except` statement](https://docs.python.org/3.9/tutorial/errors.html#handling-exceptions){target="_blank"}, leading a Python program to a non-successful exit.
+    It's any raised exception that wasn't handled by the [`except` statement](https://docs.python.org/3.9/tutorial/errors.html#handling-exceptions){target="_blank" rel="nofollow"}, leading a Python program to a non-successful exit.

     They are typically raised intentionally to signal a problem (`raise ValueError`), or a propagated exception from elsewhere in your code that you didn't handle it willingly or not (`KeyError`, `jsonDecoderError`, etc.).
@@ -323,10 +323,10 @@ Logger uses Python's standard logging date format with the addition of timezone:
 You can easily change the date format using one of the following parameters:

-* **`datefmt`**. You can pass any [strftime format codes](https://strftime.org/){target="_blank"}. Use `%F` if you need milliseconds.
+* **`datefmt`**. You can pass any [strftime format codes](https://strftime.org/){target="_blank" rel="nofollow"}. Use `%F` if you need milliseconds.
 * **`use_rfc3339`**. This flag will use a format compliant with both RFC3339 and ISO8601: `2022-10-27T16:27:43.738+02:00`

-???+ tip "Prefer using [datetime string formats](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes){target="_blank"}?"
+???+ tip "Prefer using [datetime string formats](https://docs.python.org/3/library/datetime.html#strftime-and-strptime-format-codes){target="_blank" rel="nofollow"}?"
     Use `use_datetime_directive` flag along with `datefmt` to instruct Logger to use `datetime` instead of `time.strftime`.

 === "date_formatting.py"
@@ -360,7 +360,7 @@ You can use any of the following built-in JMESPath expressions as part of [injec
 ### Reusing Logger across your code

-Similar to [Tracer](./tracer.md#reusing-tracer-across-your-code){target="_blank"}, a new instance that uses the same `service` name - env var or explicit parameter - will reuse a previous Logger instance. Just like `logging.getLogger("logger_name")` would in the standard library if called with the same logger name.
+Similar to [Tracer](./tracer.md#reusing-tracer-across-your-code){target="_blank" rel="nofollow"}, a new instance that uses the same `service` name - env var or explicit parameter - will reuse a previous Logger instance. Just like `logging.getLogger("logger_name")` would in the standard library if called with the same logger name.

 Notice in the CloudWatch Logs output how `payment_id` appeared as expected when logging in `collect.py`.
@@ -407,7 +407,7 @@ You can use values ranging from `0.0` to `1` (100%) when setting `POWERTOOLS_LOG
 Sampling decision happens at the Logger initialization. This means sampling may happen significantly more or less than depending on your traffic patterns, for example a steady low number of invocations and thus few cold starts.

 ???+ note
-    Open a [feature request](https://github.com/aws-powertools/powertools-lambda-python/issues/new?assignees=&labels=feature-request%2C+triage&template=feature_request.md&title=){target="_blank"} if you want Logger to calculate sampling for every invocation
+    Open a [feature request](https://github.com/aws-powertools/powertools-lambda-python/issues/new?assignees=&labels=feature-request%2C+triage&template=feature_request.md&title=){target="_blank" rel="nofollow"} if you want Logger to calculate sampling for every invocation

 === "sampling_debug_logs.py"
@@ -447,9 +447,9 @@ If you prefer configuring it separately, or you'd want to bring this JSON Format
 ### Observability providers

-!!! note "In this context, an observability provider is an [AWS Lambda Partner](https://go.aws/3HtU6CZ){target="_blank"} offering a platform for logging, metrics, traces, etc."
+!!! note "In this context, an observability provider is an [AWS Lambda Partner](https://go.aws/3HtU6CZ){target="_blank" rel="nofollow"} offering a platform for logging, metrics, traces, etc."

-You can send logs to the observability provider of your choice via [Lambda Extensions](https://aws.amazon.com/blogs/compute/using-aws-lambda-extensions-to-send-logs-to-custom-destinations/){target="_blank"}. In most cases, you shouldn't need any custom Logger configuration, and logs will be shipped async without any performance impact.
+You can send logs to the observability provider of your choice via [Lambda Extensions](https://aws.amazon.com/blogs/compute/using-aws-lambda-extensions-to-send-logs-to-custom-destinations/){target="_blank" rel="nofollow"}. In most cases, you shouldn't need any custom Logger configuration, and logs will be shipped async without any performance impact.

 #### Built-in formatters
@@ -634,7 +634,7 @@ For exceptional cases where you want to completely replace our formatter logic,
 #### Bring your own JSON serializer

-By default, Logger uses `json.dumps` and `json.loads` as serializer and deserializer respectively. There could be scenarios where you are making use of alternative JSON libraries like [orjson](https://github.com/ijl/orjson){target="_blank"}.
+By default, Logger uses `json.dumps` and `json.loads` as serializer and deserializer respectively. There could be scenarios where you are making use of alternative JSON libraries like [orjson](https://github.com/ijl/orjson){target="_blank" rel="nofollow"}.

 As parameters don't always translate well between them, you can pass any callable that receives a `dict` and return a `str`:
@@ -664,7 +664,7 @@ This is a Pytest sample that provides the minimum information necessary for Logg
 ```

 ???+ tip
-    Check out the built-in [Pytest caplog fixture](https://docs.pytest.org/en/latest/how-to/logging.html){target="_blank"} to assert plain log messages
+    Check out the built-in [Pytest caplog fixture](https://docs.pytest.org/en/latest/how-to/logging.html){target="_blank" rel="nofollow"} to assert plain log messages

 ### Pytest live log feature
@@ -703,7 +703,7 @@ By default all registered loggers will be modified. You can change this behavior
 ### How can I add standard library logging attributes to a log record?

-The Python standard library log records contains a [large set of attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes){target="_blank"}, however only a few are included in Powertools for AWS Lambda (Python) Logger log record by default.
+The Python standard library log records contains a [large set of attributes](https://docs.python.org/3/library/logging.html#logrecord-attributes){target="_blank" rel="nofollow"}, however only a few are included in Powertools for AWS Lambda (Python) Logger log record by default.

 You can include any of these logging attributes as key value arguments (`kwargs`) when instantiating `Logger` or `LambdaPowertoolsFormatter`.
@@ -744,4 +744,4 @@ Here's an example where we persist `payment_id` not `request_id`. Note that `pay
 <!-- markdownlint-disable MD013 -->
 ### How do I aggregate and search Powertools for AWS Lambda (Python) logs across accounts?

-As of now, ElasticSearch (ELK) or 3rd party solutions are best suited to this task. Please refer to this [discussion for more details](https://github.com/aws-powertools/powertools-lambda-python/issues/460){target="_blank"}
+As of now, ElasticSearch (ELK) or 3rd party solutions are best suited to this task. Please refer to this [discussion for more details](https://github.com/aws-powertools/powertools-lambda-python/issues/460){target="_blank" rel="nofollow"}
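The Logger behavior these hunks touch, structured JSON output with appended custom keys where any `None`-valued key is dropped, boils down to merging extra keys into each record before serialization. A minimal stdlib sketch of that idea follows; it is illustrative only, not the Powertools Logger implementation, and the `JsonFormatter` class here is hypothetical:

```python
# Sketch of structured JSON logging with appended keys, the idea
# behind Logger's append_keys. Pure stdlib; illustrative only.
import json
import logging


class JsonFormatter(logging.Formatter):
    def __init__(self) -> None:
        super().__init__()
        self.extra_keys: dict = {}

    def append_keys(self, **kwargs) -> None:
        """Persist extra keys to include in every record."""
        self.extra_keys.update(kwargs)

    def format(self, record: logging.LogRecord) -> str:
        log = {
            "level": record.levelname,
            "message": record.getMessage(),
            **self.extra_keys,
        }
        # Drop keys whose value is None, mirroring the documented
        # Logger behavior for conditional custom keys
        return json.dumps({k: v for k, v in log.items() if v is not None})
```

Attaching this formatter to a `logging.StreamHandler` yields one JSON object per log line; appending `payment_id` once then makes it appear on every subsequent record, which is the reuse-across-invocations behavior `clear_state=True` exists to reset.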