Add KVP Tracer and OpenTel #83


Closed
wants to merge 8 commits into from

Conversation

peytonr18
Contributor

Draft PR to discuss the OpenTelemetry Tracer and next steps for making the tracer async, refactoring, and instrumenting the rest of the repo.

Member

@jeremycline jeremycline left a comment


I think we can shuffle things around a bit and configure the initial layers slightly differently, but after that I'd advocate for merging this and adding additional instrumentation and tracing layers to export to KVP in follow-up PRs.

Comment on lines +38 to +41

```rust
// Create a `tracing` subscriber
let subscriber = Registry::default()
    .with(otel_layer);
```
Member


The tracing subscriber lets you register multiple layers, and each layer can be configured with its own filters and formatters and so on. It takes a little digging in the tracing_subscriber docs and I forever forget how to do it properly, but here's an example that would let us avoid needing to duplicate the file descriptor:

Suggested change

```diff
-// Create a `tracing` subscriber
-let subscriber = Registry::default()
-    .with(otel_layer);
+// Create a tracing layer that logs to stderr and a file
+let log_file = Arc::new(File::create("example.log").unwrap());
+let stderr_layer = tracing_subscriber::fmt::layer()
+    .with_writer(std::io::stderr.and(log_file))
+    .with_filter(tracing_subscriber::EnvFilter::from_env("AZURE_INIT_LOG"));
+// Create a `tracing` subscriber
+let subscriber = Registry::default()
+    .with(stderr_layer)
+    .with(otel_layer);
```

This adds a second layer which writes to stderr as well as a file, and applies a filter you can define in the AZURE_INIT_LOG environment variable (this requires the env-filter feature on the tracing-subscriber crate). We don't necessarily need to use EnvFilter, but I tend to start with it because it's easy to get going with. I tend to abandon it if I start using a configuration file for the application.

If you run with this diff, you'll get logs written to stderr and "example.log" for that layer, and then separately to "spans.log". I don't think we want this literal suggestion, but I think this is the general approach we want to take when setting things up.

We can also apply a global filter that applies to all layers instead (or in addition to) filters on each layer.
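A minimal sketch of that global-filter variant, assuming the env-filter feature and reusing the AZURE_INIT_LOG variable from the suggestion above (the `init_tracing` name is only for illustration):

```rust
use tracing_subscriber::{layer::SubscriberExt, EnvFilter, Registry};

// Sketch: an EnvFilter attached directly to the Registry acts as a global
// filter for every layer composed after it, instead of (or in addition to)
// per-layer filters.
fn init_tracing() {
    let global_filter = EnvFilter::from_env("AZURE_INIT_LOG");
    let stderr_layer = tracing_subscriber::fmt::layer().with_writer(std::io::stderr);

    let subscriber = Registry::default()
        .with(global_filter)
        .with(stderr_layer);

    tracing::subscriber::set_global_default(subscriber)
        .expect("failed to set global tracing subscriber");
}
```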

```rust
use tracing_subscriber::{Registry, layer::SubscriberExt};

pub static TRACER: Lazy<()> = Lazy::new(|| {
```
Member


I would recommend that we move the code within this block into the binary in the azure-init crate. Each application tends to want to configure its logging in its own way - different file names, different environment variables for configuration, different filtering rules, etc.

What would be good in a library is any custom Layers we end up making, if any. These can be consumed (and configured) by applications in their tracing-subscriber setup.
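A rough sketch of what such a library-provided layer could look like (the `EmitToKvp` name and the `eprintln!` body are placeholders for illustration, not the eventual KVP writer):

```rust
use tracing::{Event, Subscriber};
use tracing_subscriber::layer::{Context, Layer};
use tracing_subscriber::registry::LookupSpan;

/// Hypothetical custom layer the library could export; applications add it
/// to their own subscriber and keep full control over filtering and the
/// rest of their tracing setup.
pub struct EmitToKvp;

impl<S> Layer<S> for EmitToKvp
where
    S: Subscriber + for<'a> LookupSpan<'a>,
{
    fn on_event(&self, event: &Event<'_>, _ctx: Context<'_, S>) {
        // Placeholder: a real implementation would format the event and
        // write it to the Hyper-V KVP pool file rather than stderr.
        eprintln!("KVP layer saw event: {}", event.metadata().name());
    }
}
```

An application would then compose it into its own subscriber, e.g. `Registry::default().with(EmitToKvp).with(stderr_layer)`.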

```rust
fn test() {
    print!("Hello, World!");
}

#[tokio::main]
async fn main() -> ExitCode {
```
Member


The tracing-subscriber setup should probably live here, then we add a root span and instrument the provision call in that span, something like:

```rust
let subscriber = Registry::default().with(layers);
tracing::subscriber::set_global_default(subscriber).expect("...");
let root_span = tracing::info_span!("root");
match provision().instrument(root_span).await {
```
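Expanded into a self-contained sketch (assuming the env-filter feature, a stderr fmt layer, and a stand-in `provision()` function; names, the filter variable, and error handling here are illustrative only):

```rust
use std::process::ExitCode;

use tracing::Instrument;
use tracing_subscriber::{layer::SubscriberExt, EnvFilter, Layer, Registry};

#[tokio::main]
async fn main() -> ExitCode {
    // Stderr logging layer, filtered by the AZURE_INIT_LOG environment variable.
    let stderr_layer = tracing_subscriber::fmt::layer()
        .with_writer(std::io::stderr)
        .with_filter(EnvFilter::from_env("AZURE_INIT_LOG"));

    let subscriber = Registry::default().with(stderr_layer);
    tracing::subscriber::set_global_default(subscriber)
        .expect("failed to set global tracing subscriber");

    // Root span; everything inside provision() shows up as its children.
    let root_span = tracing::info_span!("root");
    match provision().instrument(root_span).await {
        Ok(()) => ExitCode::SUCCESS,
        Err(e) => {
            tracing::error!("provisioning failed: {e:?}");
            ExitCode::FAILURE
        }
    }
}

// Stand-in for the real provisioning entry point in azure-init.
async fn provision() -> Result<(), Box<dyn std::error::Error>> {
    Ok(())
}
```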

jeremycline added a commit to jeremycline/azure-init that referenced this pull request Jul 15, 2024
While debugging what turned into Azure#103, I added some minimal log
configuration to figure out what was happening. Note that this only logs
to stderr and is only configurable via the `AZURE_INIT_LOG` environment
variable. The most minimal usage is like this:

AZURE_INIT_LOG=info azure-init

Which emits anything at info level or higher. Refer to the
tracing-subscriber documentation[0] for more complex examples.

This doesn't try to integrate with OpenTelemetry or pipe logs to KVP;
those are being wired up in Azure#83.

[0] https://docs.rs/tracing-subscriber/0.3.18/tracing_subscriber/filter/struct.EnvFilter.html#directives
jeremycline added a commit to jeremycline/azure-init that referenced this pull request Jul 16, 2024
While debugging why running azure-init in a local VM hung forever, I
added some minimal log configuration to figure out what was happening.
Note that this only logs to stderr and is only configurable via the
`AZURE_INIT_LOG` environment variable. The most minimal usage is like
this:

AZURE_INIT_LOG=info azure-init

Which emits anything at info level or higher. Refer to the
tracing-subscriber documentation[0] for more complex examples.

Logs look like this:
```
2024-07-16T14:41:16.924909Z  INFO provision: azure_init: new
2024-07-16T14:41:16.928668Z  INFO provision:get_environment: azure_init: new
...
2024-07-16T14:41:17.065528Z  INFO provision:get_environment: azure_init: close time.busy=133ms time.idle=4.21ms
2024-07-16T14:41:17.073142Z  INFO provision:get_username: azure_init: new
2024-07-16T14:41:17.076323Z  INFO provision:get_username: azure_init: close time.busy=23.3µs time.idle=3.16ms
2024-07-16T14:41:17.081070Z  INFO provision: azure_init: close time.busy=153ms time.idle=3.12ms
```

It's configured to emit logs when a span opens and closes in addition to
any explicit `tracing::{debug,info,warn,error}!` call, although we can
disable that if folks don't like it. I also added spans to most of the
library functions. All the spans are at the default `INFO` level and the
default log configuration is to write logs for warning or higher, so
running this without specifying `AZURE_INIT_LOG` results in the same
behavior as before.

This doesn't try to integrate with OpenTelemetry or pipe logs to KVP;
those are being wired up in Azure#83.

[0] https://docs.rs/tracing-subscriber/0.3.18/tracing_subscriber/filter/struct.EnvFilter.html#directives
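
For reference, the span open/close events described in that commit message come from the fmt layer's span-event support; assuming that layer, enabling them looks roughly like this:

```rust
use tracing::Subscriber;
use tracing_subscriber::fmt::format::FmtSpan;
use tracing_subscriber::registry::LookupSpan;

// Sketch: configure the fmt layer to also emit an event when each span is
// created and when it closes (the close event carries the time.busy /
// time.idle timings shown in the log excerpt above).
fn span_event_layer<S>() -> impl tracing_subscriber::Layer<S>
where
    S: Subscriber + for<'a> LookupSpan<'a>,
{
    tracing_subscriber::fmt::layer()
        .with_writer(std::io::stderr)
        .with_span_events(FmtSpan::NEW | FmtSpan::CLOSE)
}
```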
dongsupark pushed a commit that referenced this pull request Jul 16, 2024
balakreddy pushed a commit to balakreddy/azure-init that referenced this pull request Jul 19, 2024
@peytonr18
Contributor Author

Closed. I kept this open for documentation's sake, but the final implementation was merged in #133.
