The current tracer target property takes either a boolean or a string that sets the name of the generated `.lft` file. When the target property is set to `true` or given a string, the `LF_TRACER` macro is defined, and a file is generated with all of the tracepoints.
The list of tracepoints:
```c
typedef enum {
    reaction_starts,
    reaction_ends,
    reaction_deadline_missed,
    schedule_called,
    user_event,
    user_value,
    worker_wait_starts,
    worker_wait_ends,
    scheduler_advancing_time_starts,
    scheduler_advancing_time_ends,
    federated, // Everything below this is for tracing federated interactions.
    ...
```
Even in non-federated code, this produces roughly seven tracepoints per reaction invocation. The tracer has two tracepoint buffers, each with space for 2048 points. When both of them fill up, the tracer flushes them to a file, which makes the tracer very inefficient and introduces irregularities into the measurements.
The LF program (shown in the diagram) that I used to run the benchmark is a timer-triggered reaction with an empty body (empty other than the GPIO toggle).
The period-jitter histograms are measured using a logic analyzer, the default tracer, and an optimized tracer that logs only the reaction-start values.
As one can see from the graphs above, when we use the regular tracer, the accuracy of the measurements drops drastically due to the tracer's inefficiencies. However, when we decrease the number of tracepoints recorded, the accuracy gets fairly close to the logic-analyzer measurements.
My proposal relies on the idea that in most cases one doesn't need all the tracepoints in the file, only a subset of them. So I plan to change the tracer target property to a list of strings or enums that identifies which tracepoints will be triggered. Suggested options:
The suggested implementation is to generate a macro corresponding to each of the options above and to edit `tracepoint.h`, and potentially `tracepoint.c`, accordingly. An example is given as pseudocode below:
This sounds like a very good idea to me. We might consider abstracting common patterns. E.g., TRACE_REACTIONS could specify to trace reaction starts and ends. TRACE_TAG_ADVANCE could specify four related events. Etc.
One clarification: two of the seven events you identified occur once per tag, two occur once per tag per worker, one occurs once per timer, and one occurs only if a reaction has a deadline miss. In your example, there is only one reaction invocation per tag, which is why it looks like there are seven per reaction invocation.
Still, being able to record a subset of these seems valuable.
If it's a desired feature to be able to name the trace file, I propose we add a new target property, `tracer-file`, which gets the name of the file.