API reference

This is the official API reference of PerfTest. Note that it can also be queried interactively from the Julia REPL using the help mode:

julia> using PerfTest

help?> PerfTest
Types
Index
PerfTest.ASTRule
PerfTest.ASTWalkDepthRecord
PerfTest.Context
PerfTest.CustomMetric
PerfTest.DepthRecord
PerfTest.EnvironmentFlags
PerfTest.Methodology_Result
PerfTest.Metric_Constraint
PerfTest.Metric_Reference
PerfTest.Metric_Result
PerfTest.Perftest_Datafile_Root
PerfTest.Perftest_Result
PerfTest.Struct_Eff_Mem_Throughput
PerfTest.Struct_Metric_Config
PerfTest.Struct_Regression
PerfTest.Struct_Roofline_Config
PerfTest.Struct_Tolerance
Documentation
PerfTest.ASTRule — Type
Used by the AST walker to check for expressions that match condition; if they do, then modifier will be applied to the expression.
This is the basic building block of the code transformer: a set of these rules compounds into all the manipulations needed to create the testing suite.
PerfTest.ASTWalkDepthRecord — Type
This structure is used to record a test set hierarchy during an AST walk. In any specific point of the walk the array will TODO
PerfTest.Context — Type
To perform the test suite generation, the AST walk needs to keep a context register to integrate features that rely on the scope hierarchy.
PerfTest.CustomMetric — Type
Saves flags needed during the execution of the AST walk. It holds whether:
- The walk is on an expression that is a test target
- The walk is on an expression that is inside a config macro
- Several flags that affect the roofline methodology
PerfTest.DepthRecord — Type
This structure is used to record a test set frame during an AST walk. See ASTWalkDepthRecord for more info.
PerfTest.EnvironmentFlags — Type
Saves flags needed during the execution of the AST walk. It holds whether:
- The walk is on an expression that is a test target
- The walk is on an expression that is inside a config macro
- Several flags that affect the roofline methodology
PerfTest.Methodology_Result — Type
This struct is used in the test suite to save a methodology result, which in turn consists of a group of metric results and their references. Additionally, custom elements that are not subject to testing are also saved, e.g. informational metrics and printing functions.
PerfTest.Metric_Constraint — Type
This struct is used in the test suite to save a metric test result and its associated data. It saves the reference used and the tolerance intervals in absolute and percentage values; it also records whether the test succeeded, plus some additional variables for data printing.
PerfTest.Metric_Reference — Type
This struct is used in the test suite to save a metric reference. A reference is meant to be later compared with a result; the combination yields the Metric_Constraint struct. It holds:
- A reference value.
- low_is_bad, which registers whether in this metric lower values are less desired than higher ones, or the opposite (e.g. time vs. FLOP/s).
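As an illustration, a comparison steered by low_is_bad could look like the following sketch. The function name passes and the comparison logic are hypothetical, not part of the PerfTest API:

```julia
# Hypothetical sketch: how low_is_bad could steer a pass/fail comparison.
# For FLOP/s-like metrics (low_is_bad = true) higher results should pass;
# for time-like metrics (low_is_bad = false) lower results should pass.
passes(value, reference, low_is_bad) =
    low_is_bad ? value >= reference : value <= reference

passes(2.0e9, 1.5e9, true)   # FLOP/s above the reference → true
passes(0.9, 1.0, false)      # runtime below the reference → true
```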
PerfTest.Metric_Result — Type
This struct is used in the test suite to save a metric measurement; therefore it saves the metric name, its units, and its value.
PerfTest.Perftest_Datafile_Root — Type
This struct is the root of the data recording file; it can save several performance test suite execution results.
PerfTest.Perftest_Result — Type
This struct saves a complete test suite result for one execution. It also saves the raw measurements obtained from the targets.
PerfTest.Struct_Eff_Mem_Throughput — Type
This struct holds the configuration of the basic effective memory throughput methodology.
- enabled is used to enable or disable the methodology.
- tolerance defines the interval of ratios (effective memory throughput / max bandwidth) that make the test succeed.
PerfTest.Struct_Metric_Config — Type
This struct can hold the configuration of any metric.
- enabled is used to enable or disable the methodology.
- regression_threshold, when comparing the measurement with a reference, defines how far the measurement can be from the reference.
PerfTest.Struct_Regression — Type
This struct holds the configuration of the basic metric regression methodology.
- enabled is used to enable or disable the methodology.
- save_failed will record historical measurements of failed tests if true.
- general_regression_threshold sets the tolerance interval for the test comparison.
- regression_calculation can be:
  - :latest: the reference will be the latest saved result
  - :average: the reference will be the average of all saved results
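The two regression_calculation modes can be sketched as follows. The data layout (a plain vector of past measurements) and the helper name reference are assumptions for illustration only:

```julia
# Illustrative only: how the regression reference could be derived from stored
# history according to regression_calculation (the data layout is an assumption):
history = [10.0, 11.0, 9.0]   # e.g. past median times

reference(hist, mode) = mode == :latest ? last(hist) : sum(hist) / length(hist)

reference(history, :latest)    # → 9.0
reference(history, :average)   # → 10.0
```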
PerfTest.Struct_Roofline_Config — Type
This struct holds the configuration of the basic roofline methodology.
- enabled is used to enable or disable the methodology.
- tolerance defines the interval of ratios (eff. mem. through. / max. bandwidth) that make the test succeed.
PerfTest.Struct_Tolerance — Type
Tolerance interval structure. Used to save intervals around a threshold during test comparisons.
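A minimal sketch of how such an interval around a reference could be applied. The field names min_percentage and max_percentage and the helper within are assumptions, not PerfTest's actual fields:

```julia
# Sketch of a tolerance interval check around a reference value
# (field names are assumptions, not PerfTest's actual fields):
struct Tolerance
    min_percentage::Float64
    max_percentage::Float64
end

within(t::Tolerance, value, reference) =
    t.min_percentage * reference <= value <= t.max_percentage * reference

within(Tolerance(0.8, 1.2), 95.0, 100.0)  # 95% of the reference → true
within(Tolerance(0.8, 1.2), 60.0, 100.0)  # far below the interval → false
```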
Functions
Index
PerfTest._treeRun
PerfTest.autoflopExpressionParser
PerfTest.auxiliarMetricPrint
PerfTest.buildPrimitiveMetrics
PerfTest.by_index
PerfTest.checkAuxiliaryCustomMetrics
PerfTest.checkAuxiliaryMetric
PerfTest.checkCustomMetric
PerfTest.checkCustomMetrics
PerfTest.checkMedianTime
PerfTest.checkMinTime
PerfTest.configFallBack
PerfTest.customMetricExpressionParser
PerfTest.customMetricReferenceExpressionParser
PerfTest.extractMethodologyResultArray
PerfTest.extractNamesResultArray
PerfTest.flattenedInterpolation
PerfTest.fullParsingSuite
PerfTest.genTestName!
PerfTest.getMetricValue
PerfTest.getNumber
PerfTest.grepOutput
PerfTest.grepOutputXGetNumber
PerfTest.iteratorExpressionParser
PerfTest.loadFileAsExpr
PerfTest.metaGet
PerfTest.metaGetString
PerfTest.onCustomMetricDefinition
PerfTest.onMemoryThroughputDefinition
PerfTest.openDataFile
PerfTest.p_blue
PerfTest.p_green
PerfTest.p_red
PerfTest.p_yellow
PerfTest.perftestConfigEnter
PerfTest.perftestConfigExit
PerfTest.popQuoteBlocks
PerfTest.printDepth!
PerfTest.printIntervalLanding
PerfTest.printMethodology
PerfTest.printMetric
PerfTest.printedOutputExpressionParser
PerfTest.printfail
PerfTest.removeBlock
PerfTest.retvalExpressionParser
PerfTest.rooflineMacroParse
PerfTest.ruleSet
PerfTest.saveDataFile
PerfTest.saveExprAsFile
PerfTest.setupCPUPeakFlopBenchmark
PerfTest.setupMemoryBandwidthBenchmark
PerfTest.testsetUpdate!
PerfTest.treeRun
PerfTest.unblockAndConcat
PerfTest.@auxiliary_metric
PerfTest.@define_eff_memory_throughput
PerfTest.@define_metric
PerfTest.@lpad
PerfTest.@on_perftest_exec
PerfTest.@on_perftest_ignore
PerfTest.@perftest
PerfTest.@perftest_config
PerfTest.@roofline
Documentation
PerfTest._treeRun — Method
This method takes an input Julia expression and a context register, and executes a transformation that converts a recipe script (input) into a fully-fledged testing suite (return value).
Arguments
- input_expr: the recipe/source expression (internally a.k.a. the source code space).
- context: a register that will store information useful for the transformation over its run over the AST of the input.
PerfTest.autoflopExpressionParser — Method
This is one of the parser functions that expand any formula block for metric definition. This function will parse the :autoflop symbol and substitute it with the FLOP count of the test target.
PerfTest.auxiliarMetricPrint — Method
This function is used to dump metric information regarding auxiliary metrics, which are not used in testing.
PerfTest.buildPrimitiveMetrics — Method
This function generates the code that makes primitive metric values available to all methodologies and custom metrics.
PerfTest.by_index — Method
This method expects a hierarchy tree (dict) in the form of nested dictionaries and a vector of dictionary keys idx. The function will recursively apply the keys to get to a final element.
It is usually put to work with the DepthRecord struct.
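The recursive key application can be illustrated with a hypothetical reimplementation (this is a sketch of the described behavior, not PerfTest's actual source):

```julia
# Hypothetical reimplementation of the described nested-dictionary lookup:
walk(d, idx) = isempty(idx) ? d : walk(d[first(idx)], idx[2:end])

tree = Dict("Test Set 1" => Dict("Test 1" => 42))
walk(tree, ["Test Set 1", "Test 1"])  # → 42
```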
PerfTest.checkAuxiliaryCustomMetrics — Method
This method is used to generate the code that computes the value of every auxiliary custom metric enabled in the current context, where the code is generated.
PerfTest.checkAuxiliaryMetric — Method
This method is used to generate the code that computes the value of a given auxiliary custom metric in the context where the code is generated.
PerfTest.checkCustomMetric — Method
This function is used to generate the code that evaluates if a custom metric result of a target is within a specified reference.
WARNING: predefined symbols needed before this code is added to the generated space: reference_value, metric_results
PerfTest.checkCustomMetrics — Method
This function is used to generate the code that evaluates if the custom metric results of a target are within their specified references.
WARNING: predefined symbols needed before this code is added to the generated space: reference_value, metric_results
PerfTest.checkMedianTime — Method
This function is used to generate the code that evaluates if the median execution time of a target is within a specified reference.
WARNING: predefined symbols needed before this code is added to the generated space: reference_value, metric_results
PerfTest.checkMinTime — Method
This function is used to generate the code that evaluates if the minimum execution time of a target is within a specified reference.
WARNING: predefined symbols needed before this code is added to the generated space: reference_value, metric_results
PerfTest.configFallBack — Method
A small automatism to fall back to defaults if the provided configuration is absent. Kind can be: :regression
PerfTest.customMetricExpressionParser — Method
This is one of the parser functions that expand any formula block for metric definition. This function will replace every primitive metric symbol with the structure where the corresponding value of the metric is stored.
PerfTest.customMetricReferenceExpressionParser — Method
This is one of the parser functions that expand any formula block for metric definition. This function will replace every primitive metric symbol with the structure where the corresponding reference value for the metric is stored.
PerfTest.extractMethodologyResultArray — Method
This method will return a flattened array of all of the results for all the methodologies exercised in the provided dictionary.
Example:
"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result
"Test Set 2" -> "Test 1" -> Methodology A result
Returns: M. A result (Test 1), M. B result (Test 1), M. A result (Test 2)
PerfTest.extractNamesResultArray — Method
This method will return a flattened array of the whole test result hierarchy.
Example:
"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result
"Test Set 2" -> "Test 1" -> Methodology A result
Returns:
"Test Set 1 -> Test 1 -> Methodology A"
"Test Set 1 -> Test 1 -> Methodology B"
"Test Set 2 -> Test 1 -> Methodology A"
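Such a flattening can be sketched over nested dictionaries. The representation (nested Dicts with leaf results) and the helper name flatten_names are assumptions for illustration, not PerfTest's actual implementation:

```julia
# Sketch: flattening a result hierarchy into "A -> B -> C" names
# (the nested-Dict representation is an assumption):
function flatten_names(tree, prefix = String[])
    tree isa Dict || return [join(prefix, " -> ")]
    sort(reduce(vcat, [flatten_names(v, vcat(prefix, [k])) for (k, v) in tree]))
end

tree = Dict("Test Set 1" => Dict("Test 1" => Dict("Methodology A" => 1,
                                                  "Methodology B" => 2)))
flatten_names(tree)
# → ["Test Set 1 -> Test 1 -> Methodology A",
#    "Test Set 1 -> Test 1 -> Methodology B"]
```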
PerfTest.flattenedInterpolation — Method
This method interpolates inside_expr into outside_expr anywhere it finds the token substitution_token, which is a symbol. The outside_expr has to be a block or a quote block. It has the particularity that it will remove block heads from the inside_expr and add the nested elements at the location where the token is.
Example:
outside_expr = :(:A; 4)
inside_expr = :(begin 2;3 end)
substitution_token = :A
returns = :(2;3;4)
PerfTest.fullParsingSuite — Method
This function combines a collection of rules to turn a formula block into a functioning expression that calculates any metric defined by said formula.
PerfTest.genTestName! — Method
Function that generates a test name if needed; it is used to name test targets so they can be distinguished when several go in the same test set.
PerfTest.getMetricValue — Method
Given a series of methodology results, returns the raw values of all the metrics contained in the methodology results.
PerfTest.getNumber — Method
From a string (field), it will parse the first number it finds as a Float.
PerfTest.grepOutput — Method
From a string, it will divide it by lines and retrieve the ones that match the regular expression provided.
PerfTest.grepOutputXGetNumber — Method
Given a string output, it will retrieve the first number in the first line that contains the string string.
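The grep-then-parse behavior of these helpers can be sketched as follows. These are hypothetical reimplementations of the described behavior, not the actual source:

```julia
# Hypothetical reimplementations of the described grep/parse helpers:
grep_output(s, re) = filter(line -> occursin(re, line), split(s, '\n'))
get_number(field)  = parse(Float64, match(r"-?\d+(\.\d+)?", field).match)

out = "time: 3.5 ms\nflops: 120"
grep_output(out, r"^time")                 # → ["time: 3.5 ms"]
get_number("flops: 120")                   # → 120.0
get_number(grep_output(out, r"^time")[1])  # → 3.5
```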
PerfTest.iteratorExpressionParser — Method
This is one of the parser functions that expand any formula block for metric definition. This function will parse the :iterator symbol and substitute it with the current value of the innermost test set loop of the current test target execution.
PerfTest.loadFileAsExpr — Method
Utility to get an expression from a Julia file stored at path.
PerfTest.metaGet — Method
Runs over an array of expressions trying to match the desired one. If not found, returns Nothing.
sym should follow the MacroTools nomenclature for the @capture macro.
PerfTest.metaGetString — Method
PerfTest.onCustomMetricDefinition — Method
This function is called to register a custom metric. It will parse the arguments of the definition macro and add the metric to the context, to be later used in test targets in the same scope.
PerfTest.onMemoryThroughputDefinition — Method
This function is used to register a special custom metric, the effective memory throughput calculation. It is registered in the same way as any other, but with a special flag that the EMT methodology will use to get and use the metric.
PerfTest.openDataFile — Method
This method is used to get historical data of a performance test suite from a save file located at path.
PerfTest.p_blue — Method
Prints the element in blue.
PerfTest.p_green — Method
Prints the element in green.
PerfTest.p_red — Method
Prints the element in red.
PerfTest.p_yellow — Method
Prints the element in yellow.
PerfTest.perftestConfigEnter — Method
Function to trigger the configuration mode on the context register.
PerfTest.perftestConfigExit — Method
Function to deactivate the configuration mode on the context register.
PerfTest.popQuoteBlocks — Method
Useful to correct operations limited by the tree walking. Will remove quote blocks inside the main block, without recursion, and push their expressions into the main block.
PerfTest.printDepth! — Method
This method is used to print the test names, taking the hierarchy into account and adding indentation whenever necessary.
PerfTest.printIntervalLanding — Function
This method is used to print a graphical representation of a test result and the admissible interval it can take. The result and the two bounds will be printed in order.
PerfTest.printMethodology — Method
This function is used to print the information for a methodology, relative to a specific test execution result. This will usually print a series of metrics and might also print plots.
PerfTest.printMetric — Method
This method is used to dump into the output the information about a metric and the value obtained in a specific test.
PerfTest.printedOutputExpressionParser — Method
This is one of the parser functions that expand any formula block for metric definition. This function will parse the :printed_output symbol and substitute it with the standard output of the test target execution.
PerfTest.printfail — Method
This method dumps into the output a test result in case of failure. The output will be formatted to make it easy to read.
PerfTest.removeBlock — Method
Pops expr, which has a head that is :block or :quote, and returns an array of nested expressions which are the arguments of such head.
PerfTest.retvalExpressionParser — Method
This is one of the parser functions that expand any formula block for metric definition. This function will parse the appropriate symbol and substitute it with the return value of the test target execution.
PerfTest.rooflineMacroParse — Method
Parses the roofline user request and sets up data for the roofline computation.
PerfTest.ruleSet — Method
This method builds what is known as a rule set, which is a function that will evaluate whether an expression triggers a rule in the set and, if so, apply the rule's modifier. See the ASTRule documentation for more information.
WARNING: the rule set will apply the FIRST rule that matches the expression; other matches will be ignored.
Arguments
- context: the context structure of the tree run; it will occasionally be used by some rules in the set.
- rules: the collection of rules that will belong to the resulting set.
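The first-match dispatch described in the warning can be illustrated with a simplified sketch. The named-tuple rules and the helper apply are stand-ins, not PerfTest's actual ASTRule type:

```julia
# Simplified illustration of first-match rule dispatch
# (named tuples stand in for PerfTest's actual ASTRule type):
rules = [
    (condition = ex -> ex isa Integer, modifier = ex -> ex + 1),
    (condition = ex -> true,           modifier = identity),    # fallback
]

function apply(rules, ex)
    for r in rules
        r.condition(ex) && return r.modifier(ex)  # first match wins
    end
end

apply(rules, 41)   # → 42
apply(rules, :x)   # → :x (only the fallback matches)
```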
PerfTest.saveDataFile — Method
This method is used to save historical data of a performance test suite to a save file located at path.
PerfTest.saveExprAsFile — Function
Utility to save an expression (expr) to a Julia file stored at path. Requires a :toplevel symbol to be the head of the expression.
PerfTest.setupCPUPeakFlopBenchmark — Method
This method is used to generate the code responsible for sampling the maximum CPU FLOPS, based on the available threads, in every resulting suite.
PerfTest.setupMemoryBandwidthBenchmark — Method
This method is used to generate the code responsible for sampling the maximum memory bandwidth in every resulting suite.
PerfTest.testsetUpdate! — Method
Function used to register a new test set in the hierarchy record of the context, where name is the name of the test set.
PerfTest.treeRun — Method
This method implements the transformation that converts a recipe script into a fully-fledged testing suite. The function will return a Julia expression with the resulting performance testing suite, which can then be executed or saved in a file for later usage.
Arguments
- path: the path of the script to be transformed.
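A hypothetical usage sketch, assuming only the documented path argument (the recipe filename here is invented for illustration):

```julia
# Hypothetical usage sketch; only the documented `path` argument is assumed.
using PerfTest

suite = PerfTest.treeRun("benchmarks/recipe.jl")  # returns a Julia expression
eval(suite)   # run the generated performance test suite now,
              # ...or persist it with PerfTest.saveExprAsFile for later use.
```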
PerfTest.unblockAndConcat — Method
This function is useful to move expressions to the toplevel when they are enclosed inside a block.
PerfTest.@auxiliary_metric — Macro
Defines a custom metric for informational purposes; it will not be used for testing but will be printed as output.
PerfTest.@define_eff_memory_throughput — Macro
This macro is used to define the memory bandwidth of a target in order to execute the effective memory throughput methodology.
Arguments
- formula block: an expression that returns a single value, which will be the metric value. The formula can have any Julia expression inside, and additionally some special symbols are supported. The formula may be evaluated several times: it is applied to every target in every test set, or just once if the formula is defined inside a test set, which makes it applicable only to that set.
Special symbols:
- :median_time: will be substituted by the median time the target took to execute in the benchmark.
- :minimum_time: will be substituted by the minimum time the target took to execute in the benchmark.
- :ret_value: will be substituted by the return value of the target.
- :autoflop: will be substituted by the FLOP count of the target.
- :printed_output: will be substituted by the standard output stream of the target.
- :iterator: will be substituted by the current iterator value in a loop test set.
Example:
The following definition assumes that each execution of the target expression involves transacting 1000 bytes; therefore the bandwidth is 1000 / execution time.
@define_eff_memory_throughput begin
    1000 / :median_time
end
+help?> PerfTest
Types
Index
PerfTest.ASTRule
PerfTest.ASTWalkDepthRecord
PerfTest.Context
PerfTest.CustomMetric
PerfTest.DepthRecord
PerfTest.EnvironmentFlags
PerfTest.Methodology_Result
PerfTest.Metric_Constraint
PerfTest.Metric_Reference
PerfTest.Metric_Result
PerfTest.Perftest_Datafile_Root
PerfTest.Perftest_Result
PerfTest.Struct_Eff_Mem_Throughput
PerfTest.Struct_Metric_Config
PerfTest.Struct_Regression
PerfTest.Struct_Roofline_Config
PerfTest.Struct_Tolerance
Documentation
PerfTest.ASTRule
— TypeUsed by the AST walker to check for expressions that match condition
, if they do then modifier
will be applied to the expression.
This is the basic building block of the code transformer, a set of these rules compounds to all the needed manipulations to create the testing suite.
sourcePerfTest.ASTWalkDepthRecord
— TypeThis structure is used to record a test set hierarchy during a AST walk. In any specific point of the walk the array will TODO
sourcePerfTest.Context
— TypeIn order to perform with the test suite generation, the AST walk needs to keep a context register to integrate features that rely on the scope hierarchy.
sourcePerfTest.CustomMetric
— TypeSaves flags needed during the execution of the AST walk. It holds if:
- The walk is on an expression that is a test target
- The walk is on an expression that is inside a config macro
- Several flags that affect the roofline methodology
sourcePerfTest.DepthRecord
— TypeThis structure is used to record a test set frame during a AST walk. See ASTWalkDepthRecord
for more info.
sourcePerfTest.EnvironmentFlags
— TypeSaves flags needed during the execution of the AST walk. It holds if:
- The walk is on an expression that is a test target
- The walk is on an expression that is inside a config macro
- Several flags that affect the roofline methodology
sourcePerfTest.Methodology_Result
— TypeThis struct is used in the test suite to save a methodology result, which in turn is constituted of a group of metric results and their references. Additionally, custom elements that are not subject to test are also saved, e.g. informational metrics, printing functions.
sourcePerfTest.Metric_Constraint
— TypeThis struct is used in the test suite to save a metric test result and its associated data, it saves the reference used and the toreance intervals in absolute and percentual values, also it shows if the test succeded and some additional variables for data printing
sourcePerfTest.Metric_Reference
— TypeThis struct is used in the test suite to save a metric reference, a reference is meant to be later compared with a result, its combination gives the Metric_Constraint
struct. It holds:
- A
reference
value. low_is_bad
registers if in this metric lower values are less desired than higher ones, or the opposite (e.g. time vs FLOP/s).
sourcePerfTest.Metric_Result
— TypeThis struct is used in the test suite to save a metric measurement, therefore its saves the metric name
, its units
space and its value
.
sourcePerfTest.Perftest_Datafile_Root
— TypeThis struct is the root of the data recording file, it can save several performance test suite execution results.
sourcePerfTest.Perftest_Result
— TypeThis struct saves a complete test suite result for one execution. It also saves the raw measurements obtained from the targets.
sourcePerfTest.Struct_Eff_Mem_Throughput
— TypeThis struct holds the configuration of the basic effective memory throughput methodology.
enabled
is used to enable or disable the methodologytolerance
defines the interval of ratios (eff.mem.through. / max. bandwidth) that make the test succeed.
sourcePerfTest.Struct_Metric_Config
— TypeThis struct can hold the configuration of any metric.
enabled
is used to enable or disable the methodologyregression_threshold
, when comparing the measure with a reference, defines how far can the measurement be from the reference
sourcePerfTest.Struct_Regression
— TypeThis struct holds the configuration of the basic metric regression methodology.
enabled
is used to enable or disable the methodology save_failed
will record historical measurements of failed tests if true general_regression_threshold
sets the torelance interval for the test comparison
regression_calculation
can be:
- :latest The reference will be the latest saved result
- :average The reference will be the average of all saved results
sourcePerfTest.Struct_Roofline_Config
— TypeThis struct holds the configuration of the basic roofline methodology.
enabled
is used to enable or disable the methodologytolerance
defines the interval of ratios (eff.mem.through. / max. bandwidth) that make the test succeed.
sourcePerfTest.Struct_Tolerance
— TypeTolerance interval structure. Used to save intervals around a threshold during test comparisons.
sourceFunctions
Index
PerfTest._treeRun
PerfTest.autoflopExpressionParser
PerfTest.auxiliarMetricPrint
PerfTest.buildPrimitiveMetrics
PerfTest.by_index
PerfTest.checkAuxiliaryCustomMetrics
PerfTest.checkAuxiliaryMetric
PerfTest.checkCustomMetric
PerfTest.checkCustomMetrics
PerfTest.checkMedianTime
PerfTest.checkMinTime
PerfTest.configFallBack
PerfTest.customMetricExpressionParser
PerfTest.customMetricReferenceExpressionParser
PerfTest.extractMethodologyResultArray
PerfTest.extractNamesResultArray
PerfTest.flattenedInterpolation
PerfTest.fullParsingSuite
PerfTest.genTestName!
PerfTest.getMetricValue
PerfTest.getNumber
PerfTest.grepOutput
PerfTest.grepOutputXGetNumber
PerfTest.iteratorExpressionParser
PerfTest.loadFileAsExpr
PerfTest.metaGet
PerfTest.metaGetString
PerfTest.onCustomMetricDefinition
PerfTest.onMemoryThroughputDefinition
PerfTest.openDataFile
PerfTest.p_blue
PerfTest.p_green
PerfTest.p_red
PerfTest.p_yellow
PerfTest.perftestConfigEnter
PerfTest.perftestConfigExit
PerfTest.popQuoteBlocks
PerfTest.printDepth!
PerfTest.printIntervalLanding
PerfTest.printMethodology
PerfTest.printMetric
PerfTest.printedOutputExpressionParser
PerfTest.printfail
PerfTest.removeBlock
PerfTest.retvalExpressionParser
PerfTest.rooflineMacroParse
PerfTest.ruleSet
PerfTest.saveDataFile
PerfTest.saveExprAsFile
PerfTest.setupCPUPeakFlopBenchmark
PerfTest.setupMemoryBandwidthBenchmark
PerfTest.testsetUpdate!
PerfTest.treeRun
PerfTest.unblockAndConcat
PerfTest.@auxiliary_metric
PerfTest.@define_eff_memory_throughput
PerfTest.@define_metric
PerfTest.@lpad
PerfTest.@on_perftest_exec
PerfTest.@on_perftest_ignore
PerfTest.@perftest
PerfTest.@perftest_config
PerfTest.@roofline
Documentation
PerfTest._treeRun
— MethodThis method gets a input julia expression, and a context register and executes a transformation of the input that converts a recipe script (input) into a fully-fledged testing suite (return value).
Arguments
input_expr
the recipe/source expression. (internally, a.k.a source code space)context
a register that will store information useful for the transformation over its run over the AST of the input
sourcePerfTest.autoflopExpressionParser
— MethodThis is one of the parser functions that expand any formula block for metric definition. This function will parse the :autoflop
symbol and substitute it with the flop count of the test target
sourcePerfTest.auxiliarMetricPrint
— MethodThis function is used to dump metric information regading auxiliar metrics, which are not used in testing.
sourcePerfTest.buildPrimitiveMetrics
— MethodThis function generates the code that make primitive metrics values available to all metodologies and custom metrics.
sourcePerfTest.by_index
— MethodThis method expects a hierarchy tree (dict
) in the form of nested dictionaries and a vector of dictionary keys idx
. The function will recursively apply the keys to get to a final element.
It is usually put to work with the DepthRecord
struct.
sourcePerfTest.checkAuxiliaryCustomMetrics
— MethodThis method is used to generate the code that computes the value of every auxiliary custom metric enabled in th current context, where the code is generated.
sourcePerfTest.checkAuxiliaryMetric
— MethodThis method is used to generate the code that computes the value of a given auxiliary custom metric in the context the code is generated.
sourcePerfTest.checkCustomMetric
— MethodThis function is used to generate the code that evaluates if a custom metric result f a target is within a specified reference.
WARNING
Predefined symbols needed before this code is added to the generated space:
reference_value
metric_results
sourcePerfTest.checkCustomMetrics
— MethodThis function is used to generate the code that evaluates if a custom metric result f a target is within a specified reference.
WARNING
Predefined symbols needed before this code is added to the generated space:
reference_value
metric_results
sourcePerfTest.checkMedianTime
— MethodThis function is used to generate the code that evaluates if the median time of execution of a target is within a specified reference.
WARNING
Predefined symbols needed before this code is added to the generated space:
reference_value
metric_results
sourcePerfTest.checkMinTime
— MethodThis function is used to generate the code that evaluates if the minimum time of execution of a target is within a specified reference.
WARNING
Predefined symbols needed before this code is added to the generated space:
reference_value
metric_results
sourcePerfTest.configFallBack
— MethodA little automatism to jump to defaults if the configuration provided is absent Kind can be: :regression
sourcePerfTest.customMetricExpressionParser
— MethodThis is one of the parser functions that expand any formula block for metric definition. This function will parse all primitive metric symbols with the structure where the corresponding value of the metric is.
sourcePerfTest.customMetricReferenceExpressionParser
— MethodThis is one of the parser functions that expand any formula block for metric definition. This function will parse all primitive metric symbols with the structure where the corresponding reference value for the metric is.
sourcePerfTest.extractMethodologyResultArray
— MethodThis method will return a flattened array of all of the results for all the methodologies exercised in the provided dictionary.
Example:
"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result "Test Set 2" -> "Test 1" -> Methodology A result Returns: M. A result (Test 1) M. B result (Test 1) M. A result (Test 2)
sourcePerfTest.extractNamesResultArray
— MethodThis method will return a flattened array of the whole test result hierarchy.
Example
Example:
"Test Set 1" -> "Test 1" -> Methodology A result -> Methodology B result "Test Set 2" -> "Test 1" -> Methodology A result Returns: "Test Set 1 -> Test 1 -> Methodology A" "Test Set 1 -> Test 1 -> Methodology B" "Test Set 2 -> Test 1 -> Methodology A"
sourcePerfTest.flattenedInterpolation
— MethodThis method interpolates the inside_expr
into outside_expr
anywhere it finds the token substitution_token
, which is a symbol. The outside_expr
has to be a block or a quote block. It has the particularity that it will remove block heads from the inside_expr
and add the nested elements onto the location where the token it.
Example:
outside_expr = :(:A; 4)
inside_expr = :(begin 2;3 end)
substitution_token = :A
returns = :(2;3;4)
sourcePerfTest.fullParsingSuite
— MethodThis function combines a collection of rules to turn a formula block into a functioning expression to calculate any metric defined by said formula
sourcePerfTest.genTestName!
— MethodFunction that generates a test name if needed, it is used to name test targets to distinguish them if several go in the same testset.
sourcePerfTest.getMetricValue
— MethodGiven a series of methodology results, the the raw values of all the metrics contained in the methodology results.
sourcePerfTest.getNumber
— MethodFrom a string (field
), it will parse the first number it finds as a Float
sourcePerfTest.grepOutput
— MethodFrom a string, it will divide it by lines and retrieve the ones that match the regular expression provided.
sourcePerfTest.grepOutputXGetNumber
— MethodGiven a string output
, it will retrieve the first number in the first line that contains the string string
.
sourcePerfTest.iteratorExpressionParser
— MethodThis is one of the parser functions that expand any formula block for metric definition. This function will parse the :iterator
symbol and substitute it with the current value of the innermost test set loop of the current test target execution
sourcePerfTest.loadFileAsExpr
— MethodUtility to get an expression from a Julia file stored at path
sourcePerfTest.metaGet
— MethodRuns over an array of expressions trying to match the desired one; if no match is found it returns "Nothing".
"sym" should follow the MacroTools nomenclature for the @capture macro
sourcePerfTest.metaGetString
— Method
sourcePerfTest.onCustomMetricDefinition
— MethodThis function is called to register a custom metric; it parses the arguments of the definition macro and adds the metric to the context, to be used later in test targets in the same scope.
sourcePerfTest.onMemoryThroughputDefinition
— MethodThis function is used to register a special custom metric, the effective memory throughput calculation. It is registered in the same way as any other metric, but with a special flag that the EMT methodology will use to retrieve and use the metric.
sourcePerfTest.openDataFile
— MethodThis method is used to get historical data of a performance test suite from a save file located in path
.
sourcePerfTest.p_blue
— MethodPrints the element in color blue
sourcePerfTest.p_green
— MethodPrints the element in color green
sourcePerfTest.p_red
— MethodPrints the element in color red
sourcePerfTest.p_yellow
— MethodPrints the element in color yellow
sourcePerfTest.perftestConfigEnter
— MethodFunction to trigger the configuration mode on the context register
sourcePerfTest.perftestConfigExit
— MethodFunction to deactivate the configuration mode on the context register
sourcePerfTest.popQuoteBlocks
— MethodUseful to correct operations limited by the tree walking. It will remove quote blocks inside the main block (without recursion) and push their expressions into the main block.
sourcePerfTest.printDepth!
— MethodThis method is used to print the test names, taking the hierarchy into account and adding indentation whenever necessary.
sourcePerfTest.printIntervalLanding
— FunctionThis method is used to print a graphical representation of a test result and the admissible interval it can take. The result and the two bounds will be printed in order.
sourcePerfTest.printMethodology
— MethodThis function is used to print the information of a methodology for a specific test execution result. This will usually print a series of metrics and might also print plots.
sourcePerfTest.printMetric
— MethodThis method is used to dump into the output the information about a metric and the value obtained in a specific test.
sourcePerfTest.printedOutputExpressionParser
— MethodThis is one of the parser functions that expand any formula block for metric definition. This function will parse the :printed_output
symbol and substitute it with the standard output of the test target execution
sourcePerfTest.printfail
— MethodThis method dumps into the output a test result in case of failure. The output will be formatted to make it easy to read.
sourcePerfTest.removeBlock
— MethodPops expr
which has a head that is :block or :quote, and returns an array of the nested expressions that are the arguments of that head.
sourcePerfTest.retvalExpressionParser
— MethodThis is one of the parser functions that expand any formula block for metric definition. This function will parse the appropriate symbol and substitute it with the return value of the test target execution.
sourcePerfTest.rooflineMacroParse
— MethodParses roofline user request and sets up data for roofline computation.
sourcePerfTest.ruleSet
— MethodThis method builds what is known as a rule set: a function that evaluates whether an expression triggers a rule in the set and, if so, applies the rule's modifier. See the ASTRule documentation for more information.
WARNING: the rule set will apply the FIRST rule that matches the expression; any other matches will be ignored.
Arguments
context
the context structure of the tree run; it will occasionally be used by some rules in the set.
rules
the collection of rules that will belong to the resulting set.
sourcePerfTest.saveDataFile
— MethodThis method is used to save historical data of a performance test suite to a save file located in path
.
sourcePerfTest.saveExprAsFile
— FunctionUtility to save an expression (expr
) to a Julia file stored at path
Requires a :toplevel symbol to be the head of the expression.
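A hedged round-trip sketch of the two file utilities. The file names are illustrative, and the argument order of `saveExprAsFile` is an assumption, not confirmed by this reference:

```julia
using PerfTest

# Load a script as a :toplevel-headed expression, then write it back out.
# Argument order of saveExprAsFile is assumed (expression first, path second).
ex = PerfTest.loadFileAsExpr("recipe.jl")
PerfTest.saveExprAsFile(ex, "recipe_copy.jl")
```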
sourcePerfTest.setupCPUPeakFlopBenchmark
— MethodThis method is used to generate the code responsible for sampling the maximum CPU FLOPS based on the available threads in every resulting suite.
sourcePerfTest.setupMemoryBandwidthBenchmark
— MethodThis method is used to generate the code responsible for sampling the maximum memory bandwidth in every resulting suite.
sourcePerfTest.testsetUpdate!
— MethodFunction used to register a new test set in the hierarchy record of the context, where name
is the name of the test set.
sourcePerfTest.treeRun
— MethodThis method implements the transformation that converts a recipe script into a fully-fledged testing suite. The function will return a Julia expression with the resulting performance testing suite. This can be then executed or saved in a file for later usage.
Arguments
path
the path of the script to be transformed.
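A minimal usage sketch, assuming the package is loaded; the script path is illustrative:

```julia
using PerfTest

# Transform a recipe script into a performance-testing suite expression.
suite = PerfTest.treeRun("benchmarks/recipe.jl")  # illustrative path

# The resulting expression can be executed directly...
eval(suite)
# ...or saved to a file for later usage (e.g. with saveExprAsFile).
```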
sourcePerfTest.unblockAndConcat
— MethodThis function is useful to move expressions to the toplevel when they are enclosed inside a block
sourcePerfTest.@auxiliary_metric
— MacroDefines a custom metric for informational purposes that will not be used for testing but will be printed as output.
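A hedged sketch, assuming @auxiliary_metric takes a name, units and a formula block in the same shape as @define_metric (this shape is an assumption):

```julia
# Informational metric: reported in the output, never used for testing.
# Counts the lines the target printed, via the :printed_output symbol.
@auxiliary_metric "Printed lines" "lines" begin
    length(split(:printed_output, '\n'))
end
```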
sourcePerfTest.@define_eff_memory_throughput
— MacroThis macro is used to define the memory bandwidth of a target in order to execute the effective memory throughput methodology.
Arguments
- formula block : an expression that returns a single value, which is the metric value. The formula can contain any Julia expression and additionally supports some special symbols. The formula may be evaluated several times: it is applied to every target in every test set, or, if it is defined inside a test set, only to the targets of that set.
Special symbols:
:median_time
: will be substituted by the median time the target took to execute in the benchmark.
:minimum_time
: will be substituted by the minimum time the target took to execute in the benchmark.
:ret_value
: will be substituted by the return value of the target.
:autoflop
: will be substituted by the FLOP count of the target.
:printed_output
: will be substituted by the standard output stream of the target.
:iterator
: will be substituted by the current iterator value in a loop test set.
Example:
The following definition assumes that each execution of the target expression involves transacting 1000 bytes. Therefore the bandwidth is 1000 / execution time.
@define_eff_memory_throughput begin
1000 / :median_time
end
sourcePerfTest.@define_metric
— MacroThis macro is used to define a new custom metric.
Arguments
name
: the name of the metric for identification purposes.
units
: the unit space that the metric values will be in.
- formula block : an expression that returns a single value, which is the metric value. The formula can contain any Julia expression and additionally supports some special symbols. The formula may be evaluated several times: it is applied to every target in every test set, or, if it is defined inside a test set, only to the targets of that set.
Special symbols:
:median_time
: will be substituted by the median time the target took to execute in the benchmark.
:minimum_time
: will be substituted by the minimum time the target took to execute in the benchmark.
:ret_value
: will be substituted by the return value of the target.
:autoflop
: will be substituted by the FLOP count of the target.
:printed_output
: will be substituted by the standard output stream of the target.
:iterator
: will be substituted by the current iterator value in a loop test set.
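A hedged example of defining a metric, assuming the name and units are passed as the first two macro arguments before the formula block:

```julia
# Custom throughput metric derived from the benchmark's median time.
# The element count (10^6 per execution) is an illustrative assumption.
@define_metric "Element throughput" "elements/s" begin
    1.0e6 / :median_time
end
```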
sourcePerfTest.@lpad
— MacroMacro that adds a space at the beginning of a string
sourcePerfTest.@on_perftest_exec
— MacroThe expression given to this macro will only be executed in the generated suite, and will be deleted if the source code is executed as is.
sourcePerfTest.@on_perftest_ignore
— MacroThe expression given to this macro will only be executed in the source code, and will be deleted in the generated performance test suite.
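A small sketch combining the two macros; the printed messages are illustrative:

```julia
# Kept only in the generated performance-test suite:
@on_perftest_exec println("running inside the generated suite")

# Kept only when the source script is executed as-is:
@on_perftest_ignore println("running as a plain script")
```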
sourcePerfTest.@perftest
— MacroThis macro is used to signal that the wrapped expression is a performance test target, and therefore its performance will be sampled and then evaluated following the current suite configuration.
If the macro is evaluated it does not modify the target at all. The effects of the macro only show when the script is transformed into a performance testing suite.
This macro is sensitive to context since other adjacent macros can change how the target will be evaluated.
Arguments
- The target expression
Example
@perftest 2 + 3
sourcePerfTest.@perftest_config
— MacroPerftest_config macro, used to set customised configuration on the suite generated by the source script.
Configuration inside this macro must follow the syntax below:
@perftest_config
key = value
key.subkey = value
end
Where key can be any configuration parameter; in some cases a parameter will consist of a set of subparameters, denoted by the "." used to refer to them.
sourcePerfTest.@roofline
— MacroThis macro enables roofline modelling, if put just before a target declaration (@perftest
) it will proceed to evaluate it using a roofline model.
Mandatory arguments
- formula block: the macro has to wrap a block that holds a formula to obtain the operational intensity of target algorithms.
Optional arguments
cpu_peak
: a manual input value for the maximum attainable FLOPS; this will override the empirical runtime benchmark.
membw_peak
: a manual input value for the maximum memory bandwidth; this will override the empirical runtime benchmark.
target_opint
: a desired operational intensity for the target; this will turn operational intensity into a test metric.
actual_flops
: another formula that defines the actual performance of the test.
target_ratio
: the acceptable ratio between the actual performance and the projected performance from the roofline; this will turn actual performance into a test metric.
Special symbols:
:median_time
: will be substituted by the median time the target took to execute in the benchmark.
:minimum_time
: will be substituted by the minimum time the target took to execute in the benchmark.
:ret_value
: will be substituted by the return value of the target.
:autoflop
: will be substituted by the FLOP count of the target.
:printed_output
: will be substituted by the standard output stream of the target.
:iterator
: will be substituted by the current iterator value in a loop test set.
Any formula block specified in this macro supports these symbols.
Example
@roofline actual_flops=:autoflop target_ratio=0.05 begin
mem = ((:iterator + 1) * :iterator)
:autoflop / mem
end
The code block defines operational intensity, whilst the other arguments define how to measure and compare the actual performance with the roofline performance. If the actual to projected performance ratio goes below the target, the test fails.
source
Settings
This document was generated with Documenter.jl version 1.6.0 on Tuesday 3 September 2024. Using Julia version 1.10.5.