Initial implementation of liveness analysis #275
base: relax
Conversation
Co-Authored-By: Yuchen Jin <[email protected]>
Co-authored-by: ZihengJiang <[email protected]>
* Implementation of call_dps. * Implementation of PackedFuncExpr. * Test CallDPS for TIR function. * Rename. * Add header and comments. * Update. * Address comments.
* Update AST. * ShapeOf. * ShapeOf. * Address comment.
* Add initial IRBuilder. * Add function output to irbuilder; update based on new AST. * Add call method; clean up bindings * Add test. * Add multifunction test * Move implementation to C++; infer shape and type * update op python hook * More tests and bug fix * Add comments. * Update shape/type inference. * Restructure code; add python type hint. * Cleanup code. * Rebase; address comments. * Add call intrinsic. * nits. * Remove call op. * Migrate scope to C++ using tvm::With. * Address naming. * Add GetBlocks API. * Unify EmitOutput APIs; add more comments. * Remove shape and type deduction code. * Also remove the shape/type attr interface. * Address comments. * Differentiate global and local function. * Reset counter after building func/block. * Rebase. * Remove shape infer builtin. * Return from void function as empty tuple. Co-authored-by: Michalis Papadimitriou <[email protected]>
* Copy jared's frontend * Remove some extraneous code + add TODOs * Skeleton AST * Added more skeleton AST, worked on parsing shape annotations. Something is wrong with span_to_span * Fix spans * Type annotations parsing correctly * some match_shape support * More bug fixes! Some stuff parses. Importing into tests is messed up. We probably need to restructure this code as well. * refactor parser and fill out more stubs * some parser tests * yolo dataflow * checkpoint for rebase * hook up AST * add inline TIR parsing * some cleanup * support call_packed parsing to ExternFunc call * remove stub ops * improve docstrings * address nits * support coercing tuples to ShapeExpr when possible for call_dps Co-authored-by: electriclilies <[email protected]>
* Shape and type deduction. * Fix header. * Add call attrs to the deduce signature. * Address comments. * Add DiagnosticContext to IRBuilder and inference signature. * Fix nits.
* Relax pretty printer initial prototype * call into TVMScriptPrinter for PrimFuncs * most round-trip tests pass * address comments * fix typo
…c-pack#9) * Relax pretty printer initial prototype * call into TVMScriptPrinter for PrimFuncs * most round-trip tests pass * address comments * implement relax.output syntax for dataflow block outputs * remove leftover comments * fix Var constructor on ShapeExpr annotation * fix DataflowVar as well
* Update MatchShape AST Node. * Update. * Update.
* Relax pretty printer initial prototype * call into TVMScriptPrinter for PrimFuncs * most round-trip tests pass * address comments * implement relax.output syntax for dataflow block outputs * remove leftover comments * fix Var constructor on ShapeExpr annotation * add printing and parsing for simple PrimExpr and Call Attrs
* ExprVisitor/ExprMutator for relax nodes. * Update Visitor & Mutator. * Update Mutator. * DataflowMutator interface. * EwiseFMARewriter. * Update fma rewrite and add test. * Update test. * Fix dataflow block dispatching. * Construct new dataflow block with IRBuilder. * VisitBinding return void and mutate internal IRBuilder. * Simplify. * Update emit dataflow output. * Explicit memory allocation rewrite. * LazyIRBuilder. * Update ExplicitMemMutator. * Overload IRBuilder::Emit to have 3 styles. * Update IRBuilder/IRMutator interfaces and passes. * Add MatchShape binding to IRBuilder. * Improve IRMutator interface; add Normalize and CanProveShapeEqual to IRBuilder * Update EmitMatchShape. Co-authored-by: ZihengJiang <[email protected]>
…ort for call_dps (tlc-pack#15) * update parser and printer for match_shape * support parsing class to IRModule, and extern func in call_dps
…rint IRModule PrimFuncs (tlc-pack#17)
* [PASS] Shape lowering. * Update to IRModule based. * TIR function generation. * Improve. * Improve. * Improve test. * Improve. * Address comment.
…lc-pack#19) * relax call_packed arity, return IRModule factory, print IRModule PrimFuncs * explicitly parse and print attrs_type_key on calls * print type even when attrs has no fields
* VM compiler. * Update. * Compile IRModule; expose Python API * Add dtype constant serialization and type hint. * Address comments. * Add todos and fix lint. * Update * Update.
* init * update * update * test case working * update and add multi block test case * check in * fixes * fix * update * add * update * add * update * address comments. Co-authored-by: Altan Haan <[email protected]>
…ble (tlc-pack#21) * rebase. * Update. * Update shape lowering, make sure the lowering pipeline works. * Address comment.
* call_dps lowering. * Improve shape lowering. * Support alloc_storage for dynamic shape. * implement ToNonDF to transform the program to non-dataflow format. * Fix the mutator issue. * Update build api, an issue occurred. * vm tests can pass. * Support shape tuple in executable serialization. * Fix for test. * Minor fixes. * Address comments. * Add mutate binding var back. * Visit binding var and fix tests. Co-authored-by: YuchenJin <[email protected]>
* Return Instruction::Arg for each CodeGenLLVM::VisitExpr_. * Change VMCompiler to be an Object from ModuleNode. * Introduce intrinsics and attrs. * Generic handling of attribute codegen. * Do to-non-dataflow transform in call_dps_rewrite. * Back to special attr handling. * Address comments. * Standalone to_non_dataflow pass; more tests. * Rename decode/make shape to store/load shape. * Update. * Fix namespace, add comments. * rebase * Rename files. * nit
* fixes * revert checked_type visitor and fix relax usage * ExprNormalizer * fix that annoying bug and get tests passing * Memoization fix for the ExprMutator; separate VisitVarDef from use. * rebase. * rebase. * address part of comments. * address more comments * address more comments and add doc * address more comments * fix potential mutation bug * always assign normalized shape if can * address comments Co-authored-by: Altan Haan <[email protected]>
…ck#219) In the current codebase, kNumArgs is a runtime-dependent variable (i.e., its value depends on the input shape of the Array). Allocating arrays with runtime-dependent sizes is not allowed when building on Windows (I'm surprised it can be compiled on Linux and macOS).
Add decorators `visitor` and `mutator` to help users create `ExprVisitor` and `ExprMutator` in Python. Users can customize the visit/rewrite/post-order-rewrite functions in Python. `PyExprVisitor` and `PyExprMutator` list the functions users can customize.
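A hedged usage sketch of the visitor side (the import path `tvm.relax.expr_functor` and the `visit_var_` method name are assumptions based on the description above and may differ in this revision of the codebase):

```python
from tvm import relax
from tvm.relax.expr_functor import PyExprVisitor, visitor


@visitor
class VarCounter(PyExprVisitor):
    """Counts Var nodes; only visit_var_ is customized, everything else
    falls back to the default traversal."""

    def __init__(self) -> None:
        super().__init__()
        self.count = 0

    def visit_var_(self, op: relax.Var) -> None:
        self.count += 1


# counter = VarCounter()
# counter.visit_expr(some_relax_expr)
# print(counter.count)
```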
* Fixes a bug to use the uploaded file's remote path for loading the module remotely.
* Modifies the task_python_hexagon.sh script to only run passing tests on device. This is used by Jenkins CI.
It may be useful for some passes to collapse chains of definitions, particularly after other compiler transformations that may reduce or simplify some expressions. This pass takes chains of definitions and replaces references to later definitions with the original one. It works by checking `LookupBinding` for each var use-site and replacing the var with its definition if the definition was another var. (Note: this required updating `BlockBuilder` to also update its binding map for `MatchShape` nodes; that was arguably a bug.) Additionally, `MatchShape` bindings where the `LHS` and the `RHS` are guaranteed to match at compile time are canonicalized into ordinary `VarBinding`s.
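As an illustration only, here is a standalone toy model in plain Python (not the actual Relax pass) of what collapsing a definition chain means: every use of a var is resolved back through trivial `var = other_var` bindings to the original definition.

```python
def canonicalize(bindings):
    """bindings: ordered (var, value) pairs; a value that is a bare string is a
    trivial 'var = other_var' binding, otherwise it is (op, [arg_vars])."""
    alias = {}  # var -> the original var at the head of its definition chain

    def resolve(v):
        while v in alias:
            v = alias[v]
        return v

    out = []
    for var, value in bindings:
        if isinstance(value, str):  # trivial binding: var = other var
            alias[var] = resolve(value)
            out.append((var, alias[var]))
        else:
            op, args = value
            out.append((var, (op, [resolve(a) for a in args])))
    return out


# y = x; z = y; w = add(z, z)   ==>   w = add(x, x)
print(canonicalize([("y", "x"), ("z", "y"), ("w", ("add", ["z", "z"]))]))
```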
Fix an incorrect check which disables emitting global MatchShape outside a dataflow block and mistakenly enables emitting dataflow MatchShape outside a dataflow block.
…-pack#247) This PR makes some small additions to the end-to-end AutoTIR script, namely eliminating a bug (it was incorrectly using the stateful API) and adding an option to save the test results as a CSV file for benchmarking purposes (the data can then be separately analyzed as needed). These changes also required a small extension to the save_function method in the VM, namely allowing it to take keyword arguments.
Attempting to use `dump_ast` on functions containing the operators `relax.unique` and `relax.print` previously crashed due to being unable to query their attributes' keys. It turned out that this was a problem with the operator attributes: They had not been registered on the Python side, so Python representation treated them as opaque TVM objects. This PR corrects this mistake.
…ck#254) This small PR changes a check in the tvmscript parser to support empty shape tuples which are used to represent scalars. I added a scalar addition test to make sure it works properly.
…ack#257) It was observed that closures saved using `save_function` would crash when used over RPC with the `time_evaluator`, whereas using `set_input` and `invoke_stateful` worked as normal. While I am not entirely sure why these failures happened over RPC only in `time_evaluator` (but not in other RPC trials), it became clear that `set_input` performs a conversion of input tensor values in `SetInputTensorWithIndex`, while `save_function` was not doing this. Adding this conversion fixed the observed bug.
This PR adds a `ret_shape` field for specifying the shape of the function's return value. At present, we will not use this information, but by adding it into the AST, we will be able to parse the return shape and use it in the future. Parser V1 in this PR will just always list the `ret_shape` as `RuntimeDepShape`.
Previously, analyses to gather all variables, free variables, bound variables, all global variables, and all global variables that are called had been implemented in C++ but had not been exposed in Python or tested. This PR exposes these analyses and adds tests for them (a usage sketch follows below). Two further changes:
* The analyses previously ignored variables bound in `MatchShape` nodes; these are now treated as bindings too.
* `rec_global_vars` is renamed `called_global_vars`, since the analysis itself does not check recursion.
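A hedged usage sketch of the exposed analyses (the `tvm.relax.analysis` module path and the exact function names are assumed from the description above):

```python
from tvm.relax.analysis import (
    all_vars,
    bound_vars,
    free_vars,
    all_global_vars,
    called_global_vars,
)


def summarize(func):
    # each analysis returns a list of Var / GlobalVar nodes for the function
    print("all vars:          ", [v.name_hint for v in all_vars(func)])
    print("bound vars:        ", [v.name_hint for v in bound_vars(func)])
    print("free vars:         ", [v.name_hint for v in free_vars(func)])
    print("global vars:       ", [gv.name_hint for gv in all_global_vars(func)])
    print("called global vars:", [gv.name_hint for gv in called_global_vars(func)])
```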
* Support Function and If in Normalize pass. * Use structural equality for expr_memo_. * Change back to pointer equality for expr_memo_; Add more tests. * rebase.
It was brought up that Relay lacks an assert operator, so we may as well have one in Relax for debugging. One issue is that we can't name it "`assert`" because Python will treat it as a syntax error to have it as a field name for the "`relax`" module, i.e., `relax.assert` is a syntax error. Thus the op is named "`assert_op`," which is not ideal but serves its purpose.
[TVMScript] B4: If branch support (tlc-pack#263)
B8: Local Function Support (tlc-pack#258)
[TVMScript] B3: Type annotation checks (tlc-pack#256)
[TVMScript][Parser] B1: Dataflow block (tlc-pack#252)
[TVMScript] B2: match shape support (tlc-pack#251)
[TVMScript] B6/B7: Symbolic shape and var shadowing (tlc-pack#245)
[TVMScript] B5: Support relax op (tlc-pack#244)
[TVMScript] B0: Call_tir support (tlc-pack#243)
enhance parser error reporting (tlc-pack#242)
[TVMScript] A1: Relax Parser infra (tlc-pack#240)
update ci image versions. (tlc-pack#241)
[TVMScript] B2-4: TIR IRBuilder (tlc-pack#239)
[TVMScript] A0: Relax IRBuilder infra (tlc-pack#235)
[TVMScript] B5-6: TIR IRBuilder (tlc-pack#231)
[TVMScript] B1: IRBuilder (tlc-pack#228)
[TVMScript] New Parser: Part C (tlc-pack#218)
[TVMScript] New Parser: Part A (tlc-pack#221)
[TVMScript] New Parser: Part B (tlc-pack#217)

Not recovered:
[Pass] Separate ApplyHistoryBest from tuning passes (tlc-pack#226)
[Bugfix] Couple of bug fixes to run TVM-gen code together with BYOC (tlc-pack#249)

co-authored-by: Yuchen Jin <[email protected]>
co-authored-by: Siyuan Feng <[email protected]>
co-authored-by: Ruihang Lai <[email protected]>
Co-authored-by: YuchenJin <[email protected]>
This commit changes the behavior of the parser to allow type annotations, as suggested by the community.

The current behavior:
- Use the more refined type/shape between the user-annotated and the deduced type/shape.

The updated behavior:
- Always use user annotations.
- Only check if the type/shape is valid.
Introduce the memory primitives, including `relax.memory.{alloc_storage, alloc_tensor, kill_storage, kill_tensor}`.
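As a standalone toy model in plain Python (not TVM code), the sketch below illustrates the design rationale for splitting allocation into storage vs. tensor with explicit kills: once a tensor is killed, its storage can back a later tensor with a disjoint live range.

```python
class Storage:
    def __init__(self, nbytes):
        self.nbytes = nbytes
        self.in_use = False


class Pool:
    def __init__(self):
        self.storages = []

    def alloc_storage(self, nbytes):
        # reuse a free storage that is large enough, else allocate a new one
        for s in self.storages:
            if not s.in_use and s.nbytes >= nbytes:
                return s
        s = Storage(nbytes)
        self.storages.append(s)
        return s

    def alloc_tensor(self, storage):
        storage.in_use = True
        return {"storage": storage}

    def kill_tensor(self, tensor):
        tensor["storage"].in_use = False


pool = Pool()
s0 = pool.alloc_storage(1024)
t0 = pool.alloc_tensor(s0)
pool.kill_tensor(t0)          # t0's live range ends here
s1 = pool.alloc_storage(512)  # reuses s0 instead of allocating new memory
assert s1 is s0
```

Presumably the real `kill_storage` additionally releases the backing buffer once no tensor will ever use it again; the model above only covers reuse.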
FYI, I think you should be checking for calls to […] Also, ordinary […]

Yes, aliasing is a headache 😅
Note that some of the alias dependencies are managed by ref counting. So if the purpose is to insert the kill op (which decrefs and does not do a direct memory free), it is fine to ignore aliases in such an analysis. The catch is that such an analysis result can then only be used to insert the early kill point. For other cases such as memory sharing, we should make sure that allocated tensors do not alias; in this case, restricting things to a special set of ops makes sense.
Yes, some values cannot be tracked statically and should be refcounted or otherwise garbage-collected. If we are going to have an analysis to track when values can be statically deallocated, we would need to be very certain that other references to them don't exist.
Force-pushed from 1f84c7b to 3cbe967
This PR is an initial implementation of liveness analysis for Relax programs and is based on the `manifest_lifetimes` pass in the Relay VM. It should handle nested control flow in the form of `if`/`else` statements. At the moment, it does not support:
The limitations are also mentioned here
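For readers unfamiliar with the technique, below is a minimal, standalone sketch in plain Python (not the actual pass, and without the `if`/`else` handling mentioned above) of the backward walk a liveness analysis performs over a block of straight-line bindings:

```python
def liveness(bindings, outputs):
    """bindings: list of (var, used_vars) pairs in program order.
    outputs: variables that are live at the end of the block.
    Returns {var: index of the binding at which it is last used}
    (len(bindings) means the var is still live at the end)."""
    live = set(outputs)
    last_use = {v: len(bindings) for v in outputs}
    # walk backwards: a var is live from its definition to its last use
    for i in range(len(bindings) - 1, -1, -1):
        var, used = bindings[i]
        live.discard(var)  # defined here, so it is dead above this point
        for u in used:
            if u not in live:
                live.add(u)
                last_use.setdefault(u, i)  # first sighting going backwards = last use
    return last_use


# y = f(x); z = g(x, y); w = h(z), with w live at the end:
# x and y are last used at binding 1, z at binding 2, w lives past the end.
print(liveness([("y", ["x"]), ("z", ["x", "y"]), ("w", ["z"])], outputs=["w"]))
```

In a pass like this, the last-use information would then drive where kill operations are inserted, subject to the aliasing caveats discussed in the review comments above.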