Macros: Add a concept of "macro libraries" #1831
Conversation
> generated code.
>
> TODO: Can we come up with a better testing pattern? Possibly something where
> the tests are actually run at compile time, as a macro?
I'm not sure what you mean by compile-time. Do you mean the time at which the macro is being loaded into the VM, which presumably happens when loading code that uses the macro?
Would this include the analyzer's execution of macros?
Essentially I was thinking you could maybe run a "test" where the entire test is actually just a macro (so the macro APIs are available). This would mean the analyzer would also run these just like any other macros, and so your "tests" would actually run as part of the analyzer, which might be kind of cool actually.
But I would not be asking the analyzer to implement a testing framework or anything like that, just execute macros normally.
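To make the idea a bit more concrete, here is a minimal, self-contained sketch of a "test as a macro". The `Macro` and `MacroContext` interfaces below are hypothetical stand-ins (not the real macro API); the point is only that the test body runs wherever macros run, including during analysis, with the macro APIs in scope.

```dart
/// Hypothetical context handed to a macro when it is applied.
abstract class MacroContext {
  /// Reports a diagnostic attached to the macro application.
  void report(String message);
}

/// Hypothetical base interface for macros.
abstract class Macro {
  void apply(MacroContext context);
}

/// Stands in for the real codegen logic being tested.
String generateGetter(String name) => 'String get $name => _$name;';

/// A "test" written as a macro: its assertions run at macro expansion time.
class GetterMacroTest implements Macro {
  @override
  void apply(MacroContext context) {
    final generated = generateGetter('foo');
    if (generated != 'String get foo => _foo;') {
      context.report('unexpected generated code: $generated');
    }
  }
}
```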
> tests are actually run at compile time, as a macro?
>
> TODO: Can we expose some sort of golden tests functionality to match the
> expected output?
Does "expected output" mean the content of the file that would be generated by the analyzer, VM, or any other tool that requires a textual representation of the generated code?
Does the expected output include the imports of macro libraries and macro applications?
In general I made these TODOs because I don't have the answers to these questions. Just future avenues to explore.
I don't have any specific ideas on how this could/should work right now; I just know people sometimes like to use golden files for testing codegen output today.
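For reference, a conventional golden-file test for generated output might look roughly like this. This is a sketch only: `expandMacroToSource` and the golden path are made up, since no real API for getting the generated text as a string exists in this proposal.

```dart
import 'dart:io';

import 'package:test/test.dart';

// Hypothetical: whatever API eventually exposes a macro's generated code as text.
String expandMacroToSource() => 'class FooData {\n  // generated\n}\n';

void main() {
  test('macro output matches the golden file', () {
    final actual = expandMacroToSource();
    final golden = File('test/goldens/foo_data.dart.golden');

    // Run with UPDATE_GOLDENS=true to regenerate the golden instead of failing.
    if (Platform.environment['UPDATE_GOLDENS'] == 'true') {
      golden.writeAsStringSync(actual);
      return;
    }
    expect(actual, golden.readAsStringSync());
  });
}
```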
> libraries should not be available, nor any stubs of them. The imports should
> be entirely removed.
>
> In order to support this, we introduce a new type of library - a "macro"
Do we have evidence that this feature is actually needed? Do we know that we need to make users explicitly decide which libraries should be macro libraries and which shouldn't (and also lose the ability to test their macros!) just to ensure that the compiler does what it should be able to do automatically?
This feels like a lot of user effort for almost no value.
If we do want to go in this direction, then I think we shouldn't have macro libraries; we should have macro imports. Whether some code is compiled into the final program is a property of how it's used, not how it's declared. We could say that you can mark any import as (maybe) `static`:
static import 'package:macro_stuff/macro_stuff.dart';
The behavior then is:
- It is a compile-time error to refer to any identifier imported from a `static` import outside of a macro application expression.
This would let you then test your macros by having the test code import the macro normally.
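As a concrete (and entirely hypothetical) sketch of that idea - `static import` is not valid Dart today, and `macro_stuff` / `DataClass` are made-up names used only for illustration:

```dart
// Proposed, hypothetical syntax: identifiers from a `static` import may only
// appear inside macro application expressions.
static import 'package:macro_stuff/macro_stuff.dart';

@DataClass() // OK: used in a macro application.
class User {
  final String name;
  User(this.name);
}

// final d = DataClass(); // Would be a compile-time error under the proposal:
//                        // `DataClass` used outside a macro application.
```

Test code, by contrast, would use a plain `import` of the same library, so the macro classes remain available to construct and exercise directly.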
That is another approach that I considered, for sure. The problem with it is that people are going to create libraries that contain both macros and other normal code, but some people won't use the macro at all. Then some pattern in the program will defeat the tree shaking of the macro, and we end up with portions of the macro libraries, and the macros themselves, shipped with real apps.
If we believe fully in tree shaking, then we don't have to do anything at all (including removing the import; it will all be tree shaken anyway!).
The mechanism I proposed here should be 100% foolproof, and won't have the same pitfalls. It also doesn't require most users to even know these types of libraries exist - only the authors of the libraries have to know.
I'm worried about intermingling macro code and run-time code too, so much so that I don't like importing the code of the macros at all.
Consider instead:
- We have `dart:macros`, which defines an abstract API and some base annotation classes.
- An annotation can extend those and specify a macro implementation by URL, along with data to pass to that code.
- The macro is a script which gets an actual implementation of the macro API as the second argument to `main`. We can define any API we want there, including allowing the script to stay alive and handle more macro expansion requests, or just shut down when it's done.
- The macros are then defined in macro libraries, which are perfectly normal libraries with a `main` method taking a `MacroContext` as the second argument. You can run them with mock macro implementations for testing.
Then my `package:data_classes/data_classes.dart` would expose a public annotation `DataClass(...data...)` which extends `MacroAnnotation("package:data_classes/generator.dart", ...data...)`. The macro processor will see that, spawn that script (along with an implementation of the macro API, maybe fully fledged or maybe a shell which communicates with the real processor using ports), and will give it a `MacroContext` containing `data` and anything else it needs. It generates code and inspects existing code by interacting with the context.
But no library is a special kind of library; it's all about which APIs it has access to, and who will invoke it.
And it keeps the macro implementation at arm's length from the code which uses it.
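A loose sketch of what that script shape could look like. Everything here is hypothetical: there is no `dart:macros` today, and `MacroContext` is just a stand-in for whatever contract the processor and annotation would define.

```dart
/// Stand-in for the API object the macro processor would pass to the script;
/// in the design above this would be defined by `dart:macros` / the processor.
abstract class MacroContext {
  /// Data carried by the annotation (e.g. the arguments to `DataClass(...)`).
  Object? get data;

  /// Sends generated code back to the processor.
  void emitCode(String code);
}

/// The macro is an ordinary script: the processor spawns it and passes the
/// macro API as the second argument to `main`. A test can instead call `main`
/// directly with a mock `MacroContext`.
void main(List<String> args, MacroContext context) {
  final fields = (context.data as List).cast<String>();
  final buffer = StringBuffer('class UserData {\n');
  for (final field in fields) {
    buffer.writeln('  final String $field;');
  }
  buffer.write('}\n');
  context.emitCode(buffer.toString());
}
```

The testing story then falls out for free: a unit test implements `MacroContext` in memory, calls `main`, and asserts on whatever `emitCode` received.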
It feels to me like that approach starts tying our hands to a specific implementation strategy, and it also opens up a fair bit of extra API without much benefit over this proposal. It does open up a strategy for testing, but does it solve much else?
The `MacroAnnotation` class and other APIs exposed through `dart:macros` would also still leak into the program in that case.
Yes...let's PLEASE be careful here.
I don't see how importing a library with macro builders and not using them is different from importing some other library and not using some of its parts. In both cases the result is that your compiled application is bigger than it could be. In both cases the solution is to avoid the (which?) patterns that prevent tree shaking from doing its work.
The difference is that we can make a clear distinction here about code that we know is only useful at compile time (excluding the test use case, though). That isn't true of other code, so it is different.
Also, I believe that the set of dependencies used by macro code and runtime code will be very different. Consider the set of `dev_dependencies` typically added to an app that uses build_runner today: that is quite often many more packages than are actually used at runtime (especially when you consider the transitive set).
It is true we don't have to take this approach, but I do think it would be a mistake not to enforce that these build-time dependencies get removed at runtime.
If we don't want to make a separate type of library to enforce this, then I would probably want to go with something like the proposal @lrhn suggested.
One advantage of that is it could simplify some of the questions around arguments to macros. We could treat them more as just serializable data, and that has its appeal for sure.
I think the question here then is (correct me if I'm wrong): can a macro library be a `dev_dependency`, or do macro libraries have to be regular `dependencies` because of the annotations? That all depends on how you specify whether macro annotations will be in the final code. Personally, I look at it from the perspective that annotations are all for static analysis / compile-time usage only and should all be removed from the final code. Because of `dart:mirrors` I understand that isn't the case currently, but if macros are supposed to replace `dart:mirrors`, I would assume that you would be able to specify, along with the new language version, that annotation reflection is removed as well, so that all annotations / annotation libraries can be `dev_dependencies` and are removed from the final generated code.
Macros have to be regular dependencies - unless we decided packages should ship with macros already run - but I don't think we want to do that.
But we do still want all the macro dependencies to be eliminated from the final application.
I'm still leaning quite strongly against this. It feels like a (fairly complex) solution in search of a problem. I think we should leave this problem unsolved until we know it's an actual problem.
This is directly targeted at solving a problem - accidental leakage of macro-related code (in particular, helper libraries used within the macro) into the released application code. Released app size is something our users care about deeply. At a minimum this is a very real problem for modular builds that don't do tree shaking - but I am quite confident it will also cause bloat in release builds if not addressed. I don't think just saying "tree shaking will solve all our problems" is a reliable enough approach.
Closing this and will instead file an issue regarding code bloat/tree shaking.
I am re-opening this issue given the reflected imports proposal, as I think it solves an actual requirement for that proposal. Specifically, the proposal says this today:
We have no way of actually enforcing that requirement without something like this proposal, which can separate the "compile-time" imports from the "run-time" imports.
I also believe that reflected imports will cause a problem for unit testing macros anyway, so this proposal also not allowing that becomes a non-issue.
This introduces a new concept of a `macro` library, which gives us maximum predictability in terms of tree shaking macros themselves out of the final program. These can still be used just like normal libraries from a user perspective, and the extra boilerplate is just one keyword in the macro library itself.
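As a rough illustration only (the exact surface syntax, the placement of the `macro` keyword, and the `DataClass` example below are hypothetical, not the proposal's literal grammar), the intended shape is a library that opts in with a single keyword, while user code imports and applies it like any other library:

```dart
// data_class_macro.dart (hypothetical): one keyword marks the whole library
// as compile-time only, so none of it ships in the final program.
macro library data_class_macro;

class DataClass {
  const DataClass();
  // ... macro implementation and its helper imports live here ...
}
```

```dart
// User code (a separate file): a perfectly normal import and application.
import 'package:data_class_macro/data_class_macro.dart';

@DataClass()
class User {
  final String name;
  User(this.name);
}
```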