rsz: Try multiple repairs per iteration #6262
base: master
Conversation
force-pushed from 390c298 to 4504aab
clang-tidy made some suggestions
force-pushed from 4504aab to 430d571
clang-tidy review says "All clean, LGTM! 👍"
force-pushed from 430d571 to 439c31b
clang-tidy review says "All clean, LGTM! 👍"
force-pushed from 439c31b to e74e2f7
clang-tidy review says "All clean, LGTM! 👍"
@kbieganski If you are interested in
@maliberty ready for review again
@precisionmoon please review
Ok. Let me review the changes. @kbieganski, were you able to run ORFS tests? I see some small degradations in unit tests, so I'm curious what you see in other public PDK designs beyond asap7. |
@@ -309,6 +309,15 @@ bool Resizer::bufferBetweenPorts(Instance* buffer)
bool Resizer::removeBuffer(Instance* buffer,
                           bool honorDontTouchFixed,
                           bool recordJournal)
{
  if (canRemoveBuffer(buffer, honorDontTouchFixed)) {
    removeBuffer2(buffer, recordJournal);
It'd be good to rename removeBuffer2 to something like doRemoveBuffer or removeBufferCore
Yes, I plan on doing that, I wanted to get some general feedback first.
I also tested
I merged master to a copy of this branch and started a secure CI.
src/rsz/test/repair_fanout7.ok (Outdated)
worst slack -3.94
worst slack -4.44
Non-trivial degradation.
src/rsz/src/Resizer.cc (Outdated)
in_db_net->setDoNotTouch(false);
out_db_net->setDoNotTouch(false);
These should be restored when done to not lose user intent later in the flow.
src/rsz/src/RepairSetup.cc (Outdated)
@@ -537,7 +539,19 @@ bool RepairSetup::repairPath(PathRef& path,
        || (pair1.second == pair2.second && pair1.first > pair2.first);
  });
// Attack gates with largest load delays first.
int repairs_per_iter = std::round(std::max(-path_slack * 10e9f, 1.0f));
Is there an assumption that slack is in ns? This is dependent on the Liberty units and may vary by PDK.
It's just a constant that worked pretty well.
The final version was going to either make this factor configurable via a parameter, or have the number of repairs start at the maximum (10 here) for the first endpoint and decrease linearly to 1 for the last endpoint.
In the public CI:
The non-metrics failures should be investigated first.
This is an interesting approach. Besides the CI test failures from @maliberty that need review, I have some additional questions:
Do you mean one driver or one path? The idea is that you need multiple repairs for the worst path of the worst endpoint anyway, and then the number of repairs goes down when we get to paths with a better slack.
I'll try and get some measurements.
@maliberty Where can I see those failures? Or do you mean private CI? Can't see it in Jenkins.
That particular pipeline is private but the test cases are all in ORFS so you can run them yourself.
One driver. Currently, only one transform is tried per driver. But you will allow multiple ones.
Thank you.
I think I'm confused by the terminology. In the code, we iterate over driver vertices in a given path. Only one transform is allowed per each vertex because if it's successful, it goes to the next iteration immediately. Full flow results for
For
Also I think this is a mode that should only be enabled in floorplanning, as later on it doesn't seem to speed anything up.
@maliberty Is there a particular way I should run these? If I run e.g. make DESIGN_CONFIG=designs/sky130hd/microwatt/config.mk, congestion is ok:
(unless I'm misinterpreting it) |
You want the metadata target to check against the limits. Usually I use |
Signed-off-by: Krzysztof Bieganski <[email protected]>
force-pushed from e74e2f7 to 53bcae0
This makes RSZ try multiple fixes at once per iteration.
If the slack is worsened, fall back to one repair (old behavior).
Speeds up some specific designs significantly without really slowing anything else down.
The biggest gains I got were on a design that I can't share, unfortunately.
It follows a heuristic where the number of repairs is proportional to the currently repaired path's slack, with a cap of 10 repairs.
Probably not the best approach possible, but I tried multiple strategies and this one is the best so far.
Note: The code needs cleaning up, but I'll do that after I get some feedback.
asap7/<secret design 1>, floorplan
asap7/<secret design 2>, floorplan
asap7/aes, floorplan
asap7/aes, floorplan
asap7/aes, floorplan
asap7/aes, floorplan
asap7/aes, floorplan