Benchmarking against IBM-MSwPins, Faraday, MMS benchmarks #27
Hi adyasaurabhn, we did consider these testcases when we started this project. One problem, however, is that Circuit Training relies on running physical-aware synthesis to generate the clustering, and we are not clear how to run physical-aware synthesis on these benchmarks. If you have any idea about this, please let us know. Thanks.
@adyasaurabhn -- Thanks for your suggestion. I will discuss with @ZhiangWang033 and others. (Obviously we can get a placement to seed (x,y) locations for macro and stdcell instances by means other than commercial physical synthesis.) The main reason we have not proposed to study ICCAD04 and other "venerable" testcases is that it's not possible to produce "Nature paper Table 1" metrics (WNS/TNS, power, routed WL) with them. This said, your point about proxy cost being a function of HPWL, density and congestion (i.e., no timing or power or detailed routing metrics) has been made in many other discussions. Please hang on while we assess feasibility of this study and resolve other details (e.g., which versions of which benchmarks are best to study with limited compute resources). Thanks again.
Thank you @abk-tilos @ZhiangWang033. Looking forward to this study. Agreed that the power of an RL-based solution is in modeling hard-to-optimize metrics like congestion, WNS, TNS, DRCs, and power.
We are sorry for the delay in responding, and thank you for your suggestion to run the ICCAD04 testcases. Here is our current status related to ICCAD04 testcases:
Please let us know if you have any questions or suggestions.
Thank you for the update. Looking forward to CT results on these benchmarks. A few comments: for ICCAD04, all designs have 20% whitespace. It is possible that different methods (CT, RePlAce, SA+RePlAce, CDNS mixed-size placer) perform differently at different whitespace levels. One suggestion would be to vary the amount of whitespace (e.g., 20%, 30%, 40%) for each design by changing its core area; this would show whether the different algorithms are sensitive to whitespace. Also, the variance of results (QoR) across different starting seeds would tell us the stability of each algorithm on this problem. Thanks
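The whitespace sweep suggested above amounts to rescaling the core area for a fixed total cell+macro area. A minimal sketch of that arithmetic (the area numbers are hypothetical, purely for illustration):

```python
# whitespace = 1 - (total placeable area) / (core area), so for a fixed
# total stdcell + macro area A, the core area that achieves a target
# whitespace fraction w is A / (1 - w).

def core_area_for_whitespace(cell_area, target_ws):
    """Core area needed so that the design has `target_ws` whitespace."""
    assert 0.0 <= target_ws < 1.0
    return cell_area / (1.0 - target_ws)

cell_area = 8.0e6  # um^2, hypothetical total stdcell + macro area
for ws in (0.20, 0.30, 0.40):
    print(f"whitespace {ws:.0%}: core area {core_area_for_whitespace(cell_area, ws):.3e}")
```

In practice the new core would be realized by scaling the die/core box in the floorplan (e.g., keeping the aspect ratio fixed and scaling each side by the square root of the area ratio).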
Very interesting project.
Have you considered benchmarking against the IBM-MSwPins and Faraday benchmarks released at ICCAD 2004?
http://vlsicad.eecs.umich.edu/BK/ICCAD04bench/
Comparisons for the modern mixed size placement (MMS) benchmarks would be interesting too.
https://home.engineering.iastate.edu/~cnchu/pubs/c53.pdf
Mixed-size placement benchmarks have up to 2.4M standard cells and up to thousands of macros. The macro placement complexity lies not just in the number of macros but also in their varied sizes and aspect ratios. Since the RL cost function is also primarily HPWL, comparing HPWL and runtime for a fully legal placement (macros + standard cells) across techniques should be a good metric. HPWL, as a first-order metric, has been shown to correlate well with final routed wirelength.
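For reference, the HPWL metric discussed above is just the sum, over all nets, of the half-perimeter of each net's pin bounding box. A minimal sketch (the pin coordinates below are made up for illustration):

```python
# Half-perimeter wirelength (HPWL): for each net, take the bounding box
# of its pin locations and add width + height; sum over all nets.

def hpwl(nets):
    """nets: iterable of nets, each a list of (x, y) pin coordinates."""
    total = 0.0
    for pins in nets:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

nets = [
    [(0.0, 0.0), (3.0, 4.0)],               # 2-pin net: bbox 3 x 4 -> 7
    [(1.0, 1.0), (5.0, 2.0), (2.0, 6.0)],   # 3-pin net: bbox 4 x 5 -> 9
]
print(hpwl(nets))  # -> 16.0
```

For multi-pin nets HPWL is only a lower bound on the actual routed tree length, which is part of why it is a first-order metric rather than an exact one.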