
Benchmarking against IBM-MSwPins, Faraday, MMS benchmarks #27

Open
adyasaurabhn opened this issue Sep 25, 2022 · 5 comments

@adyasaurabhn

Very interesting project.
Have you considered benchmarking against the IBM-MSwPins and Faraday benchmarks released at ICCAD 2004?
http://vlsicad.eecs.umich.edu/BK/ICCAD04bench/
Comparisons on the modern mixed-size placement (MMS) benchmarks would be interesting too.
https://home.engineering.iastate.edu/~cnchu/pubs/c53.pdf
Mixed-size placement benchmarks have up to 2.4M standard cells and up to thousands of macros. The macro placement complexity lies not only in the number of macros but also in their varied sizes and aspect ratios. Since the RL cost function is also primarily HPWL, an HPWL/runtime comparison for a fully legal placement (macros + standard cells) across the various techniques should be a good metric. HPWL, as a first-order metric, has been shown to correlate well with final routed wirelength.
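
For concreteness, here is a minimal sketch of the HPWL metric I am proposing as the comparison basis. The placement and net data structures are hypothetical stand-ins, and pin offsets within cells are ignored; only the half-perimeter computation itself is standard.

```python
# Minimal HPWL sketch. `placement` maps each cell to an (x, y) location
# and `nets` lists the cells on each net; both are hypothetical input
# structures, and pin offsets within cells are ignored for simplicity.

def hpwl(placement: dict[str, tuple[float, float]],
         nets: list[list[str]]) -> float:
    """Sum of half-perimeter bounding-box wirelengths over all nets."""
    total = 0.0
    for net in nets:
        xs = [placement[cell][0] for cell in net]
        ys = [placement[cell][1] for cell in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Example with two cells on one net:
print(hpwl({"a": (0.0, 0.0), "b": (3.0, 4.0)}, [["a", "b"]]))  # 7.0
```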

@ZhiangWang033
Collaborator

Hi adyasaurabhn, we did consider these testcases when we started this project. One problem, however, is that Circuit Training relies on running physically aware synthesis to generate the clustering. We are not sure how to run physically aware synthesis on these benchmarks. If you have any ideas about this, please let us know. Thanks.

@abk-tilos
Contributor

@adyasaurabhn -- Thanks for your suggestion. I will discuss with @ZhiangWang033 and others. (Obviously we can get a placement to seed (x,y) locations for macro and stdcell instances by means other than commercial physical synthesis.) The main reason we have not proposed to study ICCAD04 and other "venerable" testcases is that it's not possible to produce "Nature paper Table 1" metrics (WNS/TNS, power, routed WL) with them. This said, your point about proxy cost being a function of HPWL, density and congestion (i.e., no timing or power or detailed routing metrics) has been made in many other discussions. Please hang on while we assess feasibility of this study and resolve other details (e.g., which versions of which benchmarks are best to study with limited compute resources). Thanks again.
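
As a side note on the proxy cost just mentioned, a hedged sketch of the kind of combined objective being discussed is below; the weight defaults shown are illustrative placeholders, not the published Circuit Training values.

```python
# Sketch of an HPWL/density/congestion proxy cost of the kind discussed
# above. The weight defaults are illustrative placeholders; Circuit
# Training documents its own weights, which may differ.

def proxy_cost(wirelength: float, density: float, congestion: float,
               w_density: float = 0.5, w_congestion: float = 0.5) -> float:
    """Weighted sum of normalized wirelength, density, and congestion."""
    return wirelength + w_density * density + w_congestion * congestion
```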

@adyasaurabhn
Author

Thank you @abk-tilos @ZhiangWang033. Looking forward to this study. Agreed that the power of an RL-based solution lies in modeling hard-to-optimize metrics like congestion, WNS, TNS, DRCs, and power.

@sakundu
Collaborator

sakundu commented Dec 2, 2022

We are sorry for the delay in responding, and thank you for your suggestion to run the ICCAD04 testcases.

Here is our current status related to ICCAD04 testcases:

  • We have used the Bookshelf format as our input and made some modifications to run Circuit Training (CT). Please see the conventions/methods that we use here. Any feedback is welcome. (A minimal illustration of the Bookshelf format appears after this list.)
  • We do not use LEF / DEF versions due to several basic problems. See here.
  • We have run RePlAce (standalone version downloaded from here) for the ICCAD04 testcases, using several reasonable parameter settings. Here are the results.
  • We are working on generating CT results. Once these are ready, we will post them.
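
As a concrete illustration of the Bookshelf input mentioned in the first bullet, here is a minimal sketch of reading a Bookshelf .nodes file (node name, width, height, and an optional terminal keyword marking fixed instances). This is a generic reading of the published format, not our CT translation flow.

```python
# Minimal sketch of parsing a Bookshelf .nodes file into
# {name: (width, height, is_terminal)}. Skips the "UCLA nodes" header,
# comment lines, and the NumNodes / NumTerminals count lines. This is a
# generic illustration of the format, not the CT translation flow.

def parse_nodes(path: str) -> dict[str, tuple[float, float, bool]]:
    nodes = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if (not line or line.startswith("#") or line.startswith("UCLA")
                    or line.startswith(("NumNodes", "NumTerminals"))):
                continue
            tokens = line.split()
            is_fixed = any(t.startswith("terminal") for t in tokens[3:])
            nodes[tokens[0]] = (float(tokens[1]), float(tokens[2]), is_fixed)
    return nodes
```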

Please let us know if you have any questions or suggestions.

@adyasaurabhn
Author

Thank you for the update. Looking forward to CT results on these benchmarks. A few comments: for ICCAD04, all designs have 20% whitespace. It is possible that different methods (CT, RePlAce, SA+RePlAce, CDNS mixed-size placer) perform differently at different whitespace levels. One suggestion would be to vary the amount of whitespace (e.g., 20%, 30%, 40%) for each design by changing its core area; this would show whether there is sensitivity to whitespace across the different algorithms (a small sketch of the core-area adjustment is below). Also, the variance of results (QoR) across different starting seeds would tell us the stability of each algorithm on this problem. Thanks
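
A minimal sketch of the suggested core-area adjustment, assuming the total movable-instance area is known and the core aspect ratio is held fixed; it uses core area = cell area / (1 - whitespace), and the function name is hypothetical.

```python
import math

# Sketch of the suggested whitespace sweep: size the core so that a
# target fraction of its area is whitespace, holding aspect ratio fixed.
# Assumes core_area = total_cell_area / (1 - whitespace).

def core_dimensions(total_cell_area: float, whitespace: float,
                    aspect_ratio: float = 1.0) -> tuple[float, float]:
    """Return (width, height) of a core with the target whitespace."""
    core_area = total_cell_area / (1.0 - whitespace)
    height = math.sqrt(core_area / aspect_ratio)
    return aspect_ratio * height, height

# Example: sweep 20% / 30% / 40% whitespace for a 1e6-area design.
for ws in (0.20, 0.30, 0.40):
    w, h = core_dimensions(1.0e6, ws)
    print(f"whitespace={ws:.0%}: core {w:.0f} x {h:.0f}")
```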
