Why is the model compiling? #91
Comments
Yes, these are the baseline runs to compare any future ideas or proposals against.
Can't we just distribute the cached baseline results, to avoid the heavy task of compiling? 🤔
> On Sun, Sep 1, 2024, Cong Lu wrote: "Yes these are the baseline runs to compare any future ideas or proposals against."
See the FAQ; no, because things like runtime comparisons are machine-dependent :)
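The machine-dependence point can be illustrated with a small timing sketch (the workload here is illustrative, not taken from the repo): the same fixed computation yields different wall-clock times on different hardware, so a runtime cached on one machine is not a fair baseline for runs measured on another.

```python
import time

def time_workload(n: int = 200_000) -> float:
    """Time a fixed CPU-bound workload; the absolute result depends on the host."""
    start = time.perf_counter()
    total = 0
    for i in range(n):
        total += i * i
    return time.perf_counter() - start

elapsed = time_workload()
# The value printed here varies from machine to machine, which is why
# cached baseline runtimes cannot be compared against locally measured runs.
print(f"workload took {elapsed:.4f}s on this machine")
```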
@conglu1997 I've been using a cloud instance to run this AI Scientist; it works fine.
Is there any later usage of the models (nanogpt, nanogpt_lite, ...) in launch_scientist.py, or are they only there for comparing against the baseline results? In this image we are compiling the model, but upon later analysis of the code in launch_scientist.py it seems we are just comparing with the baseline results.
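As I understand the thread, the compiled runs produce cached metrics that later code only reads for comparison. A minimal sketch of that comparison step, assuming baseline metrics are stored as a flat JSON file per run (the file name `final_info.json` and the metric keys are my assumptions, not taken from the repo):

```python
import json
from pathlib import Path

def load_baseline(run_dir: str) -> dict:
    """Load cached baseline metrics from a run directory (path/schema assumed)."""
    return json.loads(Path(run_dir, "final_info.json").read_text())

def compare(new_metrics: dict, baseline: dict) -> dict:
    """Difference of each metric shared by both runs, relative to the baseline."""
    shared = new_metrics.keys() & baseline.keys()
    return {k: new_metrics[k] - baseline[k] for k in shared}

# Hypothetical usage: negative val_loss delta means the new idea improved on baseline.
baseline = {"val_loss": 1.50, "train_time": 100.0}
new_run = {"val_loss": 1.42}
print(compare(new_run, baseline))
```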