
Questions about performance comparison in the paper #95

Open
garyzhang99 opened this issue Jul 29, 2024 · 0 comments


Hi team,

Just read your paper and loved it!

I'm really interested in the specific comparisons you made. The paper mainly compares against methods like DSPy (BFSR) and Reflexion, but there isn't much on how other prompt optimization methods stack up against these.

I'm curious how TextGrad performs compared to other prompt optimization methods like APE, PE2, and APO. In particular, you mention the Prompt Optimization with Textual Gradients (ProTeGi) method in your paper. How does TextGrad compare to ProTeGi in terms of performance? Does it offer any further gains?
Additionally, are these methods applicable to datasets beyond GSM8K and BIG-Bench, or do they have limitations in other contexts that TextGrad doesn't?

If you've done any experiments or have insights on these comparisons, could you share your results or thoughts?

Thanks a lot for your amazing work!
