Share your best prompts and generations (and model name) here. #7
Comments
Model: 30B, prompt:
generation:
|
It writes PHP code as well! Model: 30B, prompt:
generation
|
Reviewing and fixing an error Model: 30B, prompt:
generation
It seems markdown is also supported by LLaMA :) |
Summarizing text Model: 30B, prompt:
generation
|
Generating prompts for Stable Diffusion!!! Just giving more examples to LLaMA finally allows it to generate prompts for SD! Even ones I did not ask for. Model: 30B, prompt:
generation
|
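The trick described above is few-shot prompting: prepend a handful of example pairs so the model continues the pattern. A minimal sketch of assembling such a prompt, where the example subjects and Stable Diffusion prompts are made up for illustration:

```python
# Hypothetical few-shot examples; replace with your own favorite SD prompts.
sd_examples = [
    ("a cat", "a fluffy cat, detailed fur, studio lighting, 8k, trending on artstation"),
    ("a castle", "an ancient castle on a cliff, dramatic clouds, golden hour, highly detailed"),
]

def build_few_shot_prompt(subject: str) -> str:
    """Assemble a few-shot prompt that ends mid-pattern, so the model's
    natural continuation is a Stable Diffusion prompt for `subject`."""
    lines = []
    for subj, sd_prompt in sd_examples:
        lines.append(f"Subject: {subj}\nStable Diffusion prompt: {sd_prompt}")
    # Leave the last entry open for the model to complete.
    lines.append(f"Subject: {subject}\nStable Diffusion prompt:")
    return "\n\n".join(lines)

print(build_few_shot_prompt("a dragon"))
```

The more (and the more varied) the examples, the more reliably a base model like LLaMA picks up the format.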
Extraction of 4chan's "medical knowledge" :) Model: 30B, prompt:
generation
then the model fell into a loop, generating only spaces |
Arduino IDE is supported, but it forgot to change the pin :) Model: 30B, prompt:
generation
|
Funny, the 7B model slightly outperformed the 30B model on the simplest Arduino task :) Model: 7B, prompt:
generation
|
My 7B model is not interested in coding. User: Write the Arduino code, fully compatible with Arduino IDE, with detailed comments, to blink LED on a second pin, once a two seconds. |
@kmichal try with example.py, not with chat. Also, do not forget that this is a generic model, so we need to give it not-so-obvious prompts https://github.com/facebookresearch/llama/blob/main/FAQ.md#2-generations-are-bad |
I think the arduino prompt can be improved by providing clearer instructions. I've modified the prompt as follows:
OUTPUT (30B - quantized to 4bit):
|
Ability to write SQL code Model: 30B, temp 0.8, repetition penalty 1.17, top_p 0, top_k 40, sampler top_k, prompt:
generation
|
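The sampler settings quoted above (temperature 0.8, repetition penalty 1.17, top-k 40) can be sketched as follows. This is a toy, dictionary-based illustration, not the repo's actual sampler: real implementations operate on logit tensors, and the CTRL-style repetition penalty shown here is one common variant:

```python
import math
import random

def sample_token(logits, prev_tokens, temperature=0.8,
                 repetition_penalty=1.17, top_k=40):
    """Pick one token from `logits` (a token -> score dict) using
    temperature, repetition penalty, and top-k filtering."""
    penalized = {}
    for tok, score in logits.items():
        if tok in prev_tokens:
            # Repetition penalty: shrink positive scores and amplify
            # negative ones for tokens already generated.
            score = score / repetition_penalty if score > 0 else score * repetition_penalty
        penalized[tok] = score
    # Keep only the top_k highest-scoring tokens.
    top = sorted(penalized.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    # Temperature-scaled softmax over the survivors, then sample.
    exps = [math.exp(s / temperature) for _, s in top]
    total = sum(exps)
    r = random.random() * total
    for (tok, _), e in zip(top, exps):
        r -= e
        if r <= 0:
            return tok
    return top[-1][0]
```

Lower temperature and a penalty slightly above 1.0, as used here, bias the model toward its most confident continuation while discouraging the looping seen in some of the generations above.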
Model: 7B
|
GPT3.5 and GPT4 for comparison on this prompt: Write c++ code, fully compatible with Arduino IDE, with detailed comments, to blink an LED on pin 6 once every two seconds. GPT3.5:
GPT4:
I also asked GPT4 to compare these two codes, and it said (code1 = GPT3.5, code2 = GPT4):
When I told GPT3.5 about the bug pointed out by GPT4, it fixed it. GPT4 is better than any other model on the market, but if someone tunes LLaMA using RLHF with instructions and Parallel Context Windows, maybe it could stand against GPT4, at least for text input/output. Fixed code by GPT3.5:
|
Model: 65B Prompt:
Generation:
|
Stable Diffusion prompting after quick training of 13B HF model on a short dataset: Prompt:
generation
|
Could you tell me how to train the model on a custom dataset? |
@zotona create your datasets/sd.csv file that contains your favorite prompts:
modify hf-training-example to use your csv file, and run
|
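Following @randaller's instructions, the CSV is just one prompt per row. A minimal sketch of loading such a file with the standard library, assuming a single `text` column (the column name and example rows here are assumptions; check hf-training-example for the exact format it expects):

```python
import csv
import io

# Stand-in for datasets/sd.csv: a header row plus one quoted prompt per line.
raw = """text
"a fluffy cat, detailed fur, studio lighting, 8k"
"an ancient castle on a cliff, golden hour, highly detailed"
"""

def load_prompts(f) -> list[str]:
    """Read one training prompt per row from a CSV with a `text` column."""
    return [row["text"] for row in csv.DictReader(f)]

prompts = load_prompts(io.StringIO(raw))
print(prompts)
```

In practice you would pass a real file handle (`open("datasets/sd.csv")`) and feed the resulting list to the tokenizer in the training script.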
Model: 30B, prompt:
To my great surprise it generated:
To me it looks like LLaMA was trained on some chat sessions, or it's actually sentient, because how else can you explain that, despite being prompted to act as Xi Jinping, it said it is an "artificial intelligence that can learn from its experience and improve itself by learning new things"? The only other explanation is that Xi is an AI :) |
Can you share some information about your training dataset? |
Do you have any git repo for this? @randaller |
As the model was trained on "scientific-looking" data and wiki, we need to be "more scientific" when prompting.
Model: 30B, prompt:
generation:
Stopped the generation; I did not wish to wait for the full list of 256 integers.