Commit 2223bb4

Update README.md
1 parent 9c3eef2 commit 2223bb4

1 file changed: +1 −3 lines changed


Diff for: README.md

+1 −3
@@ -9,9 +9,7 @@ Next, we created a parser called extract.py to read each file and extract a dict
 
 To process the parsed data (now in a JSON object), we developed automate_parsed.py, which extracted the JSON object response into the starter code we wanted to insert into every query. This script parsed through all the files and saved the results in extracted{question_number}.txt, with each file corresponding to its respective question number. An example of Question 2 from LeetCode is provided in the scraping_code folder.
 
-Once the starter code was prepared, we were ready to write all the queries. All files/scripts used to write the queries are stored in the create_queries_responses folder. We wrote a program called query_writer.py to create the queries using the LeetCode CSV file and the extracted starter code files. Each query were saved as query{question_number}.txt, with each file corresponding to its respective question number. A sample query, query2.txt, is provided in the queries folder. Note: To generate all queries, you must edit the script query_wrtier.py to specify the difficulty level (“easy,” “medium,” or “hard”).
-
-Finally, to generate responses using ChatGPT’s GPT-3.5-turbo, we wrote getCode.py, which uses an API key to access ChatGPT. To change the model of testing, you can specify which model you want in the script. We automated this step using automate_queries.py to run all queries simultaneously, storing the responses in code_query{question_number}.py, with each file corresponding to its respective question number. A sample output can be found in the folder ouput_code.
+Once the starter code was prepared, we were ready to write all the queries. All files/scripts used to write the queries are stored in the create_queries_responses folder. We wrote a program called query_writer.py to create the queries using the LeetCode CSV file and the extracted starter code files. Each query was saved as query{question_number}.txt, with each file corresponding to its respective question number. A sample query, query2.txt, is provided in the queries folder. Note: To generate all queries, you must edit the script query_wrtier.py to specify the difficulty level (“easy,” “medium,” or “hard”). Finally, to generate responses using ChatGPT’s GPT-3.5-turbo, we wrote getCode.py, which uses an API key to access ChatGPT. To change the model of testing, you can specify which model you want in the script. We automated this step using automate_queries.py to run all queries simultaneously, storing the responses in code_query{question_number}.py, with each file corresponding to its respective question number. A sample output can be found in the folder ouput_code.
 
 When we returned to Chain of Thought (CoT) prompt engineering, you can find an example query in CoTquery.txt. Additionally, when we incorporated failed test cases, an example query is available in the folder queries called IncorporatingFailedQuery.txt.
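The diff above describes query_writer.py, which combines rows of the LeetCode CSV with the extracted starter-code files into per-question query files. A minimal sketch of that step follows; the CSV column names (`question_number`, `difficulty`, `description`) and the prompt wording are assumptions, not the repo's actual code.

```python
# Hypothetical sketch of the query_writer.py step: for each CSV row at the
# chosen difficulty, read extracted{question_number}.txt and write
# query{question_number}.txt. Column names and prompt text are assumed.
import csv
from pathlib import Path

def write_queries(csv_path, starter_dir, out_dir, difficulty="easy"):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row["difficulty"].lower() != difficulty:
                continue  # query_wrtier.py is edited to pick the difficulty
            n = row["question_number"]
            starter = Path(starter_dir, f"extracted{n}.txt").read_text(encoding="utf-8")
            query = (
                "Solve the following LeetCode problem.\n\n"
                f"{row['description']}\n\n"
                f"Starter code:\n{starter}\n"
            )
            path = out / f"query{n}.txt"
            path.write_text(query, encoding="utf-8")
            written.append(path.name)
    return written
```

Filtering by difficulty in one place mirrors the README's note that the script must be edited per difficulty level; a command-line flag would be a natural refinement.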

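The getCode.py / automate_queries.py step described above (send each query file to GPT-3.5-turbo, save the reply as code_query{question_number}.py) can be sketched as below. The OpenAI call follows the standard openai-python Chat Completions API; the injectable `send` function and the sequential loop are assumptions for testability — the repo runs all queries simultaneously.

```python
# Hypothetical sketch of getCode.py + automate_queries.py: read every
# query{n}.txt, send its text to a chat model, and store the reply as
# code_query{n}.py. The send() indirection is an added assumption.
from pathlib import Path

def openai_send(prompt, model="gpt-3.5-turbo"):
    # Requires the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model=model,  # swap in another model name to change the model under test
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def run_queries(query_dir, out_dir, send=openai_send):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for qfile in sorted(Path(query_dir).glob("query*.txt")):
        n = qfile.stem.removeprefix("query")  # "query2" -> "2"
        reply = send(qfile.read_text(encoding="utf-8"))
        (out / f"code_query{n}.py").write_text(reply, encoding="utf-8")
```

Passing a different `send` (or a different `model` argument) changes the model under test without touching the loop, matching the README's note that the model can be specified in the script.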
0 commit comments
