From efd6817c8856de30d0b66d6d5313317c0424d131 Mon Sep 17 00:00:00 2001
From: krychu
Date: Sun, 10 Dec 2023 19:07:51 +0100
Subject: [PATCH] Docs typos (#1415)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This fixes a small typo in docs.

## Final checklist 👀

### Submission agreement

By contributing to Evals, you are agreeing to make your evaluation logic and data under the same MIT license as this repository. You must have adequate rights to upload any data used in an Eval. OpenAI reserves the right to use this data in future service improvements to our product. Contributions to OpenAI Evals will be subject to our usual Usage Policies ().

- [x] I agree that my submission will be made available under an MIT license and complies with OpenAI's usage policies.

### Email address validation

If your submission is accepted, we will be granting GPT-4 access to a limited number of contributors. Access will be given to the email address associated with the commits on the merged pull request.

- [x] I acknowledge that GPT-4 access will only be granted, if applicable, to the email address used for my merged pull request.

### Limited availability acknowledgment

We know that you might be excited to contribute to OpenAI's mission, help improve our models, and gain access to GPT-4. However, due to the requirements mentioned above and the high volume of submissions, we will not be able to accept all submissions and thus not grant everyone who opens a PR GPT-4 access. We know this is disappointing, but we hope to set the right expectation before you open this PR.

- [x] I understand that opening a PR, even if it meets the requirements above, does not guarantee the PR will be merged nor GPT-4 access be granted.

### Submit eval

- [x] I have filled out all required fields of this form
- [x] I have used **Git LFS** for the Eval JSON data
- [x] (Ignore if not submitting code) I have run `pip install pre-commit; pre-commit install` and have verified that `mypy`, `black`, `isort`, `autoflake` and `ruff` are running when I commit and push

Failure to fill out all required fields will result in the PR being closed.
---
 docs/completion-fns.md | 2 +-
 docs/custom-eval.md    | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/completion-fns.md b/docs/completion-fns.md
index 4d93d2be6a..7b4200ff2b 100644
--- a/docs/completion-fns.md
+++ b/docs/completion-fns.md
@@ -25,7 +25,7 @@ langchain/llm/flan-t5-xl:
 ```
 Here is how it breaks down
 `langchain/llm/flan-t5-xl`: This is the top level key that will be used to access this completion function with `oaieval`.
-`class`: This is the path to your implementation of the completion function protocol. This class needs to importable within your python environment.
+`class`: This is the path to your implementation of the completion function protocol. This class needs to be importable within your python environment.
 `args`: These are arguments that are passed to your completion function when it is instantiated.
 
diff --git a/docs/custom-eval.md b/docs/custom-eval.md
index f03463e832..2cc884afaa 100644
--- a/docs/custom-eval.md
+++ b/docs/custom-eval.md
@@ -72,7 +72,7 @@ Generally, most `run` methods will follow the same pattern shown here: loading t
         This method does the following:
         1. Generate a prompt that contains the task statement, a few examples, and the test question.
         2. Generate a completion from the model.
-        2. Check if the generated answer is correct.
+        3. Check if the generated answer is correct.
         """
         stuffing = rng.sample(self.train_samples, self.train_samples_per_prompt)
 
@@ -93,7 +93,7 @@ Generally, most `run` methods will follow the same pattern shown here: loading t
         result = self.completion_fn(prompt=prompt, temperature=0.0, max_tokens=1)
         sampled = result.get_completions()[0]
 
-        evals.record_and_check_match(prompt=prompt, sampled=sampled, expected=sample["answer"])
+        evals.record_and_check_match(prompt=prompt, sampled=sampled, expected=test_sample["answer"])
 ```
 
 You'll notice that `eval_sample` doesn't take the `recorder` as an argument. This is because `eval_all_samples` sets it to be the default recorder before calling `eval_sample`, and the recording utilities defined in `evals/record.py` use the default recorder. In this example, the `eval_sample` method passes off a lot of the heavy lifting to the `evals.check_sampled_text` utility function, which is defined in `evals/api.py`. This utility function queries the model, defined by `self.model_spec`, with the given `prompt` and checks to see if the result matches the `expected` answer (or one of them, if given a list). It then records these matches (or non matches) using the default recorder.