Merge Rowley quiz code to main #9

Merged
2 commits merged into main on Nov 1, 2024
76 changes: 76 additions & 0 deletions drop_down_quizzes/DropDownQuizExamples.ipynb
@@ -0,0 +1,76 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": null,
"id": "d31bd130-7b19-478c-a186-1bf5d55eac1f",
"metadata": {},
"outputs": [],
"source": [
"#install the required packages\n",
"import requests\n",
"import json\n",
"import ipywidgets as widgets\n",
"from IPython.display import display\n",
"import random\n",
"print(\"done installing required packages\")\n",
"\n",
"#install the module quiz_module.py\n",
"##from quiz_module import run_quiz\n",
"from quiz_module import run_quiz\n",
"print(\"done installing quiz_module\")"
]
},
{
"cell_type": "markdown",
"id": "922cfc55-9219-4fb2-ad8f-8d07c578a5ab",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\">\n",
" <b>Use the dropdown menu to make appropriate matches</b>\n",
" </div>"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6b4c601f-7e56-4fce-ae51-20ec4fd6e582",
"metadata": {},
"outputs": [],
"source": [
"#This randomizes the order of the possible answers.\n",
"##import_type should be one of two str values: 'json' or 'url'\n",
"##import_path here defines the json filepath\n",
"run_quiz(import_type=\"json\", import_path=\"questions/TestingExample.json\", instant_feedback=False, shuffle_questions=False, shuffle_answers=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "095d4bcd-2f3b-4d9b-96ef-a03f86865056",
"metadata": {},
"outputs": [],
"source": [
"#By changing the parameters, you can have it provide instant feedback and/or randomize the question order.\n",
"##import_type should be one of two str values: 'json' or 'url'\n",
"##import_path here defines the json filepath\n",
"run_quiz(import_type=\"json\", import_path=\"questions/TestingExample.json\", instant_feedback=True, shuffle_questions=True, shuffle_answers=True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "374550dc-6bf9-49c8-ad65-6500ef64e320",
"metadata": {},
"outputs": [],
"source": [
"#Here we are importing the raw json file from the github repo.\n",
"##import_path here defines the URL to access the json quiz file on github\n",
"run_quiz(import_type='url', import_path=\"https://raw.githubusercontent.com/JRowleyLab/JupyterDropDownQuizzes/main/questions/TestingExample.json\", instant_feedback=True, shuffle_questions=True, shuffle_answers=True)"
]
}
],
"metadata": {},
"nbformat": 4,
"nbformat_minor": 5
}
37 changes: 37 additions & 0 deletions drop_down_quizzes/README.md
@@ -0,0 +1,37 @@
# JupyterDropDownQuizzes
Create matching-type quizzes in Jupyter notebooks, with options to include distractors, randomize question and answer order, and/or provide instant feedback.

## Dependencies (most or all should already be present in a standard Python 3 Jupyter installation):
* Jupyter Notebook or JupyterLab running Python 3
* ipywidgets
* IPython (for display)
* json (standard library)
* random (standard library)
* requests

## Usage:
Follow the examples inside the Jupyter notebook found in this repository: [DropDownQuizExamples.ipynb](DropDownQuizExamples.ipynb). The first cell loads the required dependencies and the quiz_module, and must be run before any of the quiz cells.
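For reference, the first cell comes down to the imports below (quiz_module.py lives next to the notebook in this repository, so a plain import works):

```python
# Packages used by the quiz widgets
import requests
import json
import ipywidgets as widgets
from IPython.display import display
import random

# The quiz runner provided by quiz_module.py
from quiz_module import run_quiz
```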

Questions, answers, hints, and distractors are stored in a JSON file; an example can be found in the questions folder: [TestingExample.json](questions/TestingExample.json). You can download the file and import it using a local path or, alternatively, import the data using the raw GitHub URL (see examples below).

![image](https://github.com/user-attachments/assets/dd9b3e22-7814-4b72-9c65-104c179715cd)

Replace the values for question, answer, and explanation, and add additional blocks as needed. Distractors are optional: they add extra items to the dropdown menus that are not the correct answer to any question. If you do not want to include distractors, simply delete everything within the corresponding brackets.
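If you prefer to generate a question file programmatically rather than editing it by hand, here is a minimal sketch that writes the same structure as TestingExample.json (the output path questions/MyQuiz.json is just a placeholder):

```python
import json

# One block per match; "explanation" is shown as a hint when the match is wrong.
quiz_data = {
    "description": "Match the appropriate powers to the superhero:",
    "questions": [
        {"question": "Flight", "answer": "Superman", "explanation": "Hint: Krypton is calling."},
        {"question": "Spidey Sense", "answer": "Spiderman", "explanation": "Hint: Someone with spider powers."},
    ],
    # Optional: extra dropdown items that are not the correct answer to any question.
    # Delete the list contents (or the key) if you do not want distractors.
    "distractors": ["Batman", "Mr T", "Antman"],
}

# Write the file where the notebook expects to find it (placeholder path).
with open("questions/MyQuiz.json", "w") as f:
    json.dump(quiz_data, f, indent=2)
```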

The command to run the quiz using the downloaded json file is <b> run_quiz("json", "path_to_questions.json", instant_feedback=BOOLEAN, shuffle_questions=BOOLEAN, shuffle_answers=BOOLEAN) </b>.

Our example using the json file import method: <b> run_quiz("json", "questions/TestingExample.json", instant_feedback=False, shuffle_questions=False, shuffle_answers=True) </b>.

The command to run the quiz using a url is <b> run_quiz("url", "https://raw.githubusercontent.com/repo_name/branch/file_directory/file.json", instant_feedback=BOOLEAN, shuffle_questions=BOOLEAN, shuffle_answers=BOOLEAN) </b>.

Our example using the url import method: <b> run_quiz("url", "https://raw.githubusercontent.com/JRowleyLab/JupyterDropDownQuizzes/main/questions/TestingExample.json", instant_feedback=False, shuffle_questions=False, shuffle_answers=True) </b>.
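As a runnable notebook cell, the two example calls above look like this:

```python
from quiz_module import run_quiz

# Local JSON file: fixed question order, shuffled dropdown options,
# feedback only after clicking "Check Answers"
run_quiz("json", "questions/TestingExample.json",
         instant_feedback=False, shuffle_questions=False, shuffle_answers=True)

# The same quiz loaded from the raw GitHub URL
run_quiz("url",
         "https://raw.githubusercontent.com/JRowleyLab/JupyterDropDownQuizzes/main/questions/TestingExample.json",
         instant_feedback=False, shuffle_questions=False, shuffle_answers=True)
```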

The examples above keep the order of the questions (shuffle_questions=False) but randomize the order of items within the dropdown menus (shuffle_answers=True). They also wait until you have selected an answer for every question and clicked "Check Answers" before providing feedback.

![image](https://github.com/user-attachments/assets/e2b8baf6-cbec-4736-a31f-c73d967738c1)

Changing the instant_feedback and shuffle_questions parameters to True enables immediate feedback on each selection and randomizes the order of the questions as well as the order of the answers.
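For example, mirroring the second cell of the example notebook:

```python
# Instant per-question feedback, shuffled questions, shuffled dropdown options
run_quiz("json", "questions/TestingExample.json",
         instant_feedback=True, shuffle_questions=True, shuffle_answers=True)
```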

![image](https://github.com/user-attachments/assets/b6cc96a5-de69-4e54-a223-a9f5a466b288)

That's it! Enjoy making dynamic quizzes in Jupyter!
1 change: 1 addition & 0 deletions drop_down_quizzes/questions/README
@@ -0,0 +1 @@
Questions, answers, hints, and distractors in JSON files.
21 changes: 21 additions & 0 deletions drop_down_quizzes/questions/TestingExample.json
@@ -0,0 +1,21 @@
{
  "description": "Match the appropriate powers to the superhero:",
  "questions": [
    {
      "question": "Flight",
      "answer": "Superman",
      "explanation": "Hint: Krypton is calling."
    },
    {
      "question": "Spidey Sense",
      "answer": "Spiderman",
      "explanation": "Hint: Someone with spider powers."
    },
    {
      "question": "Defeats all other superheroes",
      "answer": "Chuck Norris",
      "explanation": "Hint: His tears cure cancer. Too bad he's never cried!"
    }
  ],
  "distractors": ["Batman", "Mr T", "Antman"]
}
130 changes: 130 additions & 0 deletions drop_down_quizzes/quiz_module.py
@@ -0,0 +1,130 @@
# quiz_module.py
import requests
import json
import ipywidgets as widgets
from IPython.display import display
import random

class MatchingQuiz:
    def __init__(self, import_type: str, import_path: str, instant_feedback=False, shuffle_questions=False, shuffle_answers=False):
        self.questions = []
        self.answers = {}
        self.user_answers = []
        self.feedback_labels = []
        self.instant_feedback = instant_feedback
        self.shuffle_questions = shuffle_questions
        self.shuffle_answers = shuffle_answers
        self.distractors = []
        self.explanations = {}
        self.load_questions_from_json(import_type, import_path)

    def load_questions_from_json(self, import_type, import_path):
        if import_type == 'json':
            # Load the JSON file from a local path
            with open(import_path, 'r') as f:
                data = json.load(f)

        elif import_type == 'url':
            # Fetch the JSON file from a URL (e.g. a raw GitHub link)
            res = requests.get(import_path)
            data = res.json()

        else:
            raise ValueError("Invalid parameter value, import_type must be a str value equal to 'json' or 'url'.")

        # Extract questions, answers, and explanations
        for item in data["questions"]:
            question = item["question"]
            answer = item["answer"]
            explanation = item.get("explanation", "No explanation provided.")
            self.questions.append((question, answer))
            self.answers[question] = answer
            self.explanations[question] = explanation

        # Optional distractors add extra (incorrect) options to every dropdown
        self.distractors = data.get("distractors", [])

    def setup_quiz(self):
        # Optionally shuffle questions
        questions = self.questions[:]
        if self.shuffle_questions:
            random.shuffle(questions)

        # Clear previous user answers and feedback labels
        self.user_answers = []
        self.feedback_labels = []

        # Display each question with a dropdown for selecting the answer
        question_widgets = []
        for i, (question, correct_answer) in enumerate(questions):
            # Create answer options with distractors
            answer_options = list(self.answers.values()) + self.distractors
            if self.shuffle_answers:
                random.shuffle(answer_options)

            # Create the label, dropdown, and feedback label
            question_label = widgets.Label(value=question)
            answer_dropdown = widgets.Dropdown(options=['--Select--'] + answer_options)
            feedback_label = widgets.Label(value="")

            # Store reference to each dropdown and its corresponding feedback label
            self.user_answers.append(answer_dropdown)
            self.feedback_labels.append(feedback_label)

            question_widgets.append(widgets.HBox([question_label, answer_dropdown, feedback_label]))

            # Define the dropdown change event for instant feedback
            def on_answer_change(change, answer_dropdown=answer_dropdown, feedback_label=feedback_label, correct_answer=correct_answer):
                if self.instant_feedback and change['name'] == 'value':
                    selected_answer = answer_dropdown.value
                    if selected_answer == correct_answer:
                        feedback_label.value = "✔️"  # Checkmark for correct
                        feedback_label.layout.color = 'green'
                    elif selected_answer == '--Select--':
                        feedback_label.value = ""  # Reset for no selection
                    else:
                        feedback_label.value = "❌"  # Cross for incorrect
                        feedback_label.layout.color = 'red'

            # Link the dropdown to the feedback event
            answer_dropdown.observe(on_answer_change, names='value')

        # Create a button to check answers
        check_button = widgets.Button(description="Check Answers")
        reset_button = widgets.Button(description="Reset Quiz")
        result_label = widgets.HTML(value="")

        # Define the button click event for "Check Answers"
        def on_check_button_click(b):
            correct = 0
            explanations_output = ""

            for i, (question, _) in enumerate(questions):
                selected_answer = self.user_answers[i].value
                if selected_answer == self.answers[question]:
                    correct += 1
                else:
                    # Add explanation for incorrect answers
                    explanation = self.explanations.get(question, "No explanation provided.")
                    explanations_output += f"<li>{question}: {explanation}</li>"

            # Update the result label with color based on the score
            if correct == len(questions):
                result_label.value = f"<span style='color: green; font-weight: bold;'>✔️ Correct!</span>"
            else:
                result_label.value = f"<span style='color: red; font-weight: bold;'>❌ Incorrect! Try again.</span>"
                result_label.value += f"<br><ul>{explanations_output}</ul>"

        # Define the button click event for "Reset Quiz"
        def on_reset_button_click(b):
            for dropdown, feedback in zip(self.user_answers, self.feedback_labels):
                dropdown.value = '--Select--'
                feedback.value = ""
            result_label.value = ""

        check_button.on_click(on_check_button_click)
        reset_button.on_click(on_reset_button_click)

        # Display all components
        display(widgets.VBox(question_widgets + [check_button, reset_button, result_label]))

# Function to create and run the quiz
def run_quiz(import_type, import_path, instant_feedback=False, shuffle_questions=False, shuffle_answers=False):
    quiz = MatchingQuiz(import_type, import_path, instant_feedback, shuffle_questions, shuffle_answers)
    quiz.setup_quiz()