Commit c6445c4 (parent 0093a28)
Showing 3 changed files with 973 additions and 0 deletions.
@@ -0,0 +1,388 @@ | ||
{ | ||
"cells": [ | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"# Resit Assignment 3a\n", | ||
"\n", | ||
"## Due: Friday, November 25, 2022 at 5pm (submission via Canvas)\n", | ||
"\n", | ||
"\n", | ||
"* Please submit your assignment as **a single .zip file** using Canvas (Assignments --> Assignment 3). Please put the notebooks as well as the Python modules (files ending with .py) in one folder, which you call ASSIGNMENT_3_FIRSTNAME_LASTNAME. Please zip this folder and upload it as your submission. DO NOT include the data files in your zip folder.\n", | ||
"\n", | ||
"* Please name your zip file with the following naming convention: RESIT_3_FIRSTNAME_LASTNAME.zip\n", | ||
"\n", | ||
"\n", | ||
"In this block, we covered:\n", | ||
"\n", | ||
"* Chapter 12 - Importing external modules \n", | ||
"* Chapter 13 - Working with Python scripts\n", | ||
"* Chapter 14 - Reading and writing text files\n", | ||
"* Chapter 15 - Off to analyzing text \n", | ||
"\n", | ||
"**Can I use external modules other than the ones treated so far?**\n", | ||
"\n", | ||
"\n", | ||
"For now, please try to avoid it. All the exercises can be solved with what we have covered in the course. \n", | ||
"\n", | ||
"In order to easily donwload the dataset for this assignment, you will need to install the datasets module\n" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Install Related Modules\n", | ||
"! pip install datasets" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"The following code downloads the \"Simplified English Wikipedia\" dataset from the HuggingFace dataset hub, consisting of more than 160k articles. Since this is a small assignment, we will be playing only with the first 1000 articles from the dataset. The code already converts the dataset into a JSON file, which you should know how to manipulate.\n", | ||
"\n", | ||
"The JSON file has the following simple structure:\n", | ||
"{\n", | ||
" \"title\": [\"Title 1\", \"Title 2\", ..., \"Title 1000\"],\n", | ||
" \"text\": [\"Text 1\", \"Text 2\", ..., \"Text 1000\"]\n", | ||
"}\n", | ||
"\n", | ||
"where the first element of the title list is the title of the first element of the text list, the second title corresponds to the second text, etcetera ..." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"from datasets import load_dataset\n", | ||
"import json\n", | ||
"\n", | ||
"# Please adjust this path if it is more convenient for you\n", | ||
"save_filepath=\"WikipediaSimple_1000k.json\"\n", | ||
"\n", | ||
"wiki_dataset = load_dataset(\"wikipedia\", \"20200501.simple\")['train']\n", | ||
"print(wiki_dataset)\n", | ||
"\n", | ||
"selected_data = wiki_dataset[:1000]\n", | ||
"with open(save_filepath, \"w\") as f:\n", | ||
" json.dump(selected_data, f)\n", | ||
"\n", | ||
"print(f\"File was succesfully downloaded and the first 1000 articles saved in '{save_filepath}' !\")\n" | ||
] | ||
}, | ||
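{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"The next cell is an optional sanity check (not part of the assignment): it re-opens the saved file and prints a few things, so you can verify that the JSON has the structure described above." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Optional sanity check: re-open the saved JSON file and inspect its structure\n", | ||
"import json\n", | ||
"\n", | ||
"with open(\"WikipediaSimple_1000k.json\") as infile:\n", | ||
"    wiki_dict = json.load(infile)\n", | ||
"\n", | ||
"print(wiki_dict.keys())          # expected: dict_keys(['title', 'text'])\n", | ||
"print(len(wiki_dict[\"title\"]))   # expected: 1000\n", | ||
"print(wiki_dict[\"title\"][0])     # title of the first article" | ||
] | ||
}, | ||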
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"## Functions & scope" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"### Excercise 1:\n", | ||
"\n", | ||
"Define a function called `match_titles_texts` which takes one positional parameter called **filepath** (a string) and a keyword parameter called **return_sorted** (a boolean), which is set as True by default.\n", | ||
"\n", | ||
"The function:\n", | ||
"* opens the JSON file defined by the filepath and saves the content into a python dictionary (please use the JSON file created in the previous cell)\n", | ||
"* Iterates through the dictionary to pair each title with its respective text\n", | ||
"* returns a list of tuples such that returned_list = [(Title_1, Text_1), (Title_2, Text_2), ..., (Title_1000, Text_1000)]\n", | ||
"* If the parameter `return_sorted=True` then return the list alphabetically ordered by Title. Otherwise return the list in the same order as it is.\n", | ||
"\n", | ||
"Hints:\n", | ||
"* Hint 1: Use the context manager and json module to read in the file.\n", | ||
"* Hint 2: Recall that the zip() function can help you.\n", | ||
"* Hint 3: There is a function which sorts items in an iterable called 'sorted'. Look at the documentation to see how it is used. \n", | ||
"* Hint 4: Don't forget to write a docstring. Please make sure that the docstring generally explains with the input is, what the function does, and what the function returns. The docstrings are mandatory!" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# your code here\n", | ||
"import json\n", | ||
"def match_titles_texts(filepath, return_sorted=True):\n", | ||
" # your code goes here (you can erase the pass)\n", | ||
" pass\n", | ||
"\n", | ||
"input_filepath = \"WikipediaSimple_1000k.json\"\n", | ||
"\n", | ||
"# Return Articles without sorting\n", | ||
"wiki_articles = match_titles_texts(input_filepath, return_sorted=False)\n", | ||
"assert [art[0] for art in wiki_articles[:2]] == ['Rosh Hashanah', 'Longueville, Calvados']\n", | ||
"\n", | ||
"# Return Articles sorted lexicographically\n", | ||
"wiki_articles = match_titles_texts(input_filepath)\n", | ||
"assert [art[0] for art in wiki_articles[:2]] == ['1 Giant Leap', '1100 New York Avenue']\n", | ||
"\n" | ||
] | ||
}, | ||
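{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Below is a minimal sketch of one possible approach, following the hints above (context manager + `json`, `zip()` and `sorted()`). It is only an illustration, not the official solution; the `_sketch` suffix is used so it does not clash with your own `match_titles_texts`." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Illustrative sketch (one possible approach, not the official solution)\n", | ||
"import json\n", | ||
"\n", | ||
"def match_titles_texts_sketch(filepath, return_sorted=True):\n", | ||
"    \"\"\"Read the Wikipedia JSON file and return a list of (title, text) tuples.\n", | ||
"\n", | ||
"    If return_sorted is True, the list is sorted alphabetically by title;\n", | ||
"    otherwise the original order of the file is kept.\n", | ||
"    \"\"\"\n", | ||
"    with open(filepath) as infile:\n", | ||
"        data = json.load(infile)\n", | ||
"    pairs = list(zip(data[\"title\"], data[\"text\"]))\n", | ||
"    if return_sorted:\n", | ||
"        pairs = sorted(pairs)  # tuples sort by their first element, i.e. the title\n", | ||
"    return pairs" | ||
] | ||
}, | ||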
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"## Working with python scripts" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"### Exercise 2\n", | ||
"For the following excercises we will make use of the `NLTK library`.\n", | ||
"\n", | ||
"* Write a function called `get_unique_tokens` which takes one positional parameter called **text** (a string) and one keyword parameter called **case_sensitive** (boolean). The function tokenizes the text and returns a set containing the unique words in the text. Remember that if **case_sensitive=True** it means that *word* and *Word* are considered two different tokens and if **case_sensitive=False** then you should only save the lowercased version of the tokens.\n", | ||
"\n", | ||
"\n", | ||
"#### 2a)" | ||
] | ||
}, | ||
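{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"Note: the NLTK functions used below (tokenizer, POS tagger, WordNet lemmatizer) rely on data packages that may not be present on your machine yet. If you run into a `LookupError`, the downloads in the next cell should fix it (this is an assumption about your local setup; skip it if the data is already installed)." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Download the NLTK data packages used in the exercises below\n", | ||
"# (only needed once per machine; skip if already installed)\n", | ||
"import nltk\n", | ||
"\n", | ||
"nltk.download(\"punkt\")                       # tokenizer models\n", | ||
"nltk.download(\"averaged_perceptron_tagger\")  # POS tagger\n", | ||
"nltk.download(\"wordnet\")                     # WordNet data for the lemmatizer\n", | ||
"nltk.download(\"omw-1.4\")                     # extra WordNet data needed by newer NLTK versions" | ||
] | ||
}, | ||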
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# your code here\n", | ||
"from nltk import word_tokenize\n", | ||
"\n", | ||
"def get_unique_tokens(text, case_sensitive=True):\n", | ||
" unique_words = set()\n", | ||
" tokens = word_tokenize(text)\n", | ||
" for tok in tokens:\n", | ||
" if case_sensitive:\n", | ||
" unique_words.add(tok)\n", | ||
" else:\n", | ||
" unique_words.add(tok.lower())\n", | ||
" return unique_words" | ||
] | ||
}, | ||
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"#### 2b)\n", | ||
"\n", | ||
"You will now write a function called `get_text_diversity` with one positional parameter called **text**, which can be any string. This function will calculate the ratio of unique tokens inside a text compared to the total number of tokens. The idea behind this is to have a metric that informs us about the complexity of a text (e.g. a very complex text introduces several terms and synonyms whereas a simple text tends to use a narrow vocabulary). Formally we define it as:\n", | ||
"\n", | ||
"$text\\_diversity=\\frac{unique\\_tokens}{all\\_tokens}*100$ \n", | ||
"\n", | ||
"If the text has all of their tokens appearing one single time then token_diversity=1, otherwise the more repetitive the article is, the closer text_diversity gets to 0.\n", | ||
"\n", | ||
"Write such a function and apply it to the 1000 Wikipedia articles. Print the Top-5 articles (with the highest diversity) and the Bottom-5 articles (with the lowest diversity)." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Your code here ..." | ||
] | ||
}, | ||
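{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"A minimal sketch of one way to compute the metric is shown below. The `_sketch` name is only used as an illustration, and the commented-out part assumes `wiki_articles` is the list of (title, text) tuples from Exercise 1." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Illustrative sketch (one possible approach, not the official solution)\n", | ||
"from nltk import word_tokenize\n", | ||
"\n", | ||
"def get_text_diversity_sketch(text):\n", | ||
"    \"\"\"Return the percentage of unique tokens in the text (a value between 0 and 100).\"\"\"\n", | ||
"    tokens = word_tokenize(text)\n", | ||
"    if not tokens:\n", | ||
"        return 0\n", | ||
"    # token case is kept as-is here; lowercasing first is another valid choice\n", | ||
"    return len(set(tokens)) / len(tokens) * 100\n", | ||
"\n", | ||
"# Applying it to the articles (assumes wiki_articles from Exercise 1):\n", | ||
"# diversities = [(title, get_text_diversity_sketch(text)) for title, text in wiki_articles]\n", | ||
"# ranked = sorted(diversities, key=lambda pair: pair[1], reverse=True)\n", | ||
"# print(\"Top-5:\", ranked[:5])\n", | ||
"# print(\"Bottom-5:\", ranked[-5:])" | ||
] | ||
}, | ||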
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"### Exercise 3\n", | ||
"\n", | ||
"#### a.) Define a function called `lemmatize_text`\n", | ||
"\n", | ||
"* Write a helper-function called `lemmatize_text`, this should receive a **text** as input (positional parameter) and return it lowercased and lemmatized (i.e. a list of lemmas). The second positional parameter should be **lemmatizer** which is the initialized lemmatizer from NLTK. Finally, the function also has a keyword parameter **only_content_words** with a default value set to False.\n", | ||
"\n", | ||
"* Remember that to lemmatize you need to know the POS tag of the token. We already provided with the list of tags to distinguish Nouns, Verbs and Adjectives. Your function should lemmatize appropriately for those words. For simplicity, if the POS tag is not from those three categories, here you can assume the lemma is the same as the original token\n", | ||
"\n", | ||
"* Please make use of the nltk methods to achieve this (tokenize, obtain Part-of-speech tags and lemmatize). See Chapter 15.\n", | ||
"\n", | ||
"* If **only_content_words = True** then return the list of lemmatized tokens that correspond only to Nouns, Verbs, and Adjectives (ignore any other token) \n", | ||
"\n", | ||
"* Test your function in the notebook using the first article from the provided `test_articles` list.\n", | ||
"\n", | ||
"\n", | ||
"#### b.) Define a function called `build_global_vocabulary`\n", | ||
"\n", | ||
"* The main function has one positional parameter **wiki_articles** (list of strings), each string in the list is a wikipedia article. The function will determine how often each word lemma occurs in all articles by keeping a dictionary with the lemmas as keys and the frequency as values. To achive this, you will have to lemmatize and lowercase each token before adding it to the vocabulary counts. Make use of the `lemmatize_text` function you just wrote and call it with **only_content_words=True** so you get only the interesting words into your vocabulary \n", | ||
"\n", | ||
"* Call the function `lemmatize_text` inside the `build_global_vocabulary` function in order to count the lowercased lemmas only.\n", | ||
"\n", | ||
"* Remember how we used dictionaries to count words? If not, have a look at Chapter 10 - Dictionaries. \n", | ||
"\n", | ||
"* Test your function in the notebook with the `test_articles` list containing 2 fake short articles.\n", | ||
"\n", | ||
"#### c.) Create a python script \n", | ||
"\n", | ||
"* Create a main script called `resit_main.py`. Place the function definition of the **build_global_vocabulary** function in `resit_main.py`.\n", | ||
"\n", | ||
"* Place your helper function definition, i.e., **lemmatize_text**, in a separate auxiliary script called `resit_utils.py`. \n", | ||
"\n", | ||
"* Import your helper function **lemmatize_text** into `resit_main.py` and also use the **test_articles** list there to confirm that everything is still working as expected by calling the script `resit_main.py` from the terminal (remember to add a print statement in the main script so the result appeard in the terminal). \n", | ||
"\n", | ||
"* Careful! The function must do two things: return the vocabulary counts (so they can be re-used later) and also print it in the terminal. \n", | ||
"\n", | ||
"* To confirm everything is working, basically the same result has to appear in the terminal as it appeared in the notebook when solving exercise 3a. and 3b.\n", | ||
"\n", | ||
"**Please submit these scripts together with the other notebooks**.\n", | ||
"\n", | ||
"Don't forget to add docstrings to your functions. " | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# This is given as help already:\n", | ||
"verb_tags = [\"VBD\", \"VBG\", \"VBN\", \"VBP\", \"VBZ\"]\n", | ||
"noun_tags = [\"NN\", \"NNP\", \"NNS\"]\n", | ||
"adj_tags = [\"JJ\", \"JJR\", \"JJS\"]\n", | ||
"\n", | ||
"test_articles = [\"Fake Article: this is meant to imitate a real wikipedia article, nothing else. The fake article was written in 2022, which is actually not that long ago! End of Fake Article.\", \n", | ||
" \"Short Article: this short article is also fake, but it is shorter than the other one\"\n", | ||
" ]\n", | ||
"\n", | ||
"import nltk\n", | ||
"import string\n", | ||
"\n", | ||
"# 3a\n", | ||
"def lemmatize_text():\n", | ||
" # Modify the function definition so it receives the requested parameters\n", | ||
" # The rest of your code goes here (you can erase the pass)\n", | ||
" pass\n", | ||
"\n", | ||
"# 3b\n", | ||
"def build_global_vocabulary():\n", | ||
" # Modify the function definition so it receives the requested parameters\n", | ||
" # The rest of your code goes here (you can erase the pass)\n", | ||
" pass\n", | ||
" \n", | ||
"\n", | ||
"# Test only lemmatize_text()\n", | ||
"lmtzr = nltk.stem.wordnet.WordNetLemmatizer()\n", | ||
"text = test_articles[0]\n", | ||
"content_lemmas = lemmatize_text(text, lmtzr)\n", | ||
"print(content_lemmas)\n", | ||
"assert content_lemmas == ['fake', 'article', 'be', 'mean', 'real', 'wikipedia', 'article', 'nothing', 'fake', 'article', 'be', 'write', 'be', 'end', 'fake', 'article']\n", | ||
"\n", | ||
"# Test build global_vocab()\n", | ||
"vocab = build_global_vocabulary(test_articles)\n", | ||
"assert vocab['fake'] == 4\n", | ||
"\n", | ||
"\n", | ||
"# 3 c\n", | ||
"## MOVE EVERYTHING TO AN EXTERNAL SCRIPT" | ||
] | ||
}, | ||
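{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"For reference, here is a hedged sketch of the two functions before they are moved into the scripts. It relies on the `verb_tags`, `noun_tags` and `adj_tags` lists given in the cell above; the `_sketch` suffixes only avoid clashing with your own definitions, and the exact output may differ slightly depending on your NLTK version and tagger." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Illustrative sketches for 3a and 3b (one possible approach, not the official solution)\n", | ||
"import nltk\n", | ||
"from nltk import word_tokenize, pos_tag\n", | ||
"from nltk.corpus import wordnet\n", | ||
"\n", | ||
"def lemmatize_text_sketch(text, lemmatizer, only_content_words=False):\n", | ||
"    \"\"\"Lowercase and lemmatize a text; optionally keep only nouns, verbs and adjectives.\"\"\"\n", | ||
"    lemmas = []\n", | ||
"    for token, tag in pos_tag(word_tokenize(text)):\n", | ||
"        token = token.lower()\n", | ||
"        if tag in noun_tags:\n", | ||
"            lemmas.append(lemmatizer.lemmatize(token, wordnet.NOUN))\n", | ||
"        elif tag in verb_tags:\n", | ||
"            lemmas.append(lemmatizer.lemmatize(token, wordnet.VERB))\n", | ||
"        elif tag in adj_tags:\n", | ||
"            lemmas.append(lemmatizer.lemmatize(token, wordnet.ADJ))\n", | ||
"        elif not only_content_words:\n", | ||
"            lemmas.append(token)  # for other POS tags the lemma is the token itself\n", | ||
"    return lemmas\n", | ||
"\n", | ||
"def build_global_vocabulary_sketch(wiki_articles):\n", | ||
"    \"\"\"Count how often each lowercased content lemma occurs across all articles.\"\"\"\n", | ||
"    vocabulary = {}\n", | ||
"    lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()\n", | ||
"    for article in wiki_articles:\n", | ||
"        for lemma in lemmatize_text_sketch(article, lemmatizer, only_content_words=True):\n", | ||
"            vocabulary[lemma] = vocabulary.get(lemma, 0) + 1\n", | ||
"    print(vocabulary)\n", | ||
"    return vocabulary\n", | ||
"\n", | ||
"# Quick check on the fake test articles defined in the cell above\n", | ||
"print(lemmatize_text_sketch(test_articles[0], nltk.stem.wordnet.WordNetLemmatizer(), only_content_words=True))" | ||
] | ||
}, | ||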
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"### Exercise 4 \n", | ||
"\n", | ||
"In this exercise, we want to see what content words are used most frequently in the wikipedia corpus.\n", | ||
"\n", | ||
"Move the function **match_titles_texts** to the `resit_utils.py` script.\n", | ||
"\n", | ||
"Now we come back to the notebook. We will import here the function **build_global_vocabulary** from the main script and **match_titles_texts** from the `resit_utils.py` script. In the notebook code cell, do the following: \n", | ||
"* Open and read the initial JSON file using the imported **match_titles_texts** function\n", | ||
"* Obtain a list that contains only the article texts (without the titles)\n", | ||
"* Call the build_global_vocabulary function with the 1000 articles as input\n", | ||
"* Get the Top-100 lemmas found in all the wikipedia articles\n", | ||
"* Save them into the file **top_100_lemmas.json**, where the structure of the JSON is:\n", | ||
"{\"top_100\": [('lemma_1', freq_lemma_1), ('lemma_2', freq_lemma_2), ..., ('lemma_100', freq_lemma_100)]}" | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Your imports go here\n", | ||
"\n", | ||
"# Your code goes here" | ||
] | ||
}, | ||
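{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"One possible outline for this cell is sketched below. It assumes that `resit_main.py` and `resit_utils.py` already exist next to the notebook and define `build_global_vocabulary` and `match_titles_texts` as described in Exercise 3c and above; adjust the names and paths if yours differ." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Illustrative outline (assumes resit_main.py and resit_utils.py exist as described above)\n", | ||
"import json\n", | ||
"from resit_main import build_global_vocabulary   # tip: guard the test code in resit_main.py with if __name__ == \"__main__\":\n", | ||
"from resit_utils import match_titles_texts\n", | ||
"\n", | ||
"wiki_articles = match_titles_texts(\"WikipediaSimple_1000k.json\", return_sorted=False)\n", | ||
"texts = [text for title, text in wiki_articles]\n", | ||
"\n", | ||
"vocabulary = build_global_vocabulary(texts)\n", | ||
"\n", | ||
"# Sort the (lemma, frequency) pairs by frequency, highest first, and keep the Top-100\n", | ||
"top_100 = sorted(vocabulary.items(), key=lambda item: item[1], reverse=True)[:100]\n", | ||
"\n", | ||
"with open(\"top_100_lemmas.json\", \"w\") as outfile:\n", | ||
"    json.dump({\"top_100\": top_100}, outfile)" | ||
] | ||
}, | ||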
{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"### Exercise 5\n", | ||
"\n", | ||
"**Make a rudimentary search engine**\n", | ||
"\n", | ||
"Finally, write in this notebook a function called **search_titles_with_terms** which has one positional parameter: `wiki_articles` (a list containing all the tuples (title, article)) and a keyword parameter `search_terms` (a list containing strings that represent keywords to match), which has a default value of **[]** (empty list). \n", | ||
"* If the list is empty, avoid iterating the articles and return an empty list right away.\n", | ||
"* To improve the matches of your search function, lemmatize the terms inside the article texts (you can import here again the lemmatize_text() an use it, just beware that it requires a string as input, NOT a list of lemmas! TIP: use the join() python method for this)\n", | ||
"* The function in summary:\n", | ||
" * Iterates the 1000 articles everytime it is called\n", | ||
" * Lemmatizes each one of the search terms\n", | ||
" * Gets the lemmas of each article text\n", | ||
" * Returns the **set of (unique) titles** (without the texts!) of those articles whose text contain at least one of the lemmatized provided terms.\n", | ||
"\n", | ||
"\n" | ||
] | ||
}, | ||
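{ | ||
"cell_type": "markdown", | ||
"metadata": {}, | ||
"source": [ | ||
"The cell right below sketches one possible approach, assuming `lemmatize_text` can be imported from `resit_utils.py`; the `_sketch` suffix keeps it separate from your own answer, which goes in the cell after it." | ||
] | ||
}, | ||
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# Illustrative sketch (one possible approach, not the official solution);\n", | ||
"# assumes lemmatize_text is available in resit_utils.py\n", | ||
"import nltk\n", | ||
"from resit_utils import lemmatize_text\n", | ||
"\n", | ||
"def search_titles_with_terms_sketch(wiki_articles, search_terms=[]):\n", | ||
"    \"\"\"Return the set of titles whose text contains at least one of the lemmatized search terms.\"\"\"\n", | ||
"    if not search_terms:\n", | ||
"        return []  # the exercise asks for an empty list when no terms are given\n", | ||
"    lemmatizer = nltk.stem.wordnet.WordNetLemmatizer()\n", | ||
"    # lemmatize_text expects a single string, so join the search terms first\n", | ||
"    term_lemmas = lemmatize_text(\" \".join(search_terms), lemmatizer)\n", | ||
"    matching_titles = set()\n", | ||
"    for title, text in wiki_articles:\n", | ||
"        text_lemmas = lemmatize_text(text, lemmatizer)\n", | ||
"        if any(term in text_lemmas for term in term_lemmas):\n", | ||
"            matching_titles.add(title)\n", | ||
"    return matching_titles" | ||
] | ||
}, | ||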
{ | ||
"cell_type": "code", | ||
"execution_count": null, | ||
"metadata": {}, | ||
"outputs": [], | ||
"source": [ | ||
"# your imports go here\n", | ||
"\n", | ||
"\n", | ||
"def search_titles_with_terms():\n", | ||
" # Modify the function definition so it receives the requested parameters\n", | ||
" # The rest of your code goes here (you can erase the pass)\n", | ||
" pass\n", | ||
"\n", | ||
"\n", | ||
"\n", | ||
"my_titles = search_titles_with_terms(wiki_articles, terms=[\"movies\"])\n", | ||
"print(my_titles)\n", | ||
"assert len(my_titles) == 102\n", | ||
"assert 'The Chronicles of Narnia: Prince Caspian' in my_titles\n", | ||
"print('Success!')" | ||
] | ||
} | ||
], | ||
"metadata": { | ||
"kernelspec": { | ||
"display_name": "Python 3", | ||
"language": "python", | ||
"name": "python3" | ||
}, | ||
"language_info": { | ||
"codemirror_mode": { | ||
"name": "ipython", | ||
"version": 3 | ||
}, | ||
"file_extension": ".py", | ||
"mimetype": "text/x-python", | ||
"name": "python", | ||
"nbconvert_exporter": "python", | ||
"pygments_lexer": "ipython3", | ||
"version": "3.8.12" | ||
}, | ||
"vscode": { | ||
"interpreter": { | ||
"hash": "1f37b899a14b1e53256e3dbe85dea3859019f1cb8d1c44a9c4840877cfd0e7ef" | ||
} | ||
} | ||
}, | ||
"nbformat": 4, | ||
"nbformat_minor": 4 | ||
} |