Tutorial: Automated Web-scraping with AWS Free Tier


Learn how to move your scraping job into the cloud.

[Image: Clouds with Sun Breaking Through. Photo by Daniel Páscoa on Unsplash]

Project Objective

The purpose of this project is to showcase an automated web-scraping example and to document all the set-up steps in this blog post. This should enable others to learn about automation with AWS and to replicate the process for their own tasks.

Methods Used

  • Web-scraping
  • Virtual Machines

Technologies

  • Python: Pandas, BeautifulSoup, requests
  • Jupyter Notebook (for exploration)
  • PyCharm (for production)
  • AWS (S3 for storage, EC2 for cloud computing)

Project Description

I'm interested in trying out NLP tasks such as Named Entity Recognition, Topic Modelling or Sentiment Analysis on German-language data, since German is my native language and most of the well-documented use cases focus on English. I therefore decided to scrape the news website of the Swiss national TV and radio broadcaster SRF daily to put together a unique and interesting German text data set. The web-scraping script can be found here.
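To give a feel for the scraping step, here is a minimal sketch using requests, BeautifulSoup and pandas. The URL, CSS selector and column names are illustrative assumptions, not the exact ones used in src/main.py.

```python
# Minimal scraping sketch. The URL and the CSS selector are assumptions for
# illustration only; the project's actual logic lives in src/main.py.
from datetime import date

import pandas as pd
import requests
from bs4 import BeautifulSoup

URL = "https://www.srf.ch/news"  # assumed entry point

response = requests.get(URL, timeout=30)
response.raise_for_status()
soup = BeautifulSoup(response.text, "html.parser")

# Hypothetical selector: collect the headline text from teaser elements.
snippets = [
    {"date": date.today().isoformat(), "text": tag.get_text(strip=True)}
    for tag in soup.select("span.teaser__title")
]

# Save the day's snippets to a dated CSV file, mirroring the naming in data/raw.
df = pd.DataFrame(snippets)
df.to_csv(f"{date.today().isoformat()}_srf_news_snippets.csv", index=False)
```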

When I started looking into resources on web-scraping automation, I did not find anything that really matched my needs and interests. I therefore decided to invest a bit more in documenting this project and to write up an article, so that others can use my insights for their own automation projects. The topics covered range from developing and refactoring a web-scraping script to the basic AWS setup, saving data to an S3 bucket, launching an EC2 instance and scheduling a task. If you're interested, check out the blog post.
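As a taste of the storage step, here is a minimal sketch of uploading the daily CSV to S3 with boto3. The bucket name and key layout are placeholders, and the snippet assumes credentials are available via the EC2 instance's IAM role or the usual local AWS configuration; the full version is in src/main.py. On the EC2 instance, the script can then be scheduled with a daily cron job, as covered in the blog post.

```python
# Minimal S3 upload sketch. Bucket name and key layout are placeholders;
# see src/main.py for the version actually used in this project.
import io
from datetime import date

import boto3
import pandas as pd

BUCKET = "my-webscraping-bucket"  # placeholder bucket name
KEY = f"raw/{date.today().isoformat()}_srf_news_snippets.csv"


def upload_dataframe(df: pd.DataFrame) -> None:
    """Serialize the scraped snippets to CSV in memory and write them to S3."""
    buffer = io.StringIO()
    df.to_csv(buffer, index=False)
    s3 = boto3.client("s3")  # picks up the instance role or local credentials
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=buffer.getvalue())


if __name__ == "__main__":
    example = pd.DataFrame(
        {"date": [date.today().isoformat()], "text": ["example snippet"]}
    )
    upload_dataframe(example)
```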

Folder Structure

Here's how I organized this project:

├── data
│   ├── processed
│   └── raw
│       └── 2022-06-01_srf_news_snippets.csv    <-- example data file
├── notebooks
│   ├── 220601_nb1_aws_webscraping_automation.ipynb    <-- developing the web-scraping script
│   └── 220605_nb2_srf_headlines_analysis.ipynb        <-- first peek at the new dataset
├── references
├── reports
│   └── img
├── src
│   ├── main.py          <-- scraping script to run in the cloud
│   └── main_local.py    <-- scraping script to run locally
├── .gitignore
├── LICENSE
├── README.md
└── requirements.txt     <-- list of all the requirements for this project

Featured Materials

Questions?

If you have any questions, you can get in touch with me via LinkedIn.
