Krawler: A Multithreaded Web Crawler in Python


An implementation of a simple web crawler in Python. The crawler is fully multithreaded and crawls the pages under a given domain name.
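A common design for such a crawler is a pool of worker threads draining a shared frontier queue of URLs. The sketch below illustrates that pattern; the names (`crawl`, `frontier`, `worker`) are illustrative and not taken from the Krawler source, and the fetch-and-parse step is stubbed out.

```python
# Minimal sketch of a worker-pool crawler: a shared queue of URLs,
# a lock-guarded "seen" set, and N threads draining the queue.
# Illustrative only -- not the actual Krawler implementation.
import threading
import queue
from urllib.parse import urlparse

def crawl(seed_urls, num_threads=4):
    frontier = queue.Queue()      # shared work queue of URLs to visit
    seen = set()                  # URLs already enqueued
    seen_lock = threading.Lock()  # guards the seen set
    results = []                  # list.append is atomic in CPython

    for url in seed_urls:
        with seen_lock:
            seen.add(url)
        frontier.put(url)

    def worker():
        while True:
            try:
                url = frontier.get(timeout=1)
            except queue.Empty:
                return  # queue drained, worker exits
            # A real crawler would fetch the page here, extract links,
            # and enqueue unseen ones; this sketch just records the host.
            results.append(urlparse(url).netloc)
            frontier.task_done()

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    frontier.join()   # block until every enqueued URL is processed
    for t in threads:
        t.join()
    return results
```

The `Queue` handles its own locking, so only the `seen` set needs an explicit lock when workers start enqueueing newly discovered links.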

Installing Poetry

To get started, you need Poetry installed. You can install it by running the following command in your shell.

pip install poetry

Once the installation finishes, run the following command from the root folder of this repository to install the dependencies and create a virtual environment for the project.

poetry install

After that, enter the Poetry environment by invoking the poetry shell command.

poetry shell

Installing System Dependencies

If you are using a Debian-based system, you can install the system-wide dependencies (Beautiful Soup for HTML parsing, plus the libnss-resolve and nscd name-resolution caches) by running the following command.

sudo apt-get install python3-bs4 libnss-resolve nscd

Running the Crawler

To run the crawler, use the following command from the repository root (pushd and popd switch into src and back when the crawl finishes).

pushd src && python3 main.py --domain <domain_name> --threads <number_of_threads> --output <output_file> && popd
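The command above suggests that main.py accepts --domain, --threads, and --output flags. A typical way to parse them is with argparse; the sketch below shows one plausible setup. The defaults and help strings are assumptions, not taken from the Krawler source.

```python
# Hypothetical argument parsing for a crawler CLI with the flags shown
# above; defaults are illustrative, not Krawler's actual values.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Crawl a single domain.")
    parser.add_argument("--domain", required=True,
                        help="domain name to crawl")
    parser.add_argument("--threads", type=int, default=4,
                        help="number of worker threads")
    parser.add_argument("--output", default="out.txt",
                        help="file to write crawled URLs to")
    return parser

# Example invocation with hypothetical values:
args = build_parser().parse_args(
    ["--domain", "example.com", "--threads", "8", "--output", "links.txt"]
)
```

Supplying the flags as a list, as in the last line, is also a convenient way to exercise the parser in tests without touching `sys.argv`.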

License

This project is licensed under the MIT License - see the LICENSE file for details.
