A small Python project that crawls a given website and collects either a limited number of links (about 100) or every link it can find; see below for details.

Clone the repo and cd into the project directory, then run python main.py (or main_all.py) and enter a valid URL. The spider will start crawling and save the collected links in the same directory as the main file.

main.py stops after roughly 100 links; use main_all.py to collect all the links it can reach.
The spider can be used to build a site map of a website, saving the collected links to a text file.
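For a rough idea of how such a spider works, here is a minimal sketch of a breadth-first link collector. This is not the repo's actual code: the use of requests and BeautifulSoup, the LINK_LIMIT constant, and the links.txt output name are all assumptions made for illustration.

```python
# Minimal sketch of a link-collecting spider (not the repo's actual code).
# Assumes the `requests` and `beautifulsoup4` packages are installed.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

LINK_LIMIT = 100  # assumed cap, similar to main.py's limited crawl


def crawl(start_url: str, limit: int = LINK_LIMIT) -> set[str]:
    """Breadth-first crawl from start_url, collecting up to `limit` links."""
    seen = set()
    queue = deque([start_url])
    domain = urlparse(start_url).netloc

    while queue and len(seen) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue  # skip pages that fail to load
        soup = BeautifulSoup(response.text, "html.parser")
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            # stay on the same site and avoid revisiting pages
            if urlparse(link).netloc == domain and link not in seen:
                queue.append(link)
    return seen


if __name__ == "__main__":
    start = input("Enter a valid URL: ").strip()
    links = crawl(start)
    # save the collected links next to this script (links.txt is an assumed name)
    with open("links.txt", "w") as f:
        f.write("\n".join(sorted(links)))
    print(f"Saved {len(links)} links to links.txt")
```

Removing the limit check (as main_all.py presumably does) makes the crawl continue until the queue of same-site links is exhausted.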
Screenshots are provided in the repo.