Description

This asynchronous web crawler is designed for reconnaissance tasks. It crawls a specified URL up to a defined depth, extracting useful information such as:

  • Email addresses
  • Internal and external links
  • JavaScript files
  • Images
  • Document URLs (e.g., PDF, DOC, XLS)
  • Comments and potential sensitive data
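The README does not show how spider.py performs these extractions, but the idea behind the list above can be sketched as follows. This is a hypothetical illustration (the regex, class, and sample HTML are not from spider.py itself): it pulls email addresses, links, and document URLs out of a static HTML snippet.

```python
import re
from html.parser import HTMLParser

# Hypothetical sketch, not spider.py's actual code: demonstrates the kinds
# of extraction the description lists, applied to a static HTML snippet.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = """
<a href="https://example.com/about">About</a>
<a href="/docs/report.pdf">Report</a>
Contact: admin@example.com
"""

emails = EMAIL_RE.findall(html)

parser = LinkExtractor()
parser.feed(html)

# Document URLs are just links whose path ends in a document extension.
docs = [link for link in parser.links
        if link.lower().endswith((".pdf", ".doc", ".xls"))]

print(emails)        # -> ['admin@example.com']
print(parser.links)  # -> ['https://example.com/about', '/docs/report.pdf']
print(docs)          # -> ['/docs/report.pdf']
```

A real crawler would fetch pages over the network and repeat this per page up to the configured depth; the parsing step itself stays the same.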

Usage

You can run this tool with the following command:

python spider.py -u https://example.com -d [depth]