Web Crawler

This is a Node.js web crawler designed to extract information from websites.

Getting Started

  1. Copy the Files

    • Crawl.js: Main script for web crawling and filtering.
    • json.js: A versatile script for creating and updating data.json files, suitable for use in multiple projects.
  2. Integration

    • Include the following line in your main script:
      require('./path-to/Crawl.js');
      Adjust the path to match your project structure (a minimal sketch follows this list).
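
A minimal sketch of an entry point, assuming Crawl.js exports the crawl function documented below; this export is an assumption, so if Crawl.js instead starts crawling as a side effect of being required, the bare require above is all you need:

  // main.js: a hypothetical entry point, for illustration only.
  // Assumes Crawl.js exports crawl via module.exports; verify against your copy.
  const { crawl } = require('./path-to/Crawl.js');

  // Start at a placeholder URL, follow at most 10 links per page, stop after 5 minutes.
  crawl('https://example.com', 10, 5)
    .then(() => console.log('Crawling finished'))
    .catch((err) => console.error('Crawl failed:', err));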

How to Use crawl()

async function crawl(url, maxDepth, maxTime)
  • url: The starting URL for the crawl.
  • maxDepth: The maximum number of links followed from a single page.
  • maxTime: The time limit, in minutes, for the crawling process (a call example follows this list).
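
For example, assuming crawl is in scope (see the integration sketch above), the call below starts from a placeholder URL, follows at most 10 links per page, and stops after 5 minutes:

  // The URL and both limits are placeholder values; adjust to your needs.
  crawl('https://example.com', 10, 5).then(() => console.log('Done'));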

Configuration

Excluded Websites

  • Check the Crawl.js file for the excludedWebsites variable, defined as:
    const excludedWebsites = require('./excluded-websites.json').websites;
  • Modify ./excluded-websites.json to list the websites that should be skipped during crawling (an example file follows).
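
A sketch of what ./excluded-websites.json might contain: the top-level websites array is implied by the .websites property access above, but whether entries should be full URLs or bare domains depends on how Crawl.js matches them, so treat the entry format as an assumption:

  {
    "websites": [
      "https://example.com",
      "https://example.org"
    ]
  }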
