bug: Detection of alive hosts #14

Closed

psyray opened this issue Apr 21, 2024 · 0 comments · Fixed by #96
Labels
bug Something isn't working

Comments

psyray (Contributor) commented Apr 21, 2024

Is there an existing issue for this?

  • I have searched the existing issues

Current Behavior

According to this issue #7 and some investigation on my side, I think we have a problem with the detection of alive hosts.

Let me explain.

This piece of code is used to check whether an endpoint is alive:

# If is_alive is True, select only endpoints that are alive
if is_alive:
    endpoints = [e for e in endpoints if e.is_alive]

The main problem with this check is that it is used as the base check for launching scans such as:

  • dir_file_fuzz
    # Grab URLs to fuzz
    urls = get_http_urls(
        is_alive=True,
        ignore_files=False,
        write_filepath=input_path,
        get_only_default_urls=True,
        ctx=ctx
    )
    logger.warning(urls)
  • fetch_url
    urls = get_http_urls(
        is_alive=enable_http_crawl,
        write_filepath=input_path,
        exclude_subdomains=exclude_subdomains,
        get_only_default_urls=True,
        ctx=ctx
    )
  • vulnerability_scan

So the get_http_urls method is a mandatory step in launching scans of the above types.

The main problem comes from the is_alive method of the Endpoint class in the startScan models:

def is_alive(self):
    return self.http_status and (0 < self.http_status < 500) and self.http_status != 404
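
To make the consequence concrete, here is a toy illustration (the Endpoint class and sample data below are made up for this example; only the boolean expression is the one from the model). A host that clearly responds, but with 404 or 503, is treated as dead:

# Toy illustration, not reNgine code: endpoints answering 404 or >= 500
# are dropped by the is_alive filter even though the host responded.
class Endpoint:
    def __init__(self, http_url, http_status):
        self.http_url = http_url
        self.http_status = http_status

    @property
    def is_alive(self):
        return bool(self.http_status) and (0 < self.http_status < 500) and self.http_status != 404

endpoints = [
    Endpoint('https://app.example.com/', 404),  # responds, but filtered out
    Endpoint('https://api.example.com/', 503),  # responds, but filtered out
]
print([e.http_url for e in endpoints if e.is_alive])  # [] -> nothing to scan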

As you can see, when all of the following conditions hold:

  • a URL returns 404 or an HTTP status code of 500 or above
  • the get_only_default_urls option is set to true
  • no default URL has been set (default URLs are set only in the target scan, not in the subdomain scan)
    endpoint, _ = save_endpoint(
        http_url,
        ctx=ctx,
        crawl=enable_http_crawl,
        is_default=True,
        subdomain=subdomain
    )

then no base URL is returned, so no scan is launched.

This is problematic because dir_file_fuzz should still be launched even if the base endpoint returns 404, and the same applies to fetch_url and vulnerability_scan.

So we need to rework this part so that the base URL is always sent to these tools, and so that the default URL is correctly set for the root endpoint.
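
One possible direction, as a rough sketch only (the helper name get_fuzz_targets and the subdomain.name attribute are my assumptions; get_http_urls and ctx are the ones from the snippets above): fall back to the subdomain's base URL when the alive filter leaves nothing.

# Rough sketch of a fallback, not the actual fix; names partly assumed.
def get_fuzz_targets(ctx, subdomain):
    urls = get_http_urls(
        is_alive=True,
        get_only_default_urls=True,
        ctx=ctx
    )
    if not urls:
        # The host did answer (it has an HTTP status), so still hand the
        # base URL to dir_file_fuzz / fetch_url / vulnerability_scan.
        urls = [f'https://{subdomain.name}/']
    return urls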

Expected Behavior

As soon as we have a subdomain that has an IP and returns some HTTP response, we must run (a sketch of this gate follows the list):

  • dir_file_fuzz
  • vulnerability_scan
  • fetch_url
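
A sketch of that gate with plain values instead of the real model objects (every name here is an assumption for illustration):

# Sketch of the expected gate; plain values, names assumed.
def should_scan(subdomain_ip, endpoint_statuses):
    # Run dir_file_fuzz / vulnerability_scan / fetch_url as soon as the
    # subdomain resolves and anything answered over HTTP, whatever the code.
    return bool(subdomain_ip) and any(endpoint_statuses)

print(should_scan('203.0.113.7', [404]))  # True: a 404 is still a response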

Steps To Reproduce

Launch a scan on a website whose base URL responds with an HTTP status code of 404, or 500 and above.
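
For a quick local reproduction (my own helper, not part of the report), a tiny server that answers 404 on every path makes a suitable scan target:

# Minimal target returning 404 everywhere, to simulate the broken base URL.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotFoundHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(404)
        self.end_headers()
        self.wfile.write(b'nothing here')

HTTPServer(('0.0.0.0', 8080), NotFoundHandler).serve_forever()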

Environment

- reNgine: 2.0.2
- OS: debian
- Python: 3.10

Anything else?

No response
