
# GPU versus API

## Description

This project aims to compare the cost of buying a GPU versus the cost of using an API for Large Language Model (LLM) inference.

At the end of the day, this is a very crude comparison, but it's a start, and it was immensely helpful for me in understanding what API-based LLM inference would actually cost.
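The comparison boils down to amortizing the GPU's purchase price against per-token API pricing. A minimal sketch of that break-even math in JavaScript, where every number is an illustrative assumption rather than a real price, and the variable names are hypothetical (not taken from this project's code):

```javascript
// Illustrative break-even sketch: all prices/throughputs are assumptions.
const gpuCost = 2000;            // one-time GPU purchase (USD)
const powerCostPerHour = 0.05;   // electricity while running (USD/h)
const tokensPerSecond = 50;      // assumed local inference throughput
const apiCostPerMTokens = 1.0;   // assumed API price per million tokens (USD)

// Tokens the GPU could produce in a month of continuous use.
const hoursPerMonth = 730;
const tokensPerMonth = tokensPerSecond * 3600 * hoursPerMonth;

// Monthly cost of each option for that token volume.
const gpuMonthly = powerCostPerHour * hoursPerMonth; // electricity only
const apiMonthly = (tokensPerMonth / 1e6) * apiCostPerMTokens;

// Months until the GPU's purchase price is recovered by the savings.
const breakEvenMonths = gpuCost / (apiMonthly - gpuMonthly);

console.log(`GPU monthly:  $${gpuMonthly.toFixed(2)}`);
console.log(`API monthly:  $${apiMonthly.toFixed(2)}`);
console.log(`Break-even:   ${breakEvenMonths.toFixed(1)} months`);
```

With these made-up numbers the GPU pays for itself in roughly 21 months; change the throughput or API price and the answer swings wildly, which is exactly why a calculator like this is useful.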

If you like it, AWESOME... if not, sorry... I will try harder next time. :D

## Demo

You can try it here

## Screenshots

Screenshot 1

Screenshot 2

Screenshot 3

Screenshot 4

## Installation

  1. Install http-server globally, so the `http-server` command is available on your PATH:

    npm install -g http-server

## Usage

  1. Run the following command to start the HTTP server:

    ./run.sh
  2. Open the web page in your default browser by visiting:

    http://localhost:8080

## Notes

  • Make sure to have Node.js installed on your machine before running the script.
  • Report / PDF download not working yet... my bad.