
Prepare for API Summit #8

Merged
merged 6 commits on Mar 6, 2024
8 changes: 6 additions & 2 deletions .devcontainer/devcontainer.json
@@ -23,6 +23,9 @@
"ghcr.io/devcontainers/features/node:1": {
"version": "lts",
"nvmVersion": "latest"
},
"ghcr.io/devcontainers/features/go:1": {
"version": "latest"
}
},
"customizations": {
@@ -35,7 +38,8 @@
"redhat.vscode-yaml",
"ms-python.black-formatter",
"tamasfe.even-better-toml",
"streetsidesoftware.code-spell-checker"
"streetsidesoftware.code-spell-checker",
"remcohaszing.schemastore"
]
}
},
@@ -47,7 +51,7 @@
// "forwardPorts": [],

// Use 'postCreateCommand' to run commands after the container is created.
"postCreateCommand": "./scripts/post_create.sh",
"postCreateCommand": "./scripts/post-create.sh",

// Configure tool-specific properties.
// "customizations": {},
6 changes: 3 additions & 3 deletions .github/workflows/check_spec.yaml
@@ -22,14 +22,14 @@ jobs:
pip install -r requirements.txt
- name: Check if specs have changed
run: |
./scripts/create_specs.sh
./scripts/check_specs.sh
./scripts/create-specs.sh
./scripts/check-specs.sh

exit_code=$?

echo "Exit code: $exit_code"

if [ $exit_code -ne 0 ]; then
echo "Specs have changed, please run 'create_specs' and commit the changes"
echo "Specs have changed, please run 'create-specs' and commit the changes"
exit 1
fi
2 changes: 2 additions & 0 deletions .github/workflows/publish_sdk.yaml
@@ -2,6 +2,8 @@ name: Publish SDK

on:
push:
branches:
- main
paths:
- 'spec.json'
- 'liblab.config.json'
11 changes: 9 additions & 2 deletions .vscode/launch.json
@@ -5,8 +5,15 @@
"version": "0.2.0",
"configurations": [
{
"name": "Python: FastAPI",
"type": "python",
"name": "Python - Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
},
{
"name": "Debug llama store",
"type": "debugpy",
"request": "launch",
"module": "uvicorn",
"env": {
4 changes: 2 additions & 2 deletions Dockerfile
@@ -6,10 +6,10 @@ WORKDIR /llama_store/

COPY ./llama_store /llama_store/
COPY ./requirements.txt /requirements.txt
COPY ./scripts/recreate_database.sh /scripts/recreate_database.sh
COPY ./scripts/recreate-database.sh /scripts/recreate-database.sh

RUN pip install -r /requirements.txt
RUN chmod +x /scripts/recreate_database.sh && /scripts/recreate_database.sh
RUN chmod +x /scripts/recreate-database.sh && /scripts/recreate-database.sh

EXPOSE 80

107 changes: 56 additions & 51 deletions README.md
@@ -22,11 +22,11 @@ This repo also has a devContainer file, so you can also open it using the dev co

### Prime the database

Before you can run the API, you need to configure the SQLite database used to store the llamas. The database needs to be primed with data for 6 llamas, as well as some pictures. You can create the database using the `recreate_database.sh` script in the [`scripts`](./scripts/) folder:
Before you can run the API, you need to configure the SQLite database used to store the llamas. The database needs to be primed with data for 6 llamas, as well as some pictures. You can create the database using the `recreate-database.sh` script in the [`scripts`](./scripts/) folder:

```bash
cd scripts
./recreate_database.sh
./recreate-database.sh
```

This will create a database called `sql_app.db` in the [`llama_store/.appdata`](/llama_store/.appdata) folder. It will add the following tables to this database:
@@ -57,7 +57,7 @@ You can also run this from the command line using the `uvicorn` command:
uvicorn main:app --reload
```

Either way will launch the API on localhost on port 8000. You can then navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the Swagger UI for the API. You can change the port number by passing the `--port` parameter to `uvicorn`:
Either way will launch the API on localhost on port 8080. You can then navigate to [http://localhost:8080/docs](http://localhost:8080/docs) to see the Swagger UI for the API. You can change the port number by passing the `--port` parameter to `uvicorn`:

```bash
uvicorn main:app --reload --port 80
@@ -88,16 +88,16 @@ docker buildx build --platform=linux/arm64 -t llama-store .
You can then run the container. On x86/x64 platforms run:

```bash
docker run -p 80:8000 llama-store
docker run -p 80:8080 llama-store
```

On ARM64 (such as macOS on Apple Silicon), run the following:

```bash
docker run --platform=linux/arm64 -p 8000:80 llama-store
docker run --platform=linux/arm64 -p 8080:80 llama-store
```

This will run on port 8000. Change the port number if you want to run it on a different port. The Docker container exposes port 80, but this run command maps it to port 8000 on the host to be consistent with the default `uvicorn` command.
This will run on port 8080. Change the port number if you want to run it on a different port. The Docker container exposes port 80, but this run command maps it to port 8080 on the host to be consistent with the default `uvicorn` command.

## API end points

@@ -171,48 +171,7 @@ If you don't have an account - [join our beta](https://liblab.com/join).

You can learn more about how to use liblab from our [developer docs](https://developers.liblab.com).

The liblab CLI uses a [config file called `liblab.config.json`](https://developers.liblab.com/cli/config-file-overview) to configure the SDK. This repo has a config file called [`liblab.config.json`](./liblab.config.json) that you can use to generate the SDK. This config file has the following settings:

```json
{
"sdkName": "llama-store",
"specFilePath": "spec.json",
"languages": [
"python",
"java",
"typescript"
],
"auth": [
"bearer"
],
"createDocs": true,
"customizations": {
"devContainer": true,
"license": {
"type": "MIT"
}
},
"languageOptions": {
"typescript": {
"githubRepoName": "llama-store-sdk-typescript",
"sdkVersion": "0.0.1"
},
"python": {
"pypiPackageName": "LlamaStore",
"githubRepoName": "llama-store-sdk-python",
"sdkVersion": "0.0.1"
},
"java": {
"groupId": "com.liblab",
"githubRepoName": "llama-store-sdk-java",
"sdkVersion": "0.0.1"
}
},
"publishing": {
"githubOrg": "liblaber"
}
}
```
The liblab CLI uses a [config file called `liblab.config.json`](https://developers.liblab.com/cli/config-file-overview) to configure the SDK. This repo has a config file called [`liblab.config.json`](./liblab.config.json) that you can use to generate the SDK.

This config file reads the local `spec.json` file. If you want to generate an SDK from a running API, you can change this to the URL of that API. SDKs will be generated for Java, Python and TypeScript with a name of `llama-store` (adjusted to be language-specific, so `llamaStore` in Java and TypeScript). The SDKs will be configured to use bearer tokens for authentication, and will include documentation. The generated SDKs will also be set up with dev containers for VS Code, so you can open the created SDK folder and get going straight away.

@@ -235,6 +194,8 @@ You can find pre-built SDKs in the following GitHub repos:
| Python | [llama-store-sdk-python](https://github.com/liblaber/llama-store-sdk-python) |
| Java | [llama-store-sdk-java](https://github.com/liblaber/llama-store-sdk-java) |
| TypeScript | [llama-store-sdk-typescript](https://github.com/liblaber/llama-store-sdk-typescript) |
| C# | [llama-store-sdk-csharp](https://github.com/liblaber/llama-store-sdk-csharp) |
| Go | [llama-store-sdk-go](https://github.com/liblaber/llama-store-sdk-go) |

These are generated by a GitHub action, and are updated whenever the spec changes. You can find the `publish_sdk.yaml` action in the [`.github/workflows`](./.github/workflows) folder.

@@ -295,10 +256,10 @@ Next, you need to launch the Llama store:
1. From a terminal, run:

```bash
./scripts/start_llama_store.sh
./scripts/start-llama-store.sh
```

This will reset the llama store database, then launch the API on port 8000.
This will reset the llama store database, then launch the API on port 8080.

Once you have done this, you can run the examples. You will need to create a new terminal to do this.

@@ -327,6 +288,14 @@ To run the TypeScript examples, navigate to the [`sdk-examples/typescript`](./sd

This will create a user, generate an API token, and create a llama.

1. Run the get llamas demo again with the following command:

```bash
npm run get-llamas
```

You will see the llama you created in the previous step in the list of llamas.

### Python

To run the Python examples, navigate to the [`sdk-examples/python`](./sdk-examples/python) folder.
@@ -355,11 +324,47 @@ To run the Python examples, navigate to the [`sdk-examples/python`](./sdk-exampl

This will create a user, generate an API token, and create a llama, uploading a picture.

1. Run the get llamas demo again with the following command:

```bash
python get_llamas.py
```

You will see the llama you created in the previous step in the list of llamas.

### Go

To run the Go examples, you will need to copy the contents of the [`sdk-examples/go`](./sdk-examples/go) folder into the [`output/go/cmd/examples`](./output/go/cmd/examples) folder.

1. Run the get llamas demo with the following command:

```bash
go run get-llamas.go
```

This will create a user, generate an API token, and print out a list of llamas. This demo shows the ability to call services on the SDK, set an API token once, and use that for all subsequent calls.

1. Run the create llamas demo with the following command:

```bash
go run create-llama.go
```

This will create a user, generate an API token, and create a llama, uploading a picture.

1. Run the get llamas demo again with the following command:

```bash
go run get-llamas.go
```

You will see the llama you created in the previous step in the list of llamas.

## OpenAPI spec

The OpenAPI spec for this API is in the [`spec.json`](/spec.json) and [`spec.yaml`](/spec.yaml) files. These need to be generated whenever the spec changes. To do this, run the following command:

```bash
cd scripts
./create_specs.sh
./create-specs.sh
```
20 changes: 20 additions & 0 deletions hooks/csharp/CustomHook.cs
@@ -0,0 +1,20 @@

public class CustomHook : IHook
{
public async Task<HttpRequestMessage> BeforeRequestAsync(HttpRequestMessage request)
{
Console.WriteLine($"Before request on URL {request.RequestUri.AbsoluteUri} with method {request.Method.Method.ToUpper()}");
return request;
}

public async Task<HttpResponseMessage> AfterResponseAsync(HttpResponseMessage response)
{
Console.WriteLine($"After response on URL {response.RequestMessage.RequestUri.AbsoluteUri} with method {response.RequestMessage.Method.Method.ToUpper()}, returning status {response.StatusCode}");
return response;
}

public async Task OnErrorAsync(HttpResponseMessage response)
{
Console.WriteLine($"On error - {response.StatusCode} - {response.ReasonPhrase}");
}
}
3 changes: 3 additions & 0 deletions hooks/go/go.mod
@@ -0,0 +1,3 @@
module github.com/liblaber/sdkhook

go 1.18
26 changes: 26 additions & 0 deletions hooks/go/hooks/custom_hook.go
@@ -0,0 +1,26 @@
package hooks

import (
"fmt"
)

type CustomHook struct{}

func NewCustomHook() Hook {
return &CustomHook{}
}

func (h *CustomHook) BeforeRequest(req Request) Request {
fmt.Printf("Before request on URL %#v with method %#v\n", req.GetBaseUrl(), req.GetMethod())
return req
}

func (h *CustomHook) AfterResponse(req Request, resp Response) Response {
fmt.Printf("After response on URL %#v with method %#v, returning status %d\n", req.GetBaseUrl(), req.GetMethod(), resp.GetStatusCode())
return resp
}

func (h *CustomHook) OnError(req Request, resp ErrorResponse) ErrorResponse {
fmt.Printf("On Error: %#v\n", resp.Error())
return resp
}
44 changes: 44 additions & 0 deletions hooks/go/hooks/hook.go
@@ -0,0 +1,44 @@
package hooks

type Hook interface {
BeforeRequest(req Request) Request
AfterResponse(req Request, resp Response) Response
OnError(req Request, resp ErrorResponse) ErrorResponse
}

type Request interface {
GetMethod() string
SetMethod(method string)
GetBaseUrl() string
SetBaseUrl(baseUrl string)
GetPath() string
SetPath(path string)
GetHeader(header string) string
SetHeader(header string, value string)
GetPathParam(param string) string
SetPathParam(param string, value any)
GetQueryParam(param string) string
SetQueryParam(param string, value any)
GetBody() any
SetBody(body any)
}

type Response interface {
GetStatusCode() int
SetStatusCode(statusCode int)
GetHeader(header string) string
SetHeader(header string, value string)
GetBody() []byte
SetBody(body []byte)
}

type ErrorResponse interface {
Error() string
GetError() error
GetStatusCode() int
SetStatusCode(statusCode int)
GetHeader(header string) string
SetHeader(header string, value string)
GetBody() []byte
SetBody(body []byte)
}
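
The `Hook` interface above can be exercised without the generated SDK. The sketch below is a minimal, self-contained driver: it copies a trimmed version of the interfaces (only the methods `CustomHook` actually calls, omitting `OnError` and the setters) and feeds them in-memory `fakeRequest`/`fakeResponse` values. The fake types are illustrative stand-ins for the SDK's real request and response objects, not part of the repo.

```go
package main

import "fmt"

// Trimmed copies of the hook interfaces above — just enough for the demo.
type Request interface {
	GetMethod() string
	GetBaseUrl() string
}

type Response interface {
	GetStatusCode() int
}

type Hook interface {
	BeforeRequest(req Request) Request
	AfterResponse(req Request, resp Response) Response
}

// CustomHook mirrors hooks/go/hooks/custom_hook.go.
type CustomHook struct{}

func (h *CustomHook) BeforeRequest(req Request) Request {
	fmt.Printf("Before request on URL %#v with method %#v\n", req.GetBaseUrl(), req.GetMethod())
	return req
}

func (h *CustomHook) AfterResponse(req Request, resp Response) Response {
	fmt.Printf("After response on URL %#v with method %#v, returning status %d\n",
		req.GetBaseUrl(), req.GetMethod(), resp.GetStatusCode())
	return resp
}

// fakeRequest and fakeResponse are hypothetical in-memory stand-ins
// for the SDK's real types, used only to drive the hook.
type fakeRequest struct{ method, baseUrl string }

func (r *fakeRequest) GetMethod() string  { return r.method }
func (r *fakeRequest) GetBaseUrl() string { return r.baseUrl }

type fakeResponse struct{ status int }

func (r *fakeResponse) GetStatusCode() int { return r.status }

func main() {
	var h Hook = &CustomHook{}
	req := h.BeforeRequest(&fakeRequest{method: "GET", baseUrl: "http://localhost:8080"})
	h.AfterResponse(req, &fakeResponse{status: 200})
	// Prints:
	// Before request on URL "http://localhost:8080" with method "GET"
	// After response on URL "http://localhost:8080" with method "GET", returning status 200
}
```

Both hook methods return their argument unchanged, which is the contract the SDK relies on: a hook can observe or mutate the request/response, but must always hand one back.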