
Add BlockBlobDatabase as TES database option #194

Draft · wants to merge 7 commits into base: main
Conversation

@MattMcL4475 (Collaborator) commented Apr 12, 2023

This implementation stores TesTasks in Azure Storage instead of PostgreSQL, reducing Azure resource count and cost (#226).

Features

  • Cheap - likely the cheapest possible way to store tasks
  • Unlimited scale - supports an unlimited number of TES tasks since the number of blobs in an Azure Storage account is unlimited
  • Simple - uses the existing default storage account, and doesn't require an additional Azure resource (such as PostgreSQL, Cosmos DB, etc.).
  • Reasonable and consistent performance
  • Easy to manually view/edit existing tasks via Azure Portal and Azure Storage tools
  • Easy integration by using existing Azure Storage SDKs

Implementation notes

  • All TesTasks are stored as JSON, each in its own Azure block blob
  • The blob name is prefixed according to whether the task is active (a) or inactive (z), to enable fast query by state via the List Blobs operation:
a/0fb0858a-3166-4a22-85b6-4337df2f53c5.json
z/0fb0858a-3166-4a22-85b6-4337df2f53c5.json
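The naming scheme above can be sketched as a pair of helpers. This is an illustrative Python sketch (the PR itself is C#), and `blob_name`/`parse_blob_name` are hypothetical names, not from the diff:

```python
# Illustrative sketch of the a/z blob-naming scheme described above.
# Helper names are hypothetical, not taken from the PR.

ACTIVE_PREFIX = "a"
INACTIVE_PREFIX = "z"

def blob_name(task_id: str, is_active: bool) -> str:
    """Build the blob name for a TesTask, e.g. 'a/<id>.json'."""
    prefix = ACTIVE_PREFIX if is_active else INACTIVE_PREFIX
    return f"{prefix}/{task_id}.json"

def parse_blob_name(name: str) -> tuple[str, bool]:
    """Recover (task_id, is_active) from a blob name."""
    prefix, rest = name.split("/", 1)
    return rest.removesuffix(".json"), prefix == ACTIVE_PREFIX

# blob_name("0fb0858a-3166-4a22-85b6-4337df2f53c5", True)
# -> "a/0fb0858a-3166-4a22-85b6-4337df2f53c5.json"
```

With this layout, listing all active tasks reduces to a single List Blobs call with prefix `a/`, with no need to download or deserialize any task bodies.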

Current limitations

  • Tasks can only be queried by whether or not they are active, which is sufficient for TES as it stands, with one exception:
  • The TES spec's ListTasks filter by name_prefix is not supported, since that would require downloading every single blob just to read the name. Two alternative implementations are possible: (1) store a separate index blob containing all (name, id) tuples, though create/update/delete operations would contend on the lease that must be held to modify it; or (2) store two blobs per TesTask: the existing task blob, plus an empty marker blob whose name is the task name (with special characters encoded) and the task ID as a suffix. The marker blobs would enable a fast prefix query via the List Blobs operation; the IDs would then be extracted from the matching names and the tasks downloaded by ID, as the current list implementation does.
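The second idea (a per-task marker blob encoding name plus ID) could look roughly like the sketch below. This is hedged illustrative Python with hypothetical names, assuming URL-style percent-encoding for the special characters; note that percent-encoding can break prefix matching at encoding boundaries in edge cases, so a real implementation would need a more careful encoding:

```python
# Sketch of the proposed marker-blob scheme for name_prefix queries.
# All names here are hypothetical, not from the PR.
import urllib.parse

def marker_blob_name(task_name: str, task_id: str) -> str:
    """Name of the empty marker blob: encoded task name, then the ID."""
    return f"names/{urllib.parse.quote(task_name, safe='')}/{task_id}"

def ids_for_name_prefix(listed_blob_names, name_prefix: str):
    """Given marker-blob names (as returned by a List Blobs call),
    return the IDs of tasks whose name starts with name_prefix."""
    wanted = f"names/{urllib.parse.quote(name_prefix, safe='')}"
    return [n.rsplit("/", 1)[1]
            for n in listed_blob_names
            if n.startswith(wanted)]
```

In Azure, `ids_for_name_prefix` would be driven by a List Blobs call using the encoded prefix, so only matching marker names ever cross the wire; the task bodies are then fetched by ID as today.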

@MattMcL4475 MattMcL4475 marked this pull request as draft April 12, 2023 22:42
@MattMcL4475 MattMcL4475 changed the title Add BlockBlobDatabase as deployment option Add BlockBlobDatabase as database option Apr 12, 2023
@MattMcL4475 MattMcL4475 changed the title Add BlockBlobDatabase as database option Add BlockBlobDatabase as TES database option Apr 12, 2023
@MattMcL4475 MattMcL4475 added this to the 4.4.0 milestone Apr 28, 2023
@MattMcL4475 MattMcL4475 self-assigned this May 23, 2023
var task2 = blobClient2.UploadAsync(BinaryData.FromString(json), overwrite: true);

// Retry to reduce likelihood of one blob succeeding and the other failing
await Policy
Contributor:
Hmm... I think this retry policy could lead to data loss. If two requests hit this method at roughly the same time, the first fails and the second succeeds (we want the second one to win), then the first request will enter a retry loop that could overwrite the latest data.

As an alternative, you can consider an optimistic concurrency approach for the update, where you check whether an item has changed since you last read it; Azure Storage supports this via conditional request headers and ETags. For the create scenario, turn overwrite off to avoid any race condition.
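To illustrate the suggested approach, here is a toy in-memory store mimicking blob ETag semantics (If-Match on update, If-None-Match on create). This is a sketch, not the Azure SDK; with Azure.Storage.Blobs the equivalent would be passing ETag-based request conditions to the upload call:

```python
# Toy store illustrating optimistic concurrency via ETags.
# Class and method names are hypothetical.
import itertools

class PreconditionFailed(Exception):
    """Stands in for Azure's 412 Precondition Failed response."""

class EtagStore:
    def __init__(self):
        self._data = {}                 # name -> (etag, value)
        self._etags = itertools.count(1)

    def create(self, name, value):
        # overwrite=False analogue (If-None-Match: *): fail if it exists
        if name in self._data:
            raise PreconditionFailed(name)
        etag = str(next(self._etags))
        self._data[name] = (etag, value)
        return etag

    def read(self, name):
        return self._data[name]         # (etag, value)

    def update(self, name, value, if_match):
        # If-Match analogue: fail if the blob changed since our read
        current_etag, _ = self._data[name]
        if current_etag != if_match:
            raise PreconditionFailed(name)
        etag = str(next(self._etags))
        self._data[name] = (etag, value)
        return etag
```

A writer that loses the race gets `PreconditionFailed` instead of silently clobbering the winner's data; it can then re-read, re-apply its change, and try again, which is exactly the behavior the blind retry loop lacks.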

@MattMcL4475 MattMcL4475 modified the milestones: 4.4.0, 4.5.0 Jun 13, 2023
@BMurri BMurri modified the milestones: 4.5.0, next Mar 1, 2024