diff --git a/billion-scale-image-search/src/main/bash/download_models.sh b/billion-scale-image-search/src/main/bash/download_models.sh
index 6f5b1a854..7efc41553 100755
--- a/billion-scale-image-search/src/main/bash/download_models.sh
+++ b/billion-scale-image-search/src/main/bash/download_models.sh
@@ -9,8 +9,8 @@
 if [ -f "$FILE" ]; then
     echo "$FILE exists."
 else
     echo "$FILE does not exist."
-    echo "Downloading model https://data.vespa.oath.cloud/sample-apps-data/clip_text_transformer.onnx"
+    echo "Downloading model https://data.vespa-cloud.com/sample-apps-data/clip_text_transformer.onnx"
     curl -L -o $DIR/text_transformer.onnx \
-    https://data.vespa.oath.cloud/sample-apps-data/clip_text_transformer.onnx
+    https://data.vespa-cloud.com/sample-apps-data/clip_text_transformer.onnx
 fi
diff --git a/billion-scale-vector-search/README.md b/billion-scale-vector-search/README.md
index 5b1a24948..86db7c7b9 100644
--- a/billion-scale-vector-search/README.md
+++ b/billion-scale-vector-search/README.md
@@ -85,7 +85,7 @@
 It uses the first 10M vectors of the 100M slice sample.
 This sample file is about 1GB (10M vectors):
 <pre>
 $ curl -L -o spacev10m_base.i8bin \
-  https://data.vespa.oath.cloud/sample-apps-data/spacev10m_base.i8bin
+  https://data.vespa-cloud.com/sample-apps-data/spacev10m_base.i8bin
 </pre>
 Generate the feed file for the first 10M vectors from the 100M sample.
@@ -141,7 +141,7 @@
 Download the query vectors and the ground truth for the 10M first vectors:
 <pre>
 $ curl -L -o query.i8bin \
   https://github.com/microsoft/SPTAG/raw/main/datasets/SPACEV1B/query.bin
 $ curl -L -o spacev10m_gt100.i8bin \
-  https://data.vespa.oath.cloud/sample-apps-data/spacev10m_gt100.i8bin
+  https://data.vespa-cloud.com/sample-apps-data/spacev10m_gt100.i8bin
 </pre>
 Note, initially, the routine above used the query file from
 https://comp21storage.blob.core.windows.net/publiccontainer/comp21/spacev1b/query.i8bin
diff --git a/commerce-product-ranking/README.md b/commerce-product-ranking/README.md
index 1609e3a22..1ecba0bdf 100644
--- a/commerce-product-ranking/README.md
+++ b/commerce-product-ranking/README.md
@@ -89,7 +89,7 @@
 $ vespa clone commerce-product-ranking my-app && cd my-app
 </pre>
 Download cross-encoder model:
 <pre>
 $ curl -L -o application/models/title_ranker.onnx \
-  https://data.vespa.oath.cloud/sample-apps-data/title_ranker.onnx
+  https://data.vespa-cloud.com/sample-apps-data/title_ranker.onnx
 </pre>
 See [scripts/export-bi-encoder.py](scripts/export-bi-encoder.py) and
@@ -181,7 +181,7 @@ This run file can then be evaluated using the [trec_eval](https://github.com/usn
 Download a pre-processed query-product relevance judgments in TREC format:
 <pre>
 $ curl -L -o test.qrels \
-  https://data.vespa.oath.cloud/sample-apps-data/test.qrels
+  https://data.vespa-cloud.com/sample-apps-data/test.qrels
 </pre>
 Install `trec_eval` (your mileage may vary):
@@ -237,7 +237,7 @@
 Download a pre-processed feed file with all (1,215,854) products:
 <pre>
 $ curl -L -o product-search-products.jsonl.zstd \
-  https://data.vespa.oath.cloud/sample-apps-data/product-search-products.jsonl.zstd
+  https://data.vespa-cloud.com/sample-apps-data/product-search-products.jsonl.zstd
 </pre>
 This step is resource intensive as the semantic embedding model encodes
diff --git a/commerce-product-ranking/application/services.xml b/commerce-product-ranking/application/services.xml
index 92fc1889e..545b8a306 100644
--- a/commerce-product-ranking/application/services.xml
+++ b/commerce-product-ranking/application/services.xml
@@ -10,12 +10,12 @@
 <pre>
 $ curl -L -o search-as-you-type-index.jsonl \
-  https://data.vespa.oath.cloud/sample-apps-data/search-as-you-type-index.jsonl
+  https://data.vespa-cloud.com/sample-apps-data/search-as-you-type-index.jsonl
 </pre>
 Verify that configuration service (deploy api) is ready:
diff --git a/multilingual-search/services.xml b/multilingual-search/services.xml
index 5972d3555..6d3187dcb 100644
--- a/multilingual-search/services.xml
+++ b/multilingual-search/services.xml
@@ -7,8 +7,8 @@
 <pre>
 $ curl -L -o flickr-8k-clip-embeddings.jsonl.zst \
-  https://data.vespa.oath.cloud/sample-apps-data/flickr-8k-clip-embeddings.jsonl.zst
+  https://data.vespa-cloud.com/sample-apps-data/flickr-8k-clip-embeddings.jsonl.zst
 </pre>
diff --git a/text-image-search/src/python/README.md b/text-image-search/src/python/README.md
index 331660784..7bc30a527 100644
--- a/text-image-search/src/python/README.md
+++ b/text-image-search/src/python/README.md
@@ -51,4 +51,4 @@ Run the app:
 ```
 streamlit run app.py
 ```
-[Animation](https://data.vespa.oath.cloud/sample-apps-data/image_demo.gif)
+[Animation](https://data.vespa-cloud.com/sample-apps-data/image_demo.gif)
diff --git a/text-image-search/src/sh/download_onnx_model.sh b/text-image-search/src/sh/download_onnx_model.sh
index f53b6d2d6..697eac29c 100755
--- a/text-image-search/src/sh/download_onnx_model.sh
+++ b/text-image-search/src/sh/download_onnx_model.sh
@@ -5,6 +5,6 @@
 echo "[INFO] Downloading model into $DIR"
 mkdir -p $DIR
-echo "Downloading https://data.vespa.oath.cloud/onnx_models/clip_transformer.onnx"
+echo "Downloading https://data.vespa-cloud.com/onnx_models/clip_transformer.onnx"
 curl -L -o $DIR/transformer.onnx \
-https://data.vespa.oath.cloud/onnx_models/clip_transformer.onnx
+https://data.vespa-cloud.com/onnx_models/clip_transformer.onnx
diff --git a/text-video-search/README.md b/text-video-search/README.md
index 300511478..c98963e3d 100644
--- a/text-video-search/README.md
+++ b/text-video-search/README.md
@@ -9,7 +9,7 @@
 Build a text-video search from scratch based on CLIP models with Vespa python API.
-[See Animation](https://data.vespa.oath.cloud/sample-apps-data/video_demo.gif)
+[See Animation](https://data.vespa-cloud.com/sample-apps-data/video_demo.gif)
 ## Create the application from scratch in a Jupyter Notebook
diff --git a/text-video-search/src/python/app.py b/text-video-search/src/python/app.py
index 4a3e4cc86..0f23fe4fc 100644
--- a/text-video-search/src/python/app.py
+++ b/text-video-search/src/python/app.py
@@ -32,7 +32,7 @@ def get_video(video_file_name, video_dir):
 def get_predefined_queries():
     return get(
-        "https://data.vespa.oath.cloud/blog/ucf101/predefined_queries.txt"
+        "https://data.vespa-cloud.com/blog/ucf101/predefined_queries.txt"
     ).text.split("\n")[:-1]
diff --git a/use-case-shopping/README.md b/use-case-shopping/README.md
index 249234975..c04685d9f 100644
--- a/use-case-shopping/README.md
+++ b/use-case-shopping/README.md
@@ -90,13 +90,13 @@
 $ vespa test src/test/application/tests/system-test/product-search-test.json
 First, create data feed for products:
 <pre>
-$ curl -L -o meta_sports_20k_sample.json.zst https://data.vespa.oath.cloud/sample-apps-data/meta_sports_20k_sample.json.zst
+$ curl -L -o meta_sports_20k_sample.json.zst https://data.vespa-cloud.com/sample-apps-data/meta_sports_20k_sample.json.zst
 $ zstdcat meta_sports_20k_sample.json.zst | ./convert_meta.py > feed_items.json
 </pre>
 Next, data feed for reviews:
 <pre>
-$ curl -L -o reviews_sports_24k_sample.json.zst https://data.vespa.oath.cloud/sample-apps-data/reviews_sports_24k_sample.json.zst
+$ curl -L -o reviews_sports_24k_sample.json.zst https://data.vespa-cloud.com/sample-apps-data/reviews_sports_24k_sample.json.zst
 $ zstdcat reviews_sports_24k_sample.json.zst | ./convert_reviews.py > feed_reviews.json
 </pre>
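Every hunk in this patch performs the same mechanical substitution: the host `data.vespa.oath.cloud` becomes `data.vespa-cloud.com`, with paths unchanged. A migration like this can be sketched as a small shell script (a hypothetical helper, not part of the patch) that rewrites any remaining old-host references in a tree; here it is demonstrated on a temporary sample file rather than a real checkout:

```shell
set -eu
# Host mapping, mirroring the substitution made throughout this patch.
OLD_HOST='data.vespa.oath.cloud'
NEW_HOST='data.vespa-cloud.com'

# Build a throwaway sample tree so the demo does not touch real files.
workdir=$(mktemp -d)
printf 'curl -L -o m.onnx https://%s/onnx_models/clip_transformer.onnx\n' \
  "$OLD_HOST" > "$workdir/download.sh"

# Rewrite every file that still mentions the old host.
# GNU sed syntax; on BSD/macOS use `sed -i '' ...` instead.
grep -rl "$OLD_HOST" "$workdir" | while read -r f; do
  sed -i "s|$OLD_HOST|$NEW_HOST|g" "$f"
done

cat "$workdir/download.sh"
```

The `|` delimiter in the `sed` expression avoids escaping the slashes in URLs; since `.` is a literal-enough match here (the old hostname contains no other regex metacharacters that would misfire on these files), a plain substitution suffices for a one-off sweep.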