diff --git a/.github/ISSUE_TEMPLATE/bug.yaml b/.github/ISSUE_TEMPLATE/bug.yaml
index 45ba18777610..66797a861090 100644
--- a/.github/ISSUE_TEMPLATE/bug.yaml
+++ b/.github/ISSUE_TEMPLATE/bug.yaml
@@ -8,7 +8,7 @@ body:
       options:
         - label: I searched the existing issues and did not find anything similar.
           required: true
-        - label: I read/searched [the docs](https://opencv.github.io/cvat/docs/)
+        - label: I read/searched [the docs](https://docs.cvat.ai/docs/)
           required: true
   - type: textarea
diff --git a/.github/ISSUE_TEMPLATE/feature_request.yaml b/.github/ISSUE_TEMPLATE/feature_request.yaml
index ab7222ad3123..b06919d2be8c 100644
--- a/.github/ISSUE_TEMPLATE/feature_request.yaml
+++ b/.github/ISSUE_TEMPLATE/feature_request.yaml
@@ -8,7 +8,7 @@ body:
       options:
         - label: I searched the existing issues and did not find anything similar.
           required: true
-        - label: I read/searched [the docs](https://opencv.github.io/cvat/docs/)
+        - label: I read/searched [the docs](https://docs.cvat.ai/docs/)
           required: true
   - type: textarea
     attributes:
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 6346cf3ef9b7..d05c86d19a68 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,7 +1,7 @@
-Read the [Contribution guide](https://opencv.github.io/cvat/docs/contributing/). -->
+Read the [Contribution guide](https://docs.cvat.ai/docs/contributing/). -->
@@ -27,13 +27,13 @@ If you're unsure about any of these, don't hesitate to ask. We're here to help!
 - [ ] I have linked related issues (see [GitHub docs](
   https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword))
 - [ ] I have increased versions of npm packages if it is necessary
-  ([cvat-canvas](https://github.com/opencv/cvat/tree/develop/cvat-canvas#versioning),
-  [cvat-core](https://github.com/opencv/cvat/tree/develop/cvat-core#versioning),
-  [cvat-data](https://github.com/opencv/cvat/tree/develop/cvat-data#versioning) and
-  [cvat-ui](https://github.com/opencv/cvat/tree/develop/cvat-ui#versioning))
+  ([cvat-canvas](https://github.com/cvat-ai/cvat/tree/develop/cvat-canvas#versioning),
+  [cvat-core](https://github.com/cvat-ai/cvat/tree/develop/cvat-core#versioning),
+  [cvat-data](https://github.com/cvat-ai/cvat/tree/develop/cvat-data#versioning) and
+  [cvat-ui](https://github.com/cvat-ai/cvat/tree/develop/cvat-ui#versioning))
 
 ### License
 
 - [ ] I submit _my code changes_ under the same [MIT License](
-  https://github.com/opencv/cvat/blob/develop/LICENSE) that covers the project.
+  https://github.com/cvat-ai/cvat/blob/develop/LICENSE) that covers the project.
   Feel free to contact the maintainers if that's a concern.
diff --git a/CITATION.cff b/CITATION.cff
index 04b9fcf014f5..2934a318342f 100644
--- a/CITATION.cff
+++ b/CITATION.cff
@@ -13,7 +13,7 @@ authors:
 identifiers:
   - type: doi
     value: 10.5281/zenodo.4009388
-repository-code: 'https://github.com/opencv/cvat'
+repository-code: 'https://github.com/cvat-ai/cvat'
 url: 'https://cvat.ai/'
 abstract: >-
   Annotate better with CVAT, the industry-leading
diff --git a/README.md b/README.md
index 8ae791c052e1..60c4800412f1 100644
--- a/README.md
+++ b/README.md
@@ -28,7 +28,7 @@
 or [subscribe](https://www.cvat.ai/pricing/cloud) to get unlimited data,
 organizations, autoannotations, and
 [Roboflow and HuggingFace integration](https://www.cvat.ai/post/integrating-hugging-face-and-roboflow-models).
 Or set CVAT up as a self-hosted solution:
-[Self-hosted Installation Guide](https://opencv.github.io/cvat/docs/administration/basics/installation/).
+[Self-hosted Installation Guide](https://docs.cvat.ai/docs/administration/basics/installation/).
 We provide [Enterprise support](https://www.cvat.ai/pricing/on-prem) for
 self-hosted installations with premium features: SSO, LDAP, Roboflow and
 HuggingFace integrations, and advanced analytics (coming soon). We also
@@ -36,16 +36,16 @@ do trainings and a dedicated support with 24 hour SLA.
 
 ## Quick start ⚡
 
-- [Installation guide](https://opencv.github.io/cvat/docs/administration/basics/installation/)
-- [Manual](https://opencv.github.io/cvat/docs/manual/)
-- [Contributing](https://opencv.github.io/cvat/docs/contributing/)
+- [Installation guide](https://docs.cvat.ai/docs/administration/basics/installation/)
+- [Manual](https://docs.cvat.ai/docs/manual/)
+- [Contributing](https://docs.cvat.ai/docs/contributing/)
 - [Datumaro dataset framework](https://github.com/cvat-ai/datumaro/blob/develop/README.md)
 - [Server API](#api)
 - [Python SDK](#sdk)
 - [Command line tool](#cli)
-- [XML annotation format](https://opencv.github.io/cvat/docs/manual/advanced/xml_format/)
-- [AWS Deployment Guide](https://opencv.github.io/cvat/docs/administration/basics/aws-deployment-guide/)
-- [Frequently asked questions](https://opencv.github.io/cvat/docs/faq/)
+- [XML annotation format](https://docs.cvat.ai/docs/manual/advanced/xml_format/)
+- [AWS Deployment Guide](https://docs.cvat.ai/docs/administration/basics/aws-deployment-guide/)
+- [Frequently asked questions](https://docs.cvat.ai/docs/faq/)
 - [Where to ask questions](#where-to-ask-questions)
 
 ## Partners ❤️
@@ -80,7 +80,7 @@ This is an online version of CVAT. It's free, efficient, and easy to use.
 to 10 tasks there and upload up to 500Mb of data to annotate. It will only be
 visible to you or the people you assign to it.
 
-For now, it does not have [analytics features](https://opencv.github.io/cvat/docs/administration/advanced/analytics/)
+For now, it does not have [analytics features](https://docs.cvat.ai/docs/administration/advanced/analytics/)
 like management and monitoring the data annotation team. It also does not
 allow exporting images, just the annotations. We plan to enhance [cvat.ai](https://cvat.ai)
 with new powerful features. Stay tuned!
@@ -124,19 +124,19 @@ For feedback, please see [Contact us](#contact-us)
 
 ## API
 
-- [Documentation](https://opencv.github.io/cvat/docs/api_sdk/api/)
+- [Documentation](https://docs.cvat.ai/docs/api_sdk/api/)
 
 ## SDK
 
 - Install with `pip install cvat-sdk`
 - [PyPI package homepage](https://pypi.org/project/cvat-sdk/)
-- [Documentation](https://opencv.github.io/cvat/docs/api_sdk/sdk/)
+- [Documentation](https://docs.cvat.ai/docs/api_sdk/sdk/)
 
 ## CLI
 
 - Install with `pip install cvat-cli`
 - [PyPI package homepage](https://pypi.org/project/cvat-cli/)
-- [Documentation](https://opencv.github.io/cvat/docs/api_sdk/cli/)
+- [Documentation](https://docs.cvat.ai/docs/api_sdk/cli/)
 
 ## Supported annotation formats
 
@@ -146,14 +146,14 @@
 after clicking the **Upload annotation** and **Dump annotation** buttons.
 additional dataset transformations with its command line tool and Python
 library. For more information about the supported formats, see:
-[Annotation Formats](https://opencv.github.io/cvat/docs/manual/advanced/formats/).
+[Annotation Formats](https://docs.cvat.ai/docs/manual/advanced/formats/).
 | Annotation format | Import | Export |
 | ------------------------------------------------------------------------------------------------ | ------ | ------ |
-| [CVAT for images](https://opencv.github.io/cvat/docs/manual/advanced/xml_format/#annotation) | ✔️ | ✔️ |
-| [CVAT for a video](https://opencv.github.io/cvat/docs/manual/advanced/xml_format/#interpolation) | ✔️ | ✔️ |
+| [CVAT for images](https://docs.cvat.ai/docs/manual/advanced/xml_format/#annotation) | ✔️ | ✔️ |
+| [CVAT for a video](https://docs.cvat.ai/docs/manual/advanced/xml_format/#interpolation) | ✔️ | ✔️ |
 | [Datumaro](https://github.com/cvat-ai/datumaro) | ✔️ | ✔️ |
 | [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) | ✔️ | ✔️ |
 | Segmentation masks from [PASCAL VOC](http://host.robots.ox.ac.uk/pascal/VOC/) | ✔️ | ✔️ |
@@ -256,8 +256,8 @@ questions and get our support.
 [docker-server-image-url]: https://hub.docker.com/r/cvat/server
 [docker-ui-pulls-img]: https://img.shields.io/docker/pulls/cvat/ui.svg?style=flat-square&label=UI%20pulls
 [docker-ui-image-url]: https://hub.docker.com/r/cvat/ui
-[ci-img]: https://github.com/opencv/cvat/actions/workflows/main.yml/badge.svg?branch=develop
-[ci-url]: https://github.com/opencv/cvat/actions
+[ci-img]: https://github.com/cvat-ai/cvat/actions/workflows/main.yml/badge.svg?branch=develop
+[ci-url]: https://github.com/cvat-ai/cvat/actions
 [gitter-img]: https://img.shields.io/gitter/room/opencv-cvat/public?style=flat
 [gitter-url]: https://gitter.im/opencv-cvat/public
 [coverage-img]: https://codecov.io/github/opencv/cvat/branch/develop/graph/badge.svg
diff --git a/changelog.d/20240403_181532_roman_update_urls.md b/changelog.d/20240403_181532_roman_update_urls.md
new file mode 100644
index 000000000000..c541e21c85b8
--- /dev/null
+++ b/changelog.d/20240403_181532_roman_update_urls.md
@@ -0,0 +1,5 @@
+### Changed
+
+- Updated links to the documentation website to point to the new domain,
+  `docs.cvat.ai`
+  ()
diff --git a/changelog.d/fragment.j2 b/changelog.d/fragment.j2
index 700bccd0c6e5..0016248a6323 100644
--- a/changelog.d/fragment.j2
+++ b/changelog.d/fragment.j2
@@ -1,4 +1,4 @@
 ### {{ config.categories | join('|') }}
 
 - Describe your change here...
-  ()
+  ()
diff --git a/cvat-sdk/README.md b/cvat-sdk/README.md
index cf5732aaeba7..fa68c0e5d40d 100644
--- a/cvat-sdk/README.md
+++ b/cvat-sdk/README.md
@@ -10,7 +10,7 @@ The SDK API includes several layers:
   - PyTorch adapter. Located at `cvat_sdk.pytorch`.
 * Auto-annotation support. Located at `cvat_sdk.auto_annotation`.
 
-Package documentation is available [here](https://opencv.github.io/cvat/docs/api_sdk/sdk).
+Package documentation is available [here](https://docs.cvat.ai/docs/api_sdk/sdk).
 
 ## Installation & Usage
 
@@ -26,7 +26,7 @@ To use the PyTorch adapter, request the `pytorch` extra:
 
 ```
 pip install "cvat-sdk[pytorch]"
 ```
 
-To install from the local directory, follow [the developer guide](https://opencv.github.io/cvat/docs/api_sdk/sdk/developer_guide).
+To install from the local directory, follow [the developer guide](https://docs.cvat.ai/docs/api_sdk/sdk/developer_guide).
 After installation you can import the package:
diff --git a/cvat-sdk/gen/templates/openapi-generator/api_doc.mustache b/cvat-sdk/gen/templates/openapi-generator/api_doc.mustache
index 95be916e1cfe..7a885a6c00dd 100644
--- a/cvat-sdk/gen/templates/openapi-generator/api_doc.mustache
+++ b/cvat-sdk/gen/templates/openapi-generator/api_doc.mustache
@@ -38,7 +38,7 @@ Name | Type | Description | Notes
 {{/optionalParams}}
 
 There are also optional kwargs that control the function invocation behavior.
-[Read more here](https://opencv.github.io/cvat/docs/api_sdk/sdk/lowlevel-api/#sending-requests).
+[Read more here](https://docs.cvat.ai/docs/api_sdk/sdk/lowlevel-api/#sending-requests).
 
 ### Returned values
 
@@ -49,7 +49,7 @@ Returns a tuple with 2 values: `({{#returnType}}parsed_response{{/returnType}}{{
 {{#returnType}}The first value is a model parsed from the response data.{{/returnType}}{{^returnType}}This endpoint does not have any return value, so `None` is always returned as the first value.{{/returnType}}
 The second value is the raw response, which can be useful to get response parameters, such as status code, headers, or raw response data.
 
 Read more about invocation parameters
-and returned values [here](https://opencv.github.io/cvat/docs/api_sdk/sdk/lowlevel-api/#sending-requests).
+and returned values [here](https://docs.cvat.ai/docs/api_sdk/sdk/lowlevel-api/#sending-requests).
 
 ### Authorization
diff --git a/cvat-ui/README.md b/cvat-ui/README.md
index b89fc6449423..5205041568e6 100644
--- a/cvat-ui/README.md
+++ b/cvat-ui/README.md
@@ -36,5 +36,5 @@
 yarn run build
 yarn run build --mode=development # without a minification
 ```
 
-Important: You also have to run CVAT REST API server (please read `https://opencv.github.io/cvat/docs/contributing/`)
+Important: You also have to run CVAT REST API server (please read `https://docs.cvat.ai/docs/contributing/`)
 to correct working since UI gets all necessary data (tasks, users, annotations) from there
diff --git a/cvat-ui/src/components/annotation-page/standard-workspace/objects-side-bar/object-item-attribute.tsx b/cvat-ui/src/components/annotation-page/standard-workspace/objects-side-bar/object-item-attribute.tsx
index 69baae68ce51..c2b3f36c933f 100644
--- a/cvat-ui/src/components/annotation-page/standard-workspace/objects-side-bar/object-item-attribute.tsx
+++ b/cvat-ui/src/components/annotation-page/standard-workspace/objects-side-bar/object-item-attribute.tsx
@@ -58,7 +58,7 @@ function ItemAttributeComponent(props: Props): JSX.Element {
     useEffect(() => {
         // wrap to internal use effect to avoid issues
         // with chinese keyboard
-        // https://github.com/opencv/cvat/pull/6916
+        // https://github.com/cvat-ai/cvat/pull/6916
         if (localAttrValue !== attrValue) {
             changeAttribute(attrID, localAttrValue);
         }
diff --git a/cvat-ui/src/config.tsx b/cvat-ui/src/config.tsx
index 146f0d139c32..e06e7246a247 100644
--- a/cvat-ui/src/config.tsx
+++ b/cvat-ui/src/config.tsx
@@ -7,19 +7,19 @@ import React from 'react';
 
 const NO_BREAK_SPACE = '\u00a0';
 const UNDEFINED_ATTRIBUTE_VALUE = '__undefined__';
-const CHANGELOG_URL = 'https://github.com/opencv/cvat/blob/develop/CHANGELOG.md';
-const LICENSE_URL = 'https://github.com/opencv/cvat/blob/develop/LICENSE';
+const CHANGELOG_URL = 'https://github.com/cvat-ai/cvat/blob/develop/CHANGELOG.md';
+const LICENSE_URL = 'https://github.com/cvat-ai/cvat/blob/develop/LICENSE';
 const DISCORD_URL = 'https://discord.gg/fNR3eXfk6C';
-const GITHUB_URL = 'https://github.com/opencv/cvat';
-const GITHUB_IMAGE_URL = 'https://github.com/opencv/cvat/raw/develop/site/content/en/images/cvat.jpg';
-const GUIDE_URL = 'https://opencv.github.io/cvat/docs';
-const UPGRADE_GUIDE_URL = 'https://opencv.github.io/cvat/docs/administration/advanced/upgrade_guide';
+const GITHUB_URL = 'https://github.com/cvat-ai/cvat';
+const GITHUB_IMAGE_URL = 'https://github.com/cvat-ai/cvat/raw/develop/site/content/en/images/cvat.jpg';
+const GUIDE_URL = 'https://docs.cvat.ai/docs';
+const UPGRADE_GUIDE_URL = 'https://docs.cvat.ai/docs/administration/advanced/upgrade_guide';
 const SHARE_MOUNT_GUIDE_URL =
-    'https://opencv.github.io/cvat/docs/administration/basics/installation/#share-path';
+    'https://docs.cvat.ai/docs/administration/basics/installation/#share-path';
 const NUCLIO_GUIDE =
-    'https://opencv.github.io/cvat//docs/administration/advanced/installation_automatic_annotation/';
-const FILTERS_GUIDE_URL = 'https://opencv.github.io/cvat/docs/manual/advanced/filter/';
-const DATASET_MANIFEST_GUIDE_URL = 'https://opencv.github.io/cvat/docs/manual/advanced/dataset_manifest/';
+    'https://docs.cvat.ai/docs/administration/advanced/installation_automatic_annotation/';
+const FILTERS_GUIDE_URL = 'https://docs.cvat.ai/docs/manual/advanced/filter/';
+const DATASET_MANIFEST_GUIDE_URL = 'https://docs.cvat.ai/docs/manual/advanced/dataset_manifest/';
 const CANVAS_BACKGROUND_COLORS = ['#ffffff', '#f1f1f1', '#e5e5e5', '#d8d8d8', '#CCCCCC', '#B3B3B3', '#999999'];
 const NEW_LABEL_COLOR = '#b3b3b3';
 const LATEST_COMMENTS_SHOWN_QUICK_ISSUE = 3;
@@ -31,7 +31,7 @@ const CANVAS_WORKSPACE_COLS = 12;
 const CANVAS_WORKSPACE_MARGIN = 8;
 const CANVAS_WORKSPACE_DEFAULT_CONTEXT_HEIGHT = 4;
 const CANVAS_WORKSPACE_PADDING = CANVAS_WORKSPACE_MARGIN / 2;
-const OUTSIDE_PIC_URL = 'https://opencv.github.io/cvat/images/image019.jpg';
+const OUTSIDE_PIC_URL = 'https://docs.cvat.ai/images/image019.jpg';
 const DEFAULT_AWS_S3_REGIONS: string[][] = [
     ['us-east-1', 'US East (N. Virginia)'],
     ['us-east-2', 'US East (Ohio)'],
diff --git a/cvat/apps/dataset_manager/project.py b/cvat/apps/dataset_manager/project.py
index 215a4ca8de29..67075d5de47b 100644
--- a/cvat/apps/dataset_manager/project.py
+++ b/cvat/apps/dataset_manager/project.py
@@ -26,7 +26,7 @@ def export_project(project_id, dst_file, format_name,
     # we dont need to acquire lock after the task has been initialized from DB.
     # But there is the bug with corrupted dump file in case 2 or
     # more dump request received at the same time:
-    # https://github.com/opencv/cvat/issues/217
+    # https://github.com/cvat-ai/cvat/issues/217
     with transaction.atomic():
         project = ProjectAnnotationAndData(project_id)
         project.init_from_db()
diff --git a/cvat/apps/dataset_manager/task.py b/cvat/apps/dataset_manager/task.py
index b6dcb906e36f..31c7e339d0a1 100644
--- a/cvat/apps/dataset_manager/task.py
+++ b/cvat/apps/dataset_manager/task.py
@@ -851,7 +851,7 @@ def export_job(job_id, dst_file, format_name, server_url=None, save_images=False
     # we dont need to acquire lock after the task has been initialized from DB.
     # But there is the bug with corrupted dump file in case 2 or
     # more dump request received at the same time:
-    # https://github.com/opencv/cvat/issues/217
+    # https://github.com/cvat-ai/cvat/issues/217
     with transaction.atomic():
         job = JobAnnotation(job_id)
         job.init_from_db()
@@ -900,7 +900,7 @@ def export_task(task_id, dst_file, format_name, server_url=None, save_images=Fal
     # we dont need to acquire lock after the task has been initialized from DB.
     # But there is the bug with corrupted dump file in case 2 or
     # more dump request received at the same time:
-    # https://github.com/opencv/cvat/issues/217
+    # https://github.com/cvat-ai/cvat/issues/217
     with transaction.atomic():
         task = TaskAnnotation(task_id)
         task.init_from_db()
diff --git a/cvat/apps/engine/media_extractors.py b/cvat/apps/engine/media_extractors.py
index 13c515c44c44..a7f601776c6b 100644
--- a/cvat/apps/engine/media_extractors.py
+++ b/cvat/apps/engine/media_extractors.py
@@ -754,10 +754,10 @@ def _add_video_stream(self, container, w, h, rate, options):
         if w % 2:
             w += 1
 
-        # libopenh264 has 4K limitations, https://github.com/opencv/cvat/issues/7425
+        # libopenh264 has 4K limitations, https://github.com/cvat-ai/cvat/issues/7425
         if h * w > (self.MAX_MBS_PER_FRAME << 8):
             raise ValidationError(
-                'The video codec being used does not support such high video resolution, refer https://github.com/opencv/cvat/issues/7425'
+                'The video codec being used does not support such high video resolution, refer https://github.com/cvat-ai/cvat/issues/7425'
             )
 
         video_stream = container.add_stream(self._codec_name, rate=rate)
diff --git a/cvat/apps/engine/models.py b/cvat/apps/engine/models.py
index 5ad10cbe9373..907937382a1b 100644
--- a/cvat/apps/engine/models.py
+++ b/cvat/apps/engine/models.py
@@ -496,7 +496,7 @@ class Meta:
         unique_together = ("data", "file")
 
         # Some DBs can shuffle the rows. Here we restore the insertion order.
-        # https://github.com/opencv/cvat/pull/5083#discussion_r1038032715
+        # https://github.com/cvat-ai/cvat/pull/5083#discussion_r1038032715
         ordering = ('id', )
 
 # For server files on the mounted share
@@ -509,7 +509,7 @@ class Meta:
         unique_together = ("data", "file")
 
         # Some DBs can shuffle the rows. Here we restore the insertion order.
-        # https://github.com/opencv/cvat/pull/5083#discussion_r1038032715
+        # https://github.com/cvat-ai/cvat/pull/5083#discussion_r1038032715
         ordering = ('id', )
 
 # For URLs
@@ -522,7 +522,7 @@ class Meta:
         unique_together = ("data", "file")
 
         # Some DBs can shuffle the rows. Here we restore the insertion order.
-        # https://github.com/opencv/cvat/pull/5083#discussion_r1038032715
+        # https://github.com/cvat-ai/cvat/pull/5083#discussion_r1038032715
         ordering = ('id', )
 
@@ -537,7 +537,7 @@ class Meta:
         unique_together = ("data", "path")
 
         # Some DBs can shuffle the rows. Here we restore the insertion order.
-        # https://github.com/opencv/cvat/pull/5083#discussion_r1038032715
+        # https://github.com/cvat-ai/cvat/pull/5083#discussion_r1038032715
         ordering = ('id', )
diff --git a/cvat/apps/engine/serializers.py b/cvat/apps/engine/serializers.py
index b871cff4ca86..1f1c8b492132 100644
--- a/cvat/apps/engine/serializers.py
+++ b/cvat/apps/engine/serializers.py
@@ -673,7 +673,7 @@ def create(self, validated_data):
         if seed is not None and frame_count < size:
             # Reproduce the old (a little bit incorrect) behavior that existed before
-            # https://github.com/opencv/cvat/pull/7126
+            # https://github.com/cvat-ai/cvat/pull/7126
             # to make the old seed-based sequences reproducible
             valid_frame_ids = [v for v in valid_frame_ids if v != task.data.stop_frame]
@@ -855,7 +855,7 @@ def __init__(self, *args, **kwargs):
 class DataSerializer(serializers.ModelSerializer):
     """
     Read more about parameters here:
-    https://opencv.github.io/cvat/docs/manual/basics/create_an_annotation_task/#advanced-configuration
+    https://docs.cvat.ai/docs/manual/basics/create_an_annotation_task/#advanced-configuration
     """
 
     image_quality = serializers.IntegerField(min_value=0, max_value=100,
@@ -900,7 +900,7 @@ class DataSerializer(serializers.ModelSerializer):
     use_cache = serializers.BooleanField(default=False,
         help_text=textwrap.dedent("""\
             Enable or disable task data chunk caching for the task.
-            Read more: https://opencv.github.io/cvat/docs/manual/advanced/data_on_fly/
+            Read more: https://docs.cvat.ai/docs/manual/advanced/data_on_fly/
         """))
     copy_data = serializers.BooleanField(default=False,
         help_text=textwrap.dedent("""\
             Copy data from the server file share to CVAT during the task creation.
diff --git a/cvat/apps/engine/task.py b/cvat/apps/engine/task.py
index 7f2d6471fd3c..192f2b68bb43 100644
--- a/cvat/apps/engine/task.py
+++ b/cvat/apps/engine/task.py
@@ -426,7 +426,7 @@ def _restore_file_order_from_manifest(
     """
     Restores file ordering for the "predefined" file sorting method of the task creation.
     Checks for extra files in the input.
-    Read more: https://github.com/opencv/cvat/issues/5061
+    Read more: https://github.com/cvat-ai/cvat/issues/5061
     """
     input_files = {os.path.relpath(p, upload_dir): p for p in extractor.absolute_source_paths}
@@ -444,7 +444,7 @@ def _restore_file_order_from_manifest(
             "Uploaded files do no match the upload manifest file contents. "
             "Please check the upload manifest file contents and the list of uploaded files. "
             "Mismatching files: {}{}. "
-            "Read more: https://opencv.github.io/cvat/docs/manual/advanced/dataset_manifest/"
+            "Read more: https://docs.cvat.ai/docs/manual/advanced/dataset_manifest/"
             .format(
                 ", ".join(mismatching_display),
                 f" (and {remaining_count} more). " if 0 < remaining_count else ""
@@ -843,7 +843,7 @@ def _update_status(msg: str) -> None:
                 "Can't find upload manifest file '{}' "
                 "in the uploaded files. When the 'predefined' sorting method is used, "
                 "this file is required in the input files. "
-                "Read more: https://opencv.github.io/cvat/docs/manual/advanced/dataset_manifest/"
+                "Read more: https://docs.cvat.ai/docs/manual/advanced/dataset_manifest/"
                 .format(manifest_file or os.path.basename(db_data.get_manifest_path()))
             )
diff --git a/cvat/apps/engine/views.py b/cvat/apps/engine/views.py
index ef400b07ea67..d1ceaf247148 100644
--- a/cvat/apps/engine/views.py
+++ b/cvat/apps/engine/views.py
@@ -951,7 +951,7 @@ def _sort_uploaded_files(self, uploaded_files: List[str], ordering: List[str]) -
         """
         Applies file ordering for the "predefined" file sorting method of the task creation.
 
-        Read more: https://github.com/opencv/cvat/issues/5061
+        Read more: https://github.com/cvat-ai/cvat/issues/5061
         """
         expected_files = ordering
@@ -1154,7 +1154,7 @@ def _handle_upload_backup(request):
         For archives (e.g. '.zip'), a manifest file ('*.jsonl') is required when using
         the 'predefined' file ordering. Such file must be provided next to the archive
         in the list of files. Read more about manifest files here:
-        https://opencv.github.io/cvat/docs/manual/advanced/dataset_manifest/
+        https://docs.cvat.ai/docs/manual/advanced/dataset_manifest/
 
         After all data is sent, the operation status can be retrieved via
         the /status endpoint.
@@ -1429,7 +1429,7 @@ def _get_rq_response(queue, job_id):
         # FIXME: It seems that in some cases exc_info can be None.
         # It's not really clear how it is possible, but it can
         # lead to an error in serializing the response
-        # https://github.com/opencv/cvat/issues/5215
+        # https://github.com/cvat-ai/cvat/issues/5215
         response = { "state": "Failed", "message": parse_exception_message(job.exc_info or "Unknown error") }
     else:
         response = { "state": "Started" }
diff --git a/cvat/apps/iam/rules/tests/README.md b/cvat/apps/iam/rules/tests/README.md
index 7b30d4dbe86a..9c18958fb14a 100644
--- a/cvat/apps/iam/rules/tests/README.md
+++ b/cvat/apps/iam/rules/tests/README.md
@@ -1,3 +1,3 @@
 # Open Policy Agent Tests
 
-Read more [here](https://opencv.github.io/cvat/docs/contributing/running-tests/#opa-tests)
+Read more [here](https://docs.cvat.ai/docs/contributing/running-tests/#opa-tests)
diff --git a/cvat/apps/iam/templates/account/email/email_confirmation_message.html b/cvat/apps/iam/templates/account/email/email_confirmation_message.html
index 4ef34dd474f8..fdc561849a9d 100644
--- a/cvat/apps/iam/templates/account/email/email_confirmation_message.html
+++ b/cvat/apps/iam/templates/account/email/email_confirmation_message.html
@@ -123,7 +123,7 @@
 We have a separate Gitter chat for developers to discuss the development of CVAT.
-          • <a href="https://github.com/opencv/cvat">Visit our GitHub repository</a>.
+          • <a href="https://github.com/cvat-ai/cvat">Visit our GitHub repository</a>.
           • Check out our Discord channel
diff --git a/site/content/en/docs/administration/advanced/analytics.md b/site/content/en/docs/administration/advanced/analytics.md
index 4b66b43602d9..bdaed0f28a99 100644
--- a/site/content/en/docs/administration/advanced/analytics.md
+++ b/site/content/en/docs/administration/advanced/analytics.md
@@ -69,7 +69,7 @@ see {{< ilink "/docs/contributing/development-environment#cvat-analytics-ports"
 
 ### Events log structure
 
-[Relational database](https://github.com/opencv/cvat/blob/develop/components/analytics/clickhouse/init.sh)
+[Relational database](https://github.com/cvat-ai/cvat/blob/develop/components/analytics/clickhouse/init.sh)
 schema with the following fields:
diff --git a/site/content/en/docs/administration/advanced/upgrade_guide.md b/site/content/en/docs/administration/advanced/upgrade_guide.md
index afc470ac1dba..3462b20d28a5 100644
--- a/site/content/en/docs/administration/advanced/upgrade_guide.md
+++ b/site/content/en/docs/administration/advanced/upgrade_guide.md
@@ -68,7 +68,7 @@ docker volume rm cvat_cvat_db
 export CVAT_VERSION="v2.3.0"
 cd ..
 mv cvat cvat_220
-wget https://github.com/opencv/cvat/archive/refs/tags/${CVAT_VERSION}.zip
+wget https://github.com/cvat-ai/cvat/archive/refs/tags/${CVAT_VERSION}.zip
 unzip ${CVAT_VERSION}.zip && mv cvat-${CVAT_VERSION:1} cvat
 unset CVAT_VERSION
 cd cvat
@@ -90,7 +90,7 @@ cd cvat
 docker compose down
 cd ..
 mv cvat cvat_170
-wget https://github.com/opencv/cvat/archive/refs/tags/${CVAT_VERSION}.zip
+wget https://github.com/cvat-ai/cvat/archive/refs/tags/${CVAT_VERSION}.zip
 unzip ${CVAT_VERSION}.zip && mv cvat-${CVAT_VERSION:1} cvat
 cd cvat
 docker pull cvat/server:${CVAT_VERSION}
diff --git a/site/content/en/docs/administration/advanced/webhooks.md b/site/content/en/docs/administration/advanced/webhooks.md
index 0cc68973aca0..1740ac6803be 100644
--- a/site/content/en/docs/administration/advanced/webhooks.md
+++ b/site/content/en/docs/administration/advanced/webhooks.md
@@ -490,7 +490,7 @@
 To create webhook via an API call,
 see [Swagger documentation](https://app.cvat.ai/api/docs).
 
 For examples,
-see [REST API tests](https://github.com/opencv/cvat/blob/develop/tests/python/rest_api/test_webhooks.py).
+see [REST API tests](https://github.com/cvat-ai/cvat/blob/develop/tests/python/rest_api/test_webhooks.py).
 
 ## Example of setup and use
diff --git a/site/content/en/docs/administration/basics/installation.md b/site/content/en/docs/administration/basics/installation.md
index 1ffc8093cf50..d54d818b17ca 100644
--- a/site/content/en/docs/administration/basics/installation.md
+++ b/site/content/en/docs/administration/basics/installation.md
@@ -64,12 +64,12 @@ For access from China, read [sources for users from China](#sources-for-users-fr
   that and check if `docker` group is in its output.
 
 - Clone _CVAT_ source code from the
-  [GitHub repository](https://github.com/opencv/cvat) with Git.
+  [GitHub repository](https://github.com/cvat-ai/cvat) with Git.
 
   Following command will clone the latest develop branch:
 
   ```shell
-  git clone https://github.com/opencv/cvat
+  git clone https://github.com/cvat-ai/cvat
   cd cvat
   ```
@@ -154,12 +154,12 @@ For access from China, read [sources for users from China](#sources-for-users-fr
 - Go to windows menu, find the Linux distribution you installed and run it. You should see a terminal window.
 
 - Clone _CVAT_ source code from the
-  [GitHub repository](https://github.com/opencv/cvat).
+  [GitHub repository](https://github.com/cvat-ai/cvat).
 
   The following command will clone the latest develop branch:
 
   ```shell
-  git clone https://github.com/opencv/cvat
+  git clone https://github.com/cvat-ai/cvat
   cd cvat
   ```
@@ -239,12 +239,12 @@ For access from China, read [sources for users from China](#sources-for-users-fr
   launch Spotlight and type "Terminal," then double-click the search result.
 
 - Clone _CVAT_ source code from the
-  [GitHub repository](https://github.com/opencv/cvat) with Git.
+  [GitHub repository](https://github.com/cvat-ai/cvat) with Git.
 
   The following command will clone the latest develop branch:
 
   ```shell
-  git clone https://github.com/opencv/cvat
+  git clone https://github.com/cvat-ai/cvat
   cd cvat
   ```
@@ -305,19 +305,19 @@ For access from China, read [sources for users from China](#sources-for-users-fr
    Follow instructions from [https://git-scm.com/download/win](https://git-scm.com/download/win)
 
 2. Clone _CVAT_ source code from the
-   [GitHub repository](https://github.com/opencv/cvat).
+   [GitHub repository](https://github.com/cvat-ai/cvat).
 
    The command below will clone the default branch (develop):
 
   ```shell
-  git clone https://github.com/opencv/cvat
+  git clone https://github.com/cvat-ai/cvat
   cd cvat
   ```
 
   To clone specific tag, e.g. v2.1.0:
 
  ```shell
- git clone -b v2.1.0 https://github.com/opencv/cvat
+ git clone -b v2.1.0 https://github.com/cvat-ai/cvat
  cd cvat
  ```
@@ -326,7 +326,7 @@ For access from China, read [sources for users from China](#sources-for-users-fr
 
 To download latest develop branch:
 
 ```shell
-wget https://github.com/opencv/cvat/archive/refs/heads/develop.zip
+wget https://github.com/cvat-ai/cvat/archive/refs/heads/develop.zip
 unzip develop.zip && mv cvat-develop cvat
 cd cvat
 ```
 
 To download specific tag:
 
 ```shell
-wget https://github.com/opencv/cvat/archive/refs/tags/v1.7.0.zip
+wget https://github.com/cvat-ai/cvat/archive/refs/tags/v1.7.0.zip
 unzip v1.7.0.zip && mv cvat-1.7.0 cvat
 cd cvat
 ```
@@ -344,7 +344,7 @@
 
 To download the latest develop branch:
 
 ```shell
-curl -LO https://github.com/opencv/cvat/archive/refs/heads/develop.zip
+curl -LO https://github.com/cvat-ai/cvat/archive/refs/heads/develop.zip
 unzip develop.zip && mv cvat-develop cvat
 cd cvat
 ```
 
 To download specific tag:
 
 ```shell
-curl -LO https://github.com/opencv/cvat/archive/refs/tags/v1.7.0.zip
+curl -LO https://github.com/cvat-ai/cvat/archive/refs/tags/v1.7.0.zip
 unzip v1.7.0.zip && mv cvat-1.7.0 cvat
 cd cvat
 ```
diff --git a/site/content/en/docs/api_sdk/sdk/lowlevel-api.md b/site/content/en/docs/api_sdk/sdk/lowlevel-api.md
index d69e21386bde..9b1ef5b5358d 100644
--- a/site/content/en/docs/api_sdk/sdk/lowlevel-api.md
+++ b/site/content/en/docs/api_sdk/sdk/lowlevel-api.md
@@ -343,7 +343,7 @@ response data can't be parsed automatically due to the incorrect schema. In this
 simplest workaround is to disable response parsing using the `_parse_response=False`
 method argument.
 
-You can find many examples of API client usage in REST API tests [here](https://github.com/opencv/cvat/tree/develop/tests/python).
+You can find many examples of API client usage in REST API tests [here](https://github.com/cvat-ai/cvat/tree/develop/tests/python).
 
 ### Organizations
diff --git a/site/content/en/docs/contributing/coding-style.md b/site/content/en/docs/contributing/coding-style.md
index 244640a1115d..88ef385d4281 100644
--- a/site/content/en/docs/contributing/coding-style.md
+++ b/site/content/en/docs/contributing/coding-style.md
@@ -12,5 +12,5 @@ for indentation of nested blocks and statements.
 For Python, we use [Black](https://github.com/psf/black) and [isort](https://pycqa.github.io/isort/) to enforce
 the coding style and autoformat files.
 Currently, not all components implement formatting, the actual information about the enabled
-components is available in the CI checks [here](https://github.com/opencv/cvat/tree/develop/.github/workflows)
+components is available in the CI checks [here](https://github.com/cvat-ai/cvat/tree/develop/.github/workflows)
 and in the formatting script at `dev/format_python_code.sh`.
diff --git a/site/content/en/docs/contributing/development-environment.md b/site/content/en/docs/contributing/development-environment.md
index fa896594551a..a47071289ef3 100644
--- a/site/content/en/docs/contributing/development-environment.md
+++ b/site/content/en/docs/contributing/development-environment.md
@@ -268,7 +268,7 @@ cvat_vector:
 ```
 
 In addition, you can completely disable analytics if you don't need it by deleting the following data from
-[launch.json](https://github.com/opencv/cvat/blob/develop/.vscode/launch.json):
+[launch.json](https://github.com/cvat-ai/cvat/blob/develop/.vscode/launch.json):
 
 ```json
     "DJANGO_LOG_SERVER_HOST": "localhost",
@@ -276,5 +276,5 @@ In addition, you can completely disable analytics if you don't need it by deleti
 ```
 
 Analytics on GitHub:
-[Analytics Components](https://github.com/opencv/cvat/tree/develop/components/analytics)
+[Analytics Components](https://github.com/cvat-ai/cvat/tree/develop/components/analytics)
diff --git a/site/content/en/docs/getting_started/overview.md b/site/content/en/docs/getting_started/overview.md
index 112f115c24d9..ff1aef8e43f8 100644
--- a/site/content/en/docs/getting_started/overview.md
+++ b/site/content/en/docs/getting_started/overview.md
@@ -105,24 +105,24 @@ Below is a detailed table of the supported algorithms and the platforms they ope
 
 | Algorithm Name | Category | Framework | CPU Support | GPU Support |
 | -------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ---------- | ----------- | ----------- |
-| [Segment Anything](https://github.com/opencv/cvat/tree/develop/serverless/pytorch/facebookresearch/sam/nuclio) | Interactor | PyTorch | ✔️ | ✔️ |
-| [Deep Extreme Cut](https://github.com/opencv/cvat/tree/develop/serverless/openvino/dextr/nuclio) | Interactor | OpenVINO | ✔️ | |
-| [Faster RCNN](https://github.com/opencv/cvat/tree/develop/serverless/openvino/omz/public/faster_rcnn_inception_resnet_v2_atrous_coco/nuclio) | Detector | OpenVINO | ✔️ | |
-| [Mask RCNN](https://github.com/opencv/cvat/tree/develop/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | Detector | OpenVINO | ✔️ | |
-| [YOLO v3](https://github.com/opencv/cvat/tree/develop/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | Detector | OpenVINO | ✔️ | |
-| [YOLO v7](https://github.com/opencv/cvat/tree/develop/serverless/onnx/WongKinYiu/yolov7/nuclio) | Detector | ONNX | ✔️ | ✔️ |
-| [Object Reidentification](https://github.com/opencv/cvat/tree/develop/serverless/openvino/omz/intel/person-reidentification-retail-0277/nuclio) | ReID | OpenVINO | ✔️ | |
-| [Semantic Segmentation for ADAS](https://github.com/opencv/cvat/tree/develop/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | Detector | OpenVINO | ✔️ | |
-| [Text Detection v4](https://github.com/opencv/cvat/tree/develop/serverless/openvino/omz/intel/text-detection-0004/nuclio) | Detector | OpenVINO | ✔️ | |
-| [SiamMask](https://github.com/opencv/cvat/tree/develop/serverless/pytorch/foolwood/siammask/nuclio) | Tracker | PyTorch | ✔️ | ✔️ |
-| [TransT](https://github.com/opencv/cvat/tree/develop/serverless/pytorch/dschoerk/transt/nuclio) | Tracker | PyTorch | ✔️ | ✔️ |
-| [f-BRS](https://github.com/opencv/cvat/tree/develop/serverless/pytorch/saic-vul/fbrs/nuclio) | Interactor | PyTorch | ✔️ | |
-| [HRNet](https://github.com/opencv/cvat/tree/develop/serverless/pytorch/saic-vul/hrnet/nuclio) | Interactor | PyTorch | | ✔️ |
-| [Inside-Outside Guidance](https://github.com/opencv/cvat/tree/develop/serverless/pytorch/shiyinzhang/iog/nuclio) | Interactor | PyTorch | ✔️ | |
-| [Faster RCNN](https://github.com/opencv/cvat/tree/develop/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | Detector | TensorFlow | ✔️ | ✔️ |
-| [Mask RCNN](https://github.com/opencv/cvat/tree/develop/serverless/tensorflow/matterport/mask_rcnn/nuclio) | Detector | TensorFlow | ✔️ | ✔️ |
-| [RetinaNet](https://github.com/opencv/cvat/tree/develop/serverless/pytorch/facebookresearch/detectron2/retinanet_r101/nuclio) | Detector | PyTorch | ✔️ | ✔️ |
-| [Face Detection](https://github.com/opencv/cvat/tree/develop/serverless/openvino/omz/intel/face-detection-0205/nuclio) | Detector | OpenVINO | ✔️ | |
+| [Segment Anything](https://github.com/cvat-ai/cvat/tree/develop/serverless/pytorch/facebookresearch/sam/nuclio) | Interactor | PyTorch | ✔️ | ✔️ |
+| [Deep Extreme Cut](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/dextr/nuclio) | Interactor | OpenVINO | ✔️ | |
+| [Faster RCNN](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/public/faster_rcnn_inception_resnet_v2_atrous_coco/nuclio) | Detector | OpenVINO | ✔️ | |
+| [Mask RCNN](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/public/mask_rcnn_inception_resnet_v2_atrous_coco/nuclio) | Detector | OpenVINO | ✔️ | |
+| [YOLO v3](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/public/yolo-v3-tf/nuclio) | Detector | OpenVINO | ✔️ | |
+| [YOLO v7](https://github.com/cvat-ai/cvat/tree/develop/serverless/onnx/WongKinYiu/yolov7/nuclio) | Detector | ONNX | ✔️ | ✔️ |
+| [Object Reidentification](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/intel/person-reidentification-retail-0277/nuclio) | ReID | OpenVINO | ✔️ | |
+| [Semantic Segmentation for ADAS](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/intel/semantic-segmentation-adas-0001/nuclio) | Detector | OpenVINO | ✔️ | |
+| [Text Detection v4](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/intel/text-detection-0004/nuclio) | Detector | OpenVINO | ✔️ | |
+| [SiamMask](https://github.com/cvat-ai/cvat/tree/develop/serverless/pytorch/foolwood/siammask/nuclio) | Tracker | PyTorch | ✔️ | ✔️ |
+| [TransT](https://github.com/cvat-ai/cvat/tree/develop/serverless/pytorch/dschoerk/transt/nuclio) | Tracker | PyTorch | ✔️ | ✔️ |
+| [f-BRS](https://github.com/cvat-ai/cvat/tree/develop/serverless/pytorch/saic-vul/fbrs/nuclio) | Interactor | PyTorch | ✔️ | |
+| [HRNet](https://github.com/cvat-ai/cvat/tree/develop/serverless/pytorch/saic-vul/hrnet/nuclio) | Interactor | PyTorch | | ✔️ |
+| [Inside-Outside Guidance](https://github.com/cvat-ai/cvat/tree/develop/serverless/pytorch/shiyinzhang/iog/nuclio) | Interactor | PyTorch | ✔️ | |
+| [Faster RCNN](https://github.com/cvat-ai/cvat/tree/develop/serverless/tensorflow/faster_rcnn_inception_v2_coco/nuclio) | Detector | TensorFlow | ✔️ | ✔️ |
+| [Mask RCNN](https://github.com/cvat-ai/cvat/tree/develop/serverless/tensorflow/matterport/mask_rcnn/nuclio) | Detector | TensorFlow | ✔️ | ✔️ |
+| [RetinaNet](https://github.com/cvat-ai/cvat/tree/develop/serverless/pytorch/facebookresearch/detectron2/retinanet_r101/nuclio) | Detector | PyTorch | ✔️ | ✔️ |
+| [Face Detection](https://github.com/cvat-ai/cvat/tree/develop/serverless/openvino/omz/intel/face-detection-0205/nuclio) | Detector | OpenVINO | ✔️ | |
diff --git a/site/content/en/docs/manual/advanced/analytics-and-monitoring/auto-qa.md b/site/content/en/docs/manual/advanced/analytics-and-monitoring/auto-qa.md
index 72417e71f581..e2642f4d59ee 100644
--- a/site/content/en/docs/manual/advanced/analytics-and-monitoring/auto-qa.md
+++ b/site/content/en/docs/manual/advanced/analytics-and-monitoring/auto-qa.md
@@ -78,7 +78,7 @@ To create a **Ground truth** job, do the following:
    It can be any integer number, the same value will yield the same random selection
    (given that the frame number is unchanged).
    **Note** that if you want to use a custom frame sequence, you can do this using the server API instead,
-   see [Jobs API #create](https://opencv.github.io/cvat/docs/api_sdk/sdk/reference/apis/jobs-api/#create).
+   see [Jobs API #create](https://docs.cvat.ai/docs/api_sdk/sdk/reference/apis/jobs-api/#create).
 
 4. Click **Submit**.
 
 5. Annotate frames, save your work.
diff --git a/site/content/en/docs/manual/advanced/dataset_manifest.md b/site/content/en/docs/manual/advanced/dataset_manifest.md
index 5780a6ac1b56..6aa611d2ec58 100644
--- a/site/content/en/docs/manual/advanced/dataset_manifest.md
+++ b/site/content/en/docs/manual/advanced/dataset_manifest.md
@@ -22,7 +22,7 @@ reduce the amount of network traffic used and speed up the task creation process
 However, they can also be used in other cases, which will be explained below.
 
 A dataset manifest file is a text file in the JSONL format. These files can be created
-automatically with [the special command-line tool](https://github.com/opencv/cvat/tree/develop/utils/dataset_manifest),
+automatically with [the special command-line tool](https://github.com/cvat-ai/cvat/tree/develop/utils/dataset_manifest),
 or manually, following [the manifest file format specification](#file-format).
 
 ## How and when to use manifest files
@@ -50,7 +50,7 @@ If there are multiple manifest files in the input file list, an error will be ra
 ## How to generate manifest files
 
 CVAT provides a dedicated Python tool to generate manifest files.
-The source code can be found [here](https://github.com/opencv/cvat/tree/develop/utils/dataset_manifest).
+The source code can be found [here](https://github.com/cvat-ai/cvat/tree/develop/utils/dataset_manifest).
 
 Using the tool is the recommended way to create manifest files for you data. The data must be
 available locally to the tool to generate manifest.
diff --git a/tests/cypress/e2e/actions_projects_models/markdown_base_pipeline.js b/tests/cypress/e2e/actions_projects_models/markdown_base_pipeline.js
index a1e186378b35..51143c318794 100644
--- a/tests/cypress/e2e/actions_projects_models/markdown_base_pipeline.js
+++ b/tests/cypress/e2e/actions_projects_models/markdown_base_pipeline.js
@@ -136,7 +136,7 @@ context('Basic markdown pipeline', () => {
     });
 
     it('Plain text with 3rdparty picture', () => {
-        const url = 'https://github.com/opencv/cvat/raw/develop/site/content/en/images/cvat_poster_with_name.png';
+        const url = 'https://github.com/cvat-ai/cvat/raw/develop/site/content/en/images/cvat_poster_with_name.png';
         const value = `Plain text with 3rdparty picture\n![image](${url})`;
         cy.intercept('GET', url).as('getPicture');
         setupGuide(value);
diff --git a/tests/cypress/e2e/actions_users/registration_involved/case_28_review_pipeline_feature.js b/tests/cypress/e2e/actions_users/registration_involved/case_28_review_pipeline_feature.js
index 5d99e245afc5..858cebce973f 100644
--- a/tests/cypress/e2e/actions_users/registration_involved/case_28_review_pipeline_feature.js
+++ b/tests/cypress/e2e/actions_users/registration_involved/case_28_review_pipeline_feature.js
@@ -135,7 +135,7 @@ context('Review pipeline feature', () => {
         cy.saveJob();
 
         // Annotator updates job state, both times update is successfull, logout
-        // check: https://github.com/opencv/cvat/pull/7158
+        // check: https://github.com/cvat-ai/cvat/pull/7158
         cy.intercept('PATCH', `/api/jobs/${jobIDs[0]}`).as('updateJobState');
         cy.setJobState('completed');
         cy.wait('@updateJobState').its('response.statusCode').should('equal', 200);
@@ -344,7 +344,7 @@ context('Review pipeline feature', () => {
         });
     }
 
-    // check: https://github.com/opencv/cvat/issues/7206
+    // check: https://github.com/cvat-ai/cvat/issues/7206
     cy.interactMenu('Finish the job');
     cy.get('.cvat-modal-content-finish-job').within(() => {
         cy.contains('button', 'Continue').click();
diff --git a/tests/python/cli/test_cli.py b/tests/python/cli/test_cli.py
index 66749f992aa6..364c7011e7ca 100644
--- a/tests/python/cli/test_cli.py
+++ b/tests/python/cli/test_cli.py
@@ -124,7 +124,7 @@ def test_can_create_task_from_local_images(self):
         assert self.client.tasks.retrieve(task_id).size == 5
 
     def test_can_create_task_from_local_images_with_parameters(self):
-        # Checks for regressions of 
+        # Checks for regressions of 
         files = generate_images(self.tmp_path, 7)
         files.sort(reverse=True)
diff --git a/tests/python/rest_api/test_labels.py b/tests/python/rest_api/test_labels.py
index 73d1cd57274f..023d833c78dc 100644
--- a/tests/python/rest_api/test_labels.py
+++ b/tests/python/rest_api/test_labels.py
@@ -832,7 +832,7 @@ class TestLabelUpdates:
     def test_project_label_update_triggers_nested_task_and_job_update(
         self, update_kind, admin_user, labels, projects_wlc, tasks, jobs
     ):
-        # Checks for regressions against the issue https://github.com/opencv/cvat/issues/6871
+        # Checks for regressions against the issue https://github.com/cvat-ai/cvat/issues/6871
 
         project = next(p for p in projects_wlc if p["tasks"]["count"] and p["labels"]["count"])
         project_labels = [l for l in labels if l.get("project_id") == project["id"]]
@@ -885,7 +885,7 @@ def test_project_label_update_triggers_nested_task_and_job_update(
     def test_task_label_update_triggers_nested_task_and_job_update(
        self, update_kind, admin_user, labels, tasks_wlc, jobs
     ):
-        # Checks for regressions against the issue https://github.com/opencv/cvat/issues/6871
+        # Checks for regressions against the issue https://github.com/cvat-ai/cvat/issues/6871
 
         task = next(t for t in tasks_wlc if t["jobs"]["count"] and t["labels"]["count"])
         task_labels = [l for l in labels if l.get("task_id") == task["id"]]
diff --git a/tests/python/rest_api/test_projects.py b/tests/python/rest_api/test_projects.py
index 5dc4d23b930b..132fbad8997e 100644
--- a/tests/python/rest_api/test_projects.py
+++ b/tests/python/rest_api/test_projects.py
@@ -637,9 +637,9 @@ def test_admin_can_get_project_backup_and_create_project_by_backup(self, admin_u
     @pytest.mark.parametrize("format_name", ("Datumaro 1.0", "ImageNet 1.0", "PASCAL VOC 1.1"))
     def test_can_import_export_dataset_with_some_format(self, format_name):
-        # https://github.com/opencv/cvat/issues/4410
-        # https://github.com/opencv/cvat/issues/4850
-        # https://github.com/opencv/cvat/issues/4621
+        # https://github.com/cvat-ai/cvat/issues/4410
+        # https://github.com/cvat-ai/cvat/issues/4850
+        # https://github.com/cvat-ai/cvat/issues/4621
         username = "admin1"
         project_id = 4
@@ -700,7 +700,7 @@ def test_exported_project_dataset_structure(
         check_func(content, values_to_be_checked)
 
     def test_can_import_export_annotations_with_rotation(self):
-        # https://github.com/opencv/cvat/issues/4378
+        # https://github.com/cvat-ai/cvat/issues/4378
         username = "admin1"
         project_id = 4
@@ -727,8 +727,8 @@ def test_can_import_export_annotations_with_rotation(self):
         assert task1_rotation == task2_rotation
 
     def test_can_export_dataset_with_skeleton_labels_with_spaces(self):
-        # https://github.com/opencv/cvat/issues/5257
-        # https://github.com/opencv/cvat/issues/5600
+        # https://github.com/cvat-ai/cvat/issues/5257
+        # https://github.com/cvat-ai/cvat/issues/5600
         username = "admin1"
         project_id = 11
diff --git a/tests/python/rest_api/test_remote_url.py b/tests/python/rest_api/test_remote_url.py
index dabd95f8e2a6..a3b0f1c388a2 100644
--- a/tests/python/rest_api/test_remote_url.py
+++ b/tests/python/rest_api/test_remote_url.py
@@ -60,6 +60,6 @@ def test_cannot_create(self, find_users):
     def test_can_create(self, find_users):
         user = find_users(privilege="admin")[0]["username"]
 
-        remote_resources = ["https://opencv.github.io/cvat/favicons/favicon-32x32.png"]
+        remote_resources = ["https://docs.cvat.ai/favicons/favicon-32x32.png"]
 
         self._test_can_create(user, self.task_id, remote_resources)
diff --git a/tests/python/rest_api/test_tasks.py b/tests/python/rest_api/test_tasks.py
index 3efec38173bf..f65415eb447d 100644
--- a/tests/python/rest_api/test_tasks.py
+++ b/tests/python/rest_api/test_tasks.py
@@ -536,7 +536,7 @@ def test_remove_first_keyframe(self):
         assert response.status_code == HTTPStatus.OK
 
     def test_can_split_skeleton_tracks_on_jobs(self, jobs):
-        # https://github.com/opencv/cvat/pull/6968
+        # https://github.com/cvat-ai/cvat/pull/6968
         task_id = 21
         task_jobs = [job for job in jobs if job["task_id"] == task_id]
@@ -557,8 +557,8 @@
             "label_id": 59,
             "frame": 0,
             "shapes": [
-                # https://github.com/opencv/cvat/issues/7498
-                # https://github.com/opencv/cvat/pull/7615
+                # https://github.com/cvat-ai/cvat/issues/7498
+                # https://github.com/cvat-ai/cvat/pull/7615
                 # This shape covers frame 0 to 7,
                 # We need to check if frame 5 is generated correctly for job#1
                 {"type": "points", "frame": 0, "points": [1.0, 2.0]},
@@ -925,7 +925,7 @@ def test_can_create_task_with_exif_rotated_images(self):
 
     def test_can_create_task_with_big_images(self):
         # Checks for regressions about the issue
-        # https://github.com/opencv/cvat/issues/6878
+        # https://github.com/cvat-ai/cvat/issues/6878
         # In the case of big files (>2.5 MB by default),
         # uploaded files could be write-appended twice,
         # leading to bigger raw file sizes than expected.
@@ -2062,7 +2062,7 @@ def test_can_import_backup(self, tasks, mode):
     @pytest.mark.parametrize("mode", ["annotation", "interpolation"])
     def test_can_import_backup_for_task_in_nondefault_state(self, tasks, mode):
         # Reproduces the problem with empty 'mode' in a restored task,
-        # described in the reproduction steps https://github.com/opencv/cvat/issues/5668
+        # described in the reproduction steps https://github.com/cvat-ai/cvat/issues/5668
 
         task_json = next(t for t in tasks if t["mode"] == mode and t["jobs"]["count"])
@@ -2500,7 +2500,7 @@ def test_user_cannot_update_task_with_cloud_storage_without_access(
 
 @pytest.mark.usefixtures("restore_db_per_function")
 def test_can_report_correct_completed_jobs_count(tasks_wlc, jobs_wlc, admin_user):
-    # Reproduces https://github.com/opencv/cvat/issues/6098
+    # Reproduces https://github.com/cvat-ai/cvat/issues/6098
     task = next(
         t
         for t in tasks_wlc
@@ -2677,7 +2677,7 @@ def _make_client() -> Client:
         autouse=True,
         scope="class",
         # classmethod way may not work in some versions
-        # https://github.com/opencv/cvat/actions/runs/5336023573/jobs/9670573955?pr=6350
+        # https://github.com/cvat-ai/cvat/actions/runs/5336023573/jobs/9670573955?pr=6350
         name="TestImportWithComplexFilenames.setup_class",
     )
     @classmethod
@@ -2796,7 +2796,7 @@ def _init_tasks(cls):
         ],
     )
     def test_import_annotations(self, task_kind, annotation_kind, expect_success):
-        # Tests for regressions about https://github.com/opencv/cvat/issues/6319
+        # Tests for regressions about https://github.com/cvat-ai/cvat/issues/6319
         #
         # X annotations must be importable to X prefixed cases
         # with and without dots in filenames.
diff --git a/tests/python/sdk/test_tasks.py b/tests/python/sdk/test_tasks.py
index f5da9494bd7e..b5812c31e91e 100644
--- a/tests/python/sdk/test_tasks.py
+++ b/tests/python/sdk/test_tasks.py
@@ -167,7 +167,7 @@ def test_can_create_task_with_remote_data(self):
             resource_type=ResourceType.SHARE,
             resources=["images/image_1.jpg", "images/image_2.jpg"],
             # make sure string fields are transferred correctly;
-            # see https://github.com/opencv/cvat/issues/4962
+            # see https://github.com/cvat-ai/cvat/issues/4962
             data_params={"sorting_method": "lexicographical"},
         )
diff --git a/utils/README.md b/utils/README.md
index 2f85e05d4729..b873c2009033 100644
--- a/utils/README.md
+++ b/utils/README.md
@@ -5,4 +5,4 @@ This folder contains some useful utilities for Computer Vision Annotation Tool (CVAT).
 
 To read about a certain utility please choose a link:
 
-- [Command line interface for working with CVAT tasks](https://opencv.github.io/cvat/docs/manual/advanced/cli/)
+- [Command line interface for working with CVAT tasks](https://docs.cvat.ai/docs/manual/advanced/cli/)
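A quick way to audit a sweep like this is to search the working tree for any old URLs that were missed. The following is a minimal sketch, assuming GNU grep and a POSIX shell; the directory exclusions are illustrative and not part of the change above:

```shell
# Report any remaining references to the old documentation or repository URLs.
# A grep exit status of 1 (no matches) means the migration left nothing behind.
grep -rn \
    --exclude-dir=.git --exclude-dir=node_modules \
    -e 'opencv\.github\.io/cvat' \
    -e 'github\.com/opencv/cvat' \
    . \
    || echo 'No stale links found.'
```

Note that a few references intentionally keep the old organization name even after this change (for example, the Gitter room and Codecov badge URLs in README.md), so any hits should be reviewed individually rather than replaced blindly.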