From 61028675ba1a90772667928756c602cfc173887a Mon Sep 17 00:00:00 2001
From: Dilara Gokay
Date: Tue, 7 Nov 2023 07:53:45 -0800
Subject: [PATCH] Replace "sandbox" with "research" in colab links.

PiperOrigin-RevId: 580179105
Change-Id: I54eaf2ccac185146aa15add08440cd743b9c4ed2
---
 README.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index 2162dab..4a43ce3 100644
--- a/README.md
+++ b/README.md
@@ -28,9 +28,9 @@ clone this repo and run TAPIR on your own hardware, including a realtime demo.
 You can run colab demos to see how TAPIR works. You can also upload your own
 video and try point tracking with TAPIR. We provide three colab demos:

-1. Offline TAPIR **Standard TAPIR**: This is the most powerful TAPIR model that runs on a whole video at once. We mainly report the results of this model in the paper.
-2. Online TAPIR **Online TAPIR**: This is the sequential TAPIR model that allows for online tracking of points, which can be run in realtime on a GPU platform.
-3. TAPIR Rainbow Visualization **Rainbow Visualization**: This visualization is used in many of our teaser videos: it does automatic foreground/background segmentation and corrects the tracks for the camera motion, so you can visualize the paths objects take through real space.
+1. Offline TAPIR **Standard TAPIR**: This is the most powerful TAPIR model that runs on a whole video at once. We mainly report the results of this model in the paper.
+2. Online TAPIR **Online TAPIR**: This is the sequential TAPIR model that allows for online tracking of points, which can be run in realtime on a GPU platform.
+3. TAPIR Rainbow Visualization **Rainbow Visualization**: This visualization is used in many of our teaser videos: it does automatic foreground/background segmentation and corrects the tracks for the camera motion, so you can visualize the paths objects take through real space.

 ### Live Demo

@@ -149,7 +149,7 @@ The [RoboTAP dataset](https://storage.googleapis.com/dm-tapnet/robotap/robotap.z
 For more details of downloading and visualization of the dataset, please see
 the [data section](https://github.com/deepmind/tapnet/tree/main/data).

-Point Clustering **Point track based clustering**: You can run this colab demo to see how point track based clustering works. Given an input video, the point tracks are extracted from TAPIR and further separated into different clusters according to different motion patterns. This is purely based on the low level motion and does not depend on any semantics or segmentation labels. You can also upload your own video and try point track based clustering.
+Point Clustering **Point track based clustering**: You can run this colab demo to see how point track based clustering works. Given an input video, the point tracks are extracted from TAPIR and further separated into different clusters according to different motion patterns. This is purely based on the low level motion and does not depend on any semantics or segmentation labels. You can also upload your own video and try point track based clustering.

 ## TAP-Net and TAPIR Training and Inference