Commit

Update readme for rainbow visualization
PiperOrigin-RevId: 580173755
Change-Id: I0bde12b0e93c18666871a025f7de0bc5a15dd595
cdoersch committed Nov 7, 2023
1 parent 3fd9d98 commit 9e3113d
Showing 2 changed files with 5 additions and 11 deletions.
1 change: 1 addition & 0 deletions README.md

@@ -30,6 +30,7 @@ We provide two colab demos:

 1. <a target="_blank" href="https://colab.sandbox.google.com/github/deepmind/tapnet/blob/master/colabs/tapir_demo.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Offline TAPIR"/></a> **Standard TAPIR**: This is the most powerful TAPIR model that runs on a whole video at once. We mainly report the results of this model in the paper.
 2. <a target="_blank" href="https://colab.sandbox.google.com/github/deepmind/tapnet/blob/master/colabs/causal_tapir_demo.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Online TAPIR"/></a> **Online TAPIR**: This is the sequential TAPIR model that allows for online tracking of points, which can be run in real time on a GPU platform.
+3. <a target="_blank" href="https://colab.sandbox.google.com/github/deepmind/tapnet/blob/master/colabs/tapir_rainbow_demo.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="TAPIR Rainbow Visualization"/></a> **Rainbow Visualization**: This visualization is used in many of our teaser videos: it does automatic foreground/background segmentation and corrects the tracks for the camera motion, so you can visualize the paths objects take through real space.

### Live Demo

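The camera-motion correction mentioned in the new README entry works by warping every track into one shared canonical frame before plotting, so tails trace motion through the scene rather than across the image. A minimal numpy sketch of that idea, assuming per-frame 3x3 homographies like the `homogs` array used later in the demo notebook; `warp_tracks_to_canonical` is a hypothetical helper name, not part of the tapnet API:

```python
import numpy as np

def warp_tracks_to_canonical(tracks, homogs):
    """Warp (x, y) point tracks into a shared canonical frame.

    Hypothetical helper illustrating the idea; not part of the tapnet API.
    tracks: [num_points, num_frames, 2] (x, y) positions per frame.
    homogs: [num_frames, 3, 3] homographies mapping each frame into the
            canonical frame (camera motion estimated from background points).
    Returns an array with the same shape as tracks.
    """
    num_points, num_frames, _ = tracks.shape
    # Lift to homogeneous coordinates: [num_points, num_frames, 3].
    ones = np.ones((num_points, num_frames, 1))
    pts_h = np.concatenate([tracks, ones], axis=-1)
    # Per-frame matrix multiply: warped[p, t, i] = sum_j H[t, i, j] * pts_h[p, t, j].
    warped = np.einsum('tij,ptj->pti', homogs, pts_h)
    # Back to inhomogeneous (x, y); guard against division by ~0.
    w = warped[..., 2:]
    return warped[..., :2] / np.where(np.abs(w) < 1e-12, 1e-12, w)
```

With identity homographies the tracks come back unchanged; a pure-translation homography shifts every point by the same offset, which is exactly the effect of undoing a panning camera.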
15 changes: 4 additions & 11 deletions colabs/tapir_rainbow_demo.ipynb

@@ -282,20 +282,13 @@
 " [width, height]\n",
 ")\n",
 "\n",
-"# sort by position in canonical frame. In this demo they're already essentially\n",
-"# sorted, but if you query points from multiple frames or are chosen randomly,\n",
-"# they won't be.\n",
-"ordr = np.argsort(canonical[:,1])\n",
-"sorted_tracks = tracks[ordr]\n",
-"sorted_occ = occluded[ordr]\n",
-"sorted_err = err[ordr]\n",
-"inlier_ct = np.sum((sorted_err \u003c np.square(0.07)) * (1 - sorted_occ), axis=-1)\n",
-"ratio = inlier_ct / np.maximum(1.0, np.sum(1 - sorted_occ, axis=1))\n",
+"inlier_ct = np.sum((err \u003c np.square(0.07)) * visibles, axis=-1)\n",
+"ratio = inlier_ct / np.maximum(1.0, np.sum(visibles, axis=1))\n",
 "is_fg = ratio \u003c= 0.60\n",
 "video = viz_utils.plot_tracks_tails(\n",
 " orig_frames,\n",
-" sorted_tracks[is_fg],\n",
-" sorted_occ[is_fg],\n",
+" tracks[is_fg],\n",
+" occluded[is_fg],\n",
 " homogs\n",
 ")\n",
 "media.show_video(video, fps=16)"
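Read in isolation, the rewritten notebook cell classifies a track as foreground when it fits the estimated background (camera) motion in at most 60% of its visible frames. A self-contained sketch of that test, assuming `err` holds per-frame squared alignment error against the background motion model and `visibles` is 1 minus `occluded`; `foreground_mask` is a hypothetical helper name mirroring the cell, not a tapnet function:

```python
import numpy as np

def foreground_mask(err, visibles, pixel_thresh=0.07, max_inlier_ratio=0.60):
    """Label tracks that move differently from the (camera) background.

    Hypothetical helper mirroring the notebook cell; not part of tapnet.
    err:      [num_points, num_frames] squared error of each track against
              the background motion model, in the same normalized units the
              demo uses (hence the comparison with pixel_thresh ** 2).
    visibles: [num_points, num_frames], 1.0 where the point is visible.
    Returns a boolean [num_points] mask; True means foreground.
    """
    # A visible point is a background inlier when its error is small.
    inlier_ct = np.sum((err < np.square(pixel_thresh)) * visibles, axis=-1)
    # Fraction of visible frames that are inliers (guard against /0).
    ratio = inlier_ct / np.maximum(1.0, np.sum(visibles, axis=-1))
    # Tracks that rarely fit the background motion are foreground.
    return ratio <= max_inlier_ratio
```

Occluded frames are excluded from both the inlier count and the denominator, so a briefly hidden foreground point is not penalized for frames where it could not be measured.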
