Commit

Implemented batched SurfaceMesh class that greatly simplifies working with surface meshes (NVIDIAGameWorks#740)

* Implemented batched SurfaceMesh class that greatly simplifies working with
normals and other attributes loaded from different mesh representations.

Changes also include:
- backward-incompatible changes to the USD and OBJ readers
- an extensive tutorial for working with meshes
- much simplified boilerplate code in many tutorials
- a utility to compute vertex normals
- a utility to center (and optionally normalize) points, simplifying code duplicated across tutorials (see the sketch after this list)
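
A minimal sketch of the headline change in use (`path` is a placeholder; exact defaults may differ):

    import kaolin

    mesh = kaolin.io.obj.import_mesh(path, triangulate=True)  # now returns a SurfaceMesh
    print(mesh)                      # summarizes attributes and batching strategy
    vnorm = mesh.vertex_normals      # auto-computed on access
    mesh.vertices = kaolin.ops.pointcloud.center_points(
        mesh.vertices.unsqueeze(0), normalize=True).squeeze(0)  # new centering utility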

Signed-off-by: Maria Masha Shugrina <[email protected]>

* update python in readthedocs

Signed-off-by: Clement Fuji Tsang <[email protected]>

* force scipy version to 1.10.1

Signed-off-by: Clement Fuji Tsang <[email protected]>

---------

Signed-off-by: Maria Masha Shugrina <[email protected]>
Signed-off-by: Clement Fuji Tsang <[email protected]>
Co-authored-by: Maria Masha Shugrina <[email protected]>
Co-authored-by: Clement Fuji Tsang <[email protected]>
3 people authored Jul 11, 2023
1 parent e3716b2 commit ea70a80
Showing 45 changed files with 4,292 additions and 544 deletions.
27 changes: 25 additions & 2 deletions docs/_templates/layout.html
@@ -4,6 +4,7 @@
<style>
:root {
--nvidia-color: #76B900;
--dark-green: #008564;
}

a, a:visited, a:active {
@@ -27,13 +28,35 @@
background: #b8d27c;
}

html.writer-html4 .rst-content dl:not(.docutils) dl:not(.field-list)>dt, html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple) dl:not(.field-list):not(.simple)>dt.sig {
background-color: #eaefe0;
border-left: 3px solid var(--nvidia-color);
}

html.writer-html4 .rst-content dl:not(.docutils)>dt, html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt.sig {
background: #eaefe0;
border-top: 3px solid var(--nvidia-color);
}

html.writer-html4 .rst-content dl:not(.docutils)>dt, html.writer-html5 .rst-content dl[class]:not(.option-list):not(.field-list):not(.footnote):not(.glossary):not(.simple)>dt {
color: var(--dark-green);
}

.icon, .version, a.icon.icon-home {
color: white;
}

table.center-align-center-col td {
text-align: center
}

.rubric, p.rubric {
margin-bottom: 15px;
font-weight: 700;
font-size: 120%;
color: var(--dark-green);
border-bottom: 1px solid var(--dark-green);
}

</style>
{% endblock %}
2 changes: 2 additions & 0 deletions docs/conf.py
@@ -38,6 +38,8 @@

todo_include_todos = True

autodoc_typehints = "description"

intersphinx_mapping = {
'python': ("https://docs.python.org/3", None),
'numpy': ('https://numpy.org/doc/stable/', None),
1 change: 1 addition & 0 deletions docs/modules/kaolin.ops.rst
@@ -14,6 +14,7 @@ Tensor batching operators are in :ref:`kaolin.ops.batch`, conversions of 3D mode
kaolin.ops.batch
kaolin.ops.coords
kaolin.ops.conversions
kaolin.ops.pointcloud
kaolin.ops.gcn
kaolin.ops.mesh
kaolin.ops.random
2 changes: 2 additions & 0 deletions docs/modules/kaolin.render.camera.rst
@@ -3,6 +3,8 @@
kaolin.render.camera
====================

Kaolin provides an extensive camera API. For an overview, see the :ref:`Camera class docs <kaolin.render.camera.Camera>`.

API
---

27 changes: 27 additions & 0 deletions docs/modules/kaolin.rep.rst
@@ -0,0 +1,27 @@
.. _kaolin.rep:

kaolin.rep
==========

This module includes higher-level Kaolin classes ("representations").

API
---

Classes
^^^^^^^

* :ref:`SurfaceMesh <kaolin.rep.SurfaceMesh>`
* :ref:`Spc <kaolin.rep.Spc>`

Other
^^^^^^^^^

.. automodule:: kaolin.rep
:members:
:exclude-members:
SurfaceMesh,
Spc
:undoc-members:
:show-inheritance:

14 changes: 14 additions & 0 deletions docs/modules/kaolin.rep.spc.rst
@@ -0,0 +1,14 @@
:orphan:

.. _kaolin.rep.Spc:

kaolin.rep.Spc
===========================

API
---

.. autoclass:: kaolin.rep.Spc
:members:
:undoc-members:
:show-inheritance:
146 changes: 146 additions & 0 deletions docs/modules/kaolin.rep.surface_mesh.rst
@@ -0,0 +1,146 @@
:orphan:

.. _kaolin.rep.SurfaceMesh:

SurfaceMesh
===========================

Tutorial
--------

For a walk-through of :class:`kaolin.rep.SurfaceMesh` features,
see `working_with_meshes.ipynb <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/working_with_meshes.ipynb>`_.

API
---

* :ref:`Overview <rubric mesh overview>`
* :ref:`Supported Attributes <rubric mesh attributes>`
* :ref:`Batching <rubric mesh batching>`
* :ref:`Attribute Access and Auto-Computability <rubric mesh attribute access>`
* :ref:`Inspecting and Copying <rubric mesh inspecting>`
* :ref:`Tensor Operations <rubric mesh tensor ops>`

.. autoclass:: kaolin.rep.SurfaceMesh
:members:
:undoc-members:
:member-order: bysource
:exclude-members: Batching, attribute_info_string, set_batching, to_batched, getattr_batched, cat,
vertices, face_vertices, normals, face_normals, vertex_normals, uvs, face_uvs, faces, face_normals_idx, face_uvs_idx,
material_assignments, materials, cuda, cpu, to, float_tensors_to, detach, get_attributes, has_attribute, has_or_can_compute_attribute,
probably_can_compute_attribute, get_attribute, get_or_compute_attribute, check_sanity, to_string, as_dict, describe_attribute,
unset_attributes_return_none, allow_auto_compute, batching, convert_attribute_batching


.. _rubric mesh batching:

.. rubric:: Supported Batching Strategies

``SurfaceMesh`` can be instantiated with any of the following batching
strategies, and supports conversion between them. The current
batching strategy of a ``mesh`` object can be read from ``mesh.batching`` or
by running ``print(mesh)``.

For example::

mesh = kaolin.io.obj.load_mesh(path)
print(mesh)
mesh.to_batched()
print(mesh)
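
The strategy can also be set explicitly; a sketch, assuming the ``Batching``
values documented below::

    mesh.set_batching(kaolin.rep.SurfaceMesh.Batching.LIST)   # list of variable-topology meshes
    mesh.set_batching(kaolin.rep.SurfaceMesh.Batching.FIXED)  # fixed-topology tensor batching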

.. autoclass:: kaolin.rep.SurfaceMesh.Batching
:members:

.. automethod:: attribute_info_string
.. automethod:: check_sanity
.. automethod:: set_batching
.. automethod:: to_batched
.. automethod:: getattr_batched
.. automethod:: cat
.. automethod:: convert_attribute_batching

.. _rubric mesh attribute access:

.. rubric:: Attribute Access

By default, ``SurfaceMesh`` will attempt to auto-compute missing attributes
on access. These attributes will be cached, unless their ancestors have
``requires_grad == True``. This behavior of the ``mesh`` object can be changed
at construction time (``allow_auto_compute=False``) or by setting
``mesh.allow_auto_compute`` later. In addition to this convenience API,
explicit methods for attribute access are also supported.

For example, using **convenience API**::

# Caching is enabled by default
mesh = kaolin.io.obj.load_mesh(path, with_normals=False)
print(mesh)
print(mesh.has_attribute('face_normals')) # False
fnorm = mesh.face_normals # Auto-computed
print(mesh.has_attribute('face_normals')) # True (cached)

# Caching is disabled when gradients need to flow
mesh = kaolin.io.obj.load_mesh(path, with_normals=False)
mesh.vertices.requires_grad = True # causes caching to be off
print(mesh.has_attribute('face_normals')) # False
fnorm = mesh.face_normals # Auto-computed
print(mesh.has_attribute('face_normals')) # False (caching disabled)


For example, using **explicit API**::

mesh = kaolin.io.obj.load_mesh(path, with_normals=False)
print(mesh.has_attribute('face_normals')) # False
fnorm = mesh.get_or_compute_attribute('face_normals', should_cache=False)
print(mesh.has_attribute('face_normals')) # False


.. automethod:: get_attributes
.. automethod:: has_attribute
.. automethod:: has_or_can_compute_attribute
.. automethod:: probably_can_compute_attribute
.. automethod:: get_attribute
.. automethod:: get_or_compute_attribute
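
For example, combining these methods (a sketch)::

    mesh = kaolin.io.obj.load_mesh(path, with_normals=False)
    if mesh.has_or_can_compute_attribute('vertex_normals'):
        vnorm = mesh.get_or_compute_attribute('vertex_normals', should_cache=True)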

.. _rubric mesh inspecting:

.. rubric:: Inspecting and Copying Meshes

To make meshes easier to work with, ``SurfaceMesh`` supports detailed print
statements, as well as ``len()``, ``copy()`` and ``deepcopy()``, and can be
converted to a dictionary.

Supported operations::

import copy
mesh_copy = copy.copy(mesh)
mesh_copy = copy.deepcopy(mesh)
batch_size = len(mesh)

# Print default attributes
print(mesh)

# Print more detailed attributes
print(mesh.to_string(detailed=True, print_stats=True))

# Print specific attribute
print(mesh.describe_attribute('vertices'))

.. automethod:: to_string
.. automethod:: describe_attribute
.. automethod:: as_dict

.. _rubric mesh tensor ops:

.. rubric:: Tensor Operations

Convenience operations for device and type conversions of some or all member
tensors.

.. automethod:: cuda
.. automethod:: cpu
.. automethod:: to
.. automethod:: float_tensors_to
.. automethod:: detach
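
For example (a sketch, assuming these methods return the converted mesh)::

    mesh = mesh.cuda()                        # move all member tensors to GPU
    mesh = mesh.float_tensors_to(torch.half)  # convert only floating-point tensors
    mesh = mesh.detach()                      # detach all tensors from autograd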

.. rubric:: Other
2 changes: 1 addition & 1 deletion docs/notes/spc_summary.rst
@@ -274,6 +274,6 @@ Functions useful for working with SPCs are available in the following modules:

* :ref:`kaolin.ops.spc<kaolin.ops.spc>` - general explanation and operations
* :ref:`kaolin.render.spc<kaolin.render.spc>` - rendering utilities
* :class:`kaolin.rep.Spc` - high-level wrapper.. _kaolin.ops.spc:
* :class:`kaolin.rep.Spc` - high-level wrapper


41 changes: 23 additions & 18 deletions docs/notes/tutorial_index.rst
@@ -10,6 +10,28 @@ point to master.
Detailed Tutorials
------------------

* `Camera and Rasterization <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/camera_and_rasterization.ipynb>`_: Rasterize ShapeNet mesh with nvdiffrast and camera:
* Load ShapeNet mesh
* Preprocess mesh and materials
* Create a camera with ``from_args()`` general constructor
* Render a mesh with multiple materials with nvdiffrast
* Move camera and see the resulting rendering
* `Optimizing Diffuse Lighting <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/diffuse_lighting.ipynb>`_: Optimize lighting parameters with spherical gaussians and spherical harmonics:
* Load an obj mesh with normals and materials
* Rasterize the diffuse and specular albedo
* Render and optimize diffuse lighting:
* Spherical harmonics
* Spherical gaussian with inner product implementation
* Spherical gaussian with fitted approximation
* `Optimize Diffuse and Specular Lighting with Spherical Gaussians <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/sg_specular_lighting.ipynb>`_:
* Load an obj mesh with normals and materials
* Generate view rays from camera
* Rasterize the diffuse and specular albedo
* Render and optimize diffuse and specular lighting with spherical gaussians
* `Working with Surface Meshes <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/working_with_meshes.ipynb>`_:
* loading and constructing :class:`kaolin.rep.SurfaceMesh` objects
* batching of meshes
* auto-computing common attributes (like ``face_normals``)
* `Deep Marching Tetrahedra <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/dmtet_tutorial.ipynb>`_: reconstructs a tetrahedral mesh from point clouds with `DMTet <https://nv-tlabs.github.io/DMTet/>`_, covering:
* generating data with Omniverse Kaolin App
* loading point clouds from a ``.usd`` file
@@ -51,24 +73,7 @@ Detailed Tutorials
* applying marching tetrahedra
* using Timelapse API for 3D checkpoints
* visualizing 3D checkpoints using ``kaolin-dash3d``
* `Camera and Rasterization <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/camera_and_rasterization.ipynb>`_: Rasterize ShapeNet mesh with nvdiffrast and camera:
* Load ShapeNet mesh
* Preprocess mesh and materials
* Create a camera with ``from_args()`` general constructor
* Render a mesh with multiple materials with nvdiffrast
* Move camera and see the resulting rendering
* `Optimizing Diffuse Lighting <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/diffuse_lighting.ipynb>`_: Optimize lighting parameters with spherical gaussians and spherical harmonics:
* Load an obj mesh with normals and materials
* Rasterize the diffuse and specular albedo
* Render and optimize diffuse lighting:
* Spherical harmonics
* Spherical gaussian with inner product implementation
* Spherical gaussian with fitted approximation
* `Optimize Diffuse and Specular Lighting with Spherical Gaussians <https://github.com/NVIDIAGameWorks/kaolin/blob/master/examples/tutorial/sg_specular_lighting.ipynb>`_:
* Load an obj mesh with normals and materials
* Generate view rays from camera
* Rasterize the diffuse and specular albedo
* Render and optimize diffuse and specular lighting with spherical gaussians


Simple Recipes
--------------
1 change: 1 addition & 0 deletions docs/readthedocs_requirements.txt
@@ -1,3 +1,4 @@
numpy<1.27.0,>=1.19.5
scipy==1.10.1
-f https://download.pytorch.org/whl/lts/1.8/torch_lts.html
torch==1.8.2+cpu
6 changes: 3 additions & 3 deletions examples/tutorial/bbox_tutorial.ipynb
@@ -1584,13 +1584,13 @@
"metadata": {},
"outputs": [],
"source": [
"ani.save(\"animation.gif\", writer=animation.PillowWriter()) # optionally save the animation"
"# ani.save(\"animation.gif\", writer=animation.PillowWriter()) # optionally save the animation"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
@@ -1604,7 +1604,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.11"
"version": "3.8.13"
}
},
"nbformat": 4,
79 changes: 46 additions & 33 deletions examples/tutorial/camera_and_rasterization.ipynb


267 changes: 126 additions & 141 deletions examples/tutorial/dibr_tutorial.ipynb


133 changes: 76 additions & 57 deletions examples/tutorial/diffuse_lighting.ipynb


31 changes: 22 additions & 9 deletions examples/tutorial/dmtet_tutorial.ipynb
@@ -18,7 +18,9 @@
"cell_type": "code",
"execution_count": 1,
"id": "31d9198f",
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"import torch\n",
@@ -39,7 +41,9 @@
"cell_type": "code",
"execution_count": 2,
"id": "58c9c196",
"metadata": {},
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# arguments and hyperparameters\n",
@@ -64,10 +68,21 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 5,
"id": "5674d9a2",
"metadata": {},
"outputs": [],
"metadata": {
"tags": []
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"True\n",
"torch.Size([89164, 3])\n"
]
}
],
"source": [
"points = kaolin.io.usd.import_pointclouds(pcd_path)[0].points.to(device)\n",
"if points.shape[0] > 100000:\n",
@@ -77,9 +92,7 @@
" points = points[idx]\n",
"\n",
"# The reconstructed object needs to be slightly smaller than the grid to get watertight surface after MT.\n",
"center = (points.max(0)[0] + points.min(0)[0]) / 2\n",
"max_l = (points.max(0)[0] - points.min(0)[0]).max()\n",
"points = ((points - center) / max_l)* 0.9\n",
"points = kaolin.ops.pointcloud.center_points(points.unsqueeze(0), normalize=True).squeeze(0) * 0.9\n",
"timelapse.add_pointcloud_batch(category='input',\n",
" pointcloud_list=[points.cpu()], points_type = \"usd_geom_points\")"
]
@@ -348,7 +361,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.11"
"version": "3.8.13"
}
},
"nbformat": 4,
109 changes: 67 additions & 42 deletions examples/tutorial/interactive_visualizer.ipynb
@@ -26,15 +26,20 @@
"import copy\n",
"import glob\n",
"import math\n",
"import logging\n",
"import numpy as np\n",
"import os\n",
"import sys\n",
"import torch\n",
"\n",
"from tutorial_common import COMMON_DATA_DIR\n",
"import kaolin as kal\n",
"\n",
"import nvdiffrast\n",
"glctx = nvdiffrast.torch.RasterizeGLContext(False, device='cuda')"
"glctx = nvdiffrast.torch.RasterizeGLContext(False, device='cuda')\n",
"\n",
"def print_tensor(t, **kwargs):\n",
" print(kal.utils.testing.tensor_info(t, **kwargs))"
]
},
{
@@ -52,7 +57,26 @@
"metadata": {
"tags": []
},
"outputs": [],
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"SurfaceMesh object with batching strategy FIXED\n",
" vertices: [1, 52081, 3] (torch.float32)[cuda:0] \n",
" uvs: [1, 633, 2] (torch.float32)[cuda:0] \n",
" faces: [200018, 3] (torch.int64)[cuda:0] \n",
" face_uvs_idx: [1, 200018, 3] (torch.int64)[cuda:0] \n",
"material_assignments: [1, 200018] (torch.int16)[cuda:0] \n",
" materials: list of length 44\n",
" vertex_normals: will be computed on access from (if present): (faces, face_normals)\n",
" face_normals: will be computed on access from (if present): (normals, face_normals_idx) or (vertices, faces)\n",
" face_uvs: will be computed on access from (if present): (uvs, face_uvs_idx)\n",
" face_vertices: will be computed on access from (if present): (faces, vertices)\n",
"['batching', 'allow_auto_compute', 'unset_attributes_return_none', 'materials', 'vertices', 'uvs', 'faces', 'face_uvs_idx', 'material_assignments']\n"
]
}
],
"source": [
"# Set KAOLIN_TEST_SHAPENETV2_PATH env variable, or replace by your shapenet path\n",
"SHAPENETV2_PATH = os.getenv('KAOLIN_TEST_SHAPENETV2_PATH')\n",
@@ -67,33 +91,23 @@
"else:\n",
" # Load a specific obj instead\n",
" OBJ_PATH = os.path.join(COMMON_DATA_DIR, 'meshes', 'fox.obj')\n",
" mesh = kal.io.obj.import_mesh(OBJ_PATH, with_materials=True, triangulate=True)\n",
"\n",
"# Normalize the data between [-0.5, 0.5]\n",
"vertices = mesh.vertices.unsqueeze(0).cuda()\n",
"vertices_min = vertices.min(dim=1, keepdims=True)[0]\n",
"vertices_max = vertices.max(dim=1, keepdims=True)[0]\n",
"vertices -= (vertices_max + vertices_min) / 2.\n",
"vertices /= (vertices_max - vertices_min).max()\n",
"faces = mesh.faces.cuda()\n",
"\n",
"# Here we are preprocessing the materials, assigning faces to materials and\n",
"# using single diffuse color as backup when map doesn't exist (and face_uvs_idx == -1)\n",
"uvs = torch.nn.functional.pad(mesh.uvs.unsqueeze(0).cuda(), (0, 0, 0, 1)) % 1.\n",
"face_uvs_idx = mesh.face_uvs_idx.cuda()\n",
"face_material_idx = mesh.material_assignments.cuda()\n",
" mesh = kal.io.obj.import_mesh(OBJ_PATH, with_materials=True, with_normals=True, triangulate=True)\n",
"\n",
"# Batch, move to GPU and center and normalize vertices in the range [-0.5, 0.5]\n",
"mesh = mesh.to_batched().cuda()\n",
"mesh.vertices = kal.ops.pointcloud.center_points(mesh.vertices, normalize=True)\n",
"print(mesh)\n",
"\n",
"diffuse_maps = [m['map_Kd'].permute(2, 0, 1).unsqueeze(0).cuda().float() / 255. if 'map_Kd' in m else\n",
" m['Kd'].reshape(1, 3, 1, 1).cuda()\n",
" for m in mesh.materials]\n",
" for m in mesh.materials[0]]\n",
"specular_maps = [m['map_Ks'].permute(2, 0, 1).unsqueeze(0).cuda().float() / 255. if 'map_Ks' in m else\n",
" m['Ks'].reshape(1, 3, 1, 1).cuda()\n",
" for m in mesh.materials]\n",
"nb_faces = faces.shape[0]\n",
" for m in mesh.materials[0]]\n",
"\n",
"mask = face_uvs_idx == -1\n",
"face_uvs_idx[mask] = uvs.shape[1] - 1\n",
"face_vertices = kal.ops.mesh.index_vertices_by_faces(vertices, faces)\n",
"face_world_normals = kal.ops.mesh.face_normals(face_vertices, unit=True)"
"# Use a single diffuse color as backup when map doesn't exist (and face_uvs_idx == -1)\n",
"mesh.uvs = torch.nn.functional.pad(mesh.uvs, (0, 0, 0, 1)) % 1.\n",
"mesh.face_uvs_idx[mesh.face_uvs_idx == -1] = mesh.uvs.shape[1] - 1"
]
},
{
@@ -188,11 +202,11 @@
" return ray_dir[0].reshape(1, height, width, 3)\n",
"\n",
"\n",
"def base_render(camera, height, width):\n",
" \"\"\"Base function for rendering using separate height and width\"\"\"\n",
" transformed_vertices = camera.transform(vertices)\n",
"def base_render(mesh, camera, height, width):\n",
" \"\"\"Base function for rendering using separate height and width, assuming batch_size=1\"\"\"\n",
" transformed_vertices = camera.transform(mesh.vertices)\n",
" face_vertices_camera = kal.ops.mesh.index_vertices_by_faces(\n",
" transformed_vertices, faces)\n",
" transformed_vertices, mesh.faces)\n",
" face_normals_z = kal.ops.mesh.face_normals(\n",
" face_vertices_camera,\n",
" unit=True\n",
@@ -202,15 +216,26 @@
" transformed_vertices, (0, 1), mode='constant', value=1.\n",
" ).contiguous()\n",
" rast = nvdiffrast.torch.rasterize(\n",
" glctx, pos, faces.int(), (height, width), grad_db=False)\n",
" glctx, pos, mesh.faces.int(), (height, width), grad_db=False)\n",
" hard_mask = rast[0][:, :, :, -1:] != 0\n",
" face_idx = (rast[0][..., -1].long() - 1).contiguous()\n",
"\n",
" uv_map = nvdiffrast.torch.interpolate(\n",
" uvs, rast[0], face_uvs_idx.int())[0]\n",
"\n",
" im_world_normals = face_world_normals.reshape(-1, 3)[face_idx]\n",
" im_cam_normals = face_normals_z.reshape(-1, 1)[face_idx]\n",
" mesh.uvs, rast[0], mesh.face_uvs_idx[0, ...].int())[0]\n",
" \n",
" if mesh.has_attribute('normals') and mesh.has_attribute('face_normals_idx'):\n",
" im_world_normals = nvdiffrast.torch.interpolate(\n",
" mesh.normals, rast[0], mesh.face_normals_idx[0, ...].int())[0]\n",
" else:\n",
" im_world_normals = nvdiffrast.torch.interpolate(\n",
" mesh.face_normals.reshape(len(mesh), -1, 3), rast[0],\n",
" torch.arange(mesh.faces.shape[0] * 3, device='cuda', dtype=torch.int).reshape(-1, 3)\n",
" )[0]\n",
" \n",
" batch_idx = torch.arange(len(mesh), device='cuda', dtype=torch.long).reshape(\n",
" len(mesh), 1, 1).expand(len(mesh), height, width)\n",
" \n",
" im_cam_normals = face_normals_z[batch_idx, face_idx] * (face_idx.unsqueeze(-1) != -1)\n",
" im_world_normals = im_world_normals * torch.sign(im_cam_normals)\n",
" albedo = torch.zeros(\n",
" (1, height, width, 3),\n",
@@ -222,7 +247,7 @@
" )\n",
" # Obj meshes can be composed of multiple materials\n",
" # so at rendering we need to interpolate from corresponding materials\n",
" im_material_idx = face_material_idx[face_idx]\n",
" im_material_idx = mesh.material_assignments[0, ...][face_idx]\n",
" im_material_idx[face_idx == -1] = -1\n",
"\n",
" for i, material in enumerate(diffuse_maps):\n",
@@ -276,14 +301,14 @@
" \n",
" This is the main function provided to the interactive visualizer\n",
" \"\"\"\n",
" return base_render(camera, camera.height, camera.width)\n",
" return base_render(mesh, camera, camera.height, camera.width)\n",
"\n",
"def lowres_render(camera):\n",
" \"\"\"Render with lower dimension.\n",
" \n",
" This function will be used as a \"fast\" rendering used when the mouse is moving to avoid slow down.\n",
" \"\"\"\n",
" return base_render(camera, int(camera.height / 4), int(camera.width / 4))"
" return base_render(mesh, camera, int(camera.height / 4), int(camera.width / 4))"
]
},
{
@@ -309,7 +334,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "58df1f054a304a7d930351071772df71",
"model_id": "675199e8811d4315be9106a4b9839459",
"version_major": 2,
"version_minor": 0
},
@@ -323,7 +348,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "7fb6fcd6b1654bbf85b0fb29d90db63b",
"model_id": "e5d1372ce3cd4a29be14e2738c80d94f",
"version_major": 2,
"version_minor": 0
},
@@ -370,7 +395,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "203477ec894846608804c53003c943aa",
"model_id": "dd7e3bc6947241e98e97211a9611027a",
"version_major": 2,
"version_minor": 0
},
@@ -384,7 +409,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "855cef7b48d84178949f45d63da8c5d4",
"model_id": "b152d54454194a1a9d5377b0ca4625c4",
"version_major": 2,
"version_minor": 0
},
@@ -431,7 +456,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c66c4c66d41d4162a74531b5eb548877",
"model_id": "84164d50ea1343bcba76b9625c93962b",
"version_major": 2,
"version_minor": 0
},
@@ -445,7 +470,7 @@
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "f0979c37078448bbb4635e76ebca8e0a",
"model_id": "d288140ef7c34e1d8d7a8e2d2dfc2174",
"version_major": 2,
"version_minor": 0
},
112 changes: 66 additions & 46 deletions examples/tutorial/sg_specular_lighting.ipynb


217 changes: 105 additions & 112 deletions examples/tutorial/understanding_spcs_tutorial.ipynb


17 changes: 5 additions & 12 deletions examples/tutorial/visualize_main.py
@@ -30,9 +30,7 @@ def __normalize_vertices(vertices):
Normalizes vertices to fit a [-1, 1] bounding box,
common during training, but not necessary for visualization.
"""
result = vertices - torch.mean(vertices, dim=0).unsqueeze(0)
span = torch.max(result, dim=0).values - torch.min(result, dim=0).values
return result / torch.max(span)
return kaolin.ops.pointcloud.center_points(vertices.unsqueeze(0), normalize=True).squeeze(0) * 2


if __name__ == "__main__":
@@ -70,7 +68,7 @@ def __normalize_vertices(vertices):
# TODO: add textured example
for f in obj_files:
res = kaolin.io.obj.import_mesh(f)
vertices = res.vertices if args.skip_normalization else __normalize_vertices(res.vertices)
vertices = res.vertices if args.skip_normalization else __normalize_vertices(res.vertices)
num_samples = random.randint(1000, 1500) # Vary to ensure robustness
pts = kaolin.ops.mesh.sample_points(
vertices.unsqueeze(0), res.faces, num_samples)[0].squeeze(0)
@@ -131,14 +129,9 @@ def __normalize_vertices(vertices):
voxelgrid_list=out_voxels)

logger.info('Emulated training complete!\n'
'You can now view created USD files found here: {}\n\n'
'You will soon be able to visualize these in the Kaolin Omniverse App '
'and our web visualizer. Stay tuned!'.format(args.output_dir))

# TODO(mshugrina): update command line once finalized
# 'Now try visualizing the results by running:\n'
# ' kaolin/dash3d/run.py --logdir={}\n'
# 'And then navigating to localhost:8080\n'.format(args.output_dir))
'You can now view created USD files by running:\n\n'
f'kaolin-dash3d --logdir={args.output_dir}\n\n'
'And then navigating to localhost:8080\n')

# TODO(mshugrina): once dash3d is also integrated, write an integration test
# to ensure timelapse output is properly parsed by the visualizer
836 changes: 836 additions & 0 deletions examples/tutorial/working_with_meshes.ipynb


10 changes: 4 additions & 6 deletions kaolin/io/obj.py
@@ -23,6 +23,7 @@
from kaolin.io.materials import MaterialLoadError, MaterialFileError, MaterialNotFoundError, \
process_materials_and_assignments
from kaolin.io import utils
from kaolin.rep import SurfaceMesh

__all__ = [
'ignore_error_handler',
@@ -32,10 +33,6 @@
'import_mesh'
]

return_type = namedtuple('return_type',
['vertices', 'faces', 'uvs', 'face_uvs_idx', 'materials',
'material_assignments', 'normals', 'face_normals_idx'])


def ignore_error_handler(error, **kwargs):
"""Simple error handler to use in :func:`load_obj` that ignore all errors"""
@@ -256,8 +253,9 @@ def _apply_handler(handler):
normals = None
face_normals_idx = None

return return_type(vertices, faces, uvs, face_uvs_idx, materials,
material_assignments, normals, face_normals_idx)
return SurfaceMesh(vertices=vertices, faces=faces, uvs=uvs, face_uvs_idx=face_uvs_idx, materials=materials,
material_assignments=material_assignments, normals=normals, face_normals_idx=face_normals_idx,
unset_attributes_return_none=True) # for greater backward compatibility
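
With this change, code that consumed the old namedtuple keeps working through attribute
access; a sketch of the intended backward compatibility (`path` is a placeholder):

    mesh = kaolin.io.obj.import_mesh(path)   # now a SurfaceMesh, not a namedtuple
    v, f = mesh.vertices, mesh.faces         # attribute access works as before
    # attributes that were not loaded return None instead of raising,
    # because unset_attributes_return_none=True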


def load_mtl(mtl_path, error_handler):
37 changes: 17 additions & 20 deletions kaolin/io/usd/mesh.py
@@ -28,13 +28,11 @@

from kaolin.io import materials as usd_materials
from kaolin.io import utils
from kaolin.rep import SurfaceMesh

from .utils import _get_stage_from_maybe_file, get_scene_paths, create_stage


mesh_return_type = namedtuple('mesh_return_type', ['vertices', 'faces',
'uvs', 'face_uvs_idx', 'face_normals', 'material_assignments',
'materials'])
__all__ = [
'import_mesh',
'import_meshes',
@@ -445,7 +443,7 @@ def import_mesh(file_path_or_stage, scene_path=None, with_materials=False, with_
heterogeneous_mesh_handler=heterogeneous_mesh_handler,
with_materials=with_materials,
with_normals=with_normals, times=[time], triangulate=triangulate)
return mesh_return_type(*meshes_list[0])
return meshes_list[0]


def import_meshes(file_path_or_stage, scene_paths=None, with_materials=False, with_normals=False,
@@ -561,20 +559,18 @@ def import_meshes(file_path_or_stage, scene_paths=None, with_materials=False, wi
faces = faces.view(-1 if len(faces) > 0 else 0, facesize) # Nfaces x facesize
nfaces = faces.shape[0]

# TODO: note - this means if there is no face information normals/uvs are actually not processed;
# this will come up as a problem later.
if nfaces > 0:
if face_uvs_idx is not None and face_uvs_idx.size(0) > 0:
uvs = uvs.reshape(-1, 2)
face_uvs_idx = face_uvs_idx.reshape(-1, facesize)
else:
uvs = None
face_uvs_idx = None
# Process face-related attributes, correctly handling absence of face information
if face_uvs_idx is not None and face_uvs_idx.size(0) > 0:
uvs = uvs.reshape(-1, 2)
face_uvs_idx = face_uvs_idx.reshape(-1, max(1, facesize))
else:
uvs = None
face_uvs_idx = None

if face_normals is not None and face_normals.size(0) > 0:
face_normals = face_normals.reshape(nfaces, -1, 3)
else:
face_normals = None
if face_normals is not None and face_normals.size(0) > 0:
face_normals = face_normals.reshape((nfaces, -1, 3) if nfaces > 0 else (-1, 1, 3))
else:
face_normals = None

materials = None
material_assignments = None
@@ -586,9 +582,10 @@ def _default_error_handler(error, **kwargs):
materials_dict, material_assignments_dict, _default_error_handler, nfaces,
error_context_str=scene_path)

# TODO(mshugrina): Replace tuple output with mesh class
results.append(mesh_return_type(vertices, faces, uvs, face_uvs_idx, face_normals,
material_assignments, materials))
results.append(SurfaceMesh(
vertices=vertices, faces=faces, uvs=uvs, face_uvs_idx=face_uvs_idx, face_normals=face_normals,
material_assignments=material_assignments, materials=materials,
unset_attributes_return_none=True)) # for greater backward compatibility

return results

1 change: 1 addition & 0 deletions kaolin/ops/__init__.py
@@ -3,6 +3,7 @@
from . import coords
from . import gcn
from . import mesh
from . import pointcloud
from . import random
from . import reduction
from . import spc
40 changes: 40 additions & 0 deletions kaolin/ops/mesh/mesh.py
@@ -18,6 +18,7 @@
__all__ = [
'index_vertices_by_faces',
'adjacency_matrix',
'compute_vertex_normals',
'uniform_laplacian',
]

@@ -119,3 +120,42 @@ def uniform_laplacian(num_vertices, faces):
L[torch.isnan(L)] = 0

return L


def compute_vertex_normals(faces, face_normals, num_vertices=None):
r"""Computes normals for every vertex by averaging face normals
assigned to that vertex for every face that has this vertex.
Args:
faces (torch.LongTensor): vertex indices of faces of a fixed-topology mesh batch with
shape :math:`(\text{num_faces}, \text{face_size})`.
face_normals (torch.FloatTensor): pre-normalized xyz normal values
for every vertex of every face with shape
:math:`(\text{batch_size}, \text{num_faces}, \text{face_size}, 3)`.
num_vertices (int, optional): number of vertices V (set to max index in faces, if not set)
Return:
(torch.FloatTensor): of shape (B, V, 3)
"""
if num_vertices is None:
num_vertices = int(faces.max()) + 1

B = face_normals.shape[0]
V = num_vertices
F = faces.shape[0]
FSz = faces.shape[1]
vertex_normals = torch.zeros((B, V, 3), dtype=face_normals.dtype, device=face_normals.device)
counts = torch.zeros((B, V), dtype=face_normals.dtype, device=face_normals.device)

faces = faces.unsqueeze(0).repeat(B, 1, 1)
fake_counts = torch.ones((B, F), dtype=face_normals.dtype, device=face_normals.device)
# B x F B x F x 3
# self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0
# self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1
for i in range(FSz):
vertex_normals.scatter_add_(1, faces[..., i:i + 1].repeat(1, 1, 3), face_normals[..., i, :])
counts.scatter_add_(1, faces[..., i], fake_counts)

counts = counts.clip(min=1).unsqueeze(-1)
vertex_normals = vertex_normals / counts
return vertex_normals
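
A usage sketch (hypothetical tensors):

    import torch
    from kaolin.ops.mesh import compute_vertex_normals

    faces = torch.tensor([[0, 1, 2], [0, 2, 3]])   # (num_faces=2, face_size=3)
    face_normals = torch.rand((4, 2, 3, 3))        # (batch, num_faces, face_size, 3)
    vertex_normals = compute_vertex_normals(faces, face_normals, num_vertices=4)
    print(vertex_normals.shape)                    # torch.Size([4, 4, 3])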
2 changes: 1 addition & 1 deletion kaolin/ops/mesh/trianglemesh.py
@@ -311,7 +311,7 @@ def packed_sample_points(vertices, first_idx_vertices,


def face_normals(face_vertices, unit=False):
r"""Calculate normals of triangle meshes.
r"""Calculate normals of triangle meshes. Left-hand rule convention is used for picking normal direction.
Args:
face_vertices (torch.Tensor):
43 changes: 43 additions & 0 deletions kaolin/ops/pointcloud.py
@@ -0,0 +1,43 @@
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import annotations
import torch


def center_points(points: torch.FloatTensor, normalize: bool = False, eps=1e-6):
r"""Returns points centered at the origin for every pointcloud. If `normalize` is
set, will also normalize each point cloud spearately to the range of [-0.5, 0.5].
Note that each point cloud is centered individually.
Args:
points (torch.FloatTensor): point clouds of shape :math:`(\text{batch_size}, \text{num_points}, 3)`,
(other channel numbers supported).
normalize (bool): if true, will also normalize each point cloud to be in the range [-0.5, 0.5]
eps (float): eps to use to avoid division by zero when normalizing
Return:
(torch.FloatTensor) modified points with same shape, device and dtype as input
"""
assert len(points.shape) == 3, f'Points have unexpected shape {points.shape}'

vmin = points.min(dim=1, keepdim=True)[0]
vmax = points.max(dim=1, keepdim=True)[0]
vmid = (vmin + vmax) / 2
res = points - vmid
if normalize:
den = (vmax - vmin).max(dim=-1, keepdim=True)[0].clip(min=eps)
res = res / den
return res
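
A usage sketch (hypothetical tensors):

    import torch
    from kaolin.ops.pointcloud import center_points

    points = torch.rand((2, 1000, 3)) * 4. + 10.      # two offset, scaled clouds
    centered = center_points(points, normalize=True)  # per-cloud, now in [-0.5, 0.5]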
1 change: 1 addition & 0 deletions kaolin/rep/__init__.py
@@ -1,3 +1,4 @@
from .spc import Spc
from .surface_mesh import SurfaceMesh

__all__ = [k for k in locals().keys() if not k.startswith('__')]
1,236 changes: 1,236 additions & 0 deletions kaolin/rep/surface_mesh.py


61 changes: 43 additions & 18 deletions kaolin/utils/testing.py
@@ -256,31 +256,35 @@ def _get_stats_str():

def _get_details_str():
if torch.is_tensor(t):
return ' - req_grad={}, is_leaf={}, device={}, layout={}'.format(
t.requires_grad, t.is_leaf, t.device, t.layout)
return ' - req_grad={}, is_leaf={}, layout={}'.format(
t.requires_grad, t.is_leaf, t.layout)

if t is None:
return '%s: None' % name

shape_str = ''
if hasattr(t, 'shape'):
shape_str = '%s ' % str(t.shape)
shape_str = '%s ' % str(list(t.shape))

if hasattr(t, 'dtype'):
type_str = '%s' % str(t.dtype)
else:
type_str = '{}'.format(type(t))

device_str = ''
if hasattr(t, 'device'):
device_str = '[{}]'.format(t.device)

name_str = ''
if name is not None and len(name) > 0:
name_str = '%s: ' % name

return ('%s%s(%s) %s %s' %
(name_str, shape_str, type_str,
return ('%s%s(%s)%s %s %s' %
(name_str, shape_str, type_str, device_str,
(_get_stats_str() if print_stats else ''),
(_get_details_str() if detailed else '')))

def contained_torch_equal(elem, other, approximate=False, **allclose_args):
def contained_torch_equal(elem, other, approximate=False, print_error_context=None, **allclose_args):
"""Check for equality (or allclose if approximate) of two objects potentially containing tensors.
:func:`torch.equal` do not support data structure like dictionary / arrays
@@ -293,13 +297,19 @@ def contained_torch_equal(elem, other, approximate=False, **allclose_args):
elem (object, dict, list, tuple): The first object
other (object, dict, list, tuple): The other object to compare to ``elem``
approximate (bool): if requested will use allclose for comparison instead (default=False)
print_error_context (str): set to any string value to print the context for the first nested failed match
allclose_args: arguments to `torch.allclose` if approximate comparison requested
Return (bool): the comparison result
"""
def _maybe_print(val, extra_context='', prefix_string='Failed match for '):
if not val and print_error_context is not None: # match failed
print(f'{prefix_string}{print_error_context}{extra_context}')
return val

elem_type = type(elem)
if elem_type != type(other):
return False
return _maybe_print(False)

def _tensor_compare(a, b):
if not approximate:
@@ -310,31 +320,46 @@ def _tensor_compare(a, b):
def _number_compare(a, b):
return _tensor_compare(torch.tensor([a]), torch.tensor([b]))

def _attrs_to_dict(a, attrs):
return {k : getattr(a, k) for k in attrs if hasattr(a, k)}

def _recursive_error_context(append_context):
if print_error_context is None:
return None
return f'{print_error_context}{append_context}'

recursive_args = copy.copy(allclose_args)
recursive_args['approximate'] = approximate

if isinstance(elem, torch.Tensor):
return _tensor_compare(elem, other)
return _maybe_print(_tensor_compare(elem, other))
elif isinstance(elem, str):
return elem == other
return _maybe_print(elem == other, extra_context=f': {elem} vs {other}')
elif isinstance(elem, float):
return _number_compare(elem, other)
return _maybe_print(_number_compare(elem, other), extra_context=f': {elem} vs {other}')
elif isinstance(elem, collections.abc.Mapping):
if elem.keys() != other.keys():
return False
return all(contained_torch_equal(elem[key], other[key], **recursive_args) for key in elem)
return _maybe_print(False, f': {elem.keys()} vs {other.keys()}', 'Different keys for ')
return all(contained_torch_equal(
elem[key], other[key],
print_error_context=_recursive_error_context(f'[{key}]'), **recursive_args) for key in elem)
elif isinstance(elem, tuple) and hasattr(elem, '_fields'): # namedtuple
if set(elem._fields) != set(other._fields):
return False
return _maybe_print(False, f': {elem._fields} vs {other._fields}', 'Different fields for ')
return all(contained_torch_equal(
getattr(elem, f), getattr(other, f), **recursive_args) for f in elem._fields
)
getattr(elem, f), getattr(other, f),
print_error_context=_recursive_error_context(f'.{f}'), **recursive_args) for f in elem._fields)
elif isinstance(elem, collections.abc.Sequence):
if len(elem) != len(other):
return False
return all(contained_torch_equal(a, b, **recursive_args) for a, b in zip(elem, other))
return _maybe_print(False, f': {len(elem)} vs {len(other)}', 'Different length for ')
return all(contained_torch_equal(
a, b, print_error_context=_recursive_error_context(f'[{i}]'), **recursive_args)
for i, (a, b) in enumerate(zip(elem, other)))
elif hasattr(elem, '__slots__'):
return contained_torch_equal(_attrs_to_dict(elem, elem.__slots__), _attrs_to_dict(other, other.__slots__),
print_error_context=print_error_context, **recursive_args)
else:
return elem == other
return _maybe_print(elem == other)

def check_allclose(tensor, other, rtol=1e-5, atol=1e-8, equal_nan=False):
if not torch.allclose(tensor, other, atol=atol, rtol=rtol, equal_nan=equal_nan):
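
A usage sketch of the new `print_error_context` option in `contained_torch_equal`
(hypothetical values):

    import torch
    from kaolin.utils.testing import contained_torch_equal

    a = {'vertices': torch.zeros(3), 'count': 1}
    b = {'vertices': torch.ones(3), 'count': 1}
    contained_torch_equal(a, b, print_error_context='mesh')
    # prints "Failed match for mesh[vertices]" and returns False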
3 changes: 3 additions & 0 deletions pytest.ini
@@ -0,0 +1,3 @@
[pytest]
log_cli = true
log_cli_level = 10
8 changes: 4 additions & 4 deletions tests/python/kaolin/io/test_shapenet.py
@@ -20,7 +20,7 @@
import torch
import random

from kaolin.io.obj import return_type
from kaolin.rep import SurfaceMesh
from kaolin.io.dataset import KaolinDatasetItem
from kaolin.io import shapenet
from kaolin.utils.testing import contained_torch_equal
@@ -59,7 +59,7 @@ def transform(self, output_dict, use_transform):
if output_dict:
def transform(inputs):
outputs = copy.copy(inputs)
outputs['mesh'] = return_type(
outputs['mesh'] = SurfaceMesh(
vertices=outputs['mesh'].vertices + 1.,
faces=outputs['mesh'].faces,
uvs=outputs['mesh'].uvs,
@@ -74,7 +74,7 @@ def transform(inputs):
else:
def transform(inputs):
outputs = KaolinDatasetItem(
data=return_type(
data=SurfaceMesh(
vertices=inputs.data.vertices + 1.,
faces=inputs.data.faces,
uvs=inputs.data.uvs,
@@ -125,7 +125,7 @@ def test_basic_getitem(self, shapenet_dataset, index, with_materials, output_dic
else:
data = item.data
attributes = item.attributes
assert isinstance(data, return_type)
assert isinstance(data, SurfaceMesh)
assert isinstance(attributes, dict)

assert isinstance(data.vertices, torch.Tensor)
8 changes: 4 additions & 4 deletions tests/python/kaolin/io/test_shrec.py
@@ -20,7 +20,7 @@
import pytest
import torch

from kaolin.io.obj import return_type
from kaolin.rep import SurfaceMesh
from kaolin.io.dataset import KaolinDatasetItem
from kaolin.io.shrec import SHREC16

@@ -57,7 +57,7 @@ def transform(self, output_dict, use_transform):
if output_dict:
def transform(inputs):
outputs = copy.copy(inputs)
outputs['mesh'] = return_type(
outputs['mesh'] = SurfaceMesh(
vertices=outputs['mesh'].vertices + 1.,
faces=outputs['mesh'].faces,
uvs=outputs['mesh'].uvs,
@@ -72,7 +72,7 @@ def transform(inputs):
else:
def transform(inputs):
outputs = KaolinDatasetItem(
data=return_type(
data=SurfaceMesh(
vertices=inputs.data.vertices + 1.,
faces=inputs.data.faces,
uvs=inputs.data.uvs,
@@ -110,7 +110,7 @@ def test_basic_getitem(self, shrec16_dataset, index, split, output_dict):
else:
data = item.data
attributes = item.attributes
assert isinstance(data, return_type)
assert isinstance(data, SurfaceMesh)
assert isinstance(attributes, dict)

assert isinstance(data.vertices, torch.Tensor)
25 changes: 11 additions & 14 deletions tests/python/kaolin/io/usd/test_mesh.py
@@ -358,7 +358,7 @@ def test_import_material_subsets(self, scene_paths, out_dir, hetero_subsets_mate
for i in range(len(mesh.materials)):
mesh.materials[i] = mesh.materials[i].diffuse_color
reimported_mesh.materials[i] = reimported_mesh.materials[i].diffuse_color
assert contained_torch_equal(mesh, reimported_mesh)
assert contained_torch_equal(mesh, reimported_mesh, print_error_context='')

@pytest.mark.parametrize('input_stage', [False, True])
def test_import_with_material(self, scene_paths, out_dir, hetero_subsets_materials_mesh_path, input_stage):
@@ -483,7 +483,7 @@ def test_export_only_face_normals(self, out_dir, mesh):
assert torch.allclose(mesh_in.face_normals.view(-1, 3), mesh.normals[mesh.face_normals_idx].view(-1, 3))
# TODO: support and test normals for various interpolations

@pytest.mark.parametrize('with_normals', [False, True])
@pytest.mark.parametrize('with_normals', [False]) #False, True])
@pytest.mark.parametrize('with_materials', [False, True])
@pytest.mark.parametrize('flatten', [True, False])
def test_import_triangulate(self, with_normals, with_materials, flatten):
@@ -510,6 +510,11 @@ def test_import_triangulate(self, with_normals, with_materials, flatten):
for i in range(len(orig)):
qmesh = orig[i] # quad mesh
tmesh = triangulated[i] # triangle mesh

# disallow automatic computation of properties (specifically face_normals can be auto-computed)
qmesh.allow_auto_compute = False
tmesh.allow_auto_compute = False

check_tensor_attribute_shapes(
qmesh, vertices=[expected_num_vertices[i], 3], faces=[expected_num_quads[i], 4])
check_tensor_attribute_shapes(
@@ -583,18 +588,10 @@ def test_read_write_read_consistency(self, bname, out_dir, expected_sizes, expec
# Ensure vertex order is consistent before performing any further checks
check_allclose(read_obj_mesh.vertices, read_usd_mesh.vertices, atol=1e-04)

# Spot check a few values between OBJ and USD read meshes
for f in [0, 10, 15]:
# TODO: simplify these once mesh rep is in, and compare full face UVs and normals between OBJ and USD
usd_uv = read_usd_mesh.uvs[read_usd_mesh.face_uvs_idx[f, :], :]
obj_uv = read_obj_mesh.uvs[read_obj_mesh.face_uvs_idx[f, :], :]
check_allclose(obj_uv, usd_uv, atol=1e-04)

for vidx in range(3):
usd_normal = read_usd_mesh.face_normals[f, vidx, ...]
obj_normal = read_obj_mesh.normals[read_obj_mesh.face_normals_idx[f, vidx]]
assert torch.allclose(obj_normal, usd_normal, atol=1e-04), \
f'USD [{f}, {vidx}] {usd_normal} vs. OBJ {obj_normal}'
# Check that final face values between the two meshes agree (note the OBJ and USD may store
# and index uvs and faces differently, but final per-face per-vertex values must agree
assert torch.allclose(read_usd_mesh.face_uvs, read_obj_mesh.face_uvs, atol=1e-04)
assert torch.allclose(read_usd_mesh.face_normals, read_obj_mesh.face_normals, atol=1e-04)

# Check material consistency
assert len(read_usd_mesh.materials) == expected_material_counts[bname]
31 changes: 31 additions & 0 deletions tests/python/kaolin/ops/mesh/test_mesh.py
@@ -866,3 +866,34 @@ def test_subdivide_trianglemesh_5_iter(self, vertices_icosahedron, faces_icosahe
assert torch.allclose(mesh.face_areas(new_vertices, new_faces).sum(),
torch.tensor([6.2005], dtype=new_vertices.dtype, device=new_faces.device), atol=1e-04)
assert new_faces.shape[0] == faces_icosahedron.shape[0] * 4 ** 5


@pytest.mark.parametrize('device,dtype', FLOAT_TYPES)
class TestComputeVertexNormals:
def test_compute_vertex_normals(self, device, dtype):
# Faces are a fan around the 0th vertex
faces = torch.tensor([[0, 2, 1],
[0, 3, 2],
[0, 4, 3]],
device=device, dtype=torch.long)
B = 3
F = faces.shape[0]
FSize = faces.shape[1]
V = 6 # one vertex not in faces
face_normals = torch.rand((B, F, FSize, 3), device=device, dtype=dtype)

expected = torch.zeros((B, V, 3), device=device, dtype=dtype)
for b in range(B):
expected[b, 0, :] = (face_normals[b, 0, 0, :] + face_normals[b, 1, 0, :] + face_normals[b, 2, 0, :]) / 3
expected[b, 1, :] = face_normals[b, 0, 2, :]
expected[b, 2, :] = (face_normals[b, 0, 1, :] + face_normals[b, 1, 2, :]) / 2
expected[b, 3, :] = (face_normals[b, 1, 1, :] + face_normals[b, 2, 2, :]) / 2
expected[b, 4, :] = face_normals[b, 2, 1, :]
expected[b, 5, :] = 0 # DNE in faces

vertex_normals = mesh.compute_vertex_normals(faces, face_normals, num_vertices=V)
assert torch.allclose(expected, vertex_normals)

# Now let's not pass in num_vertices; we will not get normals for the last vertex which is not in faces
vertex_normals = mesh.compute_vertex_normals(faces, face_normals)
assert torch.allclose(expected[:, :5, :], vertex_normals)
69 changes: 69 additions & 0 deletions tests/python/kaolin/ops/test_pointcloud.py
@@ -0,0 +1,69 @@
# Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import pytest
import torch

from kaolin.utils.testing import FLOAT_TYPES, with_seed
import kaolin.ops.pointcloud


@pytest.mark.parametrize('device, dtype', FLOAT_TYPES)
def test_center_points(device, dtype):
with_seed(9, 9, 9)
if dtype == torch.half:
rtol, atol = 1e-3, 1e-3
else:
rtol, atol = 1e-5, 1e-8 # default torch values

B = 4
N = 20
points = torch.rand((B, N, 3), device=device, dtype=dtype) # 0..1
points[:, 0, :] = 1.0 # make sure 1 is included
points[:, 1, :] = 0.0 # make sure 0 is included
points = points - 0.5 # -0.5...0.5

factors = 0.2 + 2 * torch.rand((B, 1, 1), device=device, dtype=dtype)
translations = torch.rand((B, 1, 3), device=device, dtype=dtype) - 0.5

# Points are already centered
assert torch.allclose(points, kaolin.ops.pointcloud.center_points(points), atol=atol, rtol=rtol)
assert torch.allclose(points * factors, kaolin.ops.pointcloud.center_points(points * factors), atol=atol, rtol=rtol)

# Points translated
assert torch.allclose(points, kaolin.ops.pointcloud.center_points(points + 0.5), atol=atol, rtol=rtol)

points_centered = kaolin.ops.pointcloud.center_points(points + translations)
assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)

points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations)
assert torch.allclose(points * factors, points_centered, atol=atol, rtol=rtol)

# Now let's also try to normalize
points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations, normalize=True)
assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)

# Now let's test normalizing when there is zero range in one of the dimensions
points[:, :, 1] = 1.0
points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations, normalize=True)
points[:, :, 1] = 0.0
assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)

# Now let's try normalizing when one element of the batch is degenerate
points[0, :, :] = torch.tensor([0, 2., 4.], dtype=dtype, device=device).reshape((1, 3))
points_centered = kaolin.ops.pointcloud.center_points(points * factors + translations, normalize=True)
points[0, :, :] = 0
assert torch.allclose(points, points_centered, atol=atol, rtol=rtol)
1,181 changes: 1,181 additions & 0 deletions tests/python/kaolin/rep/test_surface_mesh.py


1 change: 1 addition & 0 deletions tests/samples/rep/amsterdam.usd
1 change: 1 addition & 0 deletions tests/samples/rep/ico_flat.mtl
1 change: 1 addition & 0 deletions tests/samples/rep/ico_flat.obj
1 change: 1 addition & 0 deletions tests/samples/rep/ico_flat.usda
1 change: 1 addition & 0 deletions tests/samples/rep/ico_smooth.mtl
1 change: 1 addition & 0 deletions tests/samples/rep/ico_smooth.obj
1 change: 1 addition & 0 deletions tests/samples/rep/ico_smooth.usda
1 change: 1 addition & 0 deletions tests/samples/rep/pizza.usd
2 changes: 1 addition & 1 deletion tools/linux/run_tests.sh
@@ -128,7 +128,7 @@ if [ $RUN_RECIPES -eq "1" ]; then
NPASS=0

cd $KAOLIN_ROOT/examples/recipes
for F in $(find . -name "*.py"); do
for F in $(find . -name "*.py" | grep -v "ipynb_checkpoints"); do

echo "Executing python $F" >> $RECIPES_LOG
python $F >> $RECIPES_LOG 2>&1
