This codebase implements the method presented in the paper "A Novel Method to Improve Quality Surface Coverage in Multi-View Capture". Given the 3D mesh of the target object and the camera poses in a scan, we derive a focus distance for each camera that optimizes the quality of the surface area covered in the images.
The depth of field of a camera is a limiting factor for applications that require taking images at a short subject-to-camera distance or with a large focal length, such as total body photography, archaeology, and other close-range photogrammetry applications. Furthermore, in multi-view capture, where the target is larger than the camera's field of view, an efficient way to optimize the surface coverage captured with sufficient quality remains a challenge. Given the 3D mesh of the target object and the camera poses, we propose a novel method to derive a focus distance for each camera that optimizes the quality of the covered surface area. We first design an Expectation-Minimization (EM) algorithm to assign points on the mesh uniquely to cameras and then solve for a focus distance for each camera given the associated point set. We further improve the quality surface coverage with a $k$-view algorithm that solves for the point assignment and focus distances by considering multiple views simultaneously.
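At a high level, the EM algorithm alternates between assigning each sampled point to a single camera and re-solving each camera's focus distance from the points assigned to it. Below is a minimal, hypothetical sketch of that alternation with a simplified cost (absolute distance to the focal plane); the actual cost model, visibility handling, and per-camera optimization in EM.py and KViews.py differ.

```python
import numpy as np

def defocus_cost(depths, focus_distance):
    """Simplified stand-in cost: distance of each point from the focal plane.
    The real method uses an image-quality model based on depth of field."""
    return np.abs(depths - focus_distance)

def em_style_focus(depths, n_iters=10):
    """depths: (num_cameras, num_points) array of each point's depth in each
    camera, with np.inf where a point is not visible (assumes every camera
    sees at least one point)."""
    finite = np.where(np.isfinite(depths), depths, np.nan)
    focus = np.nanmedian(finite, axis=1)                     # initial focus distances
    for _ in range(n_iters):
        # E-step: assign every point to the camera imaging it with the lowest cost
        cost = defocus_cost(depths, focus[:, None])          # (num_cameras, num_points)
        assignment = np.argmin(cost, axis=0)                 # (num_points,)
        # M-step: re-fit each camera's focus distance from its assigned points
        for c in range(depths.shape[0]):
            d = depths[c, assignment == c]
            d = d[np.isfinite(d)]
            if d.size:
                focus[c] = np.median(d)  # stand-in for the per-camera 1-D optimization
    return focus, assignment
```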
- conda: conda-forge channel
- python: 3.8 for Windows, 3.9 for Linux
- Essential: vtk, opencv, trimesh, rtree, igl, pyembree, open3d
- Vis: pyvista, pyvistaqt, matplotlib, pyqt5
- Test: networkx (pip)
Windows (Python 3.8):

conda create --name afap -c conda-forge python=3.8 pyvista pyvistaqt opencv trimesh rtree igl matplotlib
conda activate afap
pip install open3d
pip install pyembree==0.2.11

Linux (Python 3.9):

conda create --name afap -c conda-forge python=3.9 pyvista pyvistaqt opencv trimesh rtree igl matplotlib embree=2.17.7 pyembree
conda activate afap
pip install open3d
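A quick way to confirm that the environment resolved correctly is to import the main dependencies (the import names below are the standard ones for these packages; this check is not part of the repository):

```python
# Sanity check: the core dependencies should import cleanly in the afap env.
import vtk, cv2, trimesh, rtree, igl, open3d, pyvista, matplotlib

print("vtk:", vtk.VTK_VERSION)
print("opencv:", cv2.__version__)
print("trimesh:", trimesh.__version__)
print("open3d:", open3d.__version__)
print("pyvista:", pyvista.__version__)
```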
The textured 3D human mesh models can be downloaded from the 3DBodyTex.v1 dataset.
Run the following commands from the script/ directory:
- Transform all data (convert to mm scale):
python transform_3dbodytex.py -i path_to_3dbodytex_data
- Sample a point cloud $\mathcal{P}$:
python sample_pts.py -i ../data/3dbodytex/005 -n sampled.ply -p 1000
An output file sampled.ply will be created in the input folder (a rough sketch of the surface sampling is given at the end of this section).
- Create a cylindrical camera network $\mathcal{C}$:
python sample_camera_poses_3dbodytex.py -i ../data/3dbodytex/005 -z 7 -a 24 -r 750
An output file camera_poses_cylinder_7_24_266_750.pkl will be created in the input folder (a rough sketch of the cylindrical camera layout is given at the end of this section).
- EM algorithm:
python EM.py -i ../data/3dbodytex/005 -c camera_poses_cylinder_7_24_266_750.pkl -p ../params/test.yml -s output_em
- $k$-View algorithm:
python KViews.py -i ../data/3dbodytex/005 -c camera_poses_cylinder_7_24_266_750.pkl -p ../params/test.yml -s output_kview
- $k$-View algorithm initialized with the EM results:
python KViews.py -i ../data/3dbodytex/005 -c camera_poses_cylinder_7_24_266_750.pkl -p ../params/test.yml -s output_kview -ini ../data/005/output_em/output_iter_6.pkl
- Visualize input $\mathcal{P}$ and $\mathcal{C}$:
python visualize_input.py -i ../data/3dbodytex/005 -c camera_poses_cylinder_7_24_266_750.pkl
- Visualize point quality:
python visualize_quality_pts.py -i ../data/3dbodytex/005 -m kview
We visualize the pointwise cost. We also simulate the images with the depth-of-field effect at the focus distances determined by the different methods (a rough thin-lens sketch of this blur model is given below).
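For reference, the point-cloud sampling step above can be approximated with trimesh's surface sampling. This is only a rough sketch: the mesh filename and the sample count are placeholders, and sample_pts.py may use a different sampling strategy and density parameter.

```python
import trimesh

# Load the scan (placeholder filename) and sample points on its surface.
mesh = trimesh.load("../data/3dbodytex/005/mesh.obj", force="mesh")
points, face_ids = trimesh.sample.sample_surface(mesh, count=100_000)

# Save the sampled point cloud; the repository's script writes sampled.ply.
trimesh.PointCloud(points).export("../data/3dbodytex/005/sampled.ply")
```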
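The cylindrical camera network step above can be pictured as rings of inward-facing cameras around the body's vertical axis. The sketch below assumes -z, -a, and -r correspond to the number of rings, cameras per ring, and cylinder radius in mm; the height range and the actual pose convention used by sample_camera_poses_3dbodytex.py are assumptions.

```python
import numpy as np

def cylinder_camera_centers(n_rings=7, n_per_ring=24, radius=750.0,
                            z_min=0.0, z_max=1800.0):
    """Camera centers on a cylinder around the z-axis, all looking at the axis.
    The z range is a hypothetical body-height span in mm."""
    centers, view_dirs = [], []
    for z in np.linspace(z_min, z_max, n_rings):
        for theta in np.linspace(0.0, 2.0 * np.pi, n_per_ring, endpoint=False):
            centers.append([radius * np.cos(theta), radius * np.sin(theta), z])
            view_dirs.append([-np.cos(theta), -np.sin(theta), 0.0])  # toward the axis
    return np.asarray(centers), np.asarray(view_dirs)

centers, view_dirs = cylinder_camera_centers()
print(centers.shape)  # (168, 3): 7 rings x 24 cameras
```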
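The depth-of-field effect in the simulated images can be understood through the standard thin-lens circle-of-confusion model: the further a surface point lies from a camera's focal plane, the larger its blur circle, and the blur grows asymmetrically in front of and behind the plane. The helper below uses the textbook formula with hypothetical lens parameters; it is not necessarily the blur model implemented in this repository.

```python
def circle_of_confusion(d, s, f=50.0, N=8.0):
    """Blur-circle diameter on the sensor (thin-lens model), in the units of f.

    d: depth of the surface point (mm)
    s: focus distance of the camera (mm)
    f: focal length (mm)   -- hypothetical value
    N: f-number            -- hypothetical value
    """
    return abs(d - s) / d * f ** 2 / (N * (s - f))

# A camera focused at 750 mm: points 100 mm in front of and behind the focal
# plane are blurred by different amounts.
for d in (650.0, 750.0, 850.0):
    print(d, round(circle_of_confusion(d, s=750.0), 4))
```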