diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..818e4ba
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1,4 @@
+*.pyc
+*.ipynb_checkpoints*
+*__pycache__*
+*.off
diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000..456a072
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,3 @@
+[submodule "src/src/pykdtree"]
+ path = src/src/pykdtree
+ url = https://github.com/storpipfugl/pykdtree.git
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..e2d936e
--- /dev/null
+++ b/README.md
@@ -0,0 +1,133 @@
+# Occupancy Networks for Single View Reconstruction
+
+__Team__: Noisy Pixels
+
+__Team Members__:
+- Shubham Dokania _(2020701016)_
+- Shanthika Shankar Naik _(2020701013)_
+- Sai Amrit Patnaik _(2020701026)_
+- Madhvi Panchal _(2019201061)_
+
+__Assigned TA__: Meher Shashwat Nigam
+
+This project was undertaken as part of the Computer Vision coursework at IIIT Hyderabad in the Spring 2021 semester. The paper implemented in this project is [Occupancy Networks: Learning 3D Reconstruction in Function Space](https://openaccess.thecvf.com/content_CVPR_2019/papers/Mescheder_Occupancy_Networks_Learning_3D_Reconstruction_in_Function_Space_CVPR_2019_paper.pdf) by _Mescheder et al._
+
+The approach focuses on implicitly learning a 3D surface as the continuous decision boundary of a non-linear classifier: given an image and a query point in 3D space, a deep neural network predicts whether the point lies inside or outside the object. The details of the implementation are outlined in the project report and the proposal document, which can be found [here](./resources/proposal.pdf).
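+
+As a rough illustration of this idea, the sketch below shows a toy occupancy decoder in PyTorch: given a batch of 3D points and an image feature vector, it predicts an occupancy logit for every point, and the surface is recovered as the decision boundary of the resulting classifier. This is only a simplified sketch of the concept; the encoders and decoders actually used in this project live in `src/models` and differ in their details.
+
+```python
+import torch
+import torch.nn as nn
+
+class ToyOccupancyDecoder(nn.Module):
+    """Toy sketch: occupancy logit f(p, c) for 3D points p and image feature c."""
+    def __init__(self, c_dim=128, h_dim=128, p_dim=3):
+        super().__init__()
+        self.net = nn.Sequential(
+            nn.Linear(p_dim + c_dim, h_dim), nn.ReLU(),
+            nn.Linear(h_dim, h_dim), nn.ReLU(),
+            nn.Linear(h_dim, 1),
+        )
+
+    def forward(self, points, c):
+        # points: (B, N, 3), c: (B, c_dim) -> occupancy logits: (B, N)
+        c = c.unsqueeze(1).expand(-1, points.shape[1], -1)
+        return self.net(torch.cat([points, c], dim=-1)).squeeze(-1)
+
+# The surface is the decision boundary {p : sigmoid(f(p, c)) = 0.5}.
+```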
+
+The following sections outline how to run the demo and show examples of the expected output from the scripts mentioned below.
+
+Code structure:
+```
+- resources
+  - proposal.pdf
+ - mid_eval.pdf
+- src
+ - dataset
+ - __init__.py
+ - dataloader.py
+ - models
+ - __init__.py
+ - encoder.py
+ - decoder.py
+ - viz
+ - visualization.py
+ - train.py
+ - test.py
+ - run.py
+- demo.py
+- README.md
+- proposal.pdf
+```
+
+In the above structure, the source code for the whole implementation can be found in the `src` directory. Each script contains a description of the functions/classes it implements and provides a wrapper for experimenting with the flow of the program.
+
+The metrics functionality uses the pykdtree library, a kd-tree implementation for fast nearest-neighbour search in Python. The implementation is based on scipy.spatial.cKDTree and libANN, combining the best features of both with a focus on implementation efficiency.
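+
+As an illustration of how such a kd-tree is typically used for point-cloud metrics (for example, a Chamfer-style distance between a predicted and a ground-truth point cloud), a minimal sketch is shown below; the project's actual metric code lives in `src.evaluate`, so treat this only as an illustration:
+
+```python
+import numpy as np
+from pykdtree.kdtree import KDTree
+
+def chamfer_l1(pred_pts, gt_pts):
+    """Symmetric mean nearest-neighbour distance between two (N, 3) point arrays."""
+    pred_pts = pred_pts.astype(np.float32)
+    gt_pts = gt_pts.astype(np.float32)
+    # completeness: for every GT point, distance to the closest predicted point
+    completeness, _ = KDTree(pred_pts).query(gt_pts, k=1)
+    # accuracy: for every predicted point, distance to the closest GT point
+    accuracy, _ = KDTree(gt_pts).query(pred_pts, k=1)
+    return 0.5 * (completeness.mean() + accuracy.mean())
+```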
+
+
+Dataset
+---
+
+Download the ShapeNet dataset subset from [here](https://s3.eu-central-1.amazonaws.com/avg-projects/occupancy_networks/data/dataset_small_v1.1.zip)
+
+Then, to process the dataset, run: `python3 src/dataset/data_process.py --dataroot <path-to-dataset> --output <output-path>`
+
+This script processes the dataset and prepares it as separate HDF5 files, one per object. It also applies the point encoding to the dataset.
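+
+To sanity-check the processed output, one of the generated HDF5 files can be opened with `h5py` and its contents listed; the file path below is only a placeholder, and the exact keys depend on what `data_process.py` writes:
+
+```python
+import h5py
+
+# Print every group/dataset name (and dataset shape) in one processed object file.
+with h5py.File("path/to/processed/object_0001.h5", "r") as f:
+    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
+```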
+
+Setup
+---
+
+To set up the required libraries for mesh processing, run the following command:
+```
+python3 setup.py build_ext --inplace
+```
+
+The following also need to be installed to run the code properly:
+```
+pip3 install --user pytorch-lightning efficientnet-pytorch pykdtree
+```
+
+Training
+---
+
+To train the model, use the following command:
+```
+$ python3 src/train.py --help
+usage: train.py [-h] [--cdim CDIM] [--hdim HDIM] [--pdim PDIM] [--data_root DATA_ROOT] [--batch_size BATCH_SIZE] [--output_path OUTPUT_PATH] [--exp_name EXP_NAME] [--encoder ENCODER] [--decoder DECODER]
+
+Argument parser for training the model
+
+optional arguments:
+ -h, --help show this help message and exit
+ --cdim CDIM feature dimension
+ --hdim HDIM hidden size for decoder
+ --pdim PDIM points input size for decoder
+ --data_root DATA_ROOT
+ location of the parsed and processed dataset
+ --batch_size BATCH_SIZE
+ Training batch size
+ --output_path OUTPUT_PATH
+ Model saving and checkpoint paths
+ --exp_name EXP_NAME Name of the experiment. Artifacts will be created with this name
+ --encoder ENCODER Name of the Encoder architecture to use
+ --decoder DECODER Name of the decoder architecture to use
+```
+
+Fill in the configuration values accordingly and the model will start training. Mixed-precision training can also be used via PyTorch Lightning; to enable it, edit the `src/trainer.py` script.
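+
+For reference, enabling mixed precision in PyTorch Lightning amounts to passing `precision=16` to the `Trainer`; the snippet below is only an illustrative sketch (the actual `Trainer` used here is configured in the training scripts):
+
+```python
+import pytorch_lightning as pl
+
+# Illustrative only: the real Trainer is constructed inside src/trainer.py / src/train.py.
+trainer = pl.Trainer(
+    gpus=1,
+    max_epochs=100,
+    precision=16,  # enable automatic mixed-precision training
+)
+```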
+
+To view the training progress, run TensorBoard on your experiment directory: `tensorboard --logdir=<experiment-directory>`
+
+
+Evaluation
+----
+To run evaluation on the test set (a subset of roughly 500 objects for now; change this in the script to evaluate more), use the following script:
+
+```
+$ python3 run_evals.py --help
+usage: run_evals.py [-h] [--cdim CDIM] [--hdim HDIM] [--pdim PDIM] [--data_root DATA_ROOT] [--batch_size BATCH_SIZE] [--output_path OUTPUT_PATH] [--exp_name EXP_NAME] [--encoder ENCODER] [--decoder DECODER]
+ [--checkpoint CHECKPOINT]
+
+Argument parser for training the model
+
+optional arguments:
+ -h, --help show this help message and exit
+ --cdim CDIM feature dimension
+ --hdim HDIM hidden size for decoder
+ --pdim PDIM points input size for decoder
+ --data_root DATA_ROOT
+ location of the parsed and processed dataset
+ --batch_size BATCH_SIZE
+ Training batch size
+ --output_path OUTPUT_PATH
+ Model saving and checkpoint paths
+ --exp_name EXP_NAME Name of the experiment. Artifacts will be created with this name
+ --encoder ENCODER Name of the Encoder architecture to use
+ --decoder DECODER Name of the decoder architecture to use
+ --checkpoint CHECKPOINT
+ Checkpoint Path
+```
+
+
+Visualization
+---
+
+To generate 3D models and meshes, use a `jupyter` notebook or lab environment and run `check_model.ipynb`. Keep the config flags the same as for evaluation and tweak the save path to see the results.
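+
+Conceptually, the notebook builds a mesh by evaluating the decoder's occupancy probabilities on a dense 3D grid, extracting the isosurface with marching cubes, and exporting the result with `trimesh`. The sketch below outlines that pipeline; it assumes a loaded model `net`, an input image `test_img`, and the `make_3d_grid` helper from the notebooks, while the actual mesh extraction is handled by the `get_mesh` helper in `src.evaluate`:
+
+```python
+import numpy as np
+import torch
+import trimesh
+from src.utils.libmcubes.mcubes import marching_cubes
+
+# Illustrative sketch only -- see get_mesh() in src.evaluate for the real implementation.
+res = 32
+grid = make_3d_grid((-0.5,) * 3, (0.5,) * 3, (res,) * 3)        # (res**3, 3) query points
+with torch.no_grad():
+    logits = net(test_img.unsqueeze(0), grid.unsqueeze(0))      # (1, res**3) occupancy logits
+probs = torch.sigmoid(logits).reshape(res, res, res)
+probs = probs.cpu().numpy().astype(np.float64)
+vertices, triangles = marching_cubes(probs, 0.5)                # isosurface at probability 0.5
+trimesh.Trimesh(vertices=vertices, faces=triangles).export("onet.off")
+```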
\ No newline at end of file
diff --git a/build/lib.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/kdtree.cpython-38-x86_64-linux-gnu.so b/build/lib.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/kdtree.cpython-38-x86_64-linux-gnu.so
new file mode 100755
index 0000000..126f405
Binary files /dev/null and b/build/lib.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/kdtree.cpython-38-x86_64-linux-gnu.so differ
diff --git a/build/lib.linux-x86_64-3.8/src/utils/libmcubes/mcubes.cpython-38-x86_64-linux-gnu.so b/build/lib.linux-x86_64-3.8/src/utils/libmcubes/mcubes.cpython-38-x86_64-linux-gnu.so
new file mode 100755
index 0000000..d86e77b
Binary files /dev/null and b/build/lib.linux-x86_64-3.8/src/utils/libmcubes/mcubes.cpython-38-x86_64-linux-gnu.so differ
diff --git a/build/lib.linux-x86_64-3.8/src/utils/libmesh/triangle_hash.cpython-38-x86_64-linux-gnu.so b/build/lib.linux-x86_64-3.8/src/utils/libmesh/triangle_hash.cpython-38-x86_64-linux-gnu.so
new file mode 100755
index 0000000..5361a9a
Binary files /dev/null and b/build/lib.linux-x86_64-3.8/src/utils/libmesh/triangle_hash.cpython-38-x86_64-linux-gnu.so differ
diff --git a/build/lib.linux-x86_64-3.8/src/utils/libmise/mise.cpython-38-x86_64-linux-gnu.so b/build/lib.linux-x86_64-3.8/src/utils/libmise/mise.cpython-38-x86_64-linux-gnu.so
new file mode 100755
index 0000000..d9208f0
Binary files /dev/null and b/build/lib.linux-x86_64-3.8/src/utils/libmise/mise.cpython-38-x86_64-linux-gnu.so differ
diff --git a/build/lib.linux-x86_64-3.8/src/utils/libsimplify/simplify_mesh.cpython-38-x86_64-linux-gnu.so b/build/lib.linux-x86_64-3.8/src/utils/libsimplify/simplify_mesh.cpython-38-x86_64-linux-gnu.so
new file mode 100755
index 0000000..6c421e8
Binary files /dev/null and b/build/lib.linux-x86_64-3.8/src/utils/libsimplify/simplify_mesh.cpython-38-x86_64-linux-gnu.so differ
diff --git a/build/lib.linux-x86_64-3.8/src/utils/libvoxelize/voxelize.cpython-38-x86_64-linux-gnu.so b/build/lib.linux-x86_64-3.8/src/utils/libvoxelize/voxelize.cpython-38-x86_64-linux-gnu.so
new file mode 100755
index 0000000..fa7b0bf
Binary files /dev/null and b/build/lib.linux-x86_64-3.8/src/utils/libvoxelize/voxelize.cpython-38-x86_64-linux-gnu.so differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/_kdtree_core.o b/build/temp.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/_kdtree_core.o
new file mode 100644
index 0000000..4c05db7
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/_kdtree_core.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/kdtree.o b/build/temp.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/kdtree.o
new file mode 100644
index 0000000..c5961e4
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libkdtree/pykdtree/kdtree.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libmcubes/marchingcubes.o b/build/temp.linux-x86_64-3.8/src/utils/libmcubes/marchingcubes.o
new file mode 100644
index 0000000..332ee7a
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libmcubes/marchingcubes.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libmcubes/mcubes.o b/build/temp.linux-x86_64-3.8/src/utils/libmcubes/mcubes.o
new file mode 100644
index 0000000..565f06d
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libmcubes/mcubes.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libmcubes/pywrapper.o b/build/temp.linux-x86_64-3.8/src/utils/libmcubes/pywrapper.o
new file mode 100644
index 0000000..ada54ec
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libmcubes/pywrapper.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libmesh/triangle_hash.o b/build/temp.linux-x86_64-3.8/src/utils/libmesh/triangle_hash.o
new file mode 100644
index 0000000..52f9070
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libmesh/triangle_hash.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libmise/mise.o b/build/temp.linux-x86_64-3.8/src/utils/libmise/mise.o
new file mode 100644
index 0000000..6a9ee88
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libmise/mise.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libsimplify/simplify_mesh.o b/build/temp.linux-x86_64-3.8/src/utils/libsimplify/simplify_mesh.o
new file mode 100644
index 0000000..33edd83
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libsimplify/simplify_mesh.o differ
diff --git a/build/temp.linux-x86_64-3.8/src/utils/libvoxelize/voxelize.o b/build/temp.linux-x86_64-3.8/src/utils/libvoxelize/voxelize.o
new file mode 100644
index 0000000..c153c0e
Binary files /dev/null and b/build/temp.linux-x86_64-3.8/src/utils/libvoxelize/voxelize.o differ
diff --git a/check_model.ipynb b/check_model.ipynb
new file mode 100755
index 0000000..861f028
--- /dev/null
+++ b/check_model.ipynb
@@ -0,0 +1,384 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "c5b6a83d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import sys\n",
+ "sys.path.append(\"/home2/sdokania/all_projects/project-noisypixel/\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "b0ad72bb",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import glob\n",
+ "import cv2\n",
+ "import random\n",
+ "import pandas as pd\n",
+ "from skimage import io\n",
+ "import numpy as np\n",
+ "from PIL import Image\n",
+ "from torch.utils.data import Dataset, DataLoader\n",
+ "from torchvision import transforms, utils\n",
+ "import h5py\n",
+ "\n",
+ "# Network building stuff\n",
+ "import torch\n",
+ "import torch.nn as nn\n",
+ "import torch.nn.functional as F\n",
+ "\n",
+ "import pytorch_lightning as pl\n",
+ "from pytorch_lightning.loggers import TensorBoardLogger\n",
+ "import torchmetrics\n",
+ "import torch.distributions as dist\n",
+ "\n",
+ "\n",
+ "#mesh\n",
+ "from src.utils.libmise.mise import MISE\n",
+ "from src.utils.libmcubes.mcubes import marching_cubes\n",
+ "import trimesh\n",
+ "from src.evaluate import *"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "3a7ccf46",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import seaborn as sns\n",
+ "import matplotlib.pyplot as plt\n",
+ "%matplotlib inline"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "01cc0b62",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "DEVICE=\"cuda:0\""
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "df8b3caa",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from src.models import *\n",
+ "from src.dataset.dataloader import OccupancyNetDatasetHDF\n",
+ "from src.trainer import ONetLit\n",
+ "from src.utils import Config, count_parameters"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "c916cceb",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Setting sexperiment path as : /home2/sdokania/all_projects/occ_artifacts/initial\n",
+ "Setting sexperiment path as : ../occ_artifacts/mesh_exp\n"
+ ]
+ }
+ ],
+ "source": [
+ "config = Config()\n",
+ "config.data_root = \"/ssd_scratch/cvit/sdokania/processed_data/hdf_data/\"\n",
+ "config.batch_size = 32\n",
+ "config.output_dir = '../occ_artifacts/'\n",
+ "config.exp_name = 'mesh_exp'\n",
+ "# config.encoder = \"resnet-18\"\n",
+ "# config.decoder = \"decoder-cbn\"\n",
+ "# config.c_dim = 256\n",
+ "\n",
+ "config.encoder = \"efficientnet-b0\"\n",
+ "config.decoder = \"decoder-cbn\"\n",
+ "# config.c_dim = 256"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "5e1abd90",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'c_dim': 128,\n",
+ " 'h_dim': 128,\n",
+ " 'p_dim': 3,\n",
+ " 'data_root': '/ssd_scratch/cvit/sdokania/processed_data/hdf_data/',\n",
+ " 'batch_size': 32,\n",
+ " 'output_dir': '../occ_artifacts/',\n",
+ " '_exp_name': 'mesh_exp',\n",
+ " 'encoder': 'efficientnet-b0',\n",
+ " 'decoder': 'decoder-cbn',\n",
+ " 'lr': 0.0003,\n",
+ " 'exp_path': '../occ_artifacts/mesh_exp'}"
+ ]
+ },
+ "execution_count": 7,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "vars(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "6c9ee280",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loaded pretrained weights for efficientnet-b0\n"
+ ]
+ }
+ ],
+ "source": [
+ "onet = ONetLit(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "5f225bfd",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loaded pretrained weights for efficientnet-b0\n"
+ ]
+ }
+ ],
+ "source": [
+ "net = ONetLit.load_from_checkpoint(\"../occ_artifacts/efficient_cbn_bs_64_full_data/lightning_logs/version_1/checkpoints/epoch=131-step=63359.ckpt\", cfg=config).eval()\n",
+ "# net = ONetLit.load_from_checkpoint(\"../occ_artifacts/resnet50_fc_bs_64_full_data_balanced/lightning_logs/version_1/checkpoints/epoch=157-step=75770.ckpt\", cfg=config).eval()\n",
+ "# net = ONetLit.load_from_checkpoint(\"../occ_artifacts/efficient_fcdecoder_bs_64_full_data/lightning_logs/version_1/checkpoints/epoch=129-step=62399.ckpt\", cfg=config)\n",
+ "\n",
+ "# net = ONetLit.load_from_checkpoint(\"../occ_artifacts/resnet18_cbn_bs_256_sub_data_balanced/lightning_logs/version_1/checkpoints/epoch=95-step=2495.ckpt\", cfg=config).eval()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "id": "e3c6f8a7",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "8751\n"
+ ]
+ }
+ ],
+ "source": [
+ "dataset = OccupancyNetDatasetHDF(config.data_root, num_points=2048, mode=\"test\", point_cloud=True)\n",
+ "print(len(dataset))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 57,
+ "id": "b7260344",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# mesh, mesh_data = get_mesh(dataset[0][:-1], return_points=True)\n",
+ "\n",
+ "\n",
+ "mesh_out_file = os.path.join('./', '%s.off' % 'onet')\n",
+ "opf = mesh.export(mesh_out_file)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "050a2909-4e38-4322-8fa0-3e7e23a6de5d",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "empty_point_dict = {\n",
+ " 'completeness': np.sqrt(3),\n",
+ " 'accuracy': np.sqrt(3),\n",
+ " 'completeness2': 3,\n",
+ " 'accuracy2': 3,\n",
+ " 'chamfer': 6,\n",
+ "}\n",
+ "\n",
+ "empty_normal_dict = {\n",
+ " 'normals completeness': -1.,\n",
+ " 'normals accuracy': -1.,\n",
+ " 'normals': -1.,\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "2cf2a681-620d-48d8-8b51-b97197577f99",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import datetime\n",
+ "import tqdm\n",
+ "import torch.distributions as dist\n",
+ "import pandas as pd"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "id": "b66c820c-43fc-41aa-8f0e-96a2bdd4e55e",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "0:00:00.921384\n",
+ "completeness: 0.007918988902723943\n",
+ "accuracy: 0.008183860976921182\n",
+ "normals completeness: 0.8879243731498718\n",
+ "normals accuracy: 0.8527359366416931\n",
+ "normals: 0.8703301548957825\n",
+ "completeness_sq: 0.0001103035606231151\n",
+ "accuracy_sq: 0.00016926852475772773\n",
+ "chamfer-L2: 0.0001397860426904214\n",
+ "chamfer-L1: 0.08051424939822562\n",
+ "iou: 0.5555555820465088\n"
+ ]
+ }
+ ],
+ "source": [
+ "DEVICE=\"cuda:0\"\n",
+ "nux = 0\n",
+ "start = datetime.datetime.now()\n",
+ "result = []\n",
+ "\n",
+ "shuffled_idx = 100\n",
+ "\n",
+    "test_img, test_pts, test_gt, pcl_gt, norm_gt = dataset[shuffled_idx][:]\n",
+ "net.to(DEVICE)\n",
+ "pred_pts = net(test_img.unsqueeze(0).to(DEVICE), test_pts.unsqueeze(0).to(DEVICE)).cpu()\n",
+ "mesh, mesh_data, normals = get_mesh(net, (test_img.to(DEVICE), test_pts, test_gt), threshold_g=0.5, return_points=True)\n",
+ "pred_occ = dist.Bernoulli(logits=pred_pts).probs.data.numpy().squeeze()\n",
+ "result.append(eval_pointcloud(mesh_data[0], pcl_gt, normals, norm_gt, pred_occ, test_gt))\n",
+ "\n",
+ "print(datetime.datetime.now() - start)\n",
+ "for kx in result[0]:\n",
+ " if kx == \"chamfer-L1\":\n",
+ " print(\"{}: {}\".format(kx, result[0][kx]*10))\n",
+ " else:\n",
+ " print(\"{}: {}\".format(kx, result[0][kx]))\n",
+ " \n",
+ "\n",
+ "mesh_out_file = os.path.join('./', '%s.off' % 'onet')\n",
+ "opf = mesh.export(mesh_out_file)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "id": "0873710d-2df1-4094-8173-7a1842c16b65",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "execution_count": 23,
+ "metadata": {},
+ "output_type": "execute_result"
+ },
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAQEAAAD8CAYAAAB3lxGOAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjQuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/Z1A+gAAAACXBIWXMAAAsTAAALEwEAmpwYAAA5dUlEQVR4nO29e7BkR33n+fmdPI+qW/fZ3bdfeiAhCYwAYzQ9PMzuGJuxwdgL9qzDgccxhjG7xGx4Zz32RNiwjljH/mevJ2aGmZj1LOEXs8HYeDG2GS/YZjGMY9aLcAswSEICWUhqtfpxu/u+q+o8Mn/7R+apqm5JCPW9t293V37iVtyqU1Un85w6+T2/3y9/mSmqSiQSmV6S/a5AJBLZX6IIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuXsmQiIyNtE5FEReUxE3r9X5UQikZ0he5EnICIG+Drw/cDTwF8DP6GqD+96YZFIZEfslSXwOuAxVX1cVSvg94B37lFZkUhkB6R7tN9bgFMTr58GXv98Hz506JDecccde1SVSCQC8MADD1xQ1eUrt++VCLwgIvI+4H0At99+OydPntyvqkQiU4GIPPlc2/fKHTgN3Dbx+tawbYSqfkhVT6jqieXlZ4lTJBK5RuyVCPw1cI+I3CkiOfAu4BN7VFYkEtkBe+IOqGojIv8j8GeAAX5LVR/ai7IikcjO2LOYgKp+EvjkXu0/EonsDjFjMBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuVEEYhEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuVEEYhEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuVEEYhEppyrFgERuU1EPisiD4vIQyLys2H7ARH5tIh8I/xf2r3qRiKR3WYnlkAD/HNVvRd4A/AzInIv8H7gM6p6D/CZ8DoSiVynXLUIqOoZVf1ieL4JfA2/JPk7gQ+Hj30Y+JEd1jESiewhuxITEJE7gNcC9wNHVPVMeOsscGQ3yohEInvDjkVARGaBPwD+mapuTL6nqgro83zvfSJyUkROrqys7LQakUjkKtmRCIhIhheAj6jqx8PmcyJyLLx/DDj/XN9V1Q+p6glVPbG8vLyTakQikR2wk94BAX4T+Jqq/suJtz4BvDs8fzfwx1dfvUgkstfsZGnyNwH/CPiqiHw5bPufgV8Bfl9E3gs8Cfz4jmoYiUT2lKsWAVX9L4A8z9tvudr9RiKRa0vMGIxEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuVEEYhEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuVEEYhEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSlnN9YiNCLyJRH5k/D6ThG5X0QeE5GPiki+82pGIpG9YjcsgZ/FL0ve8qvAv1LVu4FV4L27UEYkEtkjdrog6a3ADwG/EV4L8H3Ax8JHPgz8yE7KiEQie8tOLYF/DfwC4MLrg8Caqjbh9dPALTssIxKJ7CE7WZX4h4HzqvrAVX7/fSJyUkROrqysXG01IpHIDtmJJfAm4B0i8gTwe3g34IPAooi0C53eCpx+ri+r6odU9YSqnlheXt5BNSKRyE64ahFQ1Q+o6q2qegfwLuAvVPUngc8CPxY+9m7gj3dcy0gksmfsRZ7ALwI/LyKP4WMEv7kHZUQikV0ifeGPvDCq+jngc+H548DrdmO/kUhk74kZg5HIlBNFIBKZcqIIRCJTThSBSGTK2ZXAYCSgNVRPYfvb2MGA1BhUwTpHU9eoKibPcU5RBUkSVBXnQERAQAXUhYcqJIKYhCzLSBJDmuZIsYCkXSRJQcR/F/99T0rU98i3SxSB3cSuwcq/ZfjIV9l+5BHm5+exVtnaGnDp4kVqZ5k/tkw1cFSlkvYK6gaGQ4sxBkmEJoV6CE0FTdOQFBmm1+HQ8jK93hyLi0cwt34PyYFX0CnmIcshy0AMYxWYBzr7dx4iNxRRBHaJ6sK/w238FcXpx8kvnSVxq5Rnt6hrqAcNeVmSoZhzF5lBmQHyxOCcUjeKIUEcNDUM+1CVUFWKyYS8SjkgF+j0czrVLCJnkY1lcAJNjZYVg77gLDgHWZZhjAHAhm1Jgt9X10DqLQjqGhIFUSgKvw1gWGJLy/b6+PiKAkxqSLMUshQS8cZG0YOsgG4X8tw/eotenNIOSA8vSD0gC4/FsG2OCfMlsk9EEdghaitcvY7d+HPcpU/BuUXMVh9xA7bWB9SVv6un4ttYst0nSyHNoGjw7oJC6gAB20BWw7ACM4TM+uYy24FcBZNkSHYRhl0oS3R7G7a2qFeVpvaNng5oBijUjReWLAeKBBZS31BFoCwhcWAUZmfHIrCxhes3DM/6fSCQzAC5ISlyNCtQI2gC0luATg+Zn4NOB7ozyNIRNO+iRQ9hEaSHcAAo/EOOIjKHJMs4TQAJRctlmqAElyggkiAIihL+EIIrhCCJIJJAEED/JROOq3WPouhcSRSBHVKtfplLX/pFFhfO0dXDNCtnGG45Bn3Yrv1dmNQ3QhNuwkkCGLCpb7S1BQ3XqgWaFJrwvhj/WnOgEOik0ADbDWwP0UGN7Sta+Ru6Sb2wNDU0dvx/pue9BsAX2jZ4Ul8ZMxMqYMEOEGtJjXpLQqHfh2RoMWZA1R9SWxhYSMw6xggzs4IqOBXyNEcTYYAjMSkmScmLhVHswpiMrNOlc+AgfTtPQ4eiKEiShCRJSNOUBmWIZTCwWOuFoNedp1vMUJYlTeMNGWO6JJJj6NI5sEg2PwvLR0PjB3q3g+nhrY7MH2vkMqIIXCWqjurM57Brf8OMHcL5mnrY0AyUuh63syQ07jTzIpAk4/bnnG+wim9oErY5518rIIkXDpNBkiX+Lo6idQNlhS0tVen3g4K48bjupvaWhbOQmJQkTXwFVP3DOV/BJPH/VVGnvnCnjJ66cX2d9W5KXcFgCBj1X7VeZLIcrzqAQRGTIiYjywuEhCQREmoSZ6FOyGSTRAxGcxJNECfQJCQombPYfkVTW6xtkGoWzbskjWKHDZubQ6xNUU1Qm9OdnyOf6ZHML+IQnBNmFm/B5B1MmiKSImKQJMGpw6qjrmustTRNg6qGQK0jyztkeQfnHNY6yrIBNWRpzuHDh0my3B/swcOIWQSOXrNrb7eJInA1qANXMXzy4yRbjzOfFAyfcZSrNY31ZrlzvgEn+MafZWCCRRraKzb0ArS7bLdZ6/8z+d1MSHLjTe6BhaqCsqQpHWU5Ube2Z8FBU4YyFJI0CyLAOFDgnFceY/zDWtpbv7ZP7VgEwFsqdQ11BeU2uMRbMaowO+ctDnAI3o2xJkFSQ1EYjElITPhwqkBNnq4jqfXlT1jqximJs0jVpylryrIkqTtInpNRsLUxYOP8Gv1+TV0rZQm9TpcizzEm9RZWDcvLyxRdQ94tMcaMLI3KNZSupr+
9TVVVDAYDrLU452gaS2/uADOzizTWMhzWbKwPwOb0evMcfO1roTdH0puHhdeAvAySI6HmN567EUXgatj6C1j7A2bdl6DcgmdqiuYg2ewBrG5QVg1JCSZMrSKh66/h8vYH/iacplCFbVUJlfV383wG8q6PvUlvFjo5YKBqoG+phtA0z66etb6d5UVwQTIwWWjoGMB6k8EU4ZF736PxCjQcKGUf+tsjTfBilPoAIV1vmSDBiEihMw/dGej1EkzeC70VeP/EGEwnvHYNtqqQWki2h9CsgVa+WsFNohueJ0qWKFmqFDiSokRygUqYRTmKpTSKBTSFbmdImpUMK2FoYVjCTLlF3oBcVLJMgksmdEJAtKkd1ipN47CJ309SKJvrq2yd3sA2kItyLHGQCUW2QboBsp75D698CW79Hrj3NuAQcONNqRlF4MWgNVRfx21/GV07SWIvITjQHKnxpmxXSJ1grY7uwsrY9LfN2MRuu/dHFkDjA3lWvWhkGeSFkBUJkqbeZLeKrRpc2dA0irMh5wC/H4KlD15c0jzx3zfjABrig3Ek4htpknrVsQq1w1bquyircb0TBxo8CQleRWjfXiAMpCbBZCmmWyBJaPQSlCIFrEW1QZvSB/xsQtKU/ry2AmDwrnsKkgpiFJzvxMBYfwgVmDoEVifcnwIlVX/emwaSGtLGkjqQIbQGhwkumiRA448tCXEZDGQCqhZ1ln7tLbhu4X+ftFTs+kVMUpCYDFYrWHga6schnYUbcF7dKAIvBrcJax/EnnuA5vQXKdIukszB4gF4bABbJeZOgzEOY+xld/zWAmjv3MI4iGeDdW8b/x/8BdrpQrdn6M4VkIduvaqk6g+ot3xwzAX3IU0BHbnjJCF8kM1mpL0rcgbawESaBl/D+B3VFvoNdV+p+mD7E99JvTDZNrgojBut8e0zlRSTd2G+GyoUDlwdYMFWYIfYwRYunJjCgGkt6FYIOkCW+GNuah/xVIJJBWwosgFmEIxvxZtZAjgwJSRDoA9JCamBwvpdSzj/oUYjl8dOCERewMEMDhyEM5f8frtd2FwDN7AM5AKdQ4vki3MwqGD7cdj+TzB7K6TzL/662meiCHyb6MYXYPthWDuP2egjA0GGFer6wAZNtYEtB5TrFbVVqgqGA3/Xh3EwsL3YktY6D9vaLi8Ym/B57v+LAWp/97TDkqrfMByM43so2AlXtL1L592UpMiQPPcOcotJwm2QoCLeBNFyCNZ3G4oZawMEqyTzjbbGl9c2GmO8RwENdtjHDILvkCS+XOeDHLaqaYYNw4Hz1kUCUnsBAX+cifFaIHXiu0RKgjhZ35VpgK5BrGJKR1b7hl0GIW17DerKa049AMmgdsHjSCBJBBcyNZtw7sWAy0KPTC5Utf8NNwfj43NB8AakpFmPvLvklWRtE05+EU5s+RSIG4woAi+Augpt1nEbD6EbXyTd3kCGFlNnuI0adTWabmPtJlaHDIfq++bDQ9ULQNvgW6M8CXkDGiLu7XttjCBL2+5ERXHQOFzjsIOKuvQXukw0fAnuRVtWmgomM0gaov/ouHCR8S3UWVRDMKKucHZcGUn8TRi8qKTG36AnxSppHylIoiGS2AQbPUGbygtM3dBUlmo4DmQa4810F+7O7b7ymXASmo5XG9sE30T8fjPjEyuMG93ZG8soRuEmxNFZaMI2E/Kb0mTUAUKj4/OOCcecwFBhaH2sxoj/bxNQhFKFrqS4JPOfr0o4c3Zsxt1gRBF4AezwCQZP/Us2n3yQeu0Mt87NkpQZNLexfvEU5XbJYOssBw8o+TKUm2PTP2/Ny+AmKv56tg7a+Zg19OUb4z/fDVnAWeZTAow6KEuqvqOufH99NZhwAdp6Wn8hFwX0ekLeSaBT+I1NA5TBBcDfUdvMpCqoVVnRVMow7Lsly8bCZMKjiw8fMGENdAowcwXJfA96IeegqmBzC90esr0Ng9I/bDP+bq3jnvvWG0jNPCafg94hyEofKGF77Mb0+zTlgO2ypr/ti+n3fc9Ep/ABVZXQ7ZplNKJs1Q1GgwWWagg8+hu5TpxD1HdtIqAd0DlvZQyG7XlRirIm3V4l6zR0ji171XD4jLAbkCgCz4Oqo3/xk7itr2K2/4aeXsAlQ2S9oNosGaz1Kbcd6qA7p2zU4Gp/h28bZ5b7u2eej4OBpXoRcEEIXBCB9vrOgpuehXiBa3zgrxyGbrmh33fyHGOEkhTSHJKZ1CtIJl5lXDBJLuu+UnAVrmnQ0tGUip3Mb0hCsFHHLg1X5Aw0QdBMmyNQNRT1AEpfZ/oVw42Gqh/qb3111I6DiwM7DmQahVSEwgmZg0x9q1QUpxZRRUhCN0WKdoSmUmoHderjFbYALRKaRhkkiq28olk7SovygUaZCA6OTmAIfk5sy7pA7d0K11pvAoPhELOppAsWU2RIUaByEVhFWAyZjDcGUQSeA3UWdSX9lY/D5hdZLJ+go6GP+2JFebHP+vkNwJIW0FuA0xehP4Sj877hJwkUHS8IeeYvwqbxDbk1U9u++Kb2n0kYC0CaelfBWw0axhJ4qz3LQxddaDxt8xYDWQeSmQzpTuQN2yv7ETVE50u0ctgSquFYnNrGofg6tJ6DtvWGUQQeoAnJUEnZoGXjo/1W0c2GwQb0BxNVUCDsM8Gb3GWwPDIgM0rPgThHZuugGg3O1iQkvtchN97G70Iz8OJSp9Dk4HKwhaGuHENj2SodifqEZQ0N2Fh8MDOIXYsNAjApDHknVHngK9xmIA/KCrtRMb+5iUgH01vEch7cBVJZvKHSBaIIPAfl+c+x/be/zby5H1NexJzdhAFQCqxv0imFAybh0gAqYKOE5Vt84+VC6HkzUCyKN/fXdZS80l/1DaZrYH3Nl1ek/iLN8f+Ng6R1F0LvgWkgC91hWebv+ltbfp9N4xN1CgPFjCBpPVYYdYD6SHtigSEMLVo7qk3vYth63JUJIWcoaEeajvOInAtChG9MbTZjmnk3pLWABis11RC2Ny/PiWifN00boBtbAdZ6YZMU8pkyZB6Gg3cNpNshaOBvx7Wt2R4qG30/BKKxkFa+EZe2xin0en7/xkG39mLrGt+RkOc+2DcZnO04HwMpJlpFv+9/izQduwtS+m2mBrsKxpWY/ALnHz+JLmbc8tK7uJFUIIrABOpq3KUvweqXSAePYXSVdDiAbQfb+Eh16XxivzNkuY8oSxIG1SWQdkOQLnRbaTM2s21osG1crjVJ03BtQ7DewV9sMs7+w+e2jPIBbHho4i2ANPd5P1II0tq9o8E36u+erX1f+502YdxC263Y9jbYsK0O5RJc6Pa5xYtAm4VsknHCn9ZQB8uiqsbByvbBFa+Daw6EXkEDaWoRU3tFEAtiEeN8rkMi4CwidvRdg3/SBv6MhOzMzJ8746DTePGy4XdJg1eRSgjetr9L4rsTJ1O7rfqG4kIPpQt5E8aCqSCpfBRRh6tQr+7iFXltiCLQogrNNtVDv4ZpnmbBXIKnt2C7hD7oBjAEwSfr1HVD7wCQg839Rd9UsLwEOgTXh3pdR33/1o4tcw29ckWI25kQlVb1n2
3T+SddgtFtmnF2YROy9XIDnQUoZvBRuzyMBTBtAALoZP7Ktg3aNKj1jaJ0PkMRxjkH1vpjGYTIeipB7CZO16hrPxkHB3HePB/2x8fcJua0/1sVa7+XhfwfDPQymM0hNRXGhDRFLCLOX6htVmFTkRqla7xFZcJV3Annosi9EBOsDtP402IIIlAFqyvEb1oLp+3t6MjY6MjwlkOFL1sJblBIYMorSAbAllDYNWCtvaDgBrEGdiQCIrKIX4z0Vfij/mngUeCjwB3AE8CPq+r1L48X/ggu/Wdy+whSrvvE+O0GhvgrYAgMQsQYfzfszUHS9d3ZVejqqtfHwa/hYDw+QMT7/QuLjIb8GuOFYKY37tJqA2ajgUahLbedTyaFzU3oh/EJee6TihIHSZtpkyTh9qijgUGIgHNoVdEMHfXQxxcmewImuazhfpuUJQwG3oRuE5na4zAmNAt3uQku6u/KnQJmuzDThUSMb3mDCiofX8A6SBuvSOk4u7hL6OxwMCvQMf7h8FnQVdtY83GZbU9K+3pitPJoKEV73Im5vJG0n3XOH0898FZHmkHWvwjDJ4DPA3cBR7gRSF74I9+SDwJ/qqrfAbwGv0T5+4HPqOo9wGfC6+sW2wzZuvgNhhe+QHPhP+MGK9j+Bs3GgGbb0fTBVoILjU4lRJBTSLLw46eQIiSNUA2C76nh2g0WgIa7S9HxGWlpNu4XbzOC28mBRqYzYwNg0mdv79ho6I0wYNIQuEzyYEa3t81kvGPHZWnB7QjDSZdjVGjY9yi1YKKhtO+33xvlG4Xei6aeyH0IFnxiQo5Sm38QzPAE3647OeS5kGXBvNHEt+DRA9+f2Pj8ZUNCDuQKRXh0gG7ip03Ixd/ts2DKS3r5QxNv3rdulZNg7iegRnCJ+NcydrmkzZAMp9Kpt6JsjR/+UG+SVOexW4/i6rVdvEr3lqu2BERkAfh7wHsAVLUCKhF5J/Dm8LEP4xcl+cWdVHIv2brwKP/ff/xRji+ucbDbx56pMEMlGwCrkFghzzMcDWochfFR6LaPWSqfvlqu5wyHCWUzoNPxd+e86++Kly75FFSTh0BaAR2F7W3vd2/0L69TawnAOJhWliHApjA3DwuZtwo6HZiZgXRhBily/6J16if7I3vFKB+gHtRUA/WjDO3lowTBJw26ZpwVnIA/2Pb9NoZgoFLfNWrL0GVY+TtvGxzthhG3aeFDKUkyLtNaH8zME5gvwBR56FIpfL3LGiphNKSyvVptStI0FE3FjPW6YFLoJT6wSrh7Fwpp6V2aMocqGYtnIz7EMyJkT1sjmKKLOod1jrLxg5tMSJd2GpIYwzBtG8rpNQ7TnMet1Vz6f4fMvfI43dteflXX5LVmJ+7AncAK8Nsi8hrgAeBngSOqeiZ85izXqU2kqnzprz7LE1/7An/56QssFyULWYNbV9LGD05ptgGnGGPpzih5AVL4bqOi691s43zWW1M2qBOS+ZCzn4dRv304twKl8e0zSfz2oiPYApzVkXnq0/mFNE1Qk4Sx7H6EmxYgkmBSQ+MstTpShUSFVBOMyxCXeeed1NvdHQMS+uybBls3VLVl0ChD631+O5Gx2HYJ1qGhFBkjY8JN2IxtRp7iLYQ2l8CqTyl2ybjLsgkxABtykupgMbRDpjsOMOBmILHGH0NpaAcz0YDPLAoDnRIvS5IM0awmTXVstaSJjy5mOZI4FEsqDU6VWnzyoXsO23eUC5H4rMeaKnggfmRh65a1iZcEV8AK1GF7nUBht5AaqGbADnb9mt0rdiICKXAf8E9V9X4R+SBXmP6qqiKXGZIjROR9wPsAbr/99h1U4+r5+ldP8tW//gJf/Iowpzk9zcBC6pRClbJqG4llaUnozii1WBZm4eC8d1GpwK5DgSXLYOZWRuMC5udhOITz530iy8zQXzyLC8J8MPcbK5RDHQ9eSSDHYEix2vgGb8EYwRghKTKfPVh7X1iNILUhawxpYpA6QUMLlCKE7pMEvXiJpm4Y1JZByLUfhobYJgNpGxh0vu0XbUJSMh6pB2MRcEwE+/AN3o78h7HJ3QBMpFKXbW+J9b2WSQrOCuoMaOZbVmvNtIWYFEzH+2DggyBpH5ON/SQxwTdKu37HWmMSS+J0JE7uOWIc7XEnwVVpaEaeB6Hnh8S7EW1AyIXjr5OxCHRtn6R2SNULImDDCby+A4QyOYfbi/qiyFHg82FVYkTkv8aLwN3Am1X1jIgcAz6nqt/SLjpx4oSePHnyqupxtagq/a0NqrJkOBiSsIawBVjq1UsMzpyhb7tsbPT5+tcfZXXtFGvrZ3j4S39Nr2hYmoW5vu/rf/Drfl6ZWQHJYaD+cSD1prENwTAncNF6v7Ub7khFBr0OTI7s7RSQ5d5pbs1mExKIul1hcV6ZaRORCm9ZLC0lFB2h0xXyPMWkhrzIydIUk6Zsrq9TlTX9fk0V+tXbQTyj7slwbpIEH1nfYjTpUGomzh3jYF+ejes2ilUwPpZ2NiVgNGJvWI5H7s1m0CkSFg7PI7NzSG8Gqk0ftRwOxopaFOPCqhKGQ3Rri/6lxmc7VtDrGfJuBgvziCpqLXa9pN80PO1Kb20l4VhC70EbGGxj+W3qdWu1wMRxGh8jKBsf+6jqcTJYpwtHliHvFLj8OPKa/57kjrcCr8b3Mew/IvKAqp64cvtVWwKqelZETonIy1X1UeAtwMPh8W7gV7iOlyYXEXpzC/Tm2i0H8BlBjnpxk3LxKKUtWBqUmKXDbGydZ2vzEkvH76Ywll4HuiVcOnOJPl/BnblE0x/SLfy49dT6u6IkfjLezU1/8dgsuOoKg9qPhZkt/N3SBXO1yCBLFcM43diEi7ebK7O9cfdiexHOzjryMPIwSR2JSUhT52cITgyurGlq6wN3YU4Dk4y7I5OJPvws9e9lE9H9doLiUf9+cvlI5CyfeC8cd1v3JLgEo2nTghg4C7bru1g1d0jeoGkNdY1KgxUbDBnxKdCp+CBFWft5CURpEqVOfLZfJf42nWrp4xgCzlicdWhwuYz4c04QukQmAqOMg6CJjhvHZbkHwdpp8J+RkH3oJ4cEcYpJGtg+B5eegMVX7jz8vsfsNE/gnwIfEZEceBz4x/hD/n0ReS/wJPDjOyzjGjEbHpDNHSGbu5tZ/Nrqt937+uf91sqjJ7l98X/h85/+IueeGrJ83AeN2pFy/u4N64/7SPLhueA/W7i04dNmmz5s4ANOfUZzatBh3B3ZBumKibIbxsk2KWOj0+JwOBqaUTLNcu4v2iqMKBbx/entndqkY1Eocuh24Jbl8T7bCVInhw8XxeU36vZhwmdTM+5vZyJzeXIeBDMH5IrNhpAKSeKAAQ01Qy2ZIUVEfZg/xx+EK1FXYmkoCbFDA4ijwdGzGyAJCQaX1KgopvI5BLnxwUiZrIv1AU3wAmAmzH3C+csZi0BCsB5C14Jp36vxWY6mRi88DsMF+M63Isnkr3b9sSMRUNUvA88yL/BWwVQwbIRnVg1f2xCeWIelge+7LoBN5+NUizk8NoRiLuEfvmOBxfk5Ot0ujz/xNBdWKk4/VXPmvB+7bstnJ
+UYGM2bYfEX5JU/XCsEaRJyExib9wKs16HvvN2mkFWM7v516A404uMSmcDplfH+03DxJ+GGnMg4666dB7Ht1TCh4RsZP5/MgWhzIlQhOQT5nHL0WM3MrKPT7VMNHc46nIPZOUtnxrF4cJU0F1Ij9KSmUUdlYVt9D+Kggk7iE4BcRzGJI0kcpVGaZJySnQvMmHFdWwvMwmgWJWHkLYzOlQm9JUh4Hh6E814kPkyBs1BuMKjOUQ2eZl7ddR4RiBmDO8aJUJmUdZdwsQapfYPqis80Nom/uFYtLHS7vPS+13LwwCLdbpfu0cOcP1/SPVLRPLQCK9tcfGZtlB5smbhjkdCZ7fGSu++iEBlZoK0l4CfTVkyiVFVN0zRUVYW1DdY2rA1LVF2IUfgJNeumxlmLWue7y8KVb2p/YcyX4ztia1EYGC1yZgg5EuEO2wpKkozr1boaCYzSq9vGpUDdh6QHF7eUmZmGTtf3HrShqtk5pdtVllYrssyX1ctBG2iGfvxEVUE58G5XlsNyH0yi/hzaEIwcQF56sVqoQ25FEsz5EBOwYbBlYsJ07c67jYkRjAi5CCJKjUWNj1G6mpG51s4NIQ24xKFXZiJdp0QR2CGma5i5a4amZ6jwE8u4EBicm4HKwBkD52rIFm/nVT/9CYrCm4eHFcB34X3sP3yQL3/+r/jq7/wRPad0FAaOUZS9ostL7/tufv3Tn0BExnMKMsrExeIYaMWFlWdYW7/Ek9/8JpvrF9lav8ijjz5CNSgxFrYG6/QHW5w9c5bh+hrV1uZoRp7hADaH0HXwutwnTQ6sv2O2ZnGY7tSLgYNU/cJnrTthK0YzioEXgB7jGEHJOBV35YJ3gdqZkVrLpxW/Lt4qmcjTocTfzRdSKINgGPXju0jgNcd854ArYWYOyhk4cwjqLd/o7zgA3dTfvZfDHATdOR9vVPyo0O0G+g2YIiUtMvLZLgdmDXlmKS9dpNvx3x8+47+T4oOFjhTDAbrLy/QOHR5HRa9jogjskPn5Bb7rvtfxyT99iCe+cQoz9Nd+DaPhs4XxDaHRhouDcyyZZWbyudGFrqp0Zmbo9WaYLcCEZB1HuJE4mJvrMj/bxaQpyRUX1iiqr4pgWF46ysLMIku9Q1TVgLoc8upXvRHXWEShbkrqxk+3basSW9WjBtpGxVOFI8bfDetR9HxAwgZmcw036LN5YZVBf4Oq7JPYBiNKKo6NjS2G2wPWL6yyuTVkWFouhMk/KhdM73CONoz36cFn/6XOu/+tJaGEREHG2X0WH1wtHWyHsQdz+NmArMIDl7wopBYWK6jW4fSaLzB1UF0I67gk8Ezbg5HDlvV1KvJQpsLMksWhDGtHZ1YwiZKUcGAe5npQXfLnqiOwcBDybkNnYQ2ZfYRkboOF9T/kwB2v4ujLXrvbl96uEUVgh8z05rj7Za9mcWneD08Nk35YfEMW8b5oAjS25vza0xRZh5l87rL9dIqCmW7BTBHGFgQzXPFCkBUpefHcP1crJiJCTko+swAzCywv7Xae1jroebjwDOX6OuefOMXqpXNsba1TliUGR6qWixcusLm+wdknT2MubrGxVVKu+q7zKpyfVrjarjcDdEPyUO58I05UKa31STuuwYYFQ1SgUWXbOdZUvc+OtxAq4Ny29/27CWyWvmGfWQ2pxMFsCgui0Qknzwqshn0YGK27uNgUVLVw6ZLDSQMomUs5ckhZWoRm0yGNktYwfxCKnqO7sIWmA5LOJY5Xn+UuyaMI3MyYYone0TeR9w6OLuY2FpwQfOTU+8Kb6xt84qN/wA983zs59Prjl+3n4PwSy4sH6BTCYKCjHoEhsAU8dvE8XDx/LQ/tOZgDenDgdvIl5fhtlqPOoeqC66thFSXnJ0W1NizDrqOg2+REKOD73VtGeUaK900G26x8/WusrzzDE6e+xPrWWbYHq9i0YWtzkwsXLvDIQ2tsb9aUeBegsH4pg4OLcGwZHnvSi+qxLizOwEwKnb63zjoZzBWAgSqHO3Pveg2Al7/qFbzs1a/hnu//h1ibsHrqLKfOPcj6xlkunXsC0S3EbbP69BobFwdcPLPNucoHdqtnYG1gKXXAy7e/zvcdfg2vvhY/z1USRWCHiCSIKWhqQ12No/QAozkAQqCvGpY8/OWvct+rvvtZ+zl4cJHlQwcQN05e6YjPvV+cgaRyLOXuWd+7toRsG5OOpkw3L/idq6Rp0KrEGGH+lmN0bj3IYLhGWW9hE8dwOGBzY5OXfuc25SDMLRACm9U2zPZgYc5RfvKTrJ25gLlYk4f4xTCsz1iGvn01fu7DPAjCVgLdYy/jJa9/Kwde8h04J3Tmj5DfcoD+YI2t9RXUDVFbsn1pm8FWxeZq6ccmqB+GvV1CrYajd9zOS+65d6/O0q4QRWCXKIcw2L48im7Vp/An1pu3w36fB/7qft7y5h9+1vePHz/C1rmjSJhvQPEZiPOzcOAILG3CHbPX+KD2kzRF0pT5u17OPHCUN7zoXVjbcG7wAzx2/wOcf6bG5F6QV8NU8IK3shywic+NSFLYSEGO3setb/5Ho33ly7DE9WvS74QoArtF6tPeq3qc7AM+ONjfhJ6DGuX0+SGb21fO+QfZ0p3MHL7IoUPCQs+vI7iyAoN1uNgHOxMG9kS+fRQuDi1rlSPP/UDKIvWj3treiTo838JngJrauwb1s3+im5brv//iBiHPfbZc6HYeDxtR33fcUegqDCtHYy/vOxYRkmwGSWdG6bxF6ofpDhrYGvj5B/LrO/HsukNRtgYN/WHjf4uJ9GC94tH2aFoB0xE/m/OUMEWHurccWITDy+C2gRCtDtPrkwILYXjt6W/h1tc1nD4XZshJfK6BDd8/egscP369j0e7/jh/fsDK+SGdsDZBnYyzAg2wwDgpy6RgutC7M6O7tGfRjuuOaAnsEvOHljl4/BZEZJTvPxqZFgafmBc4243C2tD3fVcCy7Ow2PV+6pOn/OP6zz+7vsitz97szfhEpjb7sU0Nbt0Bi88T2LSQhNjAtBBFYJfoLSwyf2gZggiMbvgyHon3Qrdxpz5LrR/G+s9kYWIPA+cv+kfkxZE5H6MpOkEEZDLDcvxQfPLRECHtGJIbdDWhq2GK9G5vOXDoVg4fv4uLyYNU1o0H8AR/YFiHNOBvRUh7fWYAT/dhGT+0uErhlIPD+91DeAPiV0sGsxBctYmMzpIQv0l8zkAJpDMpt912jPn5uW+125uKaAnsEnfedQ+vfNVrSZKECsL0JP7u3tjxqLtvxUxvlte+4U3cfc8dHDncg0JojL9DFd2Mbvf6mJziRmI4HE8nrinYFLYldMESBCH8RsZAbzbnnle/goOHl/e55teOKAK7xL2v/E5e9/o3kRhDiZ8foMH7+XVY7CJ/AbtrfmGR7337f8PfOfFqXvbSQxSLOTKT4UzG0kKPpfmZa3AkNws+46rfVwYDn+OkqV+ubANvdC3if6My/EYmhbmFDq953QmO3HJsX2t/LYnuwC7RnTnI7PxRUiN+HH2I4DmC2Tk5NO55WDp0kP/2PT9FOfwH1NWQqqr9HIcKYhK6vR7y
YhYCmGq8FG87y3oNq9s+k68KS66104AOGbsEmYFut8NLX3mCmbnjz7/rm4woArtEajIyk417BBgHnNoZgEYTcjrLsC4p0vyyRp1mGQePHAYOX9vK34RU1YDt7QvUNFgZz27sGh8onPx9RgutpCBZRjFznDSLMYHIiyTFD4GtwtRU7ai2AT4zbTKmtzXc5vzGJeyVE/5Hdo319Qv87Te/SskAKcLsTGGRkAVgBv+b5fh5AbqFXxfCr4T6CuDQ/lX+GhNFYJdou53aVcvau4wLr8X5SDXANx57kM9+7j8xLPvPvbPIjtnsK0+fs2w1PkEo73pBXm38jE+tOCt+HoVqCGuXYPUiE90600EUgV2k7Xtu5wFoLyPHeAZbgDPPnOKhB09SV9Vz7SayC6imWDfDoaO3cvj4EWZ6vrt1GGY2qplYVEn9GI9hWEx12hKyogjsIu00WgVh8snw6IT/YfUyHvva1/jspz7FcHDjrFJzo3Hn7Xfwjh/4If7wD/+U3/7t3+C7XgGL85cnCbVTprWTIU9b42+JgcFdxCRweBbWGuhX42nAa3wwSvC+aLlZc+H8AOem9bLbe5IkQZKE2SxjaWGeY8cXme9u06GmZmIiVPzEJkkK811YXORFrcR8MxBFYBcxCRxfEJKBD0CBb/itFSD4BJWNvuXSxfpZC4FGdpc2U7tTpBw9ush8t6ZDPRow1I4hkMTPMTi7BAcOTFM0wLMjd0BEfk5EHhKRB0Xkd0WkIyJ3isj9IvKYiHw0LEwyFRR5wqte3eGWoykdxm5AgZ/0Mrd+Vt6pOSHXCeWw5Myp02z2+5T4mMAQHyC8CKw6v1jMzAIsHGbqVOCqRUBEbgH+J+CEqr4K7169C/hV4F+p6t34uRvfuxsVvREwJuHIwUXmZ7qjabcSfKPPxLsEYTxQ5BrSNI611ZphpZctEdrGAqyAGOjOCjNzydS5AzsNDKZAV0RSvLt7Bvg+4GPh/Q8DP7LDMm4YUpNyx9E7OTS3RE7oFcDPnd9N/ErhbXAwcu0oSzh9FjYGPj7TxVtpbZwmCV2IS4cMB5bTqbMEdrIg6WkR+RfAU/hu1z8HHgDWVLWdnOlp4JYd1/IGIZ1Z4Phb/wnD0x/la//lKea7PlNtq/R56u2adUmMB15TqiGceRJc39+pCsaWQLvykhnAq1/2o3zXfd+NkekKle3EHVgC3gncCRzH94697UV8/30iclJETq6srLzwF24Akqxg9q6/gztwC6vAwMAwPMoEGgOd/PJlviN7T2Nha90vGdYuSJwxjtdk6mcqPnLkXm65/QRyA6watJvs5Gj/PvBNVV1R1Rr4OPAmYDG4BwC3Aqef68uq+iFVPaGqJ5aXb5JhmyaFI7eTzi6SA8Mtv2jlHUt+NeLleXjZbXBgetLSrwuc+tmamjCLc5vG3c4KbQzQg6W77+Lwd7zSdxdMETs52qeAN4jIjPhRMG8BHgY+C/xY+My7gT/eWRVvIERAEnod4cgCLHVhoQNzHT96bXMbNtbHy5ZHrg3tNGJtlmCb4t0wXn3YAqQdJO1OW0jg6kVAVe/HBwC/CHw17OtDwC8CPy8ijwEHgd/chXreUMwUcGgO5jswW/jFL+satvuwueEXvoxcO9oG3z4SxsLQzjDsAE1yMJ2pyxbaUQREVX8Z+OUrNj8OvG4n+73RcQNoLvl18AxQbXiL4GDXL+V9atP3T0euLY6xELTPcwDxMw7pdLX9EdMVBr1GFBnM9yAt8JFn9ctfJwlUjfdR1Snb51YZzi7RWZqmpYWuPRqGeDc6Hh/QugHtfAKSTp0BMCKKwB4wO+fXCAC/1Hc7Tqhu4MKWFwIVx8rDT7GUzUYR2GOsg+0KajcxcrAlLB6ZTHECRxSBPaBR2G5g/aLvnrL4VYgcfnGLJQeJtXzxi3+D9jocec1L97vKNzWC76Vpp3wzjFceMgayLnQPC9mUrvAURWAPSIsuxfwB1k6vUdUOkrD8VQJpOOPilGfOrXD76vq+1vVmRlWxzmKt9fM+Ml4Rqk0WMgnkecLsXEb6QtNB36REEdgDjt75Cr7zzT/C5x79GKvbGxggz3ySUKFwuoaL6njy/EXuXt/Y7+re1GxsbtDvb9ErfKama/wgrtG8Ajn0ZnKOHjhEMaWmQBSBPeDY3a/k7xZd/vlL/i7lcA3hEpsXTrO1sc6pU5c4utqwVub82Hv+Ad/xypfvd3VvaobbG9hyg6UOdCrvki0I2ARqgaFAniR0Oh3MC60Td5MSRWAPOHjrnRy89U5e8ca3AOuoPskzjz3IypmznPziUxw5W7O5nfGWH/oelg4c2O/q3sQomxuXGG6t0UshD8uQ9QSaBErjuwkzkzLb62HMdOZzRxHYc+aAV3D0zrs5fLvjnvsszilOhbm5mD+8lzinfOr//iR/+/nPc/483DoHsz0fsM2AwsDaNix0DvC9b/oelg9OzwzDk0QR2HMSRBJMmmFSpjYCvV+sXFjl0uoaWcJo+uc894uTSgLzc3Dw0AzHXvISiu50rvAURSBy06KqXLi0wfr6Fsc7YWrxCma6IVlI4MhhOH57j+MvfzkyO535GlEEIjc1a2vbrKz2MaUfLpwBR4Pr7wSyIZQswYE3QhpFIBK56RiWDf2yYTPMGZApdEPSkAokA9iqM8iWpjZvOIpA5KZme1CxXlaI8bPe5A4ec+PpxjfOw+ylfa7kPhNFIHLTkiQJP/9zP8PqhSdh+AjNhYZ6rWLlqdOU/Q3KwTorm5vcfc/R/a7qvhJFIHLTkiQJb//B76dxawyaB9k8VbF9fsiphx5he/08W5vnOX/hIrffddt+V3VfiSIQuekxMs9s9np6dyh6O7zktd+LqgPncKokJr1sifhpI4pA5KZHxEcAJEwqaLL9rtH1xXQmS0cikRFRBCKRKSeKQCQy5UQRiESmnCgCkciUE0UgEplyXlAEROS3ROS8iDw4se2AiHxaRL4R/i+F7SIi/0ZEHhORr4jIfXtZ+UgksnO+HUvgd3j2QqPvBz6jqvcAnwmvAX4QuCc83gf8+u5UMxKJ7BUvKAKq+pfAlUMs3gl8ODz/MPAjE9v/g3o+j1+c9Ngu1TUSiewBVxsTOKKqZ8Lzs8CR8PwW4NTE554O2yKRyHXKjgODqtqu4/CiEJH3ichJETm5srKy02pEIpGr5GpF4Fxr5of/58P208DkkKxbw7ZnoaofUtUTqnpieXn5KqsRiUR2ytWKwCeAd4fn7wb+eGL7T4VegjcA6xNuQyQSuQ55wVGEIvK7wJuBQyLyNH4p8l8Bfl9E3gs8Cfx4+PgngbcDjwF94B/vQZ0jkcgu8oIioKo/8TxvveU5PqvAz+y0UpFI5NoRMwYjkSknikAkMuVEEYhEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuVEEYhEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKiSIQiUw5UQQikSknikAkMuVEEYhEppwoApHIlBNFIBKZcqIIRCJTThSBSGTKeUEREJHfEpHzIvLgxLZfE5FHROQrIvKHIrI
48d4HROQxEXlURN66R/WORCK7xLdjCfwO8LYrtn0aeJWqfifwdeADACJyL/Au4JXhO/+7iJhdq20kEtl1XlAEVPUvgUtXbPtzVW3Cy8/jFx4FeCfwe6paquo38cuRvW4X6xuJRHaZ3YgJ/DTwqfD8FuDUxHtPh22RSOQ6ZUciICK/BDTAR67iu+8TkZMicnJlZWUn1YhEIjvgqkVARN4D/DDwk2EhUoDTwG0TH7s1bHsWqvohVT2hqieWl5evthqRSGSHXJUIiMjbgF8A3qGq/Ym3PgG8S0QKEbkTuAf4ws6rGYlE9ooXXJpcRH4XeDNwSESeBn4Z3xtQAJ8WEYDPq+o/UdWHROT3gYfxbsLPqKrdq8pHIpGdI2NLfv84ceKEnjx5cr+rEYnc1IjIA6p64srtMWMwEplyoghEIlNOFIFIZMqJIhCJTDlRBCKRKSeKQCQy5UQRiESmnCgCkciUE0UgEplyoghEIlNOFIFIZMqJIhCJTDlRBCKRKSeKQCQy5UQRiESmnCgCkciUE0UgEplyoghEIlNOFIFIZMqJIhCJTDlRBCKRKee6mG1YRFaAbeDCPlbj0D6Wv59lx/Kn57d/iao+a6Wf60IEAETk5HNNhzwN5U/zsU97+ft97BDdgUhk6okiEIlMOdeTCHxoisuf5mOf9vL3+9ivn5hAJBLZH64nSyASiewD+y4CIvI2EXlURB4Tkfdfg/JuE5HPisjDIvKQiPxs2H5ARD4tIt8I/5f2uB5GRL4kIn8SXt8pIveH8/BREcn3sOxFEfmYiDwiIl8TkTdeq+MXkZ8L5/1BEfldEens5bGLyG+JyHkReXBi23Meq3j+TajHV0Tkvj0q/9fCuf+KiPyhiCxOvPeBUP6jIvLWnZb/baGq+/YADPC3wEuBHPgb4N49LvMYcF94Pgd8HbgX+N+A94ft7wd+dY/r8fPAfwT+JLz+feBd4fm/B/6HPSz7w8B/F57nwOK1OH7gFuCbQHfimN+zl8cO/D3gPuDBiW3PeazA24FPAQK8Abh/j8r/ASANz391ovx7QxsogDtD2zB7eR2q6r6LwBuBP5t4/QHgA9e4Dn8MfD/wKHAsbDsGPLqHZd4KfAb4PuBPwkV3YeLCuOy87HLZC6EhyhXb9/z4gwicAg4AaTj2t+71sQN3XNEIn/NYgf8D+Inn+txuln/Fez8KfCQ8v+z6B/4MeONeXYftY7/dgfaiaHk6bLsmiMgdwGuB+4EjqnomvHUWOLKHRf9r4BcAF14fBNZUtQmv9/I83AmsAL8d3JHfEJEe1+D4VfU08C+Ap4AzwDrwANfu2Fue71j343r8abz1sV/l77sI7BsiMgv8AfDPVHVj8j31Mrwn3SYi8sPAeVV9YC/2/22Q4s3TX1fV1+LTtS+LxezV8Qff+514IToO9IC37XY5L4a9/K1fCBH5JaABPrIf5bfstwicBm6beH1r2LaniEiGF4CPqOrHw+ZzInIsvH8MOL9Hxb8JeIeIPAH8Ht4l+CCwKCJp+MxenoengadV9f7w+mN4UbgWx//3gW+q6oqq1sDH8efjWh17y/Md6zW7HkXkPcAPAz8ZhOialj/JfovAXwP3hOhwDrwL+MReFigiAvwm8DVV/ZcTb30CeHd4/m58rGDXUdUPqOqtqnoH/nj/QlV/Evgs8GPXoPyzwCkReXnY9BbgYa7N8T8FvEFEZsLv0JZ9TY59guc71k8APxV6Cd4ArE+4DbuGiLwN7w6+Q1X7V9TrXSJSiMidwD3AF3a7/Gex10GHbyNo8nZ8hP5vgV+6BuX9V3jz7yvAl8Pj7Xi//DPAN4D/BzhwDeryZsa9Ay8NP/hjwP8FFHtY7ncBJ8M5+CNg6VodP/C/Ao8ADwL/Jz4SvmfHDvwuPv5Q462g9z7fseIDtP8uXItfBU7sUfmP4X3/9vr79xOf/6VQ/qPAD+71NaiqMWMwEpl29tsdiEQi+0wUgUhkyokiEIlMOVEEIpEpJ4pAJDLlRBGIRKacKAKRyJQTRSASmXL+f6ahd3WroIdvAAAAAElFTkSuQmCC\n",
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "plt.imshow(test_img.transpose(0, 1).transpose(1, 2))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a64773aa-6bea-47e6-903b-7e41cfb31098",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "27a479c7-a73d-412e-9f81-520a368cc8a0",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/install_geometric.sh b/install_geometric.sh
new file mode 100755
index 0000000..590e767
--- /dev/null
+++ b/install_geometric.sh
@@ -0,0 +1,8 @@
+TORCH='1.8.1'
+CUDA='10.2'
+
+pip3 install --user torch-scatter -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
+pip3 install --user torch-sparse -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
+pip3 install --user torch-cluster -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
+pip3 install --user torch-spline-conv -f https://pytorch-geometric.com/whl/torch-${TORCH}+${CUDA}.html
+pip3 install --user torch-geometric
diff --git a/notebooks/3d visualisation.ipynb b/notebooks/3d visualisation.ipynb
new file mode 100644
index 0000000..e9ac53f
--- /dev/null
+++ b/notebooks/3d visualisation.ipynb
@@ -0,0 +1,97 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "import open3d as o3d\n",
+ "from open3d import JVisualizer"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#from shapenet dataset\n",
+ "\n",
+ "def visualize(path,viz_type = 'pc', num_points = 5000,radii = [0.005, 0.01, 0.02, 0.04]):\n",
+ " \n",
+ " #load point cloud and normals\n",
+ " pc_xyz = np.load(path+'points.npy')\n",
+ " pc_normals = np.load(path+'normals.npy')\n",
+ " \n",
+ " \n",
+    "    #sample points for visualisation\n",
+ " \n",
+ " selected_idx = np.random.permutation(np.arange(pc_xyz.shape[0]))[:num_points]\n",
+ " pc_xyz = pc_xyz[selected_idx]\n",
+ " pc_normals = pc_normals[selected_idx]\n",
+ "\n",
+ " \n",
+ " #create point cloud dataset using open3d\n",
+ " pcd = o3d.geometry.PointCloud()\n",
+ " pcd.points = o3d.utility.Vector3dVector(pc_xyz)\n",
+ " pcd.normals = o3d.utility.Vector3dVector(pc_normals)\n",
+ " pcd.colors = o3d.utility.Vector3dVector(pc_normals)\n",
+ "\n",
+ " if(viz_type=='pc'):\n",
+ " #visualize point cloud\n",
+ " \n",
+ " visualizer = JVisualizer()\n",
+ " visualizer.add_geometry(pcd)\n",
+ " visualizer.show()\n",
+ "\n",
+ " \n",
+ " else:\n",
+ " \n",
+ " #create mesh\n",
+ " rec_mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, o3d.utility.DoubleVector(radii))\n",
+ " \n",
+ " #visualize mesh\n",
+ " o3d.visualization.draw_geometries([rec_mesh])\n",
+ "\n",
+ " \n",
+ " "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "path = '/home/shanthika/Documents/CV/project/subset(1)/subset/ShapeNet/'\n",
+ "class_id = '02828884/'\n",
+ "file_id = '1b0463c11f3cc1b3601104cd2d998272/'\n",
+ "filename = path+class_id+file_id+'pointcloud/'\n",
+ "visualize(filename,viz_type='mesh')"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/notebooks/check_model.ipynb b/notebooks/check_model.ipynb
new file mode 100644
index 0000000..1daf343
--- /dev/null
+++ b/notebooks/check_model.ipynb
@@ -0,0 +1,894 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+<<<<<<< HEAD
+ "execution_count": 19,
+=======
+ "execution_count": 1,
+ "id": "0650cc49-b6a9-4d41-8000-162918edc0b6",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import glob\n",
+ "import cv2\n",
+ "import random\n",
+ "import pandas as pd\n",
+ "from skimage import io\n",
+ "import numpy as np\n",
+ "from PIL import Image\n",
+ "from torch.utils.data import Dataset, DataLoader\n",
+ "from torchvision import transforms, utils\n",
+ "import h5py\n",
+ "\n",
+ "# Network building stuff\n",
+ "import torch\n",
+ "import torch.nn as nn\n",
+ "import torch.nn.functional as F\n",
+ "\n",
+ "import pytorch_lightning as pl\n",
+ "from pytorch_lightning.loggers import TensorBoardLogger\n",
+<<<<<<< HEAD
+ "import torchmetrics\n",
+ "import torch.distributions as dist"
+=======
+ "import torchmetrics"
+>>>>>>> main
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+<<<<<<< HEAD
+=======
+ "id": "da8e62a7-7b2e-4ceb-9ee6-c9673fffb141",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+<<<<<<< HEAD
+ "execution_count": 5,
+=======
+ "execution_count": 2,
+ "id": "93b17279-9fc3-48c8-8857-7313f7a9246b",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import sys\n",
+<<<<<<< HEAD
+ "# sys.path.append(\"/home2/sdokania/all_projects/project-noisypixel/\")"
+=======
+ "sys.path.append(\"/home2/sdokania/all_projects/project-noisypixel/\")"
+>>>>>>> main
+ ]
+ },
+ {
+ "cell_type": "code",
+<<<<<<< HEAD
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+=======
+ "execution_count": 3,
+ "id": "c113081b-0dae-4889-ae96-31704a8b130d",
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "ModuleNotFoundError",
+ "evalue": "No module named 'models'",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mModuleNotFoundError\u001b[0m Traceback (most recent call last)",
+ "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msrc\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmodels\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msrc\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdataset\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdataloader\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mOccupancyNetDatasetHDF\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 3\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0msrc\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtrainer\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mONetLit\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 4\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msrc\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mutils\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mConfig\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mcount_parameters\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;32m~/all_projects/project-noisypixel/src/trainer.py\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[1;32m 19\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mtorchmetrics\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 20\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 21\u001b[0;31m \u001b[0;32mfrom\u001b[0m \u001b[0mmodels\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0;34m*\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 22\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0mdataset\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mdataloader\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mOccupancyNetDatasetHDF\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 23\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;31mModuleNotFoundError\u001b[0m: No module named 'models'"
+ ]
+ }
+ ],
+>>>>>>> main
+ "source": [
+ "from src.models import *\n",
+ "from src.dataset.dataloader import OccupancyNetDatasetHDF\n",
+ "from src.trainer import ONetLit\n",
+ "from src.utils import Config, count_parameters"
+ ]
+ },
+ {
+ "cell_type": "code",
+<<<<<<< HEAD
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Setting sexperiment path as : /home2/sdokania/all_projects/occ_artifacts/initial\n"
+ ]
+ }
+ ],
+=======
+ "execution_count": null,
+ "id": "82ca747d-996c-4c21-a9a9-266da441f9b4",
+ "metadata": {},
+ "outputs": [],
+>>>>>>> main
+ "source": [
+ "config = Config()\n",
+ "config.data_root = \"/ssd_scratch/cvit/sdokania/processed_data/hdf_data/\"\n",
+ "config.batch_size = 32"
+ ]
+ },
+ {
+ "cell_type": "code",
+<<<<<<< HEAD
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'c_dim': 128,\n",
+ " 'h_dim': 128,\n",
+ " 'p_dim': 3,\n",
+ " 'data_root': '/ssd_scratch/cvit/sdokania/processed_data/hdf_data/',\n",
+ " 'batch_size': 32,\n",
+ " 'output_dir': '/home2/sdokania/all_projects/occ_artifacts/',\n",
+ " 'exp_name': 'initial',\n",
+ " 'encoder': 'efficientnet-b0',\n",
+ " 'decoder': 'decoder-cbn',\n",
+ " 'exp_path': '/home2/sdokania/all_projects/occ_artifacts/initial',\n",
+ " 'lr': 0.0003}"
+ ]
+ },
+ "execution_count": 8,
+ "metadata": {},
+ "output_type": "execute_result"
+=======
+ "execution_count": 4,
+ "id": "3325255a-c561-46cd-b3e7-96fdfe1e6954",
+ "metadata": {},
+ "outputs": [
+ {
+ "ename": "NameError",
+ "evalue": "name 'config' is not defined",
+ "output_type": "error",
+ "traceback": [
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
+ "\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
+ "\u001b[0;32m\u001b[0m in \u001b[0;36m\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mvars\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mconfig\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m",
+ "\u001b[0;31mNameError\u001b[0m: name 'config' is not defined"
+ ]
+>>>>>>> main
+ }
+ ],
+ "source": [
+ "\n",
+ "vars(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+<<<<<<< HEAD
+=======
+ "id": "e91c84a6-2914-4341-87c9-21fc8bb76726",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+<<<<<<< HEAD
+=======
+ "id": "1db21aba-9f44-4fed-b8aa-e7aef11a7bdd",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+<<<<<<< HEAD
+=======
+ "id": "31ca1d12-41bc-4c70-81cc-f5bb694428a6",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loaded pretrained weights for efficientnet-b0\n"
+ ]
+ }
+ ],
+ "source": [
+ "onet = ONetLit(config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+<<<<<<< HEAD
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loaded pretrained weights for efficientnet-b0\n"
+ ]
+ }
+ ],
+ "source": [
+ "net = ONetLit.load_from_checkpoint(\"../occ_artifacts/efficient_cbn_bs_64_full_data/lightning_logs/version_1/checkpoints/epoch=28-step=13919.ckpt\", cfg=config)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "dataset = OccupancyNetDatasetHDF(config.data_root, mode=\"val\", num_points=10000)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "test_img, test_pts, test_gt = dataset[0]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def make_3d_grid(bb_min, bb_max, shape):\n",
+ " ''' Makes a 3D grid.\n",
+ " Args:\n",
+ " bb_min (tuple): bounding box minimum\n",
+ " bb_max (tuple): bounding box maximum\n",
+ " shape (tuple): output shape\n",
+ " '''\n",
+ " size = shape[0] * shape[1] * shape[2]\n",
+ "\n",
+ " pxs = torch.linspace(bb_min[0], bb_max[0], shape[0])\n",
+ " pys = torch.linspace(bb_min[1], bb_max[1], shape[1])\n",
+ " pzs = torch.linspace(bb_min[2], bb_max[2], shape[2])\n",
+ "\n",
+ " pxs = pxs.view(-1, 1, 1).expand(*shape).contiguous().view(size)\n",
+ " pys = pys.view(1, -1, 1).expand(*shape).contiguous().view(size)\n",
+ " pzs = pzs.view(1, 1, -1).expand(*shape).contiguous().view(size)\n",
+ " p = torch.stack([pxs, pys, pzs], dim=1)\n",
+ "\n",
+ " return p"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 21,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "torch.Size([32768, 3])"
+ ]
+ },
+ "execution_count": 21,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "vg = make_3d_grid((-0.5,)*3, (0.5,)*3, (32,)*3)\n",
+ "vg.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 22,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "out = net(test_img.unsqueeze(0), vg.unsqueeze(0))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 23,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "c = net.net.encoder(test_img.unsqueeze(0)).detach()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 24,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "torch.Size([621, 3])"
+ ]
+ },
+ "execution_count": 24,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "p = dist.Bernoulli(logits=out)\n",
+ "vg[p.probs.flatten() > 0.5].shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 25,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "tensor([[-0.4032, -0.0161, -0.0161],\n",
+ " [-0.4032, -0.0161, 0.0161],\n",
+ " [-0.4032, -0.0161, 0.0484],\n",
+ " ...,\n",
+ " [ 0.3387, -0.0161, 0.0484],\n",
+ " [ 0.3710, -0.0161, -0.0161],\n",
+ " [ 0.3710, -0.0161, 0.0161]], requires_grad=True)"
+ ]
+ },
+ "execution_count": 25,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "good_p = vg[p.probs.flatten() > 0.5]\n",
+ "good_p.requires_grad_()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 26,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "net.zero_grad()\n",
+ "outs = net.net.decoder(good_p.unsqueeze(0), c)\n",
+ "outs = outs.sum()\n",
+ "outs.backward()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 29,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "(621, 3)\n"
+ ]
+ }
+ ],
+ "source": [
+    "ni = -good_p.grad\n",
+ "ni = ni / torch.norm(ni, dim=-1, keepdim=True)\n",
+ "ni = ni.squeeze(0).cpu().numpy()\n",
+ "print(ni.shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 32,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "np.save(\"../point_vals\", good_p.detach().numpy())\n",
+ "np.save(\"../normal_vals\", ni)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+=======
+ "execution_count": 10,
+ "id": "68c89709-d3a6-4abb-bd89-46605d1ea047",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "torch.Size([1, 1024])"
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "onet(torch.randn(1, 3, 224, 224), torch.randn(1, 1024, 3)).shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+<<<<<<< HEAD
+=======
+ "id": "7c9c5c58-eaad-424a-a941-59925c95ec4b",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+<<<<<<< HEAD
+=======
+ "id": "ecb83ae9-9afd-4b1a-8161-a5e2b0c866c4",
+>>>>>>> main
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "\u001b[0;31mInit signature:\u001b[0m\n",
+ "\u001b[0mTensorBoardLogger\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0msave_dir\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mUnion\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mNoneType\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m'default'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0mversion\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mUnion\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0mint\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mstr\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mNoneType\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mNone\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0mlog_graph\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mbool\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mFalse\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0mdefault_hp_metric\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mbool\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;32mTrue\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0mprefix\u001b[0m\u001b[0;34m:\u001b[0m \u001b[0mstr\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0;34m''\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\n",
+ "\u001b[0;34m\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
+ "\u001b[0;31mDocstring:\u001b[0m \n",
+ "Log to local file system in `TensorBoard `_ format.\n",
+ "\n",
+ "Implemented using :class:`~torch.utils.tensorboard.SummaryWriter`. Logs are saved to\n",
+ "``os.path.join(save_dir, name, version)``. This is the default logger in Lightning, it comes\n",
+ "preinstalled.\n",
+ "\n",
+ "Example:\n",
+ " >>> from pytorch_lightning import Trainer\n",
+ " >>> from pytorch_lightning.loggers import TensorBoardLogger\n",
+ " >>> logger = TensorBoardLogger(\"tb_logs\", name=\"my_model\")\n",
+ " >>> trainer = Trainer(logger=logger)\n",
+ "\n",
+ "Args:\n",
+ " save_dir: Save directory\n",
+ " name: Experiment name. Defaults to ``'default'``. If it is the empty string then no per-experiment\n",
+ " subdirectory is used.\n",
+ " version: Experiment version. If version is not specified the logger inspects the save\n",
+ " directory for existing versions, then automatically assigns the next available version.\n",
+ " If it is a string then it is used as the run-specific subdirectory name,\n",
+ " otherwise ``'version_${version}'`` is used.\n",
+ " log_graph: Adds the computational graph to tensorboard. This requires that\n",
+ " the user has defined the `self.example_input_array` attribute in their\n",
+ " model.\n",
+ " default_hp_metric: Enables a placeholder metric with key `hp_metric` when `log_hyperparams` is\n",
+ " called without a metric (otherwise calls to log_hyperparams without a metric are ignored).\n",
+ " prefix: A string to put at the beginning of metric keys.\n",
+ " \\**kwargs: Additional arguments like `comment`, `filename_suffix`, etc. used by\n",
+ " :class:`SummaryWriter` can be passed as keyword arguments in this logger.\n",
+ "\u001b[0;31mFile:\u001b[0m ~/.local/lib/python3.8/site-packages/pytorch_lightning/loggers/tensorboard.py\n",
+ "\u001b[0;31mType:\u001b[0m ABCMeta\n",
+ "\u001b[0;31mSubclasses:\u001b[0m \n"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "TensorBoardLogger?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "309ee90f-28dc-4e70-8e7a-a351a756a555",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "d036f4fb-feec-4244-9c30-c04a181db652",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "id": "f00c287b-ccaf-465a-bdf3-c57c405c3bff",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "49293238-657b-4fb9-af68-910e9ed8b6ac",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "GPU available: True, used: True\n",
+ "TPU available: False, using: 0 TPU cores\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Define the trainer object\n",
+ "trainer = pl.Trainer(\n",
+ " gpus=1,\n",
+ " # auto_scale_batch_size='binsearch',\n",
+ " logger=logger,\n",
+ " min_epochs=1,\n",
+ " max_epochs=1,\n",
+ " default_root_dir=config.output_dir,\n",
+ " log_every_n_steps=10,\n",
+ " progress_bar_refresh_rate=5,\n",
+ " # precision=16,\n",
+ " # stochastic_weight_avg=True,\n",
+ " # track_grad_norm=2,\n",
+ " callbacks=[checkpoint_callback],\n",
+ " check_val_every_n_epoch=1,\n",
+ ")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "id": "8af1382f-2840-46c4-bfc6-1d2f5309e883",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n",
+ "\n",
+ " | Name | Type | Params\n",
+ "-----------------------------------\n",
+ "0 | net | OccNetImg | 4.7 M \n",
+ "-----------------------------------\n",
+ "4.7 M Trainable params\n",
+ "0 Non-trainable params\n",
+ "4.7 M Total params\n",
+ "18.802 Total estimated model params size (MB)\n",
+ "/home2/sdokania/.local/lib/python3.8/site-packages/pytorch_lightning/utilities/distributed.py:68: UserWarning: Your val_dataloader has `shuffle=True`, it is best practice to turn this off for validation and test dataloaders.\n",
+ " warnings.warn(*args, **kwargs)\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "6be5bd3972cc484e9a772aa6b6c302dd",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Validation sanity check: 0it [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "f72161166ebf474bbe886f867d35c361",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Training: 0it [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ "Validating: 0it [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "text/plain": [
+ "1"
+ ]
+ },
+ "execution_count": 14,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Start training\n",
+ "trainer.fit(onet)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "id": "fc99d5c8-1bf4-4517-b048-6c69adae8f99",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "09ff0871-bfb7-4163-9713-51e5c76c537d",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "a3ca1499-5a1c-4afa-ab89-24c267c66c17",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "id": "2b0488fe-f952-4821-888e-9de8e8bae483",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "vd = OccupancyNetDatasetHDF(config.data_root, mode=\"val\")\n",
+ "vdl = torch.utils.data.DataLoader(vd, batch_size=100, shuffle=False, num_workers=8)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "id": "229b5b9b-4ef8-4cb0-8c70-fc2ba728a7da",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([100, 3, 137, 137]) torch.Size([100, 1024, 3]) torch.Size([100, 1024])\n",
+ "torch.Size([71, 3, 137, 137]) torch.Size([71, 1024, 3]) torch.Size([71, 1024])\n"
+ ]
+ }
+ ],
+ "source": [
+ "for ix in vdl:\n",
+ " print(ix[0].shape, ix[1].shape, ix[2].shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 42,
+ "id": "c5c80319-caaa-441b-ab6d-f90463d8297c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "44"
+ ]
+ },
+ "execution_count": 42,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "len(vdl)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "id": "024abf17-ba2b-4924-991d-10c792de4773",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "fname = \"/ssd_scratch/cvit/sdokania/hdf_data/hdf_data/04256520_bdfcf2086fafb0fec8a04932b17782af.h5\"\n",
+ "hf = h5py.File(fname, 'r')"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "id": "16652d9d-57a0-43eb-9e5c-d85273fedb9c",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "<KeysViewHDF5 ['camera', 'images', 'pointcloud', 'points']>"
+ ]
+ },
+ "execution_count": 15,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "hf.keys()"
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/notebooks/data_loader_trial.ipynb b/notebooks/data_loader_trial.ipynb
new file mode 100644
index 0000000..12fb343
--- /dev/null
+++ b/notebooks/data_loader_trial.ipynb
@@ -0,0 +1,289 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "after-reserve",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os\n",
+ "import glob\n",
+ "import torch\n",
+ "import cv2\n",
+ "from skimage import io\n",
+ "import numpy as np\n",
+ "from PIL import Image\n",
+ "import matplotlib.pyplot as plt\n",
+ "from torch.utils.data import Dataset, DataLoader\n",
+ "from torchvision import transforms, utils\n",
+ "import h5py"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 67,
+ "id": "charitable-bibliography",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from efficientnet_pytorch import EfficientNet"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "id": "middle-compilation",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "class OccupancyNetDatasetHDF(Dataset):\n",
+ " \"\"\"Occupancy Network dataset.\"\"\"\n",
+ "\n",
+ " def __init__(self, root_dir, transform=None, num_points=1024, default_transform=True):\n",
+ " \"\"\"\n",
+ " Args:\n",
+ " root_dir (string): Directory with all the images.\n",
+ " transform (callable, optional): Optional transform to be applied\n",
+ " on a sample.\n",
+ " num_points (int): Number of points to sample from the object point cloud.\n",
+ " \"\"\"\n",
+ " self.root_dir = root_dir\n",
+ " self.transform = transform\n",
+ " self.num_points = num_points\n",
+ " self.files = []\n",
+ " \n",
+ " for sub in os.listdir(self.root_dir):\n",
+ " self.files.append(sub)\n",
+ " \n",
+ " # If no transform has been provided, apply the default ImageNet normalization\n",
+ " if transform is None and default_transform:\n",
+ " self.transform = transforms.Normalize(mean=[0.485, 0.456, 0.406],\n",
+ " std=[0.229, 0.224, 0.225])\n",
+ "\n",
+ " def __len__(self):\n",
+ " return len(self.files)\n",
+ "\n",
+ " def __getitem__(self, idx):\n",
+ " # Fetch the file path and setup image folder paths\n",
+ " req_path = self.files[idx]\n",
+ " file_path = os.path.join(self.root_dir, req_path)\n",
+ "\n",
+ " # Load the h5 file\n",
+ " hf = h5py.File(file_path, 'r')\n",
+ " \n",
+ " # [NOTE]: the notation [()] below is to extract the value from HDF5 file\n",
+ " # get all images and randomly pick one\n",
+ " all_imgs = hf['images'][()]\n",
+ " random_idx = int(np.random.random()*all_imgs.shape[0])\n",
+ " \n",
+ " # Fetch the image we need\n",
+ " image = all_imgs[random_idx]\n",
+ " \n",
+ " # Get the points and occupancies\n",
+ " points = hf['points']['points'][()]\n",
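+ " # Occupancies are stored bit-packed; np.unpackbits restores one 0/1 label per point\n",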
+ " occupancies = np.unpackbits(hf['points']['occupancies'][()])\n",
+ "\n",
+ " # Sample n points from the data\n",
+ " selected_idx = np.random.permutation(np.arange(points.shape[0]))[:self.num_points]\n",
+ "\n",
+ " # Use only the selected indices and pack everything up in a nice dictionary\n",
+ " final_image = torch.from_numpy(image).float().transpose(1, 2).transpose(0, 1)\n",
+ " final_points = torch.from_numpy(points[selected_idx])\n",
+ " final_gt = torch.from_numpy(occupancies[selected_idx])\n",
+ " \n",
+ " # Close the hdf file\n",
+ " hf.close()\n",
+ " \n",
+ " # Apply any transformation necessary\n",
+ " if self.transform:\n",
+ " final_image = self.transform(final_image)\n",
+ "\n",
+ " return final_image, final_points, final_gt"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "portuguese-participation",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 52,
+ "id": "recognized-journal",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "ds = OccupancyNetDatasetHDF(\"/home/shubham/datasets/hdf_data/\", num_points=1024)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 61,
+ "id": "senior-brunei",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "tensor(1051.4952)"
+ ]
+ },
+ "execution_count": 61,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "ds[0][0].mean()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 49,
+ "id": "hungry-defendant",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "31\n"
+ ]
+ }
+ ],
+ "source": [
+ "loader = torch.utils.data.DataLoader(ds, batch_size=128, shuffle=True)\n",
+ "print(len(loader))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "id": "dedicated-tenant",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([128, 3, 137, 137]) torch.Size([128, 1024, 3]) torch.Size([128, 1024])\n",
+ "torch.Size([32, 3, 137, 137]) torch.Size([32, 1024, 3]) torch.Size([32, 1024])\n"
+ ]
+ }
+ ],
+ "source": [
+ "for ix in loader:\n",
+ " print(ix[0].shape, ix[1].shape, ix[2].shape)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 92,
+ "id": "collective-sigma",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Loaded pretrained weights for efficientnet-b7\n"
+ ]
+ }
+ ],
+ "source": [
+ "net = EfficientNet.from_pretrained('efficientnet-b7', include_top=False)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "joined-police",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 93,
+ "id": "southwest-actor",
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "torch.Size([1, 2560, 1, 1])"
+ ]
+ },
+ "execution_count": 93,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "net(ds[0][0].unsqueeze(0)).shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "brazilian-dominant",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/notebooks/data_preprocess_trial.ipynb b/notebooks/data_preprocess_trial.ipynb
new file mode 100644
index 0000000..6f6ce8c
--- /dev/null
+++ b/notebooks/data_preprocess_trial.ipynb
@@ -0,0 +1,446 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "id": "starting-developer",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "/home/shubham/.local/lib/python3.8/site-packages/skimage/io/manage_plugins.py:23: UserWarning: Your installed pillow version is < 7.1.0. Several security issues (CVE-2020-11538, CVE-2020-10379, CVE-2020-10994, CVE-2020-10177) have been fixed in pillow 7.1.0 or higher. We recommend to upgrade this library.\n",
+ " from .collection import imread_collection_wrapper\n"
+ ]
+ }
+ ],
+ "source": [
+ "import numpy as np\n",
+ "import pandas as pd\n",
+ "import h5py\n",
+ "import os\n",
+ "import skimage.io as sio\n",
+ "import tqdm"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "id": "immune-station",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "data_root = \"/home/shubham/datasets/subset/ShapeNet/\"\n",
+ "dataset_dir = \"/home/shubham/datasets/\"\n",
+ "\n",
+ "os.makedirs(os.path.join(dataset_dir, \"hdf_data\"), exist_ok=True)\n",
+ "save_path = os.path.join(dataset_dir, \"hdf_data\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "id": "subsequent-merchandise",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "os.makedirs(os.path.join(dataset_dir, \"hdf_data\"), exist_ok=True)\n",
+ "save_path = os.path.join(dataset_dir, \"hdf_data\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "id": "transsexual-controversy",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def save_dict_to_hdf5(dic, filename):\n",
+ " \"\"\"\n",
+ " Save a (possibly nested) dictionary of numpy arrays / scalars to an HDF5 file.\n",
+ " \"\"\"\n",
+ " with h5py.File(filename, 'w') as h5file:\n",
+ " recursively_save_dict_contents_to_group(h5file, '/', dic)\n",
+ "\n",
+ "def recursively_save_dict_contents_to_group(h5file, path, dic):\n",
+ " \"\"\"\n",
+ " Recursively write dictionary items as HDF5 datasets and groups under the given path.\n",
+ " \"\"\"\n",
+ " for key, item in dic.items():\n",
+ " if isinstance(item, (np.ndarray, np.int64, np.float64, str, bytes)):\n",
+ " h5file[path + key] = item\n",
+ " elif isinstance(item, dict):\n",
+ " recursively_save_dict_contents_to_group(h5file, path + key + '/', item)\n",
+ " else:\n",
+ " raise ValueError('Cannot save %s type'%type(item))\n",
+ "\n",
+ "def load_dict_from_hdf5(filename):\n",
+ " \"\"\"\n",
+ " Load an HDF5 file back into a nested python dictionary.\n",
+ " \"\"\"\n",
+ " with h5py.File(filename, 'r') as h5file:\n",
+ " return recursively_load_dict_contents_from_group(h5file, '/')\n",
+ "\n",
+ "def recursively_load_dict_contents_from_group(h5file, path):\n",
+ " \"\"\"\n",
+ " Recursively read the datasets and groups under the given HDF5 path into a dictionary.\n",
+ " \"\"\"\n",
+ " ans = {}\n",
+ " for key, item in h5file[path].items():\n",
+ " if isinstance(item, h5py._hl.dataset.Dataset):\n",
+ " ans[key] = item[()]\n",
+ " elif isinstance(item, h5py._hl.group.Group):\n",
+ " ans[key] = recursively_load_dict_contents_from_group(h5file, path + key + '/')\n",
+ " return ans"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "id": "above-coordination",
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def load_data(path): \n",
+ " # Load the pointcloud.npz and points.npz file\n",
+ " pc_file = np.load(os.path.join(path, \"pointcloud.npz\"))\n",
+ " points_file = np.load(os.path.join(path, \"points.npz\"))\n",
+ " \n",
+ " # create image placeholder and camera data placeholder\n",
+ " img_data = []\n",
+ " cam_data = None\n",
+ " \n",
+ " # Load images\n",
+ " for imx in os.listdir(os.path.join(path, \"img_choy2016\")):\n",
+ " current = os.path.join(path, \"img_choy2016\", imx)\n",
+ " if 'npz' in imx:\n",
+ " cam_data = np.load(current)\n",
+ " else:\n",
+ " img_current = sio.imread(current)\n",
+ " if img_current.ndim == 2:\n",
+ " img_current = np.stack([img_current, img_current, img_current], axis=-1)\n",
+ " img_data.append(img_current)\n",
+ " img_data = np.asarray(img_data)\n",
+ " \n",
+ " all_data = {\n",
+ " 'images': img_data,\n",
+ " 'camera': dict(cam_data),\n",
+ " 'points': dict(points_file),\n",
+ " 'pointcloud': dict(pc_file)\n",
+ " }\n",
+ " \n",
+ " return all_data"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "robust-internship",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "personal-theater",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "id": "confirmed-tractor",
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ ":4: TqdmDeprecationWarning: This function will be removed in tqdm==5.0.0\n",
+ "Please use `tqdm.notebook.tqdm` instead of `tqdm.tqdm_notebook`\n",
+ " for obx in tqdm.tqdm_notebook(obj_list):\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "4cc3dbf83ceb42a4857fe069fedc534f",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/289 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Error at 02933112-test.lst\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "f7979ba4bea343c99a0de268330307de",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/296 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "04360fa7a41d48a3b6e8ac9136241648",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/277 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "5800056fc32543a3a53263a2faba0838",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/296 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Error at 04090263-test.lst\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "ce759a5827f44043843824cecbbb7f26",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/307 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "660ccc2ca106442e8e7700ad2b4395ed",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/297 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "cec182d6cc514d37806e1c6416d38ef0",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/315 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "ba50374cc8524e7caf9d9709cc7bca7a",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/296 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "b8c03cda3db24981b5079697a7eb59bd",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/314 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Error at 03001627-test.lst\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "f7df8e7a00d74df19787834c3c170322",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/316 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "75db28c1776447cb8cd836514e89ac53",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/292 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Error at 02828884-val.lst\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "2b036e4ea9714d0bad51211a9602adfb",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/275 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "Error at 04401088-val.lst\n"
+ ]
+ },
+ {
+ "data": {
+ "application/vnd.jupyter.widget-view+json": {
+ "model_id": "0c2ac06d88fa461694ef30ce7f1040b4",
+ "version_major": 2,
+ "version_minor": 0
+ },
+ "text/plain": [
+ " 0%| | 0/307 [00:00, ?it/s]"
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
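+ " # Convert each category/object folder into one <category>_<object>.h5 file; the train/test/val .lst split files inside each category folder cannot be loaded and are reported as errors below\n",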
+ "for cid in os.listdir(data_root):\n",
+ " objs_path = os.path.join(data_root, cid)\n",
+ " obj_list = os.listdir(objs_path)\n",
+ " for obx in tqdm.tqdm_notebook(obj_list):\n",
+ " current_path = os.path.join(objs_path, obx)\n",
+ " new_filename = \"{}_{}.h5\".format(cid, obx)\n",
+ " \n",
+ " try:\n",
+ " data_current = load_data(current_path)\n",
+ " save_dict_to_hdf5(data_current, os.path.join(save_path, new_filename))\n",
+ " except Exception:\n",
+ " print(\"Error at {}-{}\".format(cid, obx))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 75,
+ "id": "found-wildlife",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 45,
+ "id": "radical-daniel",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "id": "mounted-commitment",
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.5"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 5
+}
diff --git a/notebooks/marching_cubes.ipynb b/notebooks/marching_cubes.ipynb
new file mode 100644
index 0000000..16e70d4
--- /dev/null
+++ b/notebooks/marching_cubes.ipynb
@@ -0,0 +1,190 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [],
+ "source": [
+ "import numpy as np\n",
+ "import torch\n",
+ "from utils.libmise.mise import MISE\n",
+ "from utils.libmcubes.mcubes import marching_cubes\n",
+ "import trimesh\n",
+ "import os"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 72,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "padding = 0.1\n",
+ "threshold_g = 0.2"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 73,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def make_3d_grid(bb_min, bb_max, shape):\n",
+ " ''' Makes a 3D grid.\n",
+ "\n",
+ " Args:\n",
+ " bb_min (tuple): bounding box minimum\n",
+ " bb_max (tuple): bounding box maximum\n",
+ " shape (tuple): output shape\n",
+ " '''\n",
+ " size = shape[0] * shape[1] * shape[2]\n",
+ "\n",
+ " pxs = torch.linspace(bb_min[0], bb_max[0], shape[0])\n",
+ " pys = torch.linspace(bb_min[1], bb_max[1], shape[1])\n",
+ " pzs = torch.linspace(bb_min[2], bb_max[2], shape[2])\n",
+ "\n",
+ " pxs = pxs.view(-1, 1, 1).expand(*shape).contiguous().view(size)\n",
+ " pys = pys.view(1, -1, 1).expand(*shape).contiguous().view(size)\n",
+ " pzs = pzs.view(1, 1, -1).expand(*shape).contiguous().view(size)\n",
+ " p = torch.stack([pxs, pys, pzs], dim=1)\n",
+ "\n",
+ " return p"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 93,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def extract_mesh(occ_hat):\n",
+ " n_x, n_y, n_z = occ_hat.shape\n",
+ " box_size = 1 + padding\n",
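+ " # Convert the probability threshold threshold_g into logit space\n",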
+ " threshold = np.log( threshold_g) - np.log(1. - threshold_g)\n",
+ " \n",
+ " occ_hat_padded = np.pad(occ_hat, 1, 'constant', constant_values=-1e6)\n",
+ " \n",
+ " vertices, triangles = marching_cubes(occ_hat_padded, threshold)\n",
+ " print(triangles)\n",
+ " \n",
+ " vertices -= 0.5\n",
+ " # Undo padding\n",
+ " vertices -= 1\n",
+ " # Normalize to bounding box\n",
+ " vertices /= np.array([n_x-1, n_y-1, n_z-1])\n",
+ " vertices = box_size * (vertices - 0.5)\n",
+ " \n",
+ " normals = None\n",
+ "\n",
+ " # Create mesh\n",
+ " mesh = trimesh.Trimesh(vertices, triangles, vertex_normals=normals,process=False)\n",
+ "\n",
+ " return mesh"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 94,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def get_mesh(occ, points,threshold = 0.5,padding=0.1,resolution0=16,upsampling_steps=3):\n",
+ " threshold = np.log(threshold_g) - np.log(1. - threshold_g)\n",
+ " \n",
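+ " # A 32x32x32 grid gives 32768 query points, matching the 32768 occupancy samples used below\n",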
+ " nx = 32\n",
+ " pointsf = 2 * make_3d_grid((-0.5,)*3, (0.5,)*3, (nx,)*3 )\n",
+ " \n",
+ " value_grid = occ.reshape(nx, nx, nx)\n",
+ " \n",
+ " mesh = extract_mesh(value_grid)\n",
+ "\n",
+ " return mesh"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 95,
+ "metadata": {
+ "tags": []
+ },
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[]\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ "'OFF\\n0 0 0\\n\\n'"
+ ]
+ },
+ "execution_count": 95,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "occ_file = '../sample_data/points/occupancies.npy'\n",
+ "points_file = '../sample_data/points/points.npy'\n",
+ "\n",
+ "points = np.load(points_file)\n",
+ "\n",
+ "occ = np.load(occ_file)\n",
+ "occ = np.unpackbits(occ)\n",
+ "\n",
+ "idx = np.random.choice(np.arange(100000), 32768, replace=False)\n",
+ "occ_sample = occ[idx]\n",
+ "points_sample = points[idx]\n",
+ "\n",
+ "\n",
+ "\n",
+ "mesh = get_mesh(occ_sample,points_sample)\n",
+ "\n",
+ "\n",
+ "mesh_out_file = os.path.join('./', '%s.off' % 'onet')\n",
+ "mesh.export(mesh_out_file)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.7.6"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/notebooks/metrics.ipynb b/notebooks/metrics.ipynb
new file mode 100755
index 0000000..a6c557b
--- /dev/null
+++ b/notebooks/metrics.ipynb
@@ -0,0 +1,259 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pykdtree.kdtree import KDTree\n",
+ "import numpy as np"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
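+ " # Both sets of paths point to the same object, so the metrics below serve as a sanity check (perfect scores expected)\n",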
+ "pc_path1 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/pointcloud.npz'\n",
+ "pc_path2 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/pointcloud.npz'\n",
+ "p_path1 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/points.npz'\n",
+ "p_path2 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/points.npz'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pc_data1 = np.load(pc_path1)\n",
+ "pc_data2 = np.load(pc_path2)\n",
+ "p_data1 = np.load(p_path1)\n",
+ "p_data2 = np.load(p_path2)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pointcloud = pc_data1['points']\n",
+ "pointcloud_gt = pc_data2['points']\n",
+ "normals = pc_data1['normals']\n",
+ "normals_gt = pc_data2['normals']\n",
+ "occ_1 = p_data1['occupancies']\n",
+ "occ_2 = p_data2['occupancies']"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compute_iou(occ1, occ2):\n",
+ " ''' Computes the Intersection over Union (IoU) value for two sets of\n",
+ " occupancy values.\n",
+ " Args:\n",
+ " occ1 (tensor): first set of occupancy values\n",
+ " occ2 (tensor): second set of occupancy values\n",
+ " '''\n",
+ " occ1 = np.asarray(occ1)\n",
+ " occ2 = np.asarray(occ2)\n",
+ "\n",
+ " # Put all data in second dimension\n",
+ " # Also works for 1-dimensional data\n",
+ " if occ1.ndim >= 2:\n",
+ " occ1 = occ1.reshape(occ1.shape[0], -1)\n",
+ " if occ2.ndim >= 2:\n",
+ " occ2 = occ2.reshape(occ2.shape[0], -1)\n",
+ "\n",
+ " # Convert to boolean values\n",
+ " occ1 = (occ1 >= 0.5)\n",
+ " occ2 = (occ2 >= 0.5)\n",
+ "\n",
+ " # Compute IOU\n",
+ " area_union = (occ1 | occ2).astype(np.float32).sum(axis=-1)\n",
+ " area_intersect = (occ1 & occ2).astype(np.float32).sum(axis=-1)\n",
+ "\n",
+ " iou = (area_intersect / area_union)\n",
+ "\n",
+ " return iou"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "empty_point_dict = {\n",
+ " 'completeness': np.sqrt(3),\n",
+ " 'accuracy': np.sqrt(3),\n",
+ " 'completeness2': 3,\n",
+ " 'accuracy2': 3,\n",
+ " 'chamfer': 6,\n",
+ "}\n",
+ "\n",
+ "empty_normal_dict = {\n",
+ " 'normals completeness': -1.,\n",
+ " 'normals accuracy': -1.,\n",
+ " 'normals': -1.,\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compute_separation(points_src, normals_src, points_tgt, normals_tgt):\n",
+ " ''' Computes minimal distances of each point in points_src to points_tgt.\n",
+ " Args:\n",
+ " points_src (numpy array): source points\n",
+ " normals_src (numpy array): source normals\n",
+ " points_tgt (numpy array): target points\n",
+ " normals_tgt (numpy array): target normals\n",
+ " '''\n",
+ " kdtree = KDTree(points_tgt)\n",
+ " sepr, ind = kdtree.query(points_src)\n",
+ "\n",
+ " if normals_src is not None and normals_tgt is not None:\n",
+ " normals_src = normals_src / np.linalg.norm(normals_src, axis=-1, keepdims=True)\n",
+ " normals_tgt = normals_tgt / np.linalg.norm(normals_tgt, axis=-1, keepdims=True)\n",
+ "\n",
+ " normals_dot_product = (normals_tgt[ind] * normals_src).sum(axis=-1)\n",
+ " normals_dot_product = np.abs(normals_dot_product)\n",
+ " else:\n",
+ " normals_dot_product = np.array(\n",
+ " [np.nan] * points_src.shape[0], dtype=np.float32)\n",
+ " return sepr, normals_dot_product"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def eval_pointcloud(pointcloud, pointcloud_gt,\n",
+ " normals, normals_gt, occ1, occ2):\n",
+ " ''' \n",
+ " Evaluates a point cloud.\n",
+ " Args:\n",
+ " pointcloud (numpy array): predicted point cloud\n",
+ " pointcloud_gt (numpy array): ground truth point cloud\n",
+ " normals (numpy array): predicted normals\n",
+ " normals_gt (numpy array): ground truth normals\n",
+ " '''\n",
+ " # Return maximum losses if pointcloud is empty\n",
+ " if pointcloud.shape[0] == 0:\n",
+ " print('Empty pointcloud / mesh detected!')\n",
+ " # Use a copy so the shared defaults dict is not mutated across calls\n",
+ " out_dict = empty_point_dict.copy()\n",
+ " if normals is not None and normals_gt is not None:\n",
+ " out_dict.update(empty_normal_dict)\n",
+ " return out_dict\n",
+ "\n",
+ " pointcloud = np.asarray(pointcloud)\n",
+ " pointcloud_gt = np.asarray(pointcloud_gt)\n",
+ "\n",
+ " # Completeness: how far are the points of the groundtruth point cloud\n",
+ " # from the predicted point cloud\n",
+ " completeness, normal_completeness = compute_separation(\n",
+ " pointcloud_gt, normals_gt, pointcloud, normals\n",
+ " )\n",
+ " completeness_sq = completeness**2\n",
+ "\n",
+ " completeness = completeness.mean()\n",
+ " completeness_sq = completeness_sq.mean()\n",
+ " normal_completeness = normal_completeness.mean()\n",
+ "\n",
+ " # Accuracy: how far are the points of the predicted pointcloud\n",
+ " # from the groundtruth pointcloud\n",
+ " accuracy, normal_accuracy = compute_separation(\n",
+ " pointcloud, normals, pointcloud_gt, normals_gt\n",
+ " )\n",
+ " accuracy_sq = accuracy**2\n",
+ "\n",
+ " accuracy = accuracy.mean()\n",
+ " accuracy_sq = accuracy_sq.mean()\n",
+ " normal_accuracy = normal_accuracy.mean()\n",
+ "\n",
+ " # Chamfer distance\n",
+ " chamferL2 = 0.5 * (completeness_sq + accuracy_sq)\n",
+ " normals_correction = (\n",
+ " 0.5 * normal_completeness + 0.5 * normal_accuracy\n",
+ " )\n",
+ " chamferL1 = 0.5 * (completeness + accuracy)\n",
+ " \n",
+ " occupancy_iou = compute_iou(occ1, occ2)\n",
+ "\n",
+ " out_dict = {\n",
+ " 'completeness': completeness,\n",
+ " 'accuracy': accuracy,\n",
+ " 'normals completeness': normal_completeness,\n",
+ " 'normals accuracy': normal_accuracy,\n",
+ " 'normals': normals_correction,\n",
+ " 'completeness_sq': completeness_sq,\n",
+ " 'accuracy_sq': accuracy_sq,\n",
+ " 'chamfer-L2': chamferL2,\n",
+ " 'chamfer-L1': chamferL1,\n",
+ " 'iou': occupancy_iou\n",
+ " }\n",
+ "\n",
+ " return out_dict"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'completeness': 0.0, 'accuracy': 0.0, 'normals completeness': 1.0, 'normals accuracy': 1.0, 'normals': 1.0, 'completeness_sq': 0.0, 'accuracy_sq': 0.0, 'chamfer-L2': 0.0, 'chamfer-L1': 0.0, 'iou': 1.0}\n"
+ ]
+ }
+ ],
+ "source": [
+ "eval_dict = eval_pointcloud(pointcloud, pointcloud_gt, normals, normals_gt, occ_1, occ_2)\n",
+ "print(eval_dict)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000..2567703
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,13 @@
+numpy
+scipy
+matplotlib
+scikit-learn
+scikit-image
+pandas
+h5py
+opencv-python
+tqdm
+Pillow
+torch
+torchvision
+efficientnet_pytorch
\ No newline at end of file
diff --git a/resources/CV Project.pdf b/resources/CV Project.pdf
new file mode 100644
index 0000000..13bbad8
Binary files /dev/null and b/resources/CV Project.pdf differ
diff --git a/resources/mid_term.pdf b/resources/mid_term.pdf
new file mode 100644
index 0000000..fd46ade
Binary files /dev/null and b/resources/mid_term.pdf differ
diff --git a/resources/proposal.pdf b/resources/proposal.pdf
new file mode 100644
index 0000000..8634a4b
Binary files /dev/null and b/resources/proposal.pdf differ
diff --git a/run_evals.py b/run_evals.py
new file mode 100644
index 0000000..d1c8493
--- /dev/null
+++ b/run_evals.py
@@ -0,0 +1,99 @@
+import sys
+sys.path.append("/home2/sdokania/all_projects/project-noisypixel/")
+
+import os
+import glob
+import cv2
+import random
+import pandas as pd
+from skimage import io
+import numpy as np
+from PIL import Image
+from torch.utils.data import Dataset, DataLoader
+from torchvision import transforms, utils
+import h5py
+
+# Network building stuff
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+import pytorch_lightning as pl
+from pytorch_lightning.loggers import TensorBoardLogger
+import torchmetrics
+import torch.distributions as dist
+
+
+#mesh
+from src.utils.libmise.mise import MISE
+from src.utils.libmcubes.mcubes import marching_cubes
+import trimesh
+from src.evaluate import *
+
+from src.models import *
+from src.dataset.dataloader import OccupancyNetDatasetHDF
+from src.trainer import ONetLit
+from src.utils import Config, count_parameters
+import datetime
+import tqdm
+import torch.distributions as dist
+import pandas as pd
+import argparse
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description="Argument parser for training the model")
+ default_ckpt = "../occ_artifacts/efficient_cbn_bs_64_full_data/lightning_logs/version_1/checkpoints/epoch=131-step=63359.ckpt"
+ parser.add_argument('--cdim', action='store', type=int, default=128, help="feature dimension")
+ parser.add_argument('--hdim', action='store', type=int, default=128, help="hidden size for decoder")
+ parser.add_argument('--pdim', action='store', type=int, default=3, help="points input size for decoder")
+ parser.add_argument('--data_root', action='store', type=str, default="/ssd_scratch/cvit/sdokania/processed_data/hdf_data/", help="location of the parsed and processed dataset")
+ parser.add_argument('--batch_size', action='store', type=int, default=64, help="Training batch size")
+ parser.add_argument('--output_path', action='store', type=str, default="/home2/sdokania/all_projects/occ_artifacts/", help="Model saving and checkpoint paths")
+ parser.add_argument('--exp_name', action='store', type=str, default="initial", help="Name of the experiment. Artifacts will be created with this name")
+ parser.add_argument('--encoder', action='store', type=str, default="efficientnet-b0", help="Name of the Encoder architecture to use")
+ parser.add_argument('--decoder', action='store', type=str, default="decoder-cbn", help="Name of the decoder architecture to use")
+ parser.add_argument('--checkpoint', action='store', type=str, default=default_ckpt, help="Checkpoint Path")
+
+ args = parser.parse_args()
+ # Get the model configuration
+ config = Config(args)
+
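+ # Build the model, restore the trained weights from the checkpoint, and load the test split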
+ onet = ONetLit(config)
+ net = ONetLit.load_from_checkpoint(args.checkpoint, cfg=config).eval()
+ dataset = OccupancyNetDatasetHDF(config.data_root, num_points=2048, mode="test", point_cloud=True)
+
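+ # Worst-case metric values reported when a predicted mesh / point cloud is empty (sqrt(3) is the unit-cube diagonal)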
+ empty_point_dict = {
+ 'completeness': np.sqrt(3),
+ 'accuracy': np.sqrt(3),
+ 'completeness2': 3,
+ 'accuracy2': 3,
+ 'chamfer': 6,
+ }
+
+ empty_normal_dict = {
+ 'normals completeness': -1.,
+ 'normals accuracy': -1.,
+ 'normals': -1.,
+ }
+
+ DEVICE="cuda:0"
+ nux = 0
+ start = datetime.datetime.now()
+ result = []
+
+ shuffled_idx = np.random.permutation(np.arange(len(dataset)))[:500]
+
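+ # Evaluate a random subset of 500 test samples: predict occupancies, extract a mesh, and compare it against the ground-truth point cloud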
+ for ix in tqdm.tqdm(shuffled_idx):
+ try:
+ test_img, test_pts, test_gt, pcl_gt, norm_gt = dataset[ix][:]
+ net.to(DEVICE)
+ pred_pts = net(test_img.unsqueeze(0).to(DEVICE), test_pts.unsqueeze(0).to(DEVICE)).cpu()
+ mesh, mesh_data, normals = get_mesh(net, (test_img.to(DEVICE), test_pts, test_gt), threshold_g=0.5, return_points=True)
+ pred_occ = dist.Bernoulli(logits=pred_pts).probs.data.numpy().squeeze()
+ result.append(eval_pointcloud(mesh_data[0], pcl_gt, normals, norm_gt, pred_occ, test_gt))
+ except Exception:
+ # Skip samples where mesh extraction or evaluation fails
+ pass
+ print(datetime.datetime.now() - start)
+ df = pd.DataFrame(result)
+ print(df.mean())
\ No newline at end of file
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000..d772f7a
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,89 @@
+try:
+ from setuptools import setup
+except ImportError:
+ from distutils.core import setup
+from distutils.extension import Extension
+from Cython.Build import cythonize
+from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension
+import numpy
+
+
+# Get the numpy include directory.
+numpy_include_dir = numpy.get_include()
+
+# Extensions
+# pykdtree (kd tree)
+pykdtree = Extension(
+ 'src.utils.libkdtree.pykdtree.kdtree',
+ sources=[
+ 'src/utils/libkdtree/pykdtree/kdtree.c',
+ 'src/utils/libkdtree/pykdtree/_kdtree_core.c'
+ ],
+ language='c',
+ extra_compile_args=['-std=c99', '-O3', '-fopenmp'],
+ extra_link_args=['-lgomp'],
+)
+
+# mcubes (marching cubes algorithm)
+mcubes_module = Extension(
+ 'src.utils.libmcubes.mcubes',
+ sources=[
+ 'src/utils/libmcubes/mcubes.pyx',
+ 'src/utils/libmcubes/pywrapper.cpp',
+ 'src/utils/libmcubes/marchingcubes.cpp'
+ ],
+ language='c++',
+ extra_compile_args=['-std=c++11'],
+ include_dirs=[numpy_include_dir]
+)
+
+# triangle hash (efficient mesh intersection)
+triangle_hash_module = Extension(
+ 'src.utils.libmesh.triangle_hash',
+ sources=[
+ 'src/utils/libmesh/triangle_hash.pyx'
+ ],
+ libraries=['m'] # Unix-like specific
+)
+
+# mise (efficient mesh extraction)
+mise_module = Extension(
+ 'src.utils.libmise.mise',
+ sources=[
+ 'src/utils/libmise/mise.pyx'
+ ],
+)
+
+# simplify (efficient mesh simplification)
+simplify_mesh_module = Extension(
+ 'src.utils.libsimplify.simplify_mesh',
+ sources=[
+ 'src/utils/libsimplify/simplify_mesh.pyx'
+ ]
+)
+
+# voxelization (efficient mesh voxelization)
+voxelize_module = Extension(
+ 'src.utils.libvoxelize.voxelize',
+ sources=[
+ 'src/utils/libvoxelize/voxelize.pyx'
+ ],
+ libraries=['m'] # Unix-like specific
+)
+
+# Gather all extension modules
+ext_modules = [
+ pykdtree,
+ mcubes_module,
+ triangle_hash_module,
+ mise_module,
+ simplify_mesh_module,
+ voxelize_module,
+]
+
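+# Build all extensions in place with: python3 setup.py build_ext --inplace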
+setup(
+ ext_modules=cythonize(ext_modules),
+ cmdclass={
+ 'build_ext': BuildExtension
+ }
+)
diff --git a/src/.ipynb_checkpoints/metrics-checkpoint.ipynb b/src/.ipynb_checkpoints/metrics-checkpoint.ipynb
new file mode 100644
index 0000000..72acdc2
--- /dev/null
+++ b/src/.ipynb_checkpoints/metrics-checkpoint.ipynb
@@ -0,0 +1,258 @@
+{
+ "cells": [
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from pykdtree.kdtree import KDTree\n",
+ "import numpy as np"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pc_path1 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/pointcloud.npz'\n",
+ "pc_path2 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/pointcloud.npz'\n",
+ "p_path1 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/points.npz'\n",
+ "p_path2 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/points.npz'"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pc_data1 = np.load(pc_path1)\n",
+ "pc_data2 = np.load(pc_path2)\n",
+ "p_data1 = np.load(p_path1)\n",
+ "p_data2 = np.load(p_path2)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "pointcloud = pc_data1['points']\n",
+ "pointcloud_gt = pc_data2['points']\n",
+ "normals = pc_data1['normals']\n",
+ "normals_gt = pc_data2['normals']\n",
+ "occ_1 = p_data1['occupancies']\n",
+ "occ_2 = p_data2['occupancies']"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compute_iou(occ1, occ2):\n",
+ " ''' Computes the Intersection over Union (IoU) value for two sets of\n",
+ " occupancy values.\n",
+ " Args:\n",
+ " occ1 (tensor): first set of occupancy values\n",
+ " occ2 (tensor): second set of occupancy values\n",
+ " '''\n",
+ " occ1 = np.asarray(occ1)\n",
+ " occ2 = np.asarray(occ2)\n",
+ "\n",
+ " # Put all data in second dimension\n",
+ " # Also works for 1-dimensional data\n",
+ " if occ1.ndim >= 2:\n",
+ " occ1 = occ1.reshape(occ1.shape[0], -1)\n",
+ " if occ2.ndim >= 2:\n",
+ " occ2 = occ2.reshape(occ2.shape[0], -1)\n",
+ "\n",
+ " # Convert to boolean values\n",
+ " occ1 = (occ1 >= 0.5)\n",
+ " occ2 = (occ2 >= 0.5)\n",
+ "\n",
+ " # Compute IOU\n",
+ " area_union = (occ1 | occ2).astype(np.float32).sum(axis=-1)\n",
+ " area_intersect = (occ1 & occ2).astype(np.float32).sum(axis=-1)\n",
+ "\n",
+ " iou = (area_intersect / area_union)\n",
+ "\n",
+ " return iou"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "empty_point_dict = {\n",
+ " 'completeness': np.sqrt(3),\n",
+ " 'accuracy': np.sqrt(3),\n",
+ " 'completeness2': 3,\n",
+ " 'accuracy2': 3,\n",
+ " 'chamfer': 6,\n",
+ "}\n",
+ "\n",
+ "empty_normal_dict = {\n",
+ " 'normals completeness': -1.,\n",
+ " 'normals accuracy': -1.,\n",
+ " 'normals': -1.,\n",
+ "}"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def compute_separation(points_src, normals_src, points_tgt, normals_tgt):\n",
+ " ''' Computes minimal distances of each point in points_src to points_tgt.\n",
+ " Args:\n",
+ " points_src (numpy array): source points\n",
+ " normals_src (numpy array): source normals\n",
+ " points_tgt (numpy array): target points\n",
+ " normals_tgt (numpy array): target normals\n",
+ " '''\n",
+ " kdtree = KDTree(points_tgt)\n",
+ " sepr, ind = kdtree.query(points_src)\n",
+ "\n",
+ " if normals_src is not None and normals_tgt is not None:\n",
+ " normals_src = normals_src / np.linalg.norm(normals_src, axis=-1, keepdims=True)\n",
+ " normals_tgt = normals_tgt / np.linalg.norm(normals_tgt, axis=-1, keepdims=True)\n",
+ "\n",
+ " normals_dot_product = (normals_tgt[ind] * normals_src).sum(axis=-1)\n",
+ " normals_dot_product = np.abs(normals_dot_product)\n",
+ " else:\n",
+ " normals_dot_product = np.array(\n",
+ " [np.nan] * points_src.shape[0], dtype=np.float32)\n",
+ " return sepr, normals_dot_product"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "def eval_pointcloud(pointcloud, pointcloud_gt,\n",
+ " normals, normals_gt, occ1, occ2):\n",
+ " ''' \n",
+ " Evaluates a point cloud.\n",
+ " Args:\n",
+ " pointcloud (numpy array): predicted point cloud\n",
+ " pointcloud_gt (numpy array): ground truth point cloud\n",
+ " normals (numpy array): predicted normals\n",
+ " normals_gt (numpy array): ground truth normals\n",
+ " '''\n",
+ " # Return maximum losses if pointcloud is empty\n",
+ " if pointcloud.shape[0] == 0:\n",
+ " print('Empty pointcloud / mesh detected!')\n",
+ " out_dict = empty_point_dict\n",
+ " if normals is not None and normals_tgt is not None:\n",
+ " out_dict.update(empty_normal_dict)\n",
+ " return out_dict\n",
+ "\n",
+ " pointcloud = np.asarray(pointcloud)\n",
+ " pointcloud_gt = np.asarray(pointcloud_gt)\n",
+ "\n",
+ " # Completeness: how far are the points of the groundtruth point cloud\n",
+ " # from the predicted point cloud\n",
+ " completeness, normal_completeness = compute_separation(\n",
+ " pointcloud_gt, normals_gt, pointcloud, normals\n",
+ " )\n",
+ " completeness_sq = completeness**2\n",
+ "\n",
+ " completeness = completeness.mean()\n",
+ " completeness_sq = completeness_sq.mean()\n",
+ " normal_completeness = normal_completeness.mean()\n",
+ "\n",
+ " # Accuracy: how far are the points of the predicted pointcloud\n",
+ " # from the groundtruth pointcloud\n",
+ " accuracy, normal_accuracy = compute_separation(\n",
+ " pointcloud, normals, pointcloud_gt, normals_gt\n",
+ " )\n",
+ " accuracy_sq = accuracy**2\n",
+ "\n",
+ " accuracy = accuracy.mean()\n",
+ " accuracy_sq = accuracy_sq.mean()\n",
+ " normal_accuracy = normal_accuracy.mean()\n",
+ "\n",
+ " # Chamfer distance\n",
+ " chamferL2 = 0.5 * (completeness_sq + accuracy_sq)\n",
+ " normals_correction = (\n",
+ " 0.5 * normal_completeness + 0.5 * normal_accuracy\n",
+ " )\n",
+ " chamferL1 = 0.5 * (completeness + accuracy)\n",
+ " \n",
+ " occupancy_iou = compute_iou(occ1, occ2)\n",
+ "\n",
+ " out_dict = {\n",
+ " 'completeness': completeness,\n",
+ " 'accuracy': accuracy,\n",
+ " 'normals completeness': normal_completeness,\n",
+ " 'normals accuracy': normal_accuracy,\n",
+ " 'normals': normals_correction,\n",
+ " 'completeness_sq': completeness_sq,\n",
+ " 'accuracy_sq': accuracy_sq,\n",
+ " 'chamfer-L2': chamferL2,compute_iou(occ1, occ2)\n",
+ " 'chamfer-L1': chamferL1,\n",
+ " 'iou': occupancy_iou\n",
+ " }\n",
+ "\n",
+ " return out_dict"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 17,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "{'completeness': 0.0, 'accuracy': 0.0, 'normals completeness': 1.0, 'normals accuracy': 1.0, 'normals': 1.0, 'completeness_sq': 0.0, 'accuracy_sq': 0.0, 'chamfer-L2': 0.0, 'chamfer-L1': 0.0, 'iou': 1.0}\n"
+ ]
+ }
+ ],
+ "source": [
+ "eval_dict = eval_pointcloud(pointcloud, pointcloud_gt, normals, normals_gt, occ_1, occ_2)\n",
+ "print(eval_dict)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.3"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 4
+}
diff --git a/src/__init__.py b/src/__init__.py
new file mode 100755
index 0000000..e69de29
diff --git a/src/dataset/__init__.py b/src/dataset/__init__.py
new file mode 100755
index 0000000..e69de29
diff --git a/src/dataset/data_process.py b/src/dataset/data_process.py
new file mode 100755
index 0000000..bb72527
--- /dev/null
+++ b/src/dataset/data_process.py
@@ -0,0 +1,149 @@
+import numpy as np
+import pandas as pd
+import h5py
+import os
+import skimage.io as sio
+import tqdm
+import argparse
+import pickle as pkl
+
+def save_dict_to_hdf5(dic, filename):
+ """
+    Save a (possibly nested) dictionary of numpy arrays/scalars to an HDF5 file.
+ """
+ if os.path.exists(filename):
+ return
+ with h5py.File(filename, 'w') as h5file:
+ recursively_save_dict_contents_to_group(h5file, '/', dic)
+
+def recursively_save_dict_contents_to_group(h5file, path, dic):
+ """
+    Recursively write the contents of a dictionary into an HDF5 group.
+ """
+ for key, item in dic.items():
+ if isinstance(item, (np.ndarray, np.int64, np.float64, str, bytes)):
+ h5file[path + key] = item
+ elif isinstance(item, dict):
+ recursively_save_dict_contents_to_group(h5file, path + key + '/', item)
+ else:
+ raise ValueError('Cannot save %s type'%type(item))
+
+def load_dict_from_hdf5(filename):
+ """
+    Load an HDF5 file back into a nested Python dictionary.
+ """
+ with h5py.File(filename, 'r') as h5file:
+ return recursively_load_dict_contents_from_group(h5file, '/')
+
+def recursively_load_dict_contents_from_group(h5file, path):
+ """
+    Recursively read an HDF5 group back into a nested dictionary.
+ """
+ ans = {}
+ for key, item in h5file[path].items():
+ if isinstance(item, h5py._hl.dataset.Dataset):
+            # Dataset.value was removed in h5py 3.x; index with [()] instead
+            ans[key] = item[()]
+ elif isinstance(item, h5py._hl.group.Group):
+ ans[key] = recursively_load_dict_contents_from_group(h5file, path + key + '/')
+ return ans
+
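+# Illustrative round-trip sketch (not part of the original script) for the HDF5
+# helpers above; the '/tmp/example.h5' path is only a placeholder.
+#
+#   sample = {
+#       'images': np.zeros((2, 137, 137, 3), dtype=np.uint8),
+#       'points': {'points': np.random.rand(16, 3), 'scale': np.float64(1.0)},
+#   }
+#   save_dict_to_hdf5(sample, '/tmp/example.h5')
+#   restored = load_dict_from_hdf5('/tmp/example.h5')
+#   assert set(restored.keys()) == {'images', 'points'}
+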
+def load_data(path):
+ # Load the pointcloud.npz and points.npz file
+ pc_file = np.load(os.path.join(path, "pointcloud.npz"))
+ points_file = np.load(os.path.join(path, "points.npz"))
+
+ # create image placeholder and camera data placeholder
+ img_data = []
+ cam_data = None
+
+ # Load images
+ for imx in os.listdir(os.path.join(path, "img_choy2016")):
+ current = os.path.join(path, "img_choy2016", imx)
+ if 'npz' in imx:
+ cam_data = np.load(current)
+ else:
+ img_current = sio.imread(current)
+ if img_current.ndim == 2:
+ img_current = np.stack([img_current, img_current, img_current], axis=-1)
+ img_data.append(img_current)
+ img_data = np.asarray(img_data)
+
+ all_data = {
+ 'images': img_data,
+ 'camera': dict(cam_data),
+ 'points': dict(points_file),
+ 'pointcloud': dict(pc_file)
+ }
+
+ return all_data
+
+def main(args):
+ data_root = args.dataroot
+ dataset_dir = args.output
+
+ # Create the output folder
+ os.makedirs(os.path.join(dataset_dir, "hdf_data"), exist_ok=True)
+ save_path = os.path.join(dataset_dir, "hdf_data")
+
+ file_lists = {
+ 'train.lst': [],
+ 'test.lst': [],
+ 'val.lst': []
+ }
+
+ # iterate over each class in the dataset
+ for cid in os.listdir(data_root):
+ # Get the path to each object and list of objects
+ objs_path = os.path.join(data_root, cid)
+ if "metadata" in cid.lower():
+ continue
+ obj_list = os.listdir(objs_path)
+
+ # iterate over each object in the dataset class
+ for obx in tqdm.tqdm(obj_list):
+ current_path = os.path.join(objs_path, obx)
+ new_filename = "{}_{}.h5".format(cid, obx)
+
+ try:
+ if os.path.exists(os.path.join(save_path, new_filename)):
+ continue
+
+                # If possible, load the object and its properties
+ data_current = load_data(current_path)
+
+ # Save the output into the HDF5 file at the output location
+ save_dict_to_hdf5(data_current, os.path.join(save_path, new_filename))
+ except:
+            # load_data fails for the split files (train/test/val .lst); handle them here
+ if obx.lower() in ["train.lst", "test.lst", "val.lst"]:
+ # read each file
+ f = open(current_path, 'r')
+ flist = ["{}_{}.h5".format(cid, yx) for yx in f.read().split()]
+ f.close()
+
+ # Append to file lists
+ file_lists[obx] += flist
+ else:
+ print("Error at {}-{}".format(cid, obx))
+
+ # Now save the file lists as well
+ for kx in file_lists.keys():
+ # Get each file list and save
+ print("Processing list for {}".format(kx))
+ flist = "\n".join(file_lists[kx])
+ f = open(os.path.join(save_path, kx), 'w')
+ f.write(flist)
+ f.close()
+ print("Saved data with train-test-val splits...")
+
+
+if __name__ == "__main__":
+ # Create the argument parser and parse the script parameters
+ parser = argparse.ArgumentParser(description='Process dataset to create HDF5 data file for each object')
+ parser.add_argument('--dataroot', action='store', type=str, help="dataset path for the preprocessed shapenet files")
+ parser.add_argument('--output', action='store', type=str, help="output data folder to save the dataset")
+
+ args = parser.parse_args()
+
+ # Run the main function
+ main(args)
\ No newline at end of file
diff --git a/src/dataset/dataloader.py b/src/dataset/dataloader.py
new file mode 100755
index 0000000..aba2018
--- /dev/null
+++ b/src/dataset/dataloader.py
@@ -0,0 +1,172 @@
+import os
+import glob
+import torch
+import h5py
+import cv2
+import random
+import pandas as pd
+from skimage import io
+import numpy as np
+from PIL import Image
+from torch.utils.data import Dataset, DataLoader
+from torchvision import transforms, utils
+
+
+
+class OccupancyNetDataset(Dataset):
+ """Occupancy Network dataset."""
+
+ def __init__(self, root_dir, transform=None, num_points=1024):
+ """
+ Args:
+ root_dir (string): Directory with all the images.
+            transform (callable, optional): Optional transform to be applied
+                on a sample.
+            num_points (int): Number of points to sample from the object point cloud.
+ """
+ self.root_dir = root_dir
+ self.transform = transform
+ self.num_points = num_points
+ self.files = []
+
+ for sub in glob.glob(self.root_dir+'/*'):
+ self.files.extend(glob.glob(sub+'/*'))
+
+ def __len__(self):
+ return len(self.files)
+
+ def __getitem__(self, idx):
+ # Fetch the file path and setup image folder paths
+ req_path = self.files[idx]
+ img_folder = os.path.join(req_path, 'img_choy2016')
+
+ img_path = random.choice(glob.glob(img_folder + '/*.jpg'))
+
+ # Load the image with opencv and convert to RGB
+ image = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
+
+ # Load the points data
+ points_path = os.path.join(req_path, 'points.npz')
+ data = np.load(points_path)
+
+ # Get the actual point of the object
+ points = data['points']
+ # Unpack the occupancies of the object
+ occupancies = np.unpackbits(data['occupancies'])
+
+ # Sample n points from the data
+ selected_idx = np.random.permutation(np.arange(points.shape[0]))[:self.num_points]
+
+        # Use only the selected indices and pack everything up as a list
+        # (a list, so the transform below can replace the image in place)
+        sample = [
+            torch.from_numpy(image).float().transpose(1, 2).transpose(0, 1),
+            torch.from_numpy(points[selected_idx]),
+            torch.from_numpy(occupancies[selected_idx])]
+
+ # Apply any transformation necessary
+ if self.transform:
+ sample[0] = self.transform(sample[0])
+
+ return sample
+
+
+class OccupancyNetDatasetHDF(Dataset):
+ """Occupancy Network dataset."""
+
+ def __init__(self, root_dir, transform=None, num_points=1024, default_transform=True, mode="train", balance=False, point_cloud=False):
+ """
+ Args:
+            root_dir (string): Directory with the processed HDF5 files and the split .lst files.
+            transform (callable, optional): Optional transform to be applied
+                on a sample.
+            num_points (int): Number of points to sample from the object point cloud.
+            default_transform (bool): If True and no transform is given, apply the
+                default ImageNet normalization.
+            mode (str): Which data split to use among train, test and val.
+            balance (bool): Whether to balance positive and negative occupancy samples.
+            point_cloud (bool): Whether to also return the surface point cloud and normals.
+ """
+ self.root_dir = root_dir
+ self.transform = transform
+ self.num_points = num_points
+ self.mode = mode
+ self.files = []
+ self.pos_neg_ratio = [0.1, 0.35]
+ self.balance = balance
+ self.point_cloud = point_cloud
+
+ # Save the files
+ f = open(os.path.join(self.root_dir, "{}.lst".format(self.mode)), 'r')
+ self.files = f.read().split()
+ f.close()
+
+        # If no transform has been provided, apply the default ImageNet normalization
+ if transform is None and default_transform:
+ self.transform = transforms.Normalize(mean=[0.485, 0.456, 0.406],
+ std=[0.229, 0.224, 0.225])
+
+ def __len__(self):
+ return len(self.files)
+
+ def get_prob(self):
+ return self.pos_neg_ratio[0] + (np.random.random() * (self.pos_neg_ratio[1] - self.pos_neg_ratio[0]))
+
+ def __getitem__(self, idx):
+ # Fetch the file path and setup image folder paths
+ req_path = self.files[idx]
+ file_path = os.path.join(self.root_dir, req_path)
+
+ # Load the h5 file
+ # print(file_path)
+ hf = h5py.File(file_path, 'r')
+
+ # [NOTE]: the notation [()] below is to extract the value from HDF5 file
+ # get all images and randomly pick one
+ all_imgs = hf['images'][()]
+ random_idx = int(np.random.random()*all_imgs.shape[0])
+
+ # Fetch the image we need
+ image = all_imgs[random_idx]
+ try:
+ # Get the points and occupancies
+ points = hf['points']['points'][()]
+ occupancies = np.unpackbits(hf['points']['occupancies'][()])
+
+ if self.point_cloud:
+ pc = hf.get('pointcloud').get('points')[()]
+ normal = hf.get('pointcloud').get('normals')[()]
+
+ # Sample n points from the data
+ if self.balance:
+ # Create index list
+ indices = np.arange(occupancies.shape[0])
+ n_pos = min(int(self.num_points * self.get_prob()), (occupancies == 1).sum())
+ n_neg = self.num_points - n_pos
+ positive_idx = np.random.permutation(indices[occupancies == 1])[:n_pos]
+ negative_idx = np.random.permutation(indices[occupancies == 0])[:n_neg]
+ selected_idx = np.concatenate([positive_idx, negative_idx])
+
+ else:
+ selected_idx = np.random.permutation(np.arange(points.shape[0]))[:self.num_points]
+
+
+ # Use only the selected indices and pack everything up in a nice dictionary
+ final_image = torch.from_numpy(image).float().transpose(1, 2).transpose(0, 1) / image.max()
+ final_points = torch.from_numpy(points[selected_idx]).float()
+ final_gt = torch.from_numpy(occupancies[selected_idx]).float()
+        except Exception:
+            # Log the failing index/file and re-raise so the failure is not hidden
+            print(idx, file_path)
+            raise
+
+ # Close the hdf file
+ hf.close()
+
+ # Apply any transformation necessary
+ if self.transform:
+ final_image = self.transform(final_image)
+
+ if self.point_cloud:
+ return [final_image, final_points, final_gt, pc, normal]
+ return [final_image, final_points, final_gt]
+
+
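+# Illustrative usage sketch (not part of the original code) for the HDF5-backed
+# dataset; the root path is a placeholder and must contain the <mode>.lst split
+# files and the per-object .h5 files produced by dataset/data_process.py.
+#
+#   train_set = OccupancyNetDatasetHDF('/path/to/hdf_data', mode='train',
+#                                      num_points=1024, balance=True)
+#   loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)
+#   images, points, occupancies = next(iter(loader))
+#   # images: (32, 3, H, W), points: (32, 1024, 3), occupancies: (32, 1024)
+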
+if __name__ == '__main__':
+ dataset = OccupancyNetDataset( root_dir='/home/saiamrit/Documents/CV Project/data/subset/ShapeNet')
+ print(len(dataset))
+ dataloader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=0)
+ print(len(dataloader))
diff --git a/src/evaluate.py b/src/evaluate.py
new file mode 100644
index 0000000..d89ed95
--- /dev/null
+++ b/src/evaluate.py
@@ -0,0 +1,282 @@
+import os
+import glob
+import cv2
+import random
+import pandas as pd
+from skimage import io
+import numpy as np
+from PIL import Image
+from torch.utils.data import Dataset, DataLoader
+from torchvision import transforms, utils
+import h5py
+
+# Network building stuff
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+import pytorch_lightning as pl
+from pytorch_lightning.loggers import TensorBoardLogger
+import torchmetrics
+import torch.distributions as dist
+
+
+#mesh
+from src.utils.libmise.mise import MISE
+from src.utils.libmcubes.mcubes import marching_cubes
+import trimesh
+from pykdtree.kdtree import KDTree
+
+
+def make_3d_grid(bb_min, bb_max, shape):
+ ''' Makes a 3D grid.
+ Args:
+ bb_min (tuple): bounding box minimum
+ bb_max (tuple): bounding box maximum
+ shape (tuple): output shape
+ '''
+ size = shape[0] * shape[1] * shape[2]
+
+ pxs = torch.linspace(bb_min[0], bb_max[0], shape[0])
+ pys = torch.linspace(bb_min[1], bb_max[1], shape[1])
+ pzs = torch.linspace(bb_min[2], bb_max[2], shape[2])
+
+ pxs = pxs.view(-1, 1, 1).expand(*shape).contiguous().view(size)
+ pys = pys.view(1, -1, 1).expand(*shape).contiguous().view(size)
+ pzs = pzs.view(1, 1, -1).expand(*shape).contiguous().view(size)
+ p = torch.stack([pxs, pys, pzs], dim=1)
+
+ return p
+
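+# Shape sketch (illustrative, not part of the original code): a 32^3 query grid
+# spanning the unit cube centred at the origin, as used by get_mesh() below.
+#
+#   grid = make_3d_grid((-0.5,) * 3, (0.5,) * 3, (32,) * 3)
+#   # grid.shape == torch.Size([32768, 3]); each row is one (x, y, z) query location
+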
+def eval_points(net, p, c, points_batch_size=100000):
+ """
+ """
+ p_split = torch.split(p, points_batch_size)
+ # print(len(p_split))
+ occ_hats = []
+
+ for pi in p_split:
+ pi = pi.unsqueeze(0)
+ with torch.no_grad():
+ occ_hat = net.net.decoder(pi.to(net.device), c.to(net.device))
+
+ occ_hats.append(occ_hat.squeeze(0).detach().cpu())
+
+ occ_hat = torch.cat(occ_hats, dim=0)
+
+ return occ_hat
+
+def extract_mesh(occ_hat, padding=0.1, threshold_g=0.2):
+ n_x, n_y, n_z = occ_hat.shape
+ box_size = 1 + padding
+ threshold = np.log( threshold_g) - np.log(1. - threshold_g)
+
+ occ_hat_padded = np.pad(occ_hat, 1, 'constant', constant_values=-1e6)
+ # print(threshold,occ_hat_padded.shape, np.min(occ_hat_padded), np.max(occ_hat_padded))
+ vertices, triangles = marching_cubes(occ_hat_padded, threshold)
+
+ vertices -= 0.5
+ # Undo padding
+ vertices -= 1
+ # Normalize to bounding box
+ vertices /= np.array([n_x-1, n_y-1, n_z-1])
+ vertices = box_size * (vertices - 0.5)
+
+ mesh = build_mesh(vertices, triangles)
+ return mesh, (vertices, triangles)
+
+def build_mesh(vertices, triangles, normals=None):
+ mesh = trimesh.Trimesh(vertices, triangles, vertex_normals=normals, process=False)
+ return mesh
+
+def get_mesh(net, data, padding=0.1, resolution0=32, upsampling_steps=2, threshold_g=0.2, return_points=False):
+ # Get the image, points, and the ground truth
+ test_img, test_pts, test_gt = data
+
+ # Get the threshold and the box padding
+ threshold = np.log( threshold_g) - np.log(1. - threshold_g)
+ box_size = 1 + padding
+ nx = 32
+ pointsf = 2 * make_3d_grid((-0.5,)*3, (0.5,)*3, (nx,)*3 )
+ c = net.net.encoder(test_img.unsqueeze(0)).detach()
+
+ if(upsampling_steps==0):
+ values = eval_points(net, pointsf,c ).cpu().numpy()
+ value_grid = values.reshape(nx, nx, nx)
+ else:
+ mesh_extractor = MISE(resolution0, upsampling_steps, threshold)
+ points = mesh_extractor.query()
+ while points.shape[0] != 0:
+ # Query points
+ pointsf = torch.FloatTensor(points)
+ # Normalize to bounding box
+ pointsf = pointsf / mesh_extractor.resolution
+ pointsf = box_size * (pointsf - 0.5)
+ # Evaluate model and update
+ # print(pointsf.shape, c.shape)
+ values = eval_points(net, pointsf, c).cpu().numpy()
+ values = values.astype(np.float64)
+ mesh_extractor.update(points, values)
+ points = mesh_extractor.query()
+ value_grid = mesh_extractor.to_dense()
+ mesh, mesh_data = extract_mesh(value_grid, threshold_g=threshold_g)
+
+ normals = get_normals(net, mesh_data[0], c)
+ mesh = build_mesh(mesh_data[0], mesh_data[1], normals)
+
+ if return_points:
+ return mesh, mesh_data, normals
+ return mesh
+
+def get_normals(net, vertices, c):
+ pts = torch.FloatTensor(vertices)
+ vertices_split = torch.split(pts, 10000)
+
+ normals = []
+ for vi in vertices_split:
+ # net.zero_grad()
+ vi = vi.unsqueeze(0)
+ vi.requires_grad_()
+ occ_hat = net.net.decoder(vi.to(net.device), c.to(net.device))
+ out = occ_hat.sum()
+ out.backward()
+ ni = -vi.grad
+ ni = ni / torch.norm(ni, dim=-1, keepdim=True)
+ ni = ni.squeeze(0).cpu().numpy()
+ normals.append(ni)
+
+ normals = np.concatenate(normals, axis=0)
+ return normals
+
+def compute_iou(occ1, occ2):
+ ''' Computes the Intersection over Union (IoU) value for two sets of
+ occupancy values.
+ Args:
+ occ1 (tensor): first set of occupancy values
+ occ2 (tensor): second set of occupancy values
+ '''
+ occ1 = np.asarray(occ1)
+ occ2 = np.asarray(occ2)
+
+ # Put all data in second dimension
+ # Also works for 1-dimensional data
+ if occ1.ndim >= 2:
+ occ1 = occ1.reshape(occ1.shape[0], -1)
+ if occ2.ndim >= 2:
+ occ2 = occ2.reshape(occ2.shape[0], -1)
+
+ # Convert to boolean values
+ occ1 = (occ1 >= 0.5)
+ occ2 = (occ2 >= 0.5)
+
+ # Compute IOU
+ area_union = (occ1 | occ2).astype(np.float32).sum(axis=-1)
+ area_intersect = (occ1 & occ2).astype(np.float32).sum(axis=-1)
+
+ iou = (area_intersect / area_union)
+
+ return iou
+
+empty_point_dict = {
+ 'completeness': np.sqrt(3),
+ 'accuracy': np.sqrt(3),
+ 'completeness2': 3,
+ 'accuracy2': 3,
+ 'chamfer': 6,
+}
+
+empty_normal_dict = {
+ 'normals completeness': -1.,
+ 'normals accuracy': -1.,
+ 'normals': -1.,
+}
+
+def compute_separation(points_src, normals_src, points_tgt, normals_tgt):
+ ''' Computes minimal distances of each point in points_src to points_tgt.
+ Args:
+ points_src (numpy array): source points
+ normals_src (numpy array): source normals
+ points_tgt (numpy array): target points
+ normals_tgt (numpy array): target normals
+ '''
+ kdtree = KDTree(points_tgt)
+ sepr, ind = kdtree.query(points_src)
+
+ if normals_src is not None and normals_tgt is not None:
+ normals_src = normals_src / np.linalg.norm(normals_src, axis=-1, keepdims=True)
+ normals_tgt = normals_tgt / np.linalg.norm(normals_tgt, axis=-1, keepdims=True)
+
+ normals_dot_product = (normals_tgt[ind] * normals_src).sum(axis=-1)
+ normals_dot_product = np.abs(normals_dot_product)
+ else:
+ normals_dot_product = np.array(
+ [np.nan] * points_src.shape[0], dtype=np.float32)
+ return sepr, normals_dot_product
+
+def eval_pointcloud(pointcloud, pointcloud_gt,
+ normals, normals_gt, occ1, occ2):
+ '''
+ Evaluates a point cloud.
+ Args:
+ pointcloud (numpy array): predicted point cloud
+ pointcloud_gt (numpy array): ground truth point cloud
+ normals (numpy array): predicted normals
+ normals_gt (numpy array): ground truth normals
+ '''
+ # Return maximum losses if pointcloud is empty
+ if pointcloud.shape[0] == 0:
+ print('Empty pointcloud / mesh detected!')
+        # Copy so that the module-level defaults are not mutated
+        out_dict = empty_point_dict.copy()
+        if normals is not None and normals_gt is not None:
+ out_dict.update(empty_normal_dict)
+ return out_dict
+
+ pointcloud = np.asarray(pointcloud)
+ pointcloud_gt = np.asarray(pointcloud_gt)
+
+ # Completeness: how far are the points of the groundtruth point cloud
+ # from the predicted point cloud
+ completeness, normal_completeness = compute_separation(
+ pointcloud_gt, normals_gt, pointcloud, normals
+ )
+ completeness_sq = completeness**2
+
+ completeness = completeness.mean()
+ completeness_sq = completeness_sq.mean()
+ normal_completeness = normal_completeness.mean()
+
+ # Accuracy: how far are the points of the predicted pointcloud
+ # from the groundtruth pointcloud
+ accuracy, normal_accuracy = compute_separation(
+ pointcloud, normals, pointcloud_gt, normals_gt
+ )
+ accuracy_sq = accuracy**2
+
+ accuracy = accuracy.mean()
+ accuracy_sq = accuracy_sq.mean()
+ normal_accuracy = normal_accuracy.mean()
+
+ # Chamfer distance
+ chamferL2 = 0.5 * (completeness_sq + accuracy_sq)
+ normals_correction = (
+ 0.5 * normal_completeness + 0.5 * normal_accuracy
+ )
+ chamferL1 = 0.5 * (completeness + accuracy)
+
+ occupancy_iou = compute_iou(occ1, occ2)
+
+ out_dict = {
+ 'completeness': completeness,
+ 'accuracy': accuracy,
+ 'normals completeness': normal_completeness,
+ 'normals accuracy': normal_accuracy,
+ 'normals': normals_correction,
+ 'completeness_sq': completeness_sq,
+ 'accuracy_sq': accuracy_sq,
+ 'chamfer-L2': chamferL2,
+ 'chamfer-L1': chamferL1,
+ 'iou': occupancy_iou
+ }
+
+ return out_dict
\ No newline at end of file
diff --git a/src/metrics.py b/src/metrics.py
new file mode 100755
index 0000000..b1fced9
--- /dev/null
+++ b/src/metrics.py
@@ -0,0 +1,160 @@
+from pykdtree.kdtree import KDTree
+import numpy as np
+
+
+def compute_iou(occ1, occ2):
+ ''' Computes the Intersection over Union (IoU) value for two sets of
+ occupancy values.
+ Args:
+ occ1 (tensor): first set of occupancy values
+ occ2 (tensor): second set of occupancy values
+ '''
+ occ1 = np.asarray(occ1)
+ occ2 = np.asarray(occ2)
+
+ # Put all data in second dimension
+ # Also works for 1-dimensional data
+ if occ1.ndim >= 2:
+ occ1 = occ1.reshape(occ1.shape[0], -1)
+ if occ2.ndim >= 2:
+ occ2 = occ2.reshape(occ2.shape[0], -1)
+
+ # Convert to boolean values
+ occ1 = (occ1 >= 0.5)
+ occ2 = (occ2 >= 0.5)
+
+ # Compute IOU
+ area_union = (occ1 | occ2).astype(np.float32).sum(axis=-1)
+ area_intersect = (occ1 & occ2).astype(np.float32).sum(axis=-1)
+
+ iou = (area_intersect / area_union)
+
+ return iou
+
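+# Worked toy example (illustrative, not part of the original code): two occupancy
+# vectors that agree on two of the three occupied positions give IoU = 2 / 3.
+#
+#   occ_a = np.array([1, 1, 0, 1, 0])
+#   occ_b = np.array([1, 1, 0, 0, 0])
+#   compute_iou(occ_a, occ_b)   # intersection = 2, union = 3 -> 0.666...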
+
+def compute_separation(points_src, normals_src, points_tgt, normals_tgt):
+ ''' Computes minimal distances of each point in points_src to points_tgt.
+ Args:
+ points_src (numpy array): source points
+ normals_src (numpy array): source normals
+ points_tgt (numpy array): target points
+ normals_tgt (numpy array): target normals
+ '''
+ kdtree = KDTree(points_tgt)
+ sepr, ind = kdtree.query(points_src)
+
+ if normals_src is not None and normals_tgt is not None:
+ normals_src = normals_src / np.linalg.norm(normals_src, axis=-1, keepdims=True)
+ normals_tgt = normals_tgt / np.linalg.norm(normals_tgt, axis=-1, keepdims=True)
+
+ normals_dot_product = (normals_tgt[ind] * normals_src).sum(axis=-1)
+ normals_dot_product = np.abs(normals_dot_product)
+ else:
+ normals_dot_product = np.array(
+ [np.nan] * points_src.shape[0], dtype=np.float32)
+ return sepr, normals_dot_product
+
+
+def eval_pointcloud(pointcloud, pointcloud_gt,
+ normals, normals_gt, occ1, occ2):
+ '''
+ Evaluates a point cloud.
+ Args:
+ pointcloud (numpy array): predicted point cloud
+ pointcloud_gt (numpy array): ground truth point cloud
+ normals (numpy array): predicted normals
+ normals_gt (numpy array): ground truth normals
+ '''
+ # Return maximum losses if pointcloud is empty
+
+ empty_point_dict = {
+ 'completeness': np.sqrt(3),
+ 'accuracy': np.sqrt(3),
+ 'completeness2': 3,
+ 'accuracy2': 3,
+ 'chamfer': 6,
+ }
+
+ empty_normal_dict = {
+ 'normals completeness': -1.,
+ 'normals accuracy': -1.,
+ 'normals': -1.,
+ }
+
+ if pointcloud.shape[0] == 0:
+ print('Empty pointcloud / mesh detected!')
+ out_dict = empty_point_dict
+        if normals is not None and normals_gt is not None:
+ out_dict.update(empty_normal_dict)
+ return out_dict
+
+ pointcloud = np.asarray(pointcloud)
+ pointcloud_gt = np.asarray(pointcloud_gt)
+
+ # Completeness: how far are the points of the groundtruth point cloud
+ # from the predicted point cloud
+ completeness, normal_completeness = compute_separation(
+ pointcloud_gt, normals_gt, pointcloud, normals
+ )
+ completeness_sq = completeness**2
+
+ completeness = completeness.mean()
+ completeness_sq = completeness_sq.mean()
+ normal_completeness = normal_completeness.mean()
+
+ # Accuracy: how far are the points of the predicted pointcloud
+ # from the groundtruth pointcloud
+ accuracy, normal_accuracy = compute_separation(
+ pointcloud, normals, pointcloud_gt, normals_gt
+ )
+ accuracy_sq = accuracy**2
+
+ accuracy = accuracy.mean()
+ accuracy_sq = accuracy_sq.mean()
+ normal_accuracy = normal_accuracy.mean()
+
+ # Chamfer distance
+ chamferL2 = 0.5 * (completeness_sq + accuracy_sq)
+ normals_correction = (
+ 0.5 * normal_completeness + 0.5 * normal_accuracy
+ )
+ chamferL1 = 0.5 * (completeness + accuracy)
+
+ occupancy_iou = compute_iou(occ1, occ2)
+
+ out_dict = {
+ 'completeness': completeness,
+ 'accuracy': accuracy,
+ 'normals completeness': normal_completeness,
+ 'normals accuracy': normal_accuracy,
+ 'normals': normals_correction,
+ 'completeness_sq': completeness_sq,
+ 'accuracy_sq': accuracy_sq,
+        'chamfer-L2': chamferL2,
+ 'chamfer-L1': chamferL1,
+ 'iou': occupancy_iou
+ }
+
+ return out_dict
+
+if __name__ == '__main__':
+
+ pc_path1 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/pointcloud.npz'
+ pc_path2 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/pointcloud.npz'
+ point_path1 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/points.npz'
+ point_path2 = '/home/madhvi/Documents/CV Project/data/subset/ShapeNet/02691156/1ac29674746a0fc6b87697d3904b168b/points.npz'
+
+ pc_data1 = np.load(pc_path1)
+ pc_data2 = np.load(pc_path2)
+ points_data1 = np.load(point_path1)
+ points_data2 = np.load(point_path2)
+
+ pointcloud = pc_data1['points']
+ pointcloud_gt = pc_data2['points']
+ normals = pc_data1['normals']
+ normals_gt = pc_data2['normals']
+ occ_1 = points_data1['occupancies']
+ occ_2 = points_data2['occupancies']
+
+ eval_dict = eval_pointcloud(pointcloud, pointcloud_gt, normals, normals_gt, occ_1, occ_2)
+ print(eval_dict)
\ No newline at end of file
diff --git a/src/models/__init__.py b/src/models/__init__.py
new file mode 100755
index 0000000..af3c848
--- /dev/null
+++ b/src/models/__init__.py
@@ -0,0 +1,58 @@
+import torch
+import numpy as np
+import torch.nn as nn
+import torch.nn.functional as F
+import torchvision.models as models
+
+from .decoder import DecoderFC, DecoderCBN
+from .efficientnet import EfficientNetB0, EfficientNetB1, EfficientNetB5, EfficientNetB7
+from .resnet import Resnet50, Resnet18
+
+
+encoder_models = {
+ "resnet-50": Resnet50,
+ "resnet-18": Resnet18,
+ "efficientnet-b0": EfficientNetB0,
+ "efficientnet-b1": EfficientNetB1,
+ "efficientnet-b5": EfficientNetB5,
+ "efficientnet-b7": EfficientNetB7,
+}
+
+decoder_models = {
+ "decoder-fc": DecoderFC,
+ "decoder-cbn": DecoderCBN,
+}
+
+
+def build_encoder(model_name="efficientnet-b0"):
+ return encoder_models[model_name]
+
+def build_decoder(model_name="decoder-cbn"):
+ return decoder_models[model_name]
+
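+# Illustrative sketch (not part of the original module): building the full
+# encoder-decoder network and running a dummy forward pass; the dimensions follow
+# the defaults used in train.py (c_dim = 128, h_dim = 128, p_dim = 3).
+#
+#   encoder = build_encoder("efficientnet-b0")(c_dim=128)
+#   decoder = build_decoder("decoder-cbn")(p_dim=3, c_dim=128, h_dim=128)
+#   net = OccNetImg(encoder, decoder)
+#   logits = net(torch.rand(2, 3, 224, 224), torch.rand(2, 1024, 3))
+#   # logits.shape == torch.Size([2, 1024]); occupancy probability = sigmoid(logits)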
+
+class OccNetImg(nn.Module):
+ """
+ Wrapper for the overall occupancy network module. This will
+    contain the encoder as well as the decoder and provide functionality
+    such as feature extraction, decoding to compute occupancy, and an
+    end-to-end forward pass over the encoder-decoder architecture.
+ """
+ def __init__(self, encoder, decoder):
+ super().__init__()
+ self.encoder = encoder
+ self.decoder = decoder
+
+
+ def extract_features(self, x):
+ return self.encoder(x)
+
+ def forward(self, img, pts):
+ # print(img.shape, pts.shape)
+ # Compute the image features
+ c = self.extract_features(img)
+
+ # print(c.shape)
+ out = self.decoder(pts, c)
+
+ return out
\ No newline at end of file
diff --git a/src/models/decoder.py b/src/models/decoder.py
new file mode 100755
index 0000000..726920c
--- /dev/null
+++ b/src/models/decoder.py
@@ -0,0 +1,218 @@
+import torch.nn as nn
+from torchvision import models
+import torch.nn.functional as F
+
+
+class ResBlockFC(nn.Module):
+ def __init__(self, in_dim, out_dim=None, h_dim=None):
+ super().__init__()
+ if out_dim is None:
+ out_dim = in_dim
+ if h_dim is None:
+ h_dim = min(in_dim, out_dim)
+
+ self.fc_0 = nn.Linear(in_dim, h_dim)
+ self.fc_1 = nn.Linear(h_dim, out_dim)
+ self.act = nn.ReLU()
+
+ if in_dim == out_dim:
+ self.skip = None
+ else:
+ self.skip = nn.Linear(in_dim, out_dim, bias=False)
+
+ # Initialize weights to zero
+ nn.init.zeros_(self.fc_1.weight)
+
+ def forward(self, x):
+ out_0 = self.act(self.fc_0(x))
+        out = self.fc_1(out_0)
+
+ if self.skip is not None:
+ x_skip = self.skip(x)
+ else:
+ x_skip = x
+
+ return x_skip + out
+
+class DecoderFC(nn.Module):
+ def __init__(self, p_dim=3, c_dim=128, h_dim=128):
+ super().__init__()
+ self.p_dim = p_dim
+ self.c_dim = c_dim
+ self.h_dim = h_dim
+
+ self.fc_p = nn.Linear(p_dim, h_dim)
+ self.fc_c = nn.Linear(c_dim, h_dim)
+
+ self.blocks = nn.Sequential(
+ ResBlockFC(h_dim),
+ ResBlockFC(h_dim),
+ ResBlockFC(h_dim),
+ ResBlockFC(h_dim),
+ ResBlockFC(h_dim)
+ )
+
+ self.fc = nn.Linear(h_dim, 1)
+ self.act = nn.ReLU()
+
+ def forward(self, p, c):
+ # Get size (B, N, D)
+ batch_size, n_points, dim = p.size()
+ # print(p.shape)
+ enc_p = self.fc_p(p) # (B, N, h_dim)
+ enc_c = self.fc_c(c).unsqueeze(1) # (B, 1, h_dim)
+
+ # Add the features now
+ enc = enc_p + enc_c
+
+ # Run through the res blocks
+ enc = self.blocks(enc)
+ out = self.fc(self.act(enc)).squeeze(-1)
+ return out
+
+
+class CondBatchNorm(nn.Module):
+ ''' Conditional batch normalization layer class.
+ Args:
+ c_dim: dimension of latent conditioned code c
+ p_dim: points feature dimension
+ norm: normalization method
+ '''
+
+ def __init__(self, c_dim, in_dim, norm = 'batch_norm'):
+ super().__init__()
+ self.c_dim = c_dim
+ self.in_dim = in_dim
+ self.norm = norm
+
+ # computing the gamma and beta values
+ self.gamma = nn.Linear(c_dim, in_dim)
+ self.beta = nn.Linear(c_dim, in_dim)
+
+ if self.norm == 'batch_norm':
+ self.batchnorm = nn.BatchNorm1d(in_dim, affine=False)
+ elif self.norm == 'instance_norm':
+ self.batchnorm = nn.InstanceNorm1d(in_dim, affine=False)
+ elif self.norm == 'group_norm':
+            # PyTorch has no nn.GroupNorm1d; nn.GroupNorm with 16 groups is an assumed stand-in
+            self.batchnorm = nn.GroupNorm(num_groups=16, num_channels=in_dim, affine=False)
+ else:
+ raise ValueError('Invalid normalization method!')
+ self.reset_parameters()
+
+ def reset_parameters(self):
+ nn.init.zeros_(self.gamma.weight)
+ nn.init.zeros_(self.beta.weight)
+ nn.init.ones_(self.gamma.bias)
+ nn.init.zeros_(self.beta.bias)
+
+ def forward(self, x, c):
+ batch_size = x.size(0)
+ # Affine mapping
+ gamma = self.gamma(c)
+ beta = self.beta(c)
+ gamma = gamma.view(batch_size, self.in_dim, 1)
+ beta = beta.view(batch_size, self.in_dim, 1)
+ # Batchnorm
+ net = self.batchnorm(x)
+ out = gamma * net + beta
+
+ return out
+
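+# Shape sketch (illustrative, not part of the original code): the conditional batch
+# norm rescales a (B, C, N) point-feature tensor with gamma/beta predicted from the
+# image code c of shape (B, c_dim).
+#
+#   cbn = CondBatchNorm(c_dim=128, in_dim=256)
+#   out = cbn(torch.rand(4, 256, 1024), torch.rand(4, 128))
+#   # out.shape == torch.Size([4, 256, 1024])
+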
+class CondResBlock(nn.Module):
+ ''' Conditional batch normalization-based Resnet block class.
+ Args:
+ c_dim (int): dimension of latent conditioned code c
+ in_dim (int): input dimension
+ out_dim (int): output dimension
+ h_dim (int): hidden dimension
+ norm (str): normalization method
+ '''
+
+ def __init__(self, c_dim, in_dim, h_dim=None, out_dim=None,
+ norm = 'batch_norm'):
+ super().__init__()
+ # Attributes
+ if h_dim is None:
+ h_dim = in_dim
+ if out_dim is None:
+ out_dim = in_dim
+
+ self.in_dim = in_dim
+ self.h_dim = h_dim
+ self.out_dim = out_dim
+
+ self.batchnorm_0 = CondBatchNorm(
+ c_dim, in_dim, norm = norm)
+ self.batchnorm_1 = CondBatchNorm(
+ c_dim, h_dim, norm = norm)
+
+ self.fc_0 = nn.Conv1d(in_dim, h_dim, 1)
+ self.fc_1 = nn.Conv1d(h_dim, out_dim, 1)
+ self.act = nn.ReLU()
+
+ if in_dim == out_dim:
+ self.skip = None
+ else:
+ self.skip = nn.Conv1d(in_dim, out_dim, 1, bias=False)
+ # Initialization
+ nn.init.zeros_(self.fc_1.weight)
+
+ def forward(self, x, c):
+ out = self.fc_0(self.act(self.batchnorm_0(x, c)))
+ out = self.fc_1(self.act(self.batchnorm_1(out, c)))
+
+ if self.skip is not None:
+ skip = self.skip(x)
+ else:
+ skip = x
+
+ return skip + out
+
+class DecoderCBN(nn.Module):
+ ''' Decoder with conditional batch normalization (CBN) class.
+ Args:
+ p_dim (int): input dimension
+ c_dim (int): dimension of latent conditioned code c
+ h_dim (int): hidden size of Decoder network
+ '''
+
+ def __init__(self, p_dim=3, c_dim=128,
+ h_dim=256):
+ super().__init__()
+ # self.z_dim = z_dim
+ in_dim = p_dim
+
+ # self.fc_z = nn.Linear(z_dim, h_dim)
+
+ self.fc_p = nn.Conv1d(in_dim, h_dim, 1)
+
+ self.block1 = CondResBlock(c_dim, h_dim)
+ self.block2 = CondResBlock(c_dim, h_dim)
+ self.block3 = CondResBlock(c_dim, h_dim)
+ self.block4 = CondResBlock(c_dim, h_dim)
+ self.block5 = CondResBlock(c_dim, h_dim)
+
+
+ self.bn = CondBatchNorm(c_dim, h_dim)
+
+ self.fc_out = nn.Conv1d(h_dim, 1, 1)
+
+ self.act = F.relu
+
+ def forward(self, p, c, **kwargs):
+ p = p.transpose(1, 2)
+ batch_size, D, T = p.size()
+ enc_p = self.fc_p(p)
+
+ # enc_z = self.fc_z(z).unsqueeze(2)
+ enc = enc_p #+ enc_z
+
+ enc = self.block1(enc, c)
+ enc = self.block2(enc, c)
+ enc = self.block3(enc, c)
+ enc = self.block4(enc, c)
+ enc = self.block5(enc, c)
+
+ out = self.fc_out(self.act(self.bn(enc, c)))
+ out = out.squeeze(1)
+ return out
\ No newline at end of file
diff --git a/src/models/efficientnet.py b/src/models/efficientnet.py
new file mode 100755
index 0000000..b29fa88
--- /dev/null
+++ b/src/models/efficientnet.py
@@ -0,0 +1,68 @@
+import torch.nn as nn
+from torchvision import models
+from efficientnet_pytorch import EfficientNet
+
+
+class EfficientNetB0(nn.Module):
+ ''' EfficientNet-b0 encoder network for image input.
+ Args:
+ c_dim (int): output dimension of the latent embedding
+ '''
+
+ def __init__(self, c_dim):
+ super().__init__()
+ self.features = EfficientNet.from_pretrained('efficientnet-b0', num_classes=c_dim)
+
+ def forward(self, x):
+ x = self.features(x)
+ out = x.view(x.size(0), -1)
+ return out
+
+
+
+class EfficientNetB1(nn.Module):
+ ''' EfficientNet-b1 encoder network for image input.
+ Args:
+ c_dim (int): output dimension of the latent embedding
+ '''
+
+ def __init__(self, c_dim):
+ super().__init__()
+ self.features = EfficientNet.from_pretrained('efficientnet-b1', num_classes=c_dim)
+
+ def forward(self, x):
+ x = self.features(x)
+ out = x.view(x.size(0), -1)
+ return out
+
+
+class EfficientNetB5(nn.Module):
+ ''' EfficientNet-b5 encoder network for image input.
+ Args:
+ c_dim (int): output dimension of the latent embedding
+ '''
+
+ def __init__(self, c_dim):
+ super().__init__()
+ self.features = EfficientNet.from_pretrained('efficientnet-b5', num_classes=c_dim)
+
+ def forward(self, x):
+ x = self.features(x)
+ out = x.view(x.size(0), -1)
+ return out
+
+
+class EfficientNetB7(nn.Module):
+ ''' EfficientNet-b7 encoder network for image input.
+ Args:
+ c_dim (int): output dimension of the latent embedding
+ '''
+
+ def __init__(self, c_dim):
+ super().__init__()
+ self.features = EfficientNet.from_pretrained('efficientnet-b7', num_classes=c_dim)
+
+ def forward(self, x):
+ x = self.features(x)
+ out = x.view(x.size(0), -1)
+ return out
\ No newline at end of file
diff --git a/src/models/resnet.py b/src/models/resnet.py
new file mode 100755
index 0000000..04c1854
--- /dev/null
+++ b/src/models/resnet.py
@@ -0,0 +1,38 @@
+import torch.nn as nn
+from torchvision import models
+
+
+class Resnet18(nn.Module):
+ ''' ResNet-18 encoder network for image input.
+ Args:
+ c_dim (int): output dimension of the latent embedding
+ '''
+
+ def __init__(self, c_dim):
+ super().__init__()
+ self.features = models.resnet18(pretrained=True)
+ self.features.fc = nn.Sequential()
+ self.fc = nn.Linear(512, c_dim)
+
+ def forward(self, x):
+ x = self.features(x)
+ out = self.fc(x)
+ return out
+
+
+class Resnet50(nn.Module):
+ ''' ResNet-50 encoder network.
+ Args:
+ c_dim (int): output dimension of the latent embedding
+ '''
+
+ def __init__(self, c_dim):
+ super().__init__()
+ self.features = models.resnet50(pretrained=True)
+ self.features.fc = nn.Sequential()
+ self.fc = nn.Linear(2048, c_dim)
+
+ def forward(self, x):
+ x = self.features(x)
+ out = self.fc(x)
+ return out
diff --git a/src/run.py b/src/run.py
new file mode 100755
index 0000000..e69de29
diff --git a/src/test.py b/src/test.py
new file mode 100755
index 0000000..e69de29
diff --git a/src/train.py b/src/train.py
new file mode 100755
index 0000000..dd20afb
--- /dev/null
+++ b/src/train.py
@@ -0,0 +1,81 @@
+import os
+import glob
+import cv2
+import random
+import pandas as pd
+from skimage import io
+import numpy as np
+from PIL import Image
+from torch.utils.data import Dataset, DataLoader
+from torchvision import transforms, utils
+import h5py
+import argparse
+
+# Network building stuff
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+import pytorch_lightning as pl
+from pytorch_lightning.loggers import TensorBoardLogger
+from pytorch_lightning.callbacks import ModelCheckpoint
+import torchmetrics
+
+from models import *
+from dataset.dataloader import OccupancyNetDatasetHDF
+from trainer import ONetLit
+from utils import Config, count_parameters
+
+
+if __name__ == "__main__":
+ parser = argparse.ArgumentParser(description="Argument parser for training the model")
+ parser.add_argument('--cdim', action='store', type=int, default=128, help="feature dimension")
+ parser.add_argument('--hdim', action='store', type=int, default=128, help="hidden size for decoder")
+ parser.add_argument('--pdim', action='store', type=int, default=3, help="points input size for decoder")
+ parser.add_argument('--data_root', action='store', type=str, default="/ssd_scratch/cvit/sdokania/hdf_shapenet/hdf_data/", help="location of the parsed and processed dataset")
+ parser.add_argument('--batch_size', action='store', type=int, default=64, help="Training batch size")
+ parser.add_argument('--output_path', action='store', type=str, default="/home2/sdokania/all_projects/occ_artifacts/", help="Model saving and checkpoint paths")
+ parser.add_argument('--exp_name', action='store', type=str, default="initial", help="Name of the experiment. Artifacts will be created with this name")
+ parser.add_argument('--encoder', action='store', type=str, default="efficientnet-b0", help="Name of the Encoder architecture to use")
+ parser.add_argument('--decoder', action='store', type=str, default="decoder-cbn", help="Name of the decoder architecture to use")
+
+ args = parser.parse_args()
+ # Get the model configuration
+ config = Config(args)
+
+ # Define the lightning module
+ onet = ONetLit(config)
+
+ # Initialize tensorboard logger
+ logger = TensorBoardLogger(
+ save_dir=config.exp_path,
+ version=1,
+ name="lightning_logs"
+ )
+
+ # Initialize the checkpoint module
+ checkpoint_callback = ModelCheckpoint(
+ monitor="val_loss",
+ mode="min",
+ save_top_k=3
+ )
+
+ # Define the trainer object
+ trainer = pl.Trainer(
+ gpus=1,
+ # auto_scale_batch_size='binsearch',
+ logger=logger,
+ min_epochs=1,
+ max_epochs=200,
+ default_root_dir=config.output_dir,
+ log_every_n_steps=10,
+ progress_bar_refresh_rate=5,
+ # precision=16,
+ # stochastic_weight_avg=True,
+ # track_grad_norm=2,
+ callbacks=[checkpoint_callback],
+ check_val_every_n_epoch=1,
+ )
+
+ # Start training
+ trainer.fit(onet)
diff --git a/src/trainer.py b/src/trainer.py
new file mode 100755
index 0000000..9518581
--- /dev/null
+++ b/src/trainer.py
@@ -0,0 +1,86 @@
+import os
+import glob
+import cv2
+import random
+import pandas as pd
+from skimage import io
+import numpy as np
+from PIL import Image
+from torch.utils.data import Dataset, DataLoader
+from torchvision import transforms, utils
+import h5py
+
+# Network building stuff
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+import pytorch_lightning as pl
+import torchmetrics
+
+try:
+ from .models import *
+ from .dataset.dataloader import OccupancyNetDatasetHDF
+ from .utils import Config
+except:
+ from models import *
+ from dataset.dataloader import OccupancyNetDatasetHDF
+ from utils import Config
+
+
+
+class ONetLit(pl.LightningModule):
+ def __init__(self, cfg=None):
+ super().__init__()
+ if cfg is None:
+ cfg = Config()
+ self.config = cfg
+
+ self.build_model()
+
+ def build_model(self):
+ # First we create the encoder and decoder models
+ encoder_model = build_encoder(self.config.encoder)(self.config.c_dim)
+ decoder_model = build_decoder(self.config.decoder)(
+ self.config.p_dim, self.config.c_dim, self.config.h_dim)
+
+ # Now, we initialize the decoder model
+ self.net = OccNetImg(encoder_model, decoder_model)
+
+ def forward(self, img, pts):
+ return self.net(img, pts)
+
+ def training_step(self, batch, batch_idx):
+ imgs, pts, gts = batch
+ output = self(imgs, pts)
+
+ loss = F.binary_cross_entropy_with_logits(output, gts, reduction='none').sum(-1).mean()
+ self.log("train_loss", loss.item())
+ return loss
+
+ def validation_step(self, batch, batch_idx):
+ imgs, pts, gts = batch
+ output = self(imgs, pts)
+
+ loss = F.binary_cross_entropy_with_logits(output, gts, reduction='none').mean()
+        # Threshold the predicted probabilities (the decoder outputs raw logits)
+        acc = ((torch.sigmoid(output) > 0.5) == (gts > 0.5)).float().mean()
+        self.log("val_loss", loss.item())
+        self.log("val_acc", acc.item())
+
+ def configure_optimizers(self):
+ return torch.optim.Adam(self.parameters(), lr=self.config.lr)
+
+ def setup(self, stage=None):
+ self.train_dataset = OccupancyNetDatasetHDF(self.config.data_root, mode="subtrain", balance=True)
+ self.val_dataset = OccupancyNetDatasetHDF(self.config.data_root, mode="val", balance=True)
+
+ def train_dataloader(self):
+ return torch.utils.data.DataLoader(self.train_dataset,
+ batch_size=self.config.batch_size,
+ shuffle=True,
+ num_workers=4)
+
+ def val_dataloader(self):
+ return torch.utils.data.DataLoader(self.val_dataset,
+ batch_size=self.config.batch_size,
+ shuffle=False)
\ No newline at end of file
diff --git a/src/utils/__init__.py b/src/utils/__init__.py
new file mode 100755
index 0000000..2b582ea
--- /dev/null
+++ b/src/utils/__init__.py
@@ -0,0 +1,66 @@
+import os
+import torch
+import numpy as np
+
+def count_parameters(network):
+ """
+    Function to count the number of parameters in a network
+ """
+ tot = 0
+ for ix in network.parameters():
+ tot += ix.flatten().shape[0]
+ print("Parameters: {}M".format(np.round(tot/1e06, 3)))
+
+
+class Config:
+ def __init__(self, args=None):
+ if args is None:
+ self.set_default_data()
+ else:
+ self.c_dim = args.cdim
+ self.h_dim = args.hdim
+ self.p_dim = args.pdim
+ self.data_root = args.data_root
+ self.batch_size = args.batch_size
+ self.output_dir = args.output_path
+ self._exp_name = args.exp_name
+ self.encoder = args.encoder
+ self.decoder = args.decoder
+
+ # optimizer related config
+ self.lr = 3e-04
+ self.prepare_experiment_path()
+
+ def prepare_experiment_path(self):
+ self.exp_path = os.path.join(self.output_dir, self._exp_name)
+ print("Setting sexperiment path as : {}".format(self.exp_path))
+
+ os.makedirs(self.output_dir, exist_ok=True)
+ os.makedirs(self.exp_path, exist_ok=True)
+
+ def set_default_data(self):
+ self.c_dim = 128
+ self.h_dim = 128
+ self.p_dim = 3
+ self.data_root = "/ssd_scratch/"
+ self.batch_size = 64
+ self.output_dir = "/home2/sdokania/all_projects/occ_artifacts/"
+ self._exp_name = "initial"
+ self.encoder = "efficientnet-b0"
+ self.decoder = "decoder-cbn"
+
+ def print_config(self):
+ # Print as a dictionary
+ print(vars(self))
+
+ def export_config(self):
+ return vars(self)
+
+ @property
+ def exp_name(self):
+ return self._exp_name
+
+ @exp_name.setter
+ def exp_name(self, value):
+ self._exp_name = value
+ self.prepare_experiment_path()
\ No newline at end of file
diff --git a/src/utils/binvox_rw.py b/src/utils/binvox_rw.py
new file mode 100644
index 0000000..c9c11d6
--- /dev/null
+++ b/src/utils/binvox_rw.py
@@ -0,0 +1,287 @@
+# Copyright (C) 2012 Daniel Maturana
+# This file is part of binvox-rw-py.
+#
+# binvox-rw-py is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# binvox-rw-py is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with binvox-rw-py. If not, see <http://www.gnu.org/licenses/>.
+#
+# Modified by Christopher B. Choy
+# for python 3 support
+
+"""
+Binvox to Numpy and back.
+
+
+>>> import numpy as np
+>>> import binvox_rw
+>>> with open('chair.binvox', 'rb') as f:
+... m1 = binvox_rw.read_as_3d_array(f)
+...
+>>> m1.dims
+[32, 32, 32]
+>>> m1.scale
+41.133000000000003
+>>> m1.translate
+[0.0, 0.0, 0.0]
+>>> with open('chair_out.binvox', 'wb') as f:
+... m1.write(f)
+...
+>>> with open('chair_out.binvox', 'rb') as f:
+... m2 = binvox_rw.read_as_3d_array(f)
+...
+>>> m1.dims==m2.dims
+True
+>>> m1.scale==m2.scale
+True
+>>> m1.translate==m2.translate
+True
+>>> np.all(m1.data==m2.data)
+True
+
+>>> with open('chair.binvox', 'rb') as f:
+... md = binvox_rw.read_as_3d_array(f)
+...
+>>> with open('chair.binvox', 'rb') as f:
+... ms = binvox_rw.read_as_coord_array(f)
+...
+>>> data_ds = binvox_rw.dense_to_sparse(md.data)
+>>> data_sd = binvox_rw.sparse_to_dense(ms.data, 32)
+>>> np.all(data_sd==md.data)
+True
+>>> # the ordering of elements returned by numpy.nonzero changes with axis
+>>> # ordering, so to compare for equality we first lexically sort the voxels.
+>>> np.all(ms.data[:, np.lexsort(ms.data)] == data_ds[:, np.lexsort(data_ds)])
+True
+"""
+
+import numpy as np
+
+class Voxels(object):
+ """ Holds a binvox model.
+ data is either a three-dimensional numpy boolean array (dense representation)
+ or a two-dimensional numpy float array (coordinate representation).
+
+ dims, translate and scale are the model metadata.
+
+ dims are the voxel dimensions, e.g. [32, 32, 32] for a 32x32x32 model.
+
+ scale and translate relate the voxels to the original model coordinates.
+
+ To translate voxel coordinates i, j, k to original coordinates x, y, z:
+
+ x_n = (i+.5)/dims[0]
+ y_n = (j+.5)/dims[1]
+ z_n = (k+.5)/dims[2]
+ x = scale*x_n + translate[0]
+ y = scale*y_n + translate[1]
+ z = scale*z_n + translate[2]
+
+ """
+
+ def __init__(self, data, dims, translate, scale, axis_order):
+ self.data = data
+ self.dims = dims
+ self.translate = translate
+ self.scale = scale
+ assert (axis_order in ('xzy', 'xyz'))
+ self.axis_order = axis_order
+
+ def clone(self):
+ data = self.data.copy()
+ dims = self.dims[:]
+ translate = self.translate[:]
+ return Voxels(data, dims, translate, self.scale, self.axis_order)
+
+ def write(self, fp):
+ write(self, fp)
+
+def read_header(fp):
+ """ Read binvox header. Mostly meant for internal use.
+ """
+ line = fp.readline().strip()
+ if not line.startswith(b'#binvox'):
+ raise IOError('Not a binvox file')
+ dims = [int(i) for i in fp.readline().strip().split(b' ')[1:]]
+ translate = [float(i) for i in fp.readline().strip().split(b' ')[1:]]
+ scale = [float(i) for i in fp.readline().strip().split(b' ')[1:]][0]
+ line = fp.readline()
+ return dims, translate, scale
+
+def read_as_3d_array(fp, fix_coords=True):
+ """ Read binary binvox format as array.
+
+ Returns the model with accompanying metadata.
+
+ Voxels are stored in a three-dimensional numpy array, which is simple and
+ direct, but may use a lot of memory for large models. (Storage requirements
+ are 8*(d^3) bytes, where d is the dimensions of the binvox model. Numpy
+ boolean arrays use a byte per element).
+
+ Doesn't do any checks on input except for the '#binvox' line.
+ """
+ dims, translate, scale = read_header(fp)
+ raw_data = np.frombuffer(fp.read(), dtype=np.uint8)
+ # if just using reshape() on the raw data:
+ # indexing the array as array[i,j,k], the indices map into the
+ # coords as:
+ # i -> x
+ # j -> z
+ # k -> y
+ # if fix_coords is true, then data is rearranged so that
+ # mapping is
+ # i -> x
+ # j -> y
+ # k -> z
+ values, counts = raw_data[::2], raw_data[1::2]
+    data = np.repeat(values, counts).astype(bool)
+ data = data.reshape(dims)
+ if fix_coords:
+ # xzy to xyz TODO the right thing
+ data = np.transpose(data, (0, 2, 1))
+ axis_order = 'xyz'
+ else:
+ axis_order = 'xzy'
+ return Voxels(data, dims, translate, scale, axis_order)
+
+
+def read_as_coord_array(fp, fix_coords=True):
+ """ Read binary binvox format as coordinates.
+
+ Returns binvox model with voxels in a "coordinate" representation, i.e. an
+ 3 x N array where N is the number of nonzero voxels. Each column
+ corresponds to a nonzero voxel and the 3 rows are the (x, z, y) coordinates
+ of the voxel. (The odd ordering is due to the way binvox format lays out
+ data). Note that coordinates refer to the binvox voxels, without any
+ scaling or translation.
+
+ Use this to save memory if your model is very sparse (mostly empty).
+
+ Doesn't do any checks on input except for the '#binvox' line.
+ """
+ dims, translate, scale = read_header(fp)
+ raw_data = np.frombuffer(fp.read(), dtype=np.uint8)
+
+ values, counts = raw_data[::2], raw_data[1::2]
+
+ sz = np.prod(dims)
+ index, end_index = 0, 0
+ end_indices = np.cumsum(counts)
+ indices = np.concatenate(([0], end_indices[:-1])).astype(end_indices.dtype)
+
+    values = values.astype(bool)
+ indices = indices[values]
+ end_indices = end_indices[values]
+
+ nz_voxels = []
+ for index, end_index in zip(indices, end_indices):
+ nz_voxels.extend(range(index, end_index))
+ nz_voxels = np.array(nz_voxels)
+ # TODO are these dims correct?
+ # according to docs,
+ # index = x * wxh + z * width + y; // wxh = width * height = d * d
+
+ x = nz_voxels / (dims[0]*dims[1])
+ zwpy = nz_voxels % (dims[0]*dims[1]) # z*w + y
+ z = zwpy / dims[0]
+ y = zwpy % dims[0]
+ if fix_coords:
+ data = np.vstack((x, y, z))
+ axis_order = 'xyz'
+ else:
+ data = np.vstack((x, z, y))
+ axis_order = 'xzy'
+
+ #return Voxels(data, dims, translate, scale, axis_order)
+ return Voxels(np.ascontiguousarray(data), dims, translate, scale, axis_order)
+
+def dense_to_sparse(voxel_data, dtype=int):
+ """ From dense representation to sparse (coordinate) representation.
+ No coordinate reordering.
+ """
+ if voxel_data.ndim!=3:
+ raise ValueError('voxel_data is wrong shape; should be 3D array.')
+ return np.asarray(np.nonzero(voxel_data), dtype)
+
+def sparse_to_dense(voxel_data, dims, dtype=bool):
+ if voxel_data.ndim!=2 or voxel_data.shape[0]!=3:
+ raise ValueError('voxel_data is wrong shape; should be 3xN array.')
+ if np.isscalar(dims):
+ dims = [dims]*3
+ dims = np.atleast_2d(dims).T
+ # truncate to integers
+    xyz = voxel_data.astype(int)
+ # discard voxels that fall outside dims
+ valid_ix = ~np.any((xyz < 0) | (xyz >= dims), 0)
+ xyz = xyz[:,valid_ix]
+ out = np.zeros(dims.flatten(), dtype=dtype)
+ out[tuple(xyz)] = True
+ return out
+
+#def get_linear_index(x, y, z, dims):
+ #""" Assuming xzy order. (y increasing fastest.
+ #TODO ensure this is right when dims are not all same
+ #"""
+ #return x*(dims[1]*dims[2]) + z*dims[1] + y
+
+def write(voxel_model, fp):
+ """ Write binary binvox format.
+
+ Note that when saving a model in sparse (coordinate) format, it is first
+ converted to dense format.
+
+ Doesn't check if the model is 'sane'.
+
+ """
+ if voxel_model.data.ndim==2:
+ # TODO avoid conversion to dense
+ dense_voxel_data = sparse_to_dense(voxel_model.data, voxel_model.dims)
+ else:
+ dense_voxel_data = voxel_model.data
+
+ fp.write('#binvox 1\n')
+ fp.write('dim '+' '.join(map(str, voxel_model.dims))+'\n')
+ fp.write('translate '+' '.join(map(str, voxel_model.translate))+'\n')
+ fp.write('scale '+str(voxel_model.scale)+'\n')
+ fp.write('data\n')
+ if not voxel_model.axis_order in ('xzy', 'xyz'):
+ raise ValueError('Unsupported voxel model axis order')
+
+ if voxel_model.axis_order=='xzy':
+ voxels_flat = dense_voxel_data.flatten()
+ elif voxel_model.axis_order=='xyz':
+ voxels_flat = np.transpose(dense_voxel_data, (0, 2, 1)).flatten()
+
+ # keep a sort of state machine for writing run length encoding
+ state = voxels_flat[0]
+ ctr = 0
+ for c in voxels_flat:
+ if c==state:
+ ctr += 1
+ # if ctr hits max, dump
+ if ctr==255:
+ fp.write(chr(state))
+ fp.write(chr(ctr))
+ ctr = 0
+ else:
+ # if switch state, dump
+ fp.write(chr(state))
+ fp.write(chr(ctr))
+ state = c
+ ctr = 1
+ # flush out remainders
+ if ctr > 0:
+ fp.write(chr(state))
+ fp.write(chr(ctr))
+
+if __name__ == '__main__':
+ import doctest
+ doctest.testmod()
diff --git a/src/utils/icp.py b/src/utils/icp.py
new file mode 100644
index 0000000..982b4d7
--- /dev/null
+++ b/src/utils/icp.py
@@ -0,0 +1,121 @@
+import numpy as np
+from sklearn.neighbors import NearestNeighbors
+
+
+def best_fit_transform(A, B):
+ '''
+ Calculates the least-squares best-fit transform that maps corresponding
+ points A to B in m spatial dimensions
+ Input:
+ A: Nxm numpy array of corresponding points
+ B: Nxm numpy array of corresponding points
+ Returns:
+ T: (m+1)x(m+1) homogeneous transformation matrix that maps A on to B
+ R: mxm rotation matrix
+ t: mx1 translation vector
+ '''
+
+ assert A.shape == B.shape
+
+ # get number of dimensions
+ m = A.shape[1]
+
+ # translate points to their centroids
+ centroid_A = np.mean(A, axis=0)
+ centroid_B = np.mean(B, axis=0)
+ AA = A - centroid_A
+ BB = B - centroid_B
+
+    # cross-covariance matrix; its SVD yields the rotation
+ H = np.dot(AA.T, BB)
+ U, S, Vt = np.linalg.svd(H)
+ R = np.dot(Vt.T, U.T)
+
+ # special reflection case
+ if np.linalg.det(R) < 0:
+ Vt[m-1,:] *= -1
+ R = np.dot(Vt.T, U.T)
+
+ # translation
+ t = centroid_B.T - np.dot(R,centroid_A.T)
+
+ # homogeneous transformation
+ T = np.identity(m+1)
+ T[:m, :m] = R
+ T[:m, m] = t
+
+ return T, R, t
+
+
+def nearest_neighbor(src, dst):
+ '''
+ Find the nearest (Euclidean) neighbor in dst for each point in src
+ Input:
+ src: Nxm array of points
+ dst: Nxm array of points
+ Output:
+ distances: Euclidean distances of the nearest neighbor
+ indices: dst indices of the nearest neighbor
+ '''
+
+ assert src.shape == dst.shape
+
+ neigh = NearestNeighbors(n_neighbors=1)
+ neigh.fit(dst)
+ distances, indices = neigh.kneighbors(src, return_distance=True)
+ return distances.ravel(), indices.ravel()
+
+
+def icp(A, B, init_pose=None, max_iterations=20, tolerance=0.001):
+ '''
+ The Iterative Closest Point method: finds best-fit transform that maps
+ points A on to points B
+ Input:
+ A: Nxm numpy array of source mD points
+        B: Nxm numpy array of destination mD points
+ init_pose: (m+1)x(m+1) homogeneous transformation
+ max_iterations: exit algorithm after max_iterations
+ tolerance: convergence criteria
+ Output:
+ T: final homogeneous transformation that maps A on to B
+ distances: Euclidean distances (errors) of the nearest neighbor
+ i: number of iterations to converge
+ '''
+
+ assert A.shape == B.shape
+
+ # get number of dimensions
+ m = A.shape[1]
+
+ # make points homogeneous, copy them to maintain the originals
+ src = np.ones((m+1,A.shape[0]))
+ dst = np.ones((m+1,B.shape[0]))
+ src[:m,:] = np.copy(A.T)
+ dst[:m,:] = np.copy(B.T)
+
+ # apply the initial pose estimation
+ if init_pose is not None:
+ src = np.dot(init_pose, src)
+
+ prev_error = 0
+
+ for i in range(max_iterations):
+ # find the nearest neighbors between the current source and destination points
+ distances, indices = nearest_neighbor(src[:m,:].T, dst[:m,:].T)
+
+ # compute the transformation between the current source and nearest destination points
+ T,_,_ = best_fit_transform(src[:m,:].T, dst[:m,indices].T)
+
+ # update the current source
+ src = np.dot(T, src)
+
+ # check error
+ mean_error = np.mean(distances)
+ if np.abs(prev_error - mean_error) < tolerance:
+ break
+ prev_error = mean_error
+
+ # calculate final transformation
+ T,_,_ = best_fit_transform(A, src[:m,:].T)
+
+ return T, distances, i
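+
+
+if __name__ == '__main__':
+    # Minimal self-check (a sketch added for illustration, not part of the
+    # original module): recover a known rigid transform between two synthetic
+    # point clouds and report the residual error.
+    np.random.seed(0)
+    A = np.random.rand(100, 3)
+    theta = 0.1  # ground-truth rotation angle about the z-axis
+    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
+                       [np.sin(theta),  np.cos(theta), 0.0],
+                       [0.0,            0.0,           1.0]])
+    t_true = np.array([0.2, -0.1, 0.3])
+    B = A.dot(R_true.T) + t_true
+    T, distances, n_iters = icp(A, B)
+    print('mean nearest-neighbour residual: %.6f after %d iterations'
+          % (np.mean(distances), n_iters))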
diff --git a/src/utils/io.py b/src/utils/io.py
new file mode 100644
index 0000000..247b3b7
--- /dev/null
+++ b/src/utils/io.py
@@ -0,0 +1,112 @@
+import os
+from plyfile import PlyElement, PlyData
+import numpy as np
+
+
+def export_pointcloud(vertices, out_file, as_text=True):
+ assert(vertices.shape[1] == 3)
+ vertices = vertices.astype(np.float32)
+ vertices = np.ascontiguousarray(vertices)
+ vector_dtype = [('x', 'f4'), ('y', 'f4'), ('z', 'f4')]
+ vertices = vertices.view(dtype=vector_dtype).flatten()
+ plyel = PlyElement.describe(vertices, 'vertex')
+ plydata = PlyData([plyel], text=as_text)
+ plydata.write(out_file)
+
+
+def load_pointcloud(in_file):
+ plydata = PlyData.read(in_file)
+ vertices = np.stack([
+ plydata['vertex']['x'],
+ plydata['vertex']['y'],
+ plydata['vertex']['z']
+ ], axis=1)
+ return vertices
+
+
+def read_off(file):
+ """
+ Reads vertices and faces from an off file.
+
+ :param file: path to file to read
+ :type file: str
+ :return: vertices and faces as lists of tuples
+ :rtype: [(float)], [(int)]
+ """
+
+ assert os.path.exists(file), 'file %s not found' % file
+
+ with open(file, 'r') as fp:
+ lines = fp.readlines()
+ lines = [line.strip() for line in lines]
+
+        # Fix for ModelNet bug where 'OFF' and the number of vertices and faces
+        # are all in the first line.
+ if len(lines[0]) > 3:
+ assert lines[0][:3] == 'OFF' or lines[0][:3] == 'off', \
+ 'invalid OFF file %s' % file
+
+ parts = lines[0][3:].split(' ')
+ assert len(parts) == 3
+
+ num_vertices = int(parts[0])
+ assert num_vertices > 0
+
+ num_faces = int(parts[1])
+ assert num_faces > 0
+
+ start_index = 1
+ # This is the regular case!
+ else:
+ assert lines[0] == 'OFF' or lines[0] == 'off', \
+ 'invalid OFF file %s' % file
+
+ parts = lines[1].split(' ')
+ assert len(parts) == 3
+
+ num_vertices = int(parts[0])
+ assert num_vertices > 0
+
+ num_faces = int(parts[1])
+ assert num_faces > 0
+
+ start_index = 2
+
+ vertices = []
+ for i in range(num_vertices):
+ vertex = lines[start_index + i].split(' ')
+ vertex = [float(point.strip()) for point in vertex if point != '']
+ assert len(vertex) == 3
+
+ vertices.append(vertex)
+
+ faces = []
+ for i in range(num_faces):
+ face = lines[start_index + num_vertices + i].split(' ')
+ face = [index.strip() for index in face if index != '']
+
+ # check to be sure
+ for index in face:
+ assert index != '', \
+ 'found empty vertex index: %s (%s)' \
+ % (lines[start_index + num_vertices + i], file)
+
+ face = [int(index) for index in face]
+
+            assert face[0] == len(face) - 1, \
+                'face should have %d vertices but has %d (%s)' \
+                % (face[0], len(face) - 1, file)
+ assert face[0] == 3, \
+ 'only triangular meshes supported (%s)' % file
+ for index in face:
+ assert index >= 0 and index < num_vertices, \
+ 'vertex %d (of %d vertices) does not exist (%s)' \
+ % (index, num_vertices, file)
+
+ assert len(face) > 1
+
+ faces.append(face)
+
+ return vertices, faces
+
+ assert False, 'could not open %s' % file
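+
+# Example usage (a sketch; 'mesh.off' and 'mesh_points.ply' are placeholder paths):
+#
+#   vertices, faces = read_off('mesh.off')
+#   export_pointcloud(np.asarray(vertices, dtype=np.float32), 'mesh_points.ply')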
diff --git a/src/utils/libkdtree/.gitignore b/src/utils/libkdtree/.gitignore
new file mode 100644
index 0000000..378eac2
--- /dev/null
+++ b/src/utils/libkdtree/.gitignore
@@ -0,0 +1 @@
+build
diff --git a/src/utils/libkdtree/LICENSE.txt b/src/utils/libkdtree/LICENSE.txt
new file mode 100644
index 0000000..e3acbd5
--- /dev/null
+++ b/src/utils/libkdtree/LICENSE.txt
@@ -0,0 +1,165 @@
+ GNU LESSER GENERAL PUBLIC LICENSE
+ Version 3, 29 June 2007
+
+ Copyright (C) 2007, 2015 Free Software Foundation, Inc.
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+
+ This version of the GNU Lesser General Public License incorporates
+the terms and conditions of version 3 of the GNU General Public
+License, supplemented by the additional permissions listed below.
+
+ 0. Additional Definitions.
+
+ As used herein, "this License" refers to version 3 of the GNU Lesser
+General Public License, and the "GNU GPL" refers to version 3 of the GNU
+General Public License.
+
+ "The Library" refers to a covered work governed by this License,
+other than an Application or a Combined Work as defined below.
+
+ An "Application" is any work that makes use of an interface provided
+by the Library, but which is not otherwise based on the Library.
+Defining a subclass of a class defined by the Library is deemed a mode
+of using an interface provided by the Library.
+
+ A "Combined Work" is a work produced by combining or linking an
+Application with the Library. The particular version of the Library
+with which the Combined Work was made is also called the "Linked
+Version".
+
+ The "Minimal Corresponding Source" for a Combined Work means the
+Corresponding Source for the Combined Work, excluding any source code
+for portions of the Combined Work that, considered in isolation, are
+based on the Application, and not on the Linked Version.
+
+ The "Corresponding Application Code" for a Combined Work means the
+object code and/or source code for the Application, including any data
+and utility programs needed for reproducing the Combined Work from the
+Application, but excluding the System Libraries of the Combined Work.
+
+ 1. Exception to Section 3 of the GNU GPL.
+
+ You may convey a covered work under sections 3 and 4 of this License
+without being bound by section 3 of the GNU GPL.
+
+ 2. Conveying Modified Versions.
+
+ If you modify a copy of the Library, and, in your modifications, a
+facility refers to a function or data to be supplied by an Application
+that uses the facility (other than as an argument passed when the
+facility is invoked), then you may convey a copy of the modified
+version:
+
+ a) under this License, provided that you make a good faith effort to
+ ensure that, in the event an Application does not supply the
+ function or data, the facility still operates, and performs
+ whatever part of its purpose remains meaningful, or
+
+ b) under the GNU GPL, with none of the additional permissions of
+ this License applicable to that copy.
+
+ 3. Object Code Incorporating Material from Library Header Files.
+
+ The object code form of an Application may incorporate material from
+a header file that is part of the Library. You may convey such object
+code under terms of your choice, provided that, if the incorporated
+material is not limited to numerical parameters, data structure
+layouts and accessors, or small macros, inline functions and templates
+(ten or fewer lines in length), you do both of the following:
+
+ a) Give prominent notice with each copy of the object code that the
+ Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the object code with a copy of the GNU GPL and this license
+ document.
+
+ 4. Combined Works.
+
+ You may convey a Combined Work under terms of your choice that,
+taken together, effectively do not restrict modification of the
+portions of the Library contained in the Combined Work and reverse
+engineering for debugging such modifications, if you also do each of
+the following:
+
+ a) Give prominent notice with each copy of the Combined Work that
+ the Library is used in it and that the Library and its use are
+ covered by this License.
+
+ b) Accompany the Combined Work with a copy of the GNU GPL and this license
+ document.
+
+ c) For a Combined Work that displays copyright notices during
+ execution, include the copyright notice for the Library among
+ these notices, as well as a reference directing the user to the
+ copies of the GNU GPL and this license document.
+
+ d) Do one of the following:
+
+ 0) Convey the Minimal Corresponding Source under the terms of this
+ License, and the Corresponding Application Code in a form
+ suitable for, and under terms that permit, the user to
+ recombine or relink the Application with a modified version of
+ the Linked Version to produce a modified Combined Work, in the
+ manner specified by section 6 of the GNU GPL for conveying
+ Corresponding Source.
+
+ 1) Use a suitable shared library mechanism for linking with the
+ Library. A suitable mechanism is one that (a) uses at run time
+ a copy of the Library already present on the user's computer
+ system, and (b) will operate properly with a modified version
+ of the Library that is interface-compatible with the Linked
+ Version.
+
+ e) Provide Installation Information, but only if you would otherwise
+ be required to provide such information under section 6 of the
+ GNU GPL, and only to the extent that such information is
+ necessary to install and execute a modified version of the
+ Combined Work produced by recombining or relinking the
+ Application with a modified version of the Linked Version. (If
+ you use option 4d0, the Installation Information must accompany
+ the Minimal Corresponding Source and Corresponding Application
+ Code. If you use option 4d1, you must provide the Installation
+ Information in the manner specified by section 6 of the GNU GPL
+ for conveying Corresponding Source.)
+
+ 5. Combined Libraries.
+
+ You may place library facilities that are a work based on the
+Library side by side in a single library together with other library
+facilities that are not Applications and are not covered by this
+License, and convey such a combined library under terms of your
+choice, if you do both of the following:
+
+ a) Accompany the combined library with a copy of the same work based
+ on the Library, uncombined with any other library facilities,
+ conveyed under the terms of this License.
+
+ b) Give prominent notice with the combined library that part of it
+ is a work based on the Library, and explaining where to find the
+ accompanying uncombined form of the same work.
+
+ 6. Revised Versions of the GNU Lesser General Public License.
+
+ The Free Software Foundation may publish revised and/or new versions
+of the GNU Lesser General Public License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns.
+
+ Each version is given a distinguishing version number. If the
+Library as you received it specifies that a certain numbered version
+of the GNU Lesser General Public License "or any later version"
+applies to it, you have the option of following the terms and
+conditions either of that published version or of any later version
+published by the Free Software Foundation. If the Library as you
+received it does not specify a version number of the GNU Lesser
+General Public License, you may choose any version of the GNU Lesser
+General Public License ever published by the Free Software Foundation.
+
+ If the Library as you received it specifies that a proxy can decide
+whether future versions of the GNU Lesser General Public License shall
+apply, that proxy's public statement of acceptance of any version is
+permanent authorization for you to choose that version for the
+Library.
diff --git a/src/utils/libkdtree/MANIFEST.in b/src/utils/libkdtree/MANIFEST.in
new file mode 100644
index 0000000..0ff2a61
--- /dev/null
+++ b/src/utils/libkdtree/MANIFEST.in
@@ -0,0 +1,2 @@
+exclude pykdtree/render_template.py
+include LICENSE.txt
diff --git a/src/utils/libkdtree/README b/src/utils/libkdtree/README
new file mode 120000
index 0000000..92cacd2
--- /dev/null
+++ b/src/utils/libkdtree/README
@@ -0,0 +1 @@
+README.rst
\ No newline at end of file
diff --git a/src/utils/libkdtree/README.rst b/src/utils/libkdtree/README.rst
new file mode 100644
index 0000000..cb7001e
--- /dev/null
+++ b/src/utils/libkdtree/README.rst
@@ -0,0 +1,148 @@
+.. image:: https://travis-ci.org/storpipfugl/pykdtree.svg?branch=master
+ :target: https://travis-ci.org/storpipfugl/pykdtree
+.. image:: https://ci.appveyor.com/api/projects/status/ubo92368ktt2d25g/branch/master
+ :target: https://ci.appveyor.com/project/storpipfugl/pykdtree
+
+========
+pykdtree
+========
+
+Objective
+---------
+pykdtree is a kd-tree implementation for fast nearest neighbour search in Python.
+The aim is to be the fastest implementation around for common use cases (low dimensions and low number of neighbours) for both tree construction and queries.
+
+The implementation is based on scipy.spatial.cKDTree and libANN, combining the best features from both with a focus on implementation efficiency.
+
+The interface is similar to that of scipy.spatial.cKDTree except only Euclidean distance measure is supported.
+
+Queries are optionally multithreaded using OpenMP.
+
+Installation
+------------
+Default build of pykdtree with OpenMP enabled queries using libgomp
+
+.. code-block:: bash
+
+    $ cd <pykdtree_dir>
+    $ python setup.py install
+
+If it fails with undefined compiler flags or you want to use another OpenMP implementation please modify setup.py at the indicated point to match your system.
+
+Building without OpenMP support is controlled by the USE_OMP environment variable
+
+.. code-block:: bash
+
+    $ cd <pykdtree_dir>
+    $ export USE_OMP=0
+    $ python setup.py install
+
+Note: environment variables are by default not exported when using sudo, so in this case do
+
+.. code-block:: bash
+
+ $ USE_OMP=0 sudo -E python setup.py install
+
+Usage
+-----
+The usage of pykdtree is similar to scipy.spatial.cKDTree so for now refer to its documentation
+
+ >>> from pykdtree.kdtree import KDTree
+ >>> kd_tree = KDTree(data_pts)
+ >>> dist, idx = kd_tree.query(query_pts, k=8)
+
+The number of threads to be used in OpenMP enabled queries can be controlled with the standard OpenMP environment variable OMP_NUM_THREADS.
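+
+For example, to run queries with four threads (a sketch; the script name is a placeholder):
+
+.. code-block:: bash
+
+    $ OMP_NUM_THREADS=4 python my_query_script.py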
+
+The **leafsize** argument (number of data points per leaf) for the tree creation can be used to control the memory overhead of the kd-tree. pykdtree uses a default **leafsize=16**.
+Increasing **leafsize** will reduce the memory overhead and construction time but increase query time.
+
+pykdtree accepts data in double precision (numpy.float64) or single precision (numpy.float32) floating point. If data of another type is used an internal copy in double precision is made resulting in a memory overhead. If the kd-tree is constructed on single precision data the query points must be single precision as well.
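+
+For illustration, a minimal sketch combining **leafsize** and single precision input (array contents are arbitrary):
+
+.. code-block:: python
+
+    >>> import numpy as np
+    >>> from pykdtree.kdtree import KDTree
+    >>> data_pts = np.random.rand(10000, 3).astype(np.float32)
+    >>> query_pts = np.random.rand(100, 3).astype(np.float32)
+    >>> kd_tree = KDTree(data_pts, leafsize=32)
+    >>> dist, idx = kd_tree.query(query_pts, k=4)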
+
+Benchmarks
+----------
+Comparison with scipy.spatial.cKDTree and libANN. This benchmark is on geospatial 3D data with 10053632 data points and 4276224 query points. The results are indexed relative to the construction time of scipy.spatial.cKDTree. A leafsize of 10 (scipy.spatial.cKDTree default) is used.
+
+Note: libANN is *not* thread safe. In this benchmark libANN is compiled with "-O3 -funroll-loops -ffast-math -fprefetch-loop-arrays" in order to achieve optimum performance.
+
+================== ===================== ====== ======== ==================
+Operation          scipy.spatial.cKDTree libANN pykdtree pykdtree 4 threads
+================== ===================== ====== ======== ==================
+Construction       100                   304    96       96
+query 1 neighbour  1267                  294    223      70
+Total 1 neighbour  1367                  598    319      166
+query 8 neighbours 2193                  625    449      143
+Total 8 neighbours 2293                  929    545      293
+================== ===================== ====== ======== ==================
+
+Looking at the combined construction and query, this gives the following performance improvement relative to scipy.spatial.cKDTree:
+
+========== ====== ======== ==================
+Neighbours libANN pykdtree pykdtree 4 threads
+========== ====== ======== ==================
+1          129%   329%     723%
+8          147%   320%     682%
+========== ====== ======== ==================
+
+Note: mileage will vary with the dataset at hand and computer architecture.
+
+Test
+----
+Run the unit tests using nosetests
+
+.. code-block:: bash
+
+    $ cd <pykdtree_dir>
+    $ python setup.py nosetests
+
+Installing on AppVeyor
+----------------------
+
+Pykdtree requires the "stdint.h" header file which is not available on certain
+versions of Windows or certain Windows compilers including those on the
+continuous integration platform AppVeyor. To get around this the header file(s)
+can be downloaded and placed in the correct "include" directory. This can
+be done by adding the `anaconda/missing-headers.ps1` script to your repository
+and running it in the install step of `appveyor.yml`:
+
+ # install missing headers that aren't included with MSVC 2008
+ # https://github.com/omnia-md/conda-recipes/pull/524
+ - "powershell ./appveyor/missing-headers.ps1"
+
+In addition to this, AppVeyor does not support OpenMP so this feature must be
+turned off by adding the following to `appveyor.yml` in the
+`environment` section:
+
+ environment:
+ global:
+ # Don't build with openmp because it isn't supported in appveyor's compilers
+ USE_OMP: "0"
+
+Changelog
+---------
+v1.3.1 : Fix masking in the "query" method introduced in 1.3.0
+
+v1.3.0 : Keyword argument "mask" added to "query" method. OpenMP compilation now works for MS Visual Studio compiler
+
+v1.2.2 : Build process fixes
+
+v1.2.1 : Fixed OpenMP thread safety issue introduced in v1.2.0
+
+v1.2.0 : 64 and 32 bit MSVC Windows support added
+
+v1.1.1 : Same as v1.1 release due to incorrect pypi release
+
+v1.1 : Build process improvements. Add data attribute to kdtree class for scipy interface compatibility
+
+v1.0 : Switched license from GPLv3 to LGPLv3
+
+v0.3 : Avoid zipping of installed egg
+
+v0.2 : Reduced memory footprint. Can now handle single precision data internally avoiding copy conversion to double precision. Default leafsize changed from 10 to 16 as this reduces the memory footprint and makes it a cache line multiple (negligible, if any, query performance impact observed in benchmarks). Reduced memory allocation for leaf nodes. Applied patch for building on OS X.
+
+v0.1 : Initial version.
diff --git a/src/utils/libkdtree/__init__.py b/src/utils/libkdtree/__init__.py
new file mode 100644
index 0000000..cbd34df
--- /dev/null
+++ b/src/utils/libkdtree/__init__.py
@@ -0,0 +1,6 @@
+from .pykdtree.kdtree import KDTree
+
+
+__all__ = [
+    'KDTree',
+]
diff --git a/src/utils/libkdtree/pykdtree/__init__.py b/src/utils/libkdtree/pykdtree/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/src/utils/libkdtree/pykdtree/_kdtree_core.c b/src/utils/libkdtree/pykdtree/_kdtree_core.c
new file mode 100644
index 0000000..aebb816
--- /dev/null
+++ b/src/utils/libkdtree/pykdtree/_kdtree_core.c
@@ -0,0 +1,1417 @@
+/*
+pykdtree, Fast kd-tree implementation with OpenMP-enabled queries
+
+Copyright (C) 2013 - present Esben S. Nielsen
+
+This program is free software: you can redistribute it and/or modify it under
+the terms of the GNU Lesser General Public License as published by the Free
+Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
+details.
+
+You should have received a copy of the GNU Lesser General Public License along
+with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+/*
+This kd-tree implementation is based on the scipy.spatial.cKDTree by
+Anne M. Archibald and libANN by David M. Mount and Sunil Arya.
+*/
+
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <float.h>
+
+#define PA(i,d) (pa[no_dims * pidx[i] + d])
+#define PASWAP(a,b) { uint32_t tmp = pidx[a]; pidx[a] = pidx[b]; pidx[b] = tmp; }
+
+#ifdef _MSC_VER
+#define restrict __restrict
+#endif
+
+
+typedef struct
+{
+ float cut_val;
+ int8_t cut_dim;
+ uint32_t start_idx;
+ uint32_t n;
+ float cut_bounds_lv;
+ float cut_bounds_hv;
+ struct Node_float *left_child;
+ struct Node_float *right_child;
+} Node_float;
+
+typedef struct
+{
+ float *bbox;
+ int8_t no_dims;
+ uint32_t *pidx;
+ struct Node_float *root;
+} Tree_float;
+
+
+typedef struct
+{
+ double cut_val;
+ int8_t cut_dim;
+ uint32_t start_idx;
+ uint32_t n;
+ double cut_bounds_lv;
+ double cut_bounds_hv;
+ struct Node_double *left_child;
+ struct Node_double *right_child;
+} Node_double;
+
+typedef struct
+{
+ double *bbox;
+ int8_t no_dims;
+ uint32_t *pidx;
+ struct Node_double *root;
+} Tree_double;
+
+
+
+void insert_point_float(uint32_t *closest_idx, float *closest_dist, uint32_t pidx, float cur_dist, uint32_t k);
+void get_bounding_box_float(float *pa, uint32_t *pidx, int8_t no_dims, uint32_t n, float *bbox);
+int partition_float(float *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, float *bbox, int8_t *cut_dim,
+ float *cut_val, uint32_t *n_lo);
+Tree_float* construct_tree_float(float *pa, int8_t no_dims, uint32_t n, uint32_t bsp);
+Node_float* construct_subtree_float(float *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, uint32_t bsp, float *bbox);
+Node_float * create_node_float(uint32_t start_idx, uint32_t n, int is_leaf);
+void delete_subtree_float(Node_float *root);
+void delete_tree_float(Tree_float *tree);
+void print_tree_float(Node_float *root, int level);
+float calc_dist_float(float *point1_coord, float *point2_coord, int8_t no_dims);
+float get_cube_offset_float(int8_t dim, float *point_coord, float *bbox);
+float get_min_dist_float(float *point_coord, int8_t no_dims, float *bbox);
+void search_leaf_float(float *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, float *restrict point_coord,
+ uint32_t k, uint32_t *restrict closest_idx, float *restrict closest_dist);
+void search_leaf_float_mask(float *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, float *restrict point_coord,
+ uint32_t k, uint8_t *restrict mask, uint32_t *restrict closest_idx, float *restrict closest_dist);
+void search_splitnode_float(Node_float *root, float *pa, uint32_t *pidx, int8_t no_dims, float *point_coord,
+ float min_dist, uint32_t k, float distance_upper_bound, float eps_fac, uint8_t *mask, uint32_t * closest_idx, float *closest_dist);
+void search_tree_float(Tree_float *tree, float *pa, float *point_coords,
+ uint32_t num_points, uint32_t k, float distance_upper_bound,
+ float eps, uint8_t *mask, uint32_t *closest_idxs, float *closest_dists);
+
+
+void insert_point_double(uint32_t *closest_idx, double *closest_dist, uint32_t pidx, double cur_dist, uint32_t k);
+void get_bounding_box_double(double *pa, uint32_t *pidx, int8_t no_dims, uint32_t n, double *bbox);
+int partition_double(double *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, double *bbox, int8_t *cut_dim,
+ double *cut_val, uint32_t *n_lo);
+Tree_double* construct_tree_double(double *pa, int8_t no_dims, uint32_t n, uint32_t bsp);
+Node_double* construct_subtree_double(double *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, uint32_t bsp, double *bbox);
+Node_double * create_node_double(uint32_t start_idx, uint32_t n, int is_leaf);
+void delete_subtree_double(Node_double *root);
+void delete_tree_double(Tree_double *tree);
+void print_tree_double(Node_double *root, int level);
+double calc_dist_double(double *point1_coord, double *point2_coord, int8_t no_dims);
+double get_cube_offset_double(int8_t dim, double *point_coord, double *bbox);
+double get_min_dist_double(double *point_coord, int8_t no_dims, double *bbox);
+void search_leaf_double(double *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, double *restrict point_coord,
+ uint32_t k, uint32_t *restrict closest_idx, double *restrict closest_dist);
+void search_leaf_double_mask(double *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, double *restrict point_coord,
+ uint32_t k, uint8_t *restrict mask, uint32_t *restrict closest_idx, double *restrict closest_dist);
+void search_splitnode_double(Node_double *root, double *pa, uint32_t *pidx, int8_t no_dims, double *point_coord,
+ double min_dist, uint32_t k, double distance_upper_bound, double eps_fac, uint8_t *mask, uint32_t * closest_idx, double *closest_dist);
+void search_tree_double(Tree_double *tree, double *pa, double *point_coords,
+ uint32_t num_points, uint32_t k, double distance_upper_bound,
+ double eps, uint8_t *mask, uint32_t *closest_idxs, double *closest_dists);
+
+
+
+/************************************************
+Insert point into priority queue
+Params:
+ closest_idx : index queue
+ closest_dist : distance queue
+ pidx : permutation index of data points
+ cur_dist : distance to point inserted
+ k : number of neighbours
+************************************************/
+void insert_point_float(uint32_t *closest_idx, float *closest_dist, uint32_t pidx, float cur_dist, uint32_t k)
+{
+ int i;
+ for (i = k - 1; i > 0; i--)
+ {
+ if (closest_dist[i - 1] > cur_dist)
+ {
+ closest_dist[i] = closest_dist[i - 1];
+ closest_idx[i] = closest_idx[i - 1];
+ }
+ else
+ {
+ break;
+ }
+ }
+ closest_idx[i] = pidx;
+ closest_dist[i] = cur_dist;
+}
+
+/************************************************
+Get the bounding box of a set of points
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ n : number of points
+ bbox : bounding box (return)
+************************************************/
+void get_bounding_box_float(float *pa, uint32_t *pidx, int8_t no_dims, uint32_t n, float *bbox)
+{
+ float cur;
+ int8_t bbox_idx, i, j;
+ uint32_t i2;
+
+ /* Use first data point to initialize */
+ for (i = 0; i < no_dims; i++)
+ {
+ bbox[2 * i] = bbox[2 * i + 1] = PA(0, i);
+ }
+
+ /* Update using rest of data points */
+ for (i2 = 1; i2 < n; i2++)
+ {
+ for (j = 0; j < no_dims; j++)
+ {
+ bbox_idx = 2 * j;
+ cur = PA(i2, j);
+ if (cur < bbox[bbox_idx])
+ {
+ bbox[bbox_idx] = cur;
+ }
+ else if (cur > bbox[bbox_idx + 1])
+ {
+ bbox[bbox_idx + 1] = cur;
+ }
+ }
+ }
+}
+
+/************************************************
+Partition a range of data points by manipulating the permutation index.
+The sliding midpoint rule is used for the partitioning.
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ bbox : bounding box of data points
+ cut_dim : dimension used for partition (return)
+ cut_val : value of cutting point (return)
+    n_lo : number of points below cutting plane (return)
+************************************************/
+int partition_float(float *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, float *bbox, int8_t *cut_dim, float *cut_val, uint32_t *n_lo)
+{
+ int8_t dim = 0, i;
+ uint32_t p, q, i2;
+ float size = 0, min_val, max_val, split, side_len, cur_val;
+ uint32_t end_idx = start_idx + n - 1;
+
+ /* Find largest bounding box side */
+ for (i = 0; i < no_dims; i++)
+ {
+ side_len = bbox[2 * i + 1] - bbox[2 * i];
+ if (side_len > size)
+ {
+ dim = i;
+ size = side_len;
+ }
+ }
+
+ min_val = bbox[2 * dim];
+ max_val = bbox[2 * dim + 1];
+
+ /* Check for zero length or inconsistent */
+ if (min_val >= max_val)
+ return 1;
+
+ /* Use middle for splitting */
+ split = (min_val + max_val) / 2;
+
+ /* Partition all data points around middle */
+ p = start_idx;
+ q = end_idx;
+ while (p <= q)
+ {
+ if (PA(p, dim) < split)
+ {
+ p++;
+ }
+ else if (PA(q, dim) >= split)
+ {
+ /* Guard for underflow */
+ if (q > 0)
+ {
+ q--;
+ }
+ else
+ {
+ break;
+ }
+ }
+ else
+ {
+ PASWAP(p, q);
+ p++;
+ q--;
+ }
+ }
+
+ /* Check for empty splits */
+ if (p == start_idx)
+ {
+ /* No points less than split.
+ Split at lowest point instead.
+ Minimum 1 point will be in lower box.
+ */
+
+ uint32_t j = start_idx;
+ split = PA(j, dim);
+ for (i2 = start_idx + 1; i2 <= end_idx; i2++)
+ {
+ /* Find lowest point */
+ cur_val = PA(i2, dim);
+ if (cur_val < split)
+ {
+ j = i2;
+ split = cur_val;
+ }
+ }
+ PASWAP(j, start_idx);
+ p = start_idx + 1;
+ }
+ else if (p == end_idx + 1)
+ {
+ /* No points greater than split.
+ Split at highest point instead.
+ Minimum 1 point will be in higher box.
+ */
+
+ uint32_t j = end_idx;
+ split = PA(j, dim);
+ for (i2 = start_idx; i2 < end_idx; i2++)
+ {
+ /* Find highest point */
+ cur_val = PA(i2, dim);
+ if (cur_val > split)
+ {
+ j = i2;
+ split = cur_val;
+ }
+ }
+ PASWAP(j, end_idx);
+ p = end_idx;
+ }
+
+ /* Set return values */
+ *cut_dim = dim;
+ *cut_val = split;
+ *n_lo = p - start_idx;
+ return 0;
+}
+
+/************************************************
+Construct a sub tree over a range of data points.
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ bsp : number of points per leaf
+ bbox : bounding box of set of data points
+************************************************/
+Node_float* construct_subtree_float(float *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, uint32_t bsp, float *bbox)
+{
+ /* Create new node */
+ int is_leaf = (n <= bsp);
+ Node_float *root = create_node_float(start_idx, n, is_leaf);
+ int rval;
+ int8_t cut_dim;
+ uint32_t n_lo;
+ float cut_val, lv, hv;
+ if (is_leaf)
+ {
+ /* Make leaf node */
+ root->cut_dim = -1;
+ }
+ else
+ {
+ /* Make split node */
+ /* Partition data set and set node info */
+ rval = partition_float(pa, pidx, no_dims, start_idx, n, bbox, &cut_dim, &cut_val, &n_lo);
+ if (rval == 1)
+ {
+ root->cut_dim = -1;
+ return root;
+ }
+ root->cut_val = cut_val;
+ root->cut_dim = cut_dim;
+
+ /* Recurse on both subsets */
+ lv = bbox[2 * cut_dim];
+ hv = bbox[2 * cut_dim + 1];
+
+ /* Set bounds for cut dimension */
+ root->cut_bounds_lv = lv;
+ root->cut_bounds_hv = hv;
+
+ /* Update bounding box before call to lower subset and restore after */
+ bbox[2 * cut_dim + 1] = cut_val;
+ root->left_child = (struct Node_float *)construct_subtree_float(pa, pidx, no_dims, start_idx, n_lo, bsp, bbox);
+ bbox[2 * cut_dim + 1] = hv;
+
+ /* Update bounding box before call to higher subset and restore after */
+ bbox[2 * cut_dim] = cut_val;
+ root->right_child = (struct Node_float *)construct_subtree_float(pa, pidx, no_dims, start_idx + n_lo, n - n_lo, bsp, bbox);
+ bbox[2 * cut_dim] = lv;
+ }
+ return root;
+}
+
+/************************************************
+Construct a tree over data points.
+Params:
+ pa : data points
+ no_dims: number of dimensions
+ n : number of data points
+ bsp : number of points per leaf
+************************************************/
+Tree_float* construct_tree_float(float *pa, int8_t no_dims, uint32_t n, uint32_t bsp)
+{
+ Tree_float *tree = (Tree_float *)malloc(sizeof(Tree_float));
+ uint32_t i;
+ uint32_t *pidx;
+ float *bbox;
+
+ tree->no_dims = no_dims;
+
+ /* Initialize permutation array */
+ pidx = (uint32_t *)malloc(sizeof(uint32_t) * n);
+ for (i = 0; i < n; i++)
+ {
+ pidx[i] = i;
+ }
+
+ bbox = (float *)malloc(2 * sizeof(float) * no_dims);
+ get_bounding_box_float(pa, pidx, no_dims, n, bbox);
+ tree->bbox = bbox;
+
+ /* Construct subtree on full dataset */
+ tree->root = (struct Node_float *)construct_subtree_float(pa, pidx, no_dims, 0, n, bsp, bbox);
+
+ tree->pidx = pidx;
+ return tree;
+}
+
+/************************************************
+Create a tree node.
+Params:
+ start_idx : index of first data point to use
+ n : number of data points
+************************************************/
+Node_float* create_node_float(uint32_t start_idx, uint32_t n, int is_leaf)
+{
+ Node_float *new_node;
+ if (is_leaf)
+ {
+ /*
+ Allocate only the part of the struct that will be used in a leaf node.
+ This relies on the C99 specification of struct layout conservation and padding and
+ that dereferencing is never attempted for the node pointers in a leaf.
+ */
+ new_node = (Node_float *)malloc(sizeof(Node_float) - 2 * sizeof(Node_float *));
+ }
+ else
+ {
+ new_node = (Node_float *)malloc(sizeof(Node_float));
+ }
+ new_node->n = n;
+ new_node->start_idx = start_idx;
+ return new_node;
+}
+
+/************************************************
+Delete subtree
+Params:
+ root : root node of subtree to delete
+************************************************/
+void delete_subtree_float(Node_float *root)
+{
+ if (root->cut_dim != -1)
+ {
+ delete_subtree_float((Node_float *)root->left_child);
+ delete_subtree_float((Node_float *)root->right_child);
+ }
+ free(root);
+}
+
+/************************************************
+Delete tree
+Params:
+ tree : Tree struct of kd tree
+************************************************/
+void delete_tree_float(Tree_float *tree)
+{
+ delete_subtree_float((Node_float *)tree->root);
+ free(tree->bbox);
+ free(tree->pidx);
+ free(tree);
+}
+
+/************************************************
+Print
+************************************************/
+void print_tree_float(Node_float *root, int level)
+{
+ int i;
+ for (i = 0; i < level; i++)
+ {
+ printf(" ");
+ }
+ printf("(cut_val: %f, cut_dim: %i)\n", root->cut_val, root->cut_dim);
+ if (root->cut_dim != -1)
+ print_tree_float((Node_float *)root->left_child, level + 1);
+ if (root->cut_dim != -1)
+ print_tree_float((Node_float *)root->right_child, level + 1);
+}
+
+/************************************************
+Calculate squared cartesian distance between points
+Params:
+ point1_coord : point 1
+ point2_coord : point 2
+************************************************/
+float calc_dist_float(float *point1_coord, float *point2_coord, int8_t no_dims)
+{
+ /* Calculate squared distance */
+ float dist = 0, dim_dist;
+ int8_t i;
+ for (i = 0; i < no_dims; i++)
+ {
+ dim_dist = point2_coord[i] - point1_coord[i];
+ dist += dim_dist * dim_dist;
+ }
+ return dist;
+}
+
+/************************************************
+Get squared distance from point to cube in specified dimension
+Params:
+ dim : dimension
+ point_coord : cartesian coordinates of point
+ bbox : cube
+************************************************/
+float get_cube_offset_float(int8_t dim, float *point_coord, float *bbox)
+{
+ float dim_coord = point_coord[dim];
+
+ if (dim_coord < bbox[2 * dim])
+ {
+ /* Left of cube in dimension */
+ return dim_coord - bbox[2 * dim];
+ }
+ else if (dim_coord > bbox[2 * dim + 1])
+ {
+ /* Right of cube in dimension */
+ return dim_coord - bbox[2 * dim + 1];
+ }
+ else
+ {
+ /* Inside cube in dimension */
+ return 0.;
+ }
+}
+
+/************************************************
+Get minimum squared distance between point and cube.
+Params:
+ point_coord : cartesian coordinates of point
+ no_dims : number of dimensions
+ bbox : cube
+************************************************/
+float get_min_dist_float(float *point_coord, int8_t no_dims, float *bbox)
+{
+ float cube_offset = 0, cube_offset_dim;
+ int8_t i;
+
+ for (i = 0; i < no_dims; i++)
+ {
+ cube_offset_dim = get_cube_offset_float(i, point_coord, bbox);
+ cube_offset += cube_offset_dim * cube_offset_dim;
+ }
+
+ return cube_offset;
+}
+
+/************************************************
+Search a leaf node for closest point
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ start_idx : index of first data point to use
+ size : number of data points
+ point_coord : query point
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_leaf_float(float *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, float *restrict point_coord,
+ uint32_t k, uint32_t *restrict closest_idx, float *restrict closest_dist)
+{
+ float cur_dist;
+ uint32_t i;
+ /* Loop through all points in leaf */
+ for (i = 0; i < n; i++)
+ {
+ /* Get distance to query point */
+ cur_dist = calc_dist_float(&PA(start_idx + i, 0), point_coord, no_dims);
+ /* Update closest info if new point is closest so far*/
+ if (cur_dist < closest_dist[k - 1])
+ {
+ insert_point_float(closest_idx, closest_dist, pidx[start_idx + i], cur_dist, k);
+ }
+ }
+}
+
+
+/************************************************
+Search a leaf node for closest point with data point mask
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ start_idx : index of first data point to use
+ size : number of data points
+ point_coord : query point
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_leaf_float_mask(float *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, float *restrict point_coord,
+ uint32_t k, uint8_t *mask, uint32_t *restrict closest_idx, float *restrict closest_dist)
+{
+ float cur_dist;
+ uint32_t i;
+ /* Loop through all points in leaf */
+ for (i = 0; i < n; i++)
+ {
+ /* Is this point masked out? */
+ if (mask[pidx[start_idx + i]])
+ {
+ continue;
+ }
+ /* Get distance to query point */
+ cur_dist = calc_dist_float(&PA(start_idx + i, 0), point_coord, no_dims);
+ /* Update closest info if new point is closest so far*/
+ if (cur_dist < closest_dist[k - 1])
+ {
+ insert_point_float(closest_idx, closest_dist, pidx[start_idx + i], cur_dist, k);
+ }
+ }
+}
+
+/************************************************
+Search subtree for nearest to query point
+Params:
+ root : root node of subtree
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ point_coord : query point
+    min_dist : minimum distance to nearest neighbour
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_splitnode_float(Node_float *root, float *pa, uint32_t *pidx, int8_t no_dims, float *point_coord,
+ float min_dist, uint32_t k, float distance_upper_bound, float eps_fac, uint8_t *mask,
+ uint32_t *closest_idx, float *closest_dist)
+{
+ int8_t dim;
+ float dist_left, dist_right;
+ float new_offset;
+ float box_diff;
+
+    /* Skip if distance bound exceeded */
+ if (min_dist > distance_upper_bound)
+ {
+ return;
+ }
+
+ dim = root->cut_dim;
+
+ /* Handle leaf node */
+ if (dim == -1)
+ {
+ if (mask)
+ {
+ search_leaf_float_mask(pa, pidx, no_dims, root->start_idx, root->n, point_coord, k, mask, closest_idx, closest_dist);
+ }
+ else
+ {
+ search_leaf_float(pa, pidx, no_dims, root->start_idx, root->n, point_coord, k, closest_idx, closest_dist);
+ }
+ return;
+ }
+
+ /* Get distance to cutting plane */
+ new_offset = point_coord[dim] - root->cut_val;
+
+ if (new_offset < 0)
+ {
+ /* Left of cutting plane */
+ dist_left = min_dist;
+ if (dist_left < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search left subtree if minimum distance is below limit */
+ search_splitnode_float((Node_float *)root->left_child, pa, pidx, no_dims, point_coord, dist_left, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+
+ /* Right of cutting plane. Update minimum distance.
+ See Algorithms for Fast Vector Quantization
+ Sunil Arya and David M. Mount. */
+ box_diff = root->cut_bounds_lv - point_coord[dim];
+ if (box_diff < 0)
+ {
+ box_diff = 0;
+ }
+ dist_right = min_dist - box_diff * box_diff + new_offset * new_offset;
+ if (dist_right < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search right subtree if minimum distance is below limit*/
+ search_splitnode_float((Node_float *)root->right_child, pa, pidx, no_dims, point_coord, dist_right, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+ }
+ else
+ {
+ /* Right of cutting plane */
+ dist_right = min_dist;
+ if (dist_right < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search right subtree if minimum distance is below limit*/
+ search_splitnode_float((Node_float *)root->right_child, pa, pidx, no_dims, point_coord, dist_right, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+
+ /* Left of cutting plane. Update minimum distance.
+ See Algorithms for Fast Vector Quantization
+ Sunil Arya and David M. Mount. */
+ box_diff = point_coord[dim] - root->cut_bounds_hv;
+ if (box_diff < 0)
+ {
+ box_diff = 0;
+ }
+ dist_left = min_dist - box_diff * box_diff + new_offset * new_offset;
+ if (dist_left < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search left subtree if minimum distance is below limit*/
+ search_splitnode_float((Node_float *)root->left_child, pa, pidx, no_dims, point_coord, dist_left, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+ }
+}
+
+/************************************************
+Search for nearest neighbour for a set of query points
+Params:
+ tree : Tree struct of kd tree
+ pa : data points
+ pidx : permutation index of data points
+ point_coords : query points
+ num_points : number of query points
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_tree_float(Tree_float *tree, float *pa, float *point_coords,
+ uint32_t num_points, uint32_t k, float distance_upper_bound,
+ float eps, uint8_t *mask, uint32_t *closest_idxs, float *closest_dists)
+{
+ float min_dist;
+ float eps_fac = 1 / ((1 + eps) * (1 + eps));
+ int8_t no_dims = tree->no_dims;
+ float *bbox = tree->bbox;
+ uint32_t *pidx = tree->pidx;
+ uint32_t j = 0;
+#if defined(_MSC_VER) && defined(_OPENMP)
+ int32_t i = 0;
+ int32_t local_num_points = (int32_t) num_points;
+#else
+ uint32_t i;
+ uint32_t local_num_points = num_points;
+#endif
+ Node_float *root = (Node_float *)tree->root;
+
+ /* Queries are OpenMP enabled */
+ #pragma omp parallel
+ {
+        /* The low chunk size is important to avoid L2 cache thrashing
+           for spatially coherent query datasets
+ */
+ #pragma omp for private(i, j) schedule(static, 100) nowait
+ for (i = 0; i < local_num_points; i++)
+ {
+ for (j = 0; j < k; j++)
+ {
+ closest_idxs[i * k + j] = UINT32_MAX;
+ closest_dists[i * k + j] = DBL_MAX;
+ }
+ min_dist = get_min_dist_float(point_coords + no_dims * i, no_dims, bbox);
+ search_splitnode_float(root, pa, pidx, no_dims, point_coords + no_dims * i, min_dist,
+ k, distance_upper_bound, eps_fac, mask, &closest_idxs[i * k], &closest_dists[i * k]);
+ }
+ }
+}
+
+/************************************************
+Insert point into priority queue
+Params:
+ closest_idx : index queue
+ closest_dist : distance queue
+ pidx : permutation index of data points
+ cur_dist : distance to point inserted
+ k : number of neighbours
+************************************************/
+void insert_point_double(uint32_t *closest_idx, double *closest_dist, uint32_t pidx, double cur_dist, uint32_t k)
+{
+ int i;
+ for (i = k - 1; i > 0; i--)
+ {
+ if (closest_dist[i - 1] > cur_dist)
+ {
+ closest_dist[i] = closest_dist[i - 1];
+ closest_idx[i] = closest_idx[i - 1];
+ }
+ else
+ {
+ break;
+ }
+ }
+ closest_idx[i] = pidx;
+ closest_dist[i] = cur_dist;
+}
+
+/************************************************
+Get the bounding box of a set of points
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ n : number of points
+ bbox : bounding box (return)
+************************************************/
+void get_bounding_box_double(double *pa, uint32_t *pidx, int8_t no_dims, uint32_t n, double *bbox)
+{
+ double cur;
+ int8_t bbox_idx, i, j;
+ uint32_t i2;
+
+ /* Use first data point to initialize */
+ for (i = 0; i < no_dims; i++)
+ {
+ bbox[2 * i] = bbox[2 * i + 1] = PA(0, i);
+ }
+
+ /* Update using rest of data points */
+ for (i2 = 1; i2 < n; i2++)
+ {
+ for (j = 0; j < no_dims; j++)
+ {
+ bbox_idx = 2 * j;
+ cur = PA(i2, j);
+ if (cur < bbox[bbox_idx])
+ {
+ bbox[bbox_idx] = cur;
+ }
+ else if (cur > bbox[bbox_idx + 1])
+ {
+ bbox[bbox_idx + 1] = cur;
+ }
+ }
+ }
+}
+
+/************************************************
+Partition a range of data points by manipulating the permutation index.
+The sliding midpoint rule is used for the partitioning.
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ bbox : bounding box of data points
+ cut_dim : dimension used for partition (return)
+ cut_val : value of cutting point (return)
+    n_lo : number of points below cutting plane (return)
+************************************************/
+int partition_double(double *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, double *bbox, int8_t *cut_dim, double *cut_val, uint32_t *n_lo)
+{
+ int8_t dim = 0, i;
+ uint32_t p, q, i2;
+ double size = 0, min_val, max_val, split, side_len, cur_val;
+ uint32_t end_idx = start_idx + n - 1;
+
+ /* Find largest bounding box side */
+ for (i = 0; i < no_dims; i++)
+ {
+ side_len = bbox[2 * i + 1] - bbox[2 * i];
+ if (side_len > size)
+ {
+ dim = i;
+ size = side_len;
+ }
+ }
+
+ min_val = bbox[2 * dim];
+ max_val = bbox[2 * dim + 1];
+
+ /* Check for zero length or inconsistent */
+ if (min_val >= max_val)
+ return 1;
+
+ /* Use middle for splitting */
+ split = (min_val + max_val) / 2;
+
+ /* Partition all data points around middle */
+ p = start_idx;
+ q = end_idx;
+ while (p <= q)
+ {
+ if (PA(p, dim) < split)
+ {
+ p++;
+ }
+ else if (PA(q, dim) >= split)
+ {
+ /* Guard for underflow */
+ if (q > 0)
+ {
+ q--;
+ }
+ else
+ {
+ break;
+ }
+ }
+ else
+ {
+ PASWAP(p, q);
+ p++;
+ q--;
+ }
+ }
+
+ /* Check for empty splits */
+ if (p == start_idx)
+ {
+ /* No points less than split.
+ Split at lowest point instead.
+ Minimum 1 point will be in lower box.
+ */
+
+ uint32_t j = start_idx;
+ split = PA(j, dim);
+ for (i2 = start_idx + 1; i2 <= end_idx; i2++)
+ {
+ /* Find lowest point */
+ cur_val = PA(i2, dim);
+ if (cur_val < split)
+ {
+ j = i2;
+ split = cur_val;
+ }
+ }
+ PASWAP(j, start_idx);
+ p = start_idx + 1;
+ }
+ else if (p == end_idx + 1)
+ {
+ /* No points greater than split.
+ Split at highest point instead.
+ Minimum 1 point will be in higher box.
+ */
+
+ uint32_t j = end_idx;
+ split = PA(j, dim);
+ for (i2 = start_idx; i2 < end_idx; i2++)
+ {
+ /* Find highest point */
+ cur_val = PA(i2, dim);
+ if (cur_val > split)
+ {
+ j = i2;
+ split = cur_val;
+ }
+ }
+ PASWAP(j, end_idx);
+ p = end_idx;
+ }
+
+ /* Set return values */
+ *cut_dim = dim;
+ *cut_val = split;
+ *n_lo = p - start_idx;
+ return 0;
+}
+
+/************************************************
+Construct a sub tree over a range of data points.
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ bsp : number of points per leaf
+ bbox : bounding box of set of data points
+************************************************/
+Node_double* construct_subtree_double(double *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, uint32_t bsp, double *bbox)
+{
+ /* Create new node */
+ int is_leaf = (n <= bsp);
+ Node_double *root = create_node_double(start_idx, n, is_leaf);
+ int rval;
+ int8_t cut_dim;
+ uint32_t n_lo;
+ double cut_val, lv, hv;
+ if (is_leaf)
+ {
+ /* Make leaf node */
+ root->cut_dim = -1;
+ }
+ else
+ {
+ /* Make split node */
+ /* Partition data set and set node info */
+ rval = partition_double(pa, pidx, no_dims, start_idx, n, bbox, &cut_dim, &cut_val, &n_lo);
+ if (rval == 1)
+ {
+ root->cut_dim = -1;
+ return root;
+ }
+ root->cut_val = cut_val;
+ root->cut_dim = cut_dim;
+
+ /* Recurse on both subsets */
+ lv = bbox[2 * cut_dim];
+ hv = bbox[2 * cut_dim + 1];
+
+ /* Set bounds for cut dimension */
+ root->cut_bounds_lv = lv;
+ root->cut_bounds_hv = hv;
+
+ /* Update bounding box before call to lower subset and restore after */
+ bbox[2 * cut_dim + 1] = cut_val;
+ root->left_child = (struct Node_double *)construct_subtree_double(pa, pidx, no_dims, start_idx, n_lo, bsp, bbox);
+ bbox[2 * cut_dim + 1] = hv;
+
+ /* Update bounding box before call to higher subset and restore after */
+ bbox[2 * cut_dim] = cut_val;
+ root->right_child = (struct Node_double *)construct_subtree_double(pa, pidx, no_dims, start_idx + n_lo, n - n_lo, bsp, bbox);
+ bbox[2 * cut_dim] = lv;
+ }
+ return root;
+}
+
+/************************************************
+Construct a tree over data points.
+Params:
+ pa : data points
+ no_dims: number of dimensions
+ n : number of data points
+ bsp : number of points per leaf
+************************************************/
+Tree_double* construct_tree_double(double *pa, int8_t no_dims, uint32_t n, uint32_t bsp)
+{
+ Tree_double *tree = (Tree_double *)malloc(sizeof(Tree_double));
+ uint32_t i;
+ uint32_t *pidx;
+ double *bbox;
+
+ tree->no_dims = no_dims;
+
+ /* Initialize permutation array */
+ pidx = (uint32_t *)malloc(sizeof(uint32_t) * n);
+ for (i = 0; i < n; i++)
+ {
+ pidx[i] = i;
+ }
+
+ bbox = (double *)malloc(2 * sizeof(double) * no_dims);
+ get_bounding_box_double(pa, pidx, no_dims, n, bbox);
+ tree->bbox = bbox;
+
+ /* Construct subtree on full dataset */
+ tree->root = (struct Node_double *)construct_subtree_double(pa, pidx, no_dims, 0, n, bsp, bbox);
+
+ tree->pidx = pidx;
+ return tree;
+}
+
+/************************************************
+Create a tree node.
+Params:
+ start_idx : index of first data point to use
+ n : number of data points
+************************************************/
+Node_double* create_node_double(uint32_t start_idx, uint32_t n, int is_leaf)
+{
+ Node_double *new_node;
+ if (is_leaf)
+ {
+ /*
+ Allocate only the part of the struct that will be used in a leaf node.
+ This relies on the C99 specification of struct layout conservation and padding and
+ that dereferencing is never attempted for the node pointers in a leaf.
+ */
+ new_node = (Node_double *)malloc(sizeof(Node_double) - 2 * sizeof(Node_double *));
+ }
+ else
+ {
+ new_node = (Node_double *)malloc(sizeof(Node_double));
+ }
+ new_node->n = n;
+ new_node->start_idx = start_idx;
+ return new_node;
+}
+
+/************************************************
+Delete subtree
+Params:
+ root : root node of subtree to delete
+************************************************/
+void delete_subtree_double(Node_double *root)
+{
+ if (root->cut_dim != -1)
+ {
+ delete_subtree_double((Node_double *)root->left_child);
+ delete_subtree_double((Node_double *)root->right_child);
+ }
+ free(root);
+}
+
+/************************************************
+Delete tree
+Params:
+ tree : Tree struct of kd tree
+************************************************/
+void delete_tree_double(Tree_double *tree)
+{
+ delete_subtree_double((Node_double *)tree->root);
+ free(tree->bbox);
+ free(tree->pidx);
+ free(tree);
+}
+
+/************************************************
+Print
+************************************************/
+void print_tree_double(Node_double *root, int level)
+{
+ int i;
+ for (i = 0; i < level; i++)
+ {
+ printf(" ");
+ }
+ printf("(cut_val: %f, cut_dim: %i)\n", root->cut_val, root->cut_dim);
+ if (root->cut_dim != -1)
+ print_tree_double((Node_double *)root->left_child, level + 1);
+ if (root->cut_dim != -1)
+ print_tree_double((Node_double *)root->right_child, level + 1);
+}
+
+/************************************************
+Calculate squared cartesian distance between points
+Params:
+ point1_coord : point 1
+ point2_coord : point 2
+************************************************/
+double calc_dist_double(double *point1_coord, double *point2_coord, int8_t no_dims)
+{
+ /* Calculate squared distance */
+ double dist = 0, dim_dist;
+ int8_t i;
+ for (i = 0; i < no_dims; i++)
+ {
+ dim_dist = point2_coord[i] - point1_coord[i];
+ dist += dim_dist * dim_dist;
+ }
+ return dist;
+}
+
+/************************************************
+Get squared distance from point to cube in specified dimension
+Params:
+ dim : dimension
+ point_coord : cartesian coordinates of point
+ bbox : cube
+************************************************/
+double get_cube_offset_double(int8_t dim, double *point_coord, double *bbox)
+{
+ double dim_coord = point_coord[dim];
+
+ if (dim_coord < bbox[2 * dim])
+ {
+ /* Left of cube in dimension */
+ return dim_coord - bbox[2 * dim];
+ }
+ else if (dim_coord > bbox[2 * dim + 1])
+ {
+ /* Right of cube in dimension */
+ return dim_coord - bbox[2 * dim + 1];
+ }
+ else
+ {
+ /* Inside cube in dimension */
+ return 0.;
+ }
+}
+
+/************************************************
+Get minimum squared distance between point and cube.
+Params:
+ point_coord : cartesian coordinates of point
+ no_dims : number of dimensions
+ bbox : cube
+************************************************/
+double get_min_dist_double(double *point_coord, int8_t no_dims, double *bbox)
+{
+ double cube_offset = 0, cube_offset_dim;
+ int8_t i;
+
+ for (i = 0; i < no_dims; i++)
+ {
+ cube_offset_dim = get_cube_offset_double(i, point_coord, bbox);
+ cube_offset += cube_offset_dim * cube_offset_dim;
+ }
+
+ return cube_offset;
+}
+
+/************************************************
+Search a leaf node for closest point
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ start_idx : index of first data point to use
+ size : number of data points
+ point_coord : query point
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_leaf_double(double *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, double *restrict point_coord,
+ uint32_t k, uint32_t *restrict closest_idx, double *restrict closest_dist)
+{
+ double cur_dist;
+ uint32_t i;
+ /* Loop through all points in leaf */
+ for (i = 0; i < n; i++)
+ {
+ /* Get distance to query point */
+ cur_dist = calc_dist_double(&PA(start_idx + i, 0), point_coord, no_dims);
+ /* Update closest info if new point is closest so far*/
+ if (cur_dist < closest_dist[k - 1])
+ {
+ insert_point_double(closest_idx, closest_dist, pidx[start_idx + i], cur_dist, k);
+ }
+ }
+}
+
+
+/************************************************
+Search a leaf node for closest point with data point mask
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ point_coord : query point
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_leaf_double_mask(double *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, double *restrict point_coord,
+ uint32_t k, uint8_t *mask, uint32_t *restrict closest_idx, double *restrict closest_dist)
+{
+ double cur_dist;
+ uint32_t i;
+ /* Loop through all points in leaf */
+ for (i = 0; i < n; i++)
+ {
+ /* Is this point masked out? */
+ if (mask[pidx[start_idx + i]])
+ {
+ continue;
+ }
+ /* Get distance to query point */
+ cur_dist = calc_dist_double(&PA(start_idx + i, 0), point_coord, no_dims);
+ /* Update closest info if new point is closest so far*/
+ if (cur_dist < closest_dist[k - 1])
+ {
+ insert_point_double(closest_idx, closest_dist, pidx[start_idx + i], cur_dist, k);
+ }
+ }
+}
+
+/************************************************
+Search subtree for nearest to query point
+Params:
+ root : root node of subtree
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ point_coord : query point
+ min_dist : minimum distance to nearest neighbour
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_splitnode_double(Node_double *root, double *pa, uint32_t *pidx, int8_t no_dims, double *point_coord,
+ double min_dist, uint32_t k, double distance_upper_bound, double eps_fac, uint8_t *mask,
+ uint32_t *closest_idx, double *closest_dist)
+{
+ int8_t dim;
+ double dist_left, dist_right;
+ double new_offset;
+ double box_diff;
+
+ /* Skip if distance bound exceeded */
+ if (min_dist > distance_upper_bound)
+ {
+ return;
+ }
+
+ dim = root->cut_dim;
+
+ /* Handle leaf node */
+ if (dim == -1)
+ {
+ if (mask)
+ {
+ search_leaf_double_mask(pa, pidx, no_dims, root->start_idx, root->n, point_coord, k, mask, closest_idx, closest_dist);
+ }
+ else
+ {
+ search_leaf_double(pa, pidx, no_dims, root->start_idx, root->n, point_coord, k, closest_idx, closest_dist);
+ }
+ return;
+ }
+
+ /* Get distance to cutting plane */
+ new_offset = point_coord[dim] - root->cut_val;
+
+ if (new_offset < 0)
+ {
+ /* Left of cutting plane */
+ dist_left = min_dist;
+ if (dist_left < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search left subtree if minimum distance is below limit */
+ search_splitnode_double((Node_double *)root->left_child, pa, pidx, no_dims, point_coord, dist_left, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+
+ /* Right of cutting plane. Update minimum distance.
+ See Algorithms for Fast Vector Quantization
+ Sunil Arya and David M. Mount. */
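+ /* The sibling box differs from the current box only in this dimension, so
+ its minimum squared distance is obtained incrementally: drop the old
+ per-dimension contribution (box_diff^2, zero if the query lies inside the
+ slab) and add the squared distance to the cutting plane (new_offset^2). */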
+ box_diff = root->cut_bounds_lv - point_coord[dim];
+ if (box_diff < 0)
+ {
+ box_diff = 0;
+ }
+ dist_right = min_dist - box_diff * box_diff + new_offset * new_offset;
+ if (dist_right < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search right subtree if minimum distance is below limit*/
+ search_splitnode_double((Node_double *)root->right_child, pa, pidx, no_dims, point_coord, dist_right, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+ }
+ else
+ {
+ /* Right of cutting plane */
+ dist_right = min_dist;
+ if (dist_right < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search right subtree if minimum distance is below limit*/
+ search_splitnode_double((Node_double *)root->right_child, pa, pidx, no_dims, point_coord, dist_right, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+
+ /* Left of cutting plane. Update minimum distance.
+ See Algorithms for Fast Vector Quantization
+ Sunil Arya and David M. Mount. */
+ box_diff = point_coord[dim] - root->cut_bounds_hv;
+ if (box_diff < 0)
+ {
+ box_diff = 0;
+ }
+ dist_left = min_dist - box_diff * box_diff + new_offset * new_offset;
+ if (dist_left < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search left subtree if minimum distance is below limit*/
+ search_splitnode_double((Node_double *)root->left_child, pa, pidx, no_dims, point_coord, dist_left, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+ }
+}
+
+/************************************************
+Search for nearest neighbour for a set of query points
+Params:
+ tree : Tree struct of kd tree
+ pa : data points
+ pidx : permutation index of data points
+ point_coords : query points
+ num_points : number of query points
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_tree_double(Tree_double *tree, double *pa, double *point_coords,
+ uint32_t num_points, uint32_t k, double distance_upper_bound,
+ double eps, uint8_t *mask, uint32_t *closest_idxs, double *closest_dists)
+{
+ double min_dist;
+ double eps_fac = 1 / ((1 + eps) * (1 + eps));
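+ /* All distances here are squared, so comparing a subtree's minimum squared
+ distance against closest_dist[k - 1] * eps_fac only visits subtrees that
+ could hold a point closer than best / (1 + eps): an approximate
+ nearest-neighbour search within a factor (1 + eps); eps = 0 is exact. */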
+ int8_t no_dims = tree->no_dims;
+ double *bbox = tree->bbox;
+ uint32_t *pidx = tree->pidx;
+ uint32_t j = 0;
+#if defined(_MSC_VER) && defined(_OPENMP)
+ int32_t i = 0;
+ int32_t local_num_points = (int32_t) num_points;
+#else
+ uint32_t i;
+ uint32_t local_num_points = num_points;
+#endif
+ Node_double *root = (Node_double *)tree->root;
+
+ /* Queries are OpenMP enabled */
+ #pragma omp parallel
+ {
+ /* The low chunk size is important to avoid L2 cache thrashing
+ for spatially coherent query datasets
+ */
+ #pragma omp for private(i, j) schedule(static, 100) nowait
+ for (i = 0; i < local_num_points; i++)
+ {
+ for (j = 0; j < k; j++)
+ {
+ closest_idxs[i * k + j] = UINT32_MAX;
+ closest_dists[i * k + j] = DBL_MAX;
+ }
+ min_dist = get_min_dist_double(point_coords + no_dims * i, no_dims, bbox);
+ search_splitnode_double(root, pa, pidx, no_dims, point_coords + no_dims * i, min_dist,
+ k, distance_upper_bound, eps_fac, mask, &closest_idxs[i * k], &closest_dists[i * k]);
+ }
+ }
+}
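+
+/* A minimal usage sketch (illustrative only; the function name example_nn_query
+ is not part of pykdtree): build a tree over n 3-D points stored row-major in
+ pa, query one nearest neighbour per query point, then free the tree. */
+void example_nn_query(double *pa, uint32_t n, double *queries,
+ uint32_t num_queries, uint32_t *out_idx, double *out_dist)
+{
+ /* 16 points per leaf; no mask, no distance bound, exact search (eps = 0) */
+ Tree_double *tree = construct_tree_double(pa, 3, n, 16);
+ search_tree_double(tree, pa, queries, num_queries, 1, DBL_MAX, 0.0, NULL,
+ out_idx, out_dist);
+ delete_tree_double(tree);
+}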
diff --git a/src/utils/libkdtree/pykdtree/_kdtree_core.c.mako b/src/utils/libkdtree/pykdtree/_kdtree_core.c.mako
new file mode 100644
index 0000000..a8270f5
--- /dev/null
+++ b/src/utils/libkdtree/pykdtree/_kdtree_core.c.mako
@@ -0,0 +1,734 @@
+/*
+pykdtree, Fast kd-tree implementation with OpenMP-enabled queries
+
+Copyright (C) 2013 - present Esben S. Nielsen
+
+This program is free software: you can redistribute it and/or modify it under
+the terms of the GNU Lesser General Public License as published by the Free
+Software Foundation, either version 3 of the License, or
+ (at your option) any later version.
+
+This program is distributed in the hope that it will be useful, but WITHOUT
+ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
+FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more
+details.
+
+You should have received a copy of the GNU Lesser General Public License along
+with this program. If not, see <http://www.gnu.org/licenses/>.
+*/
+
+/*
+This kd-tree implementation is based on the scipy.spatial.cKDTree by
+Anne M. Archibald and libANN by David M. Mount and Sunil Arya.
+*/
+
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <stdint.h>
+#include <float.h>
+
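+/* PA(i, d) reads coordinate d of the data point whose position in the
+ permutation index is i; PASWAP(a, b) swaps two entries of that index, so
+ the tree is built by permuting indices instead of moving the points. */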
+#define PA(i,d) (pa[no_dims * pidx[i] + d])
+#define PASWAP(a,b) { uint32_t tmp = pidx[a]; pidx[a] = pidx[b]; pidx[b] = tmp; }
+
+#ifdef _MSC_VER
+#define restrict __restrict
+#endif
+
+% for DTYPE in ['float', 'double']:
+
+typedef struct
+{
+ ${DTYPE} cut_val;
+ int8_t cut_dim;
+ uint32_t start_idx;
+ uint32_t n;
+ ${DTYPE} cut_bounds_lv;
+ ${DTYPE} cut_bounds_hv;
+ struct Node_${DTYPE} *left_child;
+ struct Node_${DTYPE} *right_child;
+} Node_${DTYPE};
+
+typedef struct
+{
+ ${DTYPE} *bbox;
+ int8_t no_dims;
+ uint32_t *pidx;
+ struct Node_${DTYPE} *root;
+} Tree_${DTYPE};
+
+% endfor
+
+% for DTYPE in ['float', 'double']:
+
+void insert_point_${DTYPE}(uint32_t *closest_idx, ${DTYPE} *closest_dist, uint32_t pidx, ${DTYPE} cur_dist, uint32_t k);
+void get_bounding_box_${DTYPE}(${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, uint32_t n, ${DTYPE} *bbox);
+int partition_${DTYPE}(${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, ${DTYPE} *bbox, int8_t *cut_dim,
+ ${DTYPE} *cut_val, uint32_t *n_lo);
+Tree_${DTYPE}* construct_tree_${DTYPE}(${DTYPE} *pa, int8_t no_dims, uint32_t n, uint32_t bsp);
+Node_${DTYPE}* construct_subtree_${DTYPE}(${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, uint32_t bsp, ${DTYPE} *bbox);
+Node_${DTYPE} * create_node_${DTYPE}(uint32_t start_idx, uint32_t n, int is_leaf);
+void delete_subtree_${DTYPE}(Node_${DTYPE} *root);
+void delete_tree_${DTYPE}(Tree_${DTYPE} *tree);
+void print_tree_${DTYPE}(Node_${DTYPE} *root, int level);
+${DTYPE} calc_dist_${DTYPE}(${DTYPE} *point1_coord, ${DTYPE} *point2_coord, int8_t no_dims);
+${DTYPE} get_cube_offset_${DTYPE}(int8_t dim, ${DTYPE} *point_coord, ${DTYPE} *bbox);
+${DTYPE} get_min_dist_${DTYPE}(${DTYPE} *point_coord, int8_t no_dims, ${DTYPE} *bbox);
+void search_leaf_${DTYPE}(${DTYPE} *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, ${DTYPE} *restrict point_coord,
+ uint32_t k, uint32_t *restrict closest_idx, ${DTYPE} *restrict closest_dist);
+void search_leaf_${DTYPE}_mask(${DTYPE} *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, ${DTYPE} *restrict point_coord,
+ uint32_t k, uint8_t *restrict mask, uint32_t *restrict closest_idx, ${DTYPE} *restrict closest_dist);
+void search_splitnode_${DTYPE}(Node_${DTYPE} *root, ${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, ${DTYPE} *point_coord,
+ ${DTYPE} min_dist, uint32_t k, ${DTYPE} distance_upper_bound, ${DTYPE} eps_fac, uint8_t *mask, uint32_t * closest_idx, ${DTYPE} *closest_dist);
+void search_tree_${DTYPE}(Tree_${DTYPE} *tree, ${DTYPE} *pa, ${DTYPE} *point_coords,
+ uint32_t num_points, uint32_t k, ${DTYPE} distance_upper_bound,
+ ${DTYPE} eps, uint8_t *mask, uint32_t *closest_idxs, ${DTYPE} *closest_dists);
+
+% endfor
+
+% for DTYPE in ['float', 'double']:
+
+/************************************************
+Insert point into priority queue
+Params:
+ closest_idx : index queue
+ closest_dist : distance queue
+ pidx : permutation index of data points
+ cur_dist : distance to point inserted
+ k : number of neighbours
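+Note: closest_dist is kept sorted in ascending order, so closest_dist[k - 1]
+is always the current k-th best squared distance; this routine performs a
+single insertion-sort step to place the new candidate.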
+************************************************/
+void insert_point_${DTYPE}(uint32_t *closest_idx, ${DTYPE} *closest_dist, uint32_t pidx, ${DTYPE} cur_dist, uint32_t k)
+{
+ int i;
+ for (i = k - 1; i > 0; i--)
+ {
+ if (closest_dist[i - 1] > cur_dist)
+ {
+ closest_dist[i] = closest_dist[i - 1];
+ closest_idx[i] = closest_idx[i - 1];
+ }
+ else
+ {
+ break;
+ }
+ }
+ closest_idx[i] = pidx;
+ closest_dist[i] = cur_dist;
+}
+
+/************************************************
+Get the bounding box of a set of points
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ n : number of points
+ bbox : bounding box (return)
+************************************************/
+void get_bounding_box_${DTYPE}(${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, uint32_t n, ${DTYPE} *bbox)
+{
+ ${DTYPE} cur;
+ int8_t bbox_idx, i, j;
+ uint32_t i2;
+
+ /* Use first data point to initialize */
+ for (i = 0; i < no_dims; i++)
+ {
+ bbox[2 * i] = bbox[2 * i + 1] = PA(0, i);
+ }
+
+ /* Update using rest of data points */
+ for (i2 = 1; i2 < n; i2++)
+ {
+ for (j = 0; j < no_dims; j++)
+ {
+ bbox_idx = 2 * j;
+ cur = PA(i2, j);
+ if (cur < bbox[bbox_idx])
+ {
+ bbox[bbox_idx] = cur;
+ }
+ else if (cur > bbox[bbox_idx + 1])
+ {
+ bbox[bbox_idx + 1] = cur;
+ }
+ }
+ }
+}
+
+/************************************************
+Partition a range of data points by manipulating the permutation index.
+The sliding midpoint rule is used for the partitioning.
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ bbox : bounding box of data points
+ cut_dim : dimension used for partition (return)
+ cut_val : value of cutting point (return)
+ n_lo : number of points below cutting plane (return)
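+Example (illustrative): for coordinates {1, 2, 8, 9} along the longest box
+side the midpoint split at 5 sends {1, 2} to the lower child and {8, 9} to
+the upper one; if one side would end up empty, the split "slides" to the
+minimum (or maximum) point so both children stay non-empty.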
+************************************************/
+int partition_${DTYPE}(${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, ${DTYPE} *bbox, int8_t *cut_dim, ${DTYPE} *cut_val, uint32_t *n_lo)
+{
+ int8_t dim = 0, i;
+ uint32_t p, q, i2;
+ ${DTYPE} size = 0, min_val, max_val, split, side_len, cur_val;
+ uint32_t end_idx = start_idx + n - 1;
+
+ /* Find largest bounding box side */
+ for (i = 0; i < no_dims; i++)
+ {
+ side_len = bbox[2 * i + 1] - bbox[2 * i];
+ if (side_len > size)
+ {
+ dim = i;
+ size = side_len;
+ }
+ }
+
+ min_val = bbox[2 * dim];
+ max_val = bbox[2 * dim + 1];
+
+ /* Check for zero length or inconsistent */
+ if (min_val >= max_val)
+ return 1;
+
+ /* Use middle for splitting */
+ split = (min_val + max_val) / 2;
+
+ /* Partition all data points around middle */
+ p = start_idx;
+ q = end_idx;
+ while (p <= q)
+ {
+ if (PA(p, dim) < split)
+ {
+ p++;
+ }
+ else if (PA(q, dim) >= split)
+ {
+ /* Guard for underflow */
+ if (q > 0)
+ {
+ q--;
+ }
+ else
+ {
+ break;
+ }
+ }
+ else
+ {
+ PASWAP(p, q);
+ p++;
+ q--;
+ }
+ }
+
+ /* Check for empty splits */
+ if (p == start_idx)
+ {
+ /* No points less than split.
+ Split at lowest point instead.
+ Minimum 1 point will be in lower box.
+ */
+
+ uint32_t j = start_idx;
+ split = PA(j, dim);
+ for (i2 = start_idx + 1; i2 <= end_idx; i2++)
+ {
+ /* Find lowest point */
+ cur_val = PA(i2, dim);
+ if (cur_val < split)
+ {
+ j = i2;
+ split = cur_val;
+ }
+ }
+ PASWAP(j, start_idx);
+ p = start_idx + 1;
+ }
+ else if (p == end_idx + 1)
+ {
+ /* No points greater than split.
+ Split at highest point instead.
+ Minimum 1 point will be in higher box.
+ */
+
+ uint32_t j = end_idx;
+ split = PA(j, dim);
+ for (i2 = start_idx; i2 < end_idx; i2++)
+ {
+ /* Find highest point */
+ cur_val = PA(i2, dim);
+ if (cur_val > split)
+ {
+ j = i2;
+ split = cur_val;
+ }
+ }
+ PASWAP(j, end_idx);
+ p = end_idx;
+ }
+
+ /* Set return values */
+ *cut_dim = dim;
+ *cut_val = split;
+ *n_lo = p - start_idx;
+ return 0;
+}
+
+/************************************************
+Construct a sub tree over a range of data points.
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims: number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ bsp : number of points per leaf
+ bbox : bounding box of set of data points
+************************************************/
+Node_${DTYPE}* construct_subtree_${DTYPE}(${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, uint32_t bsp, ${DTYPE} *bbox)
+{
+ /* Create new node */
+ int is_leaf = (n <= bsp);
+ Node_${DTYPE} *root = create_node_${DTYPE}(start_idx, n, is_leaf);
+ int rval;
+ int8_t cut_dim;
+ uint32_t n_lo;
+ ${DTYPE} cut_val, lv, hv;
+ if (is_leaf)
+ {
+ /* Make leaf node */
+ root->cut_dim = -1;
+ }
+ else
+ {
+ /* Make split node */
+ /* Partition data set and set node info */
+ rval = partition_${DTYPE}(pa, pidx, no_dims, start_idx, n, bbox, &cut_dim, &cut_val, &n_lo);
+ if (rval == 1)
+ {
+ root->cut_dim = -1;
+ return root;
+ }
+ root->cut_val = cut_val;
+ root->cut_dim = cut_dim;
+
+ /* Recurse on both subsets */
+ lv = bbox[2 * cut_dim];
+ hv = bbox[2 * cut_dim + 1];
+
+ /* Set bounds for cut dimension */
+ root->cut_bounds_lv = lv;
+ root->cut_bounds_hv = hv;
+
+ /* Update bounding box before call to lower subset and restore after */
+ bbox[2 * cut_dim + 1] = cut_val;
+ root->left_child = (struct Node_${DTYPE} *)construct_subtree_${DTYPE}(pa, pidx, no_dims, start_idx, n_lo, bsp, bbox);
+ bbox[2 * cut_dim + 1] = hv;
+
+ /* Update bounding box before call to higher subset and restore after */
+ bbox[2 * cut_dim] = cut_val;
+ root->right_child = (struct Node_${DTYPE} *)construct_subtree_${DTYPE}(pa, pidx, no_dims, start_idx + n_lo, n - n_lo, bsp, bbox);
+ bbox[2 * cut_dim] = lv;
+ }
+ return root;
+}
+
+/************************************************
+Construct a tree over data points.
+Params:
+ pa : data points
+ no_dims: number of dimensions
+ n : number of data points
+ bsp : number of points per leaf
+************************************************/
+Tree_${DTYPE}* construct_tree_${DTYPE}(${DTYPE} *pa, int8_t no_dims, uint32_t n, uint32_t bsp)
+{
+ Tree_${DTYPE} *tree = (Tree_${DTYPE} *)malloc(sizeof(Tree_${DTYPE}));
+ uint32_t i;
+ uint32_t *pidx;
+ ${DTYPE} *bbox;
+
+ tree->no_dims = no_dims;
+
+ /* Initialize permutation array */
+ pidx = (uint32_t *)malloc(sizeof(uint32_t) * n);
+ for (i = 0; i < n; i++)
+ {
+ pidx[i] = i;
+ }
+
+ bbox = (${DTYPE} *)malloc(2 * sizeof(${DTYPE}) * no_dims);
+ get_bounding_box_${DTYPE}(pa, pidx, no_dims, n, bbox);
+ tree->bbox = bbox;
+
+ /* Construct subtree on full dataset */
+ tree->root = (struct Node_${DTYPE} *)construct_subtree_${DTYPE}(pa, pidx, no_dims, 0, n, bsp, bbox);
+
+ tree->pidx = pidx;
+ return tree;
+}
+
+/************************************************
+Create a tree node.
+Params:
+ start_idx : index of first data point to use
+ n : number of data points
+************************************************/
+Node_${DTYPE}* create_node_${DTYPE}(uint32_t start_idx, uint32_t n, int is_leaf)
+{
+ Node_${DTYPE} *new_node;
+ if (is_leaf)
+ {
+ /*
+ Allocate only the part of the struct that will be used in a leaf node.
+ This relies on the C99 specification of struct layout conservation and padding and
+ that dereferencing is never attempted for the node pointers in a leaf.
+ */
+ new_node = (Node_${DTYPE} *)malloc(sizeof(Node_${DTYPE}) - 2 * sizeof(Node_${DTYPE} *));
+ }
+ else
+ {
+ new_node = (Node_${DTYPE} *)malloc(sizeof(Node_${DTYPE}));
+ }
+ new_node->n = n;
+ new_node->start_idx = start_idx;
+ return new_node;
+}
+
+/************************************************
+Delete subtree
+Params:
+ root : root node of subtree to delete
+************************************************/
+void delete_subtree_${DTYPE}(Node_${DTYPE} *root)
+{
+ if (root->cut_dim != -1)
+ {
+ delete_subtree_${DTYPE}((Node_${DTYPE} *)root->left_child);
+ delete_subtree_${DTYPE}((Node_${DTYPE} *)root->right_child);
+ }
+ free(root);
+}
+
+/************************************************
+Delete tree
+Params:
+ tree : Tree struct of kd tree
+************************************************/
+void delete_tree_${DTYPE}(Tree_${DTYPE} *tree)
+{
+ delete_subtree_${DTYPE}((Node_${DTYPE} *)tree->root);
+ free(tree->bbox);
+ free(tree->pidx);
+ free(tree);
+}
+
+/************************************************
+Print
+************************************************/
+void print_tree_${DTYPE}(Node_${DTYPE} *root, int level)
+{
+ int i;
+ for (i = 0; i < level; i++)
+ {
+ printf(" ");
+ }
+ printf("(cut_val: %f, cut_dim: %i)\n", root->cut_val, root->cut_dim);
+ if (root->cut_dim != -1)
+ print_tree_${DTYPE}((Node_${DTYPE} *)root->left_child, level + 1);
+ if (root->cut_dim != -1)
+ print_tree_${DTYPE}((Node_${DTYPE} *)root->right_child, level + 1);
+}
+
+/************************************************
+Calculate squared cartesian distance between points
+Params:
+ point1_coord : point 1
+ point2_coord : point 2
+************************************************/
+${DTYPE} calc_dist_${DTYPE}(${DTYPE} *point1_coord, ${DTYPE} *point2_coord, int8_t no_dims)
+{
+ /* Calculate squared distance */
+ ${DTYPE} dist = 0, dim_dist;
+ int8_t i;
+ for (i = 0; i < no_dims; i++)
+ {
+ dim_dist = point2_coord[i] - point1_coord[i];
+ dist += dim_dist * dim_dist;
+ }
+ return dist;
+}
+
+/************************************************
+Get signed offset from point to cube in specified dimension (squared by the caller)
+Params:
+ dim : dimension
+ point_coord : cartesian coordinates of point
+ bbox : cube
+************************************************/
+${DTYPE} get_cube_offset_${DTYPE}(int8_t dim, ${DTYPE} *point_coord, ${DTYPE} *bbox)
+{
+ ${DTYPE} dim_coord = point_coord[dim];
+
+ if (dim_coord < bbox[2 * dim])
+ {
+ /* Left of cube in dimension */
+ return dim_coord - bbox[2 * dim];
+ }
+ else if (dim_coord > bbox[2 * dim + 1])
+ {
+ /* Right of cube in dimension */
+ return dim_coord - bbox[2 * dim + 1];
+ }
+ else
+ {
+ /* Inside cube in dimension */
+ return 0.;
+ }
+}
+
+/************************************************
+Get minimum squared distance between point and cube.
+Params:
+ point_coord : cartesian coordinates of point
+ no_dims : number of dimensions
+ bbox : cube
+************************************************/
+${DTYPE} get_min_dist_${DTYPE}(${DTYPE} *point_coord, int8_t no_dims, ${DTYPE} *bbox)
+{
+ ${DTYPE} cube_offset = 0, cube_offset_dim;
+ int8_t i;
+
+ for (i = 0; i < no_dims; i++)
+ {
+ cube_offset_dim = get_cube_offset_${DTYPE}(i, point_coord, bbox);
+ cube_offset += cube_offset_dim * cube_offset_dim;
+ }
+
+ return cube_offset;
+}
+
+/************************************************
+Search a leaf node for closest point
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ point_coord : query point
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_leaf_${DTYPE}(${DTYPE} *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, ${DTYPE} *restrict point_coord,
+ uint32_t k, uint32_t *restrict closest_idx, ${DTYPE} *restrict closest_dist)
+{
+ ${DTYPE} cur_dist;
+ uint32_t i;
+ /* Loop through all points in leaf */
+ for (i = 0; i < n; i++)
+ {
+ /* Get distance to query point */
+ cur_dist = calc_dist_${DTYPE}(&PA(start_idx + i, 0), point_coord, no_dims);
+ /* Update closest info if new point is closest so far*/
+ if (cur_dist < closest_dist[k - 1])
+ {
+ insert_point_${DTYPE}(closest_idx, closest_dist, pidx[start_idx + i], cur_dist, k);
+ }
+ }
+}
+
+
+/************************************************
+Search a leaf node for closest point with data point mask
+Params:
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ start_idx : index of first data point to use
+ n : number of data points
+ point_coord : query point
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_leaf_${DTYPE}_mask(${DTYPE} *restrict pa, uint32_t *restrict pidx, int8_t no_dims, uint32_t start_idx, uint32_t n, ${DTYPE} *restrict point_coord,
+ uint32_t k, uint8_t *mask, uint32_t *restrict closest_idx, ${DTYPE} *restrict closest_dist)
+{
+ ${DTYPE} cur_dist;
+ uint32_t i;
+ /* Loop through all points in leaf */
+ for (i = 0; i < n; i++)
+ {
+ /* Is this point masked out? */
+ if (mask[pidx[start_idx + i]])
+ {
+ continue;
+ }
+ /* Get distance to query point */
+ cur_dist = calc_dist_${DTYPE}(&PA(start_idx + i, 0), point_coord, no_dims);
+ /* Update closest info if new point is closest so far*/
+ if (cur_dist < closest_dist[k - 1])
+ {
+ insert_point_${DTYPE}(closest_idx, closest_dist, pidx[start_idx + i], cur_dist, k);
+ }
+ }
+}
+
+/************************************************
+Search subtree for nearest to query point
+Params:
+ root : root node of subtree
+ pa : data points
+ pidx : permutation index of data points
+ no_dims : number of dimensions
+ point_coord : query point
+ min_dist : minimum distance to nearest neighbour
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_splitnode_${DTYPE}(Node_${DTYPE} *root, ${DTYPE} *pa, uint32_t *pidx, int8_t no_dims, ${DTYPE} *point_coord,
+ ${DTYPE} min_dist, uint32_t k, ${DTYPE} distance_upper_bound, ${DTYPE} eps_fac, uint8_t *mask,
+ uint32_t *closest_idx, ${DTYPE} *closest_dist)
+{
+ int8_t dim;
+ ${DTYPE} dist_left, dist_right;
+ ${DTYPE} new_offset;
+ ${DTYPE} box_diff;
+
+ /* Skip if distance bound exceeded */
+ if (min_dist > distance_upper_bound)
+ {
+ return;
+ }
+
+ dim = root->cut_dim;
+
+ /* Handle leaf node */
+ if (dim == -1)
+ {
+ if (mask)
+ {
+ search_leaf_${DTYPE}_mask(pa, pidx, no_dims, root->start_idx, root->n, point_coord, k, mask, closest_idx, closest_dist);
+ }
+ else
+ {
+ search_leaf_${DTYPE}(pa, pidx, no_dims, root->start_idx, root->n, point_coord, k, closest_idx, closest_dist);
+ }
+ return;
+ }
+
+ /* Get distance to cutting plane */
+ new_offset = point_coord[dim] - root->cut_val;
+
+ if (new_offset < 0)
+ {
+ /* Left of cutting plane */
+ dist_left = min_dist;
+ if (dist_left < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search left subtree if minimum distance is below limit */
+ search_splitnode_${DTYPE}((Node_${DTYPE} *)root->left_child, pa, pidx, no_dims, point_coord, dist_left, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+
+ /* Right of cutting plane. Update minimum distance.
+ See Algorithms for Fast Vector Quantization
+ Sunil Arya and David M. Mount. */
+ box_diff = root->cut_bounds_lv - point_coord[dim];
+ if (box_diff < 0)
+ {
+ box_diff = 0;
+ }
+ dist_right = min_dist - box_diff * box_diff + new_offset * new_offset;
+ if (dist_right < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search right subtree if minimum distance is below limit*/
+ search_splitnode_${DTYPE}((Node_${DTYPE} *)root->right_child, pa, pidx, no_dims, point_coord, dist_right, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+ }
+ else
+ {
+ /* Right of cutting plane */
+ dist_right = min_dist;
+ if (dist_right < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search right subtree if minimum distance is below limit*/
+ search_splitnode_${DTYPE}((Node_${DTYPE} *)root->right_child, pa, pidx, no_dims, point_coord, dist_right, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+
+ /* Left of cutting plane. Update minimum distance.
+ See Algorithms for Fast Vector Quantization
+ Sunil Arya and David M. Mount. */
+ box_diff = point_coord[dim] - root->cut_bounds_hv;
+ if (box_diff < 0)
+ {
+ box_diff = 0;
+ }
+ dist_left = min_dist - box_diff * box_diff + new_offset * new_offset;
+ if (dist_left < closest_dist[k - 1] * eps_fac)
+ {
+ /* Search left subtree if minimum distance is below limit*/
+ search_splitnode_${DTYPE}((Node_${DTYPE} *)root->left_child, pa, pidx, no_dims, point_coord, dist_left, k, distance_upper_bound, eps_fac, mask, closest_idx, closest_dist);
+ }
+ }
+}
+
+/************************************************
+Search for nearest neighbour for a set of query points
+Params:
+ tree : Tree struct of kd tree
+ pa : data points
+ pidx : permutation index of data points
+ point_coords : query points
+ num_points : number of query points
+ mask : boolean array of invalid (True) and valid (False) data points
+ closest_idx : index of closest data point found (return)
+ closest_dist : distance to closest point (return)
+************************************************/
+void search_tree_${DTYPE}(Tree_${DTYPE} *tree, ${DTYPE} *pa, ${DTYPE} *point_coords,
+ uint32_t num_points, uint32_t k, ${DTYPE} distance_upper_bound,
+ ${DTYPE} eps, uint8_t *mask, uint32_t *closest_idxs, ${DTYPE} *closest_dists)
+{
+ ${DTYPE} min_dist;
+ ${DTYPE} eps_fac = 1 / ((1 + eps) * (1 + eps));
+ int8_t no_dims = tree->no_dims;
+ ${DTYPE} *bbox = tree->bbox;
+ uint32_t *pidx = tree->pidx;
+ uint32_t j = 0;
+#if defined(_MSC_VER) && defined(_OPENMP)
+ int32_t i = 0;
+ int32_t local_num_points = (int32_t) num_points;
+#else
+ uint32_t i;
+ uint32_t local_num_points = num_points;
+#endif
+ Node_${DTYPE} *root = (Node_${DTYPE} *)tree->root;
+
+ /* Queries are OpenMP enabled */
+ #pragma omp parallel
+ {
+ /* The low chunk size is important to avoid L2 cache thrashing
+ for spatially coherent query datasets
+ */
+ #pragma omp for private(i, j) schedule(static, 100) nowait
+ for (i = 0; i < local_num_points; i++)
+ {
+ for (j = 0; j < k; j++)
+ {
+ closest_idxs[i * k + j] = UINT32_MAX;
+ closest_dists[i * k + j] = DBL_MAX;
+ }
+ min_dist = get_min_dist_${DTYPE}(point_coords + no_dims * i, no_dims, bbox);
+ search_splitnode_${DTYPE}(root, pa, pidx, no_dims, point_coords + no_dims * i, min_dist,
+ k, distance_upper_bound, eps_fac, mask, &closest_idxs[i * k], &closest_dists[i * k]);
+ }
+ }
+}
+% endfor
diff --git a/src/utils/libkdtree/pykdtree/kdtree.c b/src/utils/libkdtree/pykdtree/kdtree.c
new file mode 100644
index 0000000..895c0d2
--- /dev/null
+++ b/src/utils/libkdtree/pykdtree/kdtree.c
@@ -0,0 +1,11350 @@
+/* Generated by Cython 0.27.3 */
+
+#define PY_SSIZE_T_CLEAN
+#include "Python.h"
+#ifndef Py_PYTHON_H
+ #error Python headers needed to compile C extensions, please install development version of Python.
+#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000)
+ #error Cython requires Python 2.6+ or Python 3.3+.
+#else
+#define CYTHON_ABI "0_27_3"
+#define CYTHON_FUTURE_DIVISION 0
+#include <stddef.h>
+#ifndef offsetof
+ #define offsetof(type, member) ( (size_t) & ((type*)0) -> member )
+#endif
+#if !defined(WIN32) && !defined(MS_WINDOWS)
+ #ifndef __stdcall
+ #define __stdcall
+ #endif
+ #ifndef __cdecl
+ #define __cdecl
+ #endif
+ #ifndef __fastcall
+ #define __fastcall
+ #endif
+#endif
+#ifndef DL_IMPORT
+ #define DL_IMPORT(t) t
+#endif
+#ifndef DL_EXPORT
+ #define DL_EXPORT(t) t
+#endif
+#define __PYX_COMMA ,
+#ifndef HAVE_LONG_LONG
+ #if PY_VERSION_HEX >= 0x02070000
+ #define HAVE_LONG_LONG
+ #endif
+#endif
+#ifndef PY_LONG_LONG
+ #define PY_LONG_LONG LONG_LONG
+#endif
+#ifndef Py_HUGE_VAL
+ #define Py_HUGE_VAL HUGE_VAL
+#endif
+#ifdef PYPY_VERSION
+ #define CYTHON_COMPILING_IN_PYPY 1
+ #define CYTHON_COMPILING_IN_PYSTON 0
+ #define CYTHON_COMPILING_IN_CPYTHON 0
+ #undef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 0
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #if PY_VERSION_HEX < 0x03050000
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #elif !defined(CYTHON_USE_ASYNC_SLOTS)
+ #define CYTHON_USE_ASYNC_SLOTS 1
+ #endif
+ #undef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 0
+ #undef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 0
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #undef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 1
+ #undef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 0
+ #undef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 0
+ #undef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 0
+ #undef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+#elif defined(PYSTON_VERSION)
+ #define CYTHON_COMPILING_IN_PYPY 0
+ #define CYTHON_COMPILING_IN_PYSTON 1
+ #define CYTHON_COMPILING_IN_CPYTHON 0
+ #ifndef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 1
+ #endif
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #undef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 0
+ #ifndef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 1
+ #endif
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #ifndef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 0
+ #endif
+ #ifndef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 1
+ #endif
+ #ifndef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 1
+ #endif
+ #undef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 0
+ #undef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 0
+ #undef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT 0
+ #undef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE 0
+#else
+ #define CYTHON_COMPILING_IN_PYPY 0
+ #define CYTHON_COMPILING_IN_PYSTON 0
+ #define CYTHON_COMPILING_IN_CPYTHON 1
+ #ifndef CYTHON_USE_TYPE_SLOTS
+ #define CYTHON_USE_TYPE_SLOTS 1
+ #endif
+ #if PY_VERSION_HEX < 0x02070000
+ #undef CYTHON_USE_PYTYPE_LOOKUP
+ #define CYTHON_USE_PYTYPE_LOOKUP 0
+ #elif !defined(CYTHON_USE_PYTYPE_LOOKUP)
+ #define CYTHON_USE_PYTYPE_LOOKUP 1
+ #endif
+ #if PY_MAJOR_VERSION < 3
+ #undef CYTHON_USE_ASYNC_SLOTS
+ #define CYTHON_USE_ASYNC_SLOTS 0
+ #elif !defined(CYTHON_USE_ASYNC_SLOTS)
+ #define CYTHON_USE_ASYNC_SLOTS 1
+ #endif
+ #if PY_VERSION_HEX < 0x02070000
+ #undef CYTHON_USE_PYLONG_INTERNALS
+ #define CYTHON_USE_PYLONG_INTERNALS 0
+ #elif !defined(CYTHON_USE_PYLONG_INTERNALS)
+ #define CYTHON_USE_PYLONG_INTERNALS 1
+ #endif
+ #ifndef CYTHON_USE_PYLIST_INTERNALS
+ #define CYTHON_USE_PYLIST_INTERNALS 1
+ #endif
+ #ifndef CYTHON_USE_UNICODE_INTERNALS
+ #define CYTHON_USE_UNICODE_INTERNALS 1
+ #endif
+ #if PY_VERSION_HEX < 0x030300F0
+ #undef CYTHON_USE_UNICODE_WRITER
+ #define CYTHON_USE_UNICODE_WRITER 0
+ #elif !defined(CYTHON_USE_UNICODE_WRITER)
+ #define CYTHON_USE_UNICODE_WRITER 1
+ #endif
+ #ifndef CYTHON_AVOID_BORROWED_REFS
+ #define CYTHON_AVOID_BORROWED_REFS 0
+ #endif
+ #ifndef CYTHON_ASSUME_SAFE_MACROS
+ #define CYTHON_ASSUME_SAFE_MACROS 1
+ #endif
+ #ifndef CYTHON_UNPACK_METHODS
+ #define CYTHON_UNPACK_METHODS 1
+ #endif
+ #ifndef CYTHON_FAST_THREAD_STATE
+ #define CYTHON_FAST_THREAD_STATE 1
+ #endif
+ #ifndef CYTHON_FAST_PYCALL
+ #define CYTHON_FAST_PYCALL 1
+ #endif
+ #ifndef CYTHON_PEP489_MULTI_PHASE_INIT
+ #define CYTHON_PEP489_MULTI_PHASE_INIT (0 && PY_VERSION_HEX >= 0x03050000)
+ #endif
+ #ifndef CYTHON_USE_TP_FINALIZE
+ #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1)
+ #endif
+#endif
+#if !defined(CYTHON_FAST_PYCCALL)
+#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1)
+#endif
+#if CYTHON_USE_PYLONG_INTERNALS
+ #include "longintrepr.h"
+ #undef SHIFT
+ #undef BASE
+ #undef MASK
+#endif
+#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag)
+ #define Py_OptimizeFlag 0
+#endif
+#define __PYX_BUILD_PY_SSIZE_T "n"
+#define CYTHON_FORMAT_SSIZE_T "z"
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_BUILTIN_MODULE_NAME "__builtin__"
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
+ PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+ #define __Pyx_DefaultClassType PyClass_Type
+#else
+ #define __Pyx_BUILTIN_MODULE_NAME "builtins"
+ #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\
+ PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)
+ #define __Pyx_DefaultClassType PyType_Type
+#endif
+#ifndef Py_TPFLAGS_CHECKTYPES
+ #define Py_TPFLAGS_CHECKTYPES 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_INDEX
+ #define Py_TPFLAGS_HAVE_INDEX 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_NEWBUFFER
+ #define Py_TPFLAGS_HAVE_NEWBUFFER 0
+#endif
+#ifndef Py_TPFLAGS_HAVE_FINALIZE
+ #define Py_TPFLAGS_HAVE_FINALIZE 0
+#endif
+#if PY_VERSION_HEX < 0x030700A0 || !defined(METH_FASTCALL)
+ #ifndef METH_FASTCALL
+ #define METH_FASTCALL 0x80
+ #endif
+ typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject **args, Py_ssize_t nargs);
+ typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject **args,
+ Py_ssize_t nargs, PyObject *kwnames);
+#else
+ #define __Pyx_PyCFunctionFast _PyCFunctionFast
+ #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords
+#endif
+#if CYTHON_FAST_PYCCALL
+#define __Pyx_PyFastCFunction_Check(func)\
+ ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS)))))
+#else
+#define __Pyx_PyFastCFunction_Check(func) 0
+#endif
+#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#elif PY_VERSION_HEX >= 0x03060000
+ #define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet()
+#elif PY_VERSION_HEX >= 0x03000000
+ #define __Pyx_PyThreadState_Current PyThreadState_GET()
+#else
+ #define __Pyx_PyThreadState_Current _PyThreadState_Current
+#endif
+#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized)
+#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n))
+#else
+#define __Pyx_PyDict_NewPresized(n) PyDict_New()
+#endif
+#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y)
+#else
+ #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y)
+ #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y)
+#endif
+#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND)
+ #define CYTHON_PEP393_ENABLED 1
+ #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\
+ 0 : _PyUnicode_Ready((PyObject *)(op)))
+ #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u)
+ #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i)
+ #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u)
+ #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u)
+ #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u)
+ #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i)
+ #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch)
+ #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u)))
+#else
+ #define CYTHON_PEP393_ENABLED 0
+ #define PyUnicode_1BYTE_KIND 1
+ #define PyUnicode_2BYTE_KIND 2
+ #define PyUnicode_4BYTE_KIND 4
+ #define __Pyx_PyUnicode_READY(op) (0)
+ #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u)
+ #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i]))
+ #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111)
+ #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE))
+ #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u))
+ #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i]))
+ #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch)
+ #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u))
+#endif
+#if CYTHON_COMPILING_IN_PYPY
+ #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b)
+ #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b)
+#else
+ #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b)
+ #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\
+ PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b))
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains)
+ #define PyUnicode_Contains(u, s) PySequence_Contains(u, s)
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check)
+ #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type)
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format)
+ #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt)
+#endif
+#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc)
+ #define PyObject_Malloc(s) PyMem_Malloc(s)
+ #define PyObject_Free(p) PyMem_Free(p)
+ #define PyObject_Realloc(p) PyMem_Realloc(p)
+#endif
+#if CYTHON_COMPILING_IN_PYSTON
+ #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno)
+#else
+ #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0)
+ #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno)
+#endif
+#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b))
+#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None)) ? PyNumber_Remainder(a, b) : PyUnicode_Format(a, b))
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b)
+#else
+ #define __Pyx_PyString_Format(a, b) PyString_Format(a, b)
+#endif
+#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII)
+ #define PyObject_ASCII(o) PyObject_Repr(o)
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyBaseString_Type PyUnicode_Type
+ #define PyStringObject PyUnicodeObject
+ #define PyString_Type PyUnicode_Type
+ #define PyString_Check PyUnicode_Check
+ #define PyString_CheckExact PyUnicode_CheckExact
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj)
+ #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj)
+#else
+ #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj))
+ #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj))
+#endif
+#ifndef PySet_CheckExact
+ #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type)
+#endif
+#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception)
+#if PY_MAJOR_VERSION >= 3
+ #define PyIntObject PyLongObject
+ #define PyInt_Type PyLong_Type
+ #define PyInt_Check(op) PyLong_Check(op)
+ #define PyInt_CheckExact(op) PyLong_CheckExact(op)
+ #define PyInt_FromString PyLong_FromString
+ #define PyInt_FromUnicode PyLong_FromUnicode
+ #define PyInt_FromLong PyLong_FromLong
+ #define PyInt_FromSize_t PyLong_FromSize_t
+ #define PyInt_FromSsize_t PyLong_FromSsize_t
+ #define PyInt_AsLong PyLong_AsLong
+ #define PyInt_AS_LONG PyLong_AS_LONG
+ #define PyInt_AsSsize_t PyLong_AsSsize_t
+ #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask
+ #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask
+ #define PyNumber_Int PyNumber_Long
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define PyBoolObject PyLongObject
+#endif
+#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY
+ #ifndef PyUnicode_InternFromString
+ #define PyUnicode_InternFromString(s) PyUnicode_FromString(s)
+ #endif
+#endif
+#if PY_VERSION_HEX < 0x030200A4
+ typedef long Py_hash_t;
+ #define __Pyx_PyInt_FromHash_t PyInt_FromLong
+ #define __Pyx_PyInt_AsHash_t PyInt_AsLong
+#else
+ #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t
+ #define __Pyx_PyInt_AsHash_t PyInt_AsSsize_t
+#endif
+#if PY_MAJOR_VERSION >= 3
+ #define __Pyx_PyMethod_New(func, self, klass) ((self) ? PyMethod_New(func, self) : PyInstanceMethod_New(func))
+#else
+ #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass)
+#endif
+#ifndef __has_attribute
+ #define __has_attribute(x) 0
+#endif
+#ifndef __has_cpp_attribute
+ #define __has_cpp_attribute(x) 0
+#endif
+#if CYTHON_USE_ASYNC_SLOTS
+ #if PY_VERSION_HEX >= 0x030500B1
+ #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods
+ #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async)
+ #else
+ #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved))
+ #endif
+#else
+ #define __Pyx_PyType_AsAsync(obj) NULL
+#endif
+#ifndef __Pyx_PyAsyncMethodsStruct
+ typedef struct {
+ unaryfunc am_await;
+ unaryfunc am_aiter;
+ unaryfunc am_anext;
+ } __Pyx_PyAsyncMethodsStruct;
+#endif
+#ifndef CYTHON_RESTRICT
+ #if defined(__GNUC__)
+ #define CYTHON_RESTRICT __restrict__
+ #elif defined(_MSC_VER) && _MSC_VER >= 1400
+ #define CYTHON_RESTRICT __restrict
+ #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define CYTHON_RESTRICT restrict
+ #else
+ #define CYTHON_RESTRICT
+ #endif
+#endif
+#ifndef CYTHON_UNUSED
+# if defined(__GNUC__)
+# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER))
+# define CYTHON_UNUSED __attribute__ ((__unused__))
+# else
+# define CYTHON_UNUSED
+# endif
+#endif
+#ifndef CYTHON_MAYBE_UNUSED_VAR
+# if defined(__cplusplus)
+ template<class T> void CYTHON_MAYBE_UNUSED_VAR( const T& ) { }
+# else
+# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x)
+# endif
+#endif
+#ifndef CYTHON_NCP_UNUSED
+# if CYTHON_COMPILING_IN_CPYTHON
+# define CYTHON_NCP_UNUSED
+# else
+# define CYTHON_NCP_UNUSED CYTHON_UNUSED
+# endif
+#endif
+#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None)
+#ifdef _MSC_VER
+ #ifndef _MSC_STDINT_H_
+ #if _MSC_VER < 1300
+ typedef unsigned char uint8_t;
+ typedef unsigned int uint32_t;
+ #else
+ typedef unsigned __int8 uint8_t;
+ typedef unsigned __int32 uint32_t;
+ #endif
+ #endif
+#else
+ #include <stdint.h>
+#endif
+#ifndef CYTHON_FALLTHROUGH
+ #if defined(__cplusplus) && __cplusplus >= 201103L
+ #if __has_cpp_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH [[fallthrough]]
+ #elif __has_cpp_attribute(clang::fallthrough)
+ #define CYTHON_FALLTHROUGH [[clang::fallthrough]]
+ #elif __has_cpp_attribute(gnu::fallthrough)
+ #define CYTHON_FALLTHROUGH [[gnu::fallthrough]]
+ #endif
+ #endif
+ #ifndef CYTHON_FALLTHROUGH
+ #if __has_attribute(fallthrough)
+ #define CYTHON_FALLTHROUGH __attribute__((fallthrough))
+ #else
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+ #if defined(__clang__ ) && defined(__apple_build_version__)
+ #if __apple_build_version__ < 7000000
+ #undef CYTHON_FALLTHROUGH
+ #define CYTHON_FALLTHROUGH
+ #endif
+ #endif
+#endif
+
+#ifndef CYTHON_INLINE
+ #if defined(__clang__)
+ #define CYTHON_INLINE __inline__ __attribute__ ((__unused__))
+ #elif defined(__GNUC__)
+ #define CYTHON_INLINE __inline__
+ #elif defined(_MSC_VER)
+ #define CYTHON_INLINE __inline
+ #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define CYTHON_INLINE inline
+ #else
+ #define CYTHON_INLINE
+ #endif
+#endif
+
+#if defined(WIN32) || defined(MS_WINDOWS)
+ #define _USE_MATH_DEFINES
+#endif
+#include <math.h>
+#ifdef NAN
+#define __PYX_NAN() ((float) NAN)
+#else
+static CYTHON_INLINE float __PYX_NAN() {
+ float value;
+ memset(&value, 0xFF, sizeof(value));
+ return value;
+}
+#endif
+#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL)
+#define __Pyx_truncl trunc
+#else
+#define __Pyx_truncl truncl
+#endif
+
+
+#define __PYX_ERR(f_index, lineno, Ln_error) \
+{ \
+ __pyx_filename = __pyx_f[f_index]; __pyx_lineno = lineno; __pyx_clineno = __LINE__; goto Ln_error; \
+}
+
+#ifndef __PYX_EXTERN_C
+ #ifdef __cplusplus
+ #define __PYX_EXTERN_C extern "C"
+ #else
+ #define __PYX_EXTERN_C extern
+ #endif
+#endif
+
+#define __PYX_HAVE__pykdtree__kdtree
+#define __PYX_HAVE_API__pykdtree__kdtree
+#include <string.h>
+#include <stdio.h>
+#include "numpy/arrayobject.h"
+#include "numpy/ufuncobject.h"
+#include <stdint.h>
+#ifdef _OPENMP
+#include <omp.h>
+#endif /* _OPENMP */
+
+#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS)
+#define CYTHON_WITHOUT_ASSERTIONS
+#endif
+
+typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding;
+ const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry;
+
+#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0
+#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT 0
+#define __PYX_DEFAULT_STRING_ENCODING ""
+#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString
+#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
+#define __Pyx_uchar_cast(c) ((unsigned char)c)
+#define __Pyx_long_cast(x) ((long)x)
+#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\
+ (sizeof(type) < sizeof(Py_ssize_t)) ||\
+ (sizeof(type) > sizeof(Py_ssize_t) &&\
+ likely(v < (type)PY_SSIZE_T_MAX ||\
+ v == (type)PY_SSIZE_T_MAX) &&\
+ (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\
+ v == (type)PY_SSIZE_T_MIN))) ||\
+ (sizeof(type) == sizeof(Py_ssize_t) &&\
+ (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\
+ v == (type)PY_SSIZE_T_MAX))) )
+#if defined (__cplusplus) && __cplusplus >= 201103L
+ #include <cstdlib>
+ #define __Pyx_sst_abs(value) std::abs(value)
+#elif SIZEOF_INT >= SIZEOF_SIZE_T
+ #define __Pyx_sst_abs(value) abs(value)
+#elif SIZEOF_LONG >= SIZEOF_SIZE_T
+ #define __Pyx_sst_abs(value) labs(value)
+#elif defined (_MSC_VER)
+ #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value))
+#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
+ #define __Pyx_sst_abs(value) llabs(value)
+#elif defined (__GNUC__)
+ #define __Pyx_sst_abs(value) __builtin_llabs(value)
+#else
+ #define __Pyx_sst_abs(value) ((value<0) ? -value : value)
+#endif
+static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*);
+static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length);
+#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s))
+#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l)
+#define __Pyx_PyBytes_FromString PyBytes_FromString
+#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize
+static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*);
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString
+ #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize
+#else
+ #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString
+ #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize
+#endif
+#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s))
+#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s))
+#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s)
+#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s)
+#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s)
+#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s)
+#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s)
+static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) {
+ const Py_UNICODE *u_end = u;
+ while (*u_end++) ;
+ return (size_t)(u_end - u - 1);
+}
+#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u))
+#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode
+#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode
+#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj)
+#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None)
+#define __Pyx_PyBool_FromLong(b) ((b) ? __Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False))
+static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*);
+static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x);
+#define __Pyx_PySequence_Tuple(obj)\
+ (likely(PyTuple_CheckExact(obj)) ? __Pyx_NewRef(obj) : PySequence_Tuple(obj))
+static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*);
+static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t);
+#if CYTHON_ASSUME_SAFE_MACROS
+#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x))
+#else
+#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x)
+#endif
+#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x))
+#if PY_MAJOR_VERSION >= 3
+#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x))
+#else
+#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x))
+#endif
+#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x))
+#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII
+static int __Pyx_sys_getdefaultencoding_not_ascii;
+static int __Pyx_init_sys_getdefaultencoding_params(void) {
+ PyObject* sys;
+ PyObject* default_encoding = NULL;
+ PyObject* ascii_chars_u = NULL;
+ PyObject* ascii_chars_b = NULL;
+ const char* default_encoding_c;
+ sys = PyImport_ImportModule("sys");
+ if (!sys) goto bad;
+ default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL);
+ Py_DECREF(sys);
+ if (!default_encoding) goto bad;
+ default_encoding_c = PyBytes_AsString(default_encoding);
+ if (!default_encoding_c) goto bad;
+ if (strcmp(default_encoding_c, "ascii") == 0) {
+ __Pyx_sys_getdefaultencoding_not_ascii = 0;
+ } else {
+ char ascii_chars[128];
+ int c;
+ for (c = 0; c < 128; c++) {
+ ascii_chars[c] = c;
+ }
+ __Pyx_sys_getdefaultencoding_not_ascii = 1;
+ ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL);
+ if (!ascii_chars_u) goto bad;
+ ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL);
+ if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) {
+ PyErr_Format(
+ PyExc_ValueError,
+ "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.",
+ default_encoding_c);
+ goto bad;
+ }
+ Py_DECREF(ascii_chars_u);
+ Py_DECREF(ascii_chars_b);
+ }
+ Py_DECREF(default_encoding);
+ return 0;
+bad:
+ Py_XDECREF(default_encoding);
+ Py_XDECREF(ascii_chars_u);
+ Py_XDECREF(ascii_chars_b);
+ return -1;
+}
+#endif
+#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3
+#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL)
+#else
+#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL)
+#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT
+static char* __PYX_DEFAULT_STRING_ENCODING;
+static int __Pyx_init_sys_getdefaultencoding_params(void) {
+ PyObject* sys;
+ PyObject* default_encoding = NULL;
+ char* default_encoding_c;
+ sys = PyImport_ImportModule("sys");
+ if (!sys) goto bad;
+ default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL);
+ Py_DECREF(sys);
+ if (!default_encoding) goto bad;
+ default_encoding_c = PyBytes_AsString(default_encoding);
+ if (!default_encoding_c) goto bad;
+ __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c));
+ if (!__PYX_DEFAULT_STRING_ENCODING) goto bad;
+ strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c);
+ Py_DECREF(default_encoding);
+ return 0;
+bad:
+ Py_XDECREF(default_encoding);
+ return -1;
+}
+#endif
+#endif
+
+
+/* Test for GCC > 2.95 */
+#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95)))
+ #define likely(x) __builtin_expect(!!(x), 1)
+ #define unlikely(x) __builtin_expect(!!(x), 0)
+#else /* !__GNUC__ or GCC < 2.95 */
+ #define likely(x) (x)
+ #define unlikely(x) (x)
+#endif /* __GNUC__ */
+static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; }
+
+static PyObject *__pyx_m = NULL;
+static PyObject *__pyx_d;
+static PyObject *__pyx_b;
+static PyObject *__pyx_cython_runtime;
+static PyObject *__pyx_empty_tuple;
+static PyObject *__pyx_empty_bytes;
+static PyObject *__pyx_empty_unicode;
+static int __pyx_lineno;
+static int __pyx_clineno = 0;
+static const char * __pyx_cfilenm= __FILE__;
+static const char *__pyx_filename;
+
+/* Header.proto */
+#if !defined(CYTHON_CCOMPLEX)
+ #if defined(__cplusplus)
+ #define CYTHON_CCOMPLEX 1
+ #elif defined(_Complex_I)
+ #define CYTHON_CCOMPLEX 1
+ #else
+ #define CYTHON_CCOMPLEX 0
+ #endif
+#endif
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ #include <complex>
+ #else
+ #include <complex.h>
+ #endif
+#endif
+#if CYTHON_CCOMPLEX && !defined(__cplusplus) && defined(__sun__) && defined(__GNUC__)
+ #undef _Complex_I
+ #define _Complex_I 1.0fj
+#endif
+
+
+static const char *__pyx_f[] = {
+ "pykdtree/kdtree.pyx",
+ "stringsource",
+ "__init__.pxd",
+ "type.pxd",
+};
+/* BufferFormatStructs.proto */
+#define IS_UNSIGNED(type) (((type) -1) > 0)
+struct __Pyx_StructField_;
+#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0)
+typedef struct {
+ const char* name;
+ struct __Pyx_StructField_* fields;
+ size_t size;
+ size_t arraysize[8];
+ int ndim;
+ char typegroup;
+ char is_unsigned;
+ int flags;
+} __Pyx_TypeInfo;
+typedef struct __Pyx_StructField_ {
+ __Pyx_TypeInfo* type;
+ const char* name;
+ size_t offset;
+} __Pyx_StructField;
+typedef struct {
+ __Pyx_StructField* field;
+ size_t parent_offset;
+} __Pyx_BufFmt_StackElem;
+typedef struct {
+ __Pyx_StructField root;
+ __Pyx_BufFmt_StackElem* head;
+ size_t fmt_offset;
+ size_t new_count, enc_count;
+ size_t struct_alignment;
+ int is_complex;
+ char enc_type;
+ char new_packmode;
+ char enc_packmode;
+ char is_valid_array;
+} __Pyx_BufFmt_Context;
+
+/* NoFastGil.proto */
+#define __Pyx_PyGILState_Ensure PyGILState_Ensure
+#define __Pyx_PyGILState_Release PyGILState_Release
+#define __Pyx_FastGIL_Remember()
+#define __Pyx_FastGIL_Forget()
+#define __Pyx_FastGilFuncInit()
+
+/* ForceInitThreads.proto */
+#ifndef __PYX_FORCE_INIT_THREADS
+ #define __PYX_FORCE_INIT_THREADS 0
+#endif
+
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":743
+ * # in Cython to enable them only on the right systems.
+ *
+ * ctypedef npy_int8 int8_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t
+ */
+typedef npy_int8 __pyx_t_5numpy_int8_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":744
+ *
+ * ctypedef npy_int8 int8_t
+ * ctypedef npy_int16 int16_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int32 int32_t
+ * ctypedef npy_int64 int64_t
+ */
+typedef npy_int16 __pyx_t_5numpy_int16_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":745
+ * ctypedef npy_int8 int8_t
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_int64 int64_t
+ * #ctypedef npy_int96 int96_t
+ */
+typedef npy_int32 __pyx_t_5numpy_int32_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":746
+ * ctypedef npy_int16 int16_t
+ * ctypedef npy_int32 int32_t
+ * ctypedef npy_int64 int64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_int96 int96_t
+ * #ctypedef npy_int128 int128_t
+ */
+typedef npy_int64 __pyx_t_5numpy_int64_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":750
+ * #ctypedef npy_int128 int128_t
+ *
+ * ctypedef npy_uint8 uint8_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t
+ */
+typedef npy_uint8 __pyx_t_5numpy_uint8_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":751
+ *
+ * ctypedef npy_uint8 uint8_t
+ * ctypedef npy_uint16 uint16_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint32 uint32_t
+ * ctypedef npy_uint64 uint64_t
+ */
+typedef npy_uint16 __pyx_t_5numpy_uint16_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":752
+ * ctypedef npy_uint8 uint8_t
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uint64 uint64_t
+ * #ctypedef npy_uint96 uint96_t
+ */
+typedef npy_uint32 __pyx_t_5numpy_uint32_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":753
+ * ctypedef npy_uint16 uint16_t
+ * ctypedef npy_uint32 uint32_t
+ * ctypedef npy_uint64 uint64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_uint96 uint96_t
+ * #ctypedef npy_uint128 uint128_t
+ */
+typedef npy_uint64 __pyx_t_5numpy_uint64_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":757
+ * #ctypedef npy_uint128 uint128_t
+ *
+ * ctypedef npy_float32 float32_t # <<<<<<<<<<<<<<
+ * ctypedef npy_float64 float64_t
+ * #ctypedef npy_float80 float80_t
+ */
+typedef npy_float32 __pyx_t_5numpy_float32_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":758
+ *
+ * ctypedef npy_float32 float32_t
+ * ctypedef npy_float64 float64_t # <<<<<<<<<<<<<<
+ * #ctypedef npy_float80 float80_t
+ * #ctypedef npy_float128 float128_t
+ */
+typedef npy_float64 __pyx_t_5numpy_float64_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":767
+ * # The int types are mapped a bit surprising --
+ * # numpy.int corresponds to 'l' and numpy.long to 'q'
+ * ctypedef npy_long int_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longlong long_t
+ * ctypedef npy_longlong longlong_t
+ */
+typedef npy_long __pyx_t_5numpy_int_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":768
+ * # numpy.int corresponds to 'l' and numpy.long to 'q'
+ * ctypedef npy_long int_t
+ * ctypedef npy_longlong long_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longlong longlong_t
+ *
+ */
+typedef npy_longlong __pyx_t_5numpy_long_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":769
+ * ctypedef npy_long int_t
+ * ctypedef npy_longlong long_t
+ * ctypedef npy_longlong longlong_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_ulong uint_t
+ */
+typedef npy_longlong __pyx_t_5numpy_longlong_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":771
+ * ctypedef npy_longlong longlong_t
+ *
+ * ctypedef npy_ulong uint_t # <<<<<<<<<<<<<<
+ * ctypedef npy_ulonglong ulong_t
+ * ctypedef npy_ulonglong ulonglong_t
+ */
+typedef npy_ulong __pyx_t_5numpy_uint_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":772
+ *
+ * ctypedef npy_ulong uint_t
+ * ctypedef npy_ulonglong ulong_t # <<<<<<<<<<<<<<
+ * ctypedef npy_ulonglong ulonglong_t
+ *
+ */
+typedef npy_ulonglong __pyx_t_5numpy_ulong_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":773
+ * ctypedef npy_ulong uint_t
+ * ctypedef npy_ulonglong ulong_t
+ * ctypedef npy_ulonglong ulonglong_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_intp intp_t
+ */
+typedef npy_ulonglong __pyx_t_5numpy_ulonglong_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":775
+ * ctypedef npy_ulonglong ulonglong_t
+ *
+ * ctypedef npy_intp intp_t # <<<<<<<<<<<<<<
+ * ctypedef npy_uintp uintp_t
+ *
+ */
+typedef npy_intp __pyx_t_5numpy_intp_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":776
+ *
+ * ctypedef npy_intp intp_t
+ * ctypedef npy_uintp uintp_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_double float_t
+ */
+typedef npy_uintp __pyx_t_5numpy_uintp_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":778
+ * ctypedef npy_uintp uintp_t
+ *
+ * ctypedef npy_double float_t # <<<<<<<<<<<<<<
+ * ctypedef npy_double double_t
+ * ctypedef npy_longdouble longdouble_t
+ */
+typedef npy_double __pyx_t_5numpy_float_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":779
+ *
+ * ctypedef npy_double float_t
+ * ctypedef npy_double double_t # <<<<<<<<<<<<<<
+ * ctypedef npy_longdouble longdouble_t
+ *
+ */
+typedef npy_double __pyx_t_5numpy_double_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":780
+ * ctypedef npy_double float_t
+ * ctypedef npy_double double_t
+ * ctypedef npy_longdouble longdouble_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_cfloat cfloat_t
+ */
+typedef npy_longdouble __pyx_t_5numpy_longdouble_t;
+/* Declarations.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ typedef ::std::complex< float > __pyx_t_float_complex;
+ #else
+ typedef float _Complex __pyx_t_float_complex;
+ #endif
+#else
+ typedef struct { float real, imag; } __pyx_t_float_complex;
+#endif
+static CYTHON_INLINE __pyx_t_float_complex __pyx_t_float_complex_from_parts(float, float);
+
+/* Declarations.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ typedef ::std::complex< double > __pyx_t_double_complex;
+ #else
+ typedef double _Complex __pyx_t_double_complex;
+ #endif
+#else
+ typedef struct { double real, imag; } __pyx_t_double_complex;
+#endif
+static CYTHON_INLINE __pyx_t_double_complex __pyx_t_double_complex_from_parts(double, double);
+
+
+/*--- Type declarations ---*/
+struct __pyx_obj_8pykdtree_6kdtree_KDTree;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":782
+ * ctypedef npy_longdouble longdouble_t
+ *
+ * ctypedef npy_cfloat cfloat_t # <<<<<<<<<<<<<<
+ * ctypedef npy_cdouble cdouble_t
+ * ctypedef npy_clongdouble clongdouble_t
+ */
+typedef npy_cfloat __pyx_t_5numpy_cfloat_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":783
+ *
+ * ctypedef npy_cfloat cfloat_t
+ * ctypedef npy_cdouble cdouble_t # <<<<<<<<<<<<<<
+ * ctypedef npy_clongdouble clongdouble_t
+ *
+ */
+typedef npy_cdouble __pyx_t_5numpy_cdouble_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":784
+ * ctypedef npy_cfloat cfloat_t
+ * ctypedef npy_cdouble cdouble_t
+ * ctypedef npy_clongdouble clongdouble_t # <<<<<<<<<<<<<<
+ *
+ * ctypedef npy_cdouble complex_t
+ */
+typedef npy_clongdouble __pyx_t_5numpy_clongdouble_t;
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":786
+ * ctypedef npy_clongdouble clongdouble_t
+ *
+ * ctypedef npy_cdouble complex_t # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew1(a):
+ */
+typedef npy_cdouble __pyx_t_5numpy_complex_t;
+struct __pyx_t_8pykdtree_6kdtree_node_float;
+struct __pyx_t_8pykdtree_6kdtree_tree_float;
+struct __pyx_t_8pykdtree_6kdtree_node_double;
+struct __pyx_t_8pykdtree_6kdtree_tree_double;
+
+/* "pykdtree/kdtree.pyx":25
+ *
+ * # Node structure
+ * cdef struct node_float: # <<<<<<<<<<<<<<
+ * float cut_val
+ * int8_t cut_dim
+ */
+struct __pyx_t_8pykdtree_6kdtree_node_float {
+ float cut_val;
+ int8_t cut_dim;
+ uint32_t start_idx;
+ uint32_t n;
+ float cut_bounds_lv;
+ float cut_bounds_hv;
+ struct __pyx_t_8pykdtree_6kdtree_node_float *left_child;
+ struct __pyx_t_8pykdtree_6kdtree_node_float *right_child;
+};
+
+/* "pykdtree/kdtree.pyx":35
+ * node_float *right_child
+ *
+ * cdef struct tree_float: # <<<<<<<<<<<<<<
+ * float *bbox
+ * int8_t no_dims
+ */
+struct __pyx_t_8pykdtree_6kdtree_tree_float {
+ float *bbox;
+ int8_t no_dims;
+ uint32_t *pidx;
+ struct __pyx_t_8pykdtree_6kdtree_node_float *root;
+};
+
+/* "pykdtree/kdtree.pyx":41
+ * node_float *root
+ *
+ * cdef struct node_double: # <<<<<<<<<<<<<<
+ * double cut_val
+ * int8_t cut_dim
+ */
+struct __pyx_t_8pykdtree_6kdtree_node_double {
+ double cut_val;
+ int8_t cut_dim;
+ uint32_t start_idx;
+ uint32_t n;
+ double cut_bounds_lv;
+ double cut_bounds_hv;
+ struct __pyx_t_8pykdtree_6kdtree_node_double *left_child;
+ struct __pyx_t_8pykdtree_6kdtree_node_double *right_child;
+};
+
+/* "pykdtree/kdtree.pyx":51
+ * node_double *right_child
+ *
+ * cdef struct tree_double: # <<<<<<<<<<<<<<
+ * double *bbox
+ * int8_t no_dims
+ */
+struct __pyx_t_8pykdtree_6kdtree_tree_double {
+ double *bbox;
+ int8_t no_dims;
+ uint32_t *pidx;
+ struct __pyx_t_8pykdtree_6kdtree_node_double *root;
+};
+
+/* "pykdtree/kdtree.pyx":65
+ * cdef extern void delete_tree_double(tree_double *kdtree)
+ *
+ * cdef class KDTree: # <<<<<<<<<<<<<<
+ * """kd-tree for fast nearest-neighbour lookup.
+ * The interface is made to resemble the scipy.spatial kd-tree except
+ */
+struct __pyx_obj_8pykdtree_6kdtree_KDTree {
+ PyObject_HEAD
+ struct __pyx_t_8pykdtree_6kdtree_tree_float *_kdtree_float;
+ struct __pyx_t_8pykdtree_6kdtree_tree_double *_kdtree_double;
+ PyArrayObject *data_pts;
+ PyArrayObject *data;
+ float *_data_pts_data_float;
+ double *_data_pts_data_double;
+ uint32_t n;
+ int8_t ndim;
+ uint32_t leafsize;
+};
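+/* Note on the KDTree extension object above: it carries either a native
+ * single-precision tree (_kdtree_float / _data_pts_data_float) or a native
+ * double-precision tree (_kdtree_double / _data_pts_data_double). Which pair
+ * is populated is decided in __init__ below from the dtype of data_pts:
+ * float32 input keeps the float tree, anything else is converted to float64
+ * and uses the double tree. n, ndim and leafsize mirror the readonly
+ * attributes exposed on the Python KDTree class. */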
+
+
+/* --- Runtime support code (head) --- */
+/* Refnanny.proto */
+#ifndef CYTHON_REFNANNY
+ #define CYTHON_REFNANNY 0
+#endif
+#if CYTHON_REFNANNY
+ typedef struct {
+ void (*INCREF)(void*, PyObject*, int);
+ void (*DECREF)(void*, PyObject*, int);
+ void (*GOTREF)(void*, PyObject*, int);
+ void (*GIVEREF)(void*, PyObject*, int);
+ void* (*SetupContext)(const char*, int, const char*);
+ void (*FinishContext)(void**);
+ } __Pyx_RefNannyAPIStruct;
+ static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL;
+ static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname);
+ #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL;
+#ifdef WITH_THREAD
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)\
+ if (acquire_gil) {\
+ PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
+ PyGILState_Release(__pyx_gilstate_save);\
+ } else {\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\
+ }
+#else
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)\
+ __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__)
+#endif
+ #define __Pyx_RefNannyFinishContext()\
+ __Pyx_RefNanny->FinishContext(&__pyx_refnanny)
+ #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__)
+ #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0)
+ #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0)
+ #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0)
+ #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0)
+#else
+ #define __Pyx_RefNannyDeclarations
+ #define __Pyx_RefNannySetupContext(name, acquire_gil)
+ #define __Pyx_RefNannyFinishContext()
+ #define __Pyx_INCREF(r) Py_INCREF(r)
+ #define __Pyx_DECREF(r) Py_DECREF(r)
+ #define __Pyx_GOTREF(r)
+ #define __Pyx_GIVEREF(r)
+ #define __Pyx_XINCREF(r) Py_XINCREF(r)
+ #define __Pyx_XDECREF(r) Py_XDECREF(r)
+ #define __Pyx_XGOTREF(r)
+ #define __Pyx_XGIVEREF(r)
+#endif
+#define __Pyx_XDECREF_SET(r, v) do {\
+ PyObject *tmp = (PyObject *) r;\
+ r = v; __Pyx_XDECREF(tmp);\
+ } while (0)
+#define __Pyx_DECREF_SET(r, v) do {\
+ PyObject *tmp = (PyObject *) r;\
+ r = v; __Pyx_DECREF(tmp);\
+ } while (0)
+#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0)
+#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0)
+
+/* PyObjectGetAttrStr.proto */
+#if CYTHON_USE_TYPE_SLOTS
+static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) {
+ PyTypeObject* tp = Py_TYPE(obj);
+ if (likely(tp->tp_getattro))
+ return tp->tp_getattro(obj, attr_name);
+#if PY_MAJOR_VERSION < 3
+ if (likely(tp->tp_getattr))
+ return tp->tp_getattr(obj, PyString_AS_STRING(attr_name));
+#endif
+ return PyObject_GetAttr(obj, attr_name);
+}
+#else
+#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n)
+#endif
+
+/* GetBuiltinName.proto */
+static PyObject *__Pyx_GetBuiltinName(PyObject *name);
+
+/* RaiseArgTupleInvalid.proto */
+static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact,
+ Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found);
+
+/* KeywordStringCheck.proto */
+static int __Pyx_CheckKeywordStrings(PyObject *kwdict, const char* function_name, int kw_allowed);
+
+/* RaiseDoubleKeywords.proto */
+static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name);
+
+/* ParseKeywords.proto */
+static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\
+ PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\
+ const char* function_name);
+
+/* ArgTypeTest.proto */
+#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\
+ ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 1 :\
+ __Pyx__ArgTypeTest(obj, type, name, exact))
+static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact);
+
+/* PyObjectCall.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw);
+#else
+#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw)
+#endif
+
+/* PyThreadStateGet.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate;
+#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current;
+#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type
+#else
+#define __Pyx_PyThreadState_declare
+#define __Pyx_PyThreadState_assign
+#define __Pyx_PyErr_Occurred() PyErr_Occurred()
+#endif
+
+/* PyErrFetchRestore.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL)
+#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb)
+#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb)
+#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb)
+#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
+static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL))
+#else
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
+#endif
+#else
+#define __Pyx_PyErr_Clear() PyErr_Clear()
+#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc)
+#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb)
+#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb)
+#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb)
+#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb)
+#endif
+
+/* RaiseException.proto */
+static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause);
+
+/* GetModuleGlobalName.proto */
+static CYTHON_INLINE PyObject *__Pyx_GetModuleGlobalName(PyObject *name);
+
+/* PyCFunctionFastCall.proto */
+#if CYTHON_FAST_PYCCALL
+static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs);
+#else
+#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL)
+#endif
+
+/* PyFunctionFastCall.proto */
+#if CYTHON_FAST_PYCALL
+#define __Pyx_PyFunction_FastCall(func, args, nargs)\
+ __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL)
+#if 1 || PY_VERSION_HEX < 0x030600B1
+static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, int nargs, PyObject *kwargs);
+#else
+#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs)
+#endif
+#endif
+
+/* PyObjectCallMethO.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg);
+#endif
+
+/* PyObjectCallOneArg.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg);
+
+/* PyObjectCallNoArg.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+static CYTHON_INLINE PyObject* __Pyx_PyObject_CallNoArg(PyObject *func);
+#else
+#define __Pyx_PyObject_CallNoArg(func) __Pyx_PyObject_Call(func, __pyx_empty_tuple, NULL)
+#endif
+
+/* ExtTypeTest.proto */
+static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type);
+
+/* IsLittleEndian.proto */
+static CYTHON_INLINE int __Pyx_Is_Little_Endian(void);
+
+/* BufferFormatCheck.proto */
+static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts);
+static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx,
+ __Pyx_BufFmt_StackElem* stack,
+ __Pyx_TypeInfo* type);
+
+/* BufferGetAndValidate.proto */
+#define __Pyx_GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack)\
+ ((obj == Py_None || obj == NULL) ?\
+ (__Pyx_ZeroBuffer(buf), 0) :\
+ __Pyx__GetBufferAndValidate(buf, obj, dtype, flags, nd, cast, stack))
+static int __Pyx__GetBufferAndValidate(Py_buffer* buf, PyObject* obj,
+ __Pyx_TypeInfo* dtype, int flags, int nd, int cast, __Pyx_BufFmt_StackElem* stack);
+static void __Pyx_ZeroBuffer(Py_buffer* buf);
+static CYTHON_INLINE void __Pyx_SafeReleaseBuffer(Py_buffer* info);
+static Py_ssize_t __Pyx_minusones[] = { -1, -1, -1, -1, -1, -1, -1, -1 };
+static Py_ssize_t __Pyx_zeros[] = { 0, 0, 0, 0, 0, 0, 0, 0 };
+
+/* BufferFallbackError.proto */
+static void __Pyx_RaiseBufferFallbackError(void);
+
+/* DictGetItem.proto */
+#if PY_MAJOR_VERSION >= 3 && !CYTHON_COMPILING_IN_PYPY
+static PyObject *__Pyx_PyDict_GetItem(PyObject *d, PyObject* key) {
+ PyObject *value;
+ value = PyDict_GetItemWithError(d, key);
+ if (unlikely(!value)) {
+ if (!PyErr_Occurred()) {
+ PyObject* args = PyTuple_Pack(1, key);
+ if (likely(args))
+ PyErr_SetObject(PyExc_KeyError, args);
+ Py_XDECREF(args);
+ }
+ return NULL;
+ }
+ Py_INCREF(value);
+ return value;
+}
+#else
+ #define __Pyx_PyDict_GetItem(d, key) PyObject_GetItem(d, key)
+#endif
+
+/* RaiseTooManyValuesToUnpack.proto */
+static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected);
+
+/* RaiseNeedMoreValuesToUnpack.proto */
+static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index);
+
+/* RaiseNoneIterError.proto */
+static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void);
+
+/* SaveResetException.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb)
+static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb);
+#else
+#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb)
+#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb)
+#endif
+
+/* PyErrExceptionMatches.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err)
+static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err);
+#else
+#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err)
+#endif
+
+/* GetException.proto */
+#if CYTHON_FAST_THREAD_STATE
+#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb)
+static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb);
+#else
+static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb);
+#endif
+
+/* SetupReduce.proto */
+static int __Pyx_setup_reduce(PyObject* type_obj);
+
+/* Import.proto */
+static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level);
+
+/* CLineInTraceback.proto */
+#ifdef CYTHON_CLINE_IN_TRACEBACK
+#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0)
+#else
+static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line);
+#endif
+
+/* CodeObjectCache.proto */
+typedef struct {
+ PyCodeObject* code_object;
+ int code_line;
+} __Pyx_CodeObjectCacheEntry;
+struct __Pyx_CodeObjectCache {
+ int count;
+ int max_count;
+ __Pyx_CodeObjectCacheEntry* entries;
+};
+static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL};
+static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line);
+static PyCodeObject *__pyx_find_code_object(int code_line);
+static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object);
+
+/* AddTraceback.proto */
+static void __Pyx_AddTraceback(const char *funcname, int c_line,
+ int py_line, const char *filename);
+
+/* BufferStructDeclare.proto */
+typedef struct {
+ Py_ssize_t shape, strides, suboffsets;
+} __Pyx_Buf_DimInfo;
+typedef struct {
+ size_t refcount;
+ Py_buffer pybuffer;
+} __Pyx_Buffer;
+typedef struct {
+ __Pyx_Buffer *rcbuffer;
+ char *data;
+ __Pyx_Buf_DimInfo diminfo[8];
+} __Pyx_LocalBuf_ND;
+
+#if PY_MAJOR_VERSION < 3
+ static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags);
+ static void __Pyx_ReleaseBuffer(Py_buffer *view);
+#else
+ #define __Pyx_GetBuffer PyObject_GetBuffer
+ #define __Pyx_ReleaseBuffer PyBuffer_Release
+#endif
+
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_uint32_t(uint32_t value);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int8_t(int8_t value);
+
+/* RealImag.proto */
+#if CYTHON_CCOMPLEX
+ #ifdef __cplusplus
+ #define __Pyx_CREAL(z) ((z).real())
+ #define __Pyx_CIMAG(z) ((z).imag())
+ #else
+ #define __Pyx_CREAL(z) (__real__(z))
+ #define __Pyx_CIMAG(z) (__imag__(z))
+ #endif
+#else
+ #define __Pyx_CREAL(z) ((z).real)
+ #define __Pyx_CIMAG(z) ((z).imag)
+#endif
+#if defined(__cplusplus) && CYTHON_CCOMPLEX\
+ && (defined(_WIN32) || defined(__clang__) || (defined(__GNUC__) && (__GNUC__ >= 5 || __GNUC__ == 4 && __GNUC_MINOR__ >= 4 )) || __cplusplus >= 201103)
+ #define __Pyx_SET_CREAL(z,x) ((z).real(x))
+ #define __Pyx_SET_CIMAG(z,y) ((z).imag(y))
+#else
+ #define __Pyx_SET_CREAL(z,x) __Pyx_CREAL(z) = (x)
+ #define __Pyx_SET_CIMAG(z,y) __Pyx_CIMAG(z) = (y)
+#endif
+
+/* Arithmetic.proto */
+#if CYTHON_CCOMPLEX
+ #define __Pyx_c_eq_float(a, b) ((a)==(b))
+ #define __Pyx_c_sum_float(a, b) ((a)+(b))
+ #define __Pyx_c_diff_float(a, b) ((a)-(b))
+ #define __Pyx_c_prod_float(a, b) ((a)*(b))
+ #define __Pyx_c_quot_float(a, b) ((a)/(b))
+ #define __Pyx_c_neg_float(a) (-(a))
+ #ifdef __cplusplus
+ #define __Pyx_c_is_zero_float(z) ((z)==(float)0)
+ #define __Pyx_c_conj_float(z) (::std::conj(z))
+ #if 1
+ #define __Pyx_c_abs_float(z) (::std::abs(z))
+ #define __Pyx_c_pow_float(a, b) (::std::pow(a, b))
+ #endif
+ #else
+ #define __Pyx_c_is_zero_float(z) ((z)==0)
+ #define __Pyx_c_conj_float(z) (conjf(z))
+ #if 1
+ #define __Pyx_c_abs_float(z) (cabsf(z))
+ #define __Pyx_c_pow_float(a, b) (cpowf(a, b))
+ #endif
+ #endif
+#else
+ static CYTHON_INLINE int __Pyx_c_eq_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_sum_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_diff_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_prod_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_quot_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_neg_float(__pyx_t_float_complex);
+ static CYTHON_INLINE int __Pyx_c_is_zero_float(__pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_conj_float(__pyx_t_float_complex);
+ #if 1
+ static CYTHON_INLINE float __Pyx_c_abs_float(__pyx_t_float_complex);
+ static CYTHON_INLINE __pyx_t_float_complex __Pyx_c_pow_float(__pyx_t_float_complex, __pyx_t_float_complex);
+ #endif
+#endif
+
+/* Arithmetic.proto */
+#if CYTHON_CCOMPLEX
+ #define __Pyx_c_eq_double(a, b) ((a)==(b))
+ #define __Pyx_c_sum_double(a, b) ((a)+(b))
+ #define __Pyx_c_diff_double(a, b) ((a)-(b))
+ #define __Pyx_c_prod_double(a, b) ((a)*(b))
+ #define __Pyx_c_quot_double(a, b) ((a)/(b))
+ #define __Pyx_c_neg_double(a) (-(a))
+ #ifdef __cplusplus
+ #define __Pyx_c_is_zero_double(z) ((z)==(double)0)
+ #define __Pyx_c_conj_double(z) (::std::conj(z))
+ #if 1
+ #define __Pyx_c_abs_double(z) (::std::abs(z))
+ #define __Pyx_c_pow_double(a, b) (::std::pow(a, b))
+ #endif
+ #else
+ #define __Pyx_c_is_zero_double(z) ((z)==0)
+ #define __Pyx_c_conj_double(z) (conj(z))
+ #if 1
+ #define __Pyx_c_abs_double(z) (cabs(z))
+ #define __Pyx_c_pow_double(a, b) (cpow(a, b))
+ #endif
+ #endif
+#else
+ static CYTHON_INLINE int __Pyx_c_eq_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_sum_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_diff_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_prod_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_quot_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_neg_double(__pyx_t_double_complex);
+ static CYTHON_INLINE int __Pyx_c_is_zero_double(__pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_conj_double(__pyx_t_double_complex);
+ #if 1
+ static CYTHON_INLINE double __Pyx_c_abs_double(__pyx_t_double_complex);
+ static CYTHON_INLINE __pyx_t_double_complex __Pyx_c_pow_double(__pyx_t_double_complex, __pyx_t_double_complex);
+ #endif
+#endif
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_enum__NPY_TYPES(enum NPY_TYPES value);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE uint32_t __Pyx_PyInt_As_uint32_t(PyObject *);
+
+/* CIntToPy.proto */
+static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value);
+
+/* CIntFromPy.proto */
+static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *);
+
+/* FastTypeChecks.proto */
+#if CYTHON_COMPILING_IN_CPYTHON
+#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type)
+static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b);
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type);
+static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2);
+#else
+#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type)
+#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type)
+#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2))
+#endif
+
+/* CheckBinaryVersion.proto */
+static int __Pyx_check_binary_version(void);
+
+/* PyIdentifierFromString.proto */
+#if !defined(__Pyx_PyIdentifier_FromString)
+#if PY_MAJOR_VERSION < 3
+ #define __Pyx_PyIdentifier_FromString(s) PyString_FromString(s)
+#else
+ #define __Pyx_PyIdentifier_FromString(s) PyUnicode_FromString(s)
+#endif
+#endif
+
+/* ModuleImport.proto */
+static PyObject *__Pyx_ImportModule(const char *name);
+
+/* TypeImport.proto */
+static PyTypeObject *__Pyx_ImportType(const char *module_name, const char *class_name, size_t size, int strict);
+
+/* InitStrings.proto */
+static int __Pyx_InitStrings(__Pyx_StringTabEntry *t);
+
+
+/* Module declarations from 'cpython.buffer' */
+
+/* Module declarations from 'libc.string' */
+
+/* Module declarations from 'libc.stdio' */
+
+/* Module declarations from '__builtin__' */
+
+/* Module declarations from 'cpython.type' */
+static PyTypeObject *__pyx_ptype_7cpython_4type_type = 0;
+
+/* Module declarations from 'cpython' */
+
+/* Module declarations from 'cpython.object' */
+
+/* Module declarations from 'cpython.ref' */
+
+/* Module declarations from 'cpython.mem' */
+
+/* Module declarations from 'numpy' */
+
+/* Module declarations from 'numpy' */
+static PyTypeObject *__pyx_ptype_5numpy_dtype = 0;
+static PyTypeObject *__pyx_ptype_5numpy_flatiter = 0;
+static PyTypeObject *__pyx_ptype_5numpy_broadcast = 0;
+static PyTypeObject *__pyx_ptype_5numpy_ndarray = 0;
+static PyTypeObject *__pyx_ptype_5numpy_ufunc = 0;
+static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *, char *, char *, int *); /*proto*/
+
+/* Module declarations from 'libc.stdint' */
+
+/* Module declarations from 'cython' */
+
+/* Module declarations from 'pykdtree.kdtree' */
+static PyTypeObject *__pyx_ptype_8pykdtree_6kdtree_KDTree = 0;
+__PYX_EXTERN_C DL_IMPORT(struct __pyx_t_8pykdtree_6kdtree_tree_float) *construct_tree_float(float *, int8_t, uint32_t, uint32_t); /*proto*/
+__PYX_EXTERN_C DL_IMPORT(void) search_tree_float(struct __pyx_t_8pykdtree_6kdtree_tree_float *, float *, float *, uint32_t, uint32_t, float, float, uint8_t *, uint32_t *, float *); /*proto*/
+__PYX_EXTERN_C DL_IMPORT(void) delete_tree_float(struct __pyx_t_8pykdtree_6kdtree_tree_float *); /*proto*/
+__PYX_EXTERN_C DL_IMPORT(struct __pyx_t_8pykdtree_6kdtree_tree_double) *construct_tree_double(double *, int8_t, uint32_t, uint32_t); /*proto*/
+__PYX_EXTERN_C DL_IMPORT(void) search_tree_double(struct __pyx_t_8pykdtree_6kdtree_tree_double *, double *, double *, uint32_t, uint32_t, double, double, uint8_t *, uint32_t *, double *); /*proto*/
+__PYX_EXTERN_C DL_IMPORT(void) delete_tree_double(struct __pyx_t_8pykdtree_6kdtree_tree_double *); /*proto*/
+static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 };
+static __Pyx_TypeInfo __Pyx_TypeInfo_double = { "double", NULL, sizeof(double), { 0 }, 0, 'R', 0, 0 };
+static __Pyx_TypeInfo __Pyx_TypeInfo_nn_uint32_t = { "uint32_t", NULL, sizeof(uint32_t), { 0 }, 0, IS_UNSIGNED(uint32_t) ? 'U' : 'I', IS_UNSIGNED(uint32_t), 0 };
+static __Pyx_TypeInfo __Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t = { "uint8_t", NULL, sizeof(__pyx_t_5numpy_uint8_t), { 0 }, 0, IS_UNSIGNED(__pyx_t_5numpy_uint8_t) ? 'U' : 'I', IS_UNSIGNED(__pyx_t_5numpy_uint8_t), 0 };
+#define __Pyx_MODULE_NAME "pykdtree.kdtree"
+extern int __pyx_module_is_main_pykdtree__kdtree;
+int __pyx_module_is_main_pykdtree__kdtree = 0;
+
+/* Implementation of 'pykdtree.kdtree' */
+static PyObject *__pyx_builtin_ValueError;
+static PyObject *__pyx_builtin_TypeError;
+static PyObject *__pyx_builtin_range;
+static PyObject *__pyx_builtin_RuntimeError;
+static PyObject *__pyx_builtin_ImportError;
+static const char __pyx_k_k[] = "k";
+static const char __pyx_k_np[] = "np";
+static const char __pyx_k_Inf[] = "Inf";
+static const char __pyx_k_eps[] = "eps";
+static const char __pyx_k_max[] = "max";
+static const char __pyx_k_main[] = "__main__";
+static const char __pyx_k_mask[] = "mask";
+static const char __pyx_k_name[] = "__name__";
+static const char __pyx_k_size[] = "size";
+static const char __pyx_k_sqrt[] = "sqrt";
+static const char __pyx_k_test[] = "__test__";
+static const char __pyx_k_dtype[] = "dtype";
+static const char __pyx_k_empty[] = "empty";
+static const char __pyx_k_finfo[] = "finfo";
+static const char __pyx_k_numpy[] = "numpy";
+static const char __pyx_k_range[] = "range";
+static const char __pyx_k_ravel[] = "ravel";
+static const char __pyx_k_uint8[] = "uint8";
+static const char __pyx_k_import[] = "__import__";
+static const char __pyx_k_reduce[] = "__reduce__";
+static const char __pyx_k_uint32[] = "uint32";
+static const char __pyx_k_float32[] = "float32";
+static const char __pyx_k_float64[] = "float64";
+static const char __pyx_k_reshape[] = "reshape";
+static const char __pyx_k_data_pts[] = "data_pts";
+static const char __pyx_k_getstate[] = "__getstate__";
+static const char __pyx_k_leafsize[] = "leafsize";
+static const char __pyx_k_setstate[] = "__setstate__";
+static const char __pyx_k_TypeError[] = "TypeError";
+static const char __pyx_k_query_pts[] = "query_pts";
+static const char __pyx_k_reduce_ex[] = "__reduce_ex__";
+static const char __pyx_k_sqr_dists[] = "sqr_dists";
+static const char __pyx_k_ValueError[] = "ValueError";
+static const char __pyx_k_ImportError[] = "ImportError";
+static const char __pyx_k_RuntimeError[] = "RuntimeError";
+static const char __pyx_k_reduce_cython[] = "__reduce_cython__";
+static const char __pyx_k_setstate_cython[] = "__setstate_cython__";
+static const char __pyx_k_ascontiguousarray[] = "ascontiguousarray";
+static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback";
+static const char __pyx_k_distance_upper_bound[] = "distance_upper_bound";
+static const char __pyx_k_eps_must_be_non_negative[] = "eps must be non-negative";
+static const char __pyx_k_ndarray_is_not_C_contiguous[] = "ndarray is not C contiguous";
+static const char __pyx_k_Data_and_query_points_must_have[] = "Data and query points must have same dimensions";
+static const char __pyx_k_Mask_must_have_the_same_size_as[] = "Mask must have the same size as data points";
+static const char __pyx_k_Type_mismatch_query_points_must[] = "Type mismatch. query points must be of type float32 when data points are of type float32";
+static const char __pyx_k_numpy_core_multiarray_failed_to[] = "numpy.core.multiarray failed to import";
+static const char __pyx_k_unknown_dtype_code_in_numpy_pxd[] = "unknown dtype code in numpy.pxd (%d)";
+static const char __pyx_k_Format_string_allocated_too_shor[] = "Format string allocated too short, see comment in numpy.pxd";
+static const char __pyx_k_Non_native_byte_order_not_suppor[] = "Non-native byte order not supported";
+static const char __pyx_k_Number_of_neighbours_must_be_gre[] = "Number of neighbours must be greater than zero";
+static const char __pyx_k_distance_upper_bound_must_be_non[] = "distance_upper_bound must be non negative";
+static const char __pyx_k_leafsize_must_be_greater_than_ze[] = "leafsize must be greater than zero";
+static const char __pyx_k_ndarray_is_not_Fortran_contiguou[] = "ndarray is not Fortran contiguous";
+static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__";
+static const char __pyx_k_numpy_core_umath_failed_to_impor[] = "numpy.core.umath failed to import";
+static const char __pyx_k_Format_string_allocated_too_shor_2[] = "Format string allocated too short.";
+static PyObject *__pyx_kp_s_Data_and_query_points_must_have;
+static PyObject *__pyx_kp_u_Format_string_allocated_too_shor;
+static PyObject *__pyx_kp_u_Format_string_allocated_too_shor_2;
+static PyObject *__pyx_n_s_ImportError;
+static PyObject *__pyx_n_s_Inf;
+static PyObject *__pyx_kp_s_Mask_must_have_the_same_size_as;
+static PyObject *__pyx_kp_u_Non_native_byte_order_not_suppor;
+static PyObject *__pyx_kp_s_Number_of_neighbours_must_be_gre;
+static PyObject *__pyx_n_s_RuntimeError;
+static PyObject *__pyx_n_s_TypeError;
+static PyObject *__pyx_kp_s_Type_mismatch_query_points_must;
+static PyObject *__pyx_n_s_ValueError;
+static PyObject *__pyx_n_s_ascontiguousarray;
+static PyObject *__pyx_n_s_cline_in_traceback;
+static PyObject *__pyx_n_s_data_pts;
+static PyObject *__pyx_n_s_distance_upper_bound;
+static PyObject *__pyx_kp_s_distance_upper_bound_must_be_non;
+static PyObject *__pyx_n_s_dtype;
+static PyObject *__pyx_n_s_empty;
+static PyObject *__pyx_n_s_eps;
+static PyObject *__pyx_kp_s_eps_must_be_non_negative;
+static PyObject *__pyx_n_s_finfo;
+static PyObject *__pyx_n_s_float32;
+static PyObject *__pyx_n_s_float64;
+static PyObject *__pyx_n_s_getstate;
+static PyObject *__pyx_n_s_import;
+static PyObject *__pyx_n_s_k;
+static PyObject *__pyx_n_s_leafsize;
+static PyObject *__pyx_kp_s_leafsize_must_be_greater_than_ze;
+static PyObject *__pyx_n_s_main;
+static PyObject *__pyx_n_s_mask;
+static PyObject *__pyx_n_s_max;
+static PyObject *__pyx_n_s_name;
+static PyObject *__pyx_kp_u_ndarray_is_not_C_contiguous;
+static PyObject *__pyx_kp_u_ndarray_is_not_Fortran_contiguou;
+static PyObject *__pyx_kp_s_no_default___reduce___due_to_non;
+static PyObject *__pyx_n_s_np;
+static PyObject *__pyx_n_s_numpy;
+static PyObject *__pyx_kp_s_numpy_core_multiarray_failed_to;
+static PyObject *__pyx_kp_s_numpy_core_umath_failed_to_impor;
+static PyObject *__pyx_n_s_query_pts;
+static PyObject *__pyx_n_s_range;
+static PyObject *__pyx_n_s_ravel;
+static PyObject *__pyx_n_s_reduce;
+static PyObject *__pyx_n_s_reduce_cython;
+static PyObject *__pyx_n_s_reduce_ex;
+static PyObject *__pyx_n_s_reshape;
+static PyObject *__pyx_n_s_setstate;
+static PyObject *__pyx_n_s_setstate_cython;
+static PyObject *__pyx_n_s_size;
+static PyObject *__pyx_n_s_sqr_dists;
+static PyObject *__pyx_n_s_sqrt;
+static PyObject *__pyx_n_s_test;
+static PyObject *__pyx_n_s_uint32;
+static PyObject *__pyx_n_s_uint8;
+static PyObject *__pyx_kp_u_unknown_dtype_code_in_numpy_pxd;
+static int __pyx_pf_8pykdtree_6kdtree_6KDTree___cinit__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static int __pyx_pf_8pykdtree_6kdtree_6KDTree_2__init__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self, PyArrayObject *__pyx_v_data_pts, int __pyx_v_leafsize); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_4query(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self, PyArrayObject *__pyx_v_query_pts, PyObject *__pyx_v_k, PyObject *__pyx_v_eps, PyObject *__pyx_v_distance_upper_bound, PyObject *__pyx_v_sqr_dists, PyObject *__pyx_v_mask); /* proto */
+static void __pyx_pf_8pykdtree_6kdtree_6KDTree_6__dealloc__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_8data_pts___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_4data___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_1n___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_4ndim___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_8leafsize___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_8__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self); /* proto */
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_10__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */
+static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */
+static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info); /* proto */
+static PyObject *__pyx_tp_new_8pykdtree_6kdtree_KDTree(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/
+static PyObject *__pyx_int_0;
+static PyObject *__pyx_int_1;
+static PyObject *__pyx_tuple_;
+static PyObject *__pyx_tuple__2;
+static PyObject *__pyx_tuple__3;
+static PyObject *__pyx_tuple__4;
+static PyObject *__pyx_tuple__5;
+static PyObject *__pyx_tuple__6;
+static PyObject *__pyx_tuple__7;
+static PyObject *__pyx_tuple__8;
+static PyObject *__pyx_tuple__9;
+static PyObject *__pyx_tuple__10;
+static PyObject *__pyx_tuple__11;
+static PyObject *__pyx_tuple__12;
+static PyObject *__pyx_tuple__13;
+static PyObject *__pyx_tuple__14;
+static PyObject *__pyx_tuple__15;
+static PyObject *__pyx_tuple__16;
+static PyObject *__pyx_tuple__17;
+static PyObject *__pyx_tuple__18;
+
+/* "pykdtree/kdtree.pyx":87
+ * cdef readonly uint32_t leafsize
+ *
+ * def __cinit__(KDTree self): # <<<<<<<<<<<<<<
+ * self._kdtree_float = NULL
+ * self._kdtree_double = NULL
+ */
+
+/* Python wrapper */
+static int __pyx_pw_8pykdtree_6kdtree_6KDTree_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static int __pyx_pw_8pykdtree_6kdtree_6KDTree_1__cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0);
+ if (unlikely(PyTuple_GET_SIZE(__pyx_args) > 0)) {
+ __Pyx_RaiseArgtupleInvalid("__cinit__", 1, 0, 0, PyTuple_GET_SIZE(__pyx_args)); return -1;}
+ if (unlikely(__pyx_kwds) && unlikely(PyDict_Size(__pyx_kwds) > 0) && unlikely(!__Pyx_CheckKeywordStrings(__pyx_kwds, "__cinit__", 0))) return -1;
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree___cinit__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_8pykdtree_6kdtree_6KDTree___cinit__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__cinit__", 0);
+
+ /* "pykdtree/kdtree.pyx":88
+ *
+ * def __cinit__(KDTree self):
+ * self._kdtree_float = NULL # <<<<<<<<<<<<<<
+ * self._kdtree_double = NULL
+ *
+ */
+ __pyx_v_self->_kdtree_float = NULL;
+
+ /* "pykdtree/kdtree.pyx":89
+ * def __cinit__(KDTree self):
+ * self._kdtree_float = NULL
+ * self._kdtree_double = NULL # <<<<<<<<<<<<<<
+ *
+ * def __init__(KDTree self, np.ndarray data_pts not None, int leafsize=16):
+ */
+ __pyx_v_self->_kdtree_double = NULL;
+
+ /* "pykdtree/kdtree.pyx":87
+ * cdef readonly uint32_t leafsize
+ *
+ * def __cinit__(KDTree self): # <<<<<<<<<<<<<<
+ * self._kdtree_float = NULL
+ * self._kdtree_double = NULL
+ */
+
+ /* function exit code */
+ __pyx_r = 0;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "pykdtree/kdtree.pyx":91
+ * self._kdtree_double = NULL
+ *
+ * def __init__(KDTree self, np.ndarray data_pts not None, int leafsize=16): # <<<<<<<<<<<<<<
+ *
+ * # Check arguments
+ */
+
+/* Python wrapper */
+static int __pyx_pw_8pykdtree_6kdtree_6KDTree_3__init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static int __pyx_pw_8pykdtree_6kdtree_6KDTree_3__init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyArrayObject *__pyx_v_data_pts = 0;
+ int __pyx_v_leafsize;
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__init__ (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_data_pts,&__pyx_n_s_leafsize,0};
+ PyObject* values[2] = {0,0};
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_data_pts)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_leafsize);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(0, 91, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_data_pts = ((PyArrayObject *)values[0]);
+ if (values[1]) {
+ __pyx_v_leafsize = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_leafsize == (int)-1) && PyErr_Occurred())) __PYX_ERR(0, 91, __pyx_L3_error)
+ } else {
+ __pyx_v_leafsize = ((int)16);
+ }
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("__init__", 0, 1, 2, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 91, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return -1;
+ __pyx_L4_argument_unpacking_done:;
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_data_pts), __pyx_ptype_5numpy_ndarray, 0, "data_pts", 0))) __PYX_ERR(0, 91, __pyx_L1_error)
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_2__init__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self), __pyx_v_data_pts, __pyx_v_leafsize);
+
+ /* function exit code */
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __pyx_r = -1;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_8pykdtree_6kdtree_6KDTree_2__init__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self, PyArrayObject *__pyx_v_data_pts, int __pyx_v_leafsize) {
+ PyArrayObject *__pyx_v_data_array_float = 0;
+ PyArrayObject *__pyx_v_data_array_double = 0;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_data_array_double;
+ __Pyx_Buffer __pyx_pybuffer_data_array_double;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_data_array_float;
+ __Pyx_Buffer __pyx_pybuffer_data_array_float;
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ PyObject *__pyx_t_2 = NULL;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ PyObject *__pyx_t_6 = NULL;
+ PyArrayObject *__pyx_t_7 = NULL;
+ int __pyx_t_8;
+ PyObject *__pyx_t_9 = NULL;
+ PyObject *__pyx_t_10 = NULL;
+ PyObject *__pyx_t_11 = NULL;
+ PyArrayObject *__pyx_t_12 = NULL;
+ __Pyx_RefNannySetupContext("__init__", 0);
+ __pyx_pybuffer_data_array_float.pybuffer.buf = NULL;
+ __pyx_pybuffer_data_array_float.refcount = 0;
+ __pyx_pybuffernd_data_array_float.data = NULL;
+ __pyx_pybuffernd_data_array_float.rcbuffer = &__pyx_pybuffer_data_array_float;
+ __pyx_pybuffer_data_array_double.pybuffer.buf = NULL;
+ __pyx_pybuffer_data_array_double.refcount = 0;
+ __pyx_pybuffernd_data_array_double.data = NULL;
+ __pyx_pybuffernd_data_array_double.rcbuffer = &__pyx_pybuffer_data_array_double;
+
+ /* "pykdtree/kdtree.pyx":94
+ *
+ * # Check arguments
+ * if leafsize < 1: # <<<<<<<<<<<<<<
+ * raise ValueError('leafsize must be greater than zero')
+ *
+ */
+ __pyx_t_1 = ((__pyx_v_leafsize < 1) != 0);
+ if (__pyx_t_1) {
+
+ /* "pykdtree/kdtree.pyx":95
+ * # Check arguments
+ * if leafsize < 1:
+ * raise ValueError('leafsize must be greater than zero') # <<<<<<<<<<<<<<
+ *
+ * # Get data content
+ */
+ __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple_, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 95, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_Raise(__pyx_t_2, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __PYX_ERR(0, 95, __pyx_L1_error)
+
+ /* "pykdtree/kdtree.pyx":94
+ *
+ * # Check arguments
+ * if leafsize < 1: # <<<<<<<<<<<<<<
+ * raise ValueError('leafsize must be greater than zero')
+ *
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":101
+ * cdef np.ndarray[double, ndim=1] data_array_double
+ *
+ * if data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * data_array_float = np.ascontiguousarray(data_pts.ravel(), dtype=np.float32)
+ * self._data_pts_data_float = data_array_float.data
+ */
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 101, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_float32); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 101, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = PyObject_RichCompare(__pyx_t_2, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 101, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(0, 101, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_1) {
+
+ /* "pykdtree/kdtree.pyx":102
+ *
+ * if data_pts.dtype == np.float32:
+ * data_array_float = np.ascontiguousarray(data_pts.ravel(), dtype=np.float32) # <<<<<<<<<<<<<<
+ * self._data_pts_data_float = data_array_float.data
+ * self.data_pts = data_array_float
+ */
+ __pyx_t_3 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_3, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_data_pts), __pyx_n_s_ravel); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_5 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (__pyx_t_5) {
+ __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_5); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ } else {
+ __pyx_t_3 = __Pyx_PyObject_CallNoArg(__pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 102, __pyx_L1_error)
+ }
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_3);
+ __pyx_t_3 = 0;
+ __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_float32); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_dtype, __pyx_t_6) < 0) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 102, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (!(likely(((__pyx_t_6) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_6, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 102, __pyx_L1_error)
+ __pyx_t_7 = ((PyArrayObject *)__pyx_t_6);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_data_array_float.rcbuffer->pybuffer);
+ __pyx_t_8 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_data_array_float.rcbuffer->pybuffer, (PyObject*)__pyx_t_7, &__Pyx_TypeInfo_float, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_8 < 0)) {
+ PyErr_Fetch(&__pyx_t_9, &__pyx_t_10, &__pyx_t_11);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_data_array_float.rcbuffer->pybuffer, (PyObject*)__pyx_v_data_array_float, &__Pyx_TypeInfo_float, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_9); Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_11);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_9, __pyx_t_10, __pyx_t_11);
+ }
+ __pyx_t_9 = __pyx_t_10 = __pyx_t_11 = 0;
+ }
+ __pyx_pybuffernd_data_array_float.diminfo[0].strides = __pyx_pybuffernd_data_array_float.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_data_array_float.diminfo[0].shape = __pyx_pybuffernd_data_array_float.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 102, __pyx_L1_error)
+ }
+ __pyx_t_7 = 0;
+ __pyx_v_data_array_float = ((PyArrayObject *)__pyx_t_6);
+ __pyx_t_6 = 0;
+
+ /* "pykdtree/kdtree.pyx":103
+ * if data_pts.dtype == np.float32:
+ * data_array_float = np.ascontiguousarray(data_pts.ravel(), dtype=np.float32)
+ * self._data_pts_data_float = data_array_float.data # <<<<<<<<<<<<<<
+ * self.data_pts = data_array_float
+ * else:
+ */
+ __pyx_v_self->_data_pts_data_float = ((float *)__pyx_v_data_array_float->data);
+
+ /* "pykdtree/kdtree.pyx":104
+ * data_array_float = np.ascontiguousarray(data_pts.ravel(), dtype=np.float32)
+ * self._data_pts_data_float = data_array_float.data
+ * self.data_pts = data_array_float # <<<<<<<<<<<<<<
+ * else:
+ * data_array_double = np.ascontiguousarray(data_pts.ravel(), dtype=np.float64)
+ */
+ __Pyx_INCREF(((PyObject *)__pyx_v_data_array_float));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_data_array_float));
+ __Pyx_GOTREF(__pyx_v_self->data_pts);
+ __Pyx_DECREF(((PyObject *)__pyx_v_self->data_pts));
+ __pyx_v_self->data_pts = ((PyArrayObject *)__pyx_v_data_array_float);
+
+ /* "pykdtree/kdtree.pyx":101
+ * cdef np.ndarray[double, ndim=1] data_array_double
+ *
+ * if data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * data_array_float = np.ascontiguousarray(data_pts.ravel(), dtype=np.float32)
+ * self._data_pts_data_float = data_array_float.data
+ */
+ goto __pyx_L4;
+ }
+
+ /* "pykdtree/kdtree.pyx":106
+ * self.data_pts = data_array_float
+ * else:
+ * data_array_double = np.ascontiguousarray(data_pts.ravel(), dtype=np.float64) # <<<<<<<<<<<<<<
+ * self._data_pts_data_double = data_array_double.data
+ * self.data_pts = data_array_double
+ */
+ /*else*/ {
+ __pyx_t_6 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_3)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_data_pts), __pyx_n_s_ravel); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_2))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_2);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_2);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_2, function);
+ }
+ }
+ if (__pyx_t_4) {
+ __pyx_t_6 = __Pyx_PyObject_CallOneArg(__pyx_t_2, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ } else {
+ __pyx_t_6 = __Pyx_PyObject_CallNoArg(__pyx_t_2); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 106, __pyx_L1_error)
+ }
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_6);
+ __pyx_t_6 = 0;
+ __pyx_t_6 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_float64); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (PyDict_SetItem(__pyx_t_6, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_3, __pyx_t_2, __pyx_t_6); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 106, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 106, __pyx_L1_error)
+ __pyx_t_12 = ((PyArrayObject *)__pyx_t_5);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_data_array_double.rcbuffer->pybuffer);
+ __pyx_t_8 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_data_array_double.rcbuffer->pybuffer, (PyObject*)__pyx_t_12, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_8 < 0)) {
+ PyErr_Fetch(&__pyx_t_11, &__pyx_t_10, &__pyx_t_9);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_data_array_double.rcbuffer->pybuffer, (PyObject*)__pyx_v_data_array_double, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_11); Py_XDECREF(__pyx_t_10); Py_XDECREF(__pyx_t_9);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_11, __pyx_t_10, __pyx_t_9);
+ }
+ __pyx_t_11 = __pyx_t_10 = __pyx_t_9 = 0;
+ }
+ __pyx_pybuffernd_data_array_double.diminfo[0].strides = __pyx_pybuffernd_data_array_double.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_data_array_double.diminfo[0].shape = __pyx_pybuffernd_data_array_double.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_8 < 0)) __PYX_ERR(0, 106, __pyx_L1_error)
+ }
+ __pyx_t_12 = 0;
+ __pyx_v_data_array_double = ((PyArrayObject *)__pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "pykdtree/kdtree.pyx":107
+ * else:
+ * data_array_double = np.ascontiguousarray(data_pts.ravel(), dtype=np.float64)
+ * self._data_pts_data_double = data_array_double.data # <<<<<<<<<<<<<<
+ * self.data_pts = data_array_double
+ *
+ */
+ __pyx_v_self->_data_pts_data_double = ((double *)__pyx_v_data_array_double->data);
+
+ /* "pykdtree/kdtree.pyx":108
+ * data_array_double = np.ascontiguousarray(data_pts.ravel(), dtype=np.float64)
+ * self._data_pts_data_double = data_array_double.data
+ * self.data_pts = data_array_double # <<<<<<<<<<<<<<
+ *
+ * # scipy interface compatibility
+ */
+ __Pyx_INCREF(((PyObject *)__pyx_v_data_array_double));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_data_array_double));
+ __Pyx_GOTREF(__pyx_v_self->data_pts);
+ __Pyx_DECREF(((PyObject *)__pyx_v_self->data_pts));
+ __pyx_v_self->data_pts = ((PyArrayObject *)__pyx_v_data_array_double);
+ }
+ __pyx_L4:;
+
+ /* "pykdtree/kdtree.pyx":111
+ *
+ * # scipy interface compatibility
+ * self.data = self.data_pts # <<<<<<<<<<<<<<
+ *
+ * # Get tree info
+ */
+ __pyx_t_5 = ((PyObject *)__pyx_v_self->data_pts);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_5);
+ __Pyx_GOTREF(__pyx_v_self->data);
+ __Pyx_DECREF(((PyObject *)__pyx_v_self->data));
+ __pyx_v_self->data = ((PyArrayObject *)__pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "pykdtree/kdtree.pyx":114
+ *
+ * # Get tree info
+ * self.n = data_pts.shape[0] # <<<<<<<<<<<<<<
+ * self.leafsize = leafsize
+ * if data_pts.ndim == 1:
+ */
+ __pyx_v_self->n = ((uint32_t)(__pyx_v_data_pts->dimensions[0]));
+
+ /* "pykdtree/kdtree.pyx":115
+ * # Get tree info
+ * self.n = data_pts.shape[0]
+ * self.leafsize = leafsize # <<<<<<<<<<<<<<
+ * if data_pts.ndim == 1:
+ * self.ndim = 1
+ */
+ __pyx_v_self->leafsize = ((uint32_t)__pyx_v_leafsize);
+
+ /* "pykdtree/kdtree.pyx":116
+ * self.n = data_pts.shape[0]
+ * self.leafsize = leafsize
+ * if data_pts.ndim == 1: # <<<<<<<<<<<<<<
+ * self.ndim = 1
+ * else:
+ */
+ __pyx_t_1 = ((__pyx_v_data_pts->nd == 1) != 0);
+ if (__pyx_t_1) {
+
+ /* "pykdtree/kdtree.pyx":117
+ * self.leafsize = leafsize
+ * if data_pts.ndim == 1:
+ * self.ndim = 1 # <<<<<<<<<<<<<<
+ * else:
+ * self.ndim = data_pts.shape[1]
+ */
+ __pyx_v_self->ndim = 1;
+
+ /* "pykdtree/kdtree.pyx":116
+ * self.n = data_pts.shape[0]
+ * self.leafsize = leafsize
+ * if data_pts.ndim == 1: # <<<<<<<<<<<<<<
+ * self.ndim = 1
+ * else:
+ */
+ goto __pyx_L5;
+ }
+
+ /* "pykdtree/kdtree.pyx":119
+ * self.ndim = 1
+ * else:
+ * self.ndim = data_pts.shape[1] # <<<<<<<<<<<<<<
+ *
+ * # Release GIL and construct tree
+ */
+ /*else*/ {
+ __pyx_v_self->ndim = ((int8_t)(__pyx_v_data_pts->dimensions[1]));
+ }
+ __pyx_L5:;
+
+ /* "pykdtree/kdtree.pyx":122
+ *
+ * # Release GIL and construct tree
+ * if data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * with nogil:
+ * self._kdtree_float = construct_tree_float(self._data_pts_data_float, self.ndim,
+ */
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 122, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_6 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 122, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_float32); if (unlikely(!__pyx_t_2)) __PYX_ERR(0, 122, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_2);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __pyx_t_6 = PyObject_RichCompare(__pyx_t_5, __pyx_t_2, Py_EQ); __Pyx_XGOTREF(__pyx_t_6); if (unlikely(!__pyx_t_6)) __PYX_ERR(0, 122, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0;
+ __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_6); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(0, 122, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ if (__pyx_t_1) {
+
+ /* "pykdtree/kdtree.pyx":123
+ * # Release GIL and construct tree
+ * if data_pts.dtype == np.float32:
+ * with nogil: # <<<<<<<<<<<<<<
+ * self._kdtree_float = construct_tree_float(self._data_pts_data_float, self.ndim,
+ * self.n, self.leafsize)
+ */
+ {
+ #ifdef WITH_THREAD
+ PyThreadState *_save;
+ Py_UNBLOCK_THREADS
+ __Pyx_FastGIL_Remember();
+ #endif
+ /*try:*/ {
+
+ /* "pykdtree/kdtree.pyx":124
+ * if data_pts.dtype == np.float32:
+ * with nogil:
+ * self._kdtree_float = construct_tree_float(self._data_pts_data_float, self.ndim, # <<<<<<<<<<<<<<
+ * self.n, self.leafsize)
+ * else:
+ */
+ __pyx_v_self->_kdtree_float = construct_tree_float(__pyx_v_self->_data_pts_data_float, __pyx_v_self->ndim, __pyx_v_self->n, __pyx_v_self->leafsize);
+ }
+
+ /* "pykdtree/kdtree.pyx":123
+ * # Release GIL and construct tree
+ * if data_pts.dtype == np.float32:
+ * with nogil: # <<<<<<<<<<<<<<
+ * self._kdtree_float = construct_tree_float(self._data_pts_data_float, self.ndim,
+ * self.n, self.leafsize)
+ */
+ /*finally:*/ {
+ /*normal exit:*/{
+ #ifdef WITH_THREAD
+ __Pyx_FastGIL_Forget();
+ Py_BLOCK_THREADS
+ #endif
+ goto __pyx_L9;
+ }
+ __pyx_L9:;
+ }
+ }
+
+ /* "pykdtree/kdtree.pyx":122
+ *
+ * # Release GIL and construct tree
+ * if data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * with nogil:
+ * self._kdtree_float = construct_tree_float(self._data_pts_data_float, self.ndim,
+ */
+ goto __pyx_L6;
+ }
+
+ /* "pykdtree/kdtree.pyx":127
+ * self.n, self.leafsize)
+ * else:
+ * with nogil: # <<<<<<<<<<<<<<
+ * self._kdtree_double = construct_tree_double(self._data_pts_data_double, self.ndim,
+ * self.n, self.leafsize)
+ */
+ /*else*/ {
+ {
+ #ifdef WITH_THREAD
+ PyThreadState *_save;
+ Py_UNBLOCK_THREADS
+ __Pyx_FastGIL_Remember();
+ #endif
+ /*try:*/ {
+
+ /* "pykdtree/kdtree.pyx":128
+ * else:
+ * with nogil:
+ * self._kdtree_double = construct_tree_double(self._data_pts_data_double, self.ndim, # <<<<<<<<<<<<<<
+ * self.n, self.leafsize)
+ *
+ */
+ __pyx_v_self->_kdtree_double = construct_tree_double(__pyx_v_self->_data_pts_data_double, __pyx_v_self->ndim, __pyx_v_self->n, __pyx_v_self->leafsize);
+ }
+
+ /* "pykdtree/kdtree.pyx":127
+ * self.n, self.leafsize)
+ * else:
+ * with nogil: # <<<<<<<<<<<<<<
+ * self._kdtree_double = construct_tree_double(self._data_pts_data_double, self.ndim,
+ * self.n, self.leafsize)
+ */
+ /*finally:*/ {
+ /*normal exit:*/{
+ #ifdef WITH_THREAD
+ __Pyx_FastGIL_Forget();
+ Py_BLOCK_THREADS
+ #endif
+ goto __pyx_L12;
+ }
+ __pyx_L12:;
+ }
+ }
+ }
+ __pyx_L6:;
+
+ /* "pykdtree/kdtree.pyx":91
+ * self._kdtree_double = NULL
+ *
+ * def __init__(KDTree self, np.ndarray data_pts not None, int leafsize=16): # <<<<<<<<<<<<<<
+ *
+ * # Check arguments
+ */
+
+ /* function exit code */
+ __pyx_r = 0;
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_2);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_6);
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_data_array_double.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_data_array_float.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = -1;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_data_array_double.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_data_array_float.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_data_array_float);
+ __Pyx_XDECREF((PyObject *)__pyx_v_data_array_double);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
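+/* A minimal usage sketch of the KDTree constructor generated above (illustrative
+ * only; the example array is assumed, not taken from this module). __init__
+ * dispatches on data_pts.dtype: float32 input is kept as a float tree via
+ * construct_tree_float, any other dtype is converted to float64 and handed to
+ * construct_tree_double, and the GIL is released while the tree is built.
+ *
+ *     import numpy as np
+ *     from pykdtree.kdtree import KDTree
+ *
+ *     data = np.random.rand(1000, 3).astype(np.float32)  # float32 input -> float tree
+ *     tree = KDTree(data, leafsize=16)                    # leafsize defaults to 16
+ */
+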
+/* "pykdtree/kdtree.pyx":132
+ *
+ *
+ * def query(KDTree self, np.ndarray query_pts not None, k=1, eps=0, # <<<<<<<<<<<<<<
+ * distance_upper_bound=None, sqr_dists=False, mask=None):
+ * """Query the kd-tree for nearest neighbors
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_5query(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/
+static char __pyx_doc_8pykdtree_6kdtree_6KDTree_4query[] = "Query the kd-tree for nearest neighbors\n\n :Parameters:\n query_pts : numpy array\n Query points with shape (n, dims)\n k : int\n The number of nearest neighbours to return\n eps : non-negative float\n Return approximate nearest neighbours; the k-th returned value\n is guaranteed to be no further than (1 + eps) times the distance\n to the real k-th nearest neighbour\n distance_upper_bound : non-negative float\n Return only neighbors within this distance.\n This is used to prune tree searches.\n sqr_dists : bool, optional\n Internally pykdtree works with squared distances.\n Determines if the squared or Euclidean distances are returned.\n mask : numpy array, optional\n Array of booleans where neighbors are considered invalid and\n should not be returned. A mask value of True represents an\n invalid pixel. Mask should have shape (n,). By default all\n points are considered valid.\n\n ";
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_5query(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) {
+ PyArrayObject *__pyx_v_query_pts = 0;
+ PyObject *__pyx_v_k = 0;
+ PyObject *__pyx_v_eps = 0;
+ PyObject *__pyx_v_distance_upper_bound = 0;
+ PyObject *__pyx_v_sqr_dists = 0;
+ PyObject *__pyx_v_mask = 0;
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("query (wrapper)", 0);
+ {
+ static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_query_pts,&__pyx_n_s_k,&__pyx_n_s_eps,&__pyx_n_s_distance_upper_bound,&__pyx_n_s_sqr_dists,&__pyx_n_s_mask,0};
+ PyObject* values[6] = {0,0,0,0,0,0};
+ values[1] = ((PyObject *)__pyx_int_1);
+ values[2] = ((PyObject *)__pyx_int_0);
+
+ /* "pykdtree/kdtree.pyx":133
+ *
+ * def query(KDTree self, np.ndarray query_pts not None, k=1, eps=0,
+ * distance_upper_bound=None, sqr_dists=False, mask=None): # <<<<<<<<<<<<<<
+ * """Query the kd-tree for nearest neighbors
+ *
+ */
+ values[3] = ((PyObject *)Py_None);
+ values[4] = ((PyObject *)Py_False);
+ values[5] = ((PyObject *)Py_None);
+ if (unlikely(__pyx_kwds)) {
+ Py_ssize_t kw_args;
+ const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args);
+ switch (pos_args) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ CYTHON_FALLTHROUGH;
+ case 0: break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ kw_args = PyDict_Size(__pyx_kwds);
+ switch (pos_args) {
+ case 0:
+ if (likely((values[0] = PyDict_GetItem(__pyx_kwds, __pyx_n_s_query_pts)) != 0)) kw_args--;
+ else goto __pyx_L5_argtuple_error;
+ CYTHON_FALLTHROUGH;
+ case 1:
+ if (kw_args > 0) {
+ PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_k);
+ if (value) { values[1] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 2:
+ if (kw_args > 0) {
+ PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_eps);
+ if (value) { values[2] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 3:
+ if (kw_args > 0) {
+ PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_distance_upper_bound);
+ if (value) { values[3] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 4:
+ if (kw_args > 0) {
+ PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_sqr_dists);
+ if (value) { values[4] = value; kw_args--; }
+ }
+ CYTHON_FALLTHROUGH;
+ case 5:
+ if (kw_args > 0) {
+ PyObject* value = PyDict_GetItem(__pyx_kwds, __pyx_n_s_mask);
+ if (value) { values[5] = value; kw_args--; }
+ }
+ }
+ if (unlikely(kw_args > 0)) {
+ if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "query") < 0)) __PYX_ERR(0, 132, __pyx_L3_error)
+ }
+ } else {
+ switch (PyTuple_GET_SIZE(__pyx_args)) {
+ case 6: values[5] = PyTuple_GET_ITEM(__pyx_args, 5);
+ CYTHON_FALLTHROUGH;
+ case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4);
+ CYTHON_FALLTHROUGH;
+ case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3);
+ CYTHON_FALLTHROUGH;
+ case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2);
+ CYTHON_FALLTHROUGH;
+ case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1);
+ CYTHON_FALLTHROUGH;
+ case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0);
+ break;
+ default: goto __pyx_L5_argtuple_error;
+ }
+ }
+ __pyx_v_query_pts = ((PyArrayObject *)values[0]);
+ __pyx_v_k = values[1];
+ __pyx_v_eps = values[2];
+ __pyx_v_distance_upper_bound = values[3];
+ __pyx_v_sqr_dists = values[4];
+ __pyx_v_mask = values[5];
+ }
+ goto __pyx_L4_argument_unpacking_done;
+ __pyx_L5_argtuple_error:;
+ __Pyx_RaiseArgtupleInvalid("query", 0, 1, 6, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 132, __pyx_L3_error)
+ __pyx_L3_error:;
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.query", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __Pyx_RefNannyFinishContext();
+ return NULL;
+ __pyx_L4_argument_unpacking_done:;
+ if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_query_pts), __pyx_ptype_5numpy_ndarray, 0, "query_pts", 0))) __PYX_ERR(0, 132, __pyx_L1_error)
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_4query(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self), __pyx_v_query_pts, __pyx_v_k, __pyx_v_eps, __pyx_v_distance_upper_bound, __pyx_v_sqr_dists, __pyx_v_mask);
+
+ /* "pykdtree/kdtree.pyx":132
+ *
+ *
+ * def query(KDTree self, np.ndarray query_pts not None, k=1, eps=0, # <<<<<<<<<<<<<<
+ * distance_upper_bound=None, sqr_dists=False, mask=None):
+ * """Query the kd-tree for nearest neighbors
+ */
+
+ /* function exit code */
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
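+/* Continuing the sketch above, a hedged example of query() with its documented
+ * defaults (k=1, eps=0, distance_upper_bound=None, sqr_dists=False, mask=None);
+ * note that query points must be float32 when the data points are float32
+ * (see the dtype check at kdtree.pyx:177). The query array below is assumed.
+ *
+ *     query = np.random.rand(10, 3).astype(np.float32)
+ *     dist, idx = tree.query(query, k=4)
+ *     # distances first, then indices, as assembled from closest_dists_res /
+ *     # closest_idxs_res below; with k > 1 expect k neighbours per query point.
+ */
+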
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_4query(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self, PyArrayObject *__pyx_v_query_pts, PyObject *__pyx_v_k, PyObject *__pyx_v_eps, PyObject *__pyx_v_distance_upper_bound, PyObject *__pyx_v_sqr_dists, PyObject *__pyx_v_mask) {
+ long __pyx_v_q_ndim;
+ uint32_t __pyx_v_num_qpoints;
+ uint32_t __pyx_v_num_n;
+ PyArrayObject *__pyx_v_closest_idxs = 0;
+ PyArrayObject *__pyx_v_closest_dists_float = 0;
+ PyArrayObject *__pyx_v_closest_dists_double = 0;
+ uint32_t *__pyx_v_closest_idxs_data;
+ float *__pyx_v_closest_dists_data_float;
+ double *__pyx_v_closest_dists_data_double;
+ PyArrayObject *__pyx_v_query_array_float = 0;
+ PyArrayObject *__pyx_v_query_array_double = 0;
+ float *__pyx_v_query_array_data_float;
+ double *__pyx_v_query_array_data_double;
+ PyArrayObject *__pyx_v_query_mask = 0;
+ __pyx_t_5numpy_uint8_t *__pyx_v_query_mask_data;
+ PyObject *__pyx_v_closest_dists = NULL;
+ float __pyx_v_dub_float;
+ double __pyx_v_dub_double;
+ double __pyx_v_epsilon_float;
+ double __pyx_v_epsilon_double;
+ PyObject *__pyx_v_closest_dists_res = NULL;
+ PyObject *__pyx_v_closest_idxs_res = NULL;
+ PyObject *__pyx_v_idx_out = NULL;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_closest_dists_double;
+ __Pyx_Buffer __pyx_pybuffer_closest_dists_double;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_closest_dists_float;
+ __Pyx_Buffer __pyx_pybuffer_closest_dists_float;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_closest_idxs;
+ __Pyx_Buffer __pyx_pybuffer_closest_idxs;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_query_array_double;
+ __Pyx_Buffer __pyx_pybuffer_query_array_double;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_query_array_float;
+ __Pyx_Buffer __pyx_pybuffer_query_array_float;
+ __Pyx_LocalBuf_ND __pyx_pybuffernd_query_mask;
+ __Pyx_Buffer __pyx_pybuffer_query_mask;
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ int __pyx_t_2;
+ int __pyx_t_3;
+ PyObject *__pyx_t_4 = NULL;
+ PyObject *__pyx_t_5 = NULL;
+ uint32_t __pyx_t_6;
+ PyObject *__pyx_t_7 = NULL;
+ PyObject *__pyx_t_8 = NULL;
+ PyArrayObject *__pyx_t_9 = NULL;
+ int __pyx_t_10;
+ PyArrayObject *__pyx_t_11 = NULL;
+ int __pyx_t_12;
+ PyObject *__pyx_t_13 = NULL;
+ PyObject *__pyx_t_14 = NULL;
+ PyObject *__pyx_t_15 = NULL;
+ PyArrayObject *__pyx_t_16 = NULL;
+ PyArrayObject *__pyx_t_17 = NULL;
+ PyArrayObject *__pyx_t_18 = NULL;
+ PyArrayObject *__pyx_t_19 = NULL;
+ float __pyx_t_20;
+ double __pyx_t_21;
+ __Pyx_RefNannySetupContext("query", 0);
+ __pyx_pybuffer_closest_idxs.pybuffer.buf = NULL;
+ __pyx_pybuffer_closest_idxs.refcount = 0;
+ __pyx_pybuffernd_closest_idxs.data = NULL;
+ __pyx_pybuffernd_closest_idxs.rcbuffer = &__pyx_pybuffer_closest_idxs;
+ __pyx_pybuffer_closest_dists_float.pybuffer.buf = NULL;
+ __pyx_pybuffer_closest_dists_float.refcount = 0;
+ __pyx_pybuffernd_closest_dists_float.data = NULL;
+ __pyx_pybuffernd_closest_dists_float.rcbuffer = &__pyx_pybuffer_closest_dists_float;
+ __pyx_pybuffer_closest_dists_double.pybuffer.buf = NULL;
+ __pyx_pybuffer_closest_dists_double.refcount = 0;
+ __pyx_pybuffernd_closest_dists_double.data = NULL;
+ __pyx_pybuffernd_closest_dists_double.rcbuffer = &__pyx_pybuffer_closest_dists_double;
+ __pyx_pybuffer_query_array_float.pybuffer.buf = NULL;
+ __pyx_pybuffer_query_array_float.refcount = 0;
+ __pyx_pybuffernd_query_array_float.data = NULL;
+ __pyx_pybuffernd_query_array_float.rcbuffer = &__pyx_pybuffer_query_array_float;
+ __pyx_pybuffer_query_array_double.pybuffer.buf = NULL;
+ __pyx_pybuffer_query_array_double.refcount = 0;
+ __pyx_pybuffernd_query_array_double.data = NULL;
+ __pyx_pybuffernd_query_array_double.rcbuffer = &__pyx_pybuffer_query_array_double;
+ __pyx_pybuffer_query_mask.pybuffer.buf = NULL;
+ __pyx_pybuffer_query_mask.refcount = 0;
+ __pyx_pybuffernd_query_mask.data = NULL;
+ __pyx_pybuffernd_query_mask.rcbuffer = &__pyx_pybuffer_query_mask;
+
+ /* "pykdtree/kdtree.pyx":160
+ *
+ * # Check arguments
+ * if k < 1: # <<<<<<<<<<<<<<
+ * raise ValueError('Number of neighbours must be greater than zero')
+ * elif eps < 0:
+ */
+ __pyx_t_1 = PyObject_RichCompare(__pyx_v_k, __pyx_int_1, Py_LT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 160, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_2) {
+
+ /* "pykdtree/kdtree.pyx":161
+ * # Check arguments
+ * if k < 1:
+ * raise ValueError('Number of neighbours must be greater than zero') # <<<<<<<<<<<<<<
+ * elif eps < 0:
+ * raise ValueError('eps must be non-negative')
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 161, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 161, __pyx_L1_error)
+
+ /* "pykdtree/kdtree.pyx":160
+ *
+ * # Check arguments
+ * if k < 1: # <<<<<<<<<<<<<<
+ * raise ValueError('Number of neighbours must be greater than zero')
+ * elif eps < 0:
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":162
+ * if k < 1:
+ * raise ValueError('Number of neighbours must be greater than zero')
+ * elif eps < 0: # <<<<<<<<<<<<<<
+ * raise ValueError('eps must be non-negative')
+ * elif distance_upper_bound is not None:
+ */
+ __pyx_t_1 = PyObject_RichCompare(__pyx_v_eps, __pyx_int_0, Py_LT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 162, __pyx_L1_error)
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 162, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_2) {
+
+ /* "pykdtree/kdtree.pyx":163
+ * raise ValueError('Number of neighbours must be greater than zero')
+ * elif eps < 0:
+ * raise ValueError('eps must be non-negative') # <<<<<<<<<<<<<<
+ * elif distance_upper_bound is not None:
+ * if distance_upper_bound < 0:
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 163, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 163, __pyx_L1_error)
+
+ /* "pykdtree/kdtree.pyx":162
+ * if k < 1:
+ * raise ValueError('Number of neighbours must be greater than zero')
+ * elif eps < 0: # <<<<<<<<<<<<<<
+ * raise ValueError('eps must be non-negative')
+ * elif distance_upper_bound is not None:
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":164
+ * elif eps < 0:
+ * raise ValueError('eps must be non-negative')
+ * elif distance_upper_bound is not None: # <<<<<<<<<<<<<<
+ * if distance_upper_bound < 0:
+ * raise ValueError('distance_upper_bound must be non negative')
+ */
+ __pyx_t_2 = (__pyx_v_distance_upper_bound != Py_None);
+ __pyx_t_3 = (__pyx_t_2 != 0);
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":165
+ * raise ValueError('eps must be non-negative')
+ * elif distance_upper_bound is not None:
+ * if distance_upper_bound < 0: # <<<<<<<<<<<<<<
+ * raise ValueError('distance_upper_bound must be non negative')
+ *
+ */
+ __pyx_t_1 = PyObject_RichCompare(__pyx_v_distance_upper_bound, __pyx_int_0, Py_LT); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 165, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":166
+ * elif distance_upper_bound is not None:
+ * if distance_upper_bound < 0:
+ * raise ValueError('distance_upper_bound must be non negative') # <<<<<<<<<<<<<<
+ *
+ * # Check dimensions
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 166, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 166, __pyx_L1_error)
+
+ /* "pykdtree/kdtree.pyx":165
+ * raise ValueError('eps must be non-negative')
+ * elif distance_upper_bound is not None:
+ * if distance_upper_bound < 0: # <<<<<<<<<<<<<<
+ * raise ValueError('distance_upper_bound must be non negative')
+ *
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":164
+ * elif eps < 0:
+ * raise ValueError('eps must be non-negative')
+ * elif distance_upper_bound is not None: # <<<<<<<<<<<<<<
+ * if distance_upper_bound < 0:
+ * raise ValueError('distance_upper_bound must be non negative')
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":169
+ *
+ * # Check dimensions
+ * if query_pts.ndim == 1: # <<<<<<<<<<<<<<
+ * q_ndim = 1
+ * else:
+ */
+ __pyx_t_3 = ((__pyx_v_query_pts->nd == 1) != 0);
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":170
+ * # Check dimensions
+ * if query_pts.ndim == 1:
+ * q_ndim = 1 # <<<<<<<<<<<<<<
+ * else:
+ * q_ndim = query_pts.shape[1]
+ */
+ __pyx_v_q_ndim = 1;
+
+ /* "pykdtree/kdtree.pyx":169
+ *
+ * # Check dimensions
+ * if query_pts.ndim == 1: # <<<<<<<<<<<<<<
+ * q_ndim = 1
+ * else:
+ */
+ goto __pyx_L5;
+ }
+
+ /* "pykdtree/kdtree.pyx":172
+ * q_ndim = 1
+ * else:
+ * q_ndim = query_pts.shape[1] # <<<<<<<<<<<<<<
+ *
+ * if self.ndim != q_ndim:
+ */
+ /*else*/ {
+ __pyx_v_q_ndim = (__pyx_v_query_pts->dimensions[1]);
+ }
+ __pyx_L5:;
+
+ /* "pykdtree/kdtree.pyx":174
+ * q_ndim = query_pts.shape[1]
+ *
+ * if self.ndim != q_ndim: # <<<<<<<<<<<<<<
+ * raise ValueError('Data and query points must have same dimensions')
+ *
+ */
+ __pyx_t_3 = ((__pyx_v_self->ndim != __pyx_v_q_ndim) != 0);
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":175
+ *
+ * if self.ndim != q_ndim:
+ * raise ValueError('Data and query points must have same dimensions') # <<<<<<<<<<<<<<
+ *
+ * if self.data_pts.dtype == np.float32 and query_pts.dtype != np.float32:
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 175, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(0, 175, __pyx_L1_error)
+
+ /* "pykdtree/kdtree.pyx":174
+ * q_ndim = query_pts.shape[1]
+ *
+ * if self.ndim != q_ndim: # <<<<<<<<<<<<<<
+ * raise ValueError('Data and query points must have same dimensions')
+ *
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":177
+ * raise ValueError('Data and query points must have same dimensions')
+ *
+ * if self.data_pts.dtype == np.float32 and query_pts.dtype != np.float32: # <<<<<<<<<<<<<<
+ * raise TypeError('Type mismatch. query points must be of type float32 when data points are of type float32')
+ *
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self->data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_float32); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = PyObject_RichCompare(__pyx_t_1, __pyx_t_5, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_3 = __pyx_t_2;
+ goto __pyx_L8_bool_binop_done;
+ }
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_query_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_float32); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = PyObject_RichCompare(__pyx_t_4, __pyx_t_1, Py_NE); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(0, 177, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_3 = __pyx_t_2;
+ __pyx_L8_bool_binop_done:;
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":178
+ *
+ * if self.data_pts.dtype == np.float32 and query_pts.dtype != np.float32:
+ * raise TypeError('Type mismatch. query points must be of type float32 when data points are of type float32') # <<<<<<<<<<<<<<
+ *
+ * # Get query info
+ */
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 178, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_Raise(__pyx_t_5, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __PYX_ERR(0, 178, __pyx_L1_error)
+
+ /* "pykdtree/kdtree.pyx":177
+ * raise ValueError('Data and query points must have same dimensions')
+ *
+ * if self.data_pts.dtype == np.float32 and query_pts.dtype != np.float32: # <<<<<<<<<<<<<<
+ * raise TypeError('Type mismatch. query points must be of type float32 when data points are of type float32')
+ *
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":181
+ *
+ * # Get query info
+ * cdef uint32_t num_qpoints = query_pts.shape[0] # <<<<<<<<<<<<<<
+ * cdef uint32_t num_n = k
+ * cdef np.ndarray[uint32_t, ndim=1] closest_idxs = np.empty(num_qpoints * k, dtype=np.uint32)
+ */
+ __pyx_v_num_qpoints = (__pyx_v_query_pts->dimensions[0]);
+
+ /* "pykdtree/kdtree.pyx":182
+ * # Get query info
+ * cdef uint32_t num_qpoints = query_pts.shape[0]
+ * cdef uint32_t num_n = k # <<<<<<<<<<<<<<
+ * cdef np.ndarray[uint32_t, ndim=1] closest_idxs = np.empty(num_qpoints * k, dtype=np.uint32)
+ * cdef np.ndarray[float, ndim=1] closest_dists_float
+ */
+ __pyx_t_6 = __Pyx_PyInt_As_uint32_t(__pyx_v_k); if (unlikely((__pyx_t_6 == ((uint32_t)-1)) && PyErr_Occurred())) __PYX_ERR(0, 182, __pyx_L1_error)
+ __pyx_v_num_n = __pyx_t_6;
+
+ /* "pykdtree/kdtree.pyx":183
+ * cdef uint32_t num_qpoints = query_pts.shape[0]
+ * cdef uint32_t num_n = k
+ * cdef np.ndarray[uint32_t, ndim=1] closest_idxs = np.empty(num_qpoints * k, dtype=np.uint32) # <<<<<<<<<<<<<<
+ * cdef np.ndarray[float, ndim=1] closest_dists_float
+ * cdef np.ndarray[double, ndim=1] closest_dists_double
+ */
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_empty); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyInt_From_uint32_t(__pyx_v_num_qpoints); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_4 = PyNumber_Multiply(__pyx_t_5, __pyx_v_k); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_GIVEREF(__pyx_t_4);
+ PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4);
+ __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_uint32); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (PyDict_SetItem(__pyx_t_4, __pyx_n_s_dtype, __pyx_t_8) < 0) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 183, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (!(likely(((__pyx_t_8) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_8, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 183, __pyx_L1_error)
+ __pyx_t_9 = ((PyArrayObject *)__pyx_t_8);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_closest_idxs.rcbuffer->pybuffer, (PyObject*)__pyx_t_9, &__Pyx_TypeInfo_nn_uint32_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ __pyx_v_closest_idxs = ((PyArrayObject *)Py_None); __Pyx_INCREF(Py_None); __pyx_pybuffernd_closest_idxs.rcbuffer->pybuffer.buf = NULL;
+ __PYX_ERR(0, 183, __pyx_L1_error)
+ } else {__pyx_pybuffernd_closest_idxs.diminfo[0].strides = __pyx_pybuffernd_closest_idxs.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_closest_idxs.diminfo[0].shape = __pyx_pybuffernd_closest_idxs.rcbuffer->pybuffer.shape[0];
+ }
+ }
+ __pyx_t_9 = 0;
+ __pyx_v_closest_idxs = ((PyArrayObject *)__pyx_t_8);
+ __pyx_t_8 = 0;
+
+ /* "pykdtree/kdtree.pyx":189
+ *
+ * # Set up return arrays
+ * cdef uint32_t *closest_idxs_data = closest_idxs.data # <<<<<<<<<<<<<<
+ * cdef float *closest_dists_data_float
+ * cdef double *closest_dists_data_double
+ */
+ __pyx_v_closest_idxs_data = ((uint32_t *)__pyx_v_closest_idxs->data);
+
+ /* "pykdtree/kdtree.pyx":201
+ * cdef np.uint8_t *query_mask_data
+ *
+ * if mask is not None and mask.size != self.n: # <<<<<<<<<<<<<<
+ * raise ValueError('Mask must have the same size as data points')
+ * elif mask is not None:
+ */
+ __pyx_t_2 = (__pyx_v_mask != Py_None);
+ __pyx_t_10 = (__pyx_t_2 != 0);
+ if (__pyx_t_10) {
+ } else {
+ __pyx_t_3 = __pyx_t_10;
+ goto __pyx_L11_bool_binop_done;
+ }
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_mask, __pyx_n_s_size); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 201, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_4 = __Pyx_PyInt_From_uint32_t(__pyx_v_self->n); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 201, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = PyObject_RichCompare(__pyx_t_8, __pyx_t_4, Py_NE); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 201, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 201, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_3 = __pyx_t_10;
+ __pyx_L11_bool_binop_done:;
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":202
+ *
+ * if mask is not None and mask.size != self.n:
+ * raise ValueError('Mask must have the same size as data points') # <<<<<<<<<<<<<<
+ * elif mask is not None:
+ * query_mask = np.ascontiguousarray(mask.ravel(), dtype=np.uint8)
+ */
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 202, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_Raise(__pyx_t_5, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __PYX_ERR(0, 202, __pyx_L1_error)
+
+ /* "pykdtree/kdtree.pyx":201
+ * cdef np.uint8_t *query_mask_data
+ *
+ * if mask is not None and mask.size != self.n: # <<<<<<<<<<<<<<
+ * raise ValueError('Mask must have the same size as data points')
+ * elif mask is not None:
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":203
+ * if mask is not None and mask.size != self.n:
+ * raise ValueError('Mask must have the same size as data points')
+ * elif mask is not None: # <<<<<<<<<<<<<<
+ * query_mask = np.ascontiguousarray(mask.ravel(), dtype=np.uint8)
+ * query_mask_data = query_mask.data
+ */
+ __pyx_t_3 = (__pyx_v_mask != Py_None);
+ __pyx_t_10 = (__pyx_t_3 != 0);
+ if (__pyx_t_10) {
+
+ /* "pykdtree/kdtree.pyx":204
+ * raise ValueError('Mask must have the same size as data points')
+ * elif mask is not None:
+ * query_mask = np.ascontiguousarray(mask.ravel(), dtype=np.uint8) # <<<<<<<<<<<<<<
+ * query_mask_data = query_mask.data
+ * else:
+ */
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_v_mask, __pyx_n_s_ravel); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_1 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_1)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_1);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ }
+ }
+ if (__pyx_t_1) {
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ } else {
+ __pyx_t_5 = __Pyx_PyObject_CallNoArg(__pyx_t_8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error)
+ }
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = PyTuple_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_uint8); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_dtype, __pyx_t_7) < 0) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_8, __pyx_t_5); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 204, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (!(likely(((__pyx_t_7) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_7, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 204, __pyx_L1_error)
+ __pyx_t_11 = ((PyArrayObject *)__pyx_t_7);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_mask.rcbuffer->pybuffer);
+ __pyx_t_12 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_mask.rcbuffer->pybuffer, (PyObject*)__pyx_t_11, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_12 < 0)) {
+ PyErr_Fetch(&__pyx_t_13, &__pyx_t_14, &__pyx_t_15);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_mask.rcbuffer->pybuffer, (PyObject*)__pyx_v_query_mask, &__Pyx_TypeInfo_nn___pyx_t_5numpy_uint8_t, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_13); Py_XDECREF(__pyx_t_14); Py_XDECREF(__pyx_t_15);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_13, __pyx_t_14, __pyx_t_15);
+ }
+ __pyx_t_13 = __pyx_t_14 = __pyx_t_15 = 0;
+ }
+ __pyx_pybuffernd_query_mask.diminfo[0].strides = __pyx_pybuffernd_query_mask.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_mask.diminfo[0].shape = __pyx_pybuffernd_query_mask.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 204, __pyx_L1_error)
+ }
+ __pyx_t_11 = 0;
+ __pyx_v_query_mask = ((PyArrayObject *)__pyx_t_7);
+ __pyx_t_7 = 0;
+
+ /* "pykdtree/kdtree.pyx":205
+ * elif mask is not None:
+ * query_mask = np.ascontiguousarray(mask.ravel(), dtype=np.uint8)
+ * query_mask_data = query_mask.data # <<<<<<<<<<<<<<
+ * else:
+ * query_mask_data = NULL
+ */
+ __pyx_v_query_mask_data = ((uint8_t *)__pyx_v_query_mask->data);
+
+ /* "pykdtree/kdtree.pyx":203
+ * if mask is not None and mask.size != self.n:
+ * raise ValueError('Mask must have the same size as data points')
+ * elif mask is not None: # <<<<<<<<<<<<<<
+ * query_mask = np.ascontiguousarray(mask.ravel(), dtype=np.uint8)
+ * query_mask_data = query_mask.data
+ */
+ goto __pyx_L10;
+ }
+
+ /* "pykdtree/kdtree.pyx":207
+ * query_mask_data = query_mask.data
+ * else:
+ * query_mask_data = NULL # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ /*else*/ {
+ __pyx_v_query_mask_data = NULL;
+ }
+ __pyx_L10:;
+
+ /* "pykdtree/kdtree.pyx":210
+ *
+ *
+ * if query_pts.dtype == np.float32 and self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * closest_dists_float = np.empty(num_qpoints * k, dtype=np.float32)
+ * closest_dists = closest_dists_float
+ */
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_query_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_float32); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = PyObject_RichCompare(__pyx_t_7, __pyx_t_8, Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (__pyx_t_3) {
+ } else {
+ __pyx_t_10 = __pyx_t_3;
+ goto __pyx_L14_bool_binop_done;
+ }
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self->data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_float32); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = PyObject_RichCompare(__pyx_t_5, __pyx_t_7, Py_EQ); __Pyx_XGOTREF(__pyx_t_8); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_8); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 210, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_10 = __pyx_t_3;
+ __pyx_L14_bool_binop_done:;
+ if (__pyx_t_10) {
+
+ /* "pykdtree/kdtree.pyx":211
+ *
+ * if query_pts.dtype == np.float32 and self.data_pts.dtype == np.float32:
+ * closest_dists_float = np.empty(num_qpoints * k, dtype=np.float32) # <<<<<<<<<<<<<<
+ * closest_dists = closest_dists_float
+ * closest_dists_data_float = closest_dists_float.data
+ */
+ __pyx_t_8 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_8, __pyx_n_s_empty); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = __Pyx_PyInt_From_uint32_t(__pyx_v_num_qpoints); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_5 = PyNumber_Multiply(__pyx_t_8, __pyx_v_k); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = PyTuple_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5);
+ __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_float32); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (PyDict_SetItem(__pyx_t_5, __pyx_n_s_dtype, __pyx_t_1) < 0) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_7, __pyx_t_8, __pyx_t_5); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 211, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 211, __pyx_L1_error)
+ __pyx_t_16 = ((PyArrayObject *)__pyx_t_1);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_dists_float.rcbuffer->pybuffer);
+ __pyx_t_12 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_closest_dists_float.rcbuffer->pybuffer, (PyObject*)__pyx_t_16, &__Pyx_TypeInfo_float, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_12 < 0)) {
+ PyErr_Fetch(&__pyx_t_15, &__pyx_t_14, &__pyx_t_13);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_closest_dists_float.rcbuffer->pybuffer, (PyObject*)__pyx_v_closest_dists_float, &__Pyx_TypeInfo_float, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_15); Py_XDECREF(__pyx_t_14); Py_XDECREF(__pyx_t_13);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_15, __pyx_t_14, __pyx_t_13);
+ }
+ __pyx_t_15 = __pyx_t_14 = __pyx_t_13 = 0;
+ }
+ __pyx_pybuffernd_closest_dists_float.diminfo[0].strides = __pyx_pybuffernd_closest_dists_float.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_closest_dists_float.diminfo[0].shape = __pyx_pybuffernd_closest_dists_float.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 211, __pyx_L1_error)
+ }
+ __pyx_t_16 = 0;
+ __pyx_v_closest_dists_float = ((PyArrayObject *)__pyx_t_1);
+ __pyx_t_1 = 0;
+
+ /* "pykdtree/kdtree.pyx":212
+ * if query_pts.dtype == np.float32 and self.data_pts.dtype == np.float32:
+ * closest_dists_float = np.empty(num_qpoints * k, dtype=np.float32)
+ * closest_dists = closest_dists_float # <<<<<<<<<<<<<<
+ * closest_dists_data_float = closest_dists_float.data
+ * query_array_float = np.ascontiguousarray(query_pts.ravel(), dtype=np.float32)
+ */
+ __Pyx_INCREF(((PyObject *)__pyx_v_closest_dists_float));
+ __pyx_v_closest_dists = ((PyObject *)__pyx_v_closest_dists_float);
+
+ /* "pykdtree/kdtree.pyx":213
+ * closest_dists_float = np.empty(num_qpoints * k, dtype=np.float32)
+ * closest_dists = closest_dists_float
+ * closest_dists_data_float = closest_dists_float.data # <<<<<<<<<<<<<<
+ * query_array_float = np.ascontiguousarray(query_pts.ravel(), dtype=np.float32)
+ * query_array_data_float = query_array_float.data
+ */
+ __pyx_v_closest_dists_data_float = ((float *)__pyx_v_closest_dists_float->data);
+
+ /* "pykdtree/kdtree.pyx":214
+ * closest_dists = closest_dists_float
+ * closest_dists_data_float = closest_dists_float.data
+ * query_array_float = np.ascontiguousarray(query_pts.ravel(), dtype=np.float32) # <<<<<<<<<<<<<<
+ * query_array_data_float = query_array_float.data
+ * else:
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_query_pts), __pyx_n_s_ravel); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_8))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_8);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_8);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_8, function);
+ }
+ }
+ if (__pyx_t_7) {
+ __pyx_t_1 = __Pyx_PyObject_CallOneArg(__pyx_t_8, __pyx_t_7); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ } else {
+ __pyx_t_1 = __Pyx_PyObject_CallNoArg(__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error)
+ }
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_8 = PyTuple_New(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_GIVEREF(__pyx_t_1);
+ PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_1);
+ __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_float32); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (PyDict_SetItem(__pyx_t_1, __pyx_n_s_dtype, __pyx_t_4) < 0) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_8, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 214, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 214, __pyx_L1_error)
+ __pyx_t_17 = ((PyArrayObject *)__pyx_t_4);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_array_float.rcbuffer->pybuffer);
+ __pyx_t_12 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_array_float.rcbuffer->pybuffer, (PyObject*)__pyx_t_17, &__Pyx_TypeInfo_float, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_12 < 0)) {
+ PyErr_Fetch(&__pyx_t_13, &__pyx_t_14, &__pyx_t_15);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_array_float.rcbuffer->pybuffer, (PyObject*)__pyx_v_query_array_float, &__Pyx_TypeInfo_float, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_13); Py_XDECREF(__pyx_t_14); Py_XDECREF(__pyx_t_15);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_13, __pyx_t_14, __pyx_t_15);
+ }
+ __pyx_t_13 = __pyx_t_14 = __pyx_t_15 = 0;
+ }
+ __pyx_pybuffernd_query_array_float.diminfo[0].strides = __pyx_pybuffernd_query_array_float.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_array_float.diminfo[0].shape = __pyx_pybuffernd_query_array_float.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 214, __pyx_L1_error)
+ }
+ __pyx_t_17 = 0;
+ __pyx_v_query_array_float = ((PyArrayObject *)__pyx_t_4);
+ __pyx_t_4 = 0;
+
+ /* "pykdtree/kdtree.pyx":215
+ * closest_dists_data_float = closest_dists_float.data
+ * query_array_float = np.ascontiguousarray(query_pts.ravel(), dtype=np.float32)
+ * query_array_data_float = query_array_float.data # <<<<<<<<<<<<<<
+ * else:
+ * closest_dists_double = np.empty(num_qpoints * k, dtype=np.float64)
+ */
+ __pyx_v_query_array_data_float = ((float *)__pyx_v_query_array_float->data);
+
+ /* "pykdtree/kdtree.pyx":210
+ *
+ *
+ * if query_pts.dtype == np.float32 and self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * closest_dists_float = np.empty(num_qpoints * k, dtype=np.float32)
+ * closest_dists = closest_dists_float
+ */
+ goto __pyx_L13;
+ }
+
+ /* "pykdtree/kdtree.pyx":217
+ * query_array_data_float = query_array_float.data
+ * else:
+ * closest_dists_double = np.empty(num_qpoints * k, dtype=np.float64) # <<<<<<<<<<<<<<
+ * closest_dists = closest_dists_double
+ * closest_dists_data_double = closest_dists_double.data
+ */
+ /*else*/ {
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_empty); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_PyInt_From_uint32_t(__pyx_v_num_qpoints); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_8 = PyNumber_Multiply(__pyx_t_4, __pyx_v_k); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_8);
+ PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_8);
+ __pyx_t_8 = 0;
+ __pyx_t_8 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_float64); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (PyDict_SetItem(__pyx_t_8, __pyx_n_s_dtype, __pyx_t_7) < 0) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, __pyx_t_8); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 217, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ if (!(likely(((__pyx_t_7) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_7, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 217, __pyx_L1_error)
+ __pyx_t_18 = ((PyArrayObject *)__pyx_t_7);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_dists_double.rcbuffer->pybuffer);
+ __pyx_t_12 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_closest_dists_double.rcbuffer->pybuffer, (PyObject*)__pyx_t_18, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_12 < 0)) {
+ PyErr_Fetch(&__pyx_t_15, &__pyx_t_14, &__pyx_t_13);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_closest_dists_double.rcbuffer->pybuffer, (PyObject*)__pyx_v_closest_dists_double, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_15); Py_XDECREF(__pyx_t_14); Py_XDECREF(__pyx_t_13);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_15, __pyx_t_14, __pyx_t_13);
+ }
+ __pyx_t_15 = __pyx_t_14 = __pyx_t_13 = 0;
+ }
+ __pyx_pybuffernd_closest_dists_double.diminfo[0].strides = __pyx_pybuffernd_closest_dists_double.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_closest_dists_double.diminfo[0].shape = __pyx_pybuffernd_closest_dists_double.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 217, __pyx_L1_error)
+ }
+ __pyx_t_18 = 0;
+ __pyx_v_closest_dists_double = ((PyArrayObject *)__pyx_t_7);
+ __pyx_t_7 = 0;
+
+ /* "pykdtree/kdtree.pyx":218
+ * else:
+ * closest_dists_double = np.empty(num_qpoints * k, dtype=np.float64)
+ * closest_dists = closest_dists_double # <<<<<<<<<<<<<<
+ * closest_dists_data_double = closest_dists_double.data
+ * query_array_double = np.ascontiguousarray(query_pts.ravel(), dtype=np.float64)
+ */
+ __Pyx_INCREF(((PyObject *)__pyx_v_closest_dists_double));
+ __pyx_v_closest_dists = ((PyObject *)__pyx_v_closest_dists_double);
+
+ /* "pykdtree/kdtree.pyx":219
+ * closest_dists_double = np.empty(num_qpoints * k, dtype=np.float64)
+ * closest_dists = closest_dists_double
+ * closest_dists_data_double = closest_dists_double.data # <<<<<<<<<<<<<<
+ * query_array_double = np.ascontiguousarray(query_pts.ravel(), dtype=np.float64)
+ * query_array_data_double = query_array_double.data
+ */
+ __pyx_v_closest_dists_data_double = ((double *)__pyx_v_closest_dists_double->data);
+
+ /* "pykdtree/kdtree.pyx":220
+ * closest_dists = closest_dists_double
+ * closest_dists_data_double = closest_dists_double.data
+ * query_array_double = np.ascontiguousarray(query_pts.ravel(), dtype=np.float64) # <<<<<<<<<<<<<<
+ * query_array_data_double = query_array_double.data
+ *
+ */
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_ascontiguousarray); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_query_pts), __pyx_n_s_ravel); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_1 = NULL;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_1)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_1);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ if (__pyx_t_1) {
+ __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ } else {
+ __pyx_t_7 = __Pyx_PyObject_CallNoArg(__pyx_t_4); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 220, __pyx_L1_error)
+ }
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_7);
+ PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_7);
+ __pyx_t_7 = 0;
+ __pyx_t_7 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float64); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (PyDict_SetItem(__pyx_t_7, __pyx_n_s_dtype, __pyx_t_5) < 0) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_8, __pyx_t_4, __pyx_t_7); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 220, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (!(likely(((__pyx_t_5) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_5, __pyx_ptype_5numpy_ndarray))))) __PYX_ERR(0, 220, __pyx_L1_error)
+ __pyx_t_19 = ((PyArrayObject *)__pyx_t_5);
+ {
+ __Pyx_BufFmt_StackElem __pyx_stack[1];
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_array_double.rcbuffer->pybuffer);
+ __pyx_t_12 = __Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_array_double.rcbuffer->pybuffer, (PyObject*)__pyx_t_19, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack);
+ if (unlikely(__pyx_t_12 < 0)) {
+ PyErr_Fetch(&__pyx_t_13, &__pyx_t_14, &__pyx_t_15);
+ if (unlikely(__Pyx_GetBufferAndValidate(&__pyx_pybuffernd_query_array_double.rcbuffer->pybuffer, (PyObject*)__pyx_v_query_array_double, &__Pyx_TypeInfo_double, PyBUF_FORMAT| PyBUF_STRIDES, 1, 0, __pyx_stack) == -1)) {
+ Py_XDECREF(__pyx_t_13); Py_XDECREF(__pyx_t_14); Py_XDECREF(__pyx_t_15);
+ __Pyx_RaiseBufferFallbackError();
+ } else {
+ PyErr_Restore(__pyx_t_13, __pyx_t_14, __pyx_t_15);
+ }
+ __pyx_t_13 = __pyx_t_14 = __pyx_t_15 = 0;
+ }
+ __pyx_pybuffernd_query_array_double.diminfo[0].strides = __pyx_pybuffernd_query_array_double.rcbuffer->pybuffer.strides[0]; __pyx_pybuffernd_query_array_double.diminfo[0].shape = __pyx_pybuffernd_query_array_double.rcbuffer->pybuffer.shape[0];
+ if (unlikely(__pyx_t_12 < 0)) __PYX_ERR(0, 220, __pyx_L1_error)
+ }
+ __pyx_t_19 = 0;
+ __pyx_v_query_array_double = ((PyArrayObject *)__pyx_t_5);
+ __pyx_t_5 = 0;
+
+ /* "pykdtree/kdtree.pyx":221
+ * closest_dists_data_double = closest_dists_double.data
+ * query_array_double = np.ascontiguousarray(query_pts.ravel(), dtype=np.float64)
+ * query_array_data_double = query_array_double.data # <<<<<<<<<<<<<<
+ *
+ * # Setup distance_upper_bound
+ */
+ __pyx_v_query_array_data_double = ((double *)__pyx_v_query_array_double->data);
+ }
+ __pyx_L13:;
+
+ /* "pykdtree/kdtree.pyx":226
+ * cdef float dub_float
+ * cdef double dub_double
+ * if distance_upper_bound is None: # <<<<<<<<<<<<<<
+ * if self.data_pts.dtype == np.float32:
+ * dub_float = np.finfo(np.float32).max
+ */
+ __pyx_t_10 = (__pyx_v_distance_upper_bound == Py_None);
+ __pyx_t_3 = (__pyx_t_10 != 0);
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":227
+ * cdef double dub_double
+ * if distance_upper_bound is None:
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * dub_float = np.finfo(np.float32).max
+ * else:
+ */
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self->data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 227, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 227, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_float32); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 227, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = PyObject_RichCompare(__pyx_t_5, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_7); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 227, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_7); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 227, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":228
+ * if distance_upper_bound is None:
+ * if self.data_pts.dtype == np.float32:
+ * dub_float = np.finfo(np.float32).max # <<<<<<<<<<<<<<
+ * else:
+ * dub_double = np.finfo(np.float64).max
+ */
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_finfo); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_float32); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_5))) {
+ __pyx_t_4 = PyMethod_GET_SELF(__pyx_t_5);
+ if (likely(__pyx_t_4)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5);
+ __Pyx_INCREF(__pyx_t_4);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_5, function);
+ }
+ }
+ if (!__pyx_t_4) {
+ __pyx_t_7 = __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_t_8); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_8};
+ __pyx_t_7 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_4, __pyx_t_8};
+ __pyx_t_7 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_1 = PyTuple_New(1+1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_GIVEREF(__pyx_t_4); PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_t_4); __pyx_t_4 = NULL;
+ __Pyx_GIVEREF(__pyx_t_8);
+ PyTuple_SET_ITEM(__pyx_t_1, 0+1, __pyx_t_8);
+ __pyx_t_8 = 0;
+ __pyx_t_7 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_1, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_max); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_20 = __pyx_PyFloat_AsFloat(__pyx_t_5); if (unlikely((__pyx_t_20 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 228, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_dub_float = ((float)__pyx_t_20);
+
+ /* "pykdtree/kdtree.pyx":227
+ * cdef double dub_double
+ * if distance_upper_bound is None:
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * dub_float = np.finfo(np.float32).max
+ * else:
+ */
+ goto __pyx_L17;
+ }
+
+ /* "pykdtree/kdtree.pyx":230
+ * dub_float = np.finfo(np.float32).max
+ * else:
+ * dub_double = np.finfo(np.float64).max # <<<<<<<<<<<<<<
+ * else:
+ * if self.data_pts.dtype == np.float32:
+ */
+ /*else*/ {
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_finfo); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_7, __pyx_n_s_float64); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __pyx_t_7 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_7)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_7);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ }
+ }
+ if (!__pyx_t_7) {
+ __pyx_t_5 = __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_8); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_8};
+ __pyx_t_5 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_7, __pyx_t_8};
+ __pyx_t_5 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0;
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_4 = PyTuple_New(1+1); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_7); __pyx_t_7 = NULL;
+ __Pyx_GIVEREF(__pyx_t_8);
+ PyTuple_SET_ITEM(__pyx_t_4, 0+1, __pyx_t_8);
+ __pyx_t_8 = 0;
+ __pyx_t_5 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_4, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_max); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_21 = __pyx_PyFloat_AsDouble(__pyx_t_1); if (unlikely((__pyx_t_21 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 230, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_dub_double = ((double)__pyx_t_21);
+ }
+ __pyx_L17:;
+
+ /* "pykdtree/kdtree.pyx":226
+ * cdef float dub_float
+ * cdef double dub_double
+ * if distance_upper_bound is None: # <<<<<<<<<<<<<<
+ * if self.data_pts.dtype == np.float32:
+ * dub_float = np.finfo(np.float32).max
+ */
+ goto __pyx_L16;
+ }
+
+ /* "pykdtree/kdtree.pyx":232
+ * dub_double = np.finfo(np.float64).max
+ * else:
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * dub_float = (distance_upper_bound * distance_upper_bound)
+ * else:
+ */
+ /*else*/ {
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self->data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 232, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 232, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_5, __pyx_n_s_float32); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 232, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_t_5 = PyObject_RichCompare(__pyx_t_1, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_5); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 232, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_5); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 232, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":233
+ * else:
+ * if self.data_pts.dtype == np.float32:
+ * dub_float = (distance_upper_bound * distance_upper_bound) # <<<<<<<<<<<<<<
+ * else:
+ * dub_double = (distance_upper_bound * distance_upper_bound)
+ */
+ __pyx_t_5 = PyNumber_Multiply(__pyx_v_distance_upper_bound, __pyx_v_distance_upper_bound); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 233, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_20 = __pyx_PyFloat_AsFloat(__pyx_t_5); if (unlikely((__pyx_t_20 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 233, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_dub_float = ((float)__pyx_t_20);
+
+ /* "pykdtree/kdtree.pyx":232
+ * dub_double = np.finfo(np.float64).max
+ * else:
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * dub_float = (distance_upper_bound * distance_upper_bound)
+ * else:
+ */
+ goto __pyx_L18;
+ }
+
+ /* "pykdtree/kdtree.pyx":235
+ * dub_float = (distance_upper_bound * distance_upper_bound)
+ * else:
+ * dub_double = (distance_upper_bound * distance_upper_bound) # <<<<<<<<<<<<<<
+ *
+ * # Set epsilon
+ */
+ /*else*/ {
+ __pyx_t_5 = PyNumber_Multiply(__pyx_v_distance_upper_bound, __pyx_v_distance_upper_bound); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 235, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_21 = __pyx_PyFloat_AsDouble(__pyx_t_5); if (unlikely((__pyx_t_21 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 235, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __pyx_v_dub_double = ((double)__pyx_t_21);
+ }
+ __pyx_L18:;
+ }
+ __pyx_L16:;
+
+ /* "pykdtree/kdtree.pyx":238
+ *
+ * # Set epsilon
+ * cdef double epsilon_float = eps # <<<<<<<<<<<<<<
+ * cdef double epsilon_double = eps
+ *
+ */
+ __pyx_t_20 = __pyx_PyFloat_AsFloat(__pyx_v_eps); if (unlikely((__pyx_t_20 == (float)-1) && PyErr_Occurred())) __PYX_ERR(0, 238, __pyx_L1_error)
+ __pyx_v_epsilon_float = ((float)__pyx_t_20);
+
+ /* "pykdtree/kdtree.pyx":239
+ * # Set epsilon
+ * cdef double epsilon_float = eps
+ * cdef double epsilon_double = eps # <<<<<<<<<<<<<<
+ *
+ * # Release GIL and query tree
+ */
+ __pyx_t_21 = __pyx_PyFloat_AsDouble(__pyx_v_eps); if (unlikely((__pyx_t_21 == (double)-1) && PyErr_Occurred())) __PYX_ERR(0, 239, __pyx_L1_error)
+ __pyx_v_epsilon_double = ((double)__pyx_t_21);
+
+ /* "pykdtree/kdtree.pyx":242
+ *
+ * # Release GIL and query tree
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * with nogil:
+ * search_tree_float(self._kdtree_float, self._data_pts_data_float,
+ */
+ __pyx_t_5 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self->data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_4 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_4, __pyx_n_s_float32); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_4 = PyObject_RichCompare(__pyx_t_5, __pyx_t_1, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 242, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":243
+ * # Release GIL and query tree
+ * if self.data_pts.dtype == np.float32:
+ * with nogil: # <<<<<<<<<<<<<<
+ * search_tree_float(self._kdtree_float, self._data_pts_data_float,
+ * query_array_data_float, num_qpoints, num_n, dub_float, epsilon_float,
+ */
+ {
+ #ifdef WITH_THREAD
+ PyThreadState *_save;
+ Py_UNBLOCK_THREADS
+ __Pyx_FastGIL_Remember();
+ #endif
+ /*try:*/ {
+
+ /* "pykdtree/kdtree.pyx":244
+ * if self.data_pts.dtype == np.float32:
+ * with nogil:
+ * search_tree_float(self._kdtree_float, self._data_pts_data_float, # <<<<<<<<<<<<<<
+ * query_array_data_float, num_qpoints, num_n, dub_float, epsilon_float,
+ * query_mask_data, closest_idxs_data, closest_dists_data_float)
+ */
+ search_tree_float(__pyx_v_self->_kdtree_float, __pyx_v_self->_data_pts_data_float, __pyx_v_query_array_data_float, __pyx_v_num_qpoints, __pyx_v_num_n, __pyx_v_dub_float, __pyx_v_epsilon_float, __pyx_v_query_mask_data, __pyx_v_closest_idxs_data, __pyx_v_closest_dists_data_float);
+ }
+
+ /* "pykdtree/kdtree.pyx":243
+ * # Release GIL and query tree
+ * if self.data_pts.dtype == np.float32:
+ * with nogil: # <<<<<<<<<<<<<<
+ * search_tree_float(self._kdtree_float, self._data_pts_data_float,
+ * query_array_data_float, num_qpoints, num_n, dub_float, epsilon_float,
+ */
+ /*finally:*/ {
+ /*normal exit:*/{
+ #ifdef WITH_THREAD
+ __Pyx_FastGIL_Forget();
+ Py_BLOCK_THREADS
+ #endif
+ goto __pyx_L22;
+ }
+ __pyx_L22:;
+ }
+ }
+
+ /* "pykdtree/kdtree.pyx":242
+ *
+ * # Release GIL and query tree
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * with nogil:
+ * search_tree_float(self._kdtree_float, self._data_pts_data_float,
+ */
+ goto __pyx_L19;
+ }
+
+ /* "pykdtree/kdtree.pyx":249
+ *
+ * else:
+ * with nogil: # <<<<<<<<<<<<<<
+ * search_tree_double(self._kdtree_double, self._data_pts_data_double,
+ * query_array_data_double, num_qpoints, num_n, dub_double, epsilon_double,
+ */
+ /*else*/ {
+ {
+ #ifdef WITH_THREAD
+ PyThreadState *_save;
+ Py_UNBLOCK_THREADS
+ __Pyx_FastGIL_Remember();
+ #endif
+ /*try:*/ {
+
+ /* "pykdtree/kdtree.pyx":250
+ * else:
+ * with nogil:
+ * search_tree_double(self._kdtree_double, self._data_pts_data_double, # <<<<<<<<<<<<<<
+ * query_array_data_double, num_qpoints, num_n, dub_double, epsilon_double,
+ * query_mask_data, closest_idxs_data, closest_dists_data_double)
+ */
+ search_tree_double(__pyx_v_self->_kdtree_double, __pyx_v_self->_data_pts_data_double, __pyx_v_query_array_data_double, __pyx_v_num_qpoints, __pyx_v_num_n, __pyx_v_dub_double, __pyx_v_epsilon_double, __pyx_v_query_mask_data, __pyx_v_closest_idxs_data, __pyx_v_closest_dists_data_double);
+ }
+
+ /* "pykdtree/kdtree.pyx":249
+ *
+ * else:
+ * with nogil: # <<<<<<<<<<<<<<
+ * search_tree_double(self._kdtree_double, self._data_pts_data_double,
+ * query_array_data_double, num_qpoints, num_n, dub_double, epsilon_double,
+ */
+ /*finally:*/ {
+ /*normal exit:*/{
+ #ifdef WITH_THREAD
+ __Pyx_FastGIL_Forget();
+ Py_BLOCK_THREADS
+ #endif
+ goto __pyx_L25;
+ }
+ __pyx_L25:;
+ }
+ }
+ }
+ __pyx_L19:;
+
+ /* "pykdtree/kdtree.pyx":255
+ *
+ * # Shape result
+ * if k > 1: # <<<<<<<<<<<<<<
+ * closest_dists_res = closest_dists.reshape(num_qpoints, k)
+ * closest_idxs_res = closest_idxs.reshape(num_qpoints, k)
+ */
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_k, __pyx_int_1, Py_GT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 255, __pyx_L1_error)
+ __pyx_t_3 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_3 < 0)) __PYX_ERR(0, 255, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":256
+ * # Shape result
+ * if k > 1:
+ * closest_dists_res = closest_dists.reshape(num_qpoints, k) # <<<<<<<<<<<<<<
+ * closest_idxs_res = closest_idxs.reshape(num_qpoints, k)
+ * else:
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_closest_dists, __pyx_n_s_reshape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 256, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_5 = __Pyx_PyInt_From_uint32_t(__pyx_v_num_qpoints); if (unlikely(!__pyx_t_5)) __PYX_ERR(0, 256, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_5);
+ __pyx_t_8 = NULL;
+ __pyx_t_12 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_8)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_8);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ __pyx_t_12 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_8, __pyx_t_5, __pyx_v_k};
+ __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_12, 2+__pyx_t_12); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 256, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_8, __pyx_t_5, __pyx_v_k};
+ __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_12, 2+__pyx_t_12); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 256, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_7 = PyTuple_New(2+__pyx_t_12); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 256, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ if (__pyx_t_8) {
+ __Pyx_GIVEREF(__pyx_t_8); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_8); __pyx_t_8 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_5);
+ PyTuple_SET_ITEM(__pyx_t_7, 0+__pyx_t_12, __pyx_t_5);
+ __Pyx_INCREF(__pyx_v_k);
+ __Pyx_GIVEREF(__pyx_v_k);
+ PyTuple_SET_ITEM(__pyx_t_7, 1+__pyx_t_12, __pyx_v_k);
+ __pyx_t_5 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_7, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 256, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_closest_dists_res = __pyx_t_4;
+ __pyx_t_4 = 0;
+
+ /* "pykdtree/kdtree.pyx":257
+ * if k > 1:
+ * closest_dists_res = closest_dists.reshape(num_qpoints, k)
+ * closest_idxs_res = closest_idxs.reshape(num_qpoints, k) # <<<<<<<<<<<<<<
+ * else:
+ * closest_dists_res = closest_dists
+ */
+ __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_closest_idxs), __pyx_n_s_reshape); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_7 = __Pyx_PyInt_From_uint32_t(__pyx_v_num_qpoints); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __pyx_t_5 = NULL;
+ __pyx_t_12 = 0;
+ if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) {
+ __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1);
+ if (likely(__pyx_t_5)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1);
+ __Pyx_INCREF(__pyx_t_5);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_1, function);
+ __pyx_t_12 = 1;
+ }
+ }
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_7, __pyx_v_k};
+ __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_12, 2+__pyx_t_12); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_1)) {
+ PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_7, __pyx_v_k};
+ __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_1, __pyx_temp+1-__pyx_t_12, 2+__pyx_t_12); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0;
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ } else
+ #endif
+ {
+ __pyx_t_8 = PyTuple_New(2+__pyx_t_12); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ if (__pyx_t_5) {
+ __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL;
+ }
+ __Pyx_GIVEREF(__pyx_t_7);
+ PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_12, __pyx_t_7);
+ __Pyx_INCREF(__pyx_v_k);
+ __Pyx_GIVEREF(__pyx_v_k);
+ PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_12, __pyx_v_k);
+ __pyx_t_7 = 0;
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 257, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_closest_idxs_res = __pyx_t_4;
+ __pyx_t_4 = 0;
+
+ /* "pykdtree/kdtree.pyx":255
+ *
+ * # Shape result
+ * if k > 1: # <<<<<<<<<<<<<<
+ * closest_dists_res = closest_dists.reshape(num_qpoints, k)
+ * closest_idxs_res = closest_idxs.reshape(num_qpoints, k)
+ */
+ goto __pyx_L26;
+ }
+
+ /* "pykdtree/kdtree.pyx":259
+ * closest_idxs_res = closest_idxs.reshape(num_qpoints, k)
+ * else:
+ * closest_dists_res = closest_dists # <<<<<<<<<<<<<<
+ * closest_idxs_res = closest_idxs
+ *
+ */
+ /*else*/ {
+ __Pyx_INCREF(__pyx_v_closest_dists);
+ __pyx_v_closest_dists_res = __pyx_v_closest_dists;
+
+ /* "pykdtree/kdtree.pyx":260
+ * else:
+ * closest_dists_res = closest_dists
+ * closest_idxs_res = closest_idxs # <<<<<<<<<<<<<<
+ *
+ * if distance_upper_bound is not None: # Mark out of bounds results
+ */
+ __Pyx_INCREF(((PyObject *)__pyx_v_closest_idxs));
+ __pyx_v_closest_idxs_res = ((PyObject *)__pyx_v_closest_idxs);
+ }
+ __pyx_L26:;
+
+ /* "pykdtree/kdtree.pyx":262
+ * closest_idxs_res = closest_idxs
+ *
+ * if distance_upper_bound is not None: # Mark out of bounds results # <<<<<<<<<<<<<<
+ * if self.data_pts.dtype == np.float32:
+ * idx_out = (closest_dists_res >= dub_float)
+ */
+ __pyx_t_3 = (__pyx_v_distance_upper_bound != Py_None);
+ __pyx_t_10 = (__pyx_t_3 != 0);
+ if (__pyx_t_10) {
+
+ /* "pykdtree/kdtree.pyx":263
+ *
+ * if distance_upper_bound is not None: # Mark out of bounds results
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * idx_out = (closest_dists_res >= dub_float)
+ * else:
+ */
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self->data_pts), __pyx_n_s_dtype); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_float32); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 263, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = PyObject_RichCompare(__pyx_t_4, __pyx_t_8, Py_EQ); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 263, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_t_1); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 263, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (__pyx_t_10) {
+
+ /* "pykdtree/kdtree.pyx":264
+ * if distance_upper_bound is not None: # Mark out of bounds results
+ * if self.data_pts.dtype == np.float32:
+ * idx_out = (closest_dists_res >= dub_float) # <<<<<<<<<<<<<<
+ * else:
+ * idx_out = (closest_dists_res >= dub_double)
+ */
+ __pyx_t_1 = PyFloat_FromDouble(__pyx_v_dub_float); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 264, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_8 = PyObject_RichCompare(__pyx_v_closest_dists_res, __pyx_t_1, Py_GE); __Pyx_XGOTREF(__pyx_t_8); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 264, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_v_idx_out = __pyx_t_8;
+ __pyx_t_8 = 0;
+
+ /* "pykdtree/kdtree.pyx":263
+ *
+ * if distance_upper_bound is not None: # Mark out of bounds results
+ * if self.data_pts.dtype == np.float32: # <<<<<<<<<<<<<<
+ * idx_out = (closest_dists_res >= dub_float)
+ * else:
+ */
+ goto __pyx_L28;
+ }
+
+ /* "pykdtree/kdtree.pyx":266
+ * idx_out = (closest_dists_res >= dub_float)
+ * else:
+ * idx_out = (closest_dists_res >= dub_double) # <<<<<<<<<<<<<<
+ *
+ * closest_dists_res[idx_out] = np.Inf
+ */
+ /*else*/ {
+ __pyx_t_8 = PyFloat_FromDouble(__pyx_v_dub_double); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 266, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __pyx_t_1 = PyObject_RichCompare(__pyx_v_closest_dists_res, __pyx_t_8, Py_GE); __Pyx_XGOTREF(__pyx_t_1); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 266, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+ __pyx_v_idx_out = __pyx_t_1;
+ __pyx_t_1 = 0;
+ }
+ __pyx_L28:;
+
+ /* "pykdtree/kdtree.pyx":268
+ * idx_out = (closest_dists_res >= dub_double)
+ *
+ * closest_dists_res[idx_out] = np.Inf # <<<<<<<<<<<<<<
+ * closest_idxs_res[idx_out] = self.n
+ *
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 268, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_8 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_Inf); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 268, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ if (unlikely(PyObject_SetItem(__pyx_v_closest_dists_res, __pyx_v_idx_out, __pyx_t_8) < 0)) __PYX_ERR(0, 268, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+
+ /* "pykdtree/kdtree.pyx":269
+ *
+ * closest_dists_res[idx_out] = np.Inf
+ * closest_idxs_res[idx_out] = self.n # <<<<<<<<<<<<<<
+ *
+ * if not sqr_dists: # Return actual cartesian distances
+ */
+ __pyx_t_8 = __Pyx_PyInt_From_uint32_t(__pyx_v_self->n); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 269, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ if (unlikely(PyObject_SetItem(__pyx_v_closest_idxs_res, __pyx_v_idx_out, __pyx_t_8) < 0)) __PYX_ERR(0, 269, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0;
+
+ /* "pykdtree/kdtree.pyx":262
+ * closest_idxs_res = closest_idxs
+ *
+ * if distance_upper_bound is not None: # Mark out of bounds results # <<<<<<<<<<<<<<
+ * if self.data_pts.dtype == np.float32:
+ * idx_out = (closest_dists_res >= dub_float)
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":271
+ * closest_idxs_res[idx_out] = self.n
+ *
+ * if not sqr_dists: # Return actual cartesian distances # <<<<<<<<<<<<<<
+ * closest_dists_res = np.sqrt(closest_dists_res)
+ *
+ */
+ __pyx_t_10 = __Pyx_PyObject_IsTrue(__pyx_v_sqr_dists); if (unlikely(__pyx_t_10 < 0)) __PYX_ERR(0, 271, __pyx_L1_error)
+ __pyx_t_3 = ((!__pyx_t_10) != 0);
+ if (__pyx_t_3) {
+
+ /* "pykdtree/kdtree.pyx":272
+ *
+ * if not sqr_dists: # Return actual cartesian distances
+ * closest_dists_res = np.sqrt(closest_dists_res) # <<<<<<<<<<<<<<
+ *
+ * return closest_dists_res, closest_idxs_res
+ */
+ __pyx_t_1 = __Pyx_GetModuleGlobalName(__pyx_n_s_np); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 272, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_t_4 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_sqrt); if (unlikely(!__pyx_t_4)) __PYX_ERR(0, 272, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __pyx_t_1 = NULL;
+ if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) {
+ __pyx_t_1 = PyMethod_GET_SELF(__pyx_t_4);
+ if (likely(__pyx_t_1)) {
+ PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4);
+ __Pyx_INCREF(__pyx_t_1);
+ __Pyx_INCREF(function);
+ __Pyx_DECREF_SET(__pyx_t_4, function);
+ }
+ }
+ if (!__pyx_t_1) {
+ __pyx_t_8 = __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_v_closest_dists_res); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 272, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ } else {
+ #if CYTHON_FAST_PYCALL
+ if (PyFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_1, __pyx_v_closest_dists_res};
+ __pyx_t_8 = __Pyx_PyFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 272, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_GOTREF(__pyx_t_8);
+ } else
+ #endif
+ #if CYTHON_FAST_PYCCALL
+ if (__Pyx_PyFastCFunction_Check(__pyx_t_4)) {
+ PyObject *__pyx_temp[2] = {__pyx_t_1, __pyx_v_closest_dists_res};
+ __pyx_t_8 = __Pyx_PyCFunction_FastCall(__pyx_t_4, __pyx_temp+1-1, 1+1); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 272, __pyx_L1_error)
+ __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __Pyx_GOTREF(__pyx_t_8);
+ } else
+ #endif
+ {
+ __pyx_t_7 = PyTuple_New(1+1); if (unlikely(!__pyx_t_7)) __PYX_ERR(0, 272, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_7);
+ __Pyx_GIVEREF(__pyx_t_1); PyTuple_SET_ITEM(__pyx_t_7, 0, __pyx_t_1); __pyx_t_1 = NULL;
+ __Pyx_INCREF(__pyx_v_closest_dists_res);
+ __Pyx_GIVEREF(__pyx_v_closest_dists_res);
+ PyTuple_SET_ITEM(__pyx_t_7, 0+1, __pyx_v_closest_dists_res);
+ __pyx_t_8 = __Pyx_PyObject_Call(__pyx_t_4, __pyx_t_7, NULL); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 272, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0;
+ }
+ }
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_DECREF_SET(__pyx_v_closest_dists_res, __pyx_t_8);
+ __pyx_t_8 = 0;
+
+ /* "pykdtree/kdtree.pyx":271
+ * closest_idxs_res[idx_out] = self.n
+ *
+ * if not sqr_dists: # Return actual cartesian distances # <<<<<<<<<<<<<<
+ * closest_dists_res = np.sqrt(closest_dists_res)
+ *
+ */
+ }
+
+ /* "pykdtree/kdtree.pyx":274
+ * closest_dists_res = np.sqrt(closest_dists_res)
+ *
+ * return closest_dists_res, closest_idxs_res # <<<<<<<<<<<<<<
+ *
+ * def __dealloc__(KDTree self):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_8 = PyTuple_New(2); if (unlikely(!__pyx_t_8)) __PYX_ERR(0, 274, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_8);
+ __Pyx_INCREF(__pyx_v_closest_dists_res);
+ __Pyx_GIVEREF(__pyx_v_closest_dists_res);
+ PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_closest_dists_res);
+ __Pyx_INCREF(__pyx_v_closest_idxs_res);
+ __Pyx_GIVEREF(__pyx_v_closest_idxs_res);
+ PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_v_closest_idxs_res);
+ __pyx_r = __pyx_t_8;
+ __pyx_t_8 = 0;
+ goto __pyx_L0;
+
+ /* "pykdtree/kdtree.pyx":132
+ *
+ *
+ * def query(KDTree self, np.ndarray query_pts not None, k=1, eps=0, # <<<<<<<<<<<<<<
+ * distance_upper_bound=None, sqr_dists=False, mask=None):
+ * """Query the kd-tree for nearest neighbors
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_XDECREF(__pyx_t_5);
+ __Pyx_XDECREF(__pyx_t_7);
+ __Pyx_XDECREF(__pyx_t_8);
+ { PyObject *__pyx_type, *__pyx_value, *__pyx_tb;
+ __Pyx_PyThreadState_declare
+ __Pyx_PyThreadState_assign
+ __Pyx_ErrFetch(&__pyx_type, &__pyx_value, &__pyx_tb);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_dists_double.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_dists_float.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_idxs.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_array_double.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_array_float.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_mask.rcbuffer->pybuffer);
+ __Pyx_ErrRestore(__pyx_type, __pyx_value, __pyx_tb);}
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.query", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ goto __pyx_L2;
+ __pyx_L0:;
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_dists_double.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_dists_float.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_closest_idxs.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_array_double.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_array_float.rcbuffer->pybuffer);
+ __Pyx_SafeReleaseBuffer(&__pyx_pybuffernd_query_mask.rcbuffer->pybuffer);
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_closest_idxs);
+ __Pyx_XDECREF((PyObject *)__pyx_v_closest_dists_float);
+ __Pyx_XDECREF((PyObject *)__pyx_v_closest_dists_double);
+ __Pyx_XDECREF((PyObject *)__pyx_v_query_array_float);
+ __Pyx_XDECREF((PyObject *)__pyx_v_query_array_double);
+ __Pyx_XDECREF((PyObject *)__pyx_v_query_mask);
+ __Pyx_XDECREF(__pyx_v_closest_dists);
+ __Pyx_XDECREF(__pyx_v_closest_dists_res);
+ __Pyx_XDECREF(__pyx_v_closest_idxs_res);
+ __Pyx_XDECREF(__pyx_v_idx_out);
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "pykdtree/kdtree.pyx":276
+ * return closest_dists_res, closest_idxs_res
+ *
+ * def __dealloc__(KDTree self): # <<<<<<<<<<<<<<
+ * if self._kdtree_float != NULL:
+ * delete_tree_float(self._kdtree_float)
+ */
+
+/* Python wrapper */
+static void __pyx_pw_8pykdtree_6kdtree_6KDTree_7__dealloc__(PyObject *__pyx_v_self); /*proto*/
+static void __pyx_pw_8pykdtree_6kdtree_6KDTree_7__dealloc__(PyObject *__pyx_v_self) {
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0);
+ __pyx_pf_8pykdtree_6kdtree_6KDTree_6__dealloc__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+static void __pyx_pf_8pykdtree_6kdtree_6KDTree_6__dealloc__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ __Pyx_RefNannySetupContext("__dealloc__", 0);
+
+ /* "pykdtree/kdtree.pyx":277
+ *
+ * def __dealloc__(KDTree self):
+ * if self._kdtree_float != NULL: # <<<<<<<<<<<<<<
+ * delete_tree_float(self._kdtree_float)
+ * elif self._kdtree_double != NULL:
+ */
+ __pyx_t_1 = ((__pyx_v_self->_kdtree_float != NULL) != 0);
+ if (__pyx_t_1) {
+
+ /* "pykdtree/kdtree.pyx":278
+ * def __dealloc__(KDTree self):
+ * if self._kdtree_float != NULL:
+ * delete_tree_float(self._kdtree_float) # <<<<<<<<<<<<<<
+ * elif self._kdtree_double != NULL:
+ * delete_tree_double(self._kdtree_double)
+ */
+ delete_tree_float(__pyx_v_self->_kdtree_float);
+
+ /* "pykdtree/kdtree.pyx":277
+ *
+ * def __dealloc__(KDTree self):
+ * if self._kdtree_float != NULL: # <<<<<<<<<<<<<<
+ * delete_tree_float(self._kdtree_float)
+ * elif self._kdtree_double != NULL:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "pykdtree/kdtree.pyx":279
+ * if self._kdtree_float != NULL:
+ * delete_tree_float(self._kdtree_float)
+ * elif self._kdtree_double != NULL: # <<<<<<<<<<<<<<
+ * delete_tree_double(self._kdtree_double)
+ */
+ __pyx_t_1 = ((__pyx_v_self->_kdtree_double != NULL) != 0);
+ if (__pyx_t_1) {
+
+ /* "pykdtree/kdtree.pyx":280
+ * delete_tree_float(self._kdtree_float)
+ * elif self._kdtree_double != NULL:
+ * delete_tree_double(self._kdtree_double) # <<<<<<<<<<<<<<
+ */
+ delete_tree_double(__pyx_v_self->_kdtree_double);
+
+ /* "pykdtree/kdtree.pyx":279
+ * if self._kdtree_float != NULL:
+ * delete_tree_float(self._kdtree_float)
+ * elif self._kdtree_double != NULL: # <<<<<<<<<<<<<<
+ * delete_tree_double(self._kdtree_double)
+ */
+ }
+ __pyx_L3:;
+
+ /* "pykdtree/kdtree.pyx":276
+ * return closest_dists_res, closest_idxs_res
+ *
+ * def __dealloc__(KDTree self): # <<<<<<<<<<<<<<
+ * if self._kdtree_float != NULL:
+ * delete_tree_float(self._kdtree_float)
+ */
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+/* "pykdtree/kdtree.pyx":79
+ * cdef tree_float *_kdtree_float
+ * cdef tree_double *_kdtree_double
+ * cdef readonly np.ndarray data_pts # <<<<<<<<<<<<<<
+ * cdef readonly np.ndarray data
+ * cdef float *_data_pts_data_float
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_8data_pts_1__get__(PyObject *__pyx_v_self); /*proto*/
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_8data_pts_1__get__(PyObject *__pyx_v_self) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_8data_pts___get__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_8data_pts___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__get__", 0);
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(((PyObject *)__pyx_v_self->data_pts));
+ __pyx_r = ((PyObject *)__pyx_v_self->data_pts);
+ goto __pyx_L0;
+
+ /* function exit code */
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "pykdtree/kdtree.pyx":80
+ * cdef tree_double *_kdtree_double
+ * cdef readonly np.ndarray data_pts
+ * cdef readonly np.ndarray data # <<<<<<<<<<<<<<
+ * cdef float *_data_pts_data_float
+ * cdef double *_data_pts_data_double
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_4data_1__get__(PyObject *__pyx_v_self); /*proto*/
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_4data_1__get__(PyObject *__pyx_v_self) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_4data___get__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_4data___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__get__", 0);
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(((PyObject *)__pyx_v_self->data));
+ __pyx_r = ((PyObject *)__pyx_v_self->data);
+ goto __pyx_L0;
+
+ /* function exit code */
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "pykdtree/kdtree.pyx":83
+ * cdef float *_data_pts_data_float
+ * cdef double *_data_pts_data_double
+ * cdef readonly uint32_t n # <<<<<<<<<<<<<<
+ * cdef readonly int8_t ndim
+ * cdef readonly uint32_t leafsize
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_1n_1__get__(PyObject *__pyx_v_self); /*proto*/
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_1n_1__get__(PyObject *__pyx_v_self) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_1n___get__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_1n___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__get__", 0);
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_From_uint32_t(__pyx_v_self->n); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 83, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.n.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "pykdtree/kdtree.pyx":84
+ * cdef double *_data_pts_data_double
+ * cdef readonly uint32_t n
+ * cdef readonly int8_t ndim # <<<<<<<<<<<<<<
+ * cdef readonly uint32_t leafsize
+ *
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_4ndim_1__get__(PyObject *__pyx_v_self) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_4ndim___get__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_4ndim___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__get__", 0);
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_From_int8_t(__pyx_v_self->ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 84, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "pykdtree/kdtree.pyx":85
+ * cdef readonly uint32_t n
+ * cdef readonly int8_t ndim
+ * cdef readonly uint32_t leafsize # <<<<<<<<<<<<<<
+ *
+ * def __cinit__(KDTree self):
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_8leafsize_1__get__(PyObject *__pyx_v_self); /*proto*/
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_8leafsize_1__get__(PyObject *__pyx_v_self) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__get__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_8leafsize___get__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_8leafsize___get__(struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__get__", 0);
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = __Pyx_PyInt_From_uint32_t(__pyx_v_self->leafsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 85, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.leafsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_9__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_9__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_8__reduce_cython__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_8__reduce_cython__(CYTHON_UNUSED struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__reduce_cython__", 0);
+
+ /* "(tree fragment)":2
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 2, __pyx_L1_error)
+
+ /* "(tree fragment)":1
+ * def __reduce_cython__(self): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+/* Python wrapper */
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_11__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/
+static PyObject *__pyx_pw_8pykdtree_6kdtree_6KDTree_11__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = 0;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_8pykdtree_6kdtree_6KDTree_10__setstate_cython__(((struct __pyx_obj_8pykdtree_6kdtree_KDTree *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static PyObject *__pyx_pf_8pykdtree_6kdtree_6KDTree_10__setstate_cython__(CYTHON_UNUSED struct __pyx_obj_8pykdtree_6kdtree_KDTree *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("__setstate_cython__", 0);
+
+ /* "(tree fragment)":4
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<<
+ */
+ __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __Pyx_Raise(__pyx_t_1, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+ __PYX_ERR(1, 4, __pyx_L1_error)
+
+ /* "(tree fragment)":3
+ * def __reduce_cython__(self):
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<<
+ * raise TypeError("no default __reduce__ due to non-trivial __cinit__")
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("pykdtree.kdtree.KDTree.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":214
+ * # experimental exception made for __getbuffer__ and __releasebuffer__
+ * # -- the details of this may change.
+ * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<<
+ * # This implementation of getbuffer is geared towards Cython
+ * # requirements, and does not yet fulfill the PEP.
+ */
+
+/* Python wrapper */
+static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/
+static CYTHON_UNUSED int __pyx_pw_5numpy_7ndarray_1__getbuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0);
+ __pyx_r = __pyx_pf_5numpy_7ndarray___getbuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+static int __pyx_pf_5numpy_7ndarray___getbuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) {
+ int __pyx_v_copy_shape;
+ int __pyx_v_i;
+ int __pyx_v_ndim;
+ int __pyx_v_endian_detector;
+ int __pyx_v_little_endian;
+ int __pyx_v_t;
+ char *__pyx_v_f;
+ PyArray_Descr *__pyx_v_descr = 0;
+ int __pyx_v_offset;
+ int __pyx_v_hasfields;
+ int __pyx_r;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ int __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ int __pyx_t_4;
+ int __pyx_t_5;
+ PyObject *__pyx_t_6 = NULL;
+ char *__pyx_t_7;
+ __Pyx_RefNannySetupContext("__getbuffer__", 0);
+ if (__pyx_v_info != NULL) {
+ __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None);
+ __Pyx_GIVEREF(__pyx_v_info->obj);
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":220
+ * # of flags
+ *
+ * if info == NULL: return # <<<<<<<<<<<<<<
+ *
+ * cdef int copy_shape, i, ndim
+ */
+ __pyx_t_1 = ((__pyx_v_info == NULL) != 0);
+ if (__pyx_t_1) {
+ __pyx_r = 0;
+ goto __pyx_L0;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":223
+ *
+ * cdef int copy_shape, i, ndim
+ * cdef int endian_detector = 1 # <<<<<<<<<<<<<<
+ * cdef bint little_endian = ((&endian_detector)[0] != 0)
+ *
+ */
+ __pyx_v_endian_detector = 1;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":224
+ * cdef int copy_shape, i, ndim
+ * cdef int endian_detector = 1
+ * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<<
+ *
+ * ndim = PyArray_NDIM(self)
+ */
+ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":226
+ * cdef bint little_endian = ((&endian_detector)[0] != 0)
+ *
+ * ndim = PyArray_NDIM(self) # <<<<<<<<<<<<<<
+ *
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ */
+ __pyx_v_ndim = PyArray_NDIM(__pyx_v_self);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":228
+ * ndim = PyArray_NDIM(self)
+ *
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * copy_shape = 1
+ * else:
+ */
+ __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":229
+ *
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ * copy_shape = 1 # <<<<<<<<<<<<<<
+ * else:
+ * copy_shape = 0
+ */
+ __pyx_v_copy_shape = 1;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":228
+ * ndim = PyArray_NDIM(self)
+ *
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * copy_shape = 1
+ * else:
+ */
+ goto __pyx_L4;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":231
+ * copy_shape = 1
+ * else:
+ * copy_shape = 0 # <<<<<<<<<<<<<<
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ */
+ /*else*/ {
+ __pyx_v_copy_shape = 0;
+ }
+ __pyx_L4:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":233
+ * copy_shape = 0
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ __pyx_t_2 = (((__pyx_v_flags & PyBUF_C_CONTIGUOUS) == PyBUF_C_CONTIGUOUS) != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L6_bool_binop_done;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":234
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ */
+ __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_C_CONTIGUOUS) != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L6_bool_binop_done:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":233
+ * copy_shape = 0
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":235
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous") # <<<<<<<<<<<<<<
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 235, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 235, __pyx_L1_error)
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":233
+ * copy_shape = 0
+ *
+ * if ((flags & pybuf.PyBUF_C_CONTIGUOUS == pybuf.PyBUF_C_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_C_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not C contiguous")
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":237
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ __pyx_t_2 = (((__pyx_v_flags & PyBUF_F_CONTIGUOUS) == PyBUF_F_CONTIGUOUS) != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L9_bool_binop_done;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":238
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ *
+ */
+ __pyx_t_2 = ((!(PyArray_CHKFLAGS(__pyx_v_self, NPY_F_CONTIGUOUS) != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L9_bool_binop_done:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":237
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":239
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS)
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous") # <<<<<<<<<<<<<<
+ *
+ * info.buf = PyArray_DATA(self)
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 239, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 239, __pyx_L1_error)
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":237
+ * raise ValueError(u"ndarray is not C contiguous")
+ *
+ * if ((flags & pybuf.PyBUF_F_CONTIGUOUS == pybuf.PyBUF_F_CONTIGUOUS) # <<<<<<<<<<<<<<
+ * and not PyArray_CHKFLAGS(self, NPY_F_CONTIGUOUS)):
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":241
+ * raise ValueError(u"ndarray is not Fortran contiguous")
+ *
+ * info.buf = PyArray_DATA(self) # <<<<<<<<<<<<<<
+ * info.ndim = ndim
+ * if copy_shape:
+ */
+ __pyx_v_info->buf = PyArray_DATA(__pyx_v_self);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":242
+ *
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim # <<<<<<<<<<<<<<
+ * if copy_shape:
+ * # Allocate new buffer for strides and shape info.
+ */
+ __pyx_v_info->ndim = __pyx_v_ndim;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":243
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim
+ * if copy_shape: # <<<<<<<<<<<<<<
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ */
+ __pyx_t_1 = (__pyx_v_copy_shape != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":246
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim) # <<<<<<<<<<<<<<
+ * info.shape = info.strides + ndim
+ * for i in range(ndim):
+ */
+ __pyx_v_info->strides = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * 2) * ((size_t)__pyx_v_ndim))));
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":247
+ * # This is allocated as one block, strides first.
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim)
+ * info.shape = info.strides + ndim # <<<<<<<<<<<<<<
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ */
+ __pyx_v_info->shape = (__pyx_v_info->strides + __pyx_v_ndim);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":248
+ * info.strides = PyObject_Malloc(sizeof(Py_ssize_t) * 2 * ndim)
+ * info.shape = info.strides + ndim
+ * for i in range(ndim): # <<<<<<<<<<<<<<
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ */
+ __pyx_t_4 = __pyx_v_ndim;
+ for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) {
+ __pyx_v_i = __pyx_t_5;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":249
+ * info.shape = info.strides + ndim
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i] # <<<<<<<<<<<<<<
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ * else:
+ */
+ (__pyx_v_info->strides[__pyx_v_i]) = (PyArray_STRIDES(__pyx_v_self)[__pyx_v_i]);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":250
+ * for i in range(ndim):
+ * info.strides[i] = PyArray_STRIDES(self)[i]
+ * info.shape[i] = PyArray_DIMS(self)[i] # <<<<<<<<<<<<<<
+ * else:
+ * info.strides = PyArray_STRIDES(self)
+ */
+ (__pyx_v_info->shape[__pyx_v_i]) = (PyArray_DIMS(__pyx_v_self)[__pyx_v_i]);
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":243
+ * info.buf = PyArray_DATA(self)
+ * info.ndim = ndim
+ * if copy_shape: # <<<<<<<<<<<<<<
+ * # Allocate new buffer for strides and shape info.
+ * # This is allocated as one block, strides first.
+ */
+ goto __pyx_L11;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":252
+ * info.shape[i] = PyArray_DIMS(self)[i]
+ * else:
+ * info.strides = PyArray_STRIDES(self) # <<<<<<<<<<<<<<
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL
+ */
+ /*else*/ {
+ __pyx_v_info->strides = ((Py_ssize_t *)PyArray_STRIDES(__pyx_v_self));
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":253
+ * else:
+ * info.strides = PyArray_STRIDES(self)
+ * info.shape = PyArray_DIMS(self) # <<<<<<<<<<<<<<
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ */
+ __pyx_v_info->shape = ((Py_ssize_t *)PyArray_DIMS(__pyx_v_self));
+ }
+ __pyx_L11:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":254
+ * info.strides = PyArray_STRIDES(self)
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL # <<<<<<<<<<<<<<
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ * info.readonly = not PyArray_ISWRITEABLE(self)
+ */
+ __pyx_v_info->suboffsets = NULL;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":255
+ * info.shape = PyArray_DIMS(self)
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self) # <<<<<<<<<<<<<<
+ * info.readonly = not PyArray_ISWRITEABLE(self)
+ *
+ */
+ __pyx_v_info->itemsize = PyArray_ITEMSIZE(__pyx_v_self);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":256
+ * info.suboffsets = NULL
+ * info.itemsize = PyArray_ITEMSIZE(self)
+ * info.readonly = not PyArray_ISWRITEABLE(self) # <<<<<<<<<<<<<<
+ *
+ * cdef int t
+ */
+ __pyx_v_info->readonly = (!(PyArray_ISWRITEABLE(__pyx_v_self) != 0));
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":259
+ *
+ * cdef int t
+ * cdef char* f = NULL # <<<<<<<<<<<<<<
+ * cdef dtype descr = self.descr
+ * cdef int offset
+ */
+ __pyx_v_f = NULL;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":260
+ * cdef int t
+ * cdef char* f = NULL
+ * cdef dtype descr = self.descr # <<<<<<<<<<<<<<
+ * cdef int offset
+ *
+ */
+ __pyx_t_3 = ((PyObject *)__pyx_v_self->descr);
+ __Pyx_INCREF(__pyx_t_3);
+ __pyx_v_descr = ((PyArray_Descr *)__pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":263
+ * cdef int offset
+ *
+ * cdef bint hasfields = PyDataType_HASFIELDS(descr) # <<<<<<<<<<<<<<
+ *
+ * if not hasfields and not copy_shape:
+ */
+ __pyx_v_hasfields = PyDataType_HASFIELDS(__pyx_v_descr);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":265
+ * cdef bint hasfields = PyDataType_HASFIELDS(descr)
+ *
+ * if not hasfields and not copy_shape: # <<<<<<<<<<<<<<
+ * # do not call releasebuffer
+ * info.obj = None
+ */
+ __pyx_t_2 = ((!(__pyx_v_hasfields != 0)) != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L15_bool_binop_done;
+ }
+ __pyx_t_2 = ((!(__pyx_v_copy_shape != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L15_bool_binop_done:;
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":267
+ * if not hasfields and not copy_shape:
+ * # do not call releasebuffer
+ * info.obj = None # <<<<<<<<<<<<<<
+ * else:
+ * # need to call releasebuffer
+ */
+ __Pyx_INCREF(Py_None);
+ __Pyx_GIVEREF(Py_None);
+ __Pyx_GOTREF(__pyx_v_info->obj);
+ __Pyx_DECREF(__pyx_v_info->obj);
+ __pyx_v_info->obj = Py_None;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":265
+ * cdef bint hasfields = PyDataType_HASFIELDS(descr)
+ *
+ * if not hasfields and not copy_shape: # <<<<<<<<<<<<<<
+ * # do not call releasebuffer
+ * info.obj = None
+ */
+ goto __pyx_L14;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":270
+ * else:
+ * # need to call releasebuffer
+ * info.obj = self # <<<<<<<<<<<<<<
+ *
+ * if not hasfields:
+ */
+ /*else*/ {
+ __Pyx_INCREF(((PyObject *)__pyx_v_self));
+ __Pyx_GIVEREF(((PyObject *)__pyx_v_self));
+ __Pyx_GOTREF(__pyx_v_info->obj);
+ __Pyx_DECREF(__pyx_v_info->obj);
+ __pyx_v_info->obj = ((PyObject *)__pyx_v_self);
+ }
+ __pyx_L14:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":272
+ * info.obj = self
+ *
+ * if not hasfields: # <<<<<<<<<<<<<<
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ */
+ __pyx_t_1 = ((!(__pyx_v_hasfields != 0)) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":273
+ *
+ * if not hasfields:
+ * t = descr.type_num # <<<<<<<<<<<<<<
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)):
+ */
+ __pyx_t_4 = __pyx_v_descr->type_num;
+ __pyx_v_t = __pyx_t_4;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":274
+ * if not hasfields:
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ __pyx_t_2 = ((__pyx_v_descr->byteorder == '>') != 0);
+ if (!__pyx_t_2) {
+ goto __pyx_L20_next_or;
+ } else {
+ }
+ __pyx_t_2 = (__pyx_v_little_endian != 0);
+ if (!__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L19_bool_binop_done;
+ }
+ __pyx_L20_next_or:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":275
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b"
+ */
+ __pyx_t_2 = ((__pyx_v_descr->byteorder == '<') != 0);
+ if (__pyx_t_2) {
+ } else {
+ __pyx_t_1 = __pyx_t_2;
+ goto __pyx_L19_bool_binop_done;
+ }
+ __pyx_t_2 = ((!(__pyx_v_little_endian != 0)) != 0);
+ __pyx_t_1 = __pyx_t_2;
+ __pyx_L19_bool_binop_done:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":274
+ * if not hasfields:
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":276
+ * if ((descr.byteorder == c'>' and little_endian) or
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<<
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B"
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 276, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 276, __pyx_L1_error)
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":274
+ * if not hasfields:
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":277
+ * (descr.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b" # <<<<<<<<<<<<<<
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h"
+ */
+ switch (__pyx_v_t) {
+ case NPY_BYTE:
+ __pyx_v_f = ((char *)"b");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":278
+ * raise ValueError(u"Non-native byte order not supported")
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B" # <<<<<<<<<<<<<<
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H"
+ */
+ case NPY_UBYTE:
+ __pyx_v_f = ((char *)"B");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":279
+ * if t == NPY_BYTE: f = "b"
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h" # <<<<<<<<<<<<<<
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i"
+ */
+ case NPY_SHORT:
+ __pyx_v_f = ((char *)"h");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":280
+ * elif t == NPY_UBYTE: f = "B"
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H" # <<<<<<<<<<<<<<
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I"
+ */
+ case NPY_USHORT:
+ __pyx_v_f = ((char *)"H");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":281
+ * elif t == NPY_SHORT: f = "h"
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i" # <<<<<<<<<<<<<<
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l"
+ */
+ case NPY_INT:
+ __pyx_v_f = ((char *)"i");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":282
+ * elif t == NPY_USHORT: f = "H"
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L"
+ */
+ case NPY_UINT:
+ __pyx_v_f = ((char *)"I");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":283
+ * elif t == NPY_INT: f = "i"
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q"
+ */
+ case NPY_LONG:
+ __pyx_v_f = ((char *)"l");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":284
+ * elif t == NPY_UINT: f = "I"
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q"
+ */
+ case NPY_ULONG:
+ __pyx_v_f = ((char *)"L");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":285
+ * elif t == NPY_LONG: f = "l"
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f"
+ */
+ case NPY_LONGLONG:
+ __pyx_v_f = ((char *)"q");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":286
+ * elif t == NPY_ULONG: f = "L"
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q" # <<<<<<<<<<<<<<
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d"
+ */
+ case NPY_ULONGLONG:
+ __pyx_v_f = ((char *)"Q");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":287
+ * elif t == NPY_LONGLONG: f = "q"
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f" # <<<<<<<<<<<<<<
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ */
+ case NPY_FLOAT:
+ __pyx_v_f = ((char *)"f");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":288
+ * elif t == NPY_ULONGLONG: f = "Q"
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf"
+ */
+ case NPY_DOUBLE:
+ __pyx_v_f = ((char *)"d");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":289
+ * elif t == NPY_FLOAT: f = "f"
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g" # <<<<<<<<<<<<<<
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ */
+ case NPY_LONGDOUBLE:
+ __pyx_v_f = ((char *)"g");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":290
+ * elif t == NPY_DOUBLE: f = "d"
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf" # <<<<<<<<<<<<<<
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ */
+ case NPY_CFLOAT:
+ __pyx_v_f = ((char *)"Zf");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":291
+ * elif t == NPY_LONGDOUBLE: f = "g"
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd" # <<<<<<<<<<<<<<
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ * elif t == NPY_OBJECT: f = "O"
+ */
+ case NPY_CDOUBLE:
+ __pyx_v_f = ((char *)"Zd");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":292
+ * elif t == NPY_CFLOAT: f = "Zf"
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg" # <<<<<<<<<<<<<<
+ * elif t == NPY_OBJECT: f = "O"
+ * else:
+ */
+ case NPY_CLONGDOUBLE:
+ __pyx_v_f = ((char *)"Zg");
+ break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":293
+ * elif t == NPY_CDOUBLE: f = "Zd"
+ * elif t == NPY_CLONGDOUBLE: f = "Zg"
+ * elif t == NPY_OBJECT: f = "O" # <<<<<<<<<<<<<<
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ */
+ case NPY_OBJECT:
+ __pyx_v_f = ((char *)"O");
+ break;
+ default:
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":295
+ * elif t == NPY_OBJECT: f = "O"
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<<
+ * info.format = f
+ * return
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_6 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_t_3); if (unlikely(!__pyx_t_6)) __PYX_ERR(2, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_GIVEREF(__pyx_t_6);
+ PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_6);
+ __pyx_t_6 = 0;
+ __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_3, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(2, 295, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_6);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __Pyx_Raise(__pyx_t_6, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0;
+ __PYX_ERR(2, 295, __pyx_L1_error)
+ break;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":296
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ * info.format = f # <<<<<<<<<<<<<<
+ * return
+ * else:
+ */
+ __pyx_v_info->format = __pyx_v_f;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":297
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ * info.format = f
+ * return # <<<<<<<<<<<<<<
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ */
+ __pyx_r = 0;
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":272
+ * info.obj = self
+ *
+ * if not hasfields: # <<<<<<<<<<<<<<
+ * t = descr.type_num
+ * if ((descr.byteorder == c'>' and little_endian) or
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":299
+ * return
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len) # <<<<<<<<<<<<<<
+ * info.format[0] = c'^' # Native data types, manual alignment
+ * offset = 0
+ */
+ /*else*/ {
+ __pyx_v_info->format = ((char *)PyObject_Malloc(0xFF));
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":300
+ * else:
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ * info.format[0] = c'^' # Native data types, manual alignment # <<<<<<<<<<<<<<
+ * offset = 0
+ * f = _util_dtypestring(descr, info.format + 1,
+ */
+ (__pyx_v_info->format[0]) = '^';
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":301
+ * info.format = PyObject_Malloc(_buffer_format_string_len)
+ * info.format[0] = c'^' # Native data types, manual alignment
+ * offset = 0 # <<<<<<<<<<<<<<
+ * f = _util_dtypestring(descr, info.format + 1,
+ * info.format + _buffer_format_string_len,
+ */
+ __pyx_v_offset = 0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":302
+ * info.format[0] = c'^' # Native data types, manual alignment
+ * offset = 0
+ * f = _util_dtypestring(descr, info.format + 1, # <<<<<<<<<<<<<<
+ * info.format + _buffer_format_string_len,
+ * &offset)
+ */
+ __pyx_t_7 = __pyx_f_5numpy__util_dtypestring(__pyx_v_descr, (__pyx_v_info->format + 1), (__pyx_v_info->format + 0xFF), (&__pyx_v_offset)); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(2, 302, __pyx_L1_error)
+ __pyx_v_f = __pyx_t_7;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":305
+ * info.format + _buffer_format_string_len,
+ * &offset)
+ * f[0] = c'\0' # Terminate format string # <<<<<<<<<<<<<<
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ */
+ (__pyx_v_f[0]) = '\x00';
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":214
+ * # experimental exception made for __getbuffer__ and __releasebuffer__
+ * # -- the details of this may change.
+ * def __getbuffer__(ndarray self, Py_buffer* info, int flags): # <<<<<<<<<<<<<<
+ * # This implementation of getbuffer is geared towards Cython
+ * # requirements, and does not yet fulfill the PEP.
+ */
+
+ /* function exit code */
+ __pyx_r = 0;
+ goto __pyx_L0;
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_6);
+ __Pyx_AddTraceback("numpy.ndarray.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = -1;
+ if (__pyx_v_info != NULL && __pyx_v_info->obj != NULL) {
+ __Pyx_GOTREF(__pyx_v_info->obj);
+ __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = NULL;
+ }
+ goto __pyx_L2;
+ __pyx_L0:;
+ if (__pyx_v_info != NULL && __pyx_v_info->obj == Py_None) {
+ __Pyx_GOTREF(Py_None);
+ __Pyx_DECREF(Py_None); __pyx_v_info->obj = NULL;
+ }
+ __pyx_L2:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_descr);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":307
+ * f[0] = c'\0' # Terminate format string
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<<
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ */
+
+/* Python wrapper */
+static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info); /*proto*/
+static CYTHON_UNUSED void __pyx_pw_5numpy_7ndarray_3__releasebuffer__(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info) {
+ __Pyx_RefNannyDeclarations
+ __Pyx_RefNannySetupContext("__releasebuffer__ (wrapper)", 0);
+ __pyx_pf_5numpy_7ndarray_2__releasebuffer__(((PyArrayObject *)__pyx_v_self), ((Py_buffer *)__pyx_v_info));
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+static void __pyx_pf_5numpy_7ndarray_2__releasebuffer__(PyArrayObject *__pyx_v_self, Py_buffer *__pyx_v_info) {
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ __Pyx_RefNannySetupContext("__releasebuffer__", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":308
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ */
+ __pyx_t_1 = (PyArray_HASFIELDS(__pyx_v_self) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":309
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format) # <<<<<<<<<<<<<<
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ * PyObject_Free(info.strides)
+ */
+ PyObject_Free(__pyx_v_info->format);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":308
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info):
+ * if PyArray_HASFIELDS(self): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":310
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.strides)
+ * # info.shape was stored after info.strides in the same block
+ */
+ __pyx_t_1 = (((sizeof(npy_intp)) != (sizeof(Py_ssize_t))) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":311
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t):
+ * PyObject_Free(info.strides) # <<<<<<<<<<<<<<
+ * # info.shape was stored after info.strides in the same block
+ *
+ */
+ PyObject_Free(__pyx_v_info->strides);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":310
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ * if sizeof(npy_intp) != sizeof(Py_ssize_t): # <<<<<<<<<<<<<<
+ * PyObject_Free(info.strides)
+ * # info.shape was stored after info.strides in the same block
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":307
+ * f[0] = c'\0' # Terminate format string
+ *
+ * def __releasebuffer__(ndarray self, Py_buffer* info): # <<<<<<<<<<<<<<
+ * if PyArray_HASFIELDS(self):
+ * PyObject_Free(info.format)
+ */
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":788
+ * ctypedef npy_cdouble complex_t
+ *
+ * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(1, a)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew1(PyObject *__pyx_v_a) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew1", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":789
+ *
+ * cdef inline object PyArray_MultiIterNew1(a):
+ * return PyArray_MultiIterNew(1, a) # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(1, ((void *)__pyx_v_a)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 789, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":788
+ * ctypedef npy_cdouble complex_t
+ *
+ * cdef inline object PyArray_MultiIterNew1(a): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(1, a)
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("numpy.PyArray_MultiIterNew1", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = 0;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":791
+ * return PyArray_MultiIterNew(1, a)
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew2(PyObject *__pyx_v_a, PyObject *__pyx_v_b) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew2", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":792
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b):
+ * return PyArray_MultiIterNew(2, a, b) # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(2, ((void *)__pyx_v_a), ((void *)__pyx_v_b)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 792, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":791
+ * return PyArray_MultiIterNew(1, a)
+ *
+ * cdef inline object PyArray_MultiIterNew2(a, b): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("numpy.PyArray_MultiIterNew2", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = 0;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":794
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(3, a, b, c)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew3(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew3", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":795
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c):
+ * return PyArray_MultiIterNew(3, a, b, c) # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew4(a, b, c, d):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(3, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 795, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":794
+ * return PyArray_MultiIterNew(2, a, b)
+ *
+ * cdef inline object PyArray_MultiIterNew3(a, b, c): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(3, a, b, c)
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("numpy.PyArray_MultiIterNew3", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = 0;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":797
+ * return PyArray_MultiIterNew(3, a, b, c)
+ *
+ * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(4, a, b, c, d)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew4(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew4", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":798
+ *
+ * cdef inline object PyArray_MultiIterNew4(a, b, c, d):
+ * return PyArray_MultiIterNew(4, a, b, c, d) # <<<<<<<<<<<<<<
+ *
+ * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(4, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 798, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":797
+ * return PyArray_MultiIterNew(3, a, b, c)
+ *
+ * cdef inline object PyArray_MultiIterNew4(a, b, c, d): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(4, a, b, c, d)
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("numpy.PyArray_MultiIterNew4", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = 0;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":800
+ * return PyArray_MultiIterNew(4, a, b, c, d)
+ *
+ * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(5, a, b, c, d, e)
+ *
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyArray_MultiIterNew5(PyObject *__pyx_v_a, PyObject *__pyx_v_b, PyObject *__pyx_v_c, PyObject *__pyx_v_d, PyObject *__pyx_v_e) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ __Pyx_RefNannySetupContext("PyArray_MultiIterNew5", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":801
+ *
+ * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e):
+ * return PyArray_MultiIterNew(5, a, b, c, d, e) # <<<<<<<<<<<<<<
+ *
+ * cdef inline tuple PyDataType_SHAPE(dtype d):
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __pyx_t_1 = PyArray_MultiIterNew(5, ((void *)__pyx_v_a), ((void *)__pyx_v_b), ((void *)__pyx_v_c), ((void *)__pyx_v_d), ((void *)__pyx_v_e)); if (unlikely(!__pyx_t_1)) __PYX_ERR(2, 801, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_1);
+ __pyx_r = __pyx_t_1;
+ __pyx_t_1 = 0;
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":800
+ * return PyArray_MultiIterNew(4, a, b, c, d)
+ *
+ * cdef inline object PyArray_MultiIterNew5(a, b, c, d, e): # <<<<<<<<<<<<<<
+ * return PyArray_MultiIterNew(5, a, b, c, d, e)
+ *
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_AddTraceback("numpy.PyArray_MultiIterNew5", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = 0;
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":803
+ * return PyArray_MultiIterNew(5, a, b, c, d, e)
+ *
+ * cdef inline tuple PyDataType_SHAPE(dtype d): # <<<<<<<<<<<<<<
+ * if PyDataType_HASSUBARRAY(d):
+ * return d.subarray.shape
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_PyDataType_SHAPE(PyArray_Descr *__pyx_v_d) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ __Pyx_RefNannySetupContext("PyDataType_SHAPE", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":804
+ *
+ * cdef inline tuple PyDataType_SHAPE(dtype d):
+ * if PyDataType_HASSUBARRAY(d): # <<<<<<<<<<<<<<
+ * return d.subarray.shape
+ * else:
+ */
+ __pyx_t_1 = (PyDataType_HASSUBARRAY(__pyx_v_d) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":805
+ * cdef inline tuple PyDataType_SHAPE(dtype d):
+ * if PyDataType_HASSUBARRAY(d):
+ * return d.subarray.shape # <<<<<<<<<<<<<<
+ * else:
+ * return ()
+ */
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(((PyObject*)__pyx_v_d->subarray->shape));
+ __pyx_r = ((PyObject*)__pyx_v_d->subarray->shape);
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":804
+ *
+ * cdef inline tuple PyDataType_SHAPE(dtype d):
+ * if PyDataType_HASSUBARRAY(d): # <<<<<<<<<<<<<<
+ * return d.subarray.shape
+ * else:
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":807
+ * return d.subarray.shape
+ * else:
+ * return () # <<<<<<<<<<<<<<
+ *
+ * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL:
+ */
+ /*else*/ {
+ __Pyx_XDECREF(__pyx_r);
+ __Pyx_INCREF(__pyx_empty_tuple);
+ __pyx_r = __pyx_empty_tuple;
+ goto __pyx_L0;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":803
+ * return PyArray_MultiIterNew(5, a, b, c, d, e)
+ *
+ * cdef inline tuple PyDataType_SHAPE(dtype d): # <<<<<<<<<<<<<<
+ * if PyDataType_HASSUBARRAY(d):
+ * return d.subarray.shape
+ */
+
+ /* function exit code */
+ __pyx_L0:;
+ __Pyx_XGIVEREF(__pyx_r);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":809
+ * return ()
+ *
+ * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<<
+ * # Recursive utility function used in __getbuffer__ to get format
+ * # string. The new location in the format string is returned.
+ */
+
+static CYTHON_INLINE char *__pyx_f_5numpy__util_dtypestring(PyArray_Descr *__pyx_v_descr, char *__pyx_v_f, char *__pyx_v_end, int *__pyx_v_offset) {
+ PyArray_Descr *__pyx_v_child = 0;
+ int __pyx_v_endian_detector;
+ int __pyx_v_little_endian;
+ PyObject *__pyx_v_fields = 0;
+ PyObject *__pyx_v_childname = NULL;
+ PyObject *__pyx_v_new_offset = NULL;
+ PyObject *__pyx_v_t = NULL;
+ char *__pyx_r;
+ __Pyx_RefNannyDeclarations
+ PyObject *__pyx_t_1 = NULL;
+ Py_ssize_t __pyx_t_2;
+ PyObject *__pyx_t_3 = NULL;
+ PyObject *__pyx_t_4 = NULL;
+ int __pyx_t_5;
+ int __pyx_t_6;
+ int __pyx_t_7;
+ long __pyx_t_8;
+ char *__pyx_t_9;
+ __Pyx_RefNannySetupContext("_util_dtypestring", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":814
+ *
+ * cdef dtype child
+ * cdef int endian_detector = 1 # <<<<<<<<<<<<<<
+ * cdef bint little_endian = ((&endian_detector)[0] != 0)
+ * cdef tuple fields
+ */
+ __pyx_v_endian_detector = 1;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":815
+ * cdef dtype child
+ * cdef int endian_detector = 1
+ * cdef bint little_endian = ((&endian_detector)[0] != 0) # <<<<<<<<<<<<<<
+ * cdef tuple fields
+ *
+ */
+ __pyx_v_little_endian = ((((char *)(&__pyx_v_endian_detector))[0]) != 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":818
+ * cdef tuple fields
+ *
+ * for childname in descr.names: # <<<<<<<<<<<<<<
+ * fields = descr.fields[childname]
+ * child, new_offset = fields
+ */
+ if (unlikely(__pyx_v_descr->names == Py_None)) {
+ PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable");
+ __PYX_ERR(2, 818, __pyx_L1_error)
+ }
+ __pyx_t_1 = __pyx_v_descr->names; __Pyx_INCREF(__pyx_t_1); __pyx_t_2 = 0;
+ for (;;) {
+ if (__pyx_t_2 >= PyTuple_GET_SIZE(__pyx_t_1)) break;
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_3 = PyTuple_GET_ITEM(__pyx_t_1, __pyx_t_2); __Pyx_INCREF(__pyx_t_3); __pyx_t_2++; if (unlikely(0 < 0)) __PYX_ERR(2, 818, __pyx_L1_error)
+ #else
+ __pyx_t_3 = PySequence_ITEM(__pyx_t_1, __pyx_t_2); __pyx_t_2++; if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 818, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ #endif
+ __Pyx_XDECREF_SET(__pyx_v_childname, __pyx_t_3);
+ __pyx_t_3 = 0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":819
+ *
+ * for childname in descr.names:
+ * fields = descr.fields[childname] # <<<<<<<<<<<<<<
+ * child, new_offset = fields
+ *
+ */
+ if (unlikely(__pyx_v_descr->fields == Py_None)) {
+ PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable");
+ __PYX_ERR(2, 819, __pyx_L1_error)
+ }
+ __pyx_t_3 = __Pyx_PyDict_GetItem(__pyx_v_descr->fields, __pyx_v_childname); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 819, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ if (!(likely(PyTuple_CheckExact(__pyx_t_3))||((__pyx_t_3) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_t_3)->tp_name), 0))) __PYX_ERR(2, 819, __pyx_L1_error)
+ __Pyx_XDECREF_SET(__pyx_v_fields, ((PyObject*)__pyx_t_3));
+ __pyx_t_3 = 0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":820
+ * for childname in descr.names:
+ * fields = descr.fields[childname]
+ * child, new_offset = fields # <<<<<<<<<<<<<<
+ *
+ * if (end - f) - (new_offset - offset[0]) < 15:
+ */
+ if (likely(__pyx_v_fields != Py_None)) {
+ PyObject* sequence = __pyx_v_fields;
+ #if !CYTHON_COMPILING_IN_PYPY
+ Py_ssize_t size = Py_SIZE(sequence);
+ #else
+ Py_ssize_t size = PySequence_Size(sequence);
+ #endif
+ if (unlikely(size != 2)) {
+ if (size > 2) __Pyx_RaiseTooManyValuesError(2);
+ else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size);
+ __PYX_ERR(2, 820, __pyx_L1_error)
+ }
+ #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS
+ __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0);
+ __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1);
+ __Pyx_INCREF(__pyx_t_3);
+ __Pyx_INCREF(__pyx_t_4);
+ #else
+ __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 820, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 820, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ #endif
+ } else {
+ __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(2, 820, __pyx_L1_error)
+ }
+ if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_ptype_5numpy_dtype))))) __PYX_ERR(2, 820, __pyx_L1_error)
+ __Pyx_XDECREF_SET(__pyx_v_child, ((PyArray_Descr *)__pyx_t_3));
+ __pyx_t_3 = 0;
+ __Pyx_XDECREF_SET(__pyx_v_new_offset, __pyx_t_4);
+ __pyx_t_4 = 0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":822
+ * child, new_offset = fields
+ *
+ * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<<
+ * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd")
+ *
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 822, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyNumber_Subtract(__pyx_v_new_offset, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 822, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(2, 822, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = ((((__pyx_v_end - __pyx_v_f) - ((int)__pyx_t_5)) < 15) != 0);
+ if (__pyx_t_6) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":823
+ *
+ * if (end - f) - (new_offset - offset[0]) < 15:
+ * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd") # <<<<<<<<<<<<<<
+ *
+ * if ((child.byteorder == c'>' and little_endian) or
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__13, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 823, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 823, __pyx_L1_error)
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":822
+ * child, new_offset = fields
+ *
+ * if (end - f) - (new_offset - offset[0]) < 15: # <<<<<<<<<<<<<<
+ * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd")
+ *
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":825
+ * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd")
+ *
+ * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (child.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ __pyx_t_7 = ((__pyx_v_child->byteorder == '>') != 0);
+ if (!__pyx_t_7) {
+ goto __pyx_L8_next_or;
+ } else {
+ }
+ __pyx_t_7 = (__pyx_v_little_endian != 0);
+ if (!__pyx_t_7) {
+ } else {
+ __pyx_t_6 = __pyx_t_7;
+ goto __pyx_L7_bool_binop_done;
+ }
+ __pyx_L8_next_or:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":826
+ *
+ * if ((child.byteorder == c'>' and little_endian) or
+ * (child.byteorder == c'<' and not little_endian)): # <<<<<<<<<<<<<<
+ * raise ValueError(u"Non-native byte order not supported")
+ * # One could encode it in the format string and have Cython
+ */
+ __pyx_t_7 = ((__pyx_v_child->byteorder == '<') != 0);
+ if (__pyx_t_7) {
+ } else {
+ __pyx_t_6 = __pyx_t_7;
+ goto __pyx_L7_bool_binop_done;
+ }
+ __pyx_t_7 = ((!(__pyx_v_little_endian != 0)) != 0);
+ __pyx_t_6 = __pyx_t_7;
+ __pyx_L7_bool_binop_done:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":825
+ * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd")
+ *
+ * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (child.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ if (__pyx_t_6) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":827
+ * if ((child.byteorder == c'>' and little_endian) or
+ * (child.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported") # <<<<<<<<<<<<<<
+ * # One could encode it in the format string and have Cython
+ * # complain instead, BUT: < and > in format strings also imply
+ */
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 827, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 827, __pyx_L1_error)
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":825
+ * raise RuntimeError(u"Format string allocated too short, see comment in numpy.pxd")
+ *
+ * if ((child.byteorder == c'>' and little_endian) or # <<<<<<<<<<<<<<
+ * (child.byteorder == c'<' and not little_endian)):
+ * raise ValueError(u"Non-native byte order not supported")
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":837
+ *
+ * # Output padding bytes
+ * while offset[0] < new_offset: # <<<<<<<<<<<<<<
+ * f[0] = 120 # "x"; pad byte
+ * f += 1
+ */
+ while (1) {
+ __pyx_t_3 = __Pyx_PyInt_From_int((__pyx_v_offset[0])); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 837, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_t_3, __pyx_v_new_offset, Py_LT); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 837, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 837, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (!__pyx_t_6) break;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":838
+ * # Output padding bytes
+ * while offset[0] < new_offset:
+ * f[0] = 120 # "x"; pad byte # <<<<<<<<<<<<<<
+ * f += 1
+ * offset[0] += 1
+ */
+ (__pyx_v_f[0]) = 0x78;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":839
+ * while offset[0] < new_offset:
+ * f[0] = 120 # "x"; pad byte
+ * f += 1 # <<<<<<<<<<<<<<
+ * offset[0] += 1
+ *
+ */
+ __pyx_v_f = (__pyx_v_f + 1);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":840
+ * f[0] = 120 # "x"; pad byte
+ * f += 1
+ * offset[0] += 1 # <<<<<<<<<<<<<<
+ *
+ * offset[0] += child.itemsize
+ */
+ __pyx_t_8 = 0;
+ (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + 1);
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":842
+ * offset[0] += 1
+ *
+ * offset[0] += child.itemsize # <<<<<<<<<<<<<<
+ *
+ * if not PyDataType_HASFIELDS(child):
+ */
+ __pyx_t_8 = 0;
+ (__pyx_v_offset[__pyx_t_8]) = ((__pyx_v_offset[__pyx_t_8]) + __pyx_v_child->elsize);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":844
+ * offset[0] += child.itemsize
+ *
+ * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<<
+ * t = child.type_num
+ * if end - f < 5:
+ */
+ __pyx_t_6 = ((!(PyDataType_HASFIELDS(__pyx_v_child) != 0)) != 0);
+ if (__pyx_t_6) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":845
+ *
+ * if not PyDataType_HASFIELDS(child):
+ * t = child.type_num # <<<<<<<<<<<<<<
+ * if end - f < 5:
+ * raise RuntimeError(u"Format string allocated too short.")
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_int(__pyx_v_child->type_num); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 845, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_XDECREF_SET(__pyx_v_t, __pyx_t_4);
+ __pyx_t_4 = 0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":846
+ * if not PyDataType_HASFIELDS(child):
+ * t = child.type_num
+ * if end - f < 5: # <<<<<<<<<<<<<<
+ * raise RuntimeError(u"Format string allocated too short.")
+ *
+ */
+ __pyx_t_6 = (((__pyx_v_end - __pyx_v_f) < 5) != 0);
+ if (__pyx_t_6) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":847
+ * t = child.type_num
+ * if end - f < 5:
+ * raise RuntimeError(u"Format string allocated too short.") # <<<<<<<<<<<<<<
+ *
+ * # Until ticket #99 is fixed, use integers to avoid warnings
+ */
+ __pyx_t_4 = __Pyx_PyObject_Call(__pyx_builtin_RuntimeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 847, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_Raise(__pyx_t_4, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __PYX_ERR(2, 847, __pyx_L1_error)
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":846
+ * if not PyDataType_HASFIELDS(child):
+ * t = child.type_num
+ * if end - f < 5: # <<<<<<<<<<<<<<
+ * raise RuntimeError(u"Format string allocated too short.")
+ *
+ */
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":850
+ *
+ * # Until ticket #99 is fixed, use integers to avoid warnings
+ * if t == NPY_BYTE: f[0] = 98 #"b" # <<<<<<<<<<<<<<
+ * elif t == NPY_UBYTE: f[0] = 66 #"B"
+ * elif t == NPY_SHORT: f[0] = 104 #"h"
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_BYTE); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 850, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 850, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 850, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 98;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":851
+ * # Until ticket #99 is fixed, use integers to avoid warnings
+ * if t == NPY_BYTE: f[0] = 98 #"b"
+ * elif t == NPY_UBYTE: f[0] = 66 #"B" # <<<<<<<<<<<<<<
+ * elif t == NPY_SHORT: f[0] = 104 #"h"
+ * elif t == NPY_USHORT: f[0] = 72 #"H"
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UBYTE); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 851, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 851, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 851, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 66;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":852
+ * if t == NPY_BYTE: f[0] = 98 #"b"
+ * elif t == NPY_UBYTE: f[0] = 66 #"B"
+ * elif t == NPY_SHORT: f[0] = 104 #"h" # <<<<<<<<<<<<<<
+ * elif t == NPY_USHORT: f[0] = 72 #"H"
+ * elif t == NPY_INT: f[0] = 105 #"i"
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_SHORT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 852, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 852, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 852, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 0x68;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":853
+ * elif t == NPY_UBYTE: f[0] = 66 #"B"
+ * elif t == NPY_SHORT: f[0] = 104 #"h"
+ * elif t == NPY_USHORT: f[0] = 72 #"H" # <<<<<<<<<<<<<<
+ * elif t == NPY_INT: f[0] = 105 #"i"
+ * elif t == NPY_UINT: f[0] = 73 #"I"
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_USHORT); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 853, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 853, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 853, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 72;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":854
+ * elif t == NPY_SHORT: f[0] = 104 #"h"
+ * elif t == NPY_USHORT: f[0] = 72 #"H"
+ * elif t == NPY_INT: f[0] = 105 #"i" # <<<<<<<<<<<<<<
+ * elif t == NPY_UINT: f[0] = 73 #"I"
+ * elif t == NPY_LONG: f[0] = 108 #"l"
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_INT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 854, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 854, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 854, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 0x69;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":855
+ * elif t == NPY_USHORT: f[0] = 72 #"H"
+ * elif t == NPY_INT: f[0] = 105 #"i"
+ * elif t == NPY_UINT: f[0] = 73 #"I" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONG: f[0] = 108 #"l"
+ * elif t == NPY_ULONG: f[0] = 76 #"L"
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_UINT); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 855, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 855, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 855, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 73;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":856
+ * elif t == NPY_INT: f[0] = 105 #"i"
+ * elif t == NPY_UINT: f[0] = 73 #"I"
+ * elif t == NPY_LONG: f[0] = 108 #"l" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONG: f[0] = 76 #"L"
+ * elif t == NPY_LONGLONG: f[0] = 113 #"q"
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 856, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 856, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 856, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 0x6C;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":857
+ * elif t == NPY_UINT: f[0] = 73 #"I"
+ * elif t == NPY_LONG: f[0] = 108 #"l"
+ * elif t == NPY_ULONG: f[0] = 76 #"L" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGLONG: f[0] = 113 #"q"
+ * elif t == NPY_ULONGLONG: f[0] = 81 #"Q"
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 857, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 857, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 857, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 76;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":858
+ * elif t == NPY_LONG: f[0] = 108 #"l"
+ * elif t == NPY_ULONG: f[0] = 76 #"L"
+ * elif t == NPY_LONGLONG: f[0] = 113 #"q" # <<<<<<<<<<<<<<
+ * elif t == NPY_ULONGLONG: f[0] = 81 #"Q"
+ * elif t == NPY_FLOAT: f[0] = 102 #"f"
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGLONG); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 858, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 858, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 858, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 0x71;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":859
+ * elif t == NPY_ULONG: f[0] = 76 #"L"
+ * elif t == NPY_LONGLONG: f[0] = 113 #"q"
+ * elif t == NPY_ULONGLONG: f[0] = 81 #"Q" # <<<<<<<<<<<<<<
+ * elif t == NPY_FLOAT: f[0] = 102 #"f"
+ * elif t == NPY_DOUBLE: f[0] = 100 #"d"
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_ULONGLONG); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 859, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 859, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 859, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 81;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":860
+ * elif t == NPY_LONGLONG: f[0] = 113 #"q"
+ * elif t == NPY_ULONGLONG: f[0] = 81 #"Q"
+ * elif t == NPY_FLOAT: f[0] = 102 #"f" # <<<<<<<<<<<<<<
+ * elif t == NPY_DOUBLE: f[0] = 100 #"d"
+ * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g"
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_FLOAT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 860, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 860, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 860, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 0x66;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":861
+ * elif t == NPY_ULONGLONG: f[0] = 81 #"Q"
+ * elif t == NPY_FLOAT: f[0] = 102 #"f"
+ * elif t == NPY_DOUBLE: f[0] = 100 #"d" # <<<<<<<<<<<<<<
+ * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g"
+ * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_DOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 861, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 861, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 861, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 0x64;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":862
+ * elif t == NPY_FLOAT: f[0] = 102 #"f"
+ * elif t == NPY_DOUBLE: f[0] = 100 #"d"
+ * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g" # <<<<<<<<<<<<<<
+ * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf
+ * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_LONGDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 862, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 862, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 862, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 0x67;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":863
+ * elif t == NPY_DOUBLE: f[0] = 100 #"d"
+ * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g"
+ * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf # <<<<<<<<<<<<<<
+ * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd
+ * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CFLOAT); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 863, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 863, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 863, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 90;
+ (__pyx_v_f[1]) = 0x66;
+ __pyx_v_f = (__pyx_v_f + 1);
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":864
+ * elif t == NPY_LONGDOUBLE: f[0] = 103 #"g"
+ * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf
+ * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd # <<<<<<<<<<<<<<
+ * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg
+ * elif t == NPY_OBJECT: f[0] = 79 #"O"
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CDOUBLE); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 864, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 864, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 864, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 90;
+ (__pyx_v_f[1]) = 0x64;
+ __pyx_v_f = (__pyx_v_f + 1);
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":865
+ * elif t == NPY_CFLOAT: f[0] = 90; f[1] = 102; f += 1 # Zf
+ * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd
+ * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg # <<<<<<<<<<<<<<
+ * elif t == NPY_OBJECT: f[0] = 79 #"O"
+ * else:
+ */
+ __pyx_t_3 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_CLONGDOUBLE); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 865, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyObject_RichCompare(__pyx_v_t, __pyx_t_3, Py_EQ); __Pyx_XGOTREF(__pyx_t_4); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 865, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_4); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 865, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 90;
+ (__pyx_v_f[1]) = 0x67;
+ __pyx_v_f = (__pyx_v_f + 1);
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":866
+ * elif t == NPY_CDOUBLE: f[0] = 90; f[1] = 100; f += 1 # Zd
+ * elif t == NPY_CLONGDOUBLE: f[0] = 90; f[1] = 103; f += 1 # Zg
+ * elif t == NPY_OBJECT: f[0] = 79 #"O" # <<<<<<<<<<<<<<
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ */
+ __pyx_t_4 = __Pyx_PyInt_From_enum__NPY_TYPES(NPY_OBJECT); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 866, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __pyx_t_3 = PyObject_RichCompare(__pyx_v_t, __pyx_t_4, Py_EQ); __Pyx_XGOTREF(__pyx_t_3); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 866, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __pyx_t_6 = __Pyx_PyObject_IsTrue(__pyx_t_3); if (unlikely(__pyx_t_6 < 0)) __PYX_ERR(2, 866, __pyx_L1_error)
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ if (__pyx_t_6) {
+ (__pyx_v_f[0]) = 79;
+ goto __pyx_L15;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":868
+ * elif t == NPY_OBJECT: f[0] = 79 #"O"
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t) # <<<<<<<<<<<<<<
+ * f += 1
+ * else:
+ */
+ /*else*/ {
+ __pyx_t_3 = PyUnicode_Format(__pyx_kp_u_unknown_dtype_code_in_numpy_pxd, __pyx_v_t); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 868, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __pyx_t_4 = PyTuple_New(1); if (unlikely(!__pyx_t_4)) __PYX_ERR(2, 868, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_4);
+ __Pyx_GIVEREF(__pyx_t_3);
+ PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_3);
+ __pyx_t_3 = 0;
+ __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_t_4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(2, 868, __pyx_L1_error)
+ __Pyx_GOTREF(__pyx_t_3);
+ __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0;
+ __Pyx_Raise(__pyx_t_3, 0, 0, 0);
+ __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0;
+ __PYX_ERR(2, 868, __pyx_L1_error)
+ }
+ __pyx_L15:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":869
+ * else:
+ * raise ValueError(u"unknown dtype code in numpy.pxd (%d)" % t)
+ * f += 1 # <<<<<<<<<<<<<<
+ * else:
+ * # Cython ignores struct boundary information ("T{...}"),
+ */
+ __pyx_v_f = (__pyx_v_f + 1);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":844
+ * offset[0] += child.itemsize
+ *
+ * if not PyDataType_HASFIELDS(child): # <<<<<<<<<<<<<<
+ * t = child.type_num
+ * if end - f < 5:
+ */
+ goto __pyx_L13;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":873
+ * # Cython ignores struct boundary information ("T{...}"),
+ * # so don't output it
+ * f = _util_dtypestring(child, f, end, offset) # <<<<<<<<<<<<<<
+ * return f
+ *
+ */
+ /*else*/ {
+ __pyx_t_9 = __pyx_f_5numpy__util_dtypestring(__pyx_v_child, __pyx_v_f, __pyx_v_end, __pyx_v_offset); if (unlikely(__pyx_t_9 == ((char *)NULL))) __PYX_ERR(2, 873, __pyx_L1_error)
+ __pyx_v_f = __pyx_t_9;
+ }
+ __pyx_L13:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":818
+ * cdef tuple fields
+ *
+ * for childname in descr.names: # <<<<<<<<<<<<<<
+ * fields = descr.fields[childname]
+ * child, new_offset = fields
+ */
+ }
+ __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":874
+ * # so don't output it
+ * f = _util_dtypestring(child, f, end, offset)
+ * return f # <<<<<<<<<<<<<<
+ *
+ *
+ */
+ __pyx_r = __pyx_v_f;
+ goto __pyx_L0;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":809
+ * return ()
+ *
+ * cdef inline char* _util_dtypestring(dtype descr, char* f, char* end, int* offset) except NULL: # <<<<<<<<<<<<<<
+ * # Recursive utility function used in __getbuffer__ to get format
+ * # string. The new location in the format string is returned.
+ */
+
+ /* function exit code */
+ __pyx_L1_error:;
+ __Pyx_XDECREF(__pyx_t_1);
+ __Pyx_XDECREF(__pyx_t_3);
+ __Pyx_XDECREF(__pyx_t_4);
+ __Pyx_AddTraceback("numpy._util_dtypestring", __pyx_clineno, __pyx_lineno, __pyx_filename);
+ __pyx_r = NULL;
+ __pyx_L0:;
+ __Pyx_XDECREF((PyObject *)__pyx_v_child);
+ __Pyx_XDECREF(__pyx_v_fields);
+ __Pyx_XDECREF(__pyx_v_childname);
+ __Pyx_XDECREF(__pyx_v_new_offset);
+ __Pyx_XDECREF(__pyx_v_t);
+ __Pyx_RefNannyFinishContext();
+ return __pyx_r;
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":990
+ *
+ *
+ * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<<
+ * cdef PyObject* baseptr
+ * if base is None:
+ */
+
+static CYTHON_INLINE void __pyx_f_5numpy_set_array_base(PyArrayObject *__pyx_v_arr, PyObject *__pyx_v_base) {
+ PyObject *__pyx_v_baseptr;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ int __pyx_t_2;
+ __Pyx_RefNannySetupContext("set_array_base", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":992
+ * cdef inline void set_array_base(ndarray arr, object base):
+ * cdef PyObject* baseptr
+ * if base is None: # <<<<<<<<<<<<<<
+ * baseptr = NULL
+ * else:
+ */
+ __pyx_t_1 = (__pyx_v_base == Py_None);
+ __pyx_t_2 = (__pyx_t_1 != 0);
+ if (__pyx_t_2) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":993
+ * cdef PyObject* baseptr
+ * if base is None:
+ * baseptr = NULL # <<<<<<<<<<<<<<
+ * else:
+ * Py_INCREF(base) # important to do this before decref below!
+ */
+ __pyx_v_baseptr = NULL;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":992
+ * cdef inline void set_array_base(ndarray arr, object base):
+ * cdef PyObject* baseptr
+ * if base is None: # <<<<<<<<<<<<<<
+ * baseptr = NULL
+ * else:
+ */
+ goto __pyx_L3;
+ }
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":995
+ * baseptr = NULL
+ * else:
+ * Py_INCREF(base) # important to do this before decref below! # <<<<<<<<<<<<<<
+ * baseptr = base
+ * Py_XDECREF(arr.base)
+ */
+ /*else*/ {
+ Py_INCREF(__pyx_v_base);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":996
+ * else:
+ * Py_INCREF(base) # important to do this before decref below!
+ * baseptr = base # <<<<<<<<<<<<<<
+ * Py_XDECREF(arr.base)
+ * arr.base = baseptr
+ */
+ __pyx_v_baseptr = ((PyObject *)__pyx_v_base);
+ }
+ __pyx_L3:;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":997
+ * Py_INCREF(base) # important to do this before decref below!
+ * baseptr = base
+ * Py_XDECREF(arr.base) # <<<<<<<<<<<<<<
+ * arr.base = baseptr
+ *
+ */
+ Py_XDECREF(__pyx_v_arr->base);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":998
+ * baseptr = base
+ * Py_XDECREF(arr.base)
+ * arr.base = baseptr # <<<<<<<<<<<<<<
+ *
+ * cdef inline object get_array_base(ndarray arr):
+ */
+ __pyx_v_arr->base = __pyx_v_baseptr;
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":990
+ *
+ *
+ * cdef inline void set_array_base(ndarray arr, object base): # <<<<<<<<<<<<<<
+ * cdef PyObject* baseptr
+ * if base is None:
+ */
+
+ /* function exit code */
+ __Pyx_RefNannyFinishContext();
+}
+
+/* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":1000
+ * arr.base = baseptr
+ *
+ * cdef inline object get_array_base(ndarray arr): # <<<<<<<<<<<<<<
+ * if arr.base is NULL:
+ * return None
+ */
+
+static CYTHON_INLINE PyObject *__pyx_f_5numpy_get_array_base(PyArrayObject *__pyx_v_arr) {
+ PyObject *__pyx_r = NULL;
+ __Pyx_RefNannyDeclarations
+ int __pyx_t_1;
+ __Pyx_RefNannySetupContext("get_array_base", 0);
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":1001
+ *
+ * cdef inline object get_array_base(ndarray arr):
+ * if arr.base is NULL: # <<<<<<<<<<<<<<
+ * return None
+ * else:
+ */
+ __pyx_t_1 = ((__pyx_v_arr->base == NULL) != 0);
+ if (__pyx_t_1) {
+
+ /* "../../../anaconda/envs/polar2grid_py36/lib/python3.6/site-packages/Cython/Includes/numpy/__init__.pxd":1002
+ * cdef inline object get_array_base(ndarray arr):
+ * if arr.base is NULL:
+ * return None # <<<<<<<<<<<<<<
+ * else:
+ * return