about model training and usage #18

Open
bme-2020 opened this issue Apr 28, 2021 · 1 comment

@bme-2020
As a beginning student in the field of deep-learning CT reconstruction, I am very interested in what you have done. After carefully reading your article and code implementation, I configured the environment on our laboratory server and successfully ran your code after making some modifications. But I have a question as follows:
After training the model through the train_model.py file, how are the best parameters obtained with early stopping saved and then used later for reconstruction? When I first started learning neural networks, I learned methods such as model.save() and model.load() to save and restore a model, and of course there are many other ways to accomplish this. Could you please tell me how you implement it? I'm sorry, I didn't see that in the code.
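
To illustrate what I mean, the pattern I learned looks roughly like this (a minimal tf.keras sketch; the model, data, and file name are only placeholders and not taken from your code):

import tensorflow as tf

# Placeholder model and data, just to show the save/load pattern.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(16,))])
model.compile(optimizer='adam', loss='mse')

x = tf.random.normal((128, 16))
y = tf.random.normal((128, 1))

# Stop early when the validation loss stalls and keep only the best weights.
callbacks = [
    tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True),
    tf.keras.callbacks.ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
]
model.fit(x, y, validation_split=0.2, epochs=50, callbacks=callbacks)

# Later, reload the best model for inference / reconstruction.
best_model = tf.keras.models.load_model('best_model.h5')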

For instance, your example_cone_3d.py:

import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from pyronn.ct_reconstruction.layers.projection_3d import cone_projection3d
from pyronn.ct_reconstruction.layers.backprojection_3d import cone_backprojection3d
from pyronn.ct_reconstruction.geometry.geometry_cone_3d import GeometryCone3D
from pyronn.ct_reconstruction.helpers.phantoms import shepp_logan
from pyronn.ct_reconstruction.helpers.trajectories import circular_trajectory
from pyronn.ct_reconstruction.helpers.filters.filters import ram_lak_3D

def example_cone_3d():
    # ------------------ Declare Parameters ------------------

    # Volume Parameters:
    volume_size = 256
    volume_shape = [volume_size, volume_size, volume_size]
    volume_spacing = [0.5, 0.5, 0.5]

    # Detector Parameters:
    detector_shape = [2*volume_size, 2*volume_size]
    detector_spacing = [1, 1]

    # Trajectory Parameters:
    number_of_projections = 360
    angular_range = 2 * np.pi

    source_detector_distance = 1200
    source_isocenter_distance = 750

    # create Geometry class
    geometry = GeometryCone3D(volume_shape, volume_spacing, detector_shape, detector_spacing,
                              number_of_projections, angular_range, source_detector_distance, source_isocenter_distance)
    geometry.set_trajectory(circular_trajectory.circular_trajectory_3d(geometry))

    # Get Phantom 3d
    phantom = shepp_logan.shepp_logan_3d(volume_shape)
    # Add required batch dimension
    phantom = np.expand_dims(phantom, axis=0)

    # ------------------ Call Layers ------------------
    # The following code uses the TF 2.0 experimental API to tell
    # TensorFlow to allocate only the GPU memory it needs rather than all available GPU memory.
    # This is important when using the hardware interpolation projector; otherwise there might not be
    # enough memory left to allocate the texture memory on the GPU.

    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        try:
            for gpu in gpus:
                tf.config.experimental.set_memory_growth(gpu, True)
        except RuntimeError as e:
            print(e)

    sinogram = cone_projection3d(phantom, geometry)

    reco_filter = ram_lak_3D(geometry)
    sino_freq = tf.signal.fft(tf.cast(sinogram, dtype=tf.complex64))
    sino_filtered_freq = tf.multiply(sino_freq, tf.cast(reco_filter, dtype=tf.complex64))
    sinogram_filtered = tf.math.real(tf.signal.ifft(sino_filtered_freq))

    reco = cone_backprojection3d(sinogram_filtered, geometry)

    plt.figure()
    plt.imshow(np.squeeze(reco)[volume_shape[0]//2], cmap=plt.get_cmap('gist_gray'))
    plt.axis('off')
    plt.savefig('3d_cone_reco.png', dpi=150, transparent=False, bbox_inches='tight')

if __name__ == '__main__':
    example_cone_3d()

@csyben
Owner

csyben commented May 17, 2021

Hi,

these are only small examples meant to guide new users on how to set up the projection and reconstruction with the provided layers. There is no learning happening in this example, which is why you can't find any save method for the best parameters.

If you are interested in a pure TensorFlow learning example, I recommend taking a look at "example_learning_tensorflow". However, there is no high-level model storage strategy in that example either: its train() method simply returns a result dict containing the best weights for the reconstruction filter. Note that the examples are mainly intended to show the basic use of the layers and the geometry object, but you can include the provided layers in more complicated network structures and training code.
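
If you want to persist those weights yourself, a minimal sketch could look like this (assuming the result dict maps names to NumPy arrays; the key and file names below are only illustrative, not part of the example code):

import numpy as np

# Hypothetical result dict as returned by a train() routine,
# e.g. containing the learned reconstruction filter weights.
result = {'filter_weights': np.random.rand(512).astype(np.float32)}

# Save the best weights to disk ...
np.savez('best_weights.npz', **result)

# ... and load them again later for reconstruction.
loaded = np.load('best_weights.npz')
filter_weights = loaded['filter_weights']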

Hope that clarifies your question.

best
