Is it possible to use Unet for multi-output prediction? #3731
-
I have some PET images and have labeled the ROI of the cancer region. I want to use this data to make other predictions as well, including probabilistic prediction, linear regression, etc., like the approach described in this blog. Is there a way to do this? Thanks in advance!
-
Hi @ericspod, could you please share some information about this UNet question? Thanks in advance.
-
@Nic-Ma Thank you very much for your quick reply. Let me refine my question a little: how can I connect different losses to the existing model and make different predictions?
-
I don't think it's a straightforward question of what to do with the existing UNet architecture to do something like this. The network in the blog is composed of two distinct branches and has two outputs, so the equivalent idea here could be to have distinct UNet instances for the tasks you want, and wrap these in a class that broadcasts the inputs to each. This class is what you'd pass to the training engine, and similarly you could have a wrapper around the loss functions you want to use so that they appear as one. Something like this perhaps:

```python
import torch.nn as nn

class NetWrapper(nn.Module):
    # Broadcast one input to every wrapped network and return all outputs
    def __init__(self, nets):
        super().__init__()
        self.nets = nn.ModuleList(nets)

    def forward(self, x):
        return tuple(net(x) for net in self.nets)

class LossWrapper(nn.Module):
    # Sum the per-task losses so several loss functions appear as one
    def __init__(self, losses):
        super().__init__()
        self.losses = nn.ModuleList(losses)

    def forward(self, inputs, targets):
        return sum(loss(inp, grnd) for loss, inp, grnd in zip(self.losses, inputs, targets))
```
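To make the idea concrete, here is a minimal sketch of how such wrappers could be driven in one training step. The wrapper classes are repeated so the sketch runs standalone, and the two `Conv2d` stand-ins, targets, and shapes are illustrative assumptions, not something from this thread; real UNet instances would take the place of the stand-ins.

```python
import torch
import torch.nn as nn

class NetWrapper(nn.Module):
    # Broadcast one input to every wrapped network (repeated for standalone use)
    def __init__(self, nets):
        super().__init__()
        self.nets = nn.ModuleList(nets)
    def forward(self, x):
        return tuple(net(x) for net in self.nets)

class LossWrapper(nn.Module):
    # Sum per-task losses into a single scalar
    def __init__(self, losses):
        super().__init__()
        self.losses = nn.ModuleList(losses)
    def forward(self, inputs, targets):
        return sum(l(i, t) for l, i, t in zip(self.losses, inputs, targets))

# Hypothetical stand-ins for two task networks (real UNets would go here).
seg_net = nn.Conv2d(1, 2, 3, padding=1)   # e.g. a segmentation branch, 2 classes
reg_net = nn.Conv2d(1, 1, 3, padding=1)   # e.g. a regression branch

model = NetWrapper([seg_net, reg_net])
criterion = LossWrapper([nn.CrossEntropyLoss(), nn.MSELoss()])
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 1, 32, 32)
seg_target = torch.randint(0, 2, (4, 32, 32))        # class indices for CE loss
reg_target = torch.randn(4, 1, 32, 32)

opt.zero_grad()
outputs = model(x)                                   # tuple: (seg_logits, reg_pred)
loss = criterion(outputs, (seg_target, reg_target))  # one combined scalar
loss.backward()
opt.step()
```

With this arrangement the engine only ever sees one network and one loss, which is the point of the wrappers.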
-
Multiple output channels for one output head is one option: the following UNet will at least train on the image-to-image translation task:
Another option is to add a head per output:
To adapt this for multiple output layers first pass the input through the UNet and then to the specific output heads.
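The head code itself was not preserved above; one way to sketch the "shared body feeding per-task heads" idea in plain PyTorch is below. The `body` here is a deliberately tiny stand-in for the real UNet trunk, and the head shapes are assumptions for illustration:

```python
import torch
import torch.nn as nn

class MultiHeadNet(nn.Module):
    # Shared feature body with one lightweight head per task
    def __init__(self, body, heads):
        super().__init__()
        self.body = body
        self.heads = nn.ModuleList(heads)

    def forward(self, x):
        feats = self.body(x)                       # run the trunk once
        return tuple(head(feats) for head in self.heads)

# Stand-in body (a real UNet would go here); 1 -> 16 feature channels.
body = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
heads = [
    nn.Conv2d(16, 2, 1),  # hypothetical segmentation head (2 classes)
    nn.Conv2d(16, 1, 1),  # hypothetical regression head
]
net = MultiHeadNet(body, heads)

seg_out, reg_out = net(torch.randn(2, 1, 32, 32))
```

Each head only adds a 1x1 convolution on top of the shared features, so the extra cost per task is small.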
If you don't want to accumulate your different losses, you'd have to write the training loop yourself. I haven't done that, but I guess it may be possible to do something like
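the snippet referenced here did not survive extraction; a hedged guess at what was meant, keeping each loss as its own tensor and calling `backward` on each separately in a hand-written loop (the model, losses, and shapes are placeholders I've assumed):

```python
import torch
import torch.nn as nn

# Placeholder model and two separate losses on the same output (assumptions).
net = nn.Conv2d(1, 2, 3, padding=1)
loss_a = nn.MSELoss()
loss_b = nn.L1Loss()
opt = torch.optim.SGD(net.parameters(), lr=0.01)

x = torch.randn(4, 1, 16, 16)
target = torch.randn(4, 2, 16, 16)

for step in range(2):                   # tiny hand-written training loop
    opt.zero_grad()
    out = net(x)
    la = loss_a(out, target)            # each loss stays a separate tensor
    lb = loss_b(out, target)
    la.backward(retain_graph=True)      # gradients accumulate in .grad
    lb.backward()                       # so no explicit loss summing is needed
    opt.step()
```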
or similar. Related reads: