How to get inputs of a layer? #109
Comments
Hi @Dzandaa,
Does this reply solve the question?
Hello Joao,
Thank you very much for your answer.
As I don't create console programs, only GUI applications, I wanted to see the structure (inputs and outputs) of my layers in a TMemo.
When initializing OpenCL, I had a crash (a writeln) on Windows, with the Win32 GUI application option enabled
and EasyOpenCL.HideMessages() enabled.
On Linux, I also wanted to see the console messages in a TMemo.
But I solved the problem by redirecting stdout to a TMemo.
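For what it's worth, an alternative to redirecting stdout: the MessageProc and ErrorProc callbacks that the code further down already assigns (NNAutoencoder.MessageProc, NeuralFit.MessageProc, etc.) can simply append to a TMemo. A minimal sketch, assuming MemoLog is a hypothetical TMemo placed on the form and that the handlers use the usual string-callback signature:

procedure TDenoizingForm.OnMessage(const S: string);
begin
  // Show library messages in the GUI instead of the console.
  MemoLog.Lines.Add(S);
end;

procedure TDenoizingForm.OnError(const S: string);
begin
  MemoLog.Lines.Add('ERROR: ' + S);
end;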
I created a denoising program that runs on MNIST, CIFAR, and a directory of pictures.
One strange thing: when I train on MNIST or CIFAR data, NeuralFit.TrainingAccuracy is always zero,
but the result is O.K.
When training on pictures, NeuralFit.TrainingAccuracy is O.K.
Here is part of my code if you have any idea:
// *********************************
// ***** Create Neural Network *****
// *********************************
if(NNAutoencoder <> nil) then FreeAndNil(NNAutoencoder);
NNAutoencoder := THistoricalNets.Create();
NNAutoencoder.MessageProc := @Self.OnMessage;
NNAutoencoder.ErrorProc := @Self.OnError;
NNAutoencoder.HideMessages(); // ***** Hide Messages *****
if HasOpenCL then
begin
NNAutoencoder.EnableOpenCL(EasyOpenCL.PlatformIds[0], EasyOpenCL.Devices[0]);
end;
...
// Encoder
NNAutoencoder.AddLayer([
TNNetInput.Create(XSize, YSize, ZSize),
TNNetConvolutionReLU.Create(32, 3, 1, 1, 1),
TNNetMaxPool.Create(2, 2, 0),
TNNetConvolutionReLU.Create(32, 3, 1, 1, 1),
TNNetMaxPool.Create(2, 2, 0),
// Decoder
TNNetConvolutionReLU.Create(32, 3, 1, 1, 1),
TNNetUpsample.Create(),
TNNetConvolutionReLU.Create(32, 3, 1, 1, 1),
TNNetUpsample.Create(),
TNNetConvolutionReLU.Create(32, 3, 1, 1, 1),
TNNetConvolutionReLU.Create(ZSize, 3, 1, 1, 1)
]);
NNAutoencoder.SetLearningRate(0.0010,0.9);
NNAutoencoder.SetL2Decay(0.0);
// *****************************
// ***** Create Neural Fit *****
// *****************************
procedure TDenoizingForm.CreateNeuralFit;
begin
if(NeuralFit <> nil) then FreeAndNil(NeuralFit);
NeuralFit := TNeuralDataLoadingFit.Create();
NeuralFit.OnAfterEpoch := @Self.OnAfterEpoch;
NeuralFit.OnAfterStep := @Self.OnAfterStep;
NeuralFit.OnStart := @Self.OnStart;
NeuralFit.MessageProc := @Self.OnMessage;
NeuralFit.ErrorProc := @Self.OnError;
NeuralFit.HideMessages(); // ***** Hide Messages *****
NeuralFit.LearningRateDecay := 0.0;
NeuralFit.L2Decay := 0.0;
NeuralFit.AvgWeightEpochCount := 1;
NeuralFit.InitialLearningRate := 0.01;
NeuralFit.ClipDelta := 0.01;
NeuralFit.FileNameBase := DataName+'-Denoising';
NeuralFit.EnableBipolar99HitComparison();
if HasOpenCL then
begin
NeuralFit.EnableOpenCL(EasyOpenCL.PlatformIds[0], EasyOpenCL.Devices[0]);
end;
end;
// **************************
// ***** Start Training *****
// **************************
...
NeuralFit.FitLoading(NNAutoencoder, {EpochSize=}100, 0, 0, {Batch=}64, {Epochs=}1000, @GetTrainingData, nil, nil);
...
// ***************************
// ***** Get Training Data ***
// ***************************
procedure TDenoizingForm.GetTrainingData(Idx: integer; ThreadId: integer;
pInput, pOutput: TNNetVolume);
var
ImageId : integer;
begin
// The clean image is the training target; a noise-corrupted copy is the input.
ImageId := Random(ImageVolumes.Count);
pOutput.Copy(ImageVolumes[ImageId]);
pInput.Copy(pOutput);
pInput.AddGaussianNoise(FSNoise.Value);
end;
My wife is a mathematician with a PhD in science.
She works in a space center.
She is looking to test CAI against Keras/TensorFlow on a new project for rock analysis.
Is that O.K. with you?
Thank you for that great library.
And sorry for my bad English :)
Have a nice day.
B->
Saturday, March 4, 2023, 1:17:28 PM, you wrote:
Hi @Dzandaa,
For a Layers[LayerCnt] layer, except for concatenating layers and
the input layer, you can get the input via:
NNAutoencoder.Layers[LayerCnt].PrevLayer.Output;
PrevLayer gives you the previous layer. Then, you can get its
output, which is the input of the Layers[LayerCnt] layer.
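A minimal sketch of this, using a hypothetical helper method ShowLayerSizes, the SizeX/SizeY/Depth getters mentioned elsewhere in this thread, and the form's OnMessage handler (layer 0 is skipped because the input layer has no PrevLayer):

procedure TDenoizingForm.ShowLayerSizes;
var
  LayerCnt: integer;
  InputVol, OutputVol: TNNetVolume;
begin
  for LayerCnt := 1 to NNAutoencoder.Layers.Count - 1 do
  begin
    // The previous layer's output is this layer's input.
    InputVol := NNAutoencoder.Layers[LayerCnt].PrevLayer.Output;
    OutputVol := NNAutoencoder.Layers[LayerCnt].Output;
    OnMessage('Layer ' + IntToStr(LayerCnt) +
      ' input ' + IntToStr(InputVol.SizeX) + 'x' +
      IntToStr(InputVol.SizeY) + 'x' + IntToStr(InputVol.Depth) +
      ' output ' + IntToStr(OutputVol.SizeX) + 'x' +
      IntToStr(OutputVol.SizeY) + 'x' + IntToStr(OutputVol.Depth));
  end;
end;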
Does this reply solve the question?
--
Best regards,
Dzandaa
Denoising is one of the most interesting applications for neural networks, in my opinion. I should dedicate more time to it myself. If you publish your source code in full, I'll be happy to add a link to it. I also think the area of autoencoding is super interesting. I've been playing with autoencoders on 64x64 images. In case you are interested, this is my current architecture:
For the encoder/decoder, I personally prefer avoiding maxpoolings. I usually use convolutions with stride=2 instead (see the sketch after this comment). I also avoid ReLUs. But this is my personal preference. You are free to use the layers of your preference. These are my other parameters:
I'm curious to know what your wife finds comparing CAI against Keras. Feel free to share good and bad news. Regarding "I train on MNIST or CIFAR data, NeuralFit.TrainingAccuracy is always zero", maybe the problem is around the preprocessing... Not sure. Keep posting. I enjoy reading.
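As a minimal sketch of the stride=2 idea above (this is not the architecture listing from the comment, only an illustration applied to the poster's encoder): one downsampling stage such as

TNNetConvolutionReLU.Create(32, 3, 1, 1, 1),
TNNetMaxPool.Create(2, 2, 0),

could be replaced by a single stride-2 convolution. TNNetConvolutionLinear is assumed here to take the same arguments as TNNetConvolutionReLU, and the parameter order is assumed to be (features, filter size, padding, stride, suppress bias), matching how the calls above read:

NNAutoencoder.AddLayer(
  TNNetConvolutionLinear.Create(32, 3, 1, {stride=}2, 1)
);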
Hello Joao,
Hi,
You can get the outputs of a layer like:
NNAutoencoder.Layers[LayerCnt].Output.SizeX
NNAutoencoder.Layers[LayerCnt].Output.SizeY
NNAutoencoder.Layers[LayerCnt].Output.Depth
but is it possible to get its inputs?
Like in NNAutoencoder.DebugStructure
Thank you.