diff --git a/dev/api/index.html b/dev/api/index.html index 5c811d62..58755d06 100644 --- a/dev/api/index.html +++ b/dev/api/index.html @@ -1,10 +1,10 @@ General · ExplainableAI.jl

Basic API

All methods in ExplainableAI.jl work by calling analyze on an input and an analyzer:

ExplainableAI.analyzeFunction
analyze(input, method)
-analyze(input, method, neuron_selection)

Apply the analyzer method for the given input, returning an Explanation. If neuron_selection is specified, the explanation will be calculated for that neuron. Otherwise, the output neuron with the highest activation is automatically chosen.

See also Explanation and heatmap.

Keyword arguments

  • add_batch_dim: add a batch dimension to the input without allocating. Default is false.
source
ExplainableAI.ExplanationType

Return type of analyzers when calling analyze.

Fields

  • val: numerical output of the analyzer, e.g. an attribution or gradient
  • output: model output for the given analyzer input
  • neuron_selection: neuron index used for the explanation
  • analyzer: symbol corresponding to the used analyzer, e.g. :LRP or :Gradient
  • extras: optional named tuple that can be used by analyzers to return additional information.
source
ExplainableAI.heatmapFunction
heatmap(explanation)
+analyze(input, method, neuron_selection)

Apply the analyzer method for the given input, returning an Explanation. If neuron_selection is specified, the explanation will be calculated for that neuron. Otherwise, the output neuron with the highest activation is automatically chosen.

See also Explanation and heatmap.

Keyword arguments

  • add_batch_dim: add a batch dimension to the input without allocating. Default is false.
source
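
For illustration, a minimal sketch of both call forms, assuming ExplainableAI.jl and Flux are loaded and that model is a trained Flux model and input a WHCN input array (both hypothetical):

analyzer = Gradient(model)
expl  = analyze(input, analyzer)      # explains the output neuron with the highest activation
expl5 = analyze(input, analyzer, 5)   # explains output neuron 5 instead
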
ExplainableAI.ExplanationType

Return type of analyzers when calling analyze.

Fields

  • val: numerical output of the analyzer, e.g. an attribution or gradient
  • output: model output for the given analyzer input
  • neuron_selection: neuron index used for the explanation
  • analyzer: symbol corresponding to the used analyzer, e.g. :LRP or :Gradient
  • extras: optional named tuple that can be used by analyzers to return additional information.
source
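
A short sketch of accessing these fields, assuming expl is the Explanation returned in the example above:

expl.val               # e.g. the gradient or attribution values
expl.output            # model output for the analyzed input
expl.neuron_selection  # index of the explained output neuron
expl.analyzer          # e.g. :Gradient or :LRP
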
ExplainableAI.heatmapFunction
heatmap(explanation)
 heatmap(input, analyzer)
-heatmap(input, analyzer, neuron_selection)

Visualize explanation. Assumes Flux's WHCN convention (width, height, color channels, batch size).

See also analyze.

Keyword arguments

  • cs::ColorScheme: color scheme from ColorSchemes.jl that is applied. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is ColorSchemes.seismic.
  • reduce::Symbol: selects how color channels are reduced to a single number to apply a color scheme. The following methods can be selected, which are then applied over the color channels for each "pixel" in the explanation:
    • :sum: sum up color channels
    • :norm: compute 2-norm over the color channels
    • :maxabs: compute maximum(abs, x) over the color channels
    When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is :sum.
  • rangescale::Symbol: selects how the color channel reduced heatmap is normalized before the color scheme is applied. Can be either :extrema or :centered. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default for use with the seismic color scheme is :centered.
  • permute::Bool: Whether to flip (permute) the width and height dimensions of the input. Default is true.
  • unpack_singleton::Bool: When heatmapping a batch with a single sample, setting unpack_singleton=true will return an image instead of a Vector containing a single image.

Note: keyword arguments can't be used when calling heatmap with an analyzer.

source

Analyzers

ExplainableAI.LRPType
LRP(model, rules)
-LRP(model, composite)

Analyze model by applying Layer-Wise Relevance Propagation. The analyzer can be created either by passing an array of LRP rules or by passing a composite; see Composite for an example.

Keyword arguments

  • skip_checks::Bool: Skip the checks of whether the model is compatible with LRP and whether it contains an output softmax. Default is false.
  • verbose::Bool: Select whether the model checks should print a summary on failure. Default is true.

References

[1] G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
[2] W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

source
ExplainableAI.GradientType
Gradient(model)

Analyze model by calculating the gradient of a neuron activation with respect to the input.

source
ExplainableAI.InputTimesGradientType
InputTimesGradient(model)

Analyze model by calculating the gradient of a neuron activation with respect to the input. This gradient is then multiplied element-wise with the input.

source
ExplainableAI.SmoothGradFunction
SmoothGrad(analyzer, [n=50, std=0.1, rng=GLOBAL_RNG])
-SmoothGrad(analyzer, [n=50, distribution=Normal(0, σ²=0.01), rng=GLOBAL_RNG])

Analyze model by calculating a smoothed sensitivity map. This is done by averaging sensitivity maps of a Gradient analyzer over random samples in a neighborhood of the input, typically by adding Gaussian noise with mean 0.

References

  • Smilkov et al., SmoothGrad: removing noise by adding noise
source
ExplainableAI.IntegratedGradientsFunction
IntegratedGradients(analyzer, [n=50])
-IntegratedGradients(analyzer, [n=50])

Analyze model by using the Integrated Gradients method.

References

  • Sundararajan et al., Axiomatic Attribution for Deep Networks
source

Input augmentations

SmoothGrad and IntegratedGradients are special cases of the input augmentations NoiseAugmentation and InterpolationAugmentation, which can be applied as a wrapper to any analyzer:

ExplainableAI.NoiseAugmentationType
NoiseAugmentation(analyzer, n, [std=1, rng=GLOBAL_RNG])
-NoiseAugmentation(analyzer, n, distribution, [rng=GLOBAL_RNG])

A wrapper around analyzers that augments the input with n samples of additive noise sampled from distribution. The explanations of these noisy inputs are then averaged to return a single Explanation.

source
ExplainableAI.InterpolationAugmentationType
InterpolationAugmentation(model, [n=50])

A wrapper around analyzers that augments the input with n steps of linear interpolation between the input and a reference input (typically zero(input)). The gradients w.r.t. this augmented input are then averaged and multiplied by the difference between the input and the reference input.

source

Model preparation

ExplainableAI.canonizeFunction
canonize(model)

Canonize model by flattening it and fusing BatchNorm layers into preceding Dense and Conv layers with linear activation functions.

source

Input preprocessing

ExplainableAI.preprocess_imagenetFunction
preprocess_imagenet(img)

Preprocess an image for use with Metalhead.jl's ImageNet models that use PyTorch weights. Applies PyTorch's normalization constants.

source

Index

+heatmap(input, analyzer, neuron_selection)

Visualize explanation. Assumes Flux's WHCN convention (width, height, color channels, batch size).

See also analyze.

Keyword arguments

Note: keyword arguments can't be used when calling heatmap with an analyzer.

source
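
A minimal sketch of these call forms, reusing the hypothetical input, analyzer, and expl from the examples above:

heatmap(expl)                        # visualize an existing Explanation
heatmap(input, analyzer)             # analyze and visualize in one call (no keyword arguments allowed)
heatmap(expl; rangescale=:extrema)   # keyword arguments can be combined with an Explanation
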

Analyzers

ExplainableAI.LRPType
LRP(model, rules)
+LRP(model, composite)

Analyze model by applying Layer-Wise Relevance Propagation. The analyzer can be created either by passing an array of LRP rules or by passing a composite; see Composite for an example.

Keyword arguments

  • skip_checks::Bool: Skip the checks of whether the model is compatible with LRP and whether it contains an output softmax. Default is false.
  • verbose::Bool: Select whether the model checks should print a summary on failure. Default is true.

References

[1] G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
[2] W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications

source
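
A sketch of the rule-based constructor with a hypothetical toy model; ZeroRule and EpsilonRule are LRP rules from the package, and one rule is supplied per layer when passing an array of rules:

using ExplainableAI, Flux
model = Chain(Dense(784, 100, relu), Dense(100, 10))   # toy two-layer model without output softmax
input = rand(Float32, 784, 1)                          # a single flattened sample
analyzer = LRP(model, [ZeroRule(), EpsilonRule()])     # one rule per layer
expl = analyze(input, analyzer)
# LRP(model, composite) works analogously; see Composite for an example
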
ExplainableAI.GradientType
Gradient(model)

Analyze model by calculating the gradient of a neuron activation with respect to the input.

source
ExplainableAI.InputTimesGradientType
InputTimesGradient(model)

Analyze model by calculating the gradient of a neuron activation with respect to the input. This gradient is then multiplied element-wise with the input.

source
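
To contrast the two analyzers above, a small sketch with the same hypothetical model and input:

grad = analyze(input, Gradient(model))
itg  = analyze(input, InputTimesGradient(model))
# itg.val should match input .* grad.val, the element-wise product described above
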
ExplainableAI.SmoothGradFunction
SmoothGrad(analyzer, [n=50, std=0.1, rng=GLOBAL_RNG])
+SmoothGrad(analyzer, [n=50, distribution=Normal(0, σ²=0.01), rng=GLOBAL_RNG])

Analyze model by calculating a smoothed sensitivity map. This is done by averaging sensitivity maps of a Gradient analyzer over random samples in a neighborhood of the input, typically by adding Gaussian noise with mean 0.

References

  • Smilkov et al., SmoothGrad: removing noise by adding noise
source
ExplainableAI.IntegratedGradientsFunction
IntegratedGradients(analyzer, [n=50])
+IntegratedGradients(analyzer, [n=50])

Analyze model by using the Integrated Gradients method.

References

  • Sundararajan et al., Axiomatic Attribution for Deep Networks
source

Input augmentations

SmoothGrad and IntegratedGradients are special cases of the input augmentations NoiseAugmentation and InterpolationAugmentation, which can be applied as a wrapper to any analyzer:

ExplainableAI.NoiseAugmentationType
NoiseAugmentation(analyzer, n, [std=1, rng=GLOBAL_RNG])
+NoiseAugmentation(analyzer, n, distribution, [rng=GLOBAL_RNG])

A wrapper around analyzers that augments the input with n samples of additive noise sampled from distribution. The explanations of these noisy inputs are then averaged to return a single Explanation.

source
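
A sketch following the signature above, wrapping a Gradient analyzer around a hypothetical model; the sample count and noise level are arbitrary:

analyzer = NoiseAugmentation(Gradient(model), 20, 0.1)   # 20 noisy samples with standard deviation 0.1
expl = analyze(input, analyzer)
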
ExplainableAI.InterpolationAugmentationType
InterpolationAugmentation(model, [n=50])

A wrapper around analyzers that augments the input with n steps of linear interpolation between the input and a reference input (typically zero(input)). The gradients w.r.t. this augmented input are then averaged and multiplied by the difference between the input and the reference input.

source

Model preparation

ExplainableAI.strip_softmaxFunction
strip_softmax(model)
+strip_softmax(layer)

Remove softmax activation on layer or model if it exists.

source
ExplainableAI.canonizeFunction
canonize(model)

Canonize model by flattening it and fusing BatchNorm layers into preceding Dense and Conv layers with linear activation functions.

source
ExplainableAI.flatten_modelFunction
flatten_model(model)

Flatten a Flux Chain containing Chains.

source
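
A sketch of how these utilities are typically chained before constructing an analyzer, assuming a hypothetical Flux model that ends in a softmax and contains BatchNorm layers:

model = strip_softmax(model)   # remove the output softmax activation
model = canonize(model)        # flatten the model and fuse BatchNorm layers
# flatten_model(model) can also be used on its own to flatten nested Chains
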

Input preprocessing

ExplainableAI.preprocess_imagenetFunction
preprocess_imagenet(img)

Preprocess an image for use with Metalhead.jl's ImageNet models that use PyTorch weights. Applies PyTorch's normalization constants.

source
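
A sketch, assuming img is an image loaded via Images.jl and vgg is a Metalhead.jl model with PyTorch weights (both hypothetical), and that the preprocessed array still needs a trailing batch dimension:

x = preprocess_imagenet(img)
x = reshape(x, size(x)..., 1)   # add a batch dimension
expl = analyze(x, Gradient(vgg))
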

Index

diff --git a/dev/generated/augmentations.ipynb b/dev/generated/augmentations.ipynb index 182cc10f..f9a7bca4 100644 --- a/dev/generated/augmentations.ipynb +++ b/dev/generated/augmentations.ipynb
[Regenerated notebook output cells: the embedded 28×28 heatmap outputs (text/plain RGB{Float64} array dumps and base64-encoded image/png data) changed only in their pixel values.]
diff --git a/dev/generated/augmentations/index.html b/dev/generated/augmentations/index.html index 12fdbf52..303c60dc 100644 --- a/dev/generated/augmentations/index.html +++ b/dev/generated/augmentations/index.html @@ -20,16 +20,16 @@ convert2image(MNIST, x)

Noise augmentation

The NoiseAugmentation wrapper computes explanations averaged over noisy inputs. Let's demonstrate this on the Gradient analyzer. First, we compute the heatmap of an explanation without augmentation:

analyzer = Gradient(model)
 heatmap(input, analyzer)

Now we wrap the analyzer in a NoiseAugmentation with 50 samples of noise. By default, the noise is sampled from a Gaussian distribution with mean 0 and standard deviation 1.

analyzer = NoiseAugmentation(Gradient(model), 50)
heatmap(input, analyzer)
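Conceptually, this is what the wrapper computes. The sketch below only illustrates the idea, it is not the package's internal implementation; it reuses model and input from above, and the names plain_gradient, noisy_vals and smoothed are chosen for this sketch:

using Statistics: mean

plain_gradient = Gradient(model)   # un-augmented analyzer
# average plain Gradient explanations over 50 noisy copies of the input;
# randn matches the default noise distribution (Gaussian, mean 0, std 1)
noisy_vals = [analyze(input .+ randn(Float32, size(input)), plain_gradient).val for _ in 1:50]
smoothed = mean(noisy_vals)        # roughly the val field of the augmented explanation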

Note that a larger sample size is desirable, as it leads to a smoother heatmap; however, this comes at the cost of longer computation time.
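For instance, with hypothetical sample counts:

analyzer_fast   = NoiseAugmentation(Gradient(model), 10)   # few samples: fast, but noisy heatmap
analyzer_smooth = NoiseAugmentation(Gradient(model), 200)  # many samples: smoother, roughly 20× the work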

We can also set the standard deviation of the Gaussian distribution:

analyzer = NoiseAugmentation(Gradient(model), 50, 0.1)
heatmap(input, analyzer)

When used with a Gradient analyzer, this is equivalent to SmoothGrad:

analyzer = SmoothGrad(model, 50)
heatmap(input, analyzer)

We can also use any distribution from Distributions.jl, for example Poisson noise with rate $\lambda=0.5$:

using Distributions
 
 analyzer = NoiseAugmentation(Gradient(model), 50, Poisson(0.5))
heatmap(input, analyzer)

It is also possible to define your own distributions or mixture distributions.
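For example, a mixture of two Gaussians can be passed just like any other distribution. This is a minimal sketch: the mixture weights and scales are arbitrary, and Distributions is assumed to be loaded as above:

mixture = MixtureModel([Normal(0.0, 0.1), Normal(0.0, 1.0)], [0.8, 0.2])
analyzer = NoiseAugmentation(Gradient(model), 50, mixture)
heatmap(input, analyzer)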

NoiseAugmentation can be combined with any analyzer type, for example LRP:

analyzer = NoiseAugmentation(LRP(model), 50)
heatmap(input, analyzer)

Integration augmentation

The InterpolationAugmentation wrapper computes explanations averaged over n steps of linear interpolation between the input and a reference input, which is set to zero(input) by default:

analyzer = InterpolationAugmentation(Gradient(model), 50)
heatmap(input, analyzer)
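Conceptually, the wrapper evaluates explanations along a straight path from the reference input to the input and averages them. The following is only a rough sketch of that path, not the package internals; n, xref and path are names chosen for the illustration:

n    = 50
xref = zero(input)                                                       # default reference input
path = [xref .+ t .* (input .- xref) for t in range(0f0, 1f0; length=n)]  # n interpolated inputs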


When used with a Gradient analyzer, this is equivalent to IntegratedGradients:

analyzer = IntegratedGradients(model, 50)
 heatmap(input, analyzer)

To select a different reference input, pass it to the analyze or heatmap function using the keyword argument input_ref. Note that this is an arbitrary example for the sake of demonstration.

matrix_of_ones = ones(Float32, size(input))
 
 analyzer = InterpolationAugmentation(Gradient(model), 50)
 heatmap(input, analyzer; input_ref=matrix_of_ones)

Once again, InterpolationAugmentation can be combined with any analyzer type, for example LRP:

analyzer = InterpolationAugmentation(LRP(model), 50)
heatmap(input, analyzer)

This page was generated using Literate.jl.

diff --git a/dev/generated/example/index.html b/dev/generated/example/index.html index d8feb48d..89e1ef13 100644 --- a/dev/generated/example/index.html +++ b/dev/generated/example/index.html @@ -44,4 +44,4 @@ heatmap(expl)

This heatmap shows us that the "upper loop" of the hand-drawn 9 has negative relevance with respect to the output neuron corresponding to digit 4!

Note

The output neuron can also be specified when calling heatmap:

heatmap(input, analyzer, 5)
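The same neuron index can also be passed to analyze directly; a small sketch, with the variable name expl_digit4 chosen here for illustration:

expl_digit4 = analyze(input, analyzer, 5)  # explanation w.r.t. output neuron 5 (digit 4)
heatmap(expl_digit4)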

Analyzing batches

ExplainableAI also supports explanations of input batches:

batchsize = 20
 xs, _ = MNIST(Float32, :test)[1:batchsize]
 batch = reshape(xs, 28, 28, 1, :) # reshape to WHCN format
expl = analyze(batch, analyzer);

This will return a single Explanation expl for the entire batch. Calling heatmap on expl will detect the batch dimension and return a vector of heatmaps.

heatmap(expl)
(a vector displayed as a row to save space)

For more information on heatmapping batches, refer to the heatmapping documentation.


This page was generated using Literate.jl.


diff --git a/dev/generated/heatmapping/index.html b/dev/generated/heatmapping/index.html index 289d5cbd..7500743a 100644 --- a/dev/generated/heatmapping/index.html +++ b/dev/generated/heatmapping/index.html @@ -27,4 +27,4 @@ batch = reshape(xs, 28, 28, 1, :); # reshape to WHCN format

The heatmap function automatically recognizes that the explanation is batched and returns a Vector of images:

heatmaps = heatmap(batch, analyzer)
(a vector displayed as a row to save space)

Images.jl's mosaic function can be used to display them in a grid:

mosaic(heatmaps; nrow=10)
Output type consistency

To obtain a singleton Vector containing a single heatmap for non-batched inputs, use the heatmap keyword argument unpack_singleton=false.
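A minimal sketch, reusing input and analyzer from above:

heatmap(input, analyzer; unpack_singleton=false)  # 1-element Vector holding the single heatmap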

Processing heatmaps

Heatmapping makes use of the Julia-based image processing ecosystem Images.jl.

If you want to further process heatmaps, you may benefit from reading about some fundamental conventions of this ecosystem that differ from how images are typically represented in OpenCV, MATLAB, ImageJ or Python.
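For example, a heatmap can be turned into a plain numeric array for downstream tools. This is only a sketch: channelview is part of the Images.jl ecosystem and assumed to be available, and the names hm and arr are illustrative:

using Images: channelview

hm  = heatmap(input, analyzer)                            # Matrix of RGB colorants, (height, width) order
arr = permutedims(Float32.(channelview(hm)), (2, 3, 1))   # H×W×3 Float32 array, channel-last layout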

Saving heatmaps

Since heatmaps are regular Images.jl images, they can be saved as such:

using FileIO
 
 img = heatmap(input, analyzer)
-save("heatmap.png", img)

This page was generated using Literate.jl.

+save("heatmap.png", img)

This page was generated using Literate.jl.

diff --git a/dev/generated/lrp/basics.ipynb b/dev/generated/lrp/basics.ipynb index 3dbaf4c4..efb117ec 100644 --- a/dev/generated/lrp/basics.ipynb +++ b/dev/generated/lrp/basics.ipynb @@ -292,7 +292,7 @@
[notebook output diff: the two printed 11-element Vector{Array{Float32}} results (execution counts 9 and 10) were regenerated; the long numeric dumps are omitted here]
diff --git a/dev/generated/lrp/basics/index.html b/dev/generated/lrp/basics/index.html index b50b93d7..7b87a9b6 100644 --- a/dev/generated/lrp/basics/index.html +++ b/dev/generated/lrp/basics/index.html @@ -118,29 +118,29 @@ expl = analyze(input, analyzer; layerwise_relevances=true) expl.extras.layerwise_relevances
11-element Vector{Array{Float32}}:
 ⋮ (the first ten layerwise relevance arrays are long numeric outputs that are regenerated on every documentation build; they are omitted here)
  [0.0; 0.0; … ; 0.0; 0.0;;]

Note that the layerwise relevances are only kept for layers in the outermost Chain of the model. When using our unflattened model, we only obtain three layerwise relevances, one for each chain in the model and the output relevance:

analyzer = LRP(model; flatten=false) # use unflattened model
 
 expl = analyze(input, analyzer; layerwise_relevances=true)
 expl.extras.layerwise_relevances
11-element Vector{Array{Float32}}:
- [-0.008434618 -0.0087917615 … -0.006658117 -0.0103683425; -0.005399833 0.007048681 … -0.00019784019 8.947f-5; … ; -0.00063613895 0.023806619 … -0.018958967 0.052235242; -0.028230084 -0.038751382 … -0.021814026 0.019569611;;; 6.940391f-5 -0.055338766 … -0.0054046577 0.00015729718; -0.0042838077 0.017410897 … 0.00034457076 -0.0008185627; … ; 0.08376578 -0.026342733 … 0.07045996 0.0232485; -0.067591034 0.0106505705 … -0.01688815 -0.023662219;;; -0.020857709 0.0017814997 … 0.0076425555 -0.006319415; -0.036541436 0.04484527 … 0.030378005 -0.0031489655; … ; -0.060475864 -0.024266085 … -0.019328346 0.03277545; 0.041433208 0.06520778 … 0.001454688 -0.019959737;;;;]
- [-0.009124702 0.01702636 … 0.019015193 0.00011621041; 0.011778463 -0.10073626 … -0.023475587 -0.002170433; … ; 0.025397623 -0.075008035 … -0.027888348 0.016553191; 0.0005152671 0.0054551815 … -0.031303097 0.0;;; -0.0 0.0 … 0.0 -0.0015496215; 0.0 -0.01532365 … -0.014000421 -0.0052202013; … ; -0.034362298 0.10837211 … -0.0044242283 0.036892127; 0.0 0.011976789 … 0.008331378 0.0073958267;;; -0.0032818422 -0.0060185497 … 0.001990876 -0.0; 0.0 0.0 … 0.0 -0.0012859384; … ; 0.0 -0.0051343436 … 0.016192982 0.00012771331; -0.0 -0.0033607255 … -0.0 0.0013308028;;; 0.0 -0.0 … 0.0 -0.0; 0.0 0.0 … 0.0 -0.0; … ; 0.0 0.0 … 0.0 0.0; -0.0 -0.0 … -0.0 0.0;;; 0.0 -0.0 … -0.0 -0.0; -0.0 0.0 … 0.0 -0.0; … ; -0.0 0.0 … -0.0 0.0; -0.0 0.0 … 0.0 -0.0;;; 0.00067219156 -0.0 … 0.0 -0.0; 0.0 -0.0 … -0.0 0.0; … ; 0.0 -0.0 … 0.0 0.0; -0.0030900831 -0.0 … -0.0 -0.0;;; -0.011425621 -0.0014169659 … -0.0039024735 0.00011354338; 0.012051551 0.0 … 0.0 0.0; … ; -0.004131242 -0.0 … -0.015868029 -0.0; -0.0 0.002145766 … -0.0 -0.00026961384;;; 0.0023086898 0.0 … -0.0 0.0017377839; -0.0074582584 0.0032651739 … 0.0032221267 -0.013166391; … ; 0.17036542 -0.006587705 … -0.013899546 0.019214738; -0.021441998 0.00019921201 … 0.026668038 0.0055386527;;;;]
- [0.0 0.0 … -0.006994063 0.0; 0.0 0.065133914 … 0.0 0.0; … ; -0.039265938 0.0 … 0.072890945 0.0; 0.0 0.0 … 0.0 0.0;;; -0.005849766 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 -0.027117642 … 0.004829404 0.0;;; 0.0 -0.017624486 … 0.0 -0.022686042; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.041983206; 0.0 0.0 … 0.0 0.0;;; 0.0 -0.0142010655 … 0.0 -0.007440478; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … -0.017357446 0.0; 0.0 0.02451702 … 0.0 0.0;;; 0.0 0.0 … 0.0 0.0; 0.0006645917 0.0 … 0.0 -0.03132602; … ; 0.002417998 0.0 … 0.0 0.12971243; 0.0 0.0 … 0.0 0.0;;; 0.014627337 0.0 … 0.0 0.0; 0.0 0.0 … 0.0019971684 0.0; … ; 0.0 0.0013121307 … 0.0 0.0; 0.0 0.0 … 0.0 0.0;;; 0.0 0.0 … 0.0 -0.0019767235; 0.0 0.00076813984 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 -0.020260103 … 0.0 0.0;;; 0.0 0.0 … 0.0 0.0016133194; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0;;;;]
- [0.065133914 0.057542376 … -0.052669626 -0.006994063; -0.09736592 -0.031344447 … 0.11998346 -0.0062667704; … ; 0.08423761 0.005468294 … -0.038981784 -0.012301215; -0.039265938 -0.1620077 … -0.40325817 0.072890945;;; -0.005849766 -0.01935943 … -0.018049 -0.0; 0.026579058 0.0057837716 … -0.024062892 -0.0; … ; -0.054943435 -0.088456236 … -0.15514043 -0.0; -0.027117642 -0.092793584 … 0.096221164 0.004829404;;; -0.017624486 -0.03989314 … -0.04772196 -0.022686042; -0.03389671 4.3591994f-5 … 0.0062493035 0.0017706014; … ; 0.19956005 0.028011875 … 0.03934841 -0.22338125; 0.0 -0.057622228 … 0.0339618 0.041983206;;; -0.0142010655 -0.005613217 … 0.01641016 -0.007440478; -0.028993793 0.071542375 … 0.003368103 -0.0017336558; … ; 0.027602276 -0.003763128 … 0.14563672 -0.04226257; 0.02451702 -0.0076049864 … -0.03496282 -0.017357446;;; 0.0006645917 -0.020304812 … 0.06820186 -0.03132602; -0.028878653 -0.12721388 … 0.045547143 -0.009368304; … ; -0.033640824 -0.15821296 … -0.2777062 -0.33040306; 0.002417998 0.11383526 … 0.28673404 0.12971243;;; 0.014627337 -0.00026000128 … -0.010426976 0.0019971684; 0.009488311 -0.0 … 0.0 -0.0; … ; -0.026787827 0.0 … -0.0 0.12240852; 0.001312131 -0.0 … -0.01718286 0.0;;; 0.0007681399 0.00065018825 … -0.012541232 -0.0019767235; -0.03208107 -0.0 … -0.0 -0.00039410422; … ; -0.035496842 0.0 … 0.0 -0.0; -0.020260103 0.0 … 0.02557262 -0.0;;; -0.0 0.0018354295 … 0.0 0.0016133194; 0.0 -0.0 … 0.0 0.0; … ; 0.0 -0.0 … 0.0 -0.0; 0.0 -0.0 … 0.0 -0.0;;;;]
- [0.0 0.0 … 0.0 -0.0; 0.0 0.0 … -0.0 -0.0; … ; 0.0 0.0 … -0.0 0.0; 0.002394198 0.0021396077 … 0.013127395 0.0;;; 0.0028685012 0.008558201 … 0.0026838956 0.0; 0.019459382 0.009310116 … -0.0025591326 0.0; … ; 0.004147363 0.050685942 … -0.0 0.0; 0.00041586507 0.0005925278 … -0.0 -0.0039712386;;; 0.0 -0.0 … -0.00030366756 0.0034621037; -0.0 0.0 … 0.0 -0.0; … ; -0.0 -0.0 … 0.0 -0.0; -0.0 -0.0 … 0.0 -0.0;;; … ;;; 0.0 -0.0 … 0.0 -0.0; -0.0 -0.0 … -0.0 -0.005486934; … ; 0.0 -0.0035835975 … -0.0 0.06235383; -0.0 -0.00010950161 … 0.0 -0.11290945;;; 0.005854754 -0.0 … 0.0 -0.0; -0.023126693 -0.0018030319 … -0.016019821 -0.0009386344; … ; -0.08595384 -0.055454675 … 0.21571699 0.005088198; -0.011670164 -0.0019005635 … 0.30584586 0.056922175;;; -0.0 -0.0 … 0.0 0.0054683913; 0.0003707027 -0.033430763 … -0.030546319 -0.0; … ; -0.0 -0.0028720447 … 0.0 -0.0; -0.0 -0.008918245 … 0.0 -0.008250677;;;;]
- [-0.0 -0.0 … -0.0 -0.0; -0.0 -0.0 … 0.0075888243 0.0019853194; … ; -0.0 -0.0 … 0.0077188965 -2.0360923f-5; -1.9381452f-5 -0.0058707255 … 0.0013304334 -0.004747312;;; -0.0 -0.0 … 0.0027907956 0.0065553086; -0.0 -0.0 … 0.00015521309 -0.005213246; … ; -0.0 -0.0 … -0.0 -0.0; -0.0 -0.0 … -0.0 -0.0;;; 0.0030363826 0.0 … 0.0 0.0; 0.0010305601 0.0 … 0.0 0.0; … ; 0.0 0.0 … -0.0005451668 0.0; 0.0 0.0 … 0.0 0.0;;; -0.0 -0.0 … -0.0 -0.0; -0.008004938 -0.06840828 … -0.0066572563 -0.010500898; … ; -0.011093389 -0.011474783 … -0.005531251 -0.0023499297; -0.0077636484 -0.017177561 … 0.8011713 -0.010586549;;; -0.01033935 -0.011665621 … 0.01539777 -0.010865278; -0.004101372 -0.02005488 … -0.019427849 -0.007425724; … ; -0.105005704 -0.024565447 … -0.07115922 -0.010254735; -0.09019728 -0.024195103 … 0.12101292 -0.005045867;;; 0.00035844403 0.0001607045 … 0.0003185435 0.0; 0.00035263566 0.00036315504 … 0.024171833 0.00027426888; … ; -0.03578145 -0.035529785 … 0.029016009 0.00022075372; 0.00037239393 0.0003707795 … 0.00035527145 0.00016806438;;; -5.537065f-5 0.001492225 … -0.0018433405 -0.014502349; -0.0 0.01116607 … 0.0065782135 0.01084608; … ; -0.0 -0.0 … -0.012944686 -0.011270186; -0.0 -0.0 … -0.0 -0.00015722847;;; 6.5449065f-5 0.00030984252 … 0.0 0.0001291895; 0.0006666293 0.00064124307 … 0.0 0.00045243182; … ; -0.037637398 0.06356955 … 2.7856324f-5 0.0002705486; 0.0006574021 0.0007274364 … 0.0004921977 0.0;;;;]
- [0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.005582211 0.0015637836; … ; 0.0 0.0 … 0.00028795615 -2.3979605f-5; 0.0003095935 -0.004354669 … 0.0018261069 -0.0031551365;;; 0.0 0.0 … 0.0038254447 0.008703316; 0.0 0.0 … 0.00016611457 -0.0019328643; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0;;; -0.0027743753 0.0 … 0.0 0.0; 0.00043489298 0.0 … 0.0 0.0; … ; 0.0 0.0 … -0.0013151305 0.0; 0.0 0.0 … 0.0 0.0;;; 0.0 0.0 … 0.0 0.0; -0.0063561485 -0.0061438372 … -0.002410306 -0.0007377573; … ; 0.002310994 0.005979384 … -0.00012198948 0.0009795162; -0.003048884 0.0010976149 … 0.009670294 0.013872173;;; -0.0026674622 -0.0010863321 … 0.0073339343 0.0028241724; 0.00012208564 0.009759786 … 0.0069127562 0.00054495345; … ; -0.018652882 -0.0113425525 … -0.010429148 -0.00084039953; -0.010489733 0.012946427 … 0.012696911 -0.0010396148;;; -0.0020952797 -8.645627f-5 … 0.0020122847 0.0; -0.0035948597 0.0011670915 … 0.0073432475 -0.0015028822; … ; -0.0031584222 -0.01089471 … 0.01822634 0.0012948432; 0.002916309 -0.0007726846 … 0.004927424 0.0001298937;;; -4.2341337f-5 0.002738909 … -0.0007706224 -0.0011256726; 0.0 0.006915389 … 0.0008067609 0.010244184; … ; 0.0 0.0 … -0.008204324 -0.007580291; 0.0 0.0 … 0.0 0.00032722086;;; 0.0 0.00089729385 … 0.0 5.676092f-5; 0.0026938403 -0.0061406875 … 0.0 0.0034330413; … ; -0.0099501815 0.0066408846 … 0.0 -0.0019379461; 0.011684146 0.0038140868 … 0.0027028976 0.0;;;;]
- [0.0; 0.0; … ; -0.0019379463; 0.0;;]
- [-0.038360585; 0.0; … ; 0.0; 0.0;;]
- [-0.038360585; 0.0; … ; 0.0; 0.0;;]
+ [0.0010207323 -0.007980485 … -0.032674737 0.0045772777; -0.027682293 0.0058941203 … 0.0020468947 -0.011886046; … ; 0.014307547 -0.0047217556 … -0.008261258 -0.0047789533; -4.8582344f-5 0.0030975891 … -0.0030999926 -0.0014275227;;; 0.0038287179 -0.0050959843 … -0.015875975 -0.0012506621; 0.01762196 -0.00019205933 … 0.000342701 -0.0021429104; … ; -0.0010821145 0.0074184453 … -0.018994732 -0.0015100081; 0.0066144764 -0.001667912 … -0.004199393 -0.00017773638;;; -0.0029024556 0.0061603324 … 0.005302045 0.0017092457; -0.020301988 -0.0023542568 … -0.011563395 0.03955199; … ; -0.005022351 -0.0042651542 … 0.0051177396 0.00030817188; -0.004116833 -0.0051480336 … -0.0026709978 0.00047814468;;;;]
+ [-0.0 -0.0022069407 … -0.0 0.002664379; 0.00010990416 -0.0032996142 … 0.0 0.0068608257; … ; 0.0 0.0 … 0.0003263809 0.0030069803; -0.0 -0.0009069561 … -0.0007290192 -0.0012759961;;; -0.001357918 -0.0024247447 … 0.0059249178 -0.00022268957; -0.0019146611 0.0 … 0.0 -0.0; … ; 0.0019866803 -0.0 … 0.00034997708 0.0; -0.00010038443 -0.0016282512 … -0.0 -0.0;;; -0.0049129566 -0.0033877972 … -0.011050789 -0.0022807335; 0.0066427714 -0.023294369 … 0.018099273 0.012962668; … ; -0.007587545 -0.005291812 … -0.01204115 -0.0075623565; -0.0015627532 -0.0022434099 … 0.0 -0.00039409686;;; -0.0006047327 0.0012370368 … 0.0048490893 -0.005150113; 0.0 -3.007865f-5 … 0.0 0.0; … ; -0.0 -0.00070633786 … -0.0010676244 -0.0; 0.0 0.0 … -0.0 -0.0;;; -0.0 0.0 … 0.0 0.0; 0.0 -0.0 … 0.0 -0.0; … ; 0.0 0.0 … 0.0 0.0; -0.0 0.0 … 0.0 0.0;;; 0.0039219316 -0.009974134 … 0.022232184 -0.0; 0.0013000884 -0.0025634805 … 0.004623608 0.0; … ; 0.0051482697 -0.0054312684 … 0.004514884 0.0; -0.0010540513 -0.0014582151 … -0.0033840511 0.0;;; -0.0004368335 0.0035060833 … 0.0 -0.000973588; 0.0015061196 0.00011707891 … 0.011551401 0.0; … ; 0.00193334 6.9175417f-6 … 0.0 0.0014759052; 0.0 0.0 … -0.0 -0.0;;; -0.0 -0.0009980901 … -0.00055147294 0.002898692; 0.0 -0.0006494175 … -0.0021454024 0.0; … ; 0.0 0.0027915372 … -0.0 -0.002086834; 8.737816f-5 -0.0 … -0.0 0.0;;;;]
+ [0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 -0.004800467;;; 0.0 0.0 … 0.0 0.0; 0.0 0.017522603 … 0.034752846 0.0; … ; 0.0 0.004675548 … -0.004731988 0.0; 0.0 0.0 … 0.0 0.0;;; -0.0158092 0.0 … 0.0033431738 0.0; 0.0 0.0 … 0.0 0.0; … ; -0.00095969904 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 -4.8689613f-5;;; 0.0 0.0 … 0.0037445093 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0;;; 0.0 0.0 … 0.0 0.0; 0.0036162864 0.0 … 0.013698082 0.0; … ; 0.0 0.01124425 … 0.0 0.0; 0.0 0.0 … 0.0027833695 0.0;;; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … -0.028024347 0.0; … ; 0.0 -0.00027473547 … 0.0 0.0; 0.0 0.0 … -0.014293882 0.0;;; 0.0 0.0 … 0.0 0.0; 0.0 -0.01207277 … -0.062140226 0.0; … ; 0.0 -0.010729361 … 0.007876189 0.0; 0.0 0.0 … 0.0 0.0;;; 0.0 -0.012971649 … 0.02372979 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 -0.00049811776 … 0.0 -0.02145152; 0.0 0.0 … 0.0 0.0;;;;]
+ [-0.0 0.0 … 0.013893512 0.0; -0.0 -0.0 … -0.0 -0.02704144; … ; 0.0 -0.0 … -0.0 0.0012966063; 0.0 -0.022321453 … -0.00035889685 -0.004800467;;; 0.017522603 -0.062322963 … 0.0762988 0.034752846; -0.017428866 -0.019996174 … 0.059957143 -0.119387954; … ; 0.026025044 0.048225258 … 0.00820133 0.034158085; 0.004675548 -0.010452151 … 0.015543055 -0.004731988;;; -0.0158092 0.0016721005 … -0.022262266 0.0033431738; -0.0007202163 -0.0 … -0.0 -0.0; … ; 0.010006575 0.0 … 0.0 -0.0; -0.00095969904 0.0016532672 … -0.0 -4.868964f-5;;; 0.0 -0.0028197714 … -0.0 0.0037445098; 0.0 -0.0 … -0.0 -0.0; … ; 0.0 0.0 … 0.0 -0.0; -0.0 0.0 … -0.0 0.0;;; 0.0036162864 0.011475153 … -0.007317215 0.013698082; -0.014237667 -0.057123855 … -0.016851736 0.005776418; … ; -0.04622821 0.067472324 … -0.018283978 -0.027103558; 0.01124425 -0.0018985006 … -0.004256531 0.0027833695;;; 0.0 -0.0 … 0.0021832837 -0.028024347; -0.0 -0.005130376 … 0.00027432645 0.014130047; … ; 0.0 -0.031428806 … -0.004049 -0.015983729; -0.00027473547 -0.0021309366 … -0.0048160055 -0.014293882;;; -0.01207277 -0.054163843 … -0.08658156 -0.062140226; -0.025754971 0.0016448617 … -0.08806112 0.04779901; … ; -0.012978482 -8.204502f-5 … -0.0064664073 -0.008652067; -0.010729361 -0.00642478 … 0.0023604718 0.007876189;;; -0.012971649 0.045052778 … 0.12304125 0.02372979; 0.014348112 -0.03575912 … -0.11106775 -0.07331434; … ; -0.021503977 0.058245074 … -0.039259996 -0.01815788; -0.00049811776 -0.019827712 … 0.0025082864 -0.02145152;;;;]
+ [-0.0 -0.0069106407 … 0.017630795 0.0053351102; 0.0 -0.0 … -0.0 -0.015785221; … ; 0.0 0.0 … 0.0 0.002176354; 0.0 0.0 … 0.0 0.00015630218;;; 0.0 0.0 … 0.0027944427 -0.0020671915; 0.0 0.0 … -0.0 0.009794469; … ; 0.0 0.0 … 0.0 0.004196387; 0.0 -0.0 … 0.0 -0.0011542102;;; -0.0 0.0 … 0.0 -0.0; 0.0 -0.0 … -0.0 -0.0; … ; -0.0 0.0 … 0.0 0.0; 0.0 -0.0 … 0.0 -0.0;;; … ;;; -0.0 -0.0025528236 … 0.008244845 0.014012203; -0.034055445 0.0057157986 … -0.08342037 -0.0433095; … ; 0.01988188 -0.0029564125 … 0.0509651 0.008807763; 0.0025244674 -0.010957105 … 0.033860177 0.0016514664;;; -0.0 0.0 … -0.0 0.009983332; -0.0 0.0 … 0.0 0.0; … ; -0.0 0.0 … -0.0 0.0; -0.0 0.0 … 0.0 0.0;;; 0.0015559589 -0.098932415 … -0.031985477 -0.008361466; -0.013876278 0.006019482 … -0.0130204195 0.019404432; … ; -0.002724205 -0.025462287 … 0.028628252 -0.011128603; -0.0 -0.0 … -0.0 -0.00043414664;;;;]
+ [0.0 0.0007313333 … 0.00023889716 0.000802325; -0.1229928 0.01673135 … 0.015141603 0.00059648673; … ; -0.0012434542 0.0043078335 … -0.02793356 0.0008545891; 0.00071591506 0.0 … 0.0 0.0;;; -0.0007877251 -0.00011291591 … -0.0 -0.0; -0.001261818 0.0066601327 … -0.0 0.036132533; … ; -4.8772443f-5 -0.0 … -0.0 -0.0; -0.0 -0.0 … -0.0 -0.0;;; -0.019425998 -0.019744134 … -0.0045461487 -0.0; -0.016877916 -0.030849254 … -0.010450846 -0.009466191; … ; -0.028659314 -0.033238385 … -0.021658922 -0.011390429; -0.00831633 -0.014118344 … -0.0076080714 -0.0053279707;;; -0.009999247 0.00022019238 … 0.00041797463 0.00027496935; -0.0036646854 -0.0068193427 … 0.01696281 -0.017905114; … ; 0.0 0.0 … 9.77087f-5 0.00024806437; 0.00019484932 0.00030192756 … 0.00029398574 1.889332f-5;;; 0.0011164722 -0.0 … -0.0 -0.0; -0.0 -0.0 … -0.0 -0.0; … ; 0.0015054727 -0.00095814624 … 0.00077497464 0.00029341725; -0.00924244 -0.0033535266 … 0.0032362689 -0.0;;; -0.0 -0.0 … -0.0 -0.0; -0.0 -0.0 … -0.0 -0.0; … ; -0.0 -0.0 … -0.0 0.00167709; -3.8251355f-7 0.0008236506 … -0.003184022 0.002083247;;; -0.0 -0.0014248195 … -0.0042323847 -0.0; -0.0025525352 -0.057414826 … -0.002824782 -0.0028911857; … ; -0.00285632 -0.045944743 … 0.0483161 0.046754297; -0.001428533 -0.040429372 … -0.0058609266 -0.023800116;;; -0.0 -0.0 … -0.0 -0.0; -0.0 -0.0 … -0.0 -0.0; … ; -0.0 -0.0 … -0.0 -0.0; -0.0 -0.0 … -0.0 0.00052386103;;;;]
+ [0.0 0.0028886402 … -2.7521888f-5 -0.00186171; -0.008642377 0.0065249153 … 0.0047177873 -0.00036978873; … ; -0.0020629764 0.00058060314 … -0.012415658 0.0029499922; 0.0018105485 0.0 … 0.0 0.0;;; 0.00029362284 -0.000107320295 … 0.0 0.0; 0.0003258799 0.0020408833 … 0.0 0.0013276662; … ; -1.4509937f-5 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0;;; 0.011150454 0.019498734 … -0.0021007138 0.0; -0.0003544218 0.0077359406 … -0.003043904 0.004699904; … ; -0.013695506 0.009964922 … -0.006913719 0.0026891092; 0.0008233385 -0.00017595408 … 0.0030297842 0.0024972649;;; -0.0015719595 -0.004308788 … -0.0014717263 -0.005392498; -0.0013754313 -0.0039540417 … 0.005317406 -0.0048811506; … ; 0.0 0.0 … -0.00010810665 0.00040118356; -0.00029156238 -0.0029937886 … -0.0024569104 0.0;;; 0.0014592897 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0014017643 0.002015151 … 0.00056676567 0.0003718638; -0.0071956627 -0.0018875207 … 0.003443957 0.0;;; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0012142856; 4.6323335f-6 0.0004671621 … -0.006081901 -0.0001528387;;; 0.0 0.001480141 … -0.0058481935 0.0; -0.0052959304 -0.004699109 … 0.0014950532 0.003308977; … ; -0.00024809898 -0.010379411 … 0.023208247 0.0042587984; 0.0006641177 -0.008176546 … -4.94587f-5 -0.0010696442;;; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.00054987177;;;;]
+ [0.0; -0.008642377; … ; 0.0; 0.0005498718;;]
+ [0.0; 0.0; … ; 0.024125015; 0.0;;]
+ [0.0073964978; 0.0; … ; 0.024125015; 0.0011199445;;]
  [0.0; 0.0; … ; 0.0; 0.0;;]

Performance tips

Using LRP without a GPU

Since ExplainableAI.jl's LRP implementation makes use of Tullio.jl, analysis on the CPU can be accelerated by loading a package that provides a faster Tullio.jl backend.

This only requires loading the LoopVectorization.jl package before ExplainableAI.jl:

using LoopVectorization
-using ExplainableAI

This page was generated using Literate.jl.

+using ExplainableAI

This page was generated using Literate.jl.

diff --git a/dev/generated/lrp/composites/index.html b/dev/generated/lrp/composites/index.html index 61451fb0..155704ae 100644 --- a/dev/generated/lrp/composites/index.html +++ b/dev/generated/lrp/composites/index.html @@ -192,4 +192,4 @@ Dropout(0.5) => PassRule(), Dense(512 => 100, relu) => EpsilonRule{Float32}(1.0f-6), ), -)

This page was generated using Literate.jl.

+)

This page was generated using Literate.jl.

diff --git a/dev/generated/lrp/custom_layer/index.html b/dev/generated/lrp/custom_layer/index.html index fc1fddef..0a8fee60 100644 --- a/dev/generated/lrp/custom_layer/index.html +++ b/dev/generated/lrp/custom_layer/index.html @@ -65,4 +65,4 @@ LRP(model; skip_checks=true)
LRP(
   Dense(100 => 20, unknown_activation)                    => ZeroRule(),
   Main.MyDoublingLayer() => ZeroRule(),
-)

Instead of throwing the usual ERROR: Unknown layer or activation function found in model, the LRP analyzer was created without having to register either the layer UnknownLayer or the activation function unknown_activation.


This page was generated using Literate.jl.

+)

Instead of throwing the usual ERROR: Unknown layer or activation function found in model, the LRP analyzer was created without having to register either the layer UnknownLayer or the activation function unknown_activation.


This page was generated using Literate.jl.

diff --git a/dev/generated/lrp/custom_rules/index.html b/dev/generated/lrp/custom_rules/index.html index 54ed4a10..c56a7dc5 100644 --- a/dev/generated/lrp/custom_rules/index.html +++ b/dev/generated/lrp/custom_rules/index.html @@ -52,4 +52,4 @@ │ calls │ calls ┌─────────▼─────────┐ ┌─────────▼─────────┐ │ modify_parameters │ │ modify_parameters │ -└───────────────────┘ └───────────────────┘

Therefore modify_layer should only be extended for a specific rule and a specific layer type.
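As an illustrative sketch, such an extension dispatches on both the rule and the layer type; the rule name MyRule is hypothetical, and Dense is just one possible target layer:

using ExplainableAI, Flux
import ExplainableAI: modify_layer

struct MyRule <: AbstractLRPRule end

# Only the combination of MyRule and Dense layers is affected; all other
# rule-layer pairs keep the default modify_weight / modify_bias behavior.
function modify_layer(::MyRule, layer::Dense)
    return Dense(2 .* layer.weight, layer.bias, layer.σ)
end
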

Advanced LRP rules

To implement custom LRP rules that require more than modify_layer, modify_input and modify_denominator, take a look at the LRP developer documentation.


This page was generated using Literate.jl.

+└───────────────────┘ └───────────────────┘

Therefore modify_layer should only be extended for a specific rule and a specific layer type.

Advanced LRP rules

To implement custom LRP rules that require more than modify_layer, modify_input and modify_denominator, take a look at the LRP developer documentation.


This page was generated using Literate.jl.

diff --git a/dev/index.html b/dev/index.html index 2ad719d6..da1a4799 100644 --- a/dev/index.html +++ b/dev/index.html @@ -1,2 +1,2 @@ -Home · ExplainableAI.jl

ExplainableAI.jl

Explainable AI in Julia using Flux.jl.

Installation

To install this package and its dependencies, open the Julia REPL and run

julia> ]add ExplainableAI

Manual

General usage

LRP

API reference

General

LRP

+Home · ExplainableAI.jl

ExplainableAI.jl

Explainable AI in Julia using Flux.jl.

Installation

To install this package and its dependencies, open the Julia REPL and run

julia> ]add ExplainableAI

Manual

General usage

LRP

API reference

General

LRP

diff --git a/dev/lrp/api/index.html b/dev/lrp/api/index.html index 96268e1b..4fc6edea 100644 --- a/dev/lrp/api/index.html +++ b/dev/lrp/api/index.html @@ -1,17 +1,17 @@ -LRP · ExplainableAI.jl

LRP analyzer

Refer to LRP for documentation on the LRP analyzer.

LRP rules

ExplainableAI.ZeroRuleType
ZeroRule()

LRP-$0$ rule. Commonly used on upper layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i \frac{W_{ij}a_j^k}{\sum_l W_{il}a_l^k+b_i} R_i^{k+1}\]

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
source
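As a quick usage sketch, rule instances are passed to the LRP analyzer, one per layer; the model below is purely hypothetical:

using ExplainableAI, Flux

model = Chain(Dense(784 => 100, relu), Dense(100 => 10))  # hypothetical two-layer model
analyzer = LRP(model, [ZeroRule(), ZeroRule()])           # one rule per layer
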
ExplainableAI.EpsilonRuleType
EpsilonRule([epsilon=1.0e-6])

LRP-$ϵ$ rule. Commonly used on middle layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{W_{ij}a_j^k}{\epsilon +\sum_{l}W_{il}a_l^k+b_i} R_i^{k+1}\]

Optional arguments

  • epsilon: Optional stabilization parameter, defaults to 1.0e-6.

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
source
ExplainableAI.GammaRuleType
GammaRule([gamma=0.25])

LRP-$γ$ rule. Commonly used on lower layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{(W_{ij}+\gamma W_{ij}^+)a_j^k}{\sum_l(W_{il}+\gamma W_{il}^+)a_l^k+(b_i+\gamma b_i^+)} R_i^{k+1}\]

Optional arguments

  • gamma: Optional multiplier for added positive weights, defaults to 0.25.

References

  • G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
source
ExplainableAI.WSquareRuleType
WSquareRule()

LRP-$w²$ rule. Commonly used on the first layer when values are unbounded.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{W_{ij}^2}{\sum_l W_{il}^2} R_i^{k+1}\]

References

  • G. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition
source
ExplainableAI.FlatRuleType
FlatRule()

LRP-Flat rule. Similar to the WSquareRule, but with all weights set to one and all bias terms set to zero.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{1}{\sum_l 1} R_i^{k+1} = \sum_i\frac{1}{n_i} R_i^{k+1}\]

where $n_i$ is the number of input neurons connected to the output neuron at index $i$.

References

  • S. Lapuschkin et al., Unmasking Clever Hans predictors and assessing what machines really learn
source
ExplainableAI.AlphaBetaRuleType
AlphaBetaRule([alpha=2.0, beta=1.0])

LRP-$αβ$ rule. Weights positive and negative contributions according to the parameters alpha and beta respectively. The difference $α-β$ must be equal to one. Commonly used on lower layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\left(
+LRP · ExplainableAI.jl

LRP analyzer

Refer to LRP for documentation on the LRP analyzer.

LRP rules

ExplainableAI.ZeroRuleType
ZeroRule()

LRP-$0$ rule. Commonly used on upper layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i \frac{W_{ij}a_j^k}{\sum_l W_{il}a_l^k+b_i} R_i^{k+1}\]

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
source
ExplainableAI.EpsilonRuleType
EpsilonRule([epsilon=1.0e-6])

LRP-$ϵ$ rule. Commonly used on middle layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{W_{ij}a_j^k}{\epsilon +\sum_{l}W_{il}a_l^k+b_i} R_i^{k+1}\]

Optional arguments

  • epsilon: Optional stabilization parameter, defaults to 1.0e-6.

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
source
ExplainableAI.GammaRuleType
GammaRule([gamma=0.25])

LRP-$γ$ rule. Commonly used on lower layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{(W_{ij}+\gamma W_{ij}^+)a_j^k}{\sum_l(W_{il}+\gamma W_{il}^+)a_l^k+(b_i+\gamma b_i^+)} R_i^{k+1}\]

Optional arguments

  • gamma: Optional multiplier for added positive weights, defaults to 0.25.

References

  • G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
source
ExplainableAI.WSquareRuleType
WSquareRule()

LRP-$w²$ rule. Commonly used on the first layer when values are unbounded.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{W_{ij}^2}{\sum_l W_{il}^2} R_i^{k+1}\]

References

  • G. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition
source
ExplainableAI.FlatRuleType
FlatRule()

LRP-Flat rule. Similar to the WSquareRule, but with all weights set to one and all bias terms set to zero.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{1}{\sum_l 1} R_i^{k+1} = \sum_i\frac{1}{n_i} R_i^{k+1}\]

where $n_i$ is the number of input neurons connected to the output neuron at index $i$.

References

  • S. Lapuschkin et al., Unmasking Clever Hans predictors and assessing what machines really learn
source
ExplainableAI.AlphaBetaRuleType
AlphaBetaRule([alpha=2.0, beta=1.0])

LRP-$αβ$ rule. Weights positive and negative contributions according to the parameters alpha and beta respectively. The difference $α-β$ must be equal to one. Commonly used on lower layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\left(
    \alpha\frac{\left(W_{ij}a_j^k\right)^+}{\sum_l\left(W_{il}a_l^k+b_i\right)^+}
    -\beta\frac{\left(W_{ij}a_j^k\right)^-}{\sum_l\left(W_{il}a_l^k+b_i\right)^-}
\right) R_i^{k+1}\]

Optional arguments

  • alpha: Multiplier for the positive output term, defaults to 2.0.
  • beta: Multiplier for the negative output term, defaults to 1.0.

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
  • G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
source
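For illustration, any parameter pair satisfying alpha - beta = 1 can be passed positionally (a sketch):

AlphaBetaRule()             # defaults: alpha = 2.0, beta = 1.0
AlphaBetaRule(1.5f0, 0.5f0) # also valid, since 1.5 - 0.5 == 1
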
ExplainableAI.ZPlusRuleType
ZPlusRule()

LRP-$z⁺$ rule. Commonly used on lower layers.

Equivalent to AlphaBetaRule(1.0f0, 0.0f0), but slightly faster. See also AlphaBetaRule.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{\left(W_{ij}a_j^k\right)^+}{\sum_l\left(W_{il}a_l^k+b_i\right)^+} R_i^{k+1}\]

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
  • G. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition
source
ExplainableAI.ZBoxRuleType
ZBoxRule(low, high)

LRP-$zᴮ$-rule. Commonly used on the first layer for pixel input.

The parameters low and high should be set to the lower and upper bounds of the input features, e.g. 0.0 and 1.0 for raw image data. It is also possible to provide two arrays that match the input size.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k=\sum_i \frac{W_{ij}a_j^k - W_{ij}^{+}l_j - W_{ij}^{-}h_j}{\sum_l W_{il}a_l^k+b_i - \left(W_{il}^{+}l_l+b_i^{+}\right) - \left(W_{il}^{-}h_l+b_i^{-}\right)} R_i^{k+1}\]

References

  • G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
source
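A minimal usage sketch, assuming image data preprocessed to the range [0, 1]:

rule = ZBoxRule(0.0f0, 1.0f0)  # bounds for inputs scaled to [0, 1]
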
ExplainableAI.PassRuleType
PassRule()

Pass-through rule. Passes relevance through to the lower layer.

Supports layers with constant input and output shapes, e.g. reshaping layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = R_j^{k+1}\]

source
ExplainableAI.GeneralizedGammaRuleType
GeneralizedGammaRule([gamma=0.25])

Generalized LRP-$γ$ rule. Can be used on layers with leakyrelu activation functions.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac
+\right) R_i^{k+1}\]

Optional arguments

  • alpha: Multiplier for the positive output term, defaults to 2.0.
  • beta: Multiplier for the negative output term, defaults to 1.0.

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
  • G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
source
ExplainableAI.ZPlusRuleType
ZPlusRule()

LRP-$z⁺$ rule. Commonly used on lower layers.

Equivalent to AlphaBetaRule(1.0f0, 0.0f0), but slightly faster. See also AlphaBetaRule.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac{\left(W_{ij}a_j^k\right)^+}{\sum_l\left(W_{il}a_l^k+b_i\right)^+} R_i^{k+1}\]

References

  • S. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
  • G. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition
source
ExplainableAI.ZBoxRuleType
ZBoxRule(low, high)

LRP-$zᴮ$-rule. Commonly used on the first layer for pixel input.

The parameters low and high should be set to the lower and upper bounds of the input features, e.g. 0.0 and 1.0 for raw image data. It is also possible to provide two arrays that match the input size.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k=\sum_i \frac{W_{ij}a_j^k - W_{ij}^{+}l_j - W_{ij}^{-}h_j}{\sum_l W_{il}a_l^k+b_i - \left(W_{il}^{+}l_l+b_i^{+}\right) - \left(W_{il}^{-}h_l+b_i^{-}\right)} R_i^{k+1}\]

References

  • G. Montavon et al., Layer-Wise Relevance Propagation: An Overview
source
ExplainableAI.PassRuleType
PassRule()

Pass-through rule. Passes relevance through to the lower layer.

Supports layers with constant input and output shapes, e.g. reshaping layers.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = R_j^{k+1}\]

source
ExplainableAI.GeneralizedGammaRuleType
GeneralizedGammaRule([gamma=0.25])

Generalized LRP-$γ$ rule. Can be used on layers with leakyrelu activation functions.

Definition

Propagates relevance $R^{k+1}$ at layer output to $R^k$ at layer input according to

\[R_j^k = \sum_i\frac {(W_{ij}+\gamma W_{ij}^+)a_j^+ +(W_{ij}+\gamma W_{ij}^-)a_j^-} {\sum_l(W_{il}+\gamma W_{il}^+)a_j^+ +(W_{il}+\gamma W_{il}^-)a_j^- +(b_i+\gamma b_i^+)} I(z_k>0) \cdot R^{k+1}_i +\sum_i\frac {(W_{ij}+\gamma W_{ij}^-)a_j^+ +(W_{ij}+\gamma W_{ij}^+)a_j^-} {\sum_l(W_{il}+\gamma W_{il}^-)a_j^+ +(W_{il}+\gamma W_{il}^+)a_j^- +(b_i+\gamma b_i^-)} -I(z_k<0) \cdot R^{k+1}_i\]

Optional arguments

  • gamma: Optional multiplier for added positive weights, defaults to 0.25.

References

  • L. Andéol et al., Learning Domain Invariant Representations by Joint Wasserstein Distance Minimization
source

For manual rule assignment, use ChainTuple and ParallelTuple, matching the model structure:
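A minimal sketch of such a manual assignment, assuming a hypothetical model consisting of two nested Chains with three layers each:

using ExplainableAI

# `model` is assumed to be a Flux Chain of two nested Chains with three layers each
rules = ChainTuple(
    ChainTuple(ZPlusRule(), ZPlusRule(), PassRule()),
    ChainTuple(PassRule(), EpsilonRule(), EpsilonRule()),
)
analyzer = LRP(model, rules; flatten=false)  # keep the nested structure intact
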

Composites

Applying composites

ExplainableAI.CompositeType
Composite(primitives...)
+I(z_k<0) \cdot R^{k+1}_i\]

Optional arguments

  • gamma: Optional multiplier for added positive weights, defaults to 0.25.

References

  • L. Andéol et al., Learning Domain Invariant Representations by Joint Wasserstein Distance Minimization
source

For manual rule assignment, use ChainTuple and ParallelTuple, matching the model structure:

Composites

Applying composites

ExplainableAI.CompositeType
Composite(primitives...)
 Composite(default_rule, primitives...)

Automatically constructs a list of LRP-rules by sequentially applying composite primitives.

Primitives

To apply a single rule, use:

To apply a set of rules to layers based on their type, use:

Example

Using a VGG11 model:

julia> composite = Composite(
            GlobalTypeMap(
                ConvLayer => AlphaBetaRule(),
@@ -44,7 +44,7 @@
   Dense(4096 => 4096, relu)             => EpsilonRule{Float32}(1.0f-6),
   Dropout(0.5)                          => PassRule(),
   Dense(4096 => 1000)                   => EpsilonRule{Float32}(1.0f-6),
-)
source
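As a usage sketch, a composite can be passed directly to the LRP constructor, which derives per-layer rules from the model structure; here, model is an assumed Flux model:

composite = EpsilonPlusFlat()      # any of the presets listed below works here
analyzer  = LRP(model, composite)  # rules are matched to the layers of `model`
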

Composite primitives

Mapping layers to rules

Composite primitives that apply a single rule:

ExplainableAI.LayerMapType
LayerMap(index, rule)

Composite primitive that maps an LRP-rule to all layers in the model at the given index. The index can either be an integer or a tuple of integers to map a rule to a specific layer in nested Flux Chains.

See show_layer_indices to print layer indices and Composite for an example.

source
ExplainableAI.RangeMapType
RangeMap(range, rule)

Composite primitive that maps an LRP-rule to the specified positional range of layers in the model.

See Composite for an example.

source

To apply LayerMap to nested Flux Chains or Parallel layers, make use of show_layer_indices:
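For example, a sketch in which the printed indices and the index (2, 3) are assumptions that depend on the model:

show_layer_indices(model)                               # prints an index for every nested layer
composite = Composite(LayerMap((2, 3), EpsilonRule()))  # map a rule to one specific nested layer
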

Mapping layers to rules based on type

Composite primitives that apply rules based on the layer type:

ExplainableAI.RangeTypeMapType
RangeTypeMap(range, map)

Composite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the specified range of layers in the model.

See Composite for an example.

source
ExplainableAI.FirstNTypeMapType
FirstNTypeMap(n, map)

Composite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the first n layers in the model.

See Composite for an example.

source

Union types for composites

The following exported union types can be used to define TypeMaps:

Composite presets

ExplainableAI.EpsilonGammaBoxFunction
EpsilonGammaBox(low, high; [epsilon=1.0f-6, gamma=0.25f0])

Composite using the following primitives:

julia> EpsilonGammaBox(-3.0f0, 3.0f0)
+)
source

Composite primitives

Mapping layers to rules

Composite primitives that apply a single rule:

ExplainableAI.LayerMapType
LayerMap(index, rule)

Composite primitive that maps an LRP-rule to all layers in the model at the given index. The index can either be an integer or a tuple of integers to map a rule to a specific layer in nested Flux Chains.

See show_layer_indices to print layer indices and Composite for an example.

source
ExplainableAI.RangeMapType
RangeMap(range, rule)

Composite primitive that maps an LRP-rule to the specified positional range of layers in the model.

See Composite for an example.

source

To apply LayerMap to nested Flux Chains or Parallel layers, make use of show_layer_indices:

Mapping layers to rules based on type

Composite primitives that apply rules based on the layer type:

ExplainableAI.RangeTypeMapType
RangeTypeMap(range, map)

Composite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the specified range of layers in the model.

See Composite for an example.

source
ExplainableAI.FirstNTypeMapType
FirstNTypeMap(n, map)

Composite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the first n layers in the model.

See Composite for an example.

source

Union types for composites

The following exported union types can be used to define TypeMaps:

Composite presets

ExplainableAI.EpsilonGammaBoxFunction
EpsilonGammaBox(low, high; [epsilon=1.0f-6, gamma=0.25f0])

Composite using the following primitives:

julia> EpsilonGammaBox(-3.0f0, 3.0f0)
 Composite(
   GlobalTypeMap(  # all layers
     Flux.Conv               => ExplainableAI.GammaRule{Float32}(0.25f0),
@@ -64,7 +64,7 @@
     Flux.ConvTranspose => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),
     Flux.CrossCor      => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),
  ),
-)
source
ExplainableAI.EpsilonPlusFunction
EpsilonPlus(; [epsilon=1.0f-6])

Composite using the following primitives:

julia> EpsilonPlus()
 Composite(
   GlobalTypeMap(  # all layers
     Flux.Conv               => ExplainableAI.ZPlusRule(),
@@ -79,7 +79,7 @@
     typeof(MLUtils.flatten) => ExplainableAI.PassRule(),
     typeof(identity)        => ExplainableAI.PassRule(),
  ),
-)
source
ExplainableAI.EpsilonAlpha2Beta1Function
EpsilonAlpha2Beta1(; [epsilon=1.0f-6])

Composite using the following primitives:

julia> EpsilonAlpha2Beta1()
 Composite(
   GlobalTypeMap(  # all layers
     Flux.Conv               => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),
@@ -94,7 +94,7 @@
     typeof(MLUtils.flatten) => ExplainableAI.PassRule(),
     typeof(identity)        => ExplainableAI.PassRule(),
  ),
-)
source
ExplainableAI.EpsilonPlusFlatFunction
EpsilonPlusFlat(; [epsilon=1.0f-6])

Composite using the following primitives:

julia> EpsilonPlusFlat()
 Composite(
   GlobalTypeMap(  # all layers
     Flux.Conv               => ExplainableAI.ZPlusRule(),
@@ -115,7 +115,7 @@
     Flux.CrossCor      => ExplainableAI.FlatRule(),
     Flux.Dense         => ExplainableAI.FlatRule(),
  ),
-)
source
ExplainableAI.EpsilonAlpha2Beta1FlatFunction
EpsilonAlpha2Beta1Flat(; [epsilon=1.0f-6])

Composite using the following primitives:

julia> EpsilonAlpha2Beta1Flat()
 Composite(
   GlobalTypeMap(  # all layers
     Flux.Conv               => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),
@@ -136,7 +136,7 @@
     Flux.CrossCor      => ExplainableAI.FlatRule(),
     Flux.Dense         => ExplainableAI.FlatRule(),
  ),
-)
source

Custom rules

These utilities can be used to define custom rules without writing boilerplate code. To extend these functions, explicitly import them:
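A minimal sketch of such a custom rule; the name MyGammaRule is illustrative:

using ExplainableAI
using Flux: relu
import ExplainableAI: modify_parameters

struct MyGammaRule <: AbstractLRPRule end

# Emphasize positive parameters, mirroring the behavior of the built-in GammaRule
modify_parameters(::MyGammaRule, param) = param + 0.25f0 * relu.(param)
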

ExplainableAI.modify_parametersFunction
modify_parameters(rule, parameter)

Modify parameters before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
+)
source

Custom rules

These utilities can be used to define custom rules without writing boilerplate code. To extend these functions, explicitly import them:

ExplainableAI.modify_parametersFunction
modify_parameters(rule, parameter)

Modify parameters before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
 │              modify_layer               │
 └─────────┬─────────────────────┬─────────┘
           │ calls               │ calls
@@ -146,7 +146,7 @@
           │ calls               │ calls
 ┌─────────▼─────────┐ ┌─────────▼─────────┐
 │ modify_parameters │ │ modify_parameters │
-└───────────────────┘ └───────────────────┘
source
ExplainableAI.modify_weightFunction
modify_weight(rule, weight)

Modify layer weights before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
+└───────────────────┘ └───────────────────┘
source
ExplainableAI.modify_weightFunction
modify_weight(rule, weight)

Modify layer weights before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
 │              modify_layer               │
 └─────────┬─────────────────────┬─────────┘
           │ calls               │ calls
@@ -156,7 +156,7 @@
           │ calls               │ calls
 ┌─────────▼─────────┐ ┌─────────▼─────────┐
 │ modify_parameters │ │ modify_parameters │
-└───────────────────┘ └───────────────────┘
source
ExplainableAI.modify_biasFunction
modify_bias(rule, bias)

Modify layer bias before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
+└───────────────────┘ └───────────────────┘
source
ExplainableAI.modify_biasFunction
modify_bias(rule, bias)

Modify layer bias before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
 │              modify_layer               │
 └─────────┬─────────────────────┬─────────┘
           │ calls               │ calls
@@ -166,7 +166,7 @@
           │ calls               │ calls
 ┌─────────▼─────────┐ ┌─────────▼─────────┐
 │ modify_parameters │ │ modify_parameters │
-└───────────────────┘ └───────────────────┘
source
ExplainableAI.modify_layerFunction
modify_layer(rule, layer)

Modify layer before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
+└───────────────────┘ └───────────────────┘
source
ExplainableAI.modify_layerFunction
modify_layer(rule, layer)

Modify layer before computing the relevance.

Note

Use of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.

The default call structure looks as follows:

┌─────────────────────────────────────────┐
 │              modify_layer               │
 └─────────┬─────────────────────┬─────────┘
           │ calls               │ calls
@@ -176,6 +176,6 @@
           │ calls               │ calls
 ┌─────────▼─────────┐ ┌─────────▼─────────┐
 │ modify_parameters │ │ modify_parameters │
-└───────────────────┘ └───────────────────┘
source

Compatibility settings:

ExplainableAI.LRP_CONFIG.supports_layerFunction
LRP_CONFIG.supports_layer(layer)

Check whether LRP can be used on a layer or a Chain. To extend LRP to your own layers, define:

LRP_CONFIG.supports_layer(::MyLayer) = true          # for structs
-LRP_CONFIG.supports_layer(::typeof(mylayer)) = true  # for functions
source
ExplainableAI.LRP_CONFIG.supports_activationFunction
LRP_CONFIG.supports_activation(σ)

Check whether LRP can be used on a given activation function. To extend LRP to your own activation functions, define:

LRP_CONFIG.supports_activation(::typeof(myactivation)) = true  # for functions
-LRP_CONFIG.supports_activation(::MyActivation) = true          # for structs
source

Index

+└───────────────────┘ └───────────────────┘
source

Compatibility settings:

ExplainableAI.LRP_CONFIG.supports_layerFunction
LRP_CONFIG.supports_layer(layer)

Check whether LRP can be used on a layer or a Chain. To extend LRP to your own layers, define:

LRP_CONFIG.supports_layer(::MyLayer) = true          # for structs
+LRP_CONFIG.supports_layer(::typeof(mylayer)) = true  # for functions
source
ExplainableAI.LRP_CONFIG.supports_activationFunction
LRP_CONFIG.supports_activation(σ)

Check whether LRP can be used on a given activation function. To extend LRP to your own activation functions, define:

LRP_CONFIG.supports_activation(::typeof(myactivation)) = true  # for functions
+LRP_CONFIG.supports_activation(::MyActivation) = true          # for structs
source

Index

diff --git a/dev/lrp/developer/index.html b/dev/lrp/developer/index.html index 667aaaf2..28a59c92 100644 --- a/dev/lrp/developer/index.html +++ b/dev/lrp/developer/index.html @@ -8,13 +8,13 @@ s_{i} = R_{i}^{k+1} / (z_{i} + \epsilon) & \text{(Step 2)} \\[0.5em] c_{j} = \sum_i \rho(W_{ij}) \; s_{i} & \text{(Step 3)} \\[0.5em] R_{j}^{k} = a_{j}^{k} c_{j} & \text{(Step 4)} -\end{array}\]

To compute step 1, we first create a modified layer, applying $\rho$ to the weights and biases and replacing the activation function with the identity function. The vector $z$ is then computed using a forward pass through the modified layer. It has the same dimensionality as $R^{k+1}$ and $a^{k+1}$.

Step 2 is an element-wise division of $R^{k+1}$ by $z$. To avoid division by zero, a small constant $\epsilon$ is added to $z$ when necessary.

Step 3 is trivial for fully connected layers, as $\rho(W)$ corresponds to the weight matrix of the modified layer. For other types of linear layers, however, the implementation is more involved: A naive approach would be to construct a large matrix $W$ that corresponds to the affine transformation $Wx+b$ implemented by the modified layer. This has multiple drawbacks:

A better approach can be found by observing that the matrix $W$ is the Jacobian of the affine transformation $f(x) = Wx + b$. The vector $c$ computed in step 3 corresponds to $c = s^T W$, a so-called Vector-Jacobian-Product (VJP) of the vector $s$ with the Jacobian $W$.

VJPs are the fundamental building blocks of reverse-mode automatic differentiation (AD), and are therefore implemented by most AD frameworks in a highly performant, matrix-free, GPU-accelerated manner. Note that computing the VJP is much more efficient than first computing the full Jacobian $W$ and later multiplying it with $s$. This is due to the fact that computing the full Jacobian of a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ requires computing $m$ VJPs.

Functions that compute VJPs are commonly called pullbacks. Using the Zygote.jl AD system, we obtain the output $z$ of a modified layer and its pullback back in a single function call:

z, back = Zygote.pullback(modified_layer, aᵏ)

We then call the pullback with the vector $s$ to obtain $c$:

c = back(s)

Finally, step 4 consists of an element-wise multiplication of the vector $c$ with the input activation vector $a^k$, resulting in the relevance vector $R^k$.
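Putting the four steps together, here is a compact sketch using Zygote on a small modified Dense layer; the layer, its random weights, and the stabilization constant 1f-9 are illustrative assumptions:

using Flux, Zygote

modified_layer = Dense(rand(Float32, 5, 10), zeros(Float32, 5), identity)  # ρ already applied
aᵏ   = rand(Float32, 10, 1)   # input activations of layer k
Rᵏ⁺¹ = rand(Float32, 5, 1)    # relevance at the layer output

z, back = Zygote.pullback(modified_layer, aᵏ)  # Step 1: forward pass (and pullback)
s = Rᵏ⁺¹ ./ (z .+ 1f-9)                        # Step 2: stabilized element-wise division
c = only(back(s))                              # Step 3: VJP c = sᵀW via the pullback
Rᵏ = aᵏ .* c                                   # Step 4: element-wise multiplication
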

This AD-based implementation is used in ExplainableAI.jl as the default method for all layer types that don't have a more optimized implementation (e.g. fully connected layers). We will refer to it as the "AD fallback".

For more background information on automatic differentiation, refer to the JuML lecture on AD.

LRP analyzer struct

The LRP analyzer struct holds three fields: the model to analyze, the LRP rules to use, and pre-allocated modified_layers.

As described in the section on Composites, applying a composite to a model will return LRP rules in nested ChainTuple and ParallelTuples. These wrapper types are used to match the structure of Flux models with Chain and Parallel layers while avoiding type piracy.

When creating an LRP analyzer with the default keyword argument flatten=true, flatten_model is called on the model and rules. This is done for performance reasons, as discussed in Flattening the model.

After passing the Model checks, modified layers are pre-allocated, once again using the ChainTuple and ParallelTuple wrapper types to match the structure of the model. If a rule doesn't modify a layer, the corresponding entry in modified_layers is set to nothing, avoiding unnecessary allocations. If a rule requires multiple modified layers, the corresponding entry in modified_layers is set to a named tuple of modified layers. Apart from these special cases, the corresponding entry in modified_layers is simply set to the modified layer.

For a detailed description of the layer modification mechanism, refer to the section on Advanced layer modification.

Forward and reverse pass

When calling an LRP analyzer, a forward pass through the model is performed, saving the activations $aᵏ$ for all layers $k$ in a vector called acts. This vector of activations is then used to pre-allocate the relevances $R^k$ for all layers in a vector called rels. This is possible since for any layer $k$, $a^k$ and $R^k$ have the same shape. Finally, the last array of relevances $R^N$ in rels is set to zeros, except for the specified output neuron, which is set to one.

We can now run the reverse pass, iterating backwards over the layers in the model and writing relevances $R^k$ into the pre-allocated array rels:

for k in length(model):-1:1
+\end{array}\]

To compute step 1, we first create a modified layer, applying $\rho$ to the weights and biases and replacing the activation function with the identity function. The vector $z$ is then computed using a forward pass through the modified layer. It has the same dimensionality as $R^{k+1}$ and $a^{k+1}$.

Step 2 is an element-wise division of $R^{k+1}$ by $z$. To avoid division by zero, a small constant $\epsilon$ is added to $z$ when necessary.

Step 3 is trivial for fully connected layers, as $\rho(W)$ corresponds to the weight matrix of the modified layer. For other types of linear layers, however, the implementation is more involved: A naive approach would be to construct a large matrix $W$ that corresponds to the affine transformation $Wx+b$ implemented by the modified layer. This has multiple drawbacks:

A better approach can be found by observing that the matrix $W$ is the Jacobian of the affine transformation $f(x) = Wx + b$. The vector $c$ computed in step 3 corresponds to $c = s^T W$, a so-called Vector-Jacobian-Product (VJP) of the vector $s$ with the Jacobian $W$.

VJPs are the fundamental building blocks of reverse-mode automatic differentiation (AD), and are therefore implemented by most AD frameworks in a highly performant, matrix-free, GPU-accelerated manner. Note that computing the VJP is much more efficient than first computing the full Jacobian $W$ and later multiplying it with $s$. This is due to the fact that computing the full Jacobian of a function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ requires computing $m$ VJPs.

Functions that compute VJPs are commonly called pullbacks. Using the Zygote.jl AD system, we obtain the output $z$ of a modified layer and its pullback back in a single function call:

z, back = Zygote.pullback(modified_layer, aᵏ)

We then call the pullback with the vector $s$ to obtain $c$:

c = back(s)

Finally, step 4 consists of an element-wise multiplication of the vector $c$ with the input activation vector $a^k$, resulting in the relevance vector $R^k$.

This AD-based implementation is used in ExplainableAI.jl as the default method for all layer types that don't have a more optimized implementation (e.g. fully connected layers). We will refer to it as the "AD fallback".

For more background information on automatic differentiation, refer to the JuML lecture on AD.

LRP analyzer struct

The LRP analyzer struct holds three fields: the model to analyze, the LRP rules to use, and pre-allocated modified_layers.

As described in the section on Composites, applying a composite to a model will return LRP rules in nested ChainTuple and ParallelTuples. These wrapper types are used to match the structure of Flux models with Chain and Parallel layers while avoiding type piracy.

When creating an LRP analyzer with the default keyword argument flatten=true, flatten_model is called on the model and rules. This is done for performance reasons, as discussed in Flattening the model.

After passing the Model checks, modified layers are pre-allocated, once again using the ChainTuple and ParallelTuple wrapper types to match the structure of the model. If a rule doesn't modify a layer, the corresponding entry in modified_layers is set to nothing, avoiding unnecessary allocations. If a rule requires multiple modified layers, the corresponding entry in modified_layers is set to a named tuple of modified layers. Apart from these special cases, the corresponding entry in modified_layers is simply set to the modified layer.

For a detailed description of the layer modification mechanism, refer to the section on Advanced layer modification.

Forward and reverse pass

When calling an LRP analyzer, a forward pass through the model is performed, saving the activations $aᵏ$ for all layers $k$ in a vector called as. This vector of activations is then used to pre-allocate the relevances $R^k$ for all layers in a vector called Rs. This is possible since for any layer $k$, $a^k$ and $R^k$ have the same shape. Finally, the last array of relevances $R^N$ in Rs is set to zeros, except for the specified output neuron, which is set to one.

We can now run the reverse pass, iterating backwards over the layers in the model and writing relevances $R^k$ into the pre-allocated array Rs:

for k in length(model):-1:1
     #                  └─ loop over layers in reverse
-    lrp!(rels[k], rules[k], layers[k], modified_layers[i], acts[k], rels[k+1])
-    #    └─ Rᵏ: modified in-place                          └─ aᵏ    └─ Rᵏ⁺¹
+    lrp!(Rs[k], rules[k], layers[k], modified_layers[i], as[k], Rs[k+1])
+    #    └─ Rᵏ: modified in-place                        └─ aᵏ  └─ Rᵏ⁺¹
 end

This is done by calling low-level functions

function lrp!(Rᵏ, rule, layer, modified_layer, aᵏ, Rᵏ⁺¹)
     Rᵏ .= ...
end

that implement individual LRP rules. The correct rule is applied via multiple dispatch on the types of the arguments rule and modified_layer. The relevance Rᵏ is then computed based on the input activation aᵏ and the output relevance Rᵏ⁺¹.

The exclamation point in the function name lrp! is a naming convention in Julia to denote functions that modify their arguments – in this case the first argument Rs[k], which corresponds to $R^k$.

Rule calls

As discussed in The AD fallback, the default LRP fallback for unknown layers uses AD via Zygote. Now that you are familiar with both the API and the four-step computation of the generic LRP rules, the following implementation should be straightforward to understand:

function lrp!(Rᵏ, rule, layer, modified_layer, aᵏ, Rᵏ⁺¹)
    # Use modified_layer if available
    layer = isnothing(modified_layer) ? layer : modified_layer
 
@@ -35,4 +35,4 @@
    @tullio Rᵏ[j, b] = layer.weight[i, j] * ãᵏ[j, b] / z[i, b] * Rᵏ⁺¹[i, b]
 end

For maximum low-level control beyond modify_input and modify_denominator, you can also implement your own lrp! function, dispatching on your own rule type MyRule and layer type MyLayer:

function lrp!(Rᵏ, rule::MyRule, layer::MyLayer, modified_layer, aᵏ, Rᵏ⁺¹)
     Rᵏ .= ...
end
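As a hedged illustration, a hand-written dispatch for a hypothetical rule MyRule on Dense layers might look like this. MyRule is a made-up placeholder, the ε-stabilized formula simply mirrors the generic rule from above, and extending lrp! assumes the internal function is imported explicitly:

using Flux
using ExplainableAI
import ExplainableAI: lrp!  # extend the package's low-level function

struct MyRule <: AbstractLRPRule end

function lrp!(Rᵏ, rule::MyRule, layer::Dense, modified_layer, aᵏ, Rᵏ⁺¹)
    W = layer.weight
    z = W * aᵏ .+ layer.bias .+ 1.0f-6  # stabilized layer output
    s = Rᵏ⁺¹ ./ z                       # divide relevance by the output
    Rᵏ .= aᵏ .* (W' * s)                # for a Dense layer, the VJP is simply Wᵀs
end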
diff --git a/dev/search/index.html b/dev/search/index.html
      diff --git a/dev/search_index.js b/dev/search_index.js index f60a31e1..68088484 100644 --- a/dev/search_index.js +++ b/dev/search_index.js @@ -1,3 +1,3 @@ var documenterSearchIndex = {"docs": -[{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"EditURL = \"../../literate/lrp/composites.jl\"","category":"page"},{"location":"generated/lrp/composites/#docs-composites","page":"Assigning rules to layers","title":"Assigning LRP rules to layers","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"In this example, we will show how to assign LRP rules to specific layers. For this purpose, we first define a small VGG-like convolutional neural network:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"using ExplainableAI\nusing Flux\n\nmodel = Chain(\n Chain(\n Conv((3, 3), 3 => 8, relu; pad=1),\n Conv((3, 3), 8 => 8, relu; pad=1),\n MaxPool((2, 2)),\n Conv((3, 3), 8 => 16, relu; pad=1),\n Conv((3, 3), 16 => 16, relu; pad=1),\n MaxPool((2, 2)),\n ),\n Chain(\n Flux.flatten,\n Dense(1024 => 512, relu),\n Dropout(0.5),\n Dense(512 => 100, relu)\n ),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/composites/#docs-composites-manual","page":"Assigning rules to layers","title":"Manually assigning rules","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"When creating an LRP-analyzer, we can assign individual rules to each layer. As we can see above, our model is a Chain of two Flux Chains. Using flatten_model, we can flatten the model into a single Chain:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"model_flat = flatten_model(model)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"This allows us to define an LRP analyzer using an array of rules matching the length of the Flux chain:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"rules = [\n FlatRule(),\n ZPlusRule(),\n ZeroRule(),\n ZPlusRule(),\n ZPlusRule(),\n ZeroRule(),\n PassRule(),\n EpsilonRule(),\n PassRule(),\n EpsilonRule(),\n];\nnothing #hide","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"The LRP analyzer will show a summary of how layers and rules got matched:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"LRP(model_flat, rules)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"However, this approach only works for models that can be fully flattened. 
For unflattened models and models containing Parallel layers, we can compose rules using ChainTuples and ParallelTuples which match the model structure:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"rules = ChainTuple(\n ChainTuple(\n FlatRule(),\n ZPlusRule(),\n ZeroRule(),\n ZPlusRule(),\n ZPlusRule(),\n ZeroRule()\n ),\n ChainTuple(\n PassRule(),\n EpsilonRule(),\n PassRule(),\n EpsilonRule(),\n ),\n)\n\nanalyzer = LRP(model, rules; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"note: Keyword argument `flatten`\nWe used the LRP keyword argument flatten=false to showcase that the structure of the model can be preserved. For performance reasons, the default flatten=true is recommended.","category":"page"},{"location":"generated/lrp/composites/#docs-composites-custom","page":"Assigning rules to layers","title":"Custom composites","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"Instead of manually defining a list of rules, we can also define a Composite. A composite constructs a list of LRP-rules by sequentially applying the composite primitives it contains.","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To obtain the same set of rules as in the previous example, we can define","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"composite = Composite(\n GlobalTypeMap( # the following maps of layer types to LRP rules are applied globally\n Conv => ZPlusRule(), # apply ZPlusRule on all Conv layers\n Dense => EpsilonRule(), # apply EpsilonRule on all Dense layers\n Dropout => PassRule(), # apply PassRule on all Dropout layers\n MaxPool => ZeroRule(), # apply ZeroRule on all MaxPool layers\n typeof(Flux.flatten) => PassRule(), # apply PassRule on all flatten layers\n ),\n FirstLayerMap( # the following rule is applied to the first layer\n FlatRule()\n ),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"We now construct an LRP analyzer from composite","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"analyzer = LRP(model, composite; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"As you can see, this analyzer contains the same rules as our previous one. 
To compute rules for a model without creating an analyzer, use lrp_rules:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"lrp_rules(model, composite)","category":"page"},{"location":"generated/lrp/composites/#Composite-primitives","page":"Assigning rules to layers","title":"Composite primitives","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"The following Composite primitives can used to construct a Composite.","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To apply a single rule, use:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"LayerMap to apply a rule to a layer at a given index\nGlobalMap to apply a rule to all layers\nRangeMap to apply a rule to a positional range of layers\nFirstLayerMap to apply a rule to the first layer\nLastLayerMap to apply a rule to the last layer","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To apply a set of rules to layers based on their type, use:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"GlobalTypeMap to apply a dictionary that maps layer types to LRP-rules\nRangeTypeMap for a TypeMap on generalized ranges\nFirstLayerTypeMap for a TypeMap on the first layer of a model\nLastLayerTypeMap for a TypeMap on the last layer\nFirstNTypeMap for a TypeMap on the first n layers","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"Primitives are called sequentially in the order the Composite was created with and overwrite rules specified by previous primitives.","category":"page"},{"location":"generated/lrp/composites/#Assigning-a-rule-to-a-specific-layer","page":"Assigning rules to layers","title":"Assigning a rule to a specific layer","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To assign a rule to a specific layer, we can use LayerMap, which maps an LRP-rule to all layers in the model at the given index.","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To display indices, use the show_layer_indices helper function:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"show_layer_indices(model)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"Let's demonstrate LayerMap by assigning a specific rule to the last Conv layer at index (1, 5):","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"composite = Composite(LayerMap((1, 5), EpsilonRule()))\n\nLRP(model, composite; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"This approach also works with Parallel 
layers.","category":"page"},{"location":"generated/lrp/composites/#docs-composites-presets","page":"Assigning rules to layers","title":"Composite presets","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"ExplainableAI.jl provides a set of default composites. A list of all implemented default composites can be found in the API reference, e.g. the EpsilonPlusFlat composite:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"composite = EpsilonPlusFlat()","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"analyzer = LRP(model, composite; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"EditURL = \"../literate/heatmapping.jl\"","category":"page"},{"location":"generated/heatmapping/#docs-heatmapping","page":"Heatmapping","title":"Heatmapping","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since numerical explanations are not very informative at first sight, we can visualize them by computing a heatmap. This page showcases different options and preset for heatmapping, building on the basics shown in the Getting started section.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"We start out by loading the same pre-trained LeNet5 model and MNIST input data:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using ExplainableAI\nusing Flux\n\nusing BSON # hide\nmodel = BSON.load(\"../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using MLDatasets\nusing ImageCore, ImageIO, ImageShow\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\ninput = reshape(x, 28, 28, 1, :)\n\nconvert2image(MNIST, x)","category":"page"},{"location":"generated/heatmapping/#Automatic-heatmap-presets","page":"Heatmapping","title":"Automatic heatmap presets","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"The function heatmap automatically applies common presets for each method.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since InputTimesGradient and LRP both compute attributions, their presets are similar. 
Gradient methods however are typically shown in grayscale:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"analyzer = Gradient(model)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"analyzer = InputTimesGradient(model)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/heatmapping/#Custom-heatmap-settings","page":"Heatmapping","title":"Custom heatmap settings","text":"","category":"section"},{"location":"generated/heatmapping/#Color-schemes","page":"Heatmapping","title":"Color schemes","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"We can partially or fully override presets by passing keyword arguments to heatmap. For example, we can use a custom color scheme from ColorSchemes.jl using the keyword argument cs:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using ColorSchemes\n\nexpl = analyze(input, analyzer)\nheatmap(expl; cs=ColorSchemes.jet)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; cs=ColorSchemes.inferno)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Refer to the ColorSchemes.jl catalogue for a gallery of available color schemes.","category":"page"},{"location":"generated/heatmapping/#docs-heatmap-reduce","page":"Heatmapping","title":"Color channel reduction","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Explanations have the same dimensionality as the inputs to the classifier. For images with multiple color channels, this means that the explanation also has a \"color channel\" dimension.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"The keyword argument reduce can be used to reduce this dimension to a single scalar value for each pixel. 
The following presets are available:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":":sum: sum up color channels (default setting)\n:norm: compute 2-norm over the color channels\n:maxabs: compute maximum(abs, x) over the color channels","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; reduce=:sum)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; reduce=:norm)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; reduce=:maxabs)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since MNIST only has a single color channel, there is no need for reduction and heatmaps look identical.","category":"page"},{"location":"generated/heatmapping/#docs-heatmap-rangescale","page":"Heatmapping","title":"Mapping explanations onto the color scheme","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"To map a color-channel-reduced explanation onto a color scheme, we first need to normalize all values to the range 0 1.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"For this purpose, two presets are available through the rangescale keyword argument:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":":extrema: normalize to the minimum and maximum value of the explanation\n:centered: normalize to the maximum absolute value of the explanation. Values of zero will be mapped to the center of the color scheme.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Depending on the color scheme, one of these presets may be more suitable than the other. The default color scheme for InputTimesGradient, seismic, is centered around zero, making :centered a good choice:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:centered)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:extrema)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"However, for the inferno color scheme, which is not centered around zero, :extrema leads to a heatmap with higher contrast.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:centered, cs=ColorSchemes.inferno)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:extrema, cs=ColorSchemes.inferno)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"For the full list of heatmap keyword arguments, refer to the heatmap documentation.","category":"page"},{"location":"generated/heatmapping/#docs-heatmapping-batches","page":"Heatmapping","title":"Heatmapping batches","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Heatmapping also works with input batches. 
Let's demonstrate this by using a batch of 100 images from the MNIST dataset:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"xs, ys = MNIST(Float32, :test)[1:100]\nbatch = reshape(xs, 28, 28, 1, :); # reshape to WHCN format\nnothing #hide","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"The heatmap function automatically recognizes that the explanation is batched and returns a Vector of images:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmaps = heatmap(batch, analyzer)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Image.jl's mosaic function can used to display them in a grid:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"mosaic(heatmaps; nrow=10)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"note: Output type consistency\nTo obtain a singleton Vector containing a single heatmap for non-batched inputs, use the heatmap keyword argument unpack_singleton=false.","category":"page"},{"location":"generated/heatmapping/#Processing-heatmaps","page":"Heatmapping","title":"Processing heatmaps","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Heatmapping makes use of the Julia-based image processing ecosystem Images.jl.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"If you want to further process heatmaps, you may benefit from reading about some fundamental conventions that the ecosystem utilizes that are different from how images are typically represented in OpenCV, MATLAB, ImageJ or Python.","category":"page"},{"location":"generated/heatmapping/#Saving-heatmaps","page":"Heatmapping","title":"Saving heatmaps","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since heatmaps are regular Images.jl images, they can be saved as such:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using FileIO\n\nimg = heatmap(input, analyzer)\nsave(\"heatmap.png\", img)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"This page was generated using Literate.jl.","category":"page"},{"location":"lrp/api/#LRP-analyzer","page":"LRP","title":"LRP analyzer","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Refer to LRP for documentation on the LRP analyzer.","category":"page"},{"location":"lrp/api/#api-lrp-rules","page":"LRP","title":"LRP rules","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ZeroRule\nEpsilonRule\nGammaRule\nWSquareRule\nFlatRule\nAlphaBetaRule\nZPlusRule\nZBoxRule\nPassRule\nGeneralizedGammaRule","category":"page"},{"location":"lrp/api/#ExplainableAI.ZeroRule","page":"LRP","title":"ExplainableAI.ZeroRule","text":"ZeroRule()\n\nLRP-0 rule. Commonly used on upper layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_i fracW_ija_j^ksum_l W_ila_l^k+b_i R_i^k+1\n\nReferences\n\nS. 
Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.EpsilonRule","page":"LRP","title":"ExplainableAI.EpsilonRule","text":"EpsilonRule([epsilon=1.0e-6])\n\nLRP-ϵ rule. Commonly used on middle layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifracW_ija_j^kepsilon +sum_lW_ila_l^k+b_i R_i^k+1\n\nOptional arguments\n\nepsilon: Optional stabilization parameter, defaults to 1.0e-6.\n\nReferences\n\nS. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.GammaRule","page":"LRP","title":"ExplainableAI.GammaRule","text":"GammaRule([gamma=0.25])\n\nLRP-γ rule. Commonly used on lower layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifrac(W_ij+gamma W_ij^+)a_j^k\n sum_l(W_il+gamma W_il^+)a_l^k+(b_i+gamma b_i^+) R_i^k+1\n\nOptional arguments\n\ngamma: Optional multiplier for added positive weights, defaults to 0.25.\n\nReferences\n\nG. Montavon et al., Layer-Wise Relevance Propagation: An Overview\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.WSquareRule","page":"LRP","title":"ExplainableAI.WSquareRule","text":"WSquareRule()\n\nLRP-w² rule. Commonly used on the first layer when values are unbounded.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifracW_ij^2sum_l W_il^2 R_i^k+1\n\nReferences\n\nG. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FlatRule","page":"LRP","title":"ExplainableAI.FlatRule","text":"FlatRule()\n\nLRP-Flat rule. Similar to the WSquareRule, but with all weights set to one and all bias terms set to zero.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifrac1sum_l 1 R_i^k+1 = sum_ifrac1n_i R_i^k+1\n\nwhere n_i is the number of input neurons connected to the output neuron at index i.\n\nReferences\n\nS. Lapuschkin et al., Unmasking Clever Hans predictors and assessing what machines really learn\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.AlphaBetaRule","page":"LRP","title":"ExplainableAI.AlphaBetaRule","text":"AlphaBetaRule([alpha=2.0, beta=1.0])\n\nLRP-αβ rule. Weights positive and negative contributions according to the parameters alpha and beta respectively. The difference α-β must be equal to one. Commonly used on lower layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ileft(\n alphafracleft(W_ija_j^kright)^+sum_lleft(W_ila_l^k+b_iright)^+\n -betafracleft(W_ija_j^kright)^-sum_lleft(W_ila_l^k+b_iright)^-\nright) R_i^k+1\n\nOptional arguments\n\nalpha: Multiplier for the positive output term, defaults to 2.0.\nbeta: Multiplier for the negative output term, defaults to 1.0.\n\nReferences\n\nS. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\nG. Montavon et al., Layer-Wise Relevance Propagation: An Overview\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ZPlusRule","page":"LRP","title":"ExplainableAI.ZPlusRule","text":"ZPlusRule()\n\nLRP-z rule. 
Commonly used on lower layers.\n\nEquivalent to AlphaBetaRule(1.0f0, 0.0f0), but slightly faster. See also AlphaBetaRule.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifracleft(W_ija_j^kright)^+sum_lleft(W_ila_l^k+b_iright)^+ R_i^k+1\n\nReferences\n\nS. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\nG. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ZBoxRule","page":"LRP","title":"ExplainableAI.ZBoxRule","text":"ZBoxRule(low, high)\n\nLRP-zᴮ-rule. Commonly used on the first layer for pixel input.\n\nThe parameters low and high should be set to the lower and upper bounds of the input features, e.g. 0.0 and 1.0 for raw image data. It is also possible to provide two arrays of that match the input size.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k=sum_i fracW_ija_j^k - W_ij^+l_j - W_ij^-h_j\n sum_l W_ila_l^k+b_i - left(W_il^+l_l+b_i^+right) - left(W_il^-h_l+b_i^-right) R_i^k+1\n\nReferences\n\nG. Montavon et al., Layer-Wise Relevance Propagation: An Overview\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.PassRule","page":"LRP","title":"ExplainableAI.PassRule","text":"PassRule()\n\nPass-through rule. Passes relevance through to the lower layer.\n\nSupports layers with constant input and output shapes, e.g. reshaping layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = R_j^k+1\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.GeneralizedGammaRule","page":"LRP","title":"ExplainableAI.GeneralizedGammaRule","text":"GeneralizedGammaRule([gamma=0.25])\n\nGeneralized LRP-γ rule. Can be used on layers with leakyrelu activation functions.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifrac\n (W_ij+gamma W_ij^+)a_j^+ +(W_ij+gamma W_ij^-)a_j^-\n sum_l(W_il+gamma W_il^+)a_j^+ +(W_il+gamma W_il^-)a_j^- +(b_i+gamma b_i^+)\nI(z_k0) cdot R^k+1_i\n+sum_ifrac\n (W_ij+gamma W_ij^-)a_j^+ +(W_ij+gamma W_ij^+)a_j^-\n sum_l(W_il+gamma W_il^-)a_j^+ +(W_il+gamma W_il^+)a_j^- +(b_i+gamma b_i^-)\nI(z_k0) cdot R^k+1_i\n\nOptional arguments\n\ngamma: Optional multiplier for added positive weights, defaults to 0.25.\n\nReferences\n\nL. 
Andéol et al., Learning Domain Invariant Representations by Joint Wasserstein Distance Minimization\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"For manual rule assignment, use ChainTuple and ParallelTuple, matching the model structure:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ChainTuple\nParallelTuple","category":"page"},{"location":"lrp/api/#ExplainableAI.ChainTuple","page":"LRP","title":"ExplainableAI.ChainTuple","text":"ChainTuple(xs)\n\nThin wrapper around Tuple for use with Flux.jl models.\n\nCombining ChainTuple and ParallelTuple, data xs can be stored while preserving the structure of a Flux model without risking type piracy.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ParallelTuple","page":"LRP","title":"ExplainableAI.ParallelTuple","text":"ParallelTuple(xs)\n\nThin wrapper around Tuple for use with Flux.jl models.\n\nCombining ChainTuple and ParallelTuple, data xs can be stored while preserving the structure of a Flux model without risking type piracy.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#Composites","page":"LRP","title":"Composites","text":"","category":"section"},{"location":"lrp/api/#Applying-composites","page":"LRP","title":"Applying composites","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Composite\nlrp_rules","category":"page"},{"location":"lrp/api/#ExplainableAI.Composite","page":"LRP","title":"ExplainableAI.Composite","text":"Composite(primitives...)\nComposite(default_rule, primitives...)\n\nAutomatically contructs a list of LRP-rules by sequentially applying composite primitives.\n\nPrimitives\n\nTo apply a single rule, use:\n\nLayerMap to apply a rule to the n-th layer of a model\nGlobalMap to apply a rule to all layers\nRangeMap to apply a rule to a positional range of layers\nFirstLayerMap to apply a rule to the first layer\nLastLayerMap to apply a rule to the last layer\n\nTo apply a set of rules to layers based on their type, use:\n\nGlobalTypeMap to apply a dictionary that maps layer types to LRP-rules\nRangeTypeMap for a TypeMap on generalized ranges\nFirstLayerTypeMap for a TypeMap on the first layer of a model\nLastLayerTypeMap for a TypeMap on the last layer\nFirstNTypeMap for a TypeMap on the first n layers\n\nExample\n\nUsing a VGG11 model:\n\njulia> composite = Composite(\n GlobalTypeMap(\n ConvLayer => AlphaBetaRule(),\n Dense => EpsilonRule(),\n PoolingLayer => EpsilonRule(),\n DropoutLayer => PassRule(),\n ReshapingLayer => PassRule(),\n ),\n FirstNTypeMap(7, Conv => FlatRule()),\n );\n\njulia> analyzer = LRP(model, composite)\nLRP(\n Conv((3, 3), 3 => 64, relu, pad=1) => FlatRule(),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 64 => 128, relu, pad=1) => FlatRule(),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 128 => 256, relu, pad=1) => FlatRule(),\n Conv((3, 3), 256 => 256, relu, pad=1) => FlatRule(),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 256 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Conv((3, 3), 512 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 512 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Conv((3, 3), 512 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n MLUtils.flatten => PassRule(),\n Dense(25088 => 4096, relu) => 
EpsilonRule{Float32}(1.0f-6),\n Dropout(0.5) => PassRule(),\n Dense(4096 => 4096, relu) => EpsilonRule{Float32}(1.0f-6),\n Dropout(0.5) => PassRule(),\n Dense(4096 => 1000) => EpsilonRule{Float32}(1.0f-6),\n)\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.lrp_rules","page":"LRP","title":"ExplainableAI.lrp_rules","text":"lrp_rules(model, composite)\n\nApply a composite to obtain LRP-rules for a given Flux model.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#api-composite-primitives","page":"LRP","title":"Composite primitives","text":"","category":"section"},{"location":"lrp/api/#Mapping-layers-to-rules","page":"LRP","title":"Mapping layers to rules","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Composite primitives that apply a single rule:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"LayerMap\nGlobalMap\nRangeMap\nFirstLayerMap\nLastLayerMap","category":"page"},{"location":"lrp/api/#ExplainableAI.LayerMap","page":"LRP","title":"ExplainableAI.LayerMap","text":"LayerMap(index, rule)\n\nComposite primitive that maps an LRP-rule to all layers in the model at the given index. The index can either be an integer or a tuple of integers to map a rule to a specific layer in nested Flux Chains.\n\nSee show_layer_indices to print layer indices and Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.GlobalMap","page":"LRP","title":"ExplainableAI.GlobalMap","text":"GlobalMap(rule)\n\nComposite primitive that maps an LRP-rule to all layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.RangeMap","page":"LRP","title":"ExplainableAI.RangeMap","text":"RangeMap(range, rule)\n\nComposite primitive that maps an LRP-rule to the specified positional range of layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FirstLayerMap","page":"LRP","title":"ExplainableAI.FirstLayerMap","text":"FirstLayerMap(rule)\n\nComposite primitive that maps an LRP-rule to the first layer in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.LastLayerMap","page":"LRP","title":"ExplainableAI.LastLayerMap","text":"LastLayerMap(rule)\n\nComposite primitive that maps an LRP-rule to the last layer in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"To apply LayerMap to nested Flux Chains or Parallel layers, make use of show_layer_indices:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"show_layer_indices","category":"page"},{"location":"lrp/api/#ExplainableAI.show_layer_indices","page":"LRP","title":"ExplainableAI.show_layer_indices","text":"show_layer_indices(model)\n\nPrint layer indices of Flux models. 
This is primarily a utility to help define LayerMap primitives.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#Mapping-layers-to-rules-based-on-type","page":"LRP","title":"Mapping layers to rules based on type","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Composite primitives that apply rules based on the layer type:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"GlobalTypeMap\nRangeTypeMap\nFirstLayerTypeMap\nLastLayerTypeMap\nFirstNTypeMap","category":"page"},{"location":"lrp/api/#ExplainableAI.GlobalTypeMap","page":"LRP","title":"ExplainableAI.GlobalTypeMap","text":"GlobalTypeMap(map)\n\nComposite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.RangeTypeMap","page":"LRP","title":"ExplainableAI.RangeTypeMap","text":"RangeTypeMap(range, map)\n\nComposite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the specified range of layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FirstLayerTypeMap","page":"LRP","title":"ExplainableAI.FirstLayerTypeMap","text":"FirstLayerTypeMap(map)\n\nComposite primitive that maps the type of the first layer of the model to LRP rules based on a list of type-rule-pairs map.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.LastLayerTypeMap","page":"LRP","title":"ExplainableAI.LastLayerTypeMap","text":"LastLayerTypeMap(map)\n\nComposite primitive that maps the type of the last layer of the model to LRP rules based on a list of type-rule-pairs map.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FirstNTypeMap","page":"LRP","title":"ExplainableAI.FirstNTypeMap","text":"FirstNTypeMap(n, map)\n\nComposite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the first n layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#Union-types-for-composites","page":"LRP","title":"Union types for composites","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"The following exported union types types can be used to define TypeMaps:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ConvLayer\nPoolingLayer\nDropoutLayer\nReshapingLayer\nNormalizationLayer","category":"page"},{"location":"lrp/api/#ExplainableAI.ConvLayer","page":"LRP","title":"ExplainableAI.ConvLayer","text":"Union type for convolutional layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.PoolingLayer","page":"LRP","title":"ExplainableAI.PoolingLayer","text":"Union type for pooling layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.DropoutLayer","page":"LRP","title":"ExplainableAI.DropoutLayer","text":"Union type for dropout layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ReshapingLayer","page":"LRP","title":"ExplainableAI.ReshapingLayer","text":"Union type for reshaping layers such as flatten.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.NormalizationLayer","page":"LRP","title":"ExplainableAI.NormalizationLayer","text":"Union type for normalization 
layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#api-composite-presets","page":"LRP","title":"Composite presets","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"EpsilonGammaBox\nEpsilonPlus\nEpsilonAlpha2Beta1\nEpsilonPlusFlat\nEpsilonAlpha2Beta1Flat","category":"page"},{"location":"lrp/api/#ExplainableAI.EpsilonGammaBox","page":"LRP","title":"ExplainableAI.EpsilonGammaBox","text":"EpsilonGammaBox(low, high; [epsilon=1.0f-6, gamma=0.25f0])\n\nComposite using the following primitives:\n\njulia> EpsilonGammaBox(-3.0f0, 3.0f0)\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.GammaRule{Float32}(0.25f0),\n Flux.ConvTranspose => ExplainableAI.GammaRule{Float32}(0.25f0),\n Flux.CrossCor => ExplainableAI.GammaRule{Float32}(0.25f0),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n FirstLayerTypeMap( # first layer\n Flux.Conv => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),\n Flux.ConvTranspose => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),\n Flux.CrossCor => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonPlus","page":"LRP","title":"ExplainableAI.EpsilonPlus","text":"EpsilonPlus(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonPlus()\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.ZPlusRule(),\n Flux.ConvTranspose => ExplainableAI.ZPlusRule(),\n Flux.CrossCor => ExplainableAI.ZPlusRule(),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonAlpha2Beta1","page":"LRP","title":"ExplainableAI.EpsilonAlpha2Beta1","text":"EpsilonAlpha2Beta1(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonAlpha2Beta1()\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.ConvTranspose => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.CrossCor => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonPlusFlat","page":"LRP","title":"ExplainableAI.EpsilonPlusFlat","text":"EpsilonPlusFlat(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonPlusFlat()\nComposite(\n GlobalTypeMap( # all layers\n 
Flux.Conv => ExplainableAI.ZPlusRule(),\n Flux.ConvTranspose => ExplainableAI.ZPlusRule(),\n Flux.CrossCor => ExplainableAI.ZPlusRule(),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n FirstLayerTypeMap( # first layer\n Flux.Conv => ExplainableAI.FlatRule(),\n Flux.ConvTranspose => ExplainableAI.FlatRule(),\n Flux.CrossCor => ExplainableAI.FlatRule(),\n Flux.Dense => ExplainableAI.FlatRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonAlpha2Beta1Flat","page":"LRP","title":"ExplainableAI.EpsilonAlpha2Beta1Flat","text":"EpsilonAlpha2Beta1Flat(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonAlpha2Beta1Flat()\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.ConvTranspose => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.CrossCor => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n FirstLayerTypeMap( # first layer\n Flux.Conv => ExplainableAI.FlatRule(),\n Flux.ConvTranspose => ExplainableAI.FlatRule(),\n Flux.CrossCor => ExplainableAI.FlatRule(),\n Flux.Dense => ExplainableAI.FlatRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#Custom-rules","page":"LRP","title":"Custom rules","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"These utilities can be used to define custom rules without writing boilerplate code. To extend these functions, explicitly import them: ","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ExplainableAI.modify_input\nExplainableAI.modify_denominator\nExplainableAI.modify_parameters\nExplainableAI.modify_weight\nExplainableAI.modify_bias\nExplainableAI.modify_layer\nExplainableAI.is_compatible","category":"page"},{"location":"lrp/api/#ExplainableAI.modify_input","page":"LRP","title":"ExplainableAI.modify_input","text":"modify_input(rule, input)\n\nModify input activation before computing relevance propagation.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_denominator","page":"LRP","title":"ExplainableAI.modify_denominator","text":"modify_denominator(rule, d)\n\nModify denominator z for numerical stability on the forward pass.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_parameters","page":"LRP","title":"ExplainableAI.modify_parameters","text":"modify_parameters(rule, parameter)\n\nModify parameters before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. 
This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_weight","page":"LRP","title":"ExplainableAI.modify_weight","text":"modify_weight(rule, weight)\n\nModify layer weights before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_bias","page":"LRP","title":"ExplainableAI.modify_bias","text":"modify_bias(rule, bias)\n\nModify layer bias before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_layer","page":"LRP","title":"ExplainableAI.modify_layer","text":"modify_layer(rule, layer)\n\nModify layer before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. 
modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.is_compatible","page":"LRP","title":"ExplainableAI.is_compatible","text":"is_compatible(rule, layer)\n\nCheck compatibility of a LRP-Rule with layer type.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Compatibility settings:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"LRP_CONFIG.supports_layer\nLRP_CONFIG.supports_activation","category":"page"},{"location":"lrp/api/#ExplainableAI.LRP_CONFIG.supports_layer","page":"LRP","title":"ExplainableAI.LRP_CONFIG.supports_layer","text":"LRP_CONFIG.supports_layer(layer)\n\nCheck whether LRP can be used on a layer or a Chain. To extend LRP to your own layers, define:\n\nLRP_CONFIG.supports_layer(::MyLayer) = true # for structs\nLRP_CONFIG.supports_layer(::typeof(mylayer)) = true # for functions\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.LRP_CONFIG.supports_activation","page":"LRP","title":"ExplainableAI.LRP_CONFIG.supports_activation","text":"LRP_CONFIG.supports_activation(σ)\n\nCheck whether LRP can be used on a given activation function. To extend LRP to your own activation functions, define:\n\nLRP_CONFIG.supports_activation(::typeof(myactivation)) = true # for functions\nLRP_CONFIG.supports_activation(::MyActivation) = true # for structs\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#Index","page":"LRP","title":"Index","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"","category":"page"},{"location":"api/#Basic-API","page":"General","title":"Basic API","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"All methods in ExplainableAI.jl work by calling analyze on an input and an analyzer:","category":"page"},{"location":"api/","page":"General","title":"General","text":"analyze\nExplanation\nheatmap","category":"page"},{"location":"api/#ExplainableAI.analyze","page":"General","title":"ExplainableAI.analyze","text":"analyze(input, method)\nanalyze(input, method, neuron_selection)\n\nApply the analyzer method for the given input, returning an Explanation. If neuron_selection is specified, the explanation will be calculated for that neuron. Otherwise, the output neuron with the highest activation is automatically chosen.\n\nSee also Explanation and heatmap.\n\nKeyword arguments\n\nadd_batch_dim: add batch dimension to the input without allocating. Default is false.\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.Explanation","page":"General","title":"ExplainableAI.Explanation","text":"Return type of analyzers when calling analyze.\n\nFields\n\nval: numerical output of the analyzer, e.g. an attribution or gradient\noutput: model output for the given analyzer input\nneuron_selection: neuron index used for the explanation\nanalyzer: symbol corresponding the used analyzer, e.g. 
:LRP or :Gradient\nextras: optional named tuple that can be used by analyzers to return additional information.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.heatmap","page":"General","title":"ExplainableAI.heatmap","text":"heatmap(explanation)\nheatmap(input, analyzer)\nheatmap(input, analyzer, neuron_selection)\n\nVisualize explanation. Assumes Flux's WHCN convention (width, height, color channels, batch size).\n\nSee also analyze.\n\nKeyword arguments\n\ncs::ColorScheme: color scheme from ColorSchemes.jl that is applied. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is ColorSchemes.seismic.\nreduce::Symbol: selects how color channels are reduced to a single number to apply a color scheme. The following methods can be selected, which are then applied over the color channels for each \"pixel\" in the explanation:\n:sum: sum up color channels\n:norm: compute 2-norm over the color channels\n:maxabs: compute maximum(abs, x) over the color channels\nWhen calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is :sum.\nrangescale::Symbol: selects how the color channel reduced heatmap is normalized before the color scheme is applied. Can be either :extrema or :centered. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default for use with the seismic color scheme is :centered.\npermute::Bool: Whether to flip W&H input channels. Default is true.\nunpack_singleton::Bool: When heatmapping a batch with a single sample, setting unpack_singleton=true will return an image instead of an Vector containing a single image.\n\nNote: keyword arguments can't be used when calling heatmap with an analyzer.\n\n\n\n\n\n","category":"function"},{"location":"api/#Analyzers","page":"General","title":"Analyzers","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"LRP\nGradient\nInputTimesGradient\nSmoothGrad\nIntegratedGradients","category":"page"},{"location":"api/#ExplainableAI.LRP","page":"General","title":"ExplainableAI.LRP","text":"LRP(model, rules)\nLRP(model, composite)\n\nAnalyze model by applying Layer-Wise Relevance Propagation. The analyzer can either be created by passing an array of LRP-rules or by passing a composite, see Composite for an example.\n\nKeyword arguments\n\nskip_checks::Bool: Skip checks whether model is compatible with LRP and contains output softmax. Default is false.\nverbose::Bool: Select whether the model checks should print a summary on failure. Default is true.\n\nReferences\n\n[1] G. Montavon et al., Layer-Wise Relevance Propagation: An Overview [2] W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.Gradient","page":"General","title":"ExplainableAI.Gradient","text":"Gradient(model)\n\nAnalyze model by calculating the gradient of a neuron activation with respect to the input.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.InputTimesGradient","page":"General","title":"ExplainableAI.InputTimesGradient","text":"InputTimesGradient(model)\n\nAnalyze model by calculating the gradient of a neuron activation with respect to the input. 
This gradient is then multiplied element-wise with the input.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.SmoothGrad","page":"General","title":"ExplainableAI.SmoothGrad","text":"SmoothGrad(analyzer, [n=50, std=0.1, rng=GLOBAL_RNG])\nSmoothGrad(analyzer, [n=50, distribution=Normal(0, σ²=0.01), rng=GLOBAL_RNG])\n\nAnalyze model by calculating a smoothed sensitivity map. This is done by averaging sensitivity maps of a Gradient analyzer over random samples in a neighborhood of the input, typically by adding Gaussian noise with mean 0.\n\nReferences\n\nSmilkov et al., SmoothGrad: removing noise by adding noise\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.IntegratedGradients","page":"General","title":"ExplainableAI.IntegratedGradients","text":"IntegratedGradients(analyzer, [n=50])\nIntegratedGradients(analyzer, [n=50])\n\nAnalyze model by using the Integrated Gradients method.\n\nReferences\n\nSundararajan et al., Axiomatic Attribution for Deep Networks\n\n\n\n\n\n","category":"function"},{"location":"api/#Input-augmentations","page":"General","title":"Input augmentations","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"SmoothGrad and IntegratedGradients are special cases of the input augmentations NoiseAugmentation and InterpolationAugmentation, which can be applied as a wrapper to any analyzer:","category":"page"},{"location":"api/","page":"General","title":"General","text":"NoiseAugmentation\nInterpolationAugmentation","category":"page"},{"location":"api/#ExplainableAI.NoiseAugmentation","page":"General","title":"ExplainableAI.NoiseAugmentation","text":"NoiseAugmentation(analyzer, n, [std=1, rng=GLOBAL_RNG])\nNoiseAugmentation(analyzer, n, distribution, [rng=GLOBAL_RNG])\n\nA wrapper around analyzers that augments the input with n samples of additive noise sampled from distribution. This input augmentation is then averaged to return an Explanation.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.InterpolationAugmentation","page":"General","title":"ExplainableAI.InterpolationAugmentation","text":"InterpolationAugmentation(model, [n=50])\n\nA wrapper around analyzers that augments the input with n steps of linear interpolation between the input and a reference input (typically zero(input)). The gradients w.r.t. 
this augmented input are then averaged and multiplied with the difference between the input and the reference input.\n\n\n\n\n\n","category":"type"},{"location":"api/#Model-preparation","page":"General","title":"Model preparation","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"strip_softmax\ncanonize\nflatten_model","category":"page"},{"location":"api/#ExplainableAI.strip_softmax","page":"General","title":"ExplainableAI.strip_softmax","text":"strip_softmax(model)\nstrip_softmax(layer)\n\nRemove softmax activation on layer or model if it exists.\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.canonize","page":"General","title":"ExplainableAI.canonize","text":"canonize(model)\n\nCanonize model by flattening it and fusing BatchNorm layers into preceding Dense and Conv layers with linear activation functions.\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.flatten_model","page":"General","title":"ExplainableAI.flatten_model","text":"flatten_model(model)\n\nFlatten a Flux Chain containing Chains.\n\n\n\n\n\n","category":"function"},{"location":"api/#Input-preprocessing","page":"General","title":"Input preprocessing","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"preprocess_imagenet","category":"page"},{"location":"api/#ExplainableAI.preprocess_imagenet","page":"General","title":"ExplainableAI.preprocess_imagenet","text":"preprocess_imagenet(img)\n\nPreprocess an image for use with Metalhead.jl's ImageNet models using PyTorch weights. Uses PyTorch's normalization constants.\n\n\n\n\n\n","category":"function"},{"location":"api/#Index","page":"General","title":"Index","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"EditURL = \"../literate/augmentations.jl\"","category":"page"},{"location":"generated/augmentations/#docs-augmentations","page":"Input augmentations","title":"Analyzer augmentations","text":"","category":"section"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"All analyzers implemented in ExplainableAI.jl can be augmented by two types of augmentations: NoiseAugmentations and InterpolationAugmentations. 
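For orientation before the walkthrough below, here is a minimal sketch of how both wrapper types compose with an analyzer and a Distributions.jl distribution. This is only an illustrative sketch: it assumes a Flux model `model` and a WHCN input array `input` are already defined (they are loaded in the next step).

using ExplainableAI
using Distributions

# Wrap any analyzer in a NoiseAugmentation, here with 25 samples of
# Gaussian noise drawn from Normal(0, 0.1):
analyzer = NoiseAugmentation(InputTimesGradient(model), 25, Normal(0, 0.1))
heatmap(input, analyzer)

# InterpolationAugmentation averages gradients over 25 linear interpolation
# steps between the reference input zero(input) and the input:
analyzer = InterpolationAugmentation(InputTimesGradient(model), 25)
heatmap(input, analyzer)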
These augmentations are wrappers around analyzers that modify the input before passing it to the analyzer.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"We build on the basics shown in the Getting started section and start out by loading the same pre-trained LeNet5 model and MNIST input data:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"using ExplainableAI\nusing Flux\n\nusing BSON # hide\nmodel = BSON.load(\"../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"using MLDatasets\nusing ImageCore, ImageIO, ImageShow\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\ninput = reshape(x, 28, 28, 1, :)\n\nconvert2image(MNIST, x)","category":"page"},{"location":"generated/augmentations/#Noise-augmentation","page":"Input augmentations","title":"Noise augmentation","text":"","category":"section"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"The NoiseAugmentation wrapper computes explanations averaged over noisy inputs. Let's demonstrate this on the Gradient analyzer. First, we compute the heatmap of an explanation without augmentation:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = Gradient(model)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"Now we wrap the analyzer in a NoiseAugmentation with 50 samples of noise. By default, the noise is sampled from a Gaussian distribution with mean 0 and standard deviation 1.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = NoiseAugmentation(Gradient(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"Note that a higher sample size is desired, as it will lead to a smoother heatmap. 
However, this comes at the cost of a longer computation time.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"We can also set the standard deviation of the Gaussian distribution:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = NoiseAugmentation(Gradient(model), 50, 0.1)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"When used with a Gradient analyzer, this is equivalent to SmoothGrad:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = SmoothGrad(model, 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"We can also use any distribution from Distributions.jl, for example Poisson noise with rate λ = 0.5:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"using Distributions\n\nanalyzer = NoiseAugmentation(Gradient(model), 50, Poisson(0.5))\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"It is also possible to define your own distributions or mixture distributions.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"NoiseAugmentation can be combined with any analyzer type, for example LRP:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = NoiseAugmentation(LRP(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/#Integration-augmentation","page":"Input augmentations","title":"Integration augmentation","text":"","category":"section"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"The InterpolationAugmentation wrapper computes explanations averaged over n steps of linear interpolation between the input and a reference input, which is set to zero(input) by default:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = InterpolationAugmentation(Gradient(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"When used with a Gradient analyzer, this is equivalent to IntegratedGradients:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = IntegratedGradients(model, 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"To select a different reference input, pass it to the analyze or heatmap function using the keyword argument input_ref. 
Note that this is an arbitrary example for the sake of demonstration.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"matrix_of_ones = ones(Float32, size(input))\n\nanalyzer = InterpolationAugmentation(Gradient(model), 50)\nheatmap(input, analyzer; input_ref=matrix_of_ones)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"Once again, InterpolationAugmentation can be combined with any analyzer type, for example LRP:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = InterpolationAugmentation(LRP(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"This page was generated using Literate.jl.","category":"page"},{"location":"lrp/developer/#lrp-dev-docs","page":"Developer documentation","title":"LRP developer documentation","text":"","category":"section"},{"location":"lrp/developer/#Generic-LRP-rule-implementation","page":"Developer documentation","title":"Generic LRP rule implementation","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Before we dive into package-specific implementation details in later sections of this developer documentation, we first need to cover some fundamentals of LRP, starting with our notation.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The generic LRP rule, of which the 0-, epsilon- and gamma-rules are special cases, reads[1][2]","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"beginequation\nR_j^k = sum_i fracrho(W_ij) a_j^kepsilon + sum_l rho(W_il) a_l^k + rho(b_i) R_i^k+1\nendequation","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"where ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"W is the weight matrix of the layer\nb is the bias vector of the layer\na^k is the activation vector at the input of layer k\na^k+1 is the activation vector at the output of layer k\nR^k is the relevance vector at the input of layer k\nR^k+1 is the relevance vector at the output of layer k\nrho is a function that modifies parameters (what we call modify_parameters)\nepsilon is a small positive constant to avoid division by zero","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Subscript characters are used to index vectors and matrices (e.g. b_i is the i-th entry of the bias vector), while the superscripts ^k and ^k+1 indicate the relative positions of activations a and relevances R in the model. For any k, a^k and R^k have the same shape. 
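Since LaTeX markup is stripped in this extracted text, here is a reconstruction of the generic rule stated above in standard notation, using exactly the symbols defined in the notation list:

\begin{equation}
R_j^k = \sum_i \frac{\rho(W_{ij}) \, a_j^k}{\epsilon + \sum_l \rho(W_{il}) \, a_l^k + \rho(b_i)} \, R_i^{k+1}
\end{equation}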
","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Note that every term in this equation is a scalar value, which removes the need to differentiate between matrix and element-wise operations.","category":"page"},{"location":"lrp/developer/#Linear-layers","page":"Developer documentation","title":"Linear layers","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"LRP was developed for deep rectifier networks, neural networks that are composed of linear layers with ReLU activation functions. Linear layers are layers that can be represented as affine transformations of the form ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"beginequation\nf(x) = Wx + b quad \nendequation","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"This includes most commonly used types of layers, such as fully connected layers, convolutional layers, pooling layers, and normalization layers.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We will now describe a generic implementation of equation (1) that can be applied to any linear layer.","category":"page"},{"location":"lrp/developer/#lrp-dev-ad-fallback","page":"Developer documentation","title":"The automatic differentiation fallback","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The computation of the generic LRP rule can be decomposed into four steps[1]:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"beginarraylr\nz_i = sum_l rho(W_il) a_l^k + rho(b_i) text(Step 1) 05em\ns_i = R_i^k+1 (z_i + epsilon) text(Step 2) 05em\nc_j = sum_i rho(W_ij) s_i text(Step 3) 05em\nR_j^k = a_j^k c_j text(Step 4)\nendarray","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"To compute step 1, we first create a modified layer, applying rho to the weights and biases and replacing the activation function with the identity function. The vector z is then computed using a forward pass through the modified layer. It has the same dimensionality as R^k+1 and a^k+1.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Step 2 is an element-wise division of R^k+1 by z. To avoid division by zero, a small constant epsilon is added to z when necessary.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Step 3 is trivial for fully connected layers, as rho(W) corresponds to the weight matrix of the modified layer. For other types of linear layers, however, the implementation is more involved: A naive approach would be to construct a large matrix W that corresponds to the affine transformation Wx+b implemented by the modified layer. This has multiple drawbacks:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"the implementation is error-prone\na separate implementation is required for each type of linear layer\nfor some layer types, e.g. 
pooling layers, the matrix W depends on the input\nfor many layer types, e.g. convolutional layers, the matrix W is very large and sparse, mostly consisting of zeros, leading to a large computational overhead","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"A better approach can be found by observing that the matrix W is the Jacobian of the affine transformation f(x) = Wx + b. The vector c computed in step 3 corresponds to c = s^T W, a so-called Vector-Jacobian-Product (VJP) of the vector s with the Jacobian W. ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"VJPs are the fundamental building blocks of reverse-mode automatic differentiation (AD), and therefore implemented by most AD frameworks in a highly performant, matrix-free, GPU-accelerated manner. Note that computing the VJP is much more efficient than first computing the full Jacobian W and later multiplying it with s. This is due to the fact that computing the full Jacobian of a function f mathbbR^n rightarrow mathbbR^m requires computing m VJPs.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Functions that compute VJP's are commonly called pullbacks. Using the Zygote.jl AD system, we obtain the output z of a modified layer and its pullback back in a single function call:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"z, back = Zygote.pullback(modified_layer, aᵏ)","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We then call the pullback with the vector s to obtain c:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"c = back(s)","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Finally, step 4 consists of an element-wise multiplication of the vector c with the input activation vector a^k, resulting in the relevance vector R^k.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"This AD-based implementation is used in ExplainableAI.jl as the default method for all layer types that don't have a more optimized implementation (e.g. fully connected layers). We will refer to it as the \"AD fallback\".","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"For more background information on automatic differentiation, refer to the JuML lecture on AD.","category":"page"},{"location":"lrp/developer/#LRP-analyzer-struct","page":"Developer documentation","title":"LRP analyzer struct","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The LRP analyzer struct holds three fields: the model to analyze, the LRP rules to use, and pre-allocated modified_layers.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"As described in the section on Composites, applying a composite to a model will return LRP rules in nested ChainTuple and ParallelTuples. 
These wrapper types are used to match the structure of Flux models with Chain and Parallel layers while avoiding type piracy.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"When creating an LRP analyzer with the default keyword argument flatten=true, flatten_model is called on the model and rules. This is done for performance reasons, as discussed in Flattening the model.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"After passing the Model checks, modified layers are pre-allocated, once again using the ChainTuple and ParallelTuple wrapper types to match the structure of the model. If a rule doesn't modify a layer, the corresponding entry in modified_layers is set to nothing, avoiding unnecessary allocations. If a rule requires multiple modified layers, the corresponding entry in modified_layers is set to a named tuple of modified layers. Apart from these special cases, the corresponding entry in modified_layers is simply set to the modified layer.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"For a detailed description of the layer modification mechanism, refer to the section on Advanced layer modification.","category":"page"},{"location":"lrp/developer/#Forward-and-reverse-pass","page":"Developer documentation","title":"Forward and reverse pass","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"When calling an LRP analyzer, a forward pass through the model is performed, saving the activations aᵏ for all layers k in a vector called acts. This vector of activations is then used to pre-allocate the relevances R^k for all layers in a vector called rels. This is possible since for any layer k, a^k and R^k have the same shape. Finally, the last array of relevances R^N in rels is set to zeros, except for the specified output neuron, which is set to one.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We can now run the reverse pass, iterating backwards over the layers in the model and writing relevances R^k into the pre-allocated array rels:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"for k in length(model):-1:1\n # └─ loop over layers in reverse\n lrp!(rels[k], rules[k], layers[k], modified_layers[k], acts[k], rels[k+1])\n # └─ Rᵏ: modified in-place └─ aᵏ └─ Rᵏ⁺¹\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"This is done by calling low-level functions","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer, modified_layer, aᵏ, Rᵏ⁺¹)\n Rᵏ .= ...\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"that implement individual LRP rules. The correct rule is applied via multiple dispatch on the types of the arguments rule and modified_layer. 
The relevance Rᵏ is then computed based on the input activation aᵏ and the output relevance Rᵏ⁺¹.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The exclamation point in the function name lrp! is a naming convention in Julia to denote functions that modify their arguments – in this case the first argument rels[k], which corresponds to R^k.","category":"page"},{"location":"lrp/developer/#Rule-calls","page":"Developer documentation","title":"Rule calls","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"As discussed in The AD fallback, the default LRP fallback for unknown layers uses AD via Zygote. Now that you are familiar with both the API and the four-step computation of the generic LRP rules, the following implementation should be straightforward to understand:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer, modified_layer, aᵏ, Rᵏ⁺¹)\n # Use modified_layer if available\n layer = isnothing(modified_layer) ? layer : modified_layer\n\n ãᵏ = modify_input(rule, aᵏ)\n z, back = Zygote.pullback(layer, ãᵏ)\n s = Rᵏ⁺¹ ./ modify_denominator(rule, z)\n Rᵏ .= ãᵏ .* only(back(s))\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Not only does lrp! dispatch on the rule and layer type, but so do the internal functions modify_input and modify_denominator. Unknown layers that are registered in the LRP_CONFIG use this exact function.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"All LRP rules are implemented in the file /src/lrp/rules.jl.","category":"page"},{"location":"lrp/developer/#Specialized-implementations","page":"Developer documentation","title":"Specialized implementations","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"In other programming languages, LRP is commonly implemented in an object-oriented manner, providing a single backward pass implementation per rule. This can be seen as a form of single dispatch on the rule type.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Using multiple dispatch, we can implement specialized versions of lrp! that not only take into account the rule type, but also the layer type, for example for fully connected layers or reshaping layers. ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Reshaping layers don't affect attributions. We can therefore avoid the computational overhead of AD by writing a specialized implementation that simply reshapes back:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer::ReshapingLayer, modified_layer, aᵏ, Rᵏ⁺¹)\n Rᵏ .= reshape(Rᵏ⁺¹, size(aᵏ))\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We can even provide a specialized implementation of the generic LRP rule for Dense layers. 
Since we can access the weight matrix directly, we can skip the use of automatic differentiation and implement the following equation directly, using Einstein summation notation:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"R_j^k = sum_i fracrho(W_ij) a_j^kepsilon + sum_l rho(W_il) a_l^k + rho(b_i) R_i^k+1","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer::Dense, modified_layer, aᵏ, Rᵏ⁺¹)\n # Use modified_layer if available\n layer = isnothing(modified_layer) ? layer : modified_layer\n\n ãᵏ = modify_input(rule, aᵏ)\n z = modify_denominator(rule, layer(ãᵏ))\n\n # Implement LRP using Einsum notation, where `b` is the batch index\n @tullio Rᵏ[j, b] = layer.weight[i, j] * ãᵏ[j, b] / z[i, b] * Rᵏ⁺¹[i, b]\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"For maximum low-level control beyond modify_input and modify_denominator, you can also implement your own lrp! function and dispatch on individual rule types MyRule and layer types MyLayer:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule::MyRule, layer::MyLayer, modified_layer, aᵏ, Rᵏ⁺¹)\n Rᵏ .= ...\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"[1]: G. Montavon et al., Layer-Wise Relevance Propagation: An Overview","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"[2]: W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"EditURL = \"../literate/example.jl\"","category":"page"},{"location":"generated/example/#docs-getting-started","page":"Getting started","title":"Getting started","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For this first example, we already have loaded a pre-trained LeNet5 model to look at explanations on the MNIST dataset.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"using ExplainableAI\nusing Flux\n\nusing BSON # hide\nmodel = BSON.load(\"../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"note: Supported models\nExplainableAI.jl can be used on any differentiable classifier.Only LRP requires models from Flux.jl.","category":"page"},{"location":"generated/example/#Preparing-the-model","page":"Getting started","title":"Preparing the model","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For models with softmax activations on the output, it is necessary to call strip_softmax before analyzing.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"model = strip_softmax(model);\nnothing #hide","category":"page"},{"location":"generated/example/#Preparing-the-input-data","page":"Getting started","title":"Preparing the input 
data","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"We use MLDatasets to load a single image from the MNIST dataset:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"using MLDatasets\nusing ImageCore, ImageIO, ImageShow\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\n\nconvert2image(MNIST, x)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"By convention in Flux.jl, this input needs to be resized to WHCN format by adding a color channel and batch dimensions.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"input = reshape(x, 28, 28, 1, :);\nnothing #hide","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"note: Input format\nFor any explanation of a model, ExplainableAI.jl assumes the batch dimension to come last in the input.For the purpose of heatmapping, the input is assumed to be in WHCN order (width, height, channels, batch), which is Flux.jl's convention.","category":"page"},{"location":"generated/example/#Explanations","page":"Getting started","title":"Explanations","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"We can now select an analyzer of our choice and call analyze to get an Explanation:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"analyzer = LRP(model)\nexpl = analyze(input, analyzer);\nnothing #hide","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"The return value expl is of type Explanation and bundles the following data:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.val: the numerical output of the analyzer, e.g. an attribution or gradient\nexpl.output: the model output for the given analyzer input\nexpl.neuron_selection: the neuron index used for the explanation\nexpl.analyzer: a symbol corresponding the used analyzer, e.g. :LRP\nexpl.extras: an optional named tuple that can be used by analyzers to return additional information.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"We used an LRP analyzer, so expl.analyzer is :LRP.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.analyzer","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"By default, the explanation is computed for the maximally activated output neuron. 
Since our digit is a 9 and Julia's indexing is 1-based, the output neuron at index 10 of our trained model is maximally activated.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.neuron_selection","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Finally, we obtain the result of the analyzer in form of an array.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.val","category":"page"},{"location":"generated/example/#Heatmapping-basics","page":"Getting started","title":"Heatmapping basics","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Since the array expl.val is not very informative at first sight, we can visualize Explanations by computing a heatmap:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"heatmap(expl)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"If we are only interested in the heatmap, we can combine analysis and heatmapping into a single function call:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"heatmap(input, analyzer)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For a more detailed explanation of the heatmap function, refer to the heatmapping section.","category":"page"},{"location":"generated/example/#docs-analyzers-list","page":"Getting started","title":"List of analyzers","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Currently, the following analyzers are implemented:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Gradient\nInputTimesGradient\nSmoothGrad\nIntegratedGradients\nLRP\nRules\nZeroRule\nEpsilonRule\nGammaRule\nGeneralizedGammaRule\nWSquareRule\nFlatRule\nZBoxRule\nZPlusRule\nAlphaBetaRule\nPassRule\nComposite\nEpsilonGammaBox\nEpsilonPlus\nEpsilonPlusFlat\nEpsilonAlpha2Beta1\nEpsilonAlpha2Beta1Flat","category":"page"},{"location":"generated/example/#Neuron-selection","page":"Getting started","title":"Neuron selection","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"By passing an additional index to our call to analyze, we can compute an explanation with respect to a specific output neuron. 
Let's see why the output wasn't interpreted as a 4 (output neuron at index 5)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl = analyze(input, analyzer, 5)\nheatmap(expl)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"This heatmap shows us that the \"upper loop\" of the hand-drawn 9 has negative relevance with respect to the output neuron corresponding to digit 4!","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"note: Note\nThe output neuron can also be specified when calling heatmap:heatmap(input, analyzer, 5)","category":"page"},{"location":"generated/example/#Analyzing-batches","page":"Getting started","title":"Analyzing batches","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"ExplainableAI also supports explanations of input batches:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"batchsize = 20\nxs, _ = MNIST(Float32, :test)[1:batchsize]\nbatch = reshape(xs, 28, 28, 1, :) # reshape to WHCN format\nexpl = analyze(batch, analyzer);\nnothing #hide","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"This will return a single Explanation expl for the entire batch. Calling heatmap on expl will detect the batch dimension and return a vector of heatmaps.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"heatmap(expl)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For more information on heatmapping batches, refer to the heatmapping documentation.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"EditURL = \"../../literate/lrp/custom_layer.jl\"","category":"page"},{"location":"generated/lrp/custom_layer/#docs-custom-layers","page":"Supporting new layer types","title":"Supporting new layers and activation functions","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"One of the design goals of ExplainableAI.jl is to combine ease of use and extensibility for the purpose of research. 
This example will show you how to extent LRP to new layer types and activation functions.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"using Flux\nusing ExplainableAI","category":"page"},{"location":"generated/lrp/custom_layer/#docs-lrp-model-checks","page":"Supporting new layer types","title":"Model checks","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"To assure that novice users use LRP according to best practices, ExplainableAI.jl runs strict model checks when creating an LRP analyzer.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Let's demonstrate this by defining a new layer type that doubles its input","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"struct MyDoublingLayer end\n(::MyDoublingLayer)(x) = 2 * x\n\nmylayer = MyDoublingLayer()\nmylayer([1, 2, 3])","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"and by defining a model that uses this layer:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"model = Chain(\n Dense(100, 20),\n MyDoublingLayer()\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Creating an LRP analyzer, e.g. LRP(model), will throw an ArgumentError and print a summary of the model check in the REPL:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"julia> LRP(model)\n ChainTuple(\n Dense(100 => 20) => supported,\n MyDoublingLayer() => unknown layer type,\n ),\n\n LRP model check failed\n ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡\n\n Found unknown layer types or activation functions that are not supported by ExplainableAI's LRP implementation yet.\n\n LRP assumes that the model is a deep rectifier network that only contains ReLU-like activation functions.\n\n If you think the missing layer should be supported by default, please submit an issue (https://github.com/adrhill/ExplainableAI.jl/issues).\n\n [...]\n\nERROR: Unknown layer or activation function found in model","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"LRP should only be used on deep rectifier networks and ExplainableAI doesn't recognize MyDoublingLayer as a compatible layer by default. It will therefore return an error and a model check summary instead of returning an incorrect explanation.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"However, if we know MyDoublingLayer is compatible with deep rectifier networks, we can register it to tell ExplainableAI that it is ok to use. 
This will be shown in the following section.","category":"page"},{"location":"generated/lrp/custom_layer/#Registering-layers","page":"Supporting new layer types","title":"Registering layers","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"The error in the model check will stop after registering our custom layer type MyDoublingLayer as \"supported\" by ExplainableAI.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"This is done using the function LRP_CONFIG.supports_layer, which should be set to return true for the type MyDoublingLayer:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"LRP_CONFIG.supports_layer(::MyDoublingLayer) = true","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Now we can create and run an analyzer without getting an error:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"analyzer = LRP(model)","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"note: Registering functions\nFlux's Chains can also contain functions, e.g. flatten. This kind of layer can be registered asLRP_CONFIG.supports_layer(::typeof(flatten)) = true","category":"page"},{"location":"generated/lrp/custom_layer/#Registering-activation-functions","page":"Supporting new layer types","title":"Registering activation functions","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"The mechanism for registering custom activation functions is analogous to that of custom layers:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"myrelu(x) = max.(0, x)\n\nmodel = Chain(\n Dense(784, 100, myrelu),\n Dense(100, 10),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Once again, creating an LRP analyzer for this model will throw an ArgumentError and display the following model check summary:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"julia> LRP(model)\n ChainTuple(\n Dense(784 => 100, myrelu) => unsupported or unknown activation function myrelu,\n Dense(100 => 10) => supported,\n ),\n\n LRP model check failed\n ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡\n\n Found unknown layer types or activation functions that are not supported by ExplainableAI's LRP implementation yet.\n\n LRP assumes that the model is a deep rectifier network that only contains ReLU-like activation functions.\n\n If you think the missing layer should be supported by default, please submit an issue (https://github.com/adrhill/ExplainableAI.jl/issues).\n\n [...]\n\nERROR: Unknown layer or activation function found in model","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Registation works by defining the function 
LRP_CONFIG.supports_activation as true:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"LRP_CONFIG.supports_activation(::typeof(myrelu)) = true","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"now the analyzer can be created without error:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"analyzer = LRP(model)","category":"page"},{"location":"generated/lrp/custom_layer/#Skipping-model-checks","page":"Supporting new layer types","title":"Skipping model checks","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"All model checks can be skipped at your own risk by setting the LRP-analyzer keyword argument skip_checks=true.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"struct UnknownLayer end\n(::UnknownLayer)(x) = x\n\nunknown_activation(x) = max.(0, x)\n\nmodel = Chain(Dense(100, 20, unknown_activation), MyDoublingLayer())\n\nLRP(model; skip_checks=true)","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Instead of throwing the usual ERROR: Unknown layer or activation function found in model, the LRP analyzer was created without having to register either the layer UnknownLayer or the activation function unknown_activation.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"EditURL = \"../../literate/lrp/basics.jl\"","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-basics","page":"Basic usage","title":"Basic usage of LRP","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This example will show you best practices for using LRP, building on the basics shown in the Getting started section.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"note: TLDR\nUse strip_softmax to strip the output softmax from your model. Otherwise model checks will fail.\nUse canonize to fuse linear layers.\nDon't just call LRP(model), instead use a Composite to apply LRP rules to your model. Read Assigning rules to layers.\nBy default, LRP will call flatten_model to flatten your model. 
This reduces computational overhead.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"We start out by loading a small convolutional neural network:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"using ExplainableAI\nusing Flux\n\nmodel = Chain(\n Chain(\n Conv((3, 3), 3 => 8, relu; pad=1),\n Conv((3, 3), 8 => 8, relu; pad=1),\n MaxPool((2, 2)),\n Conv((3, 3), 8 => 16; pad=1),\n BatchNorm(16, relu),\n Conv((3, 3), 16 => 8, relu; pad=1),\n BatchNorm(8, relu),\n ),\n Chain(\n Flux.flatten,\n Dense(2048 => 512, relu),\n Dropout(0.5),\n Dense(512 => 100, softmax)\n ),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This model contains two chains: the convolutional layers and the fully connected layers.","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-model-prep","page":"Basic usage","title":"Model preparation","text":"","category":"section"},{"location":"generated/lrp/basics/#docs-lrp-strip-softmax","page":"Basic usage","title":"Stripping the output softmax","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"When using LRP, it is recommended to explain output logits instead of probabilities. This can be done by stripping the output softmax activation from the model using the strip_softmax function:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"model = strip_softmax(model)","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"If you don't remove the output softmax, model checks will fail.","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-canonization","page":"Basic usage","title":"Canonizing the model","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP is not invariant to a model's implementation. Applying the GammaRule to two linear layers in a row will yield different results than first fusing the two layers into one linear layer and then applying the rule. This fusing is called \"canonization\" and can be done using the canonize function:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"model = canonize(model)","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-flatten-model","page":"Basic usage","title":"Flattening the model","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"ExplainableAI.jl's LRP implementation supports nested Flux Chains and Parallel layers. However, it is recommended to flatten the model before analyzing it.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP is implemented by first running a forward pass through the model, keeping track of the intermediate activations, followed by a backward pass that computes the relevances.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"To keep the LRP implementation simple and maintainable, ExplainableAI.jl does not pre-compute \"nested\" activations. 
Instead, for every internal chain, a new forward pass is run to compute activations.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"By \"flattening\" a model, this overhead can be avoided. For this purpose, ExplainableAI.jl provides the function flatten_model:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"model_flat = flatten_model(model)","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This function is called by default when creating an LRP analyzer. Note that we pass the unflattened model to the analyzer, but analyzer.model is flattened:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"analyzer = LRP(model)\nanalyzer.model","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"If this flattening is not desired, it can be disabled by passing the keyword argument flatten=false to the LRP constructor.","category":"page"},{"location":"generated/lrp/basics/#LRP-rules","page":"Basic usage","title":"LRP rules","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"By default, the LRP constructor will assign the ZeroRule to all layers.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP(model)","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This analyzer will return heatmaps that look identical to InputTimesGradient.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP's strength lies in assigning different rules to different layers, based on their functionality in the neural network[1]. 
ExplainableAI.jl implements many LRP rules out of the box, but it is also possible to implement custom rules.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"To assign different rules to different layers, use one of the composites presets, or create your own composite, as described in Assigning rules to layers.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"composite = EpsilonPlusFlat() # using composite preset EpsilonPlusFlat","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP(model, composite)","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-layerwise","page":"Basic usage","title":"Computing layerwise relevances","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"If you are interested in computing layerwise relevances, call analyze with an LRP analyzer and the keyword argument layerwise_relevances=true.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"The layerwise relevances can be accessed in the extras field of the returned Explanation:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"input = rand(Float32, 32, 32, 3, 1) # dummy input for our convolutional neural network\n\nexpl = analyze(input, analyzer; layerwise_relevances=true)\nexpl.extras.layerwise_relevances","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"Note that the layerwise relevances are only kept for layers in the outermost Chain of the model. When using our unflattened model, we only obtain three layerwise relevances, one for each chain in the model and the output relevance:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"analyzer = LRP(model; flatten=false) # use unflattened model\n\nexpl = analyze(input, analyzer; layerwise_relevances=true)\nexpl.extras.layerwise_relevances","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-performance","page":"Basic usage","title":"Performance tips","text":"","category":"section"},{"location":"generated/lrp/basics/#Using-LRP-without-a-GPU","page":"Basic usage","title":"Using LRP without a GPU","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"Since ExplainableAI.jl's LRP implementation makes use of Tullio.jl, analysis can be accelerated by loading either","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"a package from the JuliaGPU ecosystem, e.g. CUDA.jl, if a GPU is available\nLoopVectorization.jl if only a CPU is available.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This only requires loading the LoopVectorization.jl package before ExplainableAI.jl:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"using LoopVectorization\nusing ExplainableAI","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"[1]: G. 
Montavon et al., Layer-Wise Relevance Propagation: An Overview","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"EditURL = \"../../literate/lrp/custom_rules.jl\"","category":"page"},{"location":"generated/lrp/custom_rules/#docs-custom-rules","page":"Custom LRP rules","title":"Custom LRP rules","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"One of the design goals of ExplainableAI.jl is to combine ease of use and extensibility for the purpose of research.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"This example will show you how to implement custom LRP rules, building on the basics shown in the Getting started section.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"We start out by loading the same pre-trained LeNet5 model and MNIST input data:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"using ExplainableAI\nusing Flux\nusing MLDatasets\nusing ImageCore\nusing BSON\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\ninput = reshape(x, 28, 28, 1, :)\n\nmodel = BSON.load(\"../../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/lrp/custom_rules/#Implementing-a-custom-rule","page":"Custom LRP rules","title":"Implementing a custom rule","text":"","category":"section"},{"location":"generated/lrp/custom_rules/#Step-1:-Define-rule-struct","page":"Custom LRP rules","title":"Step 1: Define rule struct","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"Let's define a rule that modifies the weights and biases of our layer on the forward pass. 
The rule has to be of supertype AbstractLRPRule.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"struct MyGammaRule <: AbstractLRPRule end","category":"page"},{"location":"generated/lrp/custom_rules/#docs-custom-rules-impl","page":"Custom LRP rules","title":"Step 2: Implement rule behavior","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"It is then possible to dispatch on the following four utility functions with the rule type MyGammaRule to define custom rules without writing boilerplate code.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"modify_input(rule::MyGammaRule, input)\nmodify_parameters(rule::MyGammaRule, parameter)\nmodify_denominator(rule::MyGammaRule, denominator)\nis_compatible(rule::MyGammaRule, layer)","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"By default:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"modify_input doesn't change the input\nmodify_parameters doesn't change the parameters\nmodify_denominator avoids division by zero by adding a small epsilon-term (1.0f-9)\nis_compatible returns true if a layer has fields weight and bias","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"To extend internal functions, import them explicitly:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"import ExplainableAI: modify_parameters\n\nmodify_parameters(::MyGammaRule, param) = param + 0.25f0 * relu.(param)","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"Note that we didn't implement three of the four functions. This is because the defaults are sufficient to implement the GammaRule.","category":"page"},{"location":"generated/lrp/custom_rules/#Step-3:-Use-rule-in-LRP-analyzer","page":"Custom LRP rules","title":"Step 3: Use rule in LRP analyzer","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"We can directly use our rule to make an analyzer!","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"rules = [\n ZPlusRule(),\n EpsilonRule(),\n MyGammaRule(), # our custom GammaRule\n EpsilonRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n]\nanalyzer = LRP(model, rules)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"We just implemented our own version of the γ-rule in 2 lines of code. 
The heatmap perfectly matches the pre-implemented GammaRule:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"rules = [\n ZPlusRule(),\n EpsilonRule(),\n GammaRule(), # XAI.jl's GammaRule\n EpsilonRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n]\nanalyzer = LRP(model, rules)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/lrp/custom_rules/#Performance-tips","page":"Custom LRP rules","title":"Performance tips","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"Make sure functions like modify_parameters don't promote the type of weights (e.g. from Float32 to Float64).\nIf your rule MyRule doesn't modify weights or biases, defining modify_layer(::MyRule, layer) = nothing can reduce memory allocations and improve performance.","category":"page"},{"location":"generated/lrp/custom_rules/#docs-custom-rules-advanced","page":"Custom LRP rules","title":"Advanced layer modification","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"For more granular control over weights and biases, modify_weight and modify_bias can be used.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"If the layer doesn't use weights (layer.weight) and biases (layer.bias), ExplainableAI provides a lower-level variant of modify_parameters called modify_layer. This function is expected to take a layer and return a new, modified layer. To add compatibility checks between rule and layer types, extend is_compatible.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"warning: Extending modify_layer\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. 
modify_weight and modify_bias in turn call modify_parameters by default.The default call structure looks as follows:┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘Therefore modify_layer should only be extended for a specific rule and a specific layer type.","category":"page"},{"location":"generated/lrp/custom_rules/#Advanced-LRP-rules","page":"Custom LRP rules","title":"Advanced LRP rules","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"To implement custom LRP rules that require more than modify_layer, modify_input and modify_denominator, take a look at the LRP developer documentation.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"This page was generated using Literate.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = ExplainableAI","category":"page"},{"location":"#ExplainableAI.jl","page":"Home","title":"ExplainableAI.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Explainable AI in Julia using Flux.jl.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To install this package and its dependencies, open the Julia REPL and run ","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ]add ExplainableAI","category":"page"},{"location":"#Manual","page":"Home","title":"Manual","text":"","category":"section"},{"location":"#General-usage","page":"Home","title":"General usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\n \"generated/example.md\",\n \"generated/heatmapping.md\",\n \"generated/augmentations.md\",\n]\nDepth = 3","category":"page"},{"location":"#LRP","page":"Home","title":"LRP","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\n \"generated/lrp/basics.md\",\n \"generated/lrp/composites.md\",\n \"generated/lrp/custom_layer.md\",\n \"generated/lrp/custom_rules.md\",\n \"lrp/developer.md\",\n]\nDepth = 3","category":"page"},{"location":"#API-reference","page":"Home","title":"API reference","text":"","category":"section"},{"location":"#General","page":"Home","title":"General","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\"api.md\"]\nDepth = 2","category":"page"},{"location":"#LRP-2","page":"Home","title":"LRP","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\"lrp/api.md\"]\nDepth = 2","category":"page"}] +[{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"EditURL = \"../../literate/lrp/composites.jl\"","category":"page"},{"location":"generated/lrp/composites/#docs-composites","page":"Assigning rules to layers","title":"Assigning LRP rules to 
layers","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"In this example, we will show how to assign LRP rules to specific layers. For this purpose, we first define a small VGG-like convolutional neural network:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"using ExplainableAI\nusing Flux\n\nmodel = Chain(\n Chain(\n Conv((3, 3), 3 => 8, relu; pad=1),\n Conv((3, 3), 8 => 8, relu; pad=1),\n MaxPool((2, 2)),\n Conv((3, 3), 8 => 16, relu; pad=1),\n Conv((3, 3), 16 => 16, relu; pad=1),\n MaxPool((2, 2)),\n ),\n Chain(\n Flux.flatten,\n Dense(1024 => 512, relu),\n Dropout(0.5),\n Dense(512 => 100, relu)\n ),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/composites/#docs-composites-manual","page":"Assigning rules to layers","title":"Manually assigning rules","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"When creating an LRP-analyzer, we can assign individual rules to each layer. As we can see above, our model is a Chain of two Flux Chains. Using flatten_model, we can flatten the model into a single Chain:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"model_flat = flatten_model(model)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"This allows us to define an LRP analyzer using an array of rules matching the length of the Flux chain:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"rules = [\n FlatRule(),\n ZPlusRule(),\n ZeroRule(),\n ZPlusRule(),\n ZPlusRule(),\n ZeroRule(),\n PassRule(),\n EpsilonRule(),\n PassRule(),\n EpsilonRule(),\n];\nnothing #hide","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"The LRP analyzer will show a summary of how layers and rules got matched:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"LRP(model_flat, rules)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"However, this approach only works for models that can be fully flattened. For unflattened models and models containing Parallel layers, we can compose rules using ChainTuples and ParallelTuples which match the model structure:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"rules = ChainTuple(\n ChainTuple(\n FlatRule(),\n ZPlusRule(),\n ZeroRule(),\n ZPlusRule(),\n ZPlusRule(),\n ZeroRule()\n ),\n ChainTuple(\n PassRule(),\n EpsilonRule(),\n PassRule(),\n EpsilonRule(),\n ),\n)\n\nanalyzer = LRP(model, rules; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"note: Keyword argument `flatten`\nWe used the LRP keyword argument flatten=false to showcase that the structure of the model can be preserved. 
For performance reasons, the default flatten=true is recommended.","category":"page"},{"location":"generated/lrp/composites/#docs-composites-custom","page":"Assigning rules to layers","title":"Custom composites","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"Instead of manually defining a list of rules, we can also define a Composite. A composite constructs a list of LRP-rules by sequentially applying the composite primitives it contains.","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To obtain the same set of rules as in the previous example, we can define","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"composite = Composite(\n GlobalTypeMap( # the following maps of layer types to LRP rules are applied globally\n Conv => ZPlusRule(), # apply ZPlusRule on all Conv layers\n Dense => EpsilonRule(), # apply EpsilonRule on all Dense layers\n Dropout => PassRule(), # apply PassRule on all Dropout layers\n MaxPool => ZeroRule(), # apply ZeroRule on all MaxPool layers\n typeof(Flux.flatten) => PassRule(), # apply PassRule on all flatten layers\n ),\n FirstLayerMap( # the following rule is applied to the first layer\n FlatRule()\n ),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"We now construct an LRP analyzer from composite","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"analyzer = LRP(model, composite; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"As you can see, this analyzer contains the same rules as our previous one. 
To compute rules for a model without creating an analyzer, use lrp_rules:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"lrp_rules(model, composite)","category":"page"},{"location":"generated/lrp/composites/#Composite-primitives","page":"Assigning rules to layers","title":"Composite primitives","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"The following Composite primitives can be used to construct a Composite.","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To apply a single rule, use:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"LayerMap to apply a rule to a layer at a given index\nGlobalMap to apply a rule to all layers\nRangeMap to apply a rule to a positional range of layers\nFirstLayerMap to apply a rule to the first layer\nLastLayerMap to apply a rule to the last layer","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To apply a set of rules to layers based on their type, use:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"GlobalTypeMap to apply a dictionary that maps layer types to LRP-rules\nRangeTypeMap for a TypeMap on generalized ranges\nFirstLayerTypeMap for a TypeMap on the first layer of a model\nLastLayerTypeMap for a TypeMap on the last layer\nFirstNTypeMap for a TypeMap on the first n layers","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"Primitives are called sequentially in the order the Composite was created with and overwrite rules specified by previous primitives.","category":"page"},{"location":"generated/lrp/composites/#Assigning-a-rule-to-a-specific-layer","page":"Assigning rules to layers","title":"Assigning a rule to a specific layer","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To assign a rule to a specific layer, we can use LayerMap, which maps an LRP-rule to all layers in the model at the given index.","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"To display indices, use the show_layer_indices helper function:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"show_layer_indices(model)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"Let's demonstrate LayerMap by assigning a specific rule to the last Conv layer at index (1, 5):","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"composite = Composite(LayerMap((1, 5), EpsilonRule()))\n\nLRP(model, composite; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"This approach also works with Parallel 
layers.","category":"page"},{"location":"generated/lrp/composites/#docs-composites-presets","page":"Assigning rules to layers","title":"Composite presets","text":"","category":"section"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"ExplainableAI.jl provides a set of default composites. A list of all implemented default composites can be found in the API reference, e.g. the EpsilonPlusFlat composite:","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"composite = EpsilonPlusFlat()","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"analyzer = LRP(model, composite; flatten=false)","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"","category":"page"},{"location":"generated/lrp/composites/","page":"Assigning rules to layers","title":"Assigning rules to layers","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"EditURL = \"../literate/heatmapping.jl\"","category":"page"},{"location":"generated/heatmapping/#docs-heatmapping","page":"Heatmapping","title":"Heatmapping","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since numerical explanations are not very informative at first sight, we can visualize them by computing a heatmap. This page showcases different options and preset for heatmapping, building on the basics shown in the Getting started section.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"We start out by loading the same pre-trained LeNet5 model and MNIST input data:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using ExplainableAI\nusing Flux\n\nusing BSON # hide\nmodel = BSON.load(\"../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using MLDatasets\nusing ImageCore, ImageIO, ImageShow\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\ninput = reshape(x, 28, 28, 1, :)\n\nconvert2image(MNIST, x)","category":"page"},{"location":"generated/heatmapping/#Automatic-heatmap-presets","page":"Heatmapping","title":"Automatic heatmap presets","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"The function heatmap automatically applies common presets for each method.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since InputTimesGradient and LRP both compute attributions, their presets are similar. 
Gradient methods however are typically shown in grayscale:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"analyzer = Gradient(model)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"analyzer = InputTimesGradient(model)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/heatmapping/#Custom-heatmap-settings","page":"Heatmapping","title":"Custom heatmap settings","text":"","category":"section"},{"location":"generated/heatmapping/#Color-schemes","page":"Heatmapping","title":"Color schemes","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"We can partially or fully override presets by passing keyword arguments to heatmap. For example, we can use a custom color scheme from ColorSchemes.jl using the keyword argument cs:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using ColorSchemes\n\nexpl = analyze(input, analyzer)\nheatmap(expl; cs=ColorSchemes.jet)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; cs=ColorSchemes.inferno)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Refer to the ColorSchemes.jl catalogue for a gallery of available color schemes.","category":"page"},{"location":"generated/heatmapping/#docs-heatmap-reduce","page":"Heatmapping","title":"Color channel reduction","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Explanations have the same dimensionality as the inputs to the classifier. For images with multiple color channels, this means that the explanation also has a \"color channel\" dimension.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"The keyword argument reduce can be used to reduce this dimension to a single scalar value for each pixel. 
The following presets are available:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":":sum: sum up color channels (default setting)\n:norm: compute 2-norm over the color channels\n:maxabs: compute maximum(abs, x) over the color channels","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; reduce=:sum)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; reduce=:norm)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; reduce=:maxabs)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since MNIST only has a single color channel, there is no need for reduction and heatmaps look identical.","category":"page"},{"location":"generated/heatmapping/#docs-heatmap-rangescale","page":"Heatmapping","title":"Mapping explanations onto the color scheme","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"To map a color-channel-reduced explanation onto a color scheme, we first need to normalize all values to the range [0, 1].","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"For this purpose, two presets are available through the rangescale keyword argument:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":":extrema: normalize to the minimum and maximum value of the explanation\n:centered: normalize to the maximum absolute value of the explanation. Values of zero will be mapped to the center of the color scheme.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Depending on the color scheme, one of these presets may be more suitable than the other. The default color scheme for InputTimesGradient, seismic, is centered around zero, making :centered a good choice:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:centered)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:extrema)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"However, for the inferno color scheme, which is not centered around zero, :extrema leads to a heatmap with higher contrast.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:centered, cs=ColorSchemes.inferno)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmap(expl; rangescale=:extrema, cs=ColorSchemes.inferno)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"For the full list of heatmap keyword arguments, refer to the heatmap documentation.","category":"page"},{"location":"generated/heatmapping/#docs-heatmapping-batches","page":"Heatmapping","title":"Heatmapping batches","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Heatmapping also works with input batches. 
Let's demonstrate this by using a batch of 100 images from the MNIST dataset:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"xs, ys = MNIST(Float32, :test)[1:100]\nbatch = reshape(xs, 28, 28, 1, :); # reshape to WHCN format\nnothing #hide","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"The heatmap function automatically recognizes that the explanation is batched and returns a Vector of images:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"heatmaps = heatmap(batch, analyzer)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Images.jl's mosaic function can be used to display them in a grid:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"mosaic(heatmaps; nrow=10)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"note: Output type consistency\nTo obtain a singleton Vector containing a single heatmap for non-batched inputs, use the heatmap keyword argument unpack_singleton=false.","category":"page"},{"location":"generated/heatmapping/#Processing-heatmaps","page":"Heatmapping","title":"Processing heatmaps","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Heatmapping makes use of the Julia-based image processing ecosystem Images.jl.","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"If you want to further process heatmaps, you may benefit from reading about some fundamental conventions that the ecosystem utilizes that are different from how images are typically represented in OpenCV, MATLAB, ImageJ or Python.","category":"page"},{"location":"generated/heatmapping/#Saving-heatmaps","page":"Heatmapping","title":"Saving heatmaps","text":"","category":"section"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"Since heatmaps are regular Images.jl images, they can be saved as such:","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"using FileIO\n\nimg = heatmap(input, analyzer)\nsave(\"heatmap.png\", img)","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"","category":"page"},{"location":"generated/heatmapping/","page":"Heatmapping","title":"Heatmapping","text":"This page was generated using Literate.jl.","category":"page"},{"location":"lrp/api/#LRP-analyzer","page":"LRP","title":"LRP analyzer","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Refer to LRP for documentation on the LRP analyzer.","category":"page"},{"location":"lrp/api/#api-lrp-rules","page":"LRP","title":"LRP rules","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ZeroRule\nEpsilonRule\nGammaRule\nWSquareRule\nFlatRule\nAlphaBetaRule\nZPlusRule\nZBoxRule\nPassRule\nGeneralizedGammaRule","category":"page"},{"location":"lrp/api/#ExplainableAI.ZeroRule","page":"LRP","title":"ExplainableAI.ZeroRule","text":"ZeroRule()\n\nLRP-0 rule. Commonly used on upper layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_i fracW_ija_j^ksum_l W_ila_l^k+b_i R_i^k+1\n\nReferences\n\nS. 
Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.EpsilonRule","page":"LRP","title":"ExplainableAI.EpsilonRule","text":"EpsilonRule([epsilon=1.0e-6])\n\nLRP-ϵ rule. Commonly used on middle layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifracW_ija_j^kepsilon +sum_lW_ila_l^k+b_i R_i^k+1\n\nOptional arguments\n\nepsilon: Optional stabilization parameter, defaults to 1.0e-6.\n\nReferences\n\nS. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.GammaRule","page":"LRP","title":"ExplainableAI.GammaRule","text":"GammaRule([gamma=0.25])\n\nLRP-γ rule. Commonly used on lower layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifrac(W_ij+gamma W_ij^+)a_j^k\n sum_l(W_il+gamma W_il^+)a_l^k+(b_i+gamma b_i^+) R_i^k+1\n\nOptional arguments\n\ngamma: Optional multiplier for added positive weights, defaults to 0.25.\n\nReferences\n\nG. Montavon et al., Layer-Wise Relevance Propagation: An Overview\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.WSquareRule","page":"LRP","title":"ExplainableAI.WSquareRule","text":"WSquareRule()\n\nLRP-w² rule. Commonly used on the first layer when values are unbounded.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifracW_ij^2sum_l W_il^2 R_i^k+1\n\nReferences\n\nG. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FlatRule","page":"LRP","title":"ExplainableAI.FlatRule","text":"FlatRule()\n\nLRP-Flat rule. Similar to the WSquareRule, but with all weights set to one and all bias terms set to zero.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifrac1sum_l 1 R_i^k+1 = sum_ifrac1n_i R_i^k+1\n\nwhere n_i is the number of input neurons connected to the output neuron at index i.\n\nReferences\n\nS. Lapuschkin et al., Unmasking Clever Hans predictors and assessing what machines really learn\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.AlphaBetaRule","page":"LRP","title":"ExplainableAI.AlphaBetaRule","text":"AlphaBetaRule([alpha=2.0, beta=1.0])\n\nLRP-αβ rule. Weights positive and negative contributions according to the parameters alpha and beta respectively. The difference α-β must be equal to one. Commonly used on lower layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ileft(\n alphafracleft(W_ija_j^kright)^+sum_lleft(W_ila_l^k+b_iright)^+\n -betafracleft(W_ija_j^kright)^-sum_lleft(W_ila_l^k+b_iright)^-\nright) R_i^k+1\n\nOptional arguments\n\nalpha: Multiplier for the positive output term, defaults to 2.0.\nbeta: Multiplier for the negative output term, defaults to 1.0.\n\nReferences\n\nS. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\nG. Montavon et al., Layer-Wise Relevance Propagation: An Overview\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ZPlusRule","page":"LRP","title":"ExplainableAI.ZPlusRule","text":"ZPlusRule()\n\nLRP-z rule. 
Commonly used on lower layers.\n\nEquivalent to AlphaBetaRule(1.0f0, 0.0f0), but slightly faster. See also AlphaBetaRule.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifracleft(W_ija_j^kright)^+sum_lleft(W_ila_l^k+b_iright)^+ R_i^k+1\n\nReferences\n\nS. Bach et al., On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation\nG. Montavon et al., Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ZBoxRule","page":"LRP","title":"ExplainableAI.ZBoxRule","text":"ZBoxRule(low, high)\n\nLRP-zᴮ-rule. Commonly used on the first layer for pixel input.\n\nThe parameters low and high should be set to the lower and upper bounds of the input features, e.g. 0.0 and 1.0 for raw image data. It is also possible to provide two arrays of that match the input size.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k=sum_i fracW_ija_j^k - W_ij^+l_j - W_ij^-h_j\n sum_l W_ila_l^k+b_i - left(W_il^+l_l+b_i^+right) - left(W_il^-h_l+b_i^-right) R_i^k+1\n\nReferences\n\nG. Montavon et al., Layer-Wise Relevance Propagation: An Overview\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.PassRule","page":"LRP","title":"ExplainableAI.PassRule","text":"PassRule()\n\nPass-through rule. Passes relevance through to the lower layer.\n\nSupports layers with constant input and output shapes, e.g. reshaping layers.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = R_j^k+1\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.GeneralizedGammaRule","page":"LRP","title":"ExplainableAI.GeneralizedGammaRule","text":"GeneralizedGammaRule([gamma=0.25])\n\nGeneralized LRP-γ rule. Can be used on layers with leakyrelu activation functions.\n\nDefinition\n\nPropagates relevance R^k+1 at layer output to R^k at layer input according to\n\nR_j^k = sum_ifrac\n (W_ij+gamma W_ij^+)a_j^+ +(W_ij+gamma W_ij^-)a_j^-\n sum_l(W_il+gamma W_il^+)a_j^+ +(W_il+gamma W_il^-)a_j^- +(b_i+gamma b_i^+)\nI(z_k0) cdot R^k+1_i\n+sum_ifrac\n (W_ij+gamma W_ij^-)a_j^+ +(W_ij+gamma W_ij^+)a_j^-\n sum_l(W_il+gamma W_il^-)a_j^+ +(W_il+gamma W_il^+)a_j^- +(b_i+gamma b_i^-)\nI(z_k0) cdot R^k+1_i\n\nOptional arguments\n\ngamma: Optional multiplier for added positive weights, defaults to 0.25.\n\nReferences\n\nL. 
Andéol et al., Learning Domain Invariant Representations by Joint Wasserstein Distance Minimization\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"For manual rule assignment, use ChainTuple and ParallelTuple, matching the model structure:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ChainTuple\nParallelTuple","category":"page"},{"location":"lrp/api/#ExplainableAI.ChainTuple","page":"LRP","title":"ExplainableAI.ChainTuple","text":"ChainTuple(xs)\n\nThin wrapper around Tuple for use with Flux.jl models.\n\nCombining ChainTuple and ParallelTuple, data xs can be stored while preserving the structure of a Flux model without risking type piracy.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ParallelTuple","page":"LRP","title":"ExplainableAI.ParallelTuple","text":"ParallelTuple(xs)\n\nThin wrapper around Tuple for use with Flux.jl models.\n\nCombining ChainTuple and ParallelTuple, data xs can be stored while preserving the structure of a Flux model without risking type piracy.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#Composites","page":"LRP","title":"Composites","text":"","category":"section"},{"location":"lrp/api/#Applying-composites","page":"LRP","title":"Applying composites","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Composite\nlrp_rules","category":"page"},{"location":"lrp/api/#ExplainableAI.Composite","page":"LRP","title":"ExplainableAI.Composite","text":"Composite(primitives...)\nComposite(default_rule, primitives...)\n\nAutomatically contructs a list of LRP-rules by sequentially applying composite primitives.\n\nPrimitives\n\nTo apply a single rule, use:\n\nLayerMap to apply a rule to the n-th layer of a model\nGlobalMap to apply a rule to all layers\nRangeMap to apply a rule to a positional range of layers\nFirstLayerMap to apply a rule to the first layer\nLastLayerMap to apply a rule to the last layer\n\nTo apply a set of rules to layers based on their type, use:\n\nGlobalTypeMap to apply a dictionary that maps layer types to LRP-rules\nRangeTypeMap for a TypeMap on generalized ranges\nFirstLayerTypeMap for a TypeMap on the first layer of a model\nLastLayerTypeMap for a TypeMap on the last layer\nFirstNTypeMap for a TypeMap on the first n layers\n\nExample\n\nUsing a VGG11 model:\n\njulia> composite = Composite(\n GlobalTypeMap(\n ConvLayer => AlphaBetaRule(),\n Dense => EpsilonRule(),\n PoolingLayer => EpsilonRule(),\n DropoutLayer => PassRule(),\n ReshapingLayer => PassRule(),\n ),\n FirstNTypeMap(7, Conv => FlatRule()),\n );\n\njulia> analyzer = LRP(model, composite)\nLRP(\n Conv((3, 3), 3 => 64, relu, pad=1) => FlatRule(),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 64 => 128, relu, pad=1) => FlatRule(),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 128 => 256, relu, pad=1) => FlatRule(),\n Conv((3, 3), 256 => 256, relu, pad=1) => FlatRule(),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 256 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Conv((3, 3), 512 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n Conv((3, 3), 512 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Conv((3, 3), 512 => 512, relu, pad=1) => AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n MaxPool((2, 2)) => EpsilonRule{Float32}(1.0f-6),\n MLUtils.flatten => PassRule(),\n Dense(25088 => 4096, relu) => 
EpsilonRule{Float32}(1.0f-6),\n Dropout(0.5) => PassRule(),\n Dense(4096 => 4096, relu) => EpsilonRule{Float32}(1.0f-6),\n Dropout(0.5) => PassRule(),\n Dense(4096 => 1000) => EpsilonRule{Float32}(1.0f-6),\n)\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.lrp_rules","page":"LRP","title":"ExplainableAI.lrp_rules","text":"lrp_rules(model, composite)\n\nApply a composite to obtain LRP-rules for a given Flux model.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#api-composite-primitives","page":"LRP","title":"Composite primitives","text":"","category":"section"},{"location":"lrp/api/#Mapping-layers-to-rules","page":"LRP","title":"Mapping layers to rules","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Composite primitives that apply a single rule:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"LayerMap\nGlobalMap\nRangeMap\nFirstLayerMap\nLastLayerMap","category":"page"},{"location":"lrp/api/#ExplainableAI.LayerMap","page":"LRP","title":"ExplainableAI.LayerMap","text":"LayerMap(index, rule)\n\nComposite primitive that maps an LRP-rule to all layers in the model at the given index. The index can either be an integer or a tuple of integers to map a rule to a specific layer in nested Flux Chains.\n\nSee show_layer_indices to print layer indices and Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.GlobalMap","page":"LRP","title":"ExplainableAI.GlobalMap","text":"GlobalMap(rule)\n\nComposite primitive that maps an LRP-rule to all layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.RangeMap","page":"LRP","title":"ExplainableAI.RangeMap","text":"RangeMap(range, rule)\n\nComposite primitive that maps an LRP-rule to the specified positional range of layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FirstLayerMap","page":"LRP","title":"ExplainableAI.FirstLayerMap","text":"FirstLayerMap(rule)\n\nComposite primitive that maps an LRP-rule to the first layer in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.LastLayerMap","page":"LRP","title":"ExplainableAI.LastLayerMap","text":"LastLayerMap(rule)\n\nComposite primitive that maps an LRP-rule to the last layer in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"To apply LayerMap to nested Flux Chains or Parallel layers, make use of show_layer_indices:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"show_layer_indices","category":"page"},{"location":"lrp/api/#ExplainableAI.show_layer_indices","page":"LRP","title":"ExplainableAI.show_layer_indices","text":"show_layer_indices(model)\n\nPrint layer indices of Flux models. 
This is primarily a utility to help define LayerMap primitives.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#Mapping-layers-to-rules-based-on-type","page":"LRP","title":"Mapping layers to rules based on type","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Composite primitives that apply rules based on the layer type:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"GlobalTypeMap\nRangeTypeMap\nFirstLayerTypeMap\nLastLayerTypeMap\nFirstNTypeMap","category":"page"},{"location":"lrp/api/#ExplainableAI.GlobalTypeMap","page":"LRP","title":"ExplainableAI.GlobalTypeMap","text":"GlobalTypeMap(map)\n\nComposite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.RangeTypeMap","page":"LRP","title":"ExplainableAI.RangeTypeMap","text":"RangeTypeMap(range, map)\n\nComposite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the specified range of layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FirstLayerTypeMap","page":"LRP","title":"ExplainableAI.FirstLayerTypeMap","text":"FirstLayerTypeMap(map)\n\nComposite primitive that maps the type of the first layer of the model to LRP rules based on a list of type-rule-pairs map.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.LastLayerTypeMap","page":"LRP","title":"ExplainableAI.LastLayerTypeMap","text":"LastLayerTypeMap(map)\n\nComposite primitive that maps the type of the last layer of the model to LRP rules based on a list of type-rule-pairs map.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.FirstNTypeMap","page":"LRP","title":"ExplainableAI.FirstNTypeMap","text":"FirstNTypeMap(n, map)\n\nComposite primitive that maps layer types to LRP rules based on a list of type-rule-pairs map within the first n layers in the model.\n\nSee Composite for an example.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#Union-types-for-composites","page":"LRP","title":"Union types for composites","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"The following exported union types can be used to define TypeMaps:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ConvLayer\nPoolingLayer\nDropoutLayer\nReshapingLayer\nNormalizationLayer","category":"page"},{"location":"lrp/api/#ExplainableAI.ConvLayer","page":"LRP","title":"ExplainableAI.ConvLayer","text":"Union type for convolutional layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.PoolingLayer","page":"LRP","title":"ExplainableAI.PoolingLayer","text":"Union type for pooling layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.DropoutLayer","page":"LRP","title":"ExplainableAI.DropoutLayer","text":"Union type for dropout layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.ReshapingLayer","page":"LRP","title":"ExplainableAI.ReshapingLayer","text":"Union type for reshaping layers such as flatten.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#ExplainableAI.NormalizationLayer","page":"LRP","title":"ExplainableAI.NormalizationLayer","text":"Union type for normalization 
layers.\n\n\n\n\n\n","category":"type"},{"location":"lrp/api/#api-composite-presets","page":"LRP","title":"Composite presets","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"EpsilonGammaBox\nEpsilonPlus\nEpsilonAlpha2Beta1\nEpsilonPlusFlat\nEpsilonAlpha2Beta1Flat","category":"page"},{"location":"lrp/api/#ExplainableAI.EpsilonGammaBox","page":"LRP","title":"ExplainableAI.EpsilonGammaBox","text":"EpsilonGammaBox(low, high; [epsilon=1.0f-6, gamma=0.25f0])\n\nComposite using the following primitives:\n\njulia> EpsilonGammaBox(-3.0f0, 3.0f0)\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.GammaRule{Float32}(0.25f0),\n Flux.ConvTranspose => ExplainableAI.GammaRule{Float32}(0.25f0),\n Flux.CrossCor => ExplainableAI.GammaRule{Float32}(0.25f0),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n FirstLayerTypeMap( # first layer\n Flux.Conv => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),\n Flux.ConvTranspose => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),\n Flux.CrossCor => ExplainableAI.ZBoxRule{Float32}(-3.0f0, 3.0f0),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonPlus","page":"LRP","title":"ExplainableAI.EpsilonPlus","text":"EpsilonPlus(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonPlus()\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.ZPlusRule(),\n Flux.ConvTranspose => ExplainableAI.ZPlusRule(),\n Flux.CrossCor => ExplainableAI.ZPlusRule(),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonAlpha2Beta1","page":"LRP","title":"ExplainableAI.EpsilonAlpha2Beta1","text":"EpsilonAlpha2Beta1(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonAlpha2Beta1()\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.ConvTranspose => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.CrossCor => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonPlusFlat","page":"LRP","title":"ExplainableAI.EpsilonPlusFlat","text":"EpsilonPlusFlat(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonPlusFlat()\nComposite(\n GlobalTypeMap( # all layers\n 
Flux.Conv => ExplainableAI.ZPlusRule(),\n Flux.ConvTranspose => ExplainableAI.ZPlusRule(),\n Flux.CrossCor => ExplainableAI.ZPlusRule(),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n FirstLayerTypeMap( # first layer\n Flux.Conv => ExplainableAI.FlatRule(),\n Flux.ConvTranspose => ExplainableAI.FlatRule(),\n Flux.CrossCor => ExplainableAI.FlatRule(),\n Flux.Dense => ExplainableAI.FlatRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.EpsilonAlpha2Beta1Flat","page":"LRP","title":"ExplainableAI.EpsilonAlpha2Beta1Flat","text":"EpsilonAlpha2Beta1Flat(; [epsilon=1.0f-6])\n\nComposite using the following primitives:\n\njulia> EpsilonAlpha2Beta1Flat()\nComposite(\n GlobalTypeMap( # all layers\n Flux.Conv => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.ConvTranspose => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.CrossCor => ExplainableAI.AlphaBetaRule{Float32}(2.0f0, 1.0f0),\n Flux.Dense => ExplainableAI.EpsilonRule{Float32}(1.0f-6),\n typeof(NNlib.dropout) => ExplainableAI.PassRule(),\n Flux.AlphaDropout => ExplainableAI.PassRule(),\n Flux.Dropout => ExplainableAI.PassRule(),\n Flux.BatchNorm => ExplainableAI.PassRule(),\n typeof(Flux.flatten) => ExplainableAI.PassRule(),\n typeof(MLUtils.flatten) => ExplainableAI.PassRule(),\n typeof(identity) => ExplainableAI.PassRule(),\n ),\n FirstLayerTypeMap( # first layer\n Flux.Conv => ExplainableAI.FlatRule(),\n Flux.ConvTranspose => ExplainableAI.FlatRule(),\n Flux.CrossCor => ExplainableAI.FlatRule(),\n Flux.Dense => ExplainableAI.FlatRule(),\n ),\n)\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#Custom-rules","page":"LRP","title":"Custom rules","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"These utilities can be used to define custom rules without writing boilerplate code. To extend these functions, explicitly import them: ","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"ExplainableAI.modify_input\nExplainableAI.modify_denominator\nExplainableAI.modify_parameters\nExplainableAI.modify_weight\nExplainableAI.modify_bias\nExplainableAI.modify_layer\nExplainableAI.is_compatible","category":"page"},{"location":"lrp/api/#ExplainableAI.modify_input","page":"LRP","title":"ExplainableAI.modify_input","text":"modify_input(rule, input)\n\nModify input activation before computing relevance propagation.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_denominator","page":"LRP","title":"ExplainableAI.modify_denominator","text":"modify_denominator(rule, d)\n\nModify denominator z for numerical stability on the forward pass.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_parameters","page":"LRP","title":"ExplainableAI.modify_parameters","text":"modify_parameters(rule, parameter)\n\nModify parameters before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. 
This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_weight","page":"LRP","title":"ExplainableAI.modify_weight","text":"modify_weight(rule, weight)\n\nModify layer weights before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_bias","page":"LRP","title":"ExplainableAI.modify_bias","text":"modify_bias(rule, bias)\n\nModify layer bias before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.modify_layer","page":"LRP","title":"ExplainableAI.modify_layer","text":"modify_layer(rule, layer)\n\nModify layer before computing the relevance.\n\nNote\n\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. 
modify_weight and modify_bias in turn call modify_parameters by default.\n\nThe default call structure looks as follows:\n\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.is_compatible","page":"LRP","title":"ExplainableAI.is_compatible","text":"is_compatible(rule, layer)\n\nCheck compatibility of a LRP-Rule with layer type.\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"Compatibility settings:","category":"page"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"LRP_CONFIG.supports_layer\nLRP_CONFIG.supports_activation","category":"page"},{"location":"lrp/api/#ExplainableAI.LRP_CONFIG.supports_layer","page":"LRP","title":"ExplainableAI.LRP_CONFIG.supports_layer","text":"LRP_CONFIG.supports_layer(layer)\n\nCheck whether LRP can be used on a layer or a Chain. To extend LRP to your own layers, define:\n\nLRP_CONFIG.supports_layer(::MyLayer) = true # for structs\nLRP_CONFIG.supports_layer(::typeof(mylayer)) = true # for functions\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#ExplainableAI.LRP_CONFIG.supports_activation","page":"LRP","title":"ExplainableAI.LRP_CONFIG.supports_activation","text":"LRP_CONFIG.supports_activation(σ)\n\nCheck whether LRP can be used on a given activation function. To extend LRP to your own activation functions, define:\n\nLRP_CONFIG.supports_activation(::typeof(myactivation)) = true # for functions\nLRP_CONFIG.supports_activation(::MyActivation) = true # for structs\n\n\n\n\n\n","category":"function"},{"location":"lrp/api/#Index","page":"LRP","title":"Index","text":"","category":"section"},{"location":"lrp/api/","page":"LRP","title":"LRP","text":"","category":"page"},{"location":"api/#Basic-API","page":"General","title":"Basic API","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"All methods in ExplainableAI.jl work by calling analyze on an input and an analyzer:","category":"page"},{"location":"api/","page":"General","title":"General","text":"analyze\nExplanation\nheatmap","category":"page"},{"location":"api/#ExplainableAI.analyze","page":"General","title":"ExplainableAI.analyze","text":"analyze(input, method)\nanalyze(input, method, neuron_selection)\n\nApply the analyzer method for the given input, returning an Explanation. If neuron_selection is specified, the explanation will be calculated for that neuron. Otherwise, the output neuron with the highest activation is automatically chosen.\n\nSee also Explanation and heatmap.\n\nKeyword arguments\n\nadd_batch_dim: add batch dimension to the input without allocating. Default is false.\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.Explanation","page":"General","title":"ExplainableAI.Explanation","text":"Return type of analyzers when calling analyze.\n\nFields\n\nval: numerical output of the analyzer, e.g. an attribution or gradient\noutput: model output for the given analyzer input\nneuron_selection: neuron index used for the explanation\nanalyzer: symbol corresponding the used analyzer, e.g. 
:LRP or :Gradient\nextras: optional named tuple that can be used by analyzers to return additional information.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.heatmap","page":"General","title":"ExplainableAI.heatmap","text":"heatmap(explanation)\nheatmap(input, analyzer)\nheatmap(input, analyzer, neuron_selection)\n\nVisualize explanation. Assumes Flux's WHCN convention (width, height, color channels, batch size).\n\nSee also analyze.\n\nKeyword arguments\n\ncs::ColorScheme: color scheme from ColorSchemes.jl that is applied. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is ColorSchemes.seismic.\nreduce::Symbol: selects how color channels are reduced to a single number to apply a color scheme. The following methods can be selected, which are then applied over the color channels for each \"pixel\" in the explanation:\n:sum: sum up color channels\n:norm: compute 2-norm over the color channels\n:maxabs: compute maximum(abs, x) over the color channels\nWhen calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default is :sum.\nrangescale::Symbol: selects how the color channel reduced heatmap is normalized before the color scheme is applied. Can be either :extrema or :centered. When calling heatmap with an Explanation or analyzer, the method default is selected. When calling heatmap with an array, the default for use with the seismic color scheme is :centered.\npermute::Bool: Whether to flip W&H input channels. Default is true.\nunpack_singleton::Bool: When heatmapping a batch with a single sample, setting unpack_singleton=true will return an image instead of an Vector containing a single image.\n\nNote: keyword arguments can't be used when calling heatmap with an analyzer.\n\n\n\n\n\n","category":"function"},{"location":"api/#Analyzers","page":"General","title":"Analyzers","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"LRP\nGradient\nInputTimesGradient\nSmoothGrad\nIntegratedGradients","category":"page"},{"location":"api/#ExplainableAI.LRP","page":"General","title":"ExplainableAI.LRP","text":"LRP(model, rules)\nLRP(model, composite)\n\nAnalyze model by applying Layer-Wise Relevance Propagation. The analyzer can either be created by passing an array of LRP-rules or by passing a composite, see Composite for an example.\n\nKeyword arguments\n\nskip_checks::Bool: Skip checks whether model is compatible with LRP and contains output softmax. Default is false.\nverbose::Bool: Select whether the model checks should print a summary on failure. Default is true.\n\nReferences\n\n[1] G. Montavon et al., Layer-Wise Relevance Propagation: An Overview [2] W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.Gradient","page":"General","title":"ExplainableAI.Gradient","text":"Gradient(model)\n\nAnalyze model by calculating the gradient of a neuron activation with respect to the input.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.InputTimesGradient","page":"General","title":"ExplainableAI.InputTimesGradient","text":"InputTimesGradient(model)\n\nAnalyze model by calculating the gradient of a neuron activation with respect to the input. 
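As a small usage sketch of the heatmap keyword arguments documented above (assuming `model` and `input` are defined as in the Getting started example):

```julia
using ExplainableAI, ColorSchemes

expl = analyze(input, Gradient(model))

# Reduce color channels with the 2-norm and center the color scheme around zero.
heatmap(expl; cs=ColorSchemes.seismic, reduce=:norm, rangescale=:centered)
```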
This gradient is then multiplied element-wise with the input.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.SmoothGrad","page":"General","title":"ExplainableAI.SmoothGrad","text":"SmoothGrad(analyzer, [n=50, std=0.1, rng=GLOBAL_RNG])\nSmoothGrad(analyzer, [n=50, distribution=Normal(0, σ²=0.01), rng=GLOBAL_RNG])\n\nAnalyze model by calculating a smoothed sensitivity map. This is done by averaging sensitivity maps of a Gradient analyzer over random samples in a neighborhood of the input, typically by adding Gaussian noise with mean 0.\n\nReferences\n\nSmilkov et al., SmoothGrad: removing noise by adding noise\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.IntegratedGradients","page":"General","title":"ExplainableAI.IntegratedGradients","text":"IntegratedGradients(analyzer, [n=50])\nIntegratedGradients(analyzer, [n=50])\n\nAnalyze model by using the Integrated Gradients method.\n\nReferences\n\nSundararajan et al., Axiomatic Attribution for Deep Networks\n\n\n\n\n\n","category":"function"},{"location":"api/#Input-augmentations","page":"General","title":"Input augmentations","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"SmoothGrad and IntegratedGradients are special cases of the input augmentations NoiseAugmentation and InterpolationAugmentation, which can be applied as a wrapper to any analyzer:","category":"page"},{"location":"api/","page":"General","title":"General","text":"NoiseAugmentation\nInterpolationAugmentation","category":"page"},{"location":"api/#ExplainableAI.NoiseAugmentation","page":"General","title":"ExplainableAI.NoiseAugmentation","text":"NoiseAugmentation(analyzer, n, [std=1, rng=GLOBAL_RNG])\nNoiseAugmentation(analyzer, n, distribution, [rng=GLOBAL_RNG])\n\nA wrapper around analyzers that augments the input with n samples of additive noise sampled from distribution. This input augmentation is then averaged to return an Explanation.\n\n\n\n\n\n","category":"type"},{"location":"api/#ExplainableAI.InterpolationAugmentation","page":"General","title":"ExplainableAI.InterpolationAugmentation","text":"InterpolationAugmentation(model, [n=50])\n\nA wrapper around analyzers that augments the input with n steps of linear interpolation between the input and a reference input (typically zero(input)). The gradients w.r.t. 
this augmented input are then averaged and multiplied with the difference between the input and the reference input.\n\n\n\n\n\n","category":"type"},{"location":"api/#Model-preparation","page":"General","title":"Model preparation","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"strip_softmax\ncanonize\nflatten_model","category":"page"},{"location":"api/#ExplainableAI.strip_softmax","page":"General","title":"ExplainableAI.strip_softmax","text":"strip_softmax(model)\nstrip_softmax(layer)\n\nRemove softmax activation on layer or model if it exists.\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.canonize","page":"General","title":"ExplainableAI.canonize","text":"canonize(model)\n\nCanonize model by flattening it and fusing BatchNorm layers into preceding Dense and Conv layers with linear activation functions.\n\n\n\n\n\n","category":"function"},{"location":"api/#ExplainableAI.flatten_model","page":"General","title":"ExplainableAI.flatten_model","text":"flatten_model(model)\n\nFlatten a Flux Chain containing Chains.\n\n\n\n\n\n","category":"function"},{"location":"api/#Input-preprocessing","page":"General","title":"Input preprocessing","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"preprocess_imagenet","category":"page"},{"location":"api/#ExplainableAI.preprocess_imagenet","page":"General","title":"ExplainableAI.preprocess_imagenet","text":"preprocess_imagenet(img)\n\nPreprocess an image for use with Metalhead.jl's ImageNet models using PyTorch weights. Uses PyTorch's normalization constants.\n\n\n\n\n\n","category":"function"},{"location":"api/#Index","page":"General","title":"Index","text":"","category":"section"},{"location":"api/","page":"General","title":"General","text":"","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"EditURL = \"../literate/augmentations.jl\"","category":"page"},{"location":"generated/augmentations/#docs-augmentations","page":"Input augmentations","title":"Analyzer augmentations","text":"","category":"section"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"All analyzers implemented in ExplainableAI.jl can be augmented by two types of augmentations: NoiseAugmentations and InterpolationAugmentations. 
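Based on the constructor signatures documented above, a noise augmentation with an explicit distribution and random number generator could be set up as follows (a sketch; `model` and `input` are assumed to be defined as in the surrounding examples):

```julia
using ExplainableAI, Distributions, Random

rng = MersenneTwister(123)  # fixed RNG for reproducible noise samples
analyzer = NoiseAugmentation(Gradient(model), 25, Normal(0.0f0, 0.1f0), rng)
expl = analyze(input, analyzer)
```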
These augmentations are wrappers around analyzers that modify the input before passing it to the analyzer.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"We build on the basics shown in the Getting started section and start out by loading the same pre-trained LeNet5 model and MNIST input data:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"using ExplainableAI\nusing Flux\n\nusing BSON # hide\nmodel = BSON.load(\"../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"using MLDatasets\nusing ImageCore, ImageIO, ImageShow\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\ninput = reshape(x, 28, 28, 1, :)\n\nconvert2image(MNIST, x)","category":"page"},{"location":"generated/augmentations/#Noise-augmentation","page":"Input augmentations","title":"Noise augmentation","text":"","category":"section"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"The NoiseAugmentation wrapper computes explanations averaged over noisy inputs. Let's demonstrate this on the Gradient analyzer. First, we compute the heatmap of an explanation without augmentation:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = Gradient(model)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"Now we wrap the analyzer in a NoiseAugmentation with 50 samples of noise. By default, the noise is sampled from a Gaussian distribution with mean 0 and standard deviation 1.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = NoiseAugmentation(Gradient(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"Note that a higher sample size is desired, as it will lead to a smoother heatmap. 
However, this comes at the cost of a longer computation time.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"We can also set the standard deviation of the Gaussian distribution:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = NoiseAugmentation(Gradient(model), 50, 0.1)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"When used with a Gradient analyzer, this is equivalent to SmoothGrad:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = SmoothGrad(model, 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"We can also use any distribution from Distributions.jl, for example Poisson noise with rate lambda=0.5:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"using Distributions\n\nanalyzer = NoiseAugmentation(Gradient(model), 50, Poisson(0.5))\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"It is also possible to define your own distributions or mixture distributions.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"NoiseAugmentation can be combined with any analyzer type, for example LRP:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = NoiseAugmentation(LRP(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/#Integration-augmentation","page":"Input augmentations","title":"Integration augmentation","text":"","category":"section"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"The InterpolationAugmentation wrapper computes explanations averaged over n steps of linear interpolation between the input and a reference input, which is set to zero(input) by default:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = InterpolationAugmentation(Gradient(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"When used with a Gradient analyzer, this is equivalent to IntegratedGradients:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = IntegratedGradients(model, 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"To select a different reference input, pass it to the analyze or heatmap function using the keyword argument input_ref. 
Note that this is an arbitrary example for the sake of demonstration.","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"matrix_of_ones = ones(Float32, size(input))\n\nanalyzer = InterpolationAugmentation(Gradient(model), 50)\nheatmap(input, analyzer; input_ref=matrix_of_ones)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"Once again, InterpolationAugmentation can be combined with any analyzer type, for example LRP:","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"analyzer = InterpolationAugmentation(LRP(model), 50)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"","category":"page"},{"location":"generated/augmentations/","page":"Input augmentations","title":"Input augmentations","text":"This page was generated using Literate.jl.","category":"page"},{"location":"lrp/developer/#lrp-dev-docs","page":"Developer documentation","title":"LRP developer documentation","text":"","category":"section"},{"location":"lrp/developer/#Generic-LRP-rule-implementation","page":"Developer documentation","title":"Generic LRP rule implementation","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Before we dive into package-specific implementation details in later sections of this developer documentation, we first need to cover some fundamentals of LRP, starting with our notation.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The generic LRP rule, of which the 0-, epsilon- and gamma-rules are special cases, reads[1][2]","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"beginequation\nR_j^k = sum_i fracrho(W_ij) a_j^kepsilon + sum_l rho(W_il) a_l^k + rho(b_i) R_i^k+1\nendequation","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"where ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"W is the weight matrix of the layer\nb is the bias vector of the layer\na^k is the activation vector at the input of layer k\na^k+1 is the activation vector at the output of layer k\nR^k is the relevance vector at the input of layer k\nR^k+1 is the relevance vector at the output of layer k\nrho is a function that modifies parameters (what we call modify_parameters)\nepsilon is a small positive constant to avoid division by zero","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Subscript characters are used to index vectors and matrices (e.g. b_i is the i-th entry of the bias vector), while the superscripts ^k and ^k+1 indicate the relative positions of activations a and relevances R in the model. For any k, a^k and R^k have the same shape. 
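Written out explicitly, the generic rule reads:

```math
R_j^k = \sum_i \frac{\rho(W_{ij}) \, a_j^k}{\epsilon + \sum_l \rho(W_{il}) \, a_l^k + \rho(b_i)} \, R_i^{k+1}
```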
","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Note that every term in this equation is a scalar value, which removes the need to differentiate between matrix and element-wise operations.","category":"page"},{"location":"lrp/developer/#Linear-layers","page":"Developer documentation","title":"Linear layers","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"LRP was developed for deep rectifier networks, neural networks that are composed of linear layers with ReLU activation functions. Linear layers are layers that can be represented as affine transformations of the form ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"beginequation\nf(x) = Wx + b quad \nendequation","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"This includes most commonly used types of layers, such as fully connected layers, convolutional layers, pooling layers, and normalization layers.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We will now describe a generic implementation of equation (1) that can be applied to any linear layer.","category":"page"},{"location":"lrp/developer/#lrp-dev-ad-fallback","page":"Developer documentation","title":"The automatic differentiation fallback","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The computation of the generic LRP rule can be decomposed into four steps[1]:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"beginarraylr\nz_i = sum_l rho(W_il) a_l^k + rho(b_i) text(Step 1) 05em\ns_i = R_i^k+1 (z_i + epsilon) text(Step 2) 05em\nc_j = sum_i rho(W_ij) s_i text(Step 3) 05em\nR_j^k = a_j^k c_j text(Step 4)\nendarray","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"To compute step 1, we first create a modified layer, applying rho to the weights and biases and replacing the activation function with the identity function. The vector z is then computed using a forward pass through the modified layer. It has the same dimensionality as R^k+1 and a^k+1.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Step 2 is an element-wise division of R^k+1 by z. To avoid division by zero, a small constant epsilon is added to z when necessary.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Step 3 is trivial for fully connected layers, as rho(W) corresponds to the weight matrix of the modified layer. For other types of linear layers, however, the implementation is more involved: A naive approach would be to construct a large matrix W that corresponds to the affine transformation Wx+b implemented by the modified layer. This has multiple drawbacks:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"the implementation is error-prone\na separate implementation is required for each type of linear layer\nfor some layer types, e.g. 
pooling layers, the matrix W depends on the input\nfor many layer types, e.g. convolutional layers, the matrix W is very large and sparse, mostly consisting of zeros, leading to a large computational overhead","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"A better approach can be found by observing that the matrix W is the Jacobian of the affine transformation f(x) = Wx + b. The vector c computed in step 3 corresponds to c = s^T W, a so-called Vector-Jacobian-Product (VJP) of the vector s with the Jacobian W. ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"VJPs are the fundamental building blocks of reverse-mode automatic differentiation (AD), and therefore implemented by most AD frameworks in a highly performant, matrix-free, GPU-accelerated manner. Note that computing the VJP is much more efficient than first computing the full Jacobian W and later multiplying it with s. This is due to the fact that computing the full Jacobian of a function f mathbbR^n rightarrow mathbbR^m requires computing m VJPs.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Functions that compute VJP's are commonly called pullbacks. Using the Zygote.jl AD system, we obtain the output z of a modified layer and its pullback back in a single function call:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"z, back = Zygote.pullback(modified_layer, aᵏ)","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We then call the pullback with the vector s to obtain c:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"c = back(s)","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Finally, step 4 consists of an element-wise multiplication of the vector c with the input activation vector a^k, resulting in the relevance vector R^k.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"This AD-based implementation is used in ExplainableAI.jl as the default method for all layer types that don't have a more optimized implementation (e.g. fully connected layers). We will refer to it as the \"AD fallback\".","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"For more background information on automatic differentiation, refer to the JuML lecture on AD.","category":"page"},{"location":"lrp/developer/#LRP-analyzer-struct","page":"Developer documentation","title":"LRP analyzer struct","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The LRP analyzer struct holds three fields: the model to analyze, the LRP rules to use, and pre-allocated modified_layers.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"As described in the section on Composites, applying a composite to a model will return LRP rules in nested ChainTuple and ParallelTuples. 
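To make the pullback mechanism described above concrete, here is a small, self-contained sketch of a VJP through a Dense layer. This is not package-internal code; the layer and shapes are made up for illustration:

```julia
using Flux, Zygote

layer = Dense(3 => 2)                 # plays the role of the modified layer
aᵏ = rand(Float32, 3, 1)              # input activation
z, back = Zygote.pullback(layer, aᵏ)  # forward pass and pullback in one call
s = rand(Float32, size(z)...)         # stand-in for Rᵏ⁺¹ ./ (z .+ ε)
c = only(back(s))                     # vector-Jacobian product, same shape as aᵏ
Rᵏ = aᵏ .* c                          # step 4: element-wise product
```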
These wrapper types are used to match the structure of Flux models with Chain and Parallel layers while avoiding type piracy.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"When creating an LRP analyzer with the default keyword argument flatten=true, flatten_model is called on the model and rules. This is done for performance reasons, as discussed in Flattening the model.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"After passing the Model checks, modified layers are pre-allocated, once again using the ChainTuple and ParallelTuple wrapper types to match the structure of the model. If a rule doesn't modify a layer, the corresponding entry in modified_layers is set to nothing, avoiding unnecessary allocations. If a rule requires multiple modified layers, the corresponding entry in modified_layers is set to a named tuple of modified layers. Apart from these special cases, the corresponding entry in modified_layers is simply set to the modified layer.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"For a detailed description of the layer modification mechanism, refer to the section on Advanced layer modification.","category":"page"},{"location":"lrp/developer/#Forward-and-reverse-pass","page":"Developer documentation","title":"Forward and reverse pass","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"When calling an LRP analyzer, a forward pass through the model is performed, saving the activations aᵏ for all layers k in a vector called as. This vector of activations is then used to pre-allocate the relevances R^k for all layers in a vector called Rs. This is possible since for any layer k, a^k and R^k have the same shape. Finally, the last array of relevances R^N in Rs is set to zeros, except for the specified output neuron, which is set to one.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We can now run the reverse pass, iterating backwards over the layers in the model and writing relevances R^k into the pre-allocated array Rs:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"for k in length(model):-1:1\n # └─ loop over layers in reverse\n lrp!(Rs[k], rules[k], layers[k], modified_layers[i], as[k], Rs[k+1])\n # └─ Rᵏ: modified in-place └─ aᵏ └─ Rᵏ⁺¹\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"This is done by calling low-level functions","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer, modified_layer, aᵏ, Rᵏ⁺¹)\n Rᵏ .= ...\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"that implement individual LRP rules. The correct rule is applied via multiple dispatch on the types of the arguments rule and modified_layer. 
The relevance Rᵏ is then computed based on the input activation aᵏ and the output relevance Rᵏ⁺¹.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"The exclamation point in the function name lrp! is a naming convention in Julia to denote functions that modify their arguments – in this case the first argument Rs[k], which corresponds to R^k.","category":"page"},{"location":"lrp/developer/#Rule-calls","page":"Developer documentation","title":"Rule calls","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"As discussed in The AD fallback, the default LRP fallback for unknown layers uses AD via Zygote. Now that you are familiar with both the API and the four-step computation of the generic LRP rules, the following implementation should be straightforward to understand:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer, modified_layer, aᵏ, Rᵏ⁺¹)\n # Use modified_layer if available\n layer = isnothing(modified_layer) ? layer : modified_layer\n\n ãᵏ = modify_input(rule, aᵏ)\n z, back = Zygote.pullback(modified_layer, ãᵏ)\n s = Rᵏ⁺¹ ./ modify_denominator(rule, z)\n Rᵏ .= ãᵏ .* only(back(s))\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Not only lrp! dispatches on the rule and layer type, but also the internal functions modify_input and modify_denominator. Unknown layers that are registered in the LRP_CONFIG use this exact function.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"All LRP rules are implemented in the file /src/lrp/rules.jl.","category":"page"},{"location":"lrp/developer/#Specialized-implementations","page":"Developer documentation","title":"Specialized implementations","text":"","category":"section"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"In other programming languages, LRP is commonly implemented in an object-oriented manner, providing a single backward pass implementation per rule. This can be seen as a form of single dispatch on the rule type.","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Using multiple dispatch, we can implement specialized versions of lrp! that not only take into account the rule type, but also the layer type, for example for fully connected layers or reshaping layers. ","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"Reshaping layers don't affect attributions. We can therefore avoid the computational overhead of AD by writing a specialized implementation that simply reshapes back:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer::ReshapingLayer, modified_layer, aᵏ, Rᵏ⁺¹)\n Rᵏ .= reshape(Rᵏ⁺¹, size(aᵏ))\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"We can even provide a specialized implementation of the generic LRP rule for Dense layers. 
Since we can access the weight matrix directly, we can skip the use of automatic differentiation and implement the following equation directly, using Einstein summation notation:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"R_j^k = sum_i fracrho(W_ij) a_j^kepsilon + sum_l rho(W_il) a_l^k + rho(b_i) R_i^k+1","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule, layer::Dense, modified_layer, aᵏ, Rᵏ⁺¹)\n # Use modified_layer if available\n layer = isnothing(modified_layer) ? layer : modified_layer\n\n ãᵏ = modify_input(rule, aᵏ)\n z = modify_denominator(rule, layer(ãᵏ))\n\n # Implement LRP using Einsum notation, where `b` is the batch index\n @tullio Rᵏ[j, b] = layer.weight[i, j] * ãᵏ[j, b] / z[i, b] * Rᵏ⁺¹[i, b]\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"For maximum low-level control beyond modify_input and modify_denominator, you can also implement your own lrp! function and dispatch on individual rule types MyRule and layer types MyLayer:","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"function lrp!(Rᵏ, rule::MyRule, layer::MyLayer, modified_layer, aᵏ, Rᵏ⁺¹)\n Rᵏ .= ...\nend","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"[1]: G. Montavon et al., Layer-Wise Relevance Propagation: An Overview","category":"page"},{"location":"lrp/developer/","page":"Developer documentation","title":"Developer documentation","text":"[2]: W. Samek et al., Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"EditURL = \"../literate/example.jl\"","category":"page"},{"location":"generated/example/#docs-getting-started","page":"Getting started","title":"Getting started","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For this first example, we already have loaded a pre-trained LeNet5 model to look at explanations on the MNIST dataset.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"using ExplainableAI\nusing Flux\n\nusing BSON # hide\nmodel = BSON.load(\"../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"note: Supported models\nExplainableAI.jl can be used on any differentiable classifier.Only LRP requires models from Flux.jl.","category":"page"},{"location":"generated/example/#Preparing-the-model","page":"Getting started","title":"Preparing the model","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For models with softmax activations on the output, it is necessary to call strip_softmax before analyzing.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"model = strip_softmax(model);\nnothing #hide","category":"page"},{"location":"generated/example/#Preparing-the-input-data","page":"Getting started","title":"Preparing the input 
data","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"We use MLDatasets to load a single image from the MNIST dataset:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"using MLDatasets\nusing ImageCore, ImageIO, ImageShow\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\n\nconvert2image(MNIST, x)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"By convention in Flux.jl, this input needs to be resized to WHCN format by adding a color channel and batch dimensions.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"input = reshape(x, 28, 28, 1, :);\nnothing #hide","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"note: Input format\nFor any explanation of a model, ExplainableAI.jl assumes the batch dimension to come last in the input.For the purpose of heatmapping, the input is assumed to be in WHCN order (width, height, channels, batch), which is Flux.jl's convention.","category":"page"},{"location":"generated/example/#Explanations","page":"Getting started","title":"Explanations","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"We can now select an analyzer of our choice and call analyze to get an Explanation:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"analyzer = LRP(model)\nexpl = analyze(input, analyzer);\nnothing #hide","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"The return value expl is of type Explanation and bundles the following data:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.val: the numerical output of the analyzer, e.g. an attribution or gradient\nexpl.output: the model output for the given analyzer input\nexpl.neuron_selection: the neuron index used for the explanation\nexpl.analyzer: a symbol corresponding the used analyzer, e.g. :LRP\nexpl.extras: an optional named tuple that can be used by analyzers to return additional information.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"We used an LRP analyzer, so expl.analyzer is :LRP.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.analyzer","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"By default, the explanation is computed for the maximally activated output neuron. 
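As a quick sanity check, the selected neuron should coincide with the strongest entry of the model output (a small sketch reusing `expl` from above; the exact index type may differ):

```julia
argmax(vec(expl.output))  # position of the strongest output activation
expl.neuron_selection     # neuron index chosen by analyze
```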
Since our digit is a 9 and Julia's indexing is 1-based, the output neuron at index 10 of our trained model is maximally activated.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.neuron_selection","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Finally, we obtain the result of the analyzer in form of an array.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl.val","category":"page"},{"location":"generated/example/#Heatmapping-basics","page":"Getting started","title":"Heatmapping basics","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Since the array expl.val is not very informative at first sight, we can visualize Explanations by computing a heatmap:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"heatmap(expl)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"If we are only interested in the heatmap, we can combine analysis and heatmapping into a single function call:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"heatmap(input, analyzer)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For a more detailed explanation of the heatmap function, refer to the heatmapping section.","category":"page"},{"location":"generated/example/#docs-analyzers-list","page":"Getting started","title":"List of analyzers","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Currently, the following analyzers are implemented:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"Gradient\nInputTimesGradient\nSmoothGrad\nIntegratedGradients\nLRP\nRules\nZeroRule\nEpsilonRule\nGammaRule\nGeneralizedGammaRule\nWSquareRule\nFlatRule\nZBoxRule\nZPlusRule\nAlphaBetaRule\nPassRule\nComposite\nEpsilonGammaBox\nEpsilonPlus\nEpsilonPlusFlat\nEpsilonAlpha2Beta1\nEpsilonAlpha2Beta1Flat","category":"page"},{"location":"generated/example/#Neuron-selection","page":"Getting started","title":"Neuron selection","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"By passing an additional index to our call to analyze, we can compute an explanation with respect to a specific output neuron. 
Let's see why the output wasn't interpreted as a 4 (output neuron at index 5)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"expl = analyze(input, analyzer, 5)\nheatmap(expl)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"This heatmap shows us that the \"upper loop\" of the hand-drawn 9 has negative relevance with respect to the output neuron corresponding to digit 4!","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"note: Note\nThe output neuron can also be specified when calling heatmap:heatmap(input, analyzer, 5)","category":"page"},{"location":"generated/example/#Analyzing-batches","page":"Getting started","title":"Analyzing batches","text":"","category":"section"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"ExplainableAI also supports explanations of input batches:","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"batchsize = 20\nxs, _ = MNIST(Float32, :test)[1:batchsize]\nbatch = reshape(xs, 28, 28, 1, :) # reshape to WHCN format\nexpl = analyze(batch, analyzer);\nnothing #hide","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"This will return a single Explanation expl for the entire batch. Calling heatmap on expl will detect the batch dimension and return a vector of heatmaps.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"heatmap(expl)","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"For more information on heatmapping batches, refer to the heatmapping documentation.","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"","category":"page"},{"location":"generated/example/","page":"Getting started","title":"Getting started","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"EditURL = \"../../literate/lrp/custom_layer.jl\"","category":"page"},{"location":"generated/lrp/custom_layer/#docs-custom-layers","page":"Supporting new layer types","title":"Supporting new layers and activation functions","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"One of the design goals of ExplainableAI.jl is to combine ease of use and extensibility for the purpose of research. 
This example will show you how to extent LRP to new layer types and activation functions.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"using Flux\nusing ExplainableAI","category":"page"},{"location":"generated/lrp/custom_layer/#docs-lrp-model-checks","page":"Supporting new layer types","title":"Model checks","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"To assure that novice users use LRP according to best practices, ExplainableAI.jl runs strict model checks when creating an LRP analyzer.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Let's demonstrate this by defining a new layer type that doubles its input","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"struct MyDoublingLayer end\n(::MyDoublingLayer)(x) = 2 * x\n\nmylayer = MyDoublingLayer()\nmylayer([1, 2, 3])","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"and by defining a model that uses this layer:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"model = Chain(\n Dense(100, 20),\n MyDoublingLayer()\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Creating an LRP analyzer, e.g. LRP(model), will throw an ArgumentError and print a summary of the model check in the REPL:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"julia> LRP(model)\n ChainTuple(\n Dense(100 => 20) => supported,\n MyDoublingLayer() => unknown layer type,\n ),\n\n LRP model check failed\n ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡\n\n Found unknown layer types or activation functions that are not supported by ExplainableAI's LRP implementation yet.\n\n LRP assumes that the model is a deep rectifier network that only contains ReLU-like activation functions.\n\n If you think the missing layer should be supported by default, please submit an issue (https://github.com/adrhill/ExplainableAI.jl/issues).\n\n [...]\n\nERROR: Unknown layer or activation function found in model","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"LRP should only be used on deep rectifier networks and ExplainableAI doesn't recognize MyDoublingLayer as a compatible layer by default. It will therefore return an error and a model check summary instead of returning an incorrect explanation.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"However, if we know MyDoublingLayer is compatible with deep rectifier networks, we can register it to tell ExplainableAI that it is ok to use. 
This will be shown in the following section.","category":"page"},{"location":"generated/lrp/custom_layer/#Registering-layers","page":"Supporting new layer types","title":"Registering layers","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"The error in the model check will stop after registering our custom layer type MyDoublingLayer as \"supported\" by ExplainableAI.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"This is done using the function LRP_CONFIG.supports_layer, which should be set to return true for the type MyDoublingLayer:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"LRP_CONFIG.supports_layer(::MyDoublingLayer) = true","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Now we can create and run an analyzer without getting an error:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"analyzer = LRP(model)","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"note: Registering functions\nFlux's Chains can also contain functions, e.g. flatten. This kind of layer can be registered asLRP_CONFIG.supports_layer(::typeof(flatten)) = true","category":"page"},{"location":"generated/lrp/custom_layer/#Registering-activation-functions","page":"Supporting new layer types","title":"Registering activation functions","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"The mechanism for registering custom activation functions is analogous to that of custom layers:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"myrelu(x) = max.(0, x)\n\nmodel = Chain(\n Dense(784, 100, myrelu),\n Dense(100, 10),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Once again, creating an LRP analyzer for this model will throw an ArgumentError and display the following model check summary:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"julia> LRP(model)\n ChainTuple(\n Dense(784 => 100, myrelu) => unsupported or unknown activation function myrelu,\n Dense(100 => 10) => supported,\n ),\n\n LRP model check failed\n ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡\n\n Found unknown layer types or activation functions that are not supported by ExplainableAI's LRP implementation yet.\n\n LRP assumes that the model is a deep rectifier network that only contains ReLU-like activation functions.\n\n If you think the missing layer should be supported by default, please submit an issue (https://github.com/adrhill/ExplainableAI.jl/issues).\n\n [...]\n\nERROR: Unknown layer or activation function found in model","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Registation works by defining the function 
LRP_CONFIG.supports_activation as true:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"LRP_CONFIG.supports_activation(::typeof(myrelu)) = true","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"now the analyzer can be created without error:","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"analyzer = LRP(model)","category":"page"},{"location":"generated/lrp/custom_layer/#Skipping-model-checks","page":"Supporting new layer types","title":"Skipping model checks","text":"","category":"section"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"All model checks can be skipped at your own risk by setting the LRP-analyzer keyword argument skip_checks=true.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"struct UnknownLayer end\n(::UnknownLayer)(x) = x\n\nunknown_activation(x) = max.(0, x)\n\nmodel = Chain(Dense(100, 20, unknown_activation), MyDoublingLayer())\n\nLRP(model; skip_checks=true)","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"Instead of throwing the usual ERROR: Unknown layer or activation function found in model, the LRP analyzer was created without having to register either the layer UnknownLayer or the activation function unknown_activation.","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"","category":"page"},{"location":"generated/lrp/custom_layer/","page":"Supporting new layer types","title":"Supporting new layer types","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"EditURL = \"../../literate/lrp/basics.jl\"","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-basics","page":"Basic usage","title":"Basic usage of LRP","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This example will show you best practices for using LRP, building on the basics shown in the Getting started section.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"note: TLDR\nUse strip_softmax to strip the output softmax from your model. Otherwise model checks will fail.\nUse canonize to fuse linear layers.\nDon't just call LRP(model), instead use a Composite to apply LRP rules to your model. Read Assigning rules to layers.\nBy default, LRP will call flatten_model to flatten your model. 
This reduces computational overhead.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"We start out by loading a small convolutional neural network:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"using ExplainableAI\nusing Flux\n\nmodel = Chain(\n Chain(\n Conv((3, 3), 3 => 8, relu; pad=1),\n Conv((3, 3), 8 => 8, relu; pad=1),\n MaxPool((2, 2)),\n Conv((3, 3), 8 => 16; pad=1),\n BatchNorm(16, relu),\n Conv((3, 3), 16 => 8, relu; pad=1),\n BatchNorm(8, relu),\n ),\n Chain(\n Flux.flatten,\n Dense(2048 => 512, relu),\n Dropout(0.5),\n Dense(512 => 100, softmax)\n ),\n);\nnothing #hide","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This model contains two chains: the convolutional layers and the fully connected layers.","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-model-prep","page":"Basic usage","title":"Model preparation","text":"","category":"section"},{"location":"generated/lrp/basics/#docs-lrp-strip-softmax","page":"Basic usage","title":"Stripping the output softmax","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"When using LRP, it is recommended to explain output logits instead of probabilities. This can be done by stripping the output softmax activation from the model using the strip_softmax function:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"model = strip_softmax(model)","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"If you don't remove the output softmax, model checks will fail.","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-canonization","page":"Basic usage","title":"Canonizing the model","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP is not invariant to a model's implementation. Applying the GammaRule to two linear layers in a row will yield different results than first fusing the two layers into one linear layer and then applying the rule. This fusing is called \"canonization\" and can be done using the canonize function:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"model = canonize(model)","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-flatten-model","page":"Basic usage","title":"Flattening the model","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"ExplainableAI.jl's LRP implementation supports nested Flux Chains and Parallel layers. However, it is recommended to flatten the model before analyzing it.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP is implemented by first running a forward pass through the model, keeping track of the intermediate activations, followed by a backward pass that computes the relevances.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"To keep the LRP implementation simple and maintainable, ExplainableAI.jl does not pre-compute \"nested\" activations. 
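Taken together, a typical preparation pipeline might look like the following sketch, using the EpsilonPlusFlat composite preset documented above:

```julia
using ExplainableAI, Flux

model = strip_softmax(model)  # explain logits instead of probabilities
model = canonize(model)       # fuse BatchNorm layers into preceding Dense/Conv layers
analyzer = LRP(model, EpsilonPlusFlat())
```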
Instead, for every internal chain, a new forward pass is run to compute activations.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"By \"flattening\" a model, this overhead can be avoided. For this purpose, ExplainableAI.jl provides the function flatten_model:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"model_flat = flatten_model(model)","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This function is called by default when creating an LRP analyzer. Note that we pass the unflattened model to the analyzer, but analyzer.model is flattened:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"analyzer = LRP(model)\nanalyzer.model","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"If this flattening is not desired, it can be disabled by passing the keyword argument flatten=false to the LRP constructor.","category":"page"},{"location":"generated/lrp/basics/#LRP-rules","page":"Basic usage","title":"LRP rules","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"By default, the LRP constructor will assign the ZeroRule to all layers.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP(model)","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This analyzer will return heatmaps that look identical to InputTimesGradient.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP's strength lies in assigning different rules to different layers, based on their functionality in the neural network[1]. 
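For example, stabilizing rules such as the EpsilonRule are commonly assigned to dense layers, while convolutional layers often receive the ZPlusRule or GammaRule. Assuming the composite API described in Assigning rules to layers, such an assignment could be sketched as follows (the concrete rule choice here is purely illustrative):","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"composite = Composite(\n GlobalTypeMap(\n Conv => ZPlusRule(),\n Dense => EpsilonRule(),\n ),\n) # illustrative sketch of a hand-written composite","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"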
ExplainableAI.jl implements many LRP rules out of the box, but it is also possible to implement custom rules.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"To assign different rules to different layers, use one of the composite presets, or create your own composite, as described in Assigning rules to layers.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"composite = EpsilonPlusFlat() # using composite preset EpsilonPlusFlat","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"LRP(model, composite)","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-layerwise","page":"Basic usage","title":"Computing layerwise relevances","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"If you are interested in computing layerwise relevances, call analyze with an LRP analyzer and the keyword argument layerwise_relevances=true.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"The layerwise relevances can be accessed in the extras field of the returned Explanation:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"input = rand(Float32, 32, 32, 3, 1) # dummy input for our convolutional neural network\n\nexpl = analyze(input, analyzer; layerwise_relevances=true)\nexpl.extras.layerwise_relevances","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"Note that the layerwise relevances are only kept for layers in the outermost Chain of the model. When using our unflattened model, we only obtain three layerwise relevances, one for each chain in the model and the output relevance:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"analyzer = LRP(model; flatten=false) # use unflattened model\n\nexpl = analyze(input, analyzer; layerwise_relevances=true)\nexpl.extras.layerwise_relevances","category":"page"},{"location":"generated/lrp/basics/#docs-lrp-performance","page":"Basic usage","title":"Performance tips","text":"","category":"section"},{"location":"generated/lrp/basics/#Using-LRP-without-a-GPU","page":"Basic usage","title":"Using LRP without a GPU","text":"","category":"section"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"Since ExplainableAI.jl's LRP implementation makes use of Tullio.jl, analysis can be accelerated by loading either","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"a package from the JuliaGPU ecosystem, e.g. CUDA.jl, if a GPU is available\nLoopVectorization.jl if only a CPU is available.","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This only requires loading the LoopVectorization.jl package before ExplainableAI.jl:","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"using LoopVectorization\nusing ExplainableAI","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"[1]: G. 
Montavon et al., Layer-Wise Relevance Propagation: An Overview","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"","category":"page"},{"location":"generated/lrp/basics/","page":"Basic usage","title":"Basic usage","text":"This page was generated using Literate.jl.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"EditURL = \"../../literate/lrp/custom_rules.jl\"","category":"page"},{"location":"generated/lrp/custom_rules/#docs-custom-rules","page":"Custom LRP rules","title":"Custom LRP rules","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"One of the design goals of ExplainableAI.jl is to combine ease of use and extensibility for the purpose of research.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"This example will show you how to implement custom LRP rules, building on the basics shown in the Getting started section.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"We start out by loading the same pre-trained LeNet5 model and MNIST input data:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"using ExplainableAI\nusing Flux\nusing MLDatasets\nusing ImageCore\nusing BSON\n\nindex = 10\nx, y = MNIST(Float32, :test)[10]\ninput = reshape(x, 28, 28, 1, :)\n\nmodel = BSON.load(\"../../model.bson\", @__MODULE__)[:model] # hide\nmodel","category":"page"},{"location":"generated/lrp/custom_rules/#Implementing-a-custom-rule","page":"Custom LRP rules","title":"Implementing a custom rule","text":"","category":"section"},{"location":"generated/lrp/custom_rules/#Step-1:-Define-rule-struct","page":"Custom LRP rules","title":"Step 1: Define rule struct","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"Let's define a rule that modifies the weights and biases of our layer on the forward pass. 
The rule has to be of supertype AbstractLRPRule.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"struct MyGammaRule <: AbstractLRPRule end","category":"page"},{"location":"generated/lrp/custom_rules/#docs-custom-rules-impl","page":"Custom LRP rules","title":"Step 2: Implement rule behavior","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"It is then possible to dispatch on the following four utility functions with the rule type MyGammaRule to define custom rules without writing boilerplate code.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"modify_input(rule::MyGammaRule, input)\nmodify_parameters(rule::MyGammaRule, parameter)\nmodify_denominator(rule::MyGammaRule, denominator)\nis_compatible(rule::MyGammaRule, layer)","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"By default:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"modify_input doesn't change the input\nmodify_parameters doesn't change the parameters\nmodify_denominator avoids division by zero by adding a small epsilon-term (1.0f-9)\nis_compatible returns true if a layer has fields weight and bias","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"To extend internal functions, import them explicitly:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"import ExplainableAI: modify_parameters\n\nmodify_parameters(::MyGammaRule, param) = param + 0.25f0 * relu.(param)","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"Note that we didn't implement three of the four functions. This is because the defaults are sufficient to implement the GammaRule.","category":"page"},{"location":"generated/lrp/custom_rules/#Step-3:-Use-rule-in-LRP-analyzer","page":"Custom LRP rules","title":"Step 3: Use rule in LRP analyzer","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"We can directly use our rule to make an analyzer!","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"rules = [\n ZPlusRule(),\n EpsilonRule(),\n MyGammaRule(), # our custom GammaRule\n EpsilonRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n]\nanalyzer = LRP(model, rules)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"We just implemented our own version of the γ-rule in 2 lines of code. 
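Written out, our modify_parameters method applies the map param -> param + 0.25 * relu(param), i.e. it adds a quarter of each parameter's positive part, which is exactly the modification the pre-implemented GammaRule performs at this value of γ. 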
The heatmap perfectly matches the pre-implemented GammaRule:","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"rules = [\n ZPlusRule(),\n EpsilonRule(),\n GammaRule(), # XAI.jl's GammaRule\n EpsilonRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n ZeroRule(),\n]\nanalyzer = LRP(model, rules)\nheatmap(input, analyzer)","category":"page"},{"location":"generated/lrp/custom_rules/#Performance-tips","page":"Custom LRP rules","title":"Performance tips","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"Make sure functions like modify_parameters don't promote the type of weights (e.g. from Float32 to Float64).\nIf your rule MyRule doesn't modify weights or biases, defining modify_layer(::MyRule, layer) = nothing can reduce memory allocations and improve performance.","category":"page"},{"location":"generated/lrp/custom_rules/#docs-custom-rules-advanced","page":"Custom LRP rules","title":"Advanced layer modification","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"For more granular control over weights and biases, modify_weight and modify_bias can be used.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"If the layer doesn't use weights (layer.weight) and biases (layer.bias), ExplainableAI provides a lower-level variant of modify_parameters called modify_layer. This function is expected to take a layer and return a new, modified layer. To add compatibility checks between rule and layer types, extend is_compatible.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"warning: Extending modify_layer\nUse of a custom function modify_layer will overwrite functionality of modify_parameters, modify_weight and modify_bias for the implemented combination of rule and layer types. This is due to the fact that internally, modify_weight and modify_bias are called by the default implementation of modify_layer. 
modify_weight and modify_bias in turn call modify_parameters by default. The default call structure looks as follows:\n┌─────────────────────────────────────────┐\n│ modify_layer │\n└─────────┬─────────────────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_weight │ │ modify_bias │\n└─────────┬─────────┘ └─────────┬─────────┘\n │ calls │ calls\n┌─────────▼─────────┐ ┌─────────▼─────────┐\n│ modify_parameters │ │ modify_parameters │\n└───────────────────┘ └───────────────────┘\nTherefore modify_layer should only be extended for a specific rule and a specific layer type.","category":"page"},{"location":"generated/lrp/custom_rules/#Advanced-LRP-rules","page":"Custom LRP rules","title":"Advanced LRP rules","text":"","category":"section"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"To implement custom LRP rules that require more than modify_layer, modify_input and modify_denominator, take a look at the LRP developer documentation.","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"","category":"page"},{"location":"generated/lrp/custom_rules/","page":"Custom LRP rules","title":"Custom LRP rules","text":"This page was generated using Literate.jl.","category":"page"},{"location":"","page":"Home","title":"Home","text":"CurrentModule = ExplainableAI","category":"page"},{"location":"#ExplainableAI.jl","page":"Home","title":"ExplainableAI.jl","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Explainable AI in Julia using Flux.jl.","category":"page"},{"location":"#Installation","page":"Home","title":"Installation","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"To install this package and its dependencies, open the Julia REPL and run ","category":"page"},{"location":"","page":"Home","title":"Home","text":"julia> ]add ExplainableAI","category":"page"},{"location":"#Manual","page":"Home","title":"Manual","text":"","category":"section"},{"location":"#General-usage","page":"Home","title":"General usage","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\n \"generated/example.md\",\n \"generated/heatmapping.md\",\n \"generated/augmentations.md\",\n]\nDepth = 3","category":"page"},{"location":"#LRP","page":"Home","title":"LRP","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\n \"generated/lrp/basics.md\",\n \"generated/lrp/composites.md\",\n \"generated/lrp/custom_layer.md\",\n \"generated/lrp/custom_rules.md\",\n \"lrp/developer.md\",\n]\nDepth = 3","category":"page"},{"location":"#API-reference","page":"Home","title":"API reference","text":"","category":"section"},{"location":"#General","page":"Home","title":"General","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\"api.md\"]\nDepth = 2","category":"page"},{"location":"#LRP-2","page":"Home","title":"LRP","text":"","category":"section"},{"location":"","page":"Home","title":"Home","text":"Pages = [\"lrp/api.md\"]\nDepth = 2","category":"page"}] }