diff --git a/tutorials/README.md b/tutorials/README.md
index abc28118..21020a66 100644
--- a/tutorials/README.md
+++ b/tutorials/README.md
@@ -78,9 +78,30 @@ Also the main conclusions (🠊) from the thesis (on images and text) about the
| $p_{keep}$ | **optimized** (*i*, *txt*), **$0.5$** (*ts*) | $0.1$| $0.1$ | default | $0.1$| $0.1$|
| $n_{features}$ |**$8$** | $6$ |default | default | default | $16$ |
-🠊 The most crucial parameter is $p_{keep}$. Lower values of $p_{keep}$ lead to more sentitive explanations (observed for both images and text).
+🠊 The most crucial parameter is $p_{keep}$. Lower values of $p_{keep}$ lead to more sensitive explanations (observed for both images and text). Easier classification tasks usually require a lower $p_{keep}$, as this causes more perturbation of the input and therefore a more distinct signal in the model predictions (see the sketch below).
-🠊 The feature resolution $n_{features}$ exhibited an optimum at a value of $6$.
+🠊 The feature resolution $n_{features}$ exhibited an optimum at a value of $6$. Higher values can offer a finer-grained result but require (far) more masks ($n_{masks}$). The optimal value also depends on the scale of the phenomena in the input data that the explanation should capture.
+
+🠊 A larger $n_{masks}$ returns more consistent results at the cost of computation time. If two identical runs yield (very) different results, those results likely contain a lot of (or even mostly) noise, and a higher value of $n_{masks}$ should be used.
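+
+A minimal sketch of how these hyperparameters could be set in code (assuming `dianna`'s `explain_image` entry point and RISE keyword names `n_masks`, `feature_res` and `p_keep`; `model` and `image` are placeholders, not part of this README):
+
+```python
+import dianna
+
+# Lower p_keep perturbs the input more strongly; a larger n_masks trades
+# computation time for more consistent (less noisy) heatmaps.
+relevances = dianna.explain_image(
+    model,           # trained model or callable returning class scores
+    image,           # input image as a numpy array
+    method="RISE",
+    labels=[0],      # index of the class to explain
+    n_masks=1000,    # increase if repeated runs disagree
+    feature_res=8,   # masking grid resolution (n_features in the table above)
+    p_keep=0.1,      # probability that a grid cell stays unmasked
+)
+```
+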
#### LIME
| Hyperparameter | Default value | (*i*) | (*ts*)| (*ts*)|