From f783995eac12442af0c2b288368ae0802d4bea51 Mon Sep 17 00:00:00 2001
From: arsenkhy <77200251+arsenkhy@users.noreply.github.com>
Date: Mon, 25 Jul 2022 20:41:17 +0600
Subject: [PATCH] Fix spelling and grammar errors in README.md (#110)

* Fix spelling and grammar errors in README.md

- deviding -> dividing
- initialisation -> initialization
- max tend -> max tends
- utilising -> utilizing

* Apply suggestion

- Changed z->s to use the British spelling
---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 983a01b..abcb788 100644
--- a/README.md
+++ b/README.md
@@ -113,7 +113,7 @@ reason the first step of correction is the widely used Benjamini Hochberg FDR.
 Following that however we also need to account for the fact that we have been
 testing the same features over and over again in each iteration with the same
 test. For this scenario the Bonferroni is perfect, so it is applied by
-deviding the p-value threshold with the current iteration index.
+dividing the p-value threshold with the current iteration index.
 
 If this two step correction is not required, the two_step parameter has to be
 set to False, then (with perc=100) BorutaPy behaves exactly as the R version.
@@ -134,7 +134,7 @@ __n_estimators__ : int or string, default = 1000
 __perc__ : int, default = 100
 > Instead of the max we use the percentile defined by the user, to pick
 > our threshold for comparison between shadow and real features. The max
-> tends to be too stringent. This provides a finer control over this. The
+> tends to be too stringent. This provides a finer control over this. The
 > lower perc is the more false positives will be picked as relevant but
 > also the less relevant features will be left out. The usual trade-off.
 > The default is essentially the vanilla Boruta corresponding to the max.
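The first hunk touched by this patch describes BorutaPy's two-step multiple-testing correction: Benjamini-Hochberg FDR within an iteration, then a Bonferroni-style tightening obtained by dividing the p-value threshold by the current iteration index. The following is a minimal sketch of that idea, not BorutaPy's actual source; all function and variable names here are illustrative:

```python
def benjamini_hochberg(p_values, alpha):
    """Step 1: return a boolean list, True where BH rejects H0 at level alpha."""
    m = len(p_values)
    # Sort p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= (rank / m) * alpha:
            k_max = rank
    # ... and reject every hypothesis up to that rank.
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= k_max:
            reject[idx] = True
    return reject

def two_step_correction(p_values, iteration, alpha=0.05):
    """Step 2: shrink alpha by the iteration index (Bonferroni over
    repeated iterations), then apply BH at the corrected level."""
    corrected_alpha = alpha / iteration
    return benjamini_hochberg(p_values, corrected_alpha)

# A p-value that passes in iteration 1 may no longer pass in iteration 10,
# since the effective threshold has been divided by 10.
hits_iter1 = two_step_correction([0.01, 0.20, 0.80], iteration=1)
hits_iter10 = two_step_correction([0.01, 0.20, 0.80], iteration=10)
```

Setting `two_step=False` (with `perc=100`) skips this machinery entirely, which is why the README notes that BorutaPy then behaves exactly as the R version.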
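The second hunk corrects the description of the `perc` parameter: instead of taking the max of the shadow-feature importances as the cutoff, BorutaPy takes a user-chosen percentile of them, with `perc=100` reproducing the max (vanilla Boruta) and lower values loosening the cutoff. A sketch of that thresholding, assuming a nearest-rank percentile; the names are hypothetical, not BorutaPy's API:

```python
def shadow_threshold(shadow_importances, perc=100):
    """Return the perc-th percentile (nearest-rank method) of the
    shadow importances, used as the acceptance cutoff for real features."""
    xs = sorted(shadow_importances)
    if perc >= 100:
        return xs[-1]  # perc=100 -> the max, i.e. vanilla Boruta
    # Nearest rank: smallest value with at least perc% of the data
    # at or below it; ceil(perc * n / 100) via negated floor division.
    rank = max(1, -(-perc * len(xs) // 100))
    return xs[rank - 1]

shadow = [0.10, 0.30, 0.50, 0.90]
cut_max = shadow_threshold(shadow, perc=100)  # stringent: only beats 0.90
cut_p50 = shadow_threshold(shadow, perc=50)   # permissive: beats 0.30
```

Lowering `perc` lets more real features clear the bar, which is the trade-off the README describes: more false positives accepted, but fewer weakly relevant features left out.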