---
title: "CompStats"
format:
  dashboard:
    logo: images/ingeotec.png
    orientation: columns
    nav-buttons: [github]
    theme: cosmo
execute:
  freeze: auto
---

# Introduction

## Column

::: {.card title='Introduction'}
Collaborative competitions have gained popularity in science and technology. These competitions involve defining tasks, selecting evaluation scores, and devising methods to verify the results. In the standard scenario, participants receive a training set and are expected to provide a solution for a held-out dataset kept by the organizers. An essential challenge for organizers arises when comparing the algorithms' performance, assessing multiple participants, and ranking them. Statistical tools are often used for this purpose; however, traditional statistical methods frequently fail to capture decisive differences between the systems' performance. CompStats implements an evaluation methodology for statistically analyzing the results of a competition and comparing the participating systems. CompStats offers several advantages, including off-the-shelf comparisons with correction mechanisms and the inclusion of confidence intervals.
:::

::: {.card title='Installing using conda'}

`CompStats` can be installed using the conda package manager with the following instruction.

```{sh}
conda install --channel conda-forge CompStats
```
:::

::: {.card title='Installing using pip'}
A more general approach to installing `CompStats` is through pip, as illustrated in the following instruction.

```{sh}
pip install CompStats
```
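
Either installation can be verified with a quick import check. The following one-liner is a suggested smoke test, not part of the package's documented instructions.

```{sh}
python -c "import CompStats"
```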
:::

# Quick Start Guide

## Column

To illustrate the use of `CompStats`, the following snippets show an example. The instructions load the necessary libraries: the function to obtain the problem (e.g., digits), four different classifiers, and, in the last line, the score used to measure performance and compare the algorithms.

```{python}
#| echo: true

from sklearn.svm import LinearSVC
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier, HistGradientBoostingClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.base import clone
from CompStats.metrics import f1_score
```

The first step is to load the digits problem and split the dataset into training and validation sets. The second step is to estimate the parameters of a linear Support Vector Machine and predict the validation set's classes. The predictions are stored in the variable `hy`.

```{python}
#| echo: true

X, y = load_digits(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3)
m = LinearSVC().fit(X_train, y_train)
hy = m.predict(X_val)
```

## Column

Once the predictions are available, it is time to measure the algorithm's performance, as seen in the following code. Note that the API of `sklearn.metrics` is followed; the difference is that the function returns an instance whose methods can be used to estimate different performance statistics and to compare algorithms.

```{python}
#| echo: true

score = f1_score(y_val, hy, average='macro')
score
```

The previous code shows the macro-f1 score and its standard error. The point estimate and the standard error are stored in the attributes `statistic` and `se`, respectively.

```{python}
#| echo: true

score.statistic, score.se
```
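
Because `statistic` and `se` are plain numbers at this point, they can be combined into a rough normal-approximation 95% confidence interval, as sketched below. This is only an illustration of how the two attributes relate; it is not CompStats' own interval estimate.

```{python}
#| echo: true

# Rough 95% interval: point estimate plus/minus 1.96 standard errors.
lower = score.statistic - 1.96 * score.se
upper = score.statistic + 1.96 * score.se
(lower, upper)
```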

Continuing with the example, let us assume that one wants to test another classifier on the same problem, in this case, a random forest, as can be seen in the following two lines. The second line predicts the validation set and adds the predictions to the analysis.

```{python}
#| echo: true

ens = RandomForestClassifier().fit(X_train, y_train)
score(ens.predict(X_val), name='Random Forest')
```

Let us incorporate more predictions, now with a Naive Bayes classifier and a Histogram Gradient Boosting classifier, as seen below.

```{python}
#| echo: true

nb = GaussianNB().fit(X_train, y_train)
score(nb.predict(X_val), name='Naive Bayes')
hist = HistGradientBoostingClassifier().fit(X_train, y_train)
score(hist.predict(X_val), name='Hist. Grad. Boost. Tree')
```
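
With all four systems registered, displaying the `score` object once more should summarize the comparison in a single view; this wrap-up assumes the object's display reflects every prediction added so far, as the earlier outputs suggest.

```{python}
#| echo: true

# Render the updated analysis with all four classifiers.
score
```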