An EP algorithm for learning highly interpretable classifiers


This website contains additional material for the paper titled "An EP algorithm for learning highly interpretable classifiers".

Full paper: A. Cano, A. Zafra, and S. Ventura. "An EP algorithm for learning highly interpretable classifiers". In Proceedings of the 11th International Conference on Intelligent Systems Design and Applications (ISDA '11), pages 325-330, 2011.

Abstract

This paper introduces an Evolutionary Programming algorithm for solving classification problems using highly interpretable IF-THEN classification rules. The algorithm aims to maximize the comprehensibility of the classifier by minimizing the number of rules and employing only relevant attributes. The proposal is evaluated and compared with five other well-known classification techniques on 18 datasets. The experimental results show its competitive accuracy and the significantly better interpretability of the resulting classifiers in terms of number of rules, number of conditions, and a complexity metric.
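
As an illustration of this representation (not a rule set taken from the paper), an interpretable classifier of this kind for the Iris dataset could look as follows; the attribute thresholds are well-known splits for Iris and are given here only as an example:

IF (petal-length <= 1.9) THEN class = Iris-setosa
IF (petal-width <= 1.7) THEN class = Iris-versicolor
ELSE class = Iris-virginica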

Datasets

The datasets employed were selected from the KEEL repository website. They are varied, covering different degrees of complexity and different numbers of classes, features, and instances.

Dataset          # Instances   # Attributes   # Classes
Zoo                      101             16           7
Iris                     150              4           3
Wine                     178             13           3
Glass                    214              9           7
Dermatology              358             34           6
Haberman                 306              3           2
Ecoli                    336              7           8
Australian               690             14           2
Pima                     768              8           2
Contraceptive           1473              9           3
Thyroid                 7200             21           3
Monk-2                   432              6           2
Lymphography             148             18           4
Bupa                     345              6           2
Flare                   1066             11           6
Saheart                  462              9           2
NewThyroid               215              5           3
Tae                      151              5           3

Software

The algorithms used in the experimentation are available on the KEEL software website.

A WEKA plugin package of the EP-IRC algorithm is available for download (requires WEKA 3.7.3).

A JCLEC version of the EP-IRC algorithm is also available for download.

To run the configuration example with the Iris dataset, just type:

java -jar jclec4-EP-IRC.jar EPIRC.cfg 

Results

All algorithms are tested using 10-fold stratified cross-validation on all datasets, and all experiments are repeated with 5 different seeds for the stochastic methods.
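
As a point of reference for this protocol, the following sketch runs stratified 10-fold cross-validation with 5 seeds through the WEKA API (assuming WEKA 3.7.x on the classpath). JRip, WEKA's RIPPER implementation, stands in for the classifier under test; once the EP-IRC plugin is installed its classifier class could be substituted. The dataset path and the concrete seed values 1-5 are illustrative assumptions.

import java.util.Random;

import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.rules.JRip;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class CrossValidationSketch {
    public static void main(String[] args) throws Exception {
        // Load a dataset in ARFF format; "iris.arff" is a placeholder path.
        Instances data = new DataSource("iris.arff").getDataSet();
        data.setClassIndex(data.numAttributes() - 1);

        // JRip (WEKA's RIPPER) stands in for EP-IRC here.
        Classifier classifier = new JRip();

        // 10-fold cross-validation repeated with 5 seeds;
        // crossValidateModel stratifies the folds internally.
        for (int seed = 1; seed <= 5; seed++) {
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(classifier, data, 10, new Random(seed));
            System.out.printf("Seed %d: accuracy = %.2f%%%n", seed, eval.pctCorrect());
        }
    }
}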

The last figure shows the application of the Bonferroni-Dunn test for p = 0.05, whose critical difference is 1.606. This critical difference is added to the mean rank of the control algorithm to determine the threshold beyond which another algorithm's results differ significantly from those of the control algorithm. The figure is a bar chart whose bars are proportional to the mean rank obtained by each algorithm. The limits of the test are drawn as a thicker horizontal line: algorithms whose bars exceed this line obtain results significantly different from the control algorithm, which is EP-IRC. Hence, algorithms whose bars fall beyond the critical difference to the left of the control algorithm are significantly worse, and those beyond the critical difference to the right are significantly better.

Observing this figure, no algorithm is significantly better than EP-IRC in terms of accuracy. C45R, MPLCS, and RIPPER are significantly worse than the proposal regarding the number of rules, and all the algorithms are significantly worse regarding all the other interpretability metrics considered.
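
For reference, the critical difference of 1.606 is consistent with Demšar's formulation of the Bonferroni-Dunn test for k = 6 algorithms (EP-IRC plus the 5 baselines) over N = 18 datasets. The sketch below reproduces the computation, assuming the two-tailed critical value q_0.05 = 2.576 for six classifiers:

public class BonferroniDunnCD {
    public static void main(String[] args) {
        int k = 6;           // algorithms compared: EP-IRC plus the 5 baselines
        int n = 18;          // number of datasets
        double q005 = 2.576; // critical value q_0.05 for k = 6 (Demsar, 2006)

        // CD = q_alpha * sqrt(k * (k + 1) / (6 * N))
        double cd = q005 * Math.sqrt(k * (k + 1) / (6.0 * n));
        System.out.printf("Critical difference: %.3f%n", cd); // prints 1.606
    }
}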
