# Notes

Branch for research on classifier accuracy prediction.

Some work is already done for the binary case (models_binary.py and main_binary.py).

I would like to approach the multiclass case directly now.

I think I will frame the problem setting as follows.

A Classifier Accuracy Prediction (CAP) method is a method that receives as input:

- h: a classifier (already trained),
- V: a labelled collection (for training the CAP),
- acc_func: a callable, i.e., any evaluation function that operates on a contingency table,

and implements:

- fit: trains the CAP,
- predict: predicts the evaluation measure on unseen data (provided by default: it calls predict_ct and applies acc_func to the result),
- predict_ct: predicts the contingency table (see the sketch below).
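
A minimal sketch of this interface, just to fix ideas (names and signatures are illustrative assumptions, not the final implementation):

```python
from abc import ABC, abstractmethod

import numpy as np


def accuracy_from_ct(ct: np.ndarray) -> float:
    # Example acc_func: vanilla accuracy computed from an
    # (n_classes x n_classes) contingency table.
    return np.diag(ct).sum() / ct.sum()


class ClassifierAccuracyPrediction(ABC):
    # Hypothetical base class for CAP methods (illustrative only).

    def __init__(self, h, acc_func=accuracy_from_ct):
        self.h = h                # classifier, already trained
        self.acc_func = acc_func  # any function of a contingency table

    @abstractmethod
    def fit(self, V):
        # trains the CAP on the labelled collection V
        ...

    @abstractmethod
    def predict_ct(self, X) -> np.ndarray:
        # predicts the contingency table for the unlabelled data X
        ...

    def predict(self, X) -> float:
        # predicts the evaluation measure: estimate the contingency
        # table, then apply acc_func to it
        return self.acc_func(self.predict_ct(X))
```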

Important:

- When the quantifiers' hyperparameters are optimized, we should make sure that the classifier is not being reused, or that the hyperparameters do not include any from the underlying classifier (see the sketch below).
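
One simple way to enforce this (a hypothetical helper, assuming sklearn-style nested parameter names such as classifier__C):

```python
def quantifier_only_grid(param_grid: dict, classifier_prefix: str = 'classifier__') -> dict:
    # Hypothetical helper: drop any hyperparameter belonging to the underlying
    # classifier (identified by an sklearn-style prefix), so that model selection
    # for the quantifier cannot silently retune the classifier used by the CAP.
    return {k: v for k, v in param_grid.items() if not k.startswith(classifier_prefix)}


# Example: 'classifier__C' is excluded from the search space, 'val_split' survives.
grid = {'classifier__C': [0.1, 1.0, 10.0], 'val_split': [3, 5]}
print(quantifier_only_grid(grid))  # {'val_split': [3, 5]}
```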

TODO:

- Add additional covariates [done, check]
- Add model selection for CAP
- Add Doc
- Add ATC
- Add APP in training and adapt plots and tables
- Add plots: error by drift, etc.
- Add a characterization of classifiers in terms of accuracy and use this as a variable when analyzing results