Feature-Weighted Conformal Prediction for Interpretable Machine Learning Models
Keywords: Adaptive prediction intervals, Conformal prediction, Feature importance, Interpretability, Uncertainty quantification

Abstract
Reliable uncertainty quantification is essential for deploying machine learning models in high-stakes decision-making scenarios. Conformal Prediction (CP) offers distribution-free prediction intervals with valid coverage under minimal assumptions, but standard CP methods often ignore model interpretability and fail to incorporate feature relevance into their nonconformity scores. This paper introduces Feature-Weighted Conformal Prediction (FWCP), a novel framework that leverages feature importance from interpretable models (e.g., decision trees, generalized additive models) to construct adaptive nonconformity measures. FWCP produces prediction intervals that are narrower in regions of high model confidence and wider in regions of uncertainty, while maintaining valid coverage. In experiments on both synthetic and real-world data, FWCP consistently achieves narrower average interval widths than baseline CP methods while preserving valid coverage, interpretability, and computational efficiency.
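To make the idea concrete, the following is a minimal sketch of one plausible instantiation of a feature-weighted split conformal regressor, not the paper's actual algorithm. The assumptions are ours: feature importances from a random forest rescale the inputs, and a k-NN estimate of absolute residuals in that importance-weighted feature space plays the role of the local difficulty function sigma(x) used to normalize the nonconformity scores.

```python
# Hypothetical sketch of feature-weighted split conformal prediction.
# Assumptions (not from the paper): importances rescale features, and a
# k-NN residual model in the weighted space gives the difficulty sigma(x).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic heteroscedastic data: noise grows with |x0|.
X = rng.uniform(-2, 2, size=(1500, 3))
y = X[:, 0] ** 2 + rng.normal(0, 0.1 + 0.4 * np.abs(X[:, 0]))

# Three-way split: proper training, calibration, and test sets.
X_tr, y_tr = X[:800], y[:800]
X_cal, y_cal = X[800:1200], y[800:1200]
X_te, y_te = X[1200:], y[1200:]

# 1) Fit the base model and extract its feature importances.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
w = model.feature_importances_                  # non-negative, sums to 1

# 2) Difficulty estimate sigma(x): k-NN regression of absolute training
#    residuals, with distances computed in the importance-weighted space.
resid_tr = np.abs(y_tr - model.predict(X_tr))
knn = KNeighborsRegressor(n_neighbors=25).fit(X_tr * w, resid_tr)

def sigma(X_in):
    return knn.predict(X_in * w) + 1e-6         # keep scores finite

# 3) Calibrate: normalized nonconformity scores on the calibration set,
#    with the standard finite-sample quantile correction.
scores = np.abs(y_cal - model.predict(X_cal)) / sigma(X_cal)
alpha = 0.1
n = len(scores)
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# 4) Predict: interval width scales with the local difficulty estimate.
def predict_interval(X_new):
    mu, s = model.predict(X_new), sigma(X_new)
    return mu - q * s, mu + q * s

lo, hi = predict_interval(X_te)
print("empirical coverage:", np.mean((y_te >= lo) & (y_te <= hi)))
```

Because the calibration quantile is taken over the normalized scores, marginal coverage of roughly 1 - alpha is retained, while the intervals adapt: they widen where the weighted k-NN residual estimate (driven by the important features) is large and narrow where it is small.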