Deep neural networks (DNNs) have revolutionized fields such as computer vision and robotics, but key limitations, such as poorly calibrated posterior probabilities and the unidentifiability of class-conditional densities, hinder their use in safety-critical applications. This thesis addresses these challenges by developing probabilistic DNN frameworks tailored to different settings, including fine-grained classification, multi-class learning, and multi-label learning, with the goal of improving reliability and trustworthiness in real-world deployment.
Deep neural networks (DNNs) used in multi-label learning often suffer from poor calibration. We identify that popular asymmetric losses, designed to handle class imbalance, lack the strictly proper property necessary for accurate probability estimation. To address this, we propose the Strictly Proper Asymmetric (SPA) loss, which imposes calibration constraints during training while retaining the asymmetric treatment of positive and negative labels. Extensive experiments demonstrate that this approach significantly reduces calibration error while maintaining state-of-the-art accuracy.
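For reference, the sketch below shows a standard asymmetric multi-label loss in the style of ASL; its focusing exponents and probability shifting are the asymmetries referred to above, and they are what break strict properness. The hyperparameter values and function names are illustrative, and the SPA loss itself is not reproduced here.

```python
import torch

def asymmetric_loss(logits, targets, gamma_pos=1.0, gamma_neg=4.0, margin=0.05):
    """Per-label asymmetric binary loss in the style of ASL (illustrative values).

    The negative branch uses probability shifting (margin) and a stronger
    focusing exponent; these asymmetries help with label imbalance but
    break the strictly proper property that the SPA loss addresses.
    """
    p = torch.sigmoid(logits)                     # per-label probabilities
    p_shift = (p - margin).clamp(min=0)           # shifted probability for negatives
    loss_pos = ((1 - p) ** gamma_pos) * torch.log(p.clamp(min=1e-8))
    loss_neg = (p_shift ** gamma_neg) * torch.log((1 - p_shift).clamp(min=1e-8))
    loss = -targets * loss_pos - (1 - targets) * loss_neg
    return loss.mean()
```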
Deep neural networks (DNNs) often produce poorly calibrated class-posterior probabilities, largely because the cross-entropy loss focuses on the true-class probability and neglects the others. We show that calibrating a C-class problem can be reframed as calibrating C(C-1)/2 pairwise binary classification problems, which suggests that providing calibration supervision to all of these binary problems can improve DNN calibration. We introduce the Calibration by Pairwise Constraints (CPC) method, which incorporates two types of binary calibration constraints with minimal overhead over standard training.
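The pairwise reduction can be illustrated as follows: under a softmax model with logits z, the posterior of class i given that the label is either i or j equals sigmoid(z_i - z_j), so a C-class problem yields C(C-1)/2 binary problems to which calibration supervision can be applied. The sketch below only shows this reduction, not the specific CPC constraints; all names are illustrative.

```python
import torch

def pairwise_binary_posteriors(logits):
    """Reduce a C-class softmax into its C(C-1)/2 pairwise binary problems.

    For any pair (i, j), the probability of class i given that the label is
    i or j equals softmax_i / (softmax_i + softmax_j) = sigmoid(z_i - z_j).
    Calibration supervision can then be applied to each binary problem.
    """
    C = logits.shape[-1]
    idx_i, idx_j = torch.triu_indices(C, C, offset=1)   # all pairs i < j
    pair_logits = logits[..., idx_i] - logits[..., idx_j]
    return torch.sigmoid(pair_logits)                    # shape (..., C*(C-1)/2)
```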
Fine-grained novelty detection remains a challenge in deep classification due to inconsistencies between classification objectives and the requirements of novelty detection. We propose a Class-Conditional Gaussianity (CCG) loss that regularizes deep classifiers to produce Gaussian-like class-conditional feature distributions, improving both classification and novelty detection. By aligning the structure of the feature space with that of the theoretically optimal detector, the proposed method enhances outlier detection while maintaining high classification accuracy. Extensive experiments demonstrate that this approach outperforms existing methods on fine-grained visual classification tasks.
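As a rough illustration of the idea, the sketch below adds a Gaussian-likelihood regularizer over deep features to the usual cross-entropy, pulling each example's features toward its class mean under an assumed identity covariance. The exact CCG formulation is given in the paper and may differ; the class-mean table, the weight lam, and all names here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def gaussian_feature_regularizer(features, labels, class_means):
    """Illustrative regularizer encouraging Gaussian-like class-conditional features.

    Penalizes the negative log-likelihood of each feature under a Gaussian
    centered at its class mean with identity covariance (a center-loss-like
    term). This only sketches the idea of shaping the feature space.
    """
    mu = class_means[labels]                      # (batch, d) class-mean lookup
    return 0.5 * ((features - mu) ** 2).sum(dim=1).mean()

def total_loss(logits, features, labels, class_means, lam=0.1):
    """Cross-entropy plus the Gaussianity regularizer (weight lam is illustrative)."""
    return F.cross_entropy(logits, labels) + lam * gaussian_feature_regularizer(
        features, labels, class_means)
```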
Probabilistic Deep Neural Networks for Trustworthy Artificial Intelligence
Jiacheng Cheng
Ph.D. Thesis, University of California San Diego,
2025.
Towards Calibrated Multi-label Deep Neural Networks
Jiacheng Cheng and Nuno Vasconcelos
IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR),
2024.
Calibrating Deep Neural Networks by Pairwise Constraints
Jiacheng Cheng and Nuno Vasconcelos
IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR),
2022.
Learning Deep Classifiers Consistent with Fine-Grained Novelty Detection
Jiacheng Cheng and Nuno Vasconcelos
IEEE/CVF Conf. on Computer Vision and Pattern Recognition (CVPR),
2021.