Cost Sensitive Boosting
A novel framework is proposed for the design of cost-sensitive boosting algorithms. The framework is based on the
identification of two necessary conditions for optimal cost-sensitive learning: 1) expected losses must be minimized by optimal
cost-sensitive decision rules, and 2) empirical loss minimization must emphasize the neighborhood of the target cost-sensitive
boundary. It is shown that these conditions enable the derivation of cost-sensitive losses that can be minimized by gradient descent, in
the functional space of convex combinations of weak learners, to produce novel boosting algorithms. The proposed framework is
applied to the derivation of cost-sensitive extensions of AdaBoost, RealBoost, and LogitBoost. Experimental evidence, with a synthetic
problem, standard data sets, and the computer vision problems of face and car detection, is presented in support of the cost-sensitive
optimality of the new algorithms. Their performance is also compared to that of various previous cost-sensitive boosting proposals,
as well as the popular combination of large-margin classifiers and probability calibration. Cost-sensitive boosting is shown to
consistently outperform all other methods.
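To make the idea concrete, here is a minimal sketch of boosting under a cost-weighted exponential loss, sum_i C_i exp(-y_i f(x_i)), minimized by the usual stagewise greedy fitting of weak learners. This is an illustration of the general flavor only, not the paper's exact CS-AdaBoost: the function names, the per-example `costs` input, and the choice of decision stumps as weak learners are all assumptions made for the example.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def cost_weighted_adaboost(X, y, costs, n_rounds=50):
    """Stagewise greedy minimization of the cost-weighted exponential
    loss sum_i costs[i] * exp(-y[i] * f(x[i])), with labels y in {-1, +1}.
    `costs` is a hypothetical per-example misclassification cost, e.g.
    a larger value for the rare or expensive class."""
    y = np.asarray(y)
    w = np.asarray(costs, dtype=float)
    w = w / w.sum()                        # initial weights proportional to costs
    ensemble = []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)   # weak learner on weighted data
        pred = stump.predict(X)
        err = w[pred != y].sum()           # cost-weighted training error
        if err <= 0 or err >= 0.5:         # degenerate or no better than chance
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w = w * np.exp(-alpha * y * pred)  # standard exponential reweighting
        w = w / w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def decision_value(ensemble, X):
    """Real-valued ensemble score f(x); its sign is the predicted label."""
    return sum(alpha * stump.predict(X) for alpha, stump in ensemble)
```

Note that this sketch encodes the cost asymmetry only through the weight initialization; the losses derived in the paper instead build the asymmetry into the loss itself (roughly, different scalings for the two classes), which is what yields the cost-sensitive extensions of AdaBoost, RealBoost, and LogitBoost.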
Many classification problems, such as fraud detection, business decision making, and medical diagnosis, are naturally cost-sensitive. These problems require cost-sensitive extensions of state-of-the-art learning techniques.
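A short worked example of what cost sensitivity means for the decision rule: if a calibrated classifier supplies an estimate p of P(y=1|x), and c_fp and c_fn are the (illustrative, assumed) costs of a false positive and a false negative, then predicting positive has expected cost (1-p)*c_fp while predicting negative costs p*c_fn, so expected cost is minimized by thresholding p at c_fp/(c_fp+c_fn) rather than at 1/2.

```python
def cost_sensitive_threshold(c_fp, c_fn):
    """Bayes-optimal threshold on an estimate p of P(y=1|x):
    predicting positive is cheaper in expectation when
    (1 - p) * c_fp < p * c_fn, i.e. when p > c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)

# Example: if a missed fraud case (false negative) costs 10x a
# false alarm, positives should be declared whenever p > 1/11:
# cost_sensitive_threshold(c_fp=1.0, c_fn=10.0)  # ~0.091
```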
Related projects:
- Risk minimization, probability elicitation, and cost-sensitive SVMs
- Asymmetric Boosting
Copyright © 2007 www.svcl.ucsd.edu