Comparison of strategies for validating binary logistic regression models
The bias-corrected curve (see below) shows whether the apparent fit of the model is overfitted. The explanation I found on pages 270-271 reads: "The nonparametric estimate is evaluated at a sequence of predicted probability levels.
Then the distances from the 45° line are compared with the differences when the current model is evaluated back on the whole sample (or omitted sample for cross-validation).
The differences in the differences are estimates of overoptimism.
After averaging over many replications, the predicted-value-specific differences are then subtracted from the apparent differences and an adjusted calibration curve is obtained.
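The quoted procedure can be sketched in code. This is not the actual `rms::calibrate` implementation: it uses simulated data, scikit-learn's `LogisticRegression`, and a simple binned estimate of the observed event rate in place of the lowess smoother that `calibrate` uses for the nonparametric calibration curve. All names and parameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated data (illustrative only): one informative predictor out of five
n = 500
X = rng.normal(size=(n, 5))
y = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

def calibration_binned(p_model, y_obs, p_levels, width=0.1):
    """Nonparametric (binned) estimate of the observed event rate at each
    predicted-probability level. calibrate() uses lowess; binning is a
    simpler stand-in for this sketch."""
    obs = np.empty_like(p_levels)
    for i, p in enumerate(p_levels):
        mask = np.abs(p_model - p) < width
        obs[i] = y_obs[mask].mean() if mask.any() else np.nan
    return obs

p_levels = np.linspace(0.1, 0.9, 9)

# Apparent calibration: fit on the full sample, evaluate on the full sample;
# "apparent" is the distance from the 45-degree line at each level
model = LogisticRegression().fit(X, y)
p_app = model.predict_proba(X)[:, 1]
apparent = calibration_binned(p_app, y, p_levels) - p_levels

# Efron-Gong optimism bootstrap: refit on resamples, compare the distance
# from the 45-degree line in the bootstrap sample vs. the whole sample
B = 50
optimism = np.zeros_like(p_levels)
for _ in range(B):
    idx = rng.integers(0, n, n)
    m_b = LogisticRegression().fit(X[idx], y[idx])
    p_boot = m_b.predict_proba(X[idx])[:, 1]
    d_boot = calibration_binned(p_boot, y[idx], p_levels) - p_levels
    p_orig = m_b.predict_proba(X)[:, 1]
    d_orig = calibration_binned(p_orig, y, p_levels) - p_levels
    optimism += d_boot - d_orig  # "difference in the differences"
optimism /= B

# Subtract the averaged optimism from the apparent curve
bias_corrected = p_levels + apparent - optimism
```

The bias-corrected values can then be plotted against `p_levels` next to the apparent curve and the 45° line, which is essentially what the `calibrate` plot shows.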
I have a question regarding the calibration plot for a binary logistic regression model (`calibrate`) in the rms (Regression Modeling Strategies) package.
You can also think of logistic regression as a generalized linear model for a binary outcome, where the log of the odds is modeled as a linear function of the predictors.
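A minimal sketch of this view, on simulated data (the predictor, coefficients, and sample size are all illustrative assumptions): the fitted probabilities, transformed to the log-odds scale, coincide with the model's linear predictor.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: a single predictor driving a binary outcome
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1))
y = (rng.random(200) < 1 / (1 + np.exp(-2 * X[:, 0]))).astype(int)

m = LogisticRegression().fit(X, y)
p = m.predict_proba(X)[:, 1]

# The model is linear on the log-odds (logit) scale:
log_odds = np.log(p / (1 - p))
linear_part = m.decision_function(X)  # intercept + X @ coefficients
assert np.allclose(log_odds, linear_part)
```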
This is the Efron-Gong optimism bootstrap in its original version.