predict.lrm

Computes a variety of types of predicted values for fits from
lrm and orm, either from the original dataset or for new
observations. The Mean.lrm and Mean.orm functions produce
an R function that computes the predicted mean of a numeric ordered
response variable from the linear predictor, which is assumed to have
been computed using the first intercept. The returned function has two
optional arguments if confidence intervals are desired: conf.int
and the design matrix X. When this derived function is called
with nonzero conf.int, an attribute named limits is attached
to the estimated mean. This is a list with elements lower and
upper containing normal approximations for confidence limits
using the delta method.
For orm fits on censored data, the function created by Mean.orm
has an argument tmax which specifies the restriction time for the
restricted mean survival time.
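As a minimal hedged sketch of the tmax argument, assuming a recent rms
with censored-data orm support (the data and fit below are purely
illustrative and are not part of this page's examples):

require(rms); require(survival)
set.seed(2)
times <- rexp(200); cens <- runif(200, 0, 3)
y2 <- pmin(times, cens); ev <- as.integer(times <= cens)  # right-censored response
x  <- rnorm(200)
g  <- orm(Surv(y2, ev) ~ x, x=TRUE)   # orm fit on censored data
M  <- Mean(g)                         # derived mean function
M(predict(g), tmax=1.5)   # restricted mean survival time, restriction time 1.5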
# S3 method for class 'lrm'
predict(object, ..., type=c("lp", "fitted",
"fitted.ind", "mean", "x", "data.frame",
"terms", "cterms", "ccterms", "adjto","adjto.data.frame",
"model.frame"), se.fit=FALSE, codes=FALSE)
# S3 method for class 'orm'
predict(object, ..., type=c("lp", "fitted",
"fitted.ind", "mean", "x", "data.frame",
"terms", "cterms", "ccterms", "adjto","adjto.data.frame",
"model.frame"), se.fit=FALSE, codes=FALSE)
# S3 method for class 'lrm'
Mean(object, codes=FALSE, ...)
# S3 method for class 'orm'
Mean(object, codes=FALSE, ...)

object: a fit object created by lrm or orm.

...: arguments passed to predictrms, such as kint and newdata
(which is used if you are predicting out of data). See
predictrms to see how NAs are handled. Ignored for the other functions.

type: see predict.rms for "x", "data.frame", "terms", "cterms",
"ccterms", "adjto", "adjto.data.frame", and
"model.frame". type="lp" is used to get
linear predictors (using the first intercept by default; specify
kint to use others). type="fitted"
is used to get all the probabilities \(Y\geq j\).
type="fitted.ind" gets all the individual probabilities
\(Y=j\) (not recommended for orm fits). For an ordinal response
variable, type="mean" computes
the estimated mean \(Y\) by summing values of \(Y\)
multiplied by the estimated \(Prob(Y=j)\). If \(Y\) was a character or
factor object, the levels are the character values or factor levels,
so these must be translatable to numeric unless codes=TRUE.
See the Hannah and Quigley reference below for the method of estimating
(and presenting) the mean score. If you specify one of
type="fitted", "fitted.ind", or "mean" you may not specify kint.

se.fit: applies only to type="lp"; set to TRUE to get standard errors.

codes: if TRUE, type="mean", Mean.lrm, and Mean.orm
use the integer codes \(1,2,\ldots,k\) for the \(k\)-level response
in computing the predicted mean response.

Value: a vector (for type="lp" with se.fit=FALSE, for
type="mean", or when only one
observation is being predicted), a list (with elements linear.predictors
and se.fit if se.fit=TRUE), a matrix (for type="fitted"
or type="fitted.ind"), a data frame, or a design matrix. For
Mean.lrm and Mean.orm, the result is an R function.
Hannah M, Quigley P: Presentation of ordinal regression analysis on the original scale. Biometrics 52:771–775; 1996.
# See help for predict.rms for several binary logistic
# regression examples
# Examples of predictions from ordinal models
set.seed(1)
y <- factor(sample(1:3, 400, TRUE), 1:3, c('good','better','best'))
x1 <- runif(400)
x2 <- runif(400)
dd <- datadist(x1, x2); options(datadist='dd')  # set before fitting so Predict() works later
f <- lrm(y ~ rcs(x1,4)*x2, x=TRUE) #x=TRUE needed for se.fit
# Get 0.95 confidence limits for Prob[better or best]
L <- predict(f, se.fit=TRUE) #omitted kint= so use 1st intercept
plogis(with(L, linear.predictors + 1.96*cbind(-se.fit,se.fit)))
#>             se.fit
#> 1   0.55632 0.79429
#> 2   0.49296 0.75272
#> 3   0.59717 0.89470
#> 4   0.58820 0.78211
#> 5   0.33221 0.63671
#> 6   0.60064 0.77538
#> ... (394 more rows not shown)
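# As a hedged variation (kint is passed through to predictrms): kint=2
# bases the linear predictor on the 2nd intercept, giving limits for Prob(best)
L2 <- predict(f, se.fit=TRUE, kint=2)
plogis(with(L2, linear.predictors + 1.96*cbind(-se.fit, se.fit)))[1:5, ]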
predict(f, type="fitted.ind")[1:10,] # Prob(Y=j) for each level, first 10 obs
#> y=good y=better y=best
#> 1 0.31247 0.36315 0.32438
#> 2 0.36761 0.35947 0.27292
#> 3 0.21983 0.34374 0.43643
#> 4 0.30635 0.36297 0.33069
#> 5 0.51713 0.31361 0.16926
#> 6 0.30501 0.36291 0.33208
#> 7 0.35325 0.36129 0.28546
#> 8 0.29339 0.36212 0.34449
#> 9 0.30686 0.36299 0.33015
#> 10 0.62147 0.26122 0.11731
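# Sanity check: the individual probabilities Prob(Y=j) in each row sum to 1
range(rowSums(predict(f, type="fitted.ind")))  # both ends should be 1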
d <- data.frame(x1=c(.1,.5),x2=c(.5,.15))
predict(f, d, type="fitted") # Prob(Y>=j) for two new observations
#> y>=better y>=best
#> 1 0.66674 0.30389
#> 2 0.68602 0.32284
predict(f, d, type="fitted.ind") # Prob(Y=j)
#> y=good y=better y=best
#> 1 0.33326 0.36285 0.30389
#> 2 0.31398 0.36318 0.32284
predict(f, d, type='mean', codes=TRUE) # predicts mean(y) using codes 1,2,3
#> 1 2
#> 1.9706 2.0089
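# Check by hand: with codes=TRUE the mean is the sum over j of j * Prob(Y=j)
p <- predict(f, d, type="fitted.ind")
drop(p %*% (1:3))   # should reproduce the type='mean' values above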
m <- Mean(f, codes=TRUE)
lp <- predict(f, d)
m(lp)
#> 1 2
#> 1.9706 2.0089
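# Hedged sketch of the delta-method limits described above: the derived
# function accepts conf.int and the design matrix X (here f$x, kept by x=TRUE)
est <- m(predict(f), X=f$x, conf.int=0.95)
attr(est, 'limits')   # list with elements lower and upper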
# Can use function m as an argument to Predict or nomogram to
# get predicted means instead of log odds or probabilities
m   # display the function constructed by Mean (long default arguments abbreviated)
#> function (lp = numeric(0), X = numeric(0), tmax = NULL,
#>     intercepts = c(`y>=better` = 1.00598680784607, `y>=best` = -0.516345358348157),
#>     slopes = c(x1 = -2.72536777894854, ...), info = list(...),
#>     values = 1:3, interceptRef = 1, Ncens = NULL,
#>     famfunctions = expression(...), conf.int = 0)
#> {
#> ns <- length(intercepts)
#> lp <- if (length(lp))
#> lp - intercepts[interceptRef]
#> else matxv(X, slopes)
#> xb <- sapply(intercepts, "+", lp)
#> cumprob <- eval(famfunctions[1])
#> deriv <- eval(famfunctions[5])
#> P <- matrix(cumprob(xb), ncol = ns)
#> if (!length(tmax)) {
#> if (length(Ncens) && sum(Ncens) > 0 && min(1 - P) > 0.001)
#> warning("Computing the mean when the lowest P(Y < y) is ",
#> format(min(1 - P)), "\nand tmax omitted will result in only a lower limit to the mean")
#> }
#> else {
#> if (tmax > max(values))
#> stop("tmax=", tmax, " > maximum observed Y=", format(max(values)))
#> values[values > tmax] <- tmax
#> }
#> P <- cbind(1, P) - cbind(P, 0)
#> m <- drop(P %*% values)
#> names(m) <- names(lp)
#> if (conf.int) {
#> if (!length(X))
#> stop("must specify X if conf.int > 0")
#> lb <- matrix(sapply(intercepts, "+", lp), ncol = ns)
#> dmean.dalpha <- t(apply(deriv(lb), 1, FUN = function(x) x *
#> (values[2:length(values)] - values[1:ns])))
#> dmean.dbeta <- apply(dmean.dalpha, 1, sum) * X
#> dmean.dtheta <- cbind(dmean.dalpha, dmean.dbeta)
#> if (getOption("rmsdebug", FALSE)) {
#> prn(infoMxop(info, np = TRUE))
#> prn(dim(dmean.dtheta))
#> }
#> mean.var <- diag(dmean.dtheta %*% infoMxop(info, B = t(dmean.dtheta)))
#> w <- qnorm((1 + conf.int)/2) * sqrt(mean.var)
#> attr(m, "limits") <- list(lower = m - w, upper = m +
#> w)
#> }
#> m
#> }
#> <environment: 0x62da969f5e90>
plot(Predict(f, x1, fun=m), ylab='Predicted Mean')
# Note: Run f through bootcov with coef.reps=TRUE to get proper confidence
# limits for predicted means from the prop. odds model
options(datadist=NULL)