multinomial.Rd

Fits a multinomial logit model (MLM) to a (preferably unordered) factor response.
multinomial(zero = NULL, parallel = FALSE, nointercept = NULL,
refLevel = "(Last)", ynames = FALSE,
imethod = 1, imu = NULL, byrow.arg = FALSE,
Thresh = NULL, Trev = FALSE,
Tref = if (Trev) "M" else 1, whitespace = FALSE)

zero: Can be an integer-valued vector specifying which
linear/additive predictors are modelled as intercept-only terms.
Any values must be from the set {1, 2, ..., \(M\)}.
The default, NULL, means none are modelled as intercept-only terms.
See CommonVGAMffArguments for more information.
parallel: A logical, or a formula specifying which terms have equal/unequal coefficients.
ynames: Logical.
If TRUE then "mu[,1]" is replaced by the probability of the
first named response category, etc. (e.g., "P[normal]"),
so that output such as predict(fit) and
coef(fit, matrix = TRUE) is more readable,
albeit less compact.
Of course, "mu" stands for the fitted probabilities,
and it remains the default for upward compatibility
and predictability.
See CommonVGAMffArguments for details.
nointercept, whitespace: See CommonVGAMffArguments for details.
refLevel: Either (1) a single positive integer, (2) a value of
the factor, or (3) a character string.
If inputted as an integer, it specifies which
column of the response matrix is the reference or baseline level.
The default is the last one (the \((M+1)\)th one);
if used, this argument is usually assigned the value 1.
If inputted as a value of the factor, then beware of
missing values of certain levels of the factor
(drop.unused.levels = TRUE or
drop.unused.levels = FALSE);
see the example below.
If inputted as a character string, it should equal
(A) one of the levels of the factor response, or
(B) one of the column names of the matrix response of counts;
e.g.,
vglm(cbind(normal, mild, severe) ~ let,
     multinomial(refLevel = "severe"), data = pneumo)
if it was (incorrectly, because the response is ordinal)
applied to the pneumo data set.
Another example is
vglm(ethnicity ~ age,
     multinomial(refLevel = "European"), data = xs.nz)
if it was applied to the xs.nz data set.
imethod, imu, byrow.arg: Setting imethod = 2 uses the mean
sample proportions of each column of the response matrix,
which correspond to the MLEs for intercept-only models.
See CommonVGAMffArguments for more details.
Thresh, Trev, Tref: Same as cumulative.
Because these arguments concern the intercepts,
they should not be confused with the stereotype model,
where they would be applied to the A matrix instead.
In this help file the response \(Y\) is assumed to be a factor with unordered values \(1,2,\dots,M+1\), so that \(M\) is the number of linear/additive predictors \(\eta_j\).
The default model can be written
$$\eta_j = \log(P[Y=j]/ P[Y=M+1])$$
where \(\eta_j\) is the \(j\)th linear/additive predictor.
Here, \(j=1,\ldots,M\), and \(\eta_{M+1}\) is 0 by definition.
That is, the last level of the factor, or last column of the
response matrix, is taken as the reference level or baseline;
this is for identifiability of the parameters.
The reference or baseline level can be changed with the
refLevel argument.
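To make the inverse of this link concrete, here is a short base-R sketch (no VGAM needed; the eta values are hypothetical) that recovers the \(M+1\) probabilities from the \(M\) linear predictors and checks the log-ratio identity above:

```r
# M = 3 linear predictors; the reference level has eta_{M+1} = 0.
eta    <- c(0.5, -1.2, 0.3)        # hypothetical eta_1, ..., eta_M
expeta <- exp(c(eta, 0))           # append the baseline's eta = 0
probs  <- expeta / sum(expeta)     # P[Y = j] for j = 1, ..., M+1
sum(probs)                         # probabilities sum to 1
log(probs[1] / probs[4])           # recovers eta_1 = 0.5
```

This is just the softmax transform with the last category pinned at zero, which is why the coefficients are interpreted as log odds relative to the baseline.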
In almost all the literature, the constraint matrices associated with
this family of models are known. For example, setting parallel
= TRUE will make all constraint matrices (including the intercept)
equal to a vector of \(M\) 1's; to suppress the intercepts from
being parallel then set parallel = FALSE ~ 1. If the
constraint matrices are unknown and to be estimated, then this can be
achieved by fitting the model as a reduced-rank vector generalized
linear model (RR-VGLM; see rrvglm). In particular, a
multinomial logit model with unknown constraint matrices is known as a
stereotype model (Anderson, 1984), and can be fitted with
rrvglm.
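As a sketch of what reduced rank means here (hypothetical numbers, not a fitted model): in a rank-1 stereotype model the block of non-intercept coefficients factorizes into an outer product of a loading matrix A and a latent-variable coefficient matrix C, so its rank is 1. Example 4 below verifies this on a real fit via svd().

```r
# Rank-1 stereotype structure: the coefficient block (excluding
# intercepts) factorizes as B = A %*% t(C).
A <- c(1, 0.39, 0.29)         # hypothetical M x 1 matrix A (loadings)
C <- c(-1.66, 0.21, 0.03)     # hypothetical p x 1 matrix C
B <- A %*% t(C)               # outer product, hence rank 1
qr(B)$rank                    # 1
```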
The above details correspond to the ordinary MLM where all the levels are altered (in the terminology of GAITD regression).
An object of class "vglmff"
(see vglmff-class).
The object is used by modelling functions
such as vglm,
rrvglm and vgam.
Agresti, A. (2013). Categorical Data Analysis, 3rd ed. Hoboken, NJ, USA: Wiley.
Anderson, J. A. (1984). Regression and ordered categorical variables. Journal of the Royal Statistical Society, Series B, Methodological, 46, 1–30.
Hastie, T. J., Tibshirani, R. J. and Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference and Prediction, 2nd ed. New York, USA: Springer-Verlag.
McCullagh, P. and Nelder, J. A. (1989). Generalized Linear Models, 2nd ed. London: Chapman & Hall.
Tutz, G. (2012). Regression for Categorical Data, Cambridge: Cambridge University Press.
Yee, T. W. and Hastie, T. J. (2003). Reduced-rank vector generalized linear models. Statistical Modelling, 3, 15–41.
Yee, T. W. (2010). The VGAM package for categorical data analysis. Journal of Statistical Software, 32, 1–34. doi:10.18637/jss.v032.i10 .
Yee, T. W. and Ma, C. (2024). Generally altered, inflated, truncated and deflated regression. Statistical Science, 39, 568–588.
The response should be either a matrix of counts
(with row sums that are all positive), or a
factor. In both cases, the y slot returned by
vglm/vgam/rrvglm
is the matrix of sample proportions.
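A base-R sketch of the two equivalent response forms (made-up data): a factor response can always be re-expressed as a matrix of counts with one column per level and positive row sums:

```r
yfac    <- factor(c("a", "b", "b", "c"))   # factor response
ycounts <- model.matrix(~ yfac - 1)        # 0/1 counts, one row per case
colnames(ycounts) <- levels(yfac)
rowSums(ycounts)                           # all positive (here, all 1)
```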
The multinomial logit model is more appropriate for a nominal
(unordered) factor response than for an
ordinal (ordered) factor
response.
Models more suited for the latter include those based on
cumulative probabilities, e.g., cumulative.
multinomial is prone to numerical difficulties if
the groups are separable and/or the fitted probabilities
are close to 0 or 1. The fitted values returned
are estimates of the probabilities \(P[Y=j]\) for
\(j=1,\ldots,M+1\). See safeBinaryRegression
for the logistic regression case.
Here is an example of the usage of the parallel argument.
If there are covariates x2, x3 and x4, then
parallel = TRUE ~ x2 + x3 - 1 and
parallel = FALSE ~ x4 are equivalent.
This would constrain the regression coefficients for x2 and x3
to be equal; those of the intercepts and x4 would be different.
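Assuming \(M = 3\), the constraint matrices implied by that specification can be sketched directly in base R (these mirror the form of what constraints() returns on a fitted model):

```r
M     <- 3
H.int <- diag(M)          # intercepts: one coefficient per eta_j
H.x2  <- matrix(1, M, 1)  # x2: a single coefficient shared by all eta_j
H.x3  <- matrix(1, M, 1)  # x3: likewise parallel
H.x4  <- diag(M)          # x4: one coefficient per eta_j
```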
In Example 5 below, a conditional logit model is
fitted to an artificial data set that explores how
cost and travel time affect people's decision about
how to travel to work. Walking is the baseline group.
The variable Cost.car is the difference between
the cost of travel to work by car and walking, etc. The
variable Time.car is the difference between
the travel duration/time to work by car and walking,
etc. For other details about the xij argument see
vglm.control and fill1.
The multinom function in the
nnet package uses the first level of the factor as
baseline, whereas the last level of the factor is used
here. Consequently the estimated regression coefficients
differ.
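The two parameterizations carry the same information, however; with made-up probabilities, the first-level-baseline coefficients are linear combinations of the last-level-baseline ones:

```r
p <- c(0.2, 0.3, 0.5)                # hypothetical P[Y = j], K = 3 levels
eta.last  <- log(p[1:2] / p[3])      # last level as baseline (VGAM)
eta.first <- log(p[2:3] / p[1])      # first level as baseline (nnet)
# log(p_j/p_1) = log(p_j/p_K) - log(p_1/p_K):
all.equal(eta.first, c(eta.last[2] - eta.last[1], -eta.last[1]))
```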
No check is made to verify that the response is nominal.
See CommonVGAMffArguments for more warnings.
multilogitlink,
margeff,
cumulative,
acat,
cratio,
sratio,
CM.equid,
CommonVGAMffArguments,
dirichlet,
dirmultinomial,
rrvglm,
fill1,
Multinomial,
gaitdpoisson,
Gaitdpois,
iris.
# Example 1: Regression spline VGAM: marital status versus age
data(marital.nz)
ooo <- with(marital.nz, order(age))
om.nz <- marital.nz[ooo, ]
fit1 <- vglm(mstatus ~ sm.bs(age), multinomial, om.nz)
coef(fit1, matrix = TRUE) # Mostly meaningless
#> log(mu[,1]/mu[,4]) log(mu[,2]/mu[,4]) log(mu[,3]/mu[,4])
#> (Intercept) 5.844418 9.502096 13.10287
#> sm.bs(age)1 -3.988086 -5.655932 -17.19862
#> sm.bs(age)2 -4.085133 -5.650005 -10.69507
#> sm.bs(age)3 -9.277378 -9.292940 -16.56821
if (FALSE) {  # \dontrun{}
with(om.nz,
     matplot(age, fitted(fit1), type = "l", las = 1, lwd = 2))
legend("topright", leg = colnames(fitted(fit1)),
       lty = 1:4, col = 1:4, lwd = 2)
}  # \dontrun{}
# Example 2a: a simple example
ycounts <- t(rmultinom(10, size = 20, prob = c(0.1, 0.2, 0.8)))
fit <- vglm(ycounts ~ 1, multinomial)
head(fitted(fit)) # Proportions
#> [,1] [,2] [,3]
#> 1 0.12 0.18 0.7
#> 2 0.12 0.18 0.7
#> 3 0.12 0.18 0.7
#> 4 0.12 0.18 0.7
#> 5 0.12 0.18 0.7
#> 6 0.12 0.18 0.7
fit@prior.weights # NOT the recommended way to get the prior weights
#> [,1]
#> 1 20
#> 2 20
#> 3 20
#> 4 20
#> 5 20
#> 6 20
#> 7 20
#> 8 20
#> 9 20
#> 10 20
weights(fit, type = "prior", matrix = FALSE) # The better method
#> [1] 20 20 20 20 20 20 20 20 20 20
depvar(fit) # Sample proportions; same as fit@y
#> [,1] [,2] [,3]
#> 1 0.10 0.25 0.65
#> 2 0.15 0.10 0.75
#> 3 0.05 0.15 0.80
#> 4 0.15 0.05 0.80
#> 5 0.00 0.35 0.65
#> 6 0.20 0.25 0.55
#> 7 0.15 0.15 0.70
#> 8 0.10 0.15 0.75
#> 9 0.20 0.20 0.60
#> 10 0.10 0.15 0.75
constraints(fit) # Constraint matrices
#> $`(Intercept)`
#> [,1] [,2]
#> [1,] 1 0
#> [2,] 0 1
#>
# Example 2b: Different reference level used as the baseline
fit2 <- vglm(ycounts ~ 1, multinomial(refLevel = 2))
coef(fit2, matrix = TRUE)
#> log(mu[,1]/mu[,2]) log(mu[,3]/mu[,2])
#> (Intercept) -0.4054651 1.358123
coef(fit , matrix = TRUE) # Easy to reconcile this output with fit2
#> log(mu[,1]/mu[,3]) log(mu[,2]/mu[,3])
#> (Intercept) -1.763589 -1.358123
# Example 3: The response is a factor.
nn <- 10
dframe3 <- data.frame(yfac = gl(3, nn, labels = c("Ctrl",
"Trt1", "Trt2")),
x2 = runif(3 * nn))
myrefLevel <- with(dframe3, yfac[12])
fit3a <- vglm(yfac ~ x2, multinomial(refLevel = myrefLevel), dframe3)
fit3b <- vglm(yfac ~ x2, multinomial(refLevel = 2), dframe3)
coef(fit3a, matrix = TRUE) # "Trt1" is the reference level
#> log(mu[,1]/mu[,2]) log(mu[,3]/mu[,2])
#> (Intercept) -0.07530819 0.2186135
#> x2 0.15235521 -0.4661535
coef(fit3b, matrix = TRUE) # "Trt1" is the reference level
#> log(mu[,1]/mu[,2]) log(mu[,3]/mu[,2])
#> (Intercept) -0.07530819 0.2186135
#> x2 0.15235521 -0.4661535
margeff(fit3b)
#> , , 1
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04098384 -0.01718073 0.05816457
#> x2 0.08559652 0.03750473 -0.12310125
#>
#> , , 2
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04073280 -0.01349198 0.05422478
#> x2 0.08497162 0.02976907 -0.11474069
#>
#> , , 3
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04088822 -0.01866722 0.05955544
#> x2 0.08543669 0.04061876 -0.12605544
#>
#> , , 4
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04040704 -0.01171525 0.05212229
#> x2 0.08424188 0.02603998 -0.11028186
#>
#> , , 5
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04098777 -0.01689819 0.05788596
#> x2 0.08559712 0.03691258 -0.12250971
#>
#> , , 6
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04042557 -0.01180141 0.05222698
#> x2 0.08428299 0.02622085 -0.11050384
#>
#> , , 7
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04098762 -0.01663180 0.05761942
#> x2 0.08558960 0.03635425 -0.12194385
#>
#> , , 8
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04082850 -0.01914287 0.05997137
#> x2 0.08532466 0.04161467 -0.12693932
#>
#> , , 9
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04031203 -0.01129171 0.05160374
#> x2 0.08403159 0.02515079 -0.10918237
#>
#> , , 10
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04079379 -0.01937458 0.06016836
#> x2 0.08525831 0.04209972 -0.12735803
#>
#> , , 11
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04089594 -0.01859532 0.05949126
#> x2 0.08545090 0.04046819 -0.12591908
#>
#> , , 12
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04063730 -0.01289773 0.05353503
#> x2 0.08475571 0.02852203 -0.11327774
#>
#> , , 13
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04080659 -0.01402613 0.05483272
#> x2 0.08514045 0.03088982 -0.11603027
#>
#> , , 14
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04097161 -0.01596891 0.05694052
#> x2 0.08553820 0.03496456 -0.12050275
#>
#> , , 15
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04097169 -0.01597096 0.05694265
#> x2 0.08553843 0.03496886 -0.12050729
#>
#> , , 16
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04070405 -0.01330370 0.05400775
#> x2 0.08490637 0.02937398 -0.11428035
#>
#> , , 17
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04058971 -0.01262964 0.05321935
#> x2 0.08464888 0.02795936 -0.11260824
#>
#> , , 18
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04080067 -0.01933057 0.06013124
#> x2 0.08527153 0.04200760 -0.12727912
#>
#> , , 19
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04077278 -0.01950365 0.06027643
#> x2 0.08521787 0.04236989 -0.12758776
#>
#> , , 20
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04093495 -0.01531420 0.05624915
#> x2 0.08544381 0.03359169 -0.11903550
#>
#> , , 21
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04098448 -0.01714980 0.05813428
#> x2 0.08559703 0.03743991 -0.12303694
#>
#> , , 22
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.0409560 -0.01786581 0.05882181
#> x2 0.0855568 0.03894019 -0.12449698
#>
#> , , 23
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04067076 -0.02004659 0.06071735
#> x2 0.08501921 0.04350605 -0.12852527
#>
#> , , 24
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04094022 -0.01810064 0.05904085
#> x2 0.08553013 0.03943211 -0.12496224
#>
#> , , 25
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04077201 -0.01376528 0.05453729
#> x2 0.08506104 0.03034252 -0.11540357
#>
#> , , 26
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04089223 -0.01479849 0.05569072
#> x2 0.08534051 0.03251006 -0.11785057
#>
#> , , 27
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04085333 -0.01441902 0.05527235
#> x2 0.08524887 0.03171406 -0.11696293
#>
#> , , 28
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04043525 -0.01184690 0.05228215
#> x2 0.08430447 0.02631636 -0.11062083
#>
#> , , 29
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04098702 -0.01656627 0.05755329
#> x2 0.08558658 0.03621687 -0.12180345
#>
#> , , 30
#>
#> Ctrl Trt1 Trt2
#> (Intercept) -0.04098747 -0.01661279 0.05760026
#> x2 0.08558877 0.03631439 -0.12190317
#>
# Example 4: Fit a rank-1 stereotype model
fit4 <- rrvglm(Country ~ Width + Height + HP, multinomial, car.all)
coef(fit4) # Contains the C matrix
#> (Intercept):1 (Intercept):2 (Intercept):3 (Intercept):4 (Intercept):5
#> 93.19726718 35.60500353 26.34616575 35.91664677 45.93666985
#> (Intercept):6 (Intercept):7 (Intercept):8 (Intercept):9 Width
#> 50.05609023 74.63270320 62.87884106 27.22813150 -1.65508604
#> Height HP
#> 0.20684489 0.02872411
constraints(fit4)$HP # The A matrix
#> [,1]
#> [1,] 1.0000000
#> [2,] 0.3941479
#> [3,] 0.2926137
#> [4,] 0.3737671
#> [5,] 0.4650090
#> [6,] 0.5202978
#> [7,] 0.7834294
#> [8,] 0.6651159
#> [9,] 0.2945319
coef(fit4, matrix = TRUE) # The B matrix
#> log(mu[,1]/mu[,10]) log(mu[,2]/mu[,10]) log(mu[,3]/mu[,10])
#> (Intercept) 93.19726718 35.60500353 26.346165749
#> Width -1.65508604 -0.65234874 -0.484300898
#> Height 0.20684489 0.08152748 0.060525654
#> HP 0.02872411 0.01132155 0.008405068
#> log(mu[,4]/mu[,10]) log(mu[,5]/mu[,10]) log(mu[,6]/mu[,10])
#> (Intercept) 35.91664677 45.93666985 50.05609023
#> Width -0.61861663 -0.76962997 -0.86113765
#> Height 0.07731180 0.09618474 0.10762094
#> HP 0.01073612 0.01335697 0.01494509
#> log(mu[,7]/mu[,10]) log(mu[,8]/mu[,10]) log(mu[,9]/mu[,10])
#> (Intercept) 74.63270320 62.87884106 27.228131500
#> Width -1.29664306 -1.10082401 -0.487475603
#> Height 0.16204837 0.13757582 0.060922414
#> HP 0.02250331 0.01910486 0.008460165
Coef(fit4)@C # The C matrix
#> latvar
#> Width -1.65508602
#> Height 0.20684488
#> HP 0.02872411
concoef(fit4) # Better to get the C matrix this way
#> latvar
#> Width -1.65508602
#> Height 0.20684488
#> HP 0.02872411
Coef(fit4)@A # The A matrix
#> latvar
#> log(mu[,1]/mu[,10]) 1.0000000
#> log(mu[,2]/mu[,10]) 0.3941479
#> log(mu[,3]/mu[,10]) 0.2926137
#> log(mu[,4]/mu[,10]) 0.3737671
#> log(mu[,5]/mu[,10]) 0.4650090
#> log(mu[,6]/mu[,10]) 0.5202978
#> log(mu[,7]/mu[,10]) 0.7834294
#> log(mu[,8]/mu[,10]) 0.6651159
#> log(mu[,9]/mu[,10]) 0.2945319
svd(coef(fit4, matrix = TRUE)[-1, ])$d # Has rank 1; = C %*% t(A)
#> [1] 2.894479e+00 1.723514e-17 1.902914e-18
# Classification (but watch out for NAs in some of the variables):
apply(fitted(fit4), 1, which.max) # Classification
#> Acura Integra Acura Legend Audi 100
#> 5 10 10
#> Audi 80 BMW 325i BMW 535i
#> 5 5 5
#> Buick Century Buick Electra Buick Le Sabre
#> 10 10 10
#> Buick Riviera Cadillac Brougham Cadillac De Ville
#> 10 10 10
#> Cadillac Eldorado Chevrolet Astro Chevrolet Beretta
#> 10 10 10
#> Chevrolet Camaro Chevrolet Caprice Chevrolet Cavalier
#> 10 10 5
#> Chevrolet Corvette Chevrolet Lumina Chevrolet Lumina APV
#> 10 10 10
#> Chrysler Imperial Chrysler Le Baron Chrysler Le Baron Coupe
#> 10 5 10
#> Dodge Caravan Dodge Colt Dodge Daytona
#> 10 5 10
#> Dodge Dynasty Dodge Grand Caravan Dodge Omni
#> 10 10 5
#> Dodge Shadow Eagle Premier Eagle Summit
#> 5 10 5
#> Ford Aerostar Ford Escort Ford Festiva
#> 10 5 5
#> Ford LTD Crown Victoria Ford Mustang Ford Probe
#> 10 10 10
#> Ford Taurus Ford Tempo Ford Thunderbird
#> 10 10 10
#> GEO Metro GEO Prizm GEO Storm
#> 5 5 5
#> Honda Accord Honda Civic Honda Civic CRX
#> 5 5 5
#> Honda Prelude Hyundai Excel Hyundai Sonata
#> 5 5 10
#> Infiniti Q45 Lexus LS400 Lincoln Continental
#> 10 10 10
#> Lincoln Mark VII Lincoln Town Car Mazda 626
#> 10 10 5
#> Mazda 929 Mazda MPV Mazda MX-5 Miata
#> 5 10 5
#> Mazda MX-6 Mazda Protege Mazda RX7
#> 5 5 5
#> Mercedes-Benz 190 Mercedes-Benz 300E Mercury Tracer
#> 5 5 5
#> Mitsubishi Galant Mitsubishi Precis Mitsubishi Sigma
#> 5 5 5
#> Mitsubishi Wagon Nissan 240SX Nissan 300ZX
#> 5 5 10
#> Nissan Axxess Nissan Maxima Nissan Pulsar NX
#> 5 10 5
#> Nissan Sentra Nissan Stanza Nissan Van
#> 5 5 5
#> Peugeot 405 Peugeot 505 Plymouth Laser
#> 5 5 5
#> Pontiac Grand Am Pontiac LeMans Porsche 944
#> 5 5 5
#> Saab 900 Saab 9000 Sterling 827
#> 5 10 5
#> Subaru Justy Subaru Legacy Subaru Loyale
#> 7 5 5
#> Subaru XT Toyota Camry Toyota Celica
#> 5 5 10
#> Toyota Corolla Toyota Cressida Toyota Supra
#> 5 5 10
#> Toyota Tercel Volkswagen Corrado Volkswagen Fox
#> 5 5 5
#> Volkswagen GTI Volkswagen Golf Volkswagen Jetta
#> 5 5 5
#> Volkswagen Vanagon Volvo 240 Volvo 740
#> 10 5 10
# Classification:
colnames(fitted(fit4))[apply(fitted(fit4), 1, which.max)]
#> [1] "Japan" "USA" "USA" "Japan" "Japan" "Japan" "USA" "USA" "USA"
#> [10] "USA" "USA" "USA" "USA" "USA" "USA" "USA" "USA" "Japan"
#> [19] "USA" "USA" "USA" "USA" "Japan" "USA" "USA" "Japan" "USA"
#> [28] "USA" "USA" "Japan" "Japan" "USA" "Japan" "USA" "Japan" "Japan"
#> [37] "USA" "USA" "USA" "USA" "USA" "USA" "Japan" "Japan" "Japan"
#> [46] "Japan" "Japan" "Japan" "Japan" "Japan" "USA" "USA" "USA" "USA"
#> [55] "USA" "USA" "Japan" "Japan" "USA" "Japan" "Japan" "Japan" "Japan"
#> [64] "Japan" "Japan" "Japan" "Japan" "Japan" "Japan" "Japan" "Japan" "USA"
#> [73] "Japan" "USA" "Japan" "Japan" "Japan" "Japan" "Japan" "Japan" "Japan"
#> [82] "Japan" "Japan" "Japan" "Japan" "USA" "Japan" "Korea" "Japan" "Japan"
#> [91] "Japan" "Japan" "USA" "Japan" "Japan" "USA" "Japan" "Japan" "Japan"
#> [100] "Japan" "Japan" "Japan" "USA" "Japan" "USA"
apply(predict(fit4, car.all, type = "response"),
1, which.max) # Ditto
#> Acura Integra Acura Legend Audi 100
#> 5 10 10
#> Audi 80 BMW 325i BMW 535i
#> 5 5 5
#> Buick Century Buick Electra Buick Le Sabre
#> 10 10 10
#> Buick Regal Buick Riviera Cadillac Brougham
#> 10 10 10
#> Cadillac De Ville Cadillac Eldorado Chevrolet Astro
#> 10 10 10
#> Chevrolet Beretta Chevrolet Camaro Chevrolet Caprice
#> 10 10 10
#> Chevrolet Cavalier Chevrolet Corvette Chevrolet Lumina
#> 5 10 10
#> Chevrolet Lumina APV Chrysler Imperial Chrysler Le Baron
#> 10 10 5
#> Chrysler Le Baron Coupe Dodge Caravan Dodge Colt
#> 10 10 5
#> Dodge Daytona Dodge Dynasty Dodge Grand Caravan
#> 10 10 10
#> Dodge Omni Dodge Shadow Dodge Spirit
#> 5 5 5
#> Eagle Premier Eagle Summit Ford Aerostar
#> 10 5 10
#> Ford Escort Ford Festiva Ford LTD Crown Victoria
#> 5 5 10
#> Ford Mustang Ford Probe Ford Taurus
#> 10 10 10
#> Ford Tempo Ford Thunderbird GEO Metro
#> 10 10 5
#> GEO Prizm GEO Storm Honda Accord
#> 5 5 5
#> Honda Civic Honda Civic CRX Honda Prelude
#> 5 5 5
#> Hyundai Excel Hyundai Sonata Infiniti Q45
#> 5 10 10
#> Lexus LS400 Lincoln Continental Lincoln Mark VII
#> 10 10 10
#> Lincoln Town Car Mazda 626 Mazda 929
#> 10 5 5
#> Mazda MPV Mazda MX-5 Miata Mazda MX-6
#> 10 5 5
#> Mazda Protege Mazda RX7 Mercedes-Benz 190
#> 5 5 5
#> Mercedes-Benz 300E Mercury Tracer Mitsubishi Galant
#> 5 5 5
#> Mitsubishi Precis Mitsubishi Sigma Mitsubishi Wagon
#> 5 5 5
#> Nissan 240SX Nissan 300ZX Nissan Axxess
#> 5 10 5
#> Nissan Maxima Nissan Pulsar NX Nissan Sentra
#> 10 5 5
#> Nissan Stanza Nissan Van Peugeot 405
#> 5 5 5
#> Peugeot 505 Plymouth Laser Pontiac Bonneville
#> 5 5 10
#> Pontiac Grand Am Pontiac LeMans Porsche 944
#> 5 5 5
#> Saab 900 Saab 9000 Sterling 827
#> 5 10 5
#> Subaru Justy Subaru Legacy Subaru Loyale
#> 7 5 5
#> Subaru XT Toyota Camry Toyota Celica
#> 5 5 10
#> Toyota Corolla Toyota Cressida Toyota Supra
#> 5 5 10
#> Toyota Tercel Volkswagen Corrado Volkswagen Fox
#> 5 5 5
#> Volkswagen GTI Volkswagen Golf Volkswagen Jetta
#> 5 5 5
#> Volkswagen Vanagon Volvo 240 Volvo 740
#> 10 5 10
# Example 5: Using the xij argument (aka conditional logit model)
set.seed(111)
nn <- 100 # Number of people who travel to work
M <- 3 # There are M+1 models of transport to go to work
ycounts <- matrix(0, nn, M+1)
ycounts[cbind(1:nn, sample(x = M+1, size = nn, replace = TRUE))] <- 1
dimnames(ycounts) <- list(NULL, c("bus", "train", "car", "walk"))
gotowork <- data.frame(cost.bus = runif(nn), time.bus = runif(nn),
cost.train= runif(nn), time.train= runif(nn),
cost.car = runif(nn), time.car = runif(nn),
cost.walk = runif(nn), time.walk = runif(nn))
gotowork <- round(gotowork, digits = 2) # For convenience
gotowork <- transform(gotowork,
Cost.bus = cost.bus - cost.walk,
Cost.car = cost.car - cost.walk,
Cost.train = cost.train - cost.walk,
Cost = cost.train - cost.walk, # for labelling
Time.bus = time.bus - time.walk,
Time.car = time.car - time.walk,
Time.train = time.train - time.walk,
Time = time.train - time.walk) # for labelling
fit <- vglm(ycounts ~ Cost + Time,
multinomial(parallel = TRUE ~ Cost + Time - 1),
xij = list(Cost ~ Cost.bus + Cost.train + Cost.car,
Time ~ Time.bus + Time.train + Time.car),
form2 = ~ Cost + Cost.bus + Cost.train + Cost.car +
Time + Time.bus + Time.train + Time.car,
data = gotowork, trace = TRUE)
#> Iteration 1: deviance = 270.3515
#> Iteration 2: deviance = 270.30513
#> Iteration 3: deviance = 270.30512
head(model.matrix(fit, type = "lm")) # LM model matrix
#> (Intercept) Cost Time
#> 1 1 0.33 -0.17
#> 2 1 0.13 -0.01
#> 3 1 0.34 -0.39
#> 4 1 0.11 0.57
#> 5 1 0.10 -0.05
#> 6 1 0.23 -0.37
head(model.matrix(fit, type = "vlm")) # Big VLM model matrix
#> (Intercept):1 (Intercept):2 (Intercept):3 Cost Time
#> 1:1 1 0 0 0.06 -0.83
#> 1:2 0 1 0 0.33 -0.17
#> 1:3 0 0 1 0.09 -0.90
#> 2:1 1 0 0 -0.56 -0.05
#> 2:2 0 1 0 0.13 -0.01
#> 2:3 0 0 1 -0.41 0.28
coef(fit)
#> (Intercept):1 (Intercept):2 (Intercept):3 Cost Time
#> 0.2794973 0.1554697 0.5596305 -0.1951613 -0.5135673
coef(fit, matrix = TRUE)
#> log(mu[,1]/mu[,4]) log(mu[,2]/mu[,4]) log(mu[,3]/mu[,4])
#> (Intercept) 0.2794973 0.1554697 0.5596305
#> Cost -0.1951613 -0.1951613 -0.1951613
#> Time -0.5135673 -0.5135673 -0.5135673
constraints(fit)
#> $`(Intercept)`
#> [,1] [,2] [,3]
#> [1,] 1 0 0
#> [2,] 0 1 0
#> [3,] 0 0 1
#>
#> $Cost
#> [,1]
#> [1,] 1
#> [2,] 1
#> [3,] 1
#>
#> $Time
#> [,1]
#> [1,] 1
#> [2,] 1
#> [3,] 1
#>
summary(fit)
#>
#> Call:
#> vglm(formula = ycounts ~ Cost + Time, family = multinomial(parallel = TRUE ~
#> Cost + Time - 1), data = gotowork, form2 = ~Cost + Cost.bus +
#> Cost.train + Cost.car + Time + Time.bus + Time.train + Time.car,
#> xij = list(Cost ~ Cost.bus + Cost.train + Cost.car, Time ~
#> Time.bus + Time.train + Time.car), trace = TRUE)
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept):1 0.2795 0.3053 0.915 0.3600
#> (Intercept):2 0.1555 0.3141 0.495 0.6207
#> (Intercept):3 0.5596 0.2878 1.944 0.0518 .
#> Cost -0.1952 0.4250 -0.459 0.6461
#> Time -0.5136 0.3938 -1.304 0.1922
#> ---
#> Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#>
#> Names of linear predictors: log(mu[,1]/mu[,4]), log(mu[,2]/mu[,4]),
#> log(mu[,3]/mu[,4])
#>
#> Residual deviance: 270.3051 on 295 degrees of freedom
#>
#> Log-likelihood: -135.1526 on 295 degrees of freedom
#>
#> Number of Fisher scoring iterations: 3
#>
#> No Hauck-Donner effect found in any of the estimates
#>
#>
#> Reference group is level 4 of the response
max(abs(predict(fit) - predict(fit, new = gotowork))) # Should be 0
#> [1] 2.137179e-15