Function to estimate a vector of parameters based on moment conditions using the GMM method of Hansen (1982).
gmm(g,x,t0=NULL,gradv=NULL, type=c("twoStep","cue","iterative"),
wmatrix = c("optimal","ident"), vcov=c("HAC","MDS","iid","TrueFixed"),
kernel=c("Quadratic Spectral","Truncated", "Bartlett", "Parzen", "Tukey-Hanning"),
crit=10e-7,bw = bwAndrews, prewhite = 1, ar.method = "ols", approx="AR(1)",
tol = 1e-7, itermax=100,optfct=c("optim","optimize","nlminb", "constrOptim"),
model=TRUE, X=FALSE, Y=FALSE, TypeGmm = "baseGmm", centeredVcov = TRUE,
weightsMatrix = NULL, traceIter = FALSE, data, eqConst = NULL,
eqConstFullVcov = FALSE, mustar = NULL, onlyCoefficients=FALSE, ...)
evalGmm(g, x, t0, tetw=NULL, gradv=NULL, wmatrix = c("optimal","ident"),
vcov=c("HAC","iid","TrueFixed"), kernel=c("Quadratic Spectral","Truncated",
"Bartlett", "Parzen", "Tukey-Hanning"),crit=10e-7,bw = bwAndrews,
prewhite = FALSE, ar.method = "ols", approx="AR(1)",tol = 1e-7,
model=TRUE, X=FALSE, Y=FALSE, centeredVcov = TRUE, weightsMatrix = NULL,
data, mustar = NULL)
gmmWithConst(obj, which, value)
g: A function of the form \(g(\theta,x)\) which returns an \(n \times q\) matrix with typical element \(g_i(\theta,x_t)\) for \(i=1,...,q\) and \(t=1,...,n\). This matrix is then used to build the q sample moment conditions. It can also be a formula if the model is linear (see details below).
x: The matrix or vector of data from which the function \(g(\theta,x)\) is computed. If "g" is a formula, it is an \(n \times Nh\) matrix of instruments or a formula (see details below).
t0: A \(k \times 1\) vector of starting values. It is required only when "g" is a function, because only then is a numerical algorithm used to minimize the objective function. If the dimension of \(\theta\) is one, see the argument "optfct".
tetw: A \(k \times 1\) vector to compute the weighting matrix.
gradv: A function of the form \(G(\theta,x)\) which returns a \(q\times k\) matrix of derivatives of \(\bar{g}(\theta)\) with respect to \(\theta\). By default, the numerical algorithm numericDeriv is used. It is strongly suggested to provide this function when possible. This gradient is used to compute the asymptotic covariance matrix of \(\hat{\theta}\) and to obtain the analytical gradient of the objective function if the method is set to "CG" or "BFGS" in optim and if "type" is not set to "cue". If "g" is a formula, the gradient is not required (see the details below).
type: The GMM method: "twoStep" is the two-step GMM proposed by Hansen (1982); "cue" and "iterative" are, respectively, the continuously updated and the iterative GMM proposed by Hansen, Heaton and Yaron (1996).
wmatrix: Which weighting matrix should be used in the objective function. By default, it is the inverse of the covariance matrix of \(g(\theta,x)\). The other choice is the identity matrix, which is usually used to obtain a first-step estimate of \(\theta\).
vcov: Assumption on the properties of the random vector x. By default, x is a weakly dependent process. The "iid" option avoids computing the HAC matrix, which accelerates the estimation if one is willing to make that assumption. The option "TrueFixed" is used only when the weighting matrix is provided and it is the optimal one.
kernel: Type of kernel used to compute the covariance matrix of the vector of sample moment conditions (see kernHAC for more details).
crit: The stopping rule for the iterative GMM. It can be reduced to increase precision.
bw: The method to compute the bandwidth parameter of the HAC weighting matrix. The default is bwAndrews (as proposed in Andrews (1991)), which minimizes the MSE of the weighting matrix. Alternatives are bwWilhelm (as proposed in Wilhelm (2015)), which minimizes the MSE of the resulting GMM estimator, and bwNeweyWest (as proposed in Newey and West (1994)).
prewhite: logical or integer. Should the estimating functions be prewhitened? If TRUE or greater than 0 a VAR model of order as.integer(prewhite) is fitted via ar with method "ols" and demean = FALSE.
ar.method: character. The method argument passed to ar for prewhitening.
approx: A character specifying the approximation method if the bandwidth has to be chosen by bwAndrews.
tol: Weights that exceed tol are used for computing the covariance matrix; all other weights are treated as 0.
itermax: The maximum number of iterations for the iterative GMM. It is unlikely that the algorithm fails to converge, but the limit is kept as a safeguard.
optfct: Only when the dimension of \(\theta\) is 1 can you choose between the algorithms optim and optimize; in that case, the former is unreliable. If optimize is chosen, "t0" must be a \(1\times 2\) vector giving the interval in which the algorithm searches for the solution (see the sketch at the end of this argument list). It is also possible to choose the nlminb algorithm, in which case boundaries for the coefficients can be set with the options upper= and lower=. constrOptim is only available for nonlinear models for now; the standard errors may have to be corrected if the estimates reach the boundary set by ui and ci.
model, X, Y: logical. If TRUE the corresponding components of the fit (the model frame, the model matrix, the response) are returned if g is a formula.
TypeGmm: The name of the class object created by the method getModel. It allows developers to extend the package and create other GMM methods.
centeredVcov: Should the moment function be centered when computing its covariance matrix? Doing so may improve inference.
weightsMatrix: Allows the user to provide gmm with a fixed weighting matrix. This matrix must be \(q \times q\), symmetric and strictly positive definite. When provided, the type option becomes irrelevant.
traceIter: Tracing information for the GMM of type "iterative".
data: A data.frame or a matrix with column names (Optional).
eqConst: Either a named vector (if "g" is a function), a simple vector for the nonlinear case indicating which elements of \(\theta_0\) are restricted, or a \(q \times 2\) matrix defining equality constraints of the form \(\theta_i=c_i\). See below for an example.
which, value: The equality constraint is of the form which = value. "which" can be a vector of type character with the names of the coefficients being constrained, or a vector of type numeric with the positions of the coefficients in the whole vector.
obj: Object of class "gmm".
eqConstFullVcov: If FALSE, the constrained coefficients are assumed to be fixed and only the covariance of the unconstrained coefficients is computed. If TRUE, the covariance matrix of the full set of coefficients is computed.
mustar: If not NULL, it must be a vector with as many elements as there are moment conditions. The vector is subtracted from the sample moment vector before minimizing the objective function, which is useful in bootstrap procedures.
onlyCoefficients: If set to TRUE, the function only returns the coefficient estimates. This may be of interest when the standard errors are not needed.
...: Additional options passed to optim.
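As referenced in the "optfct" entry above, here is a minimal sketch of a one-parameter estimation with optfct = "optimize", in which case "t0" is the search interval rather than a starting value. The moment function g1 and the simulated data x1 are hypothetical illustrations, not part of the package examples.
# Minimal sketch (hypothetical): GMM for a single parameter, the mean of x1,
# with two moment conditions so that the model is overidentified
set.seed(123)
x1 <- rnorm(500, mean = 4, sd = 2)
g1 <- function(tet, x) cbind(tet - x, tet^2 - x^2 + 4)  # assumes sd = 2 is known
# With optfct = "optimize", t0 gives the search interval, not a starting value
res1 <- gmm(g1, x1, t0 = c(0, 10), optfct = "optimize")
coef(res1)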
If we want to estimate a model like \(Y_t = \theta_1 + X_{2t}\theta_2 + \cdots + X_{kt}\theta_k + \epsilon_t\) using the moment conditions \(Cov(\epsilon_t H_t)=0\), where \(H_t\) is a vector of \(Nh\) instruments, then we can define "g" as we do for lm. We would have g = y ~ x2 + x3 + ... + xk and the argument "x" above would become the matrix H of instruments. As for lm, \(Y_t\) can be an \(Ny \times 1\) vector, which would imply that the number of moment conditions is \(q = Nh \times Ny\). The intercept is included by default, so you do not have to add a column of ones to the matrix \(H\), and you do not need to provide the gradient in that case since it is embedded in gmm. The intercept can be removed by adding -1 to the formula; in that case, the column of ones needs to be added manually to \(H\). It is also possible to express "x" as a formula. For example, if the instruments are \(\{1,z_1,z_2,z_3\}\), we can set "x" to ~ z1 + z2 + z3. By default, a column of ones is added; to remove it, set "x" to ~ z1 + z2 + z3 - 1.
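As a rough sketch of this linear/formula interface (with hypothetical simulated data y, x2, x3 and instruments z1, z2, z3, not taken from the package examples), the two calls below are equivalent ways of supplying the instruments:
set.seed(1)
n <- 200
z1 <- rnorm(n); z2 <- rnorm(n); z3 <- rnorm(n)
x2 <- z1 + z2 + rnorm(n)
x3 <- z2 + z3 + rnorm(n)
y <- 1 + 0.5*x2 - 0.5*x3 + rnorm(n)
H <- cbind(z1, z2, z3)                    # intercept is added automatically
fit1 <- gmm(y ~ x2 + x3, x = H)           # instruments given as a matrix
fit2 <- gmm(y ~ x2 + x3, ~ z1 + z2 + z3)  # instruments given as a formula
cbind(coef(fit1), coef(fit2))             # the two specifications should coincide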
The following explains the last example below, provided by Dieter Rozenich, a student from the Vienna University of Economics and Business Administration, who suggested that it would help to understand the implementation of the Jacobian.
For the two parameters of a normal distribution \((\mu,\sigma)\) we have the following three moment conditions: $$ m_{1} = \mu - x_{i} $$ $$ m_{2} = \sigma^2 - (x_{i}-\mu)^2 $$ $$ m_{3} = x_{i}^{3} - \mu (\mu^2+3\sigma^{2}) $$ \(m_{1},m_{2}\) can be directly obtained by the definition of \((\mu,\sigma)\). The third moment condition comes from the third derivative of the moment generating function (MGF)
$$ M_{X}(t) = \exp\Big(\mu t + \frac{\sigma^{2}t^{2}}{2}\Big) $$
evaluated at \(t=0\).
Note that we have more equations (3) than unknown parameters (2).
The Jacobian of these three moment conditions with respect to \((\mu,\sigma)\) is:
$$ \begin{pmatrix} 1 & 0 \\ -2\mu+2x & 2\sigma \\ -3\mu^{2}-3\sigma^{2} & -6\mu\sigma \end{pmatrix} $$
gmmWithConst() re-estimates an unrestricted model by adding an
equality constraint.
evalGmm() creates an object of class '"gmm"' for a given
parameter vector. If no vector "tetw" is provided and the weighting
matrix needs to be computed, "t0" is used.
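A minimal sketch of evalGmm(), assuming the moment function g and the data x from the normal-distribution example in the Examples section below; the model is evaluated, not estimated, at the fixed parameter vector given in "t0":
# Assumes g and x are defined as in the normal-distribution example below
ev <- evalGmm(g, x, t0 = c(4, 2))  # t0 is also used to build the weighting matrix
summary(ev)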
'gmm' returns an object of class '"gmm"'.
The function 'summary' is used to obtain and print a summary of the results. It also computes the J-test of overidentifying restrictions.
The object of class "gmm" is a list containing at least:
\(k\times 1\) vector of coefficients
the residuals, that is, the response minus the fitted values, if "g" is a formula.
the fitted mean values if "g" is a formula.
the covariance matrix of the coefficients
the value of the objective function \(\| var(\bar{g})^{-1/2}\bar{g}\|^2\)
the terms object used when g is a formula.
the matched call.
if requested, the response used (if "g" is a formula).
if requested, the model matrix used if "g" is a formula or the data if "g" is a function.
if requested (the default), the model frame used if "g" is a formula.
Information produced by either optim or nlminb related to the convergence if "g" is a function. It is printed by the summary.gmm method.
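A brief sketch of working with the returned object, assuming res is the CAPM fit res <- gmm(z ~ zm, x = h) from the examples below:
coef(res)     # vector of coefficient estimates
vcov(res)     # covariance matrix of the coefficients
summary(res)  # standard errors, t-tests and the J-test of overidentifying restrictions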
Zeileis, A. (2006), Object-oriented Computation of Sandwich Estimators. Journal of Statistical Software, 16(9), 1-16. doi:10.18637/jss.v016.i09.
Chausse, P. (2010), Computing Generalized Method of Moments and Generalized Empirical Likelihood with R. Journal of Statistical Software, 34(11), 1-35. doi:10.18637/jss.v034.i11.
Andrews, D.W.K. (1991), Heteroskedasticity and Autocorrelation Consistent Covariance Matrix Estimation. Econometrica, 59, 817-858.
Newey, W.K. and West, K.D. (1987), A Simple, Positive Semi-Definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica, 55, 703-708.
Newey, W.K. and West, K.D. (1994), Automatic Lag Selection in Covariance Matrix Estimation. Review of Economic Studies, 61, 631-653.
Hansen, L.P. (1982), Large Sample Properties of Generalized Method of Moments Estimators. Econometrica, 50, 1029-1054.
Hansen, L.P., Heaton, J. and Yaron, A. (1996), Finite-Sample Properties of Some Alternative GMM Estimators. Journal of Business and Economic Statistics, 14, 262-280.
## CAPM test with GMM
data(Finance)
r <- Finance[1:300, 1:10]
rm <- Finance[1:300, "rm"]
rf <- Finance[1:300, "rf"]
z <- as.matrix(r-rf)
t <- nrow(z)
zm <- rm-rf
h <- matrix(zm, t, 1)
res <- gmm(z ~ zm, x = h)
summary(res)
#>
#> Call:
#> gmm(g = z ~ zm, x = h)
#>
#>
#> Method: twoStep
#>
#> Kernel: Quadratic Spectral
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> WMK_(Intercept) -4.6697e-03 5.6646e-02 -8.2437e-02 9.3430e-01
#> UIS_(Intercept) 1.0235e-01 1.2550e-01 8.1550e-01 4.1478e-01
#> ORB_(Intercept) 1.4587e-01 2.0318e-01 7.1794e-01 4.7279e-01
#> MAT_(Intercept) 3.5895e-02 1.1021e-01 3.2571e-01 7.4464e-01
#> ABAX_(Intercept) 9.1742e-02 2.8806e-01 3.1848e-01 7.5012e-01
#> T_(Intercept) 2.3103e-02 7.7412e-02 2.9845e-01 7.6536e-01
#> EMR_(Intercept) 2.9929e-02 5.5176e-02 5.4243e-01 5.8753e-01
#> JCS_(Intercept) 1.1680e-01 1.5454e-01 7.5583e-01 4.4975e-01
#> VOXX_(Intercept) 2.0871e-02 1.8164e-01 1.1490e-01 9.0852e-01
#> ZOOM_(Intercept) -2.1914e-01 2.0235e-01 -1.0829e+00 2.7884e-01
#> WMK_zm 3.1719e-01 1.2569e-01 2.5236e+00 1.1616e-02
#> UIS_zm 1.2627e+00 2.2985e-01 5.4936e+00 3.9374e-08
#> ORB_zm 1.4939e+00 4.2834e-01 3.4877e+00 4.8725e-04
#> MAT_zm 1.0150e+00 2.1760e-01 4.6644e+00 3.0948e-06
#> ABAX_zm 1.0890e+00 5.7863e-01 1.8820e+00 5.9838e-02
#> T_zm 8.4898e-01 1.5383e-01 5.5188e+00 3.4124e-08
#> EMR_zm 7.4079e-01 9.9768e-02 7.4251e+00 1.1266e-13
#> JCS_zm 9.5882e-01 3.4791e-01 2.7559e+00 5.8526e-03
#> VOXX_zm 1.4822e+00 3.6950e-01 4.0113e+00 6.0384e-05
#> ZOOM_zm 2.0777e+00 3.2143e-01 6.4640e+00 1.0198e-10
#>
#> J-Test: degrees of freedom is 0
#> J-test P-value
#> Test E(g)=0: 3.69839012726863e-29 *******
#>
## linear tests can be performed using linearHypothesis from the car package
## The CAPM can be tested as follows:
library(car)
#> Loading required package: carData
linearHypothesis(res,cbind(diag(10),matrix(0,10,10)),rep(0,10))
#>
#> Linear hypothesis test:
#> WMK_((Intercept) = 0
#> UIS_((Intercept) = 0
#> ORB_((Intercept) = 0
#> MAT_((Intercept) = 0
#> ABAX_((Intercept) = 0
#> T_((Intercept) = 0
#> EMR_((Intercept) = 0
#> JCS_((Intercept) = 0
#> VOXX_((Intercept) = 0
#> ZOOM_((Intercept) = 0
#>
#> Model 1: restricted model
#> Model 2: z ~ zm
#>
#> Res.Df Df Chisq Pr(>Chisq)
#> 1 308
#> 2 298 10 4.0194 0.9465
# The CAPM of Black
g <- function(theta, x) {
e <- x[,2:11] - theta[1] - (x[,1] - theta[1]) %*% matrix(theta[2:11], 1, 10)
gmat <- cbind(e, e*c(x[,1]))
return(gmat) }
x <- as.matrix(cbind(rm, r))
res_black <- gmm(g, x = x, t0 = rep(0, 11))
summary(res_black)$coefficients
#> Estimate Std. Error t value Pr(>|t|)
#> Theta[1] 0.51555413 0.17197287 2.99788050 2.718643e-03
#> Theta[2] 1.11554512 0.11562521 9.64794062 5.015275e-22
#> Theta[3] 0.67955765 0.19708738 3.44800183 5.647502e-04
#> Theta[4] -0.03222924 0.42352438 -0.07609772 9.393414e-01
#> Theta[5] 0.84956301 0.15478131 5.48879599 4.046827e-08
#> Theta[6] -0.20526958 0.47872542 -0.42878355 6.680808e-01
#> Theta[7] 0.62526842 0.12162083 5.14112930 2.730920e-07
#> Theta[8] 1.05318099 0.06871289 15.32726862 5.026837e-53
#> Theta[9] 0.64007014 0.23257324 2.75212289 5.921030e-03
#> Theta[10] 0.59576248 0.29522496 2.01799494 4.359179e-02
#> Theta[11] 1.15696525 0.24015734 4.81753016 1.453461e-06
## APT test with Fama-French factors and GMM
f1 <- zm
f2 <- Finance[1:300, "hml"]
f3 <- Finance[1:300, "smb"]
h <- cbind(f1, f2, f3)
res2 <- gmm(z ~ f1 + f2 + f3, x = h)
coef(res2)
#> WMK_(Intercept) UIS_(Intercept) ORB_(Intercept) MAT_(Intercept)
#> -0.03035625 0.06109011 0.10011407 0.07479548
#> ABAX_(Intercept) T_(Intercept) EMR_(Intercept) JCS_(Intercept)
#> 0.04377167 0.02564060 0.02195915 0.07648882
#> VOXX_(Intercept) ZOOM_(Intercept) WMK_f1 UIS_f1
#> -0.02301129 -0.18311147 0.44617715 1.48147102
#> ORB_f1 MAT_f1 ABAX_f1 T_f1
#> 1.74942244 0.87367060 1.46891181 0.76696537
#> EMR_f1 JCS_f1 VOXX_f1 ZOOM_f1
#> 0.75303357 1.21072612 1.76680641 1.93629288
#> WMK_f2 UIS_f2 ORB_f2 MAT_f2
#> 0.32885761 0.46139351 0.43751460 -0.80948660
#> ABAX_f2 T_f2 EMR_f2 JCS_f2
#> -0.18758572 0.36693330 0.26216024 0.23096146
#> VOXX_f2 ZOOM_f2 WMK_f3 UIS_f3
#> 0.19115617 -0.68884920 0.23984281 0.53428197
#> ORB_f3 MAT_f3 ABAX_f3 T_f3
#> 0.75827544 0.33170156 2.23679321 -0.91487982
#> EMR_f3 JCS_f3 VOXX_f3 ZOOM_f3
#> -0.28287307 1.01277192 1.23677324 0.17156298
summary(res2)$coefficients
#> Estimate Std. Error t value Pr(>|t|)
#> WMK_(Intercept) -0.03035625 0.05469016 -0.5550587 5.788545e-01
#> UIS_(Intercept) 0.06109011 0.12818543 0.4765761 6.336640e-01
#> ORB_(Intercept) 0.10011407 0.21483878 0.4659963 6.412182e-01
#> MAT_(Intercept) 0.07479548 0.09574023 0.7812336 4.346651e-01
#> ABAX_(Intercept) 0.04377167 0.27524144 0.1590301 8.736452e-01
#> T_(Intercept) 0.02564060 0.07433412 0.3449372 7.301416e-01
#> EMR_(Intercept) 0.02195915 0.05416867 0.4053846 6.851948e-01
#> JCS_(Intercept) 0.07648882 0.15540843 0.4921794 6.225925e-01
#> VOXX_(Intercept) -0.02301129 0.17904871 -0.1285197 8.977377e-01
#> ZOOM_(Intercept) -0.18311147 0.22061012 -0.8300230 4.065258e-01
#> WMK_f1 0.44617715 0.13672418 3.2633377 1.101082e-03
#> UIS_f1 1.48147102 0.24631519 6.0145337 1.804052e-09
#> ORB_f1 1.74942244 0.49748580 3.5165274 4.372315e-04
#> MAT_f1 0.87367060 0.26882868 3.2499159 1.154392e-03
#> ABAX_f1 1.46891181 0.58983495 2.4903777 1.276074e-02
#> T_f1 0.76696537 0.14929534 5.1372358 2.788091e-07
#> EMR_f1 0.75303357 0.12922680 5.8272243 5.635684e-09
#> JCS_f1 1.21072612 0.37291933 3.2466167 1.167856e-03
#> VOXX_f1 1.76680641 0.39005746 4.5296055 5.909393e-06
#> ZOOM_f1 1.93629288 0.43442012 4.4571897 8.304111e-06
#> WMK_f2 0.32885761 0.21518546 1.5282520 1.264500e-01
#> UIS_f2 0.46139351 0.35449018 1.3015692 1.930637e-01
#> ORB_f2 0.43751460 0.60601643 0.7219517 4.703242e-01
#> MAT_f2 -0.80948660 0.45862594 -1.7650258 7.755943e-02
#> ABAX_f2 -0.18758572 0.90646157 -0.2069428 8.360545e-01
#> T_f2 0.36693330 0.17030186 2.1546054 3.119273e-02
#> EMR_f2 0.26216024 0.15176426 1.7274176 8.409270e-02
#> JCS_f2 0.23096146 0.52766971 0.4377008 6.616032e-01
#> VOXX_f2 0.19115617 0.60230535 0.3173742 7.509597e-01
#> ZOOM_f2 -0.68884920 0.66116285 -1.0418752 2.974695e-01
#> WMK_f3 0.23984281 0.16056017 1.4937877 1.352311e-01
#> UIS_f3 0.53428197 0.30736930 1.7382411 8.216833e-02
#> ORB_f3 0.75827544 0.55430033 1.3679866 1.713163e-01
#> MAT_f3 0.33170156 0.34681915 0.9564107 3.388648e-01
#> ABAX_f3 2.23679321 0.93669929 2.3879523 1.694254e-02
#> T_f3 -0.91487982 0.22271704 -4.1078124 3.994244e-05
#> EMR_f3 -0.28287307 0.14691520 -1.9254174 5.417717e-02
#> JCS_f3 1.01277192 0.53961860 1.8768291 6.054152e-02
#> VOXX_f3 1.23677324 0.52198483 2.3693663 1.781860e-02
#> ZOOM_f3 0.17156298 0.60915050 0.2816430 7.782173e-01
## Same result with x defined as a formula:
res2 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3)
#> Error in eval(predvars, data, env): object 'z' not found
coef(res2)
#> WMK_(Intercept) UIS_(Intercept) ORB_(Intercept) MAT_(Intercept)
#> -0.03035625 0.06109011 0.10011407 0.07479548
#> ABAX_(Intercept) T_(Intercept) EMR_(Intercept) JCS_(Intercept)
#> 0.04377167 0.02564060 0.02195915 0.07648882
#> VOXX_(Intercept) ZOOM_(Intercept) WMK_f1 UIS_f1
#> -0.02301129 -0.18311147 0.44617715 1.48147102
#> ORB_f1 MAT_f1 ABAX_f1 T_f1
#> 1.74942244 0.87367060 1.46891181 0.76696537
#> EMR_f1 JCS_f1 VOXX_f1 ZOOM_f1
#> 0.75303357 1.21072612 1.76680641 1.93629288
#> WMK_f2 UIS_f2 ORB_f2 MAT_f2
#> 0.32885761 0.46139351 0.43751460 -0.80948660
#> ABAX_f2 T_f2 EMR_f2 JCS_f2
#> -0.18758572 0.36693330 0.26216024 0.23096146
#> VOXX_f2 ZOOM_f2 WMK_f3 UIS_f3
#> 0.19115617 -0.68884920 0.23984281 0.53428197
#> ORB_f3 MAT_f3 ABAX_f3 T_f3
#> 0.75827544 0.33170156 2.23679321 -0.91487982
#> EMR_f3 JCS_f3 VOXX_f3 ZOOM_f3
#> -0.28287307 1.01277192 1.23677324 0.17156298
## The following example has been provided by Dieter Rozenich (see details).
# It generates normal random numbers and uses the GMM to estimate
# mean and sd.
#-------------------------------------------------------------------------------
# Random numbers of a normal distribution
# First we generate normally distributed random numbers and compute the two parameters:
n <- 1000
x <- rnorm(n, mean = 4, sd = 2)
# Implementing the 3 moment conditions
g <- function(tet, x)
{
m1 <- (tet[1] - x)
m2 <- (tet[2]^2 - (x - tet[1])^2)
m3 <- x^3 - tet[1]*(tet[1]^2 + 3*tet[2]^2)
f <- cbind(m1, m2, m3)
return(f)
}
# Implementing the jacobian
Dg <- function(tet, x)
{
jacobian <- matrix(c( 1, 2*(-tet[1]+mean(x)), -3*tet[1]^2-3*tet[2]^2,0, 2*tet[2],
-6*tet[1]*tet[2]), nrow=3,ncol=2)
return(jacobian)
}
# Now we want to estimate the two parameters using the GMM.
gmm(g, x, c(0, 0), grad = Dg)
#> Method
#> twoStep
#>
#> Objective function value: 1.113753e-05
#>
#> Theta[1] Theta[2]
#> 3.9648 2.0022
#>
#> Convergence code = 0
# Two-stage-least-squares (2SLS), or IV with iid errors.
# The model is:
# Y(t) = b[0] + b[1]C(t) + b[2]Y(t-1) + e(t)
# e(t) is an MA(1)
# The instruments are Z(t) = {1, C(t), Y(t-2), Y(t-3), Y(t-4)}
getdat <- function(n) {
e <- arima.sim(n,model=list(ma=.9))
C <- runif(n,0,5)
Y <- rep(0,n)
Y[1] = 1 + 2*C[1] + e[1]
for (i in 2:n){
Y[i] = 1 + 2*C[i] + 0.9*Y[i-1] + e[i]
}
Yt <- Y[5:n]
X <- cbind(1,C[5:n],Y[4:(n-1)])
Z <- cbind(1,C[5:n],Y[3:(n-2)],Y[2:(n-3)],Y[1:(n-4)])
return(list(Y=Yt,X=X,Z=Z))
}
d <- getdat(5000)
res4 <- gmm(d$Y~d$X-1,~d$Z-1,vcov="iid")
#> Error in eval(predvars, data, env): object 'd' not found
res4
#> Error: object 'res4' not found
### Examples with equality constraint
######################################
# Random numbers of a normal distribution
## Not run:
# The following works but produces a warning message because the dimension of coef is 1.
# Method "Brent" should be used.
# Without a named vector (the problem is now one-dimensional):
gmm(g, x, c(4, 0), grad = Dg, eqConst=1, method="Brent", lower=-10,upper=10)
#> Method
#> twoStep (with equality constraints)
#>
#> Objective function value: 0.0003125294
#>
#> Theta[2]
#> 2.0003
#>
#> Convergence code = 0
#> #### Equality constraints ####
#> Theta[1] = 4
#> ##############################
#>
# with named vector
gmm(g, x, c(mu=4, sig=2), grad = Dg, eqConst="sig", method="Brent", lower=-10,upper=10)
#> Method
#> twoStep (with equality constraints)
#>
#> Objective function value: 8.919407e-06
#>
#> mu
#> 3.9649
#>
#> Convergence code = 0
#> #### Equality constraints ####
#> sig = 2
#> ##############################
#>
## End(Not run)
gmm(g, x, c(4, 0), grad = Dg, eqConst=1,method="Brent",lower=0,upper=6)
#> Method
#> twoStep (with equality constraints)
#>
#> Objective function value: 0.0003125294
#>
#> Theta[2]
#> 2.0003
#>
#> Convergence code = 0
#> #### Equality constraints ####
#> Theta[1] = 4
#> ##############################
#>
gmm(g, x, c(mu=4, sig=2), grad = Dg, eqConst="sig",method="Brent",lower=0,upper=6)
#> Method
#> twoStep (with equality constraints)
#>
#> Objective function value: 8.919407e-06
#>
#> mu
#> 3.9649
#>
#> Convergence code = 0
#> #### Equality constraints ####
#> sig = 2
#> ##############################
#>
# Example with formula
# first coef = 0 and second coef = 1
# Only available for one dimensional yt
z <- z[,1]
res2 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3, eqConst = matrix(c(1,2,0,1),2,2))
#> Error in eval(predvars, data, env): object 'z' not found
res2
#> Method
#> twoStep
#>
#> Objective function value: 1.225509e-31
#>
#> WMK_(Intercept) UIS_(Intercept) ORB_(Intercept) MAT_(Intercept)
#> -0.030356 0.061090 0.100114 0.074795
#> ABAX_(Intercept) T_(Intercept) EMR_(Intercept) JCS_(Intercept)
#> 0.043772 0.025641 0.021959 0.076489
#> VOXX_(Intercept) ZOOM_(Intercept) WMK_f1 UIS_f1
#> -0.023011 -0.183111 0.446177 1.481471
#> ORB_f1 MAT_f1 ABAX_f1 T_f1
#> 1.749422 0.873671 1.468912 0.766965
#> EMR_f1 JCS_f1 VOXX_f1 ZOOM_f1
#> 0.753034 1.210726 1.766806 1.936293
#> WMK_f2 UIS_f2 ORB_f2 MAT_f2
#> 0.328858 0.461394 0.437515 -0.809487
#> ABAX_f2 T_f2 EMR_f2 JCS_f2
#> -0.187586 0.366933 0.262160 0.230961
#> VOXX_f2 ZOOM_f2 WMK_f3 UIS_f3
#> 0.191156 -0.688849 0.239843 0.534282
#> ORB_f3 MAT_f3 ABAX_f3 T_f3
#> 0.758275 0.331702 2.236793 -0.914880
#> EMR_f3 JCS_f3 VOXX_f3 ZOOM_f3
#> -0.282873 1.012772 1.236773 0.171563
#>
# CUE with starting t0 requires eqConst to be a vector
res3 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3, t0=c(0,1,.5,.5), type="cue", eqConst = c(1,2))
#> Error in eval(predvars, data, env): object 'z' not found
res3
#> Error: object 'res3' not found
### Examples with equality constraints, where the constrained coefficients are used to compute
### the covariance matrix.
### Useful when some coefficients have been estimated before, are just identified in GMM,
### and do not need to be re-estimated.
### Use with caution because the covariance will not be valid if the coefficients do not solve
### the GMM FOC.
######################################
res4 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3, t0=c(0,1,.5,.5), eqConst = c(1,2),
eqConstFullVcov=TRUE)
#> Error in eval(predvars, data, env): object 'z' not found
summary(res4)
#> Error: object 'res4' not found
### Examples with equality constraint using gmmWithConst
###########################################################
res2 <- gmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3)
#> Error in eval(predvars, data, env): object 'z' not found
gmmWithConst(res2,c("f2","f3"),c(.5,.5))
#> Error in gmmWithConst(res2, c("f2", "f3"), c(0.5, 0.5)): Wrong coefficient names in eqConst
gmmWithConst(res2,c(2,3),c(.5,.5))
#> Method
#> twoStep (with equality constraints)
#>
#> Objective function value: 0.002114484
#>
#> (Intercept) f3
#> -0.027722 0.297662
#>
#> #### Equality constraints ####
#> f1 = 0.5
#> f2 = 0.5
#> ##############################
#>
## Creating an object without estimation for a fixed parameter vector
###################################################################
res2_2 <- evalGmm(z ~ f1 + f2 + f3, ~ f1 + f2 + f3,
t0=res2$coefficients, tetw=res2$coefficients)
#> Error in eval(predvars, data, env): object 'z' not found
summary(res2_2)
#> Error: object 'res2_2' not found