Given a raw correlation matrix and a vector of reliabilities, report the disattenuated correlations above the diagonal.

correct.cor(x, y)

Arguments

x

A raw correlation matrix

y

Vector of reliabilities

Details

Disattenuated correlations may be thought of as correlations between the latent variables measured by a set of observed variables. That is, what would the correlation between two (unreliably measured) variables be if both variables were measured with perfect reliability?
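The correction applied is the classical Spearman formula: divide the observed correlation by the square root of the product of the two reliabilities. A minimal sketch with hypothetical values (r.xy, rel.x, and rel.y are made up for illustration, chosen to match the worked example below):

```r
# Spearman's correction for attenuation (sketch with hypothetical values)
r.xy  <- 0.61   # observed correlation between x and y (hypothetical)
rel.x <- 0.82   # reliability of x (hypothetical)
rel.y <- 0.75   # reliability of y (hypothetical)

r.corrected <- r.xy / sqrt(rel.x * rel.y)
round(r.corrected, 2)   # roughly 0.78
```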

This function is mainly useful when importing correlations and reliabilities from another source. If the raw data are available, use scoreItems, cluster.loadings, or cluster.cor instead.

Examples of the output of this function can be seen in cluster.loadings and cluster.cor.

Value

A matrix with the raw correlations below the diagonal, the reliabilities on the diagonal, and the disattenuated correlations above the diagonal.
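The three pieces can be pulled apart from the returned matrix with standard indexing. A sketch, using a hand-built stand-in matrix (the values are hypothetical, chosen to match the worked example below) rather than an actual correct.cor call:

```r
# Stand-in for the matrix returned by correct.cor (hypothetical values)
out <- matrix(c(0.82, 0.61, 0.78, 0.75), nrow = 2, ncol = 2)

out[lower.tri(out)]   # raw correlations
diag(out)             # reliabilities
out[upper.tri(out)]   # disattenuated correlations
```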

References

Revelle, W. (in preparation) An Introduction to Psychometric Theory with applications in R. Springer. at https://personality-project.org/r/book/

Author

Maintainer: William Revelle revelle@northwestern.edu

Examples


library(psych)   # for alpha, correct.cor, cluster.loadings, cluster.cor, scoreItems

# attitude from the datasets package
# Example 1 is a rather clunky way of doing things

a1 <- attitude[,c(1:3)]
a2 <- attitude[,c(4:7)]
x1 <- rowSums(a1)  #find the sum of the first 3 attitudes
x2 <- rowSums(a2)   #find the sum of the last 4 attitudes
alpha1 <- alpha(a1)
#> Number of categories should be increased  in order to count frequencies. 
alpha2 <- alpha(a2)
#> Number of categories should be increased  in order to count frequencies. 
x <- matrix(c(x1,x2),ncol=2)
x.cor <- cor(x)
alpha <- c(alpha1$total$raw_alpha,alpha2$total$raw_alpha)
round(correct.cor(x.cor,alpha),2)
#>      [,1] [,2]
#> [1,] 0.82 0.78
#> [2,] 0.61 0.75
# Much better, although this uses standardized alpha:
clusters <- matrix(c(rep(1,3),rep(0,7),rep(1,4)),ncol=2)
cluster.loadings(clusters,cor(attitude))
#> Call: cluster.loadings(keys = clusters, r.mat = cor(attitude))
#> 
#> (Standardized) Alpha:
#> [1] 0.82 0.74
#> 
#> (Standardized) G6*:
#> [1] 0.83 0.78
#> 
#> Average item correlation:
#> [1] 0.60 0.42
#> 
#> Number of items:
#> [1] 3 4
#> 
#> Scale intercorrelations corrected for attenuation 
#>  raw correlations below the diagonal, alpha on the diagonal 
#>  corrected correlations above the diagonal:
#>      [,1] [,2]
#> [1,] 0.82 0.77
#> [2,] 0.60 0.74
#> 
#> Item by scale intercorrelations
#>  corrected for item overlap and scale reliability
#>            [,1] [,2]
#> rating     0.85 0.57
#> complaints 0.92 0.63
#> privileges 0.58 0.54
#> learning   0.73 0.72
#> raises     0.73 0.85
#> critical   0.21 0.36
#> advance    0.31 0.72
# or 
clusters <- matrix(c(rep(1,3),rep(0,7),rep(1,4)),ncol=2)
cluster.cor(clusters,cor(attitude))
#> Call: cluster.cor(keys = clusters, r.mat = cor(attitude))
#> 
#> (Standardized) Alpha:
#> [1] 0.82 0.74
#> 
#> (Standardized) G6*:
#> [1] 0.83 0.78
#> 
#> Average item correlation:
#> [1] 0.60 0.42
#> 
#> Number of items:
#> [1] 3 4
#> 
#> Signal to Noise ratio based upon average r and n 
#> [1] 4.6 2.9
#> 
#> Scale intercorrelations corrected for attenuation 
#>  raw correlations below the diagonal, alpha on the diagonal 
#>  corrected correlations above the diagonal:
#>      [,1] [,2]
#> [1,] 0.82 0.77
#> [2,] 0.60 0.74
# Best: score the items directly and get the corrected correlations in one step
keys <- make.keys(attitude,list(first=1:3,second=4:7))
scores <- scoreItems(keys,attitude)
#> Number of categories should be increased  in order to count frequencies. 
scores$corrected
#>            first    second
#> first  0.8222013 0.7798835
#> second 0.6104550 0.7451947

# However, for the more general case of correcting correlations for reliability:
# corrected <- cor2cov(x.cor, 1/sqrt(alpha))  # divide by the square roots of the reliabilities
# diag(corrected) <- 1
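The same correction can also be written without cor2cov, dividing each off-diagonal element by the square root of the product of its two reliabilities. A sketch with hypothetical values:

```r
# Hand-rolled disattenuation of a full correlation matrix (hypothetical values)
rel <- c(0.82, 0.75)                               # reliabilities of the two scales
R   <- matrix(c(1, 0.61, 0.61, 1), nrow = 2)       # observed correlation matrix

R.corrected <- R / sqrt(outer(rel, rel))           # r_ij / sqrt(rel_i * rel_j)
diag(R.corrected) <- 1                             # a latent variable correlates 1 with itself
round(R.corrected, 2)
```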