Given a set of n items, form n/2 or n/3 mini scales or parcels of the most similar pairs or triplets of items. These may be used as the basis for subsequent scale construction or multivariate (e.g., factor) analysis.

parcels(x, size = 3, max = TRUE, flip = TRUE, congruence = FALSE)
keysort(keys)

Arguments

x

A matrix/dataframe of items or a correlation/covariance matrix of items

size

Form parcels of size 2 or size 3

flip

If flip = TRUE, negative correlations lead to at least one item being negatively scored

max

Should item correlations/covariances be adjusted for their maximum correlation?

congruence

Should the correlations be converted to congruence coefficients?

keys

Sort a matrix of keys to reflect item order as much as possible
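
For reference, a call that makes all of the defaults explicit (an illustrative call, not part of the original examples):

parcels(bfi, size = 3, max = TRUE, flip = TRUE, congruence = FALSE)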

Details

Items used in measuring ability or other aspects of personality are typically not very reliable. One suggestion has been to form items into homogeneous item composites (HICs), Factorially Homogeneous Item Dimensions (FHIDs), or mini scales (parcels). Parcelling may be done rationally, factorially, or empirically based upon the structure of the correlation/covariance matrix. parcels facilitates the finding of parcels by forming a keys matrix suitable for use in scoreItems. These keys represent the n/2 most similar pairs or the n/3 most similar triplets.

The algorithm is straightforward: for size = 2, the correlation matrix is searched for its highest correlation; the two items involved form the first parcel and are dropped from the matrix. The procedure is repeated on the reduced matrix until no more pairs can be formed.

For size = 3, the three items with the greatest sum of variances and covariances with each other are found. This triplet forms the first parcel. All three items are removed and the procedure then identifies the next most similar triplet, repeating until n/3 parcels are identified.
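
A minimal sketch of the size = 2 search, assuming a complete correlation matrix (this illustrates the greedy idea only and is not the psych source; the max, flip, and congruence adjustments are omitted):

# Sketch of the greedy pairing for size = 2 (illustrative, not the psych source;
# the max, flip, and congruence adjustments are omitted).
greedy.pairs <- function(r) {
  n <- ncol(r)
  keys <- matrix(0, n, n %/% 2,
                 dimnames = list(colnames(r), paste0("P", 1:(n %/% 2))))
  diag(r) <- NA                             # ignore self-correlations
  remaining <- 1:n
  for (p in 1:(n %/% 2)) {
    sub <- r[remaining, remaining, drop = FALSE]
    best <- which(sub == max(sub, na.rm = TRUE), arr.ind = TRUE)[1, ]
    pair <- remaining[best]                 # the most similar remaining pair
    keys[pair, p] <- 1
    remaining <- setdiff(remaining, pair)   # drop the pair and repeat
  }
  keys
}
greedy.pairs(cor(bfi, use = "pairwise"))    # compare with parcels(bfi, size = 2)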

Value

keys

A matrix of scoring keys to be used to form mini scales (parcels). These are in order of importance; that is, the first parcel (P1) reflects the most similar pair or triplet. The keys may also be sorted by average item order by using the keysort function.

References

Cattell, R. B. (1956). Validation and intensification of the sixteen personality factor questionnaire. Journal of Clinical Psychology, 12(3), 205-214.

Author

William Revelle

See also

scoreItems to score the parcels or iclust for an alternative way of forming item clusters.

Examples

parcels(Thurstone)
#>                   P1 P2 P3
#> Sentences          1  0  0
#> Vocabulary         1  0  0
#> Sent.Completion    1  0  0
#> First.Letters      0  1  0
#> Four.Letter.Words  0  1  0
#> Suffixes           0  1  0
#> Letter.Series      0  0  1
#> Pedigrees          0  0  1
#> Letter.Group       0  0  1
keys <- parcels(bfi)
keys <- keysort(keys)
scoreItems(keys, bfi)
#> Call: scoreItems(keys = keys, items = bfi)
#> 
#> (Unstandardized) Alpha:
#>         P3   P4  P6   P2   P1  P9   P8   P5  P7
#> alpha 0.72 0.65 0.5 0.72 0.82 0.3 0.12 0.63 0.5
#> 
#> Standard errors of unstandardized Alpha:
#>          P3    P4    P6    P2    P1    P9    P8    P5    P7
#> ASE   0.019 0.021 0.024 0.019 0.016 0.028 0.016 0.021 0.024
#> 
#> Average item correlation:
#>             P3   P4   P6   P2  P1   P9    P8   P5   P7
#> average.r 0.46 0.38 0.25 0.46 0.6 0.12 0.044 0.36 0.25
#> 
#> Median item correlation:
#>    P3    P4    P6    P2    P1    P9    P8    P5    P7 
#> 0.483 0.378 0.245 0.464 0.553 0.038 0.161 0.389 0.208 
#> 
#>  Guttman 6* reliability: 
#>            P3   P4   P6   P2   P1   P9   P8   P5   P7
#> Lambda.6 0.69 0.64 0.53 0.69 0.79 0.39 0.22 0.62 0.55
#> 
#> Signal/Noise based upon av.r : 
#>               P3  P4 P6  P2  P1   P9   P8  P5   P7
#> Signal/Noise 2.5 1.8  1 2.6 4.5 0.43 0.14 1.7 0.98
#> 
#> Scale intercorrelations corrected for attenuation 
#>  raw correlations below the diagonal, alpha on the diagonal 
#>  corrected correlations above the diagonal:
#>        P3     P4     P6     P2     P1      P9     P8     P5     P7
#> P3  0.717 -0.328  0.519  0.630 -0.192  0.4029  0.551  0.589 -0.146
#> P4 -0.223  0.647 -0.971 -0.294  0.319  0.0302 -0.412 -0.315  0.436
#> P6  0.312 -0.554  0.502  0.466 -0.091 -0.0402  0.500  0.620 -0.256
#> P2  0.453 -0.200  0.280  0.720 -0.211  0.1775  0.201  0.598 -0.535
#> P1 -0.147  0.232 -0.059 -0.162  0.818  0.1087 -0.426 -0.090  0.853
#> P9  0.187  0.013 -0.016  0.082  0.054  0.2999 -0.025 -0.418  0.059
#> P8  0.162 -0.115  0.123  0.059 -0.134 -0.0047  0.121  0.152 -0.252
#> P5  0.397 -0.202  0.350  0.404 -0.065 -0.1823  0.042  0.633 -0.083
#> P7 -0.087  0.247 -0.128 -0.320  0.543  0.0226 -0.062 -0.047  0.496
#> 
#>  Average adjusted correlations within and between scales (MIMS)
#>    P3    P4    P6    P2    P1    P9    P8    P5    P7   
#> P3  0.46                                                
#> P4 -0.24  0.38                                          
#> P6  0.28 -0.56  0.25                                    
#> P2  0.56 -0.28  0.32  0.46                              
#> P1 -0.19  0.34 -0.07 -0.27  0.60                        
#> P9  0.17  0.01 -0.01  0.10  0.07  0.12                  
#> P8  0.63 -0.50  0.44  0.29 -0.70 -0.02  0.04            
#> P5  0.37 -0.21  0.30  0.47 -0.08 -0.16  0.15  0.36      
#> P7 -0.09  0.28 -0.12 -0.42  0.75  0.02 -0.25 -0.05  0.25
#> 
#>  Average adjusted item x scale correlations within and between scales (MIMT)
#>    P3    P4    P6    P2    P1    P9    P8    P5    P7   
#> P3  0.80                                                
#> P4 -0.17  0.77                                          
#> P6  0.22 -0.39  0.71                                    
#> P2  0.37 -0.16  0.23  0.80                              
#> P1 -0.13  0.20 -0.05 -0.14  0.86                        
#> P9  0.12  0.01 -0.01  0.05  0.03  0.65                  
#> P8  0.16 -0.06  0.06  0.05 -0.11 -0.03  0.54            
#> P5  0.29 -0.15  0.27  0.30 -0.05 -0.15  0.03  0.76      
#> P7 -0.05  0.17 -0.08 -0.22  0.37  0.00 -0.04 -0.02  0.70
#> 
#>  In order to see the item by scale loadings and frequency counts of the data
#>  print with the short option = FALSE
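
As suggested in the description, the parcels may then serve as the basis for subsequent factor analysis. A possible follow-up, with the choice of three factors purely illustrative:

scores <- scoreItems(keys, bfi)$scores   # parcel scores for each subject
fa(scores, nfactors = 3)                 # factor the parcels rather than the items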