Russell A. Matthews, Ph.D. – University of Alabama Laura Pineault, M.A. – Wayne State University Yeong-Hyun Hong, M.A. – University of Alabama
While a stubborn resistance to their use continues, the application of single-item measures has the potential to help applied researchers address conceptual, methodological, and empirical challenges. As has been discussed in the larger literature (e.g., Cheah et al., 2018; Fuchs & Diamantopoulos, 2009), in the context of multi-item measurement, additional items average out random error, which improves (certain types of) reliability (Sarstedt & Wilczynski, 2009), allows for increased measurement accuracy (Peter, 1979), and offers the potential for greater construct validity (Wanous et al., 1997). More practically, using multi-item measures allows for different options when dealing with issues like missing data (Cheah et al., 2018; Sloan et al., 2002). The straw man argument, then, is that because multi-item measures have these potential advantages, single-item measures are somehow inherently deficient.
Admittedly, a major stumbling block related to substantiating the use of single-item measures is that many were developed with limited evidence to support their validity (Fuchs & Diamantopoulos, 2009). And while research has focused on validating measures for given constructs (e.g., Aronsson et al., 2000; Blake & McKay, 1986; Kim & Abraham, 2017), the development of single-item measures in the organizational sciences has generally been limited to the nomological network of several inter-related constructs (e.g., Fisher et al., 2016; Gilbert & Kelloway, 2014). Based on a large-scale, evidence-based approach, we sought to empirically demonstrate that many constructs in the organizational sciences can be reliably and validly assessed with a single item.
Demonstrating content validity is “an initial step toward construct validation by all studies which use new, modified, or previously unexamined measures” (Schriesheim et al., 1993, p. 385) and was the focus of Study 1. Across 91 selected constructs, we demonstrate that definitional correspondence (as a measure of content validity) for 96.7% of the single-item measures was the same as or higher than that of multi-item measures of the same construct, based on a sample of naïve raters (working adults, N = 561). A potential concern with single-item measures is the issue of content adequacy (Hinkin & Tracey, 1999). To overcome this issue, single-item measures tend to be longer, present more content-relevant examples with the item, and/or present a revised version of the construct definition. However, doing so runs the inherent risk that the resulting measures are overly complex or difficult for respondents to understand, process, and respond to in a thoughtful way (Peter, 1979; Tourangeau, 2018). More concretely, then, the trade-off is that in addressing issues of construct coverage, single-item measures may engender usability concerns on the part of respondents. In Study 2, based on a heterogeneous sample (N = 392) of working adults, we demonstrate that the majority of single-item measures present little to no comprehension or usability concerns. Results from Study 3 provide strong evidence for the reliability of the proposed single-item measures based on test-retest reliabilities across the three temporal conditions (i.e., one day, two weeks, one month).
Finally, in Study 4 we examine issues of construct and criterion validity using a multi-trait, multi-method approach. Leveraging data from 1,321 working adults, findings suggest that intentional and rigorous development, refinement, and psychometric testing of single-item measures in ways that maximize reliability and content adequacy can yield criterion validity estimates for single-item measures that are comparable to, or exceed, those derived from multi-item measures.
Collectively, 75 of the 91 measures (82%) demonstrated very good or extensive validity, evidencing moderate to high content validity, no usability concerns, moderate to high test-retest reliability, and extensive criterion validity. The knee-jerk reaction that the use of single-item measures in some respect implies a weak research design is counterproductive and serves to limit advancements in the organizational sciences.
While there are constructs for which single-item measures are not appropriate (e.g., those with abstract conceptual definitions), our research makes clear that it is possible to develop single-item measures that accurately and reliably represent a given construct. In light of the practical advantages afforded by their use, we strongly encourage researchers to proactively consider how leveraging single-item measures may help address existing and emerging conceptual, methodological, and empirical challenges within their given research domain.