Table 6 Psychometric Properties

From: Assessing safety climate in acute hospital settings: a systematic review of the adequacy of the psychometric properties of survey measurement tools

Content Validity

Haynes et al. (1995, [77] p.238) defined Content Validity as “the degree to which elements of an assessment instrument are relevant to and representative of the targeted construct for a particular assessment purpose”. It is used to ascertain whether the content of a measure is appropriate and pertinent to the study purpose, and is usually undertaken by seven or more experts in addition to other sources, including a review of the empirical literature and relevant theory [78].
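Expert judgements of this kind are often summarised as an item-level content validity index (I-CVI): the proportion of experts who rate an item as relevant. The minimal Python sketch below illustrates the computation; the panel of seven experts, the 4-point relevance scale, and all ratings are illustrative assumptions, not data from the review.

```python
import numpy as np

# Hypothetical ratings from 7 experts for 3 items on a 4-point relevance
# scale (1 = not relevant ... 4 = highly relevant). Values are illustrative.
ratings = np.array([
    [4, 3, 4, 4, 3, 4, 4],   # item 1
    [2, 3, 2, 1, 2, 3, 2],   # item 2
    [4, 4, 3, 4, 4, 4, 3],   # item 3
])

# I-CVI: proportion of experts rating the item 3 or 4 (i.e., relevant).
i_cvi = (ratings >= 3).mean(axis=1)
for i, cvi in enumerate(i_cvi, start=1):
    print(f"item {i}: I-CVI = {cvi:.2f}")  # values near 1.00 indicate strong agreement
```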

Criterion Validity

Criterion validity provides evidence about how well scores on a measure correlate with other measures of the same construct, or with closely related constructs that theoretically should be associated [79]. As Flin et al. (2006) [20] indicated, Criterion Validity can be established by correlating safety climate scores with outcome measures. Outcome measures of safety in health care could include patient injuries, worker injuries, or other organizational outcomes [20].
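As an illustration of this approach, the sketch below correlates unit-level safety climate scores with a patient-injury rate using a Pearson correlation; both arrays are hypothetical and the choice of correlation coefficient is an assumption for demonstration only.

```python
import numpy as np
from scipy import stats

# Hypothetical data: mean safety climate score and reported patient-injury
# rate (per 1,000 bed-days) for eight hospital units. Values are illustrative.
climate_score = np.array([3.1, 3.5, 3.8, 4.0, 4.2, 4.4, 4.6, 4.8])
injury_rate = np.array([6.2, 5.8, 5.1, 4.9, 4.0, 3.7, 3.1, 2.6])

# Criterion validity evidence: climate scores should correlate (here,
# negatively) with the safety outcome they are expected to predict.
r, p = stats.pearsonr(climate_score, injury_rate)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```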

Construct Validity

Construct validity can be defined as the degree to which items on an instrument relate to the relevant theoretical construct [80]. A variety of methods exists to assess the construct validity of an instrument, including factor analysis. Factor analysis is a statistical method that “explores the extent to which individual items in a questionnaire can be grouped together according to the correlations between the responses to them”, thus reducing the dimensionality of the data (Hutchinson et al., 2006, [81] p.348). Convergent Validity represents the degree to which different measures of the same construct correlate with each other and can be tested using confirmatory factor analysis (CFA). Conversely, Discriminant Validity represents the extent to which measures of different constructs are distinct, that is, show low correlation with one another [78]. The two main techniques of factor analysis are Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA). EFA is used to uncover the underlying factor structure of a questionnaire, while CFA is used to test a proposed factor structure [81]. A CFA measurement model shows convergent validity if items load significantly (.40 or greater) onto their assigned factor and the model fit indices suggest adequate fit [25]. CFI values close to .90, SRMR values close to .08, and RMSEA values close to .06 are indicative of good model fit [38].
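The sketch below illustrates EFA on simulated survey responses, applying the .40 threshold for salient loadings mentioned above. The factor_analyzer package is an assumed choice of library (the review does not name one), and the two-factor data-generating structure is invented for demonstration.

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Simulate responses (200 respondents x 6 items) driven by two latent
# safety climate dimensions, plus noise. Entirely illustrative data.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
loadings_true = np.array([[.8, .0], [.7, .1], [.75, .0],
                          [.0, .8], [.1, .7], [.0, .75]])
items = latent @ loadings_true.T + rng.normal(scale=.5, size=(200, 6))
df = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(6)])

# EFA: extract two factors with an orthogonal (varimax) rotation and
# inspect which items load at .40 or greater on each factor.
fa = FactorAnalyzer(n_factors=2, rotation="varimax")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns,
                        columns=["factor1", "factor2"])
print(loadings.round(2))
print(loadings.abs() >= .40)   # .40 threshold for a salient loading
```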

Reliability

Reliability reflects the degree to which test scores are replicable [76, 82]. It indicates that respondents are responding consistently to the items within each composite; reliability is also referred to as consistency. It can be assessed using Cronbach’s alpha, the most commonly used internal consistency reliability coefficient. Cronbach’s alpha ranges from 0 to 1.00, with the minimum criterion for acceptable reliability being an alpha of at least .70 [83, 84].
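Cronbach’s alpha can be computed directly from its standard formula, alpha = k/(k − 1) × (1 − sum of item variances / variance of the total score), where k is the number of items. A minimal sketch, assuming scores are arranged as a respondents-by-items matrix with illustrative values:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the composite
    item_vars = items.var(axis=0, ddof=1)       # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Example: 5 respondents answering a 4-item composite on a 5-point scale.
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # .70 or above suggests acceptable reliability
```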