Table 3 Inter-rater agreement measured by kappa using various weighting schemes (combined over 18 observation periods)

From: Reliability, feasibility, and validity of the quality of interactions schedule (QuIS) in acute hospital care: an observational study

Each weighting scheme is shown as a lower-triangular matrix of agreement weights (rows and columns: + s = + social, + c = + care, N = Neutral, - p = - protective, - r = - restrictive), with the resulting kappa and 95% CI.

Method: Unweighted — kappa 0.53, 95% CI (0.45, 0.60)

| Weights       | + s | + c | N   | - p | - r |
|---------------|-----|-----|-----|-----|-----|
| + social      | 1   |     |     |     |     |
| + care        | 0   | 1   |     |     |     |
| Neutral       | 0   | 0   | 1   |     |     |
| - protective  | 0   | 0   | 0   | 1   |     |
| - restrictive | 0   | 0   | 0   | 0   | 1   |

Method: Equal weighting ignoring differences within +ve categories and within -ve categories (equivalent to testing agreement on a 3-point scale) — kappa 0.62, 95% CI (0.48, 0.77)

| Weights       | + s | + c | N   | - p | - r |
|---------------|-----|-----|-----|-----|-----|
| + social      | 1   |     |     |     |     |
| + care        | 1   | 1   |     |     |     |
| Neutral       | 0.5 | 0.5 | 1   |     |     |
| - protective  | 0   | 0   | 0.5 | 1   |     |
| - restrictive | 0   | 0   | 0.5 | 1   | 1   |

Method: Weighted (linear weights reflecting ordinality with equal spacing) — kappa 0.56, 95% CI (0.46, 0.66)

| Weights       | + s  | + c  | N    | - p  | - r |
|---------------|------|------|------|------|-----|
| + social      | 1    |      |      |      |     |
| + care        | 0.75 | 1    |      |      |     |
| Neutral       | 0.5  | 0.75 | 1    |      |     |
| - protective  | 0.25 | 0.5  | 0.75 | 1    |     |
| - restrictive | 0    | 0.25 | 0.5  | 0.75 | 1   |

For the following schemes, the weighting given to neutral compared to a positive or negative = 0.5, assuming that disagreement between the positives is equal to disagreement between the negatives.

Method: Weighted 1 — kappa 0.60, 95% CI (0.47, 0.73)

| Weights       | + s | + c | N   | - p | - r |
|---------------|-----|-----|-----|-----|-----|
| + social      | 1   |     |     |     |     |
| + care        | 0.9 | 1   |     |     |     |
| Neutral       | 0.5 | 0.5 | 1   |     |     |
| - protective  | 0   | 0   | 0.5 | 1   |     |
| - restrictive | 0   | 0   | 0.5 | 0.9 | 1   |

Method: Weighted 2 — kappa 0.57, 95% CI (0.47, 0.68)

| Weights       | + s  | + c | N   | - p  | - r |
|---------------|------|-----|-----|------|-----|
| + social      | 1    |     |     |      |     |
| + care        | 0.75 | 1   |     |      |     |
| Neutral       | 0.5  | 0.5 | 1   |      |     |
| - protective  | 0    | 0   | 0.5 | 1    |     |
| - restrictive | 0    | 0   | 0.5 | 0.75 | 1   |

Method: Weighted 3 — kappa 0.55, 95% CI (0.46, 0.64)

| Weights       | + s | + c | N   | - p | - r |
|---------------|-----|-----|-----|-----|-----|
| + social      | 1   |     |     |     |     |
| + care        | 0.6 | 1   |     |     |     |
| Neutral       | 0.5 | 0.5 | 1   |     |     |
| - protective  | 0   | 0   | 0.5 | 1   |     |
| - restrictive | 0   | 0   | 0.5 | 0.6 | 1   |
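To illustrate how such weight matrices enter the kappa calculation, here is a minimal sketch (pure Python, not the study authors' code) of weighted kappa using agreement weights, where 1 means full credit for a rater pair's classification and 0 means full disagreement; the `LINEAR` matrix reproduces the linear-weights scheme above for five ordered QuIS categories:

```python
def weighted_kappa(confusion, weights):
    """Weighted Cohen's kappa from a k x k confusion matrix (rater 1 rows,
    rater 2 columns) and a k x k matrix of agreement weights (1 = full
    agreement, 0 = full disagreement)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    row_tot = [sum(confusion[i]) for i in range(k)]
    col_tot = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    # Observed weighted agreement.
    po = sum(weights[i][j] * confusion[i][j]
             for i in range(k) for j in range(k)) / n
    # Chance-expected weighted agreement from the marginal totals.
    pe = sum(weights[i][j] * row_tot[i] * col_tot[j]
             for i in range(k) for j in range(k)) / n ** 2
    return (po - pe) / (1 - pe)

# Linear weights with equal spacing for 5 ordered categories
# (+ social, + care, Neutral, - protective, - restrictive):
# each step of disagreement loses 0.25 credit, as in Table 3.
LINEAR = [[1 - abs(i - j) / 4 for j in range(5)] for i in range(5)]
```

With an identity weight matrix this reduces to ordinary (unweighted) kappa, which is how the first scheme in the table relates to the others.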