Publication

Development of the Human Factors Skills for Healthcare Instrument: a valid and reliable tool for assessing interprofessional learning across healthcare practice settings

Publication describing the development and validation of a human factors skills questionnaire for simulation-based healthcare training

G. B. Reedy, M. Lavelle, T. Simpson, J. E. Anderson (2017)

Highlights

  • Human factors skills deficiencies are a leading cause of clinical error, and the value of training these skills is well recognised, particularly for improving patient safety and reducing mortality.
  • The aim of this study was to develop, pilot and evaluate a structured instrument to assess self-efficacy in human factors skills that can be used before and after simulation training and is relevant to a range of healthcare professions.
  • The paper argues that the Human Factors Skills for Healthcare Instrument (HuFSHI) provides a reliable and valid method of evaluating trainees’ self-efficacy in human factors skills, making it a particularly useful tool for understanding how simulation supports human factors learning across both acute and mental health settings.


Abstract

Background

Human factors skills deficiencies are a leading cause of clinical error, and simulation-based training of these skills is widely used to improve teams’ ability to communicate and co-ordinate their actions, with recognised benefits for patient safety and mortality. However, there is no validated instrument for evaluating human factors skills learning that is relevant across healthcare professions and practice settings. Our aim was to develop, pilot and evaluate a structured instrument to assess trainees’ self-efficacy in human factors skills, suitable for use before and after simulation training and applicable to a range of healthcare professions.

Method

Through consultation with a multi-professional expert group, we developed and piloted a 39-item survey with 272 healthcare professionals attending training courses across two large simulation centres in London, one specialising in acute care and one in mental health, both serving healthcare professionals working across acute and community settings. Following psychometric evaluation, the final 12-item instrument was evaluated with a second sample of 711 trainees.

Results

Exploratory factor analysis revealed a 12-item, one-factor solution with good internal consistency (α=0.92). The instrument had discriminant validity, with newly qualified trainees scoring significantly lower than experienced trainees (t(98)=4.88, p<0.001), and was sensitive to change following training in acute and mental health settings, across professional groups (p<0.001). Confirmatory factor analysis revealed an adequate model fit (RMSEA=0.066).
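For readers who want to see what checks of this kind look like in practice, the sketch below is a minimal illustration, not the authors’ analysis code: the data, the assumed 1–10 rating scale, and all variable names are hypothetical. It shows how internal consistency (Cronbach’s alpha) and a known-groups comparison of total scores could be computed in Python for a 12-item response matrix.

```python
# Illustrative sketch only (not the authors' code). Rows are trainees,
# columns are the 12 instrument items; an assumed 1-10 rating scale and
# random responses stand in for real data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 11, size=(200, 12)),
                     columns=[f"item_{i+1}" for i in range(12)])

# Internal consistency: Cronbach's alpha across the 12 items.
k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")

# Known-groups (discriminant) validity: compare total scores of newly
# qualified vs. experienced trainees with an independent-samples t-test.
group = rng.choice(["newly_qualified", "experienced"], size=len(items))
totals = items.sum(axis=1)
t, p = stats.ttest_ind(totals[group == "experienced"],
                       totals[group == "newly_qualified"])
print(f"t = {t:.2f}, p = {p:.3f}")
```

With real responses, the factor structure reported above (exploratory and confirmatory factor analysis, RMSEA) would be examined with dedicated psychometric tooling rather than the simple summaries shown here.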

Conclusion

The Human Factors Skills for Healthcare Instrument provides a reliable and valid method of assessing trainees’ human factors skills self-efficacy across acute and mental health settings. This instrument has the potential to improve the assessment and evaluation of human factors skills learning in both uniprofessional and interprofessional clinical simulation training.
