Development and psychometric evaluation of the radiographers’ competence scale

Abstract

Assessing registered radiographers' clinical competence is of great importance because of the recent change in nursing focus and the rapid technological development. Self-assessment helps radiographers validate and improve clinical practice by identifying their strengths as well as areas that may need to be developed. The aim of the study was to develop and psychometrically test a specially designed instrument, the Radiographers Competence Scale (RCS). A cross-sectional survey was conducted among 406 randomly selected radiographers from all over Sweden. The study consisted of two phases: the development of the instrument and the evaluation of its psychometric properties. The first phase included three steps: 1) construction of the RCS; 2) pilot testing of face and content validity; and 3) creation of a web-based 54-item questionnaire for testing the instrument. The second phase comprised psychometric evaluation of construct validity, internal consistency reliability and item reduction. The analysis reduced the initial 54 items of the RCS to 28 items. A logical two-factor solution was identified, explaining 53.8% of the total variance. The first factor, labelled "Nurse initiated care", explained 31.7% of the total variance; the second factor, labelled "Technical and radiographic processes", explained 22.1%. The scale had good internal consistency reliability, with a Cronbach's alpha of 0.87. The RCS is a short, easy-to-administer scale for capturing radiographers' competence levels and the frequency with which they use their competence. The scale was found to be valid and reliable. The self-assessment RCS can be used in management, patient safety and quality improvement to enhance the radiographic process.

Cite as: Andersson, B., Christensson, L., Fridlund, B. and Broström, A. (2012) Development and psychometric evaluation of the radiographers' competence scale. Open Journal of Nursing, 2, 85-96. doi:10.4236/ojn.2012.22014

1. INTRODUCTION

Competence in nursing practice is a challenging concept that is continually being debated and discussed [1,2]. Definitions used to describe competence vary and, in particular, the simultaneous use of the terms competence and performance gives rise to confusion [3-5]. While [5] distinguishes between the two concepts: competence is concerned with perceived skills, whereas performance denotes an actual, situated behaviour that is measurable. Competence has also been described as being closely related to "being able to" and "having the ability to" do something [6]. Nevertheless, there is no agreement as to whether competence implies a greater level of ability or capacity than performance [6]. Benner [7] defined competence in general as the ability to perform a task with desirable outcomes.

In more recent nursing studies, the issue of competence has been explored in different ways. There is a general consensus that it is based on a combination of components reflecting knowledge, understanding and judgment, cognitive skills, technical and interpersonal skills, and personal attitudes [2]. Among others, Meretoja et al. [8] provide details of the Nurse Competence Scale (NCS), used to measure the competence level of nursing professionals. The NCS is a self-assessment tool consisting of 73 items grouped into seven sub-scales, used to assess registered nurses in medical and surgical work environments in a hospital setting. The NCS has strong validity and reliability. Another available instrument is the Competency Inventory for Registered Nurses (CIRN). This 58-item instrument was developed from a qualitative study based on the International Council of Nurses' (ICN) framework. Liu et al. [9] identified strong evidence of the internal consistency reliability and the content and construct validity of the CIRN.

The examination of professional competence also includes the way of acting in a specific context, in this case a diagnostic radiology department, since people working in the same field may not possess identical knowledge. Knowledge can be so deeply embedded that a registered radiographer with extensive experience may carry out his/her duties intuitively. This is known as "tacit knowledge" and is often difficult to assess [10]. A central premise of tacit knowledge is that "we know more than we can express" [11]. Benner [7] and Dreyfus et al. [12] describe five levels of professional development, "from novice to expert", as the basis for achieving increased skills and competencies. Understanding and judging situations are the key skills in complex human activities [13]. Benner [7] described the nurse's evidence-based knowledge as derived from actual nursing situations in an emergency context, and later developed this view further by emphasizing a more holistic view of caring behaviour [14], which is often challenging due to complex technological advances in the health care services. Assessing clinical competence among registered radiographers is therefore of major importance because of the immense changes that have taken place in diagnostic radiology departments in the past decade (i.e., the rapid technological development and the change in nursing focus). In most countries, registered nurses are responsible for patient care, while a radiological technologist or corresponding professional is in charge of the radiological equipment [15]. In Sweden, specially educated and registered radiographers have a unique position, being responsible for the entire radiographic examination: nursing actions as well as medical technology, e.g. injections, catheterization and medical technical equipment [16]. In this paper, radiographer will be used to refer to these professionals.

The use of self-assessment tools allows radiographers to consider different aspects of nursing in their clinical work and helps them to improve their clinical competence [17]. Furthermore, the assessment of competence is an ethical matter as well as a quality of care concern [18]. Accordingly, competence assessment should be a core function in management, patient safety and quality improvement, and valid and reliable methods for assessing professional clinical competence are therefore essential. However, a review of the literature identified no valid and reliable tool meeting the specific needs of radiographers. Hence, the aim of the present study was to develop and psychometrically test a specially designed instrument, the Radiographers Competence Scale (RCS).

2. METHOD

2.1. Design

The design was a cross-sectional survey consisting of two phases: the development of the instrument and the evaluation of its psychometric properties. The first phase included three steps: 1) construction of the RCS; 2) pilot testing of face and content validity; and 3) creation of a web-based questionnaire for testing the instrument. The second phase comprised psychometric evaluation of construct validity and internal consistency reliability.

2.2. Phase I. Instrument Development

2.2.1. Step 1. Construction of the Radiographers Competence Scale

The development of the RCS was guided by the framework of Streiner and Norman [19]. The basis was a qualitative study exploring professional competence [16], from which two main areas (i.e., direct and indirect patient-related areas) emerged. The first was broken down into four competencies focusing on the care provided in close proximity to the patient: guiding, performing the examination, providing support and being vigilant. The second area was likewise divided into four competencies focusing on the surrounding environment and activities: organization, ensuring quality, handling the image, and collaboration with internal and external agencies.

When constructing the RCS, it was valuable that all those involved were practising nurses and/or researchers with different specialities, for example cardiovascular, geriatric, intensive care, anaesthetic and radiographic nursing care. Experience of developing and psychometrically testing instruments was also considered a strength among the members of the research group [20].

The initial version of the RCS consisted of 42 items in eight areas, with between four and seven items per area, based on the categories and sub-categories reported in a qualitative study by Andersson et al. [16]. Each item represented a behaviour and was answered by means of a two-part scale, one part focusing on valuation of radiographer competence and the other on the frequency of its use. Valuation of the competence was measured on a 10-point scale (1 - 10), where 1 was the lowest and 10 the highest grade. The frequency of using the competence was measured by the following response alternatives: "never used", "very seldom used", "sometimes used", "often used", "very often used" and "always used". In the present study, only the first part, focusing on valuation of the competence, was used for item reduction and reliability testing of the RCS.

2.2.2. Step 2. Pilot Test for Face and Content Validity

Pilot testing of the face and content validity was conducted in line with Lynn's criteria [21] (i.e., content relevance, clarity, concreteness, understandability and readability of the scale). A strategically selected group of 16 participants with varying experiences of the field was used. The group comprised one third-year radiography student, six clinically experienced radiographers, four radiographers in management positions, three PhD students and two nursing researchers who were familiar with diagnostic radiology. The participants were asked to judge the relevance of the items, individually and as a set, on a 4-point rating scale (from 1, "not relevant", to 4, "very relevant"). In addition, the participants were requested to identify important areas not included in the instrument; hence, after every set of items there was space for comments. Further assessment was undertaken regarding the items dealing with competencies and the association between the items and the competencies. Finally, missing items or competencies and suggested additional items were also assessed.

Following analysis of the data, the 42-item version of the RCS was amended. As a result, 12 items concerning relatives, patient safety, vigilance, prioritizing and optimizing image quality were added to the questionnaire. The mean relevance of the items was judged to be 3.4 (range 2 - 4), with the lowest value (2) pertaining to the understandability and readability of the items "prioritization of patients", "providing relief to the patient", "intervening in life and death situations" and "independent reporting of medical images". Furthermore, the face and content validity testing resulted in linguistic adjustments to enhance readability. The order of the competencies and items was also changed to ensure a more systematic and easy-to-use tool. The number of items was increased in all competence areas except the first, "organization and leadership". In line with the recommendations of Berk [22], four of the authors independently acted as experts when considering the logical consistency of the competencies and the number of items to be included in the RCS. Content and face validity were based on agreement between the four authors.
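Lynn's method is often summarized as an item-level content validity index (I-CVI), i.e. the proportion of experts rating an item 3 or 4; the present study reports mean relevance instead. As a minimal sketch, assuming the 16 experts' ratings are held in an experts-by-items array, both summaries can be derived from the same data:

import numpy as np

def summarize_ratings(ratings: np.ndarray):
    """ratings: experts x items matrix of scores on the 1 - 4 relevance scale."""
    mean_relevance = ratings.mean(axis=0)  # per-item mean relevance (cf. 3.4 above)
    i_cvi = (ratings >= 3).mean(axis=0)    # per-item content validity index
    return mean_relevance, i_cvi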

2.2.3. Step 3. Construction of a Web-Based Questionnaire

The amended, pilot-tested 54-item version of the RCS was used to construct a web-based questionnaire. The RCS was divided into eight competencies with six to eight items in each area (Figure 1). Every item had two parts, valuation of radiographic competence and frequency of its use, each answered on a 10-point scale. After every section a space was provided for comments. The web-based questionnaire included instructions for participants and collected demographic data including age, sex, professional status, educational level and number of years in present position.

2.3. Phase II. Item Reduction and Psychometric Evaluation

2.3.1. Sample and Design

Radiographers from all over Sweden were identified from a register administered by the Swedish Association of Health Professionals (SAHP), a trade union and professional organization for radiographers, nurses, midwives and biomedical scientists. The inclusion criterion was being clinically active, i.e. currently working as a radiographer. Of the 3592 registered radiographers in Sweden, 2167 were members of the SAHP at the time of the study, of whom 1772 met the inclusion criterion. Using the register, a computer systematically generated a list of 500 radiographers, who were invited to participate.

In late November 2010, a link to the web-based questionnaire comprising the RCS was e-mailed to the participants. An accompanying letter contained information about the study and stated that participation was voluntary and that confidentiality would be maintained at all times. Informed consent was obtained before the participant completed the questionnaire. A first reminder was sent after one week and a second after two weeks, resulting in 200 responses (a response rate of 40%). As this number was considered low, a new computer-generated list of 500 participants was drawn from the SAHP register, with a reminder sent after two weeks. In total, 1000 questionnaires were distributed, resulting in 406 responses (40.6%).

2.3.2. Item Reduction

All data were analysed using SPSS 18.0 for Windows (SPSS Inc., Chicago, Illinois, USA). The number of items was reduced in two phases. First, corrected item-total correlations and Cronbach's alpha if item deleted were calculated for the 54-item questionnaire [23-25]. Items with low correlations, i.e. ≤0.5, were removed one at a time, and new item-total statistics were calculated on each occasion. Second, repeated explorative factor analyses (Varimax rotation with Kaiser normalization) were conducted on the remaining items [19,20]. The item with the lowest factor loading was removed, one at a time, and a new factor analysis was performed after each extraction. According to Field [26], factor loadings of >0.50 were considered sufficient.
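To illustrate the first reduction phase (a sketch only; the authors worked in SPSS), the corrected item-total correlation and the iterative removal rule can be expressed as follows, assuming the 54 valuation items are columns of a pandas DataFrame:

import pandas as pd

def corrected_item_total(df: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of all remaining items."""
    total = df.sum(axis=1)
    return pd.Series({col: df[col].corr(total - df[col]) for col in df.columns})

def reduce_items(df: pd.DataFrame, cutoff: float = 0.5) -> pd.DataFrame:
    """Remove the weakest item (correlation <= cutoff) one at a time,
    recalculating the item-total statistics after every removal."""
    while len(df.columns) > 2:
        r = corrected_item_total(df)
        if r.min() > cutoff:
            break
        df = df.drop(columns=r.idxmin())
    return df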

2.3.3. Construct Validity

Construct validity (i.e., establishing a clear and theoretically sound factor structure) was assessed using principal component analysis with Varimax rotation and Kaiser normalization [19,20]. Initially, data were examined with Bartlett's test of sphericity, as well as with the measure of sampling adequacy for each variable and overall. The number of factors extracted was decided by the Kaiser criterion (eigenvalue > 1.0). Cattell's scree test was also used to control the number of tentative factors to be retained [27]. Pett et al. [25] recommend that a newly developed instrument should explain 60% of the total variance.

Figure 1. A description of the initial 54-item version of the RCS, the item reduction, and the items included in factor 1 and factor 2 of the validated 28-item version of the RCS.
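The checks described above can be reproduced outside SPSS; the following sketch uses the open-source Python package factor_analyzer (an assumption of this illustration, not the authors' toolchain):

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def construct_validity(df: pd.DataFrame):
    # Bartlett's test of sphericity: the correlation matrix should differ
    # significantly from an identity matrix before factoring is attempted.
    chi_square, p_value = calculate_bartlett_sphericity(df)

    # Kaiser-Meyer-Olkin measure of sampling adequacy, per variable and overall.
    kmo_per_item, kmo_overall = calculate_kmo(df)

    # Kaiser criterion: retain components with eigenvalues > 1.0;
    # plotting `eigenvalues` gives Cattell's scree test as a cross-check.
    fa = FactorAnalyzer(rotation=None)
    fa.fit(df)
    eigenvalues, _ = fa.get_eigenvalues()
    n_factors = int((eigenvalues > 1.0).sum())

    # Principal component extraction with Varimax rotation, as in the paper.
    fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax", method="principal")
    fa.fit(df)
    return fa.loadings_, eigenvalues, p_value, kmo_overall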

2.3.4. Internal Consistency Reliability

The internal consistency reliability was established using Cronbach's alpha coefficient [19,20,23]. As recommended when developing a new instrument, the lowest acceptable value for Cronbach's alpha coefficient was set at >0.70 [24].
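For reference, Cronbach's alpha for a scale of k items is

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma^{2}_{Y_i}}{\sigma^{2}_{X}} \right)

where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total score; the criterion above requires this quantity to exceed 0.70.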

2.3.5. Floor and Ceiling Effects and Missing Data

The proportion of floor and ceiling effects (i.e., respondents obtaining the minimum and maximum scores, respectively) among the items was examined, as were missing data [19].
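A minimal sketch, reusing the DataFrame layout from the item-reduction example above:

import pandas as pd

def floor_ceiling(df: pd.DataFrame, lo: int = 1, hi: int = 10):
    """Per-item proportion of respondents at the scale floor (1) and
    ceiling (10) of the 10-point valuation scale."""
    return (df == lo).mean(), (df == hi).mean()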

2.4. Ethical Considerations

This study was conducted in accordance with the principles outlined in the Declaration of Helsinki [28], the Ethical Guidelines for Nursing Research in the Nordic Countries [29], the Swedish Law for Ethical Approval for Research on Humans [30] and the Official Secrets Act [31]. Subjects were informed that participation was voluntary, that the data would be treated confidentially and that they could withdraw from the study at any time. It was impossible to associate any specific answer with a given participant. Completing the questionnaire implied informed consent.

3. RESULTS

Valid questionnaires were obtained from 406 respondents with clinical experience in diagnostic radiology departments. The mean age of the participants was 47 years (SD 10.55, range 22 - 66), and 88% were women (Table 1).

3.1. Item Reduction of the RCS

In the first step, the use of corrected item-total correlation and Cronbach's alpha if item deleted led to the removal of 12 items with low correlations (≤0.5) from the 54-item scale. In the second step, several explorative factor analyses were performed on the remaining 42 items. Principal component analyses were conducted to obtain the solution with optimal scale variance. Items with the lowest factor loadings were removed one at a time, which led to a further 14 items being excluded.

Table 1. Characteristics of the sample (n = 406).

Figure 1 presents the 26 items that were removed from the eight competencies in the initial 54-item questionnaire.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] McCready, T. (2007) Portfolios and the assessment of competence in nursing: A literature review. International Journal of Nursing Studies, 44, 143-151. doi:10.1016/j.ijnurstu.2006.01.013
[2] Redfern, S., Norman, I., Calman, L., Watson, R. and Murrells, T. (2002) Assessing competence to practice in nursing: A review of the literature. Research Papers in Education, 17, 51-77.
[3] Ramritu, P.L. and Barnard, A. (2001) New nurse graduates’ understanding of competence. International Nursing Review, 48, 47-57. doi:10.1046/j.1466-7657.2001.00048.x
[4] Watson, R., Calman, L., Norman, I., Redfern, S. and Murrells, T. (2002) Assessing clinical competence in student nurses. Journal of Clinical Nursing, 11, 554-555. doi:10.1046/j.1365-2702.2002.00590.x
[5] While, A.E. (1994) Competence versus performance: Which is more important? Journal of Advanced Nursing, 20, 525-531. doi:10.1111/j.1365-2648.1994.tb02391.x
[6] Eraut, M., Germain, J., James, J., Cole, J.J., Bowring, S. and Pearson, J. (1998) Evaluation of vocational training of science graduates in the NHS. University of Sussex School of Education, Brighton.
[7] Benner, P. (1984) From novice to expert: Excellence and power in clinical nursing practice. Addison-Wesley, Menlo Park.
[8] Meretoja, R., Isoaho, H. and Leino-Kilpi, H. (2004) Nurse Competence Scale: Development and psychometric testing. Journal of Advanced Nursing, 47, 124-133. doi:10.1111/j.1365-2648.2004.03071.x
[9] Liu, M., Kunaiktikul, W., Senaratana, W., Tonmukayakul, O. and Eriksen, L. (2007) Development of competency inventory for registered nurses in the People’s Republic of China: Scale development. International Journal of Nursing Studies, 44, 805-813. doi:10.1016/j.ijnurstu.2006.01.010
[10] Johannessen, K.S. (1999) Praxis och tyst kunnande. Dialoger, Stockholm.
[11] Polanyi, M. (1967) The tacit dimension. Doubleday, Garden City.
[12] Dreyfus, H.L., Dreyfus, S.E. and Athanasiou, T. (1986) Mind over machine: The power of human intuition and expertise in the era of the computer. Basil Blackwell, Oxford.
[13] Dreyfus, H.L. (1982) Husserl, intentionality and cognitive science. MIT Press, Cambridge.
[14] Benner, P.E., Chesla, C.A. and Tanner, C.A. (1996) Expertise in nursing practice: Caring, clinical judgment, and ethics. Springer Pub. Co., New York.
[15] Reeves, P.J. (1999) Models of Care for Diagnostic Radiography and their use in the education of undergraduate and postgraduate radiographers. Dissertation, University of Wales, Bangor.
[16] Andersson, B., Fridlund, B., Elgán, C. and Axelsson, Å. (2008) Radiographers’ areas of professional competence related to good nursing care. Scandinavian Journal of Caring Sciences, 22, 401-409. doi:10.1111/j.1471-6712.2007.00543.x
[17] Campbell, B. and Mackay, G. (2001) Continuing competence: An Ontario nursing regulatory program that supports nurses and employers. Nursing Administration Quarterly, 25, 22-30.
[18] The Swedish Society of Radiographers and The Swedish Association of Health Professionals (2008) Code of Ethics for Radiographers. The Swedish Association of Health Professionals, Stockholm. http://www.swedrad.com
[19] Streiner, D.L. and Norman, G.R. (2008) Health measurement scales: A practical guide to their development and use. 4th Edition, Oxford University Press, Oxford.
[20] Burns, N. and Grove, S.K. (2001) The practice of nursing research: Conduct, critique & utilization. 4th Edition, Saunders, Philadelphia.
[21] Lynn, M.R. (1986) Determination and quantification of content validity. Nursing Research, 35, 382-385. doi:10.1097/00006199-198611000-00017
[22] Berk, R.A. (1990) Importance of expert judgment in content-related validity evidence. Western Journal Nursing Research, 12, 659-671. doi:10.1177/019394599001200507
[23] Cronbach, L.J. and Warrington, W.G. (1951) Time-limit tests: Estimating their reliability and degree of speeding. Psychometrika, 16, 167-188. doi:10.1007/BF02289113
[24] Nunnally, J.C. and Bernstein, I.H. (1994) Psychometric theory. 3rd Edition, McGraw-Hill, New York.
[25] Pett, M.A., Lackey, N.R. and Sullivan, J.J. (2003) Making sense of factor analysis: The use of factor analysis for instrument development in health care research. SAGE, London.
[26] Field, A. (2005) Discovering statistics using SPSS. 2nd Edition, SAGE, London.
[27] DeVellis, R.F. (2003) Scale development: Theory and applications. 2nd Edition, Sage, Newbury Park.
[28] MFR-Rapport (2002) Riktlinjer för etisk värdering av medicinsk humanforskning: Forskningsetisk policy och organisation i Sverige. (Guidelines for ethical valuation of medical human research: Research ethics about policy and organization in Sweden.) 2nd Edition, Swedish Research Council in Medicine, Stockholm.
[29] Sykepleiernes Samarbeid i Norden (2003) Etiska riktlinjer för omvårdnadsforskning i Norden (in Swedish). (Ethical recommendations for nursing research in the Nordic countries.) http://www.ssn-nnf.org/vard/index.html
[30] SFS 2003: 460. Lag om etikprövning av forskning som avser människor (in Swedish). (Law for ethical approval regarding research on humans.) Stockholm, The Riksdag. http://www.riksdagen.se/webbnav/index.aspx?nid=3911&bet=2003:460
[31] The Official Secrets Act 1989 (Prescription) (Amendment). http://www.legislation.gov.uk/uksi/2007/2148/pdfs/uksi_20072148_en.pdf
[32] Smith, T.N. and Baird, M. (2007) Radiographers’ role in radiological reporting: A model to support future demand. Medical Journal of Australia, 186, 629-631.
[33] Price, R.C. and Le Masurier, S.B. (2007) Longitudinal changes in extended roles in radiography: A new perspective. Radiography, 13, 18-29. doi:10.1016/j.radi.2005.11.001
[34] Reeves, P.J. (2008) Research in medical imaging and the role of the consultant radiographer: A discussion. Radiography, 14, 61-64. doi:10.1016/j.radi.2008.11.004
[35] Rattray, J. and Jones, M.C. (2007) Essential elements of questionnaire design and development. Journal of Clinical Nursing, 16, 234-243. doi:10.1111/j.1365-2702.2006.01573.x
[36] Tabachnick, B.G. and Fidell, L.S. (2007) Experimental designs using ANOVA. Thomson/Brooks/Cole, Belmont.
[37] Pallant, J. (2010) SPSS survival manual: A step by step guide to data analysis using SPSS. 4th Edition, Open University Press/McGrawHill, Maidenhead.
[38] Polit, D.F. and Beck, C.T. (2008) Nursing research: Generating and assessing evidence for nursing practice. 8th Edition, Wolters Kluwer Health/Lippincott Williams & Wilkins, Philadelphia.
