Impact of Adaptive Quizzing as a Practice and Remediation Strategy to Prepare for the NCLEX-RN

Abstract

National Council Licensure Examination-Registered Nurse (NCLEX-RN) outcomes are extremely important to nursing institutions and students for myriad reasons. For students, the NCLEX-RN represents one of the final milestones to conquer before entering a nursing career. Nursing programs monitor NCLEX-RN pass rates as an important gauge of program quality, and minimum levels frequently must be met. This study explored the implementation of an adaptive quizzing and learning system as part of an NCLEX-RN preparation strategy designed to increase student engagement and subsequent success on the NCLEX-RN. The adaptive quizzing system (AQS) was used as part of an ongoing, proactive approach to student preparation. This approach stands in contrast to the practice of using high-stakes exam scores to try to predict student NCLEX-RN outcomes; in the latter case, the evidence is mixed on how well such scores translate into remediation that moves students toward success based on evidence of need. The study school required students (N = 54) to take regular, adaptive practice quizzes throughout their final year in the nursing program. Students were also given the HESI E2 in their final semester, and all but one student achieved the target threshold, with scores ranging from 772 to 1028. Despite this variability, 90.7% of the students in the study group passed the NCLEX-RN on their first attempt, and the pass rate rose to 98% when including those who passed on the second attempt. NCLEX-RN pass rates at the study school increased by 11.55% following the implementation of the system and, in the second year of implementation (the data analyzed for this study), increased an additional 3.95% over the previous year. With many factors to consider, we cannot say unequivocally that using the AQS resulted in the increase in NCLEX-RN pass rates at the study school. Findings from this retrospective study do, however, support the use of an adaptive quizzing system as a component of an NCLEX-RN preparation strategy. Implications of these findings are discussed.

Citation:

Malkemes, S. and Phelan, J. (2017) Impact of Adaptive Quizzing as a Practice and Remediation Strategy to Prepare for the NCLEX-RN. Open Journal of Nursing, 7, 1289-1306. doi: 10.4236/ojn.2017.711093.

1. Introduction

Upon graduation from nursing school, graduates must pass the National Council Licensure Examination for Registered Nurses (NCLEX-RN®) before becoming licensed nurses. Many nursing accrediting bodies include the first-time NCLEX-RN pass rate of the graduating student population in the accreditation process, although some (including the Commission on Collegiate Nursing Education) consider pass rates for all test takers [1] [2]. Nevertheless, the percentage of graduating students passing the NCLEX-RN the first time has emerged as one indicator of a successful, high-quality nursing program [1] [3] [4] [5]. The focus on first-time pass rates is further supported by data from the National Council of State Boards of Nursing (NCSBN) indicating a higher percentage of failure among repeat test-takers [1]. The NCSBN attributes the difference in failure rates to the extended time between graduation and retesting for repeat test-takers. Moreover, a student who cannot pass the NCLEX-RN is precluded from entering their chosen career.

2. Literature Review

The importance placed on passing the NCLEX-RN naturally results in pressure to increase pass rates. Factors that may play a role in students' NCLEX-RN success include the content and pedagogy within the nursing curriculum, the program's admission criteria, intervention and remediation strategies for struggling students, and within-program measures that may correlate with NCLEX-RN success [6]. The research literature includes studies examining both pre-program factors (such as academic aptitude) and in-program factors (nursing course grades, etc.) in relation to NCLEX-RN results. Not surprisingly, academic aptitude, both before and during the program, is related to NCLEX-RN success. Broad measures such as SAT scores, entrance exams, overall nursing program GPA, other measures of academic aptitude, science GPA, scores in advanced medical-surgical courses, and biology course grades have all been shown to be associated with NCLEX-RN success [7] [8] [9].

Within many nursing programs there is an emphasis on identifying ways to predict NCLEX-RN outcomes for students who are about to finish the program. Most frequently, schools employ some form of standardized test to try to identify students "at risk" of NCLEX-RN failure. Some schools go further and implement "high-stakes" testing policies that preclude students from graduating unless they have reached a minimum score. In these cases, students who have met all previously outlined program requirements may be held back from completing the program until they achieve a threshold score on a standardized test [1] [10] [11] [12].

This focus on prediction is problematic for several reasons. First, while there is evidence that students scoring above a certain threshold are likely to pass the NCLEX-RN [13], there is little or no predictive information about the students who fall below the threshold. Furthermore, high-stakes tests do not identify those students at risk of failing the NCLEX-RN, which is a much more useful piece of information when considering the need for remediation [10] [11]. Finally, the information from predictive tests often comes when students are close to graduation, so the potential impact on student learning is limited. The 9th annual HESI Validity Study highlights this issue with the HESI E2 (a commonly used high-stakes exam): "… the E2, is usually administered during the last semester or quarter of the curriculum when little time remains for remediation before taking the NCLEX-RN" ( [14] , p. S13).

Thus, when progression policies are in place, the connection between first-time NCLEX-RN pass rates and program quality becomes less clear; when standardized tests are used to preclude graduation or student progression, the pool of students permitted to take the NCLEX-RN is not necessarily an accurate representation of the student population as a whole. This confounds the view of a nursing program's "success" as it relates to NCLEX-RN pass rates. As Giddens (2009) noted, "Is there really anything to celebrate when a nursing program with only a 50% persistence to graduation rate boasts of a 100% first-time [test taker] NCLEX-RN pass rate?" (p. 124). Some have suggested reporting the percentage of students who completed the program alongside first-time pass rates, thus painting a more accurate picture of program quality [15] [16].

Furthermore, research into best practices in testing and assessment directs nurse educators to evaluate student ability comprehensively, with more than one indicator, especially when making important decisions that affect students' futures. The Fair Testing Imperative in Nursing Education document from the NLN [2] stated that there are "no universally accepted standards for how predictive tests and related policies should be implemented" (p. 1). The document further recommended that faculty adopt policies drawing on multiple data sources when making decisions about students' competence to graduate, and that they use tests and other measures to "support student learning and improve teaching" ( [2] , p. 4). As Spurlock and Hunt (2008) noted: "Looking at a single, clinical-only indicator to represent students' readiness for graduation devalues the rest of their education, whether it occurred in a community college, diploma school, or university setting" [11]. In short, an important decision, such as whether a student will graduate, should not rest on a single test score.

Of course, standardized assessments can serve other purposes as well; although many are designed (and usually used) as summative, end-of-program measures, scores on these exams may help guide students' remediation and studying efforts. Despite successful progress throughout the curriculum, students may have difficulty achieving the benchmark on a standardized measure [12] [17]. In such cases, results may provide students and faculty with valuable information on areas of curricular weakness (and strength) that can be used to focus remediation efforts. Indeed, some nursing programs have moved from using standardized tests in high-stakes situations to using them to help identify areas in which students need remediation [17] [18]. In these instances, exam scores can help shape remediation efforts, but to be effective, students are likely to need additional resources to address their individual needs, master the critical content required to pass the NCLEX-RN, and move on to the next phase of their chosen field. Information indicating that a student did not meet a particular benchmark is useful only if there is time for it to drive remediation in a formative way, and if there are resources with which students can engage in remediation efforts.

3. Background

In recent years, faculty at the study school observed that students were not completing practice NCLEX-RN-style questions on their own except to study for examinations. Faculty were aware that success on the NCLEX-RN depended on this practice and decided to require students to complete a designated number of questions for each clinical course during the senior year of the program. This policy, however, was difficult to implement because there was no way to track and monitor the number of questions students were answering, and so it was not instituted systematically at the time. Administrative changes occurred within the school of nursing beginning in 2013, and following disappointing NCLEX-RN results in 2014 (a 79.17% pass rate), the school leadership recognized the need to do more. As part of a multi-pronged strategy, the adaptive quizzing and learning system PassPoint (Wolters Kluwer) was adopted as a method to administer and monitor student completion of practice NCLEX-RN-style questions. It was implemented at the beginning of the senior year in an effort to identify weaker students during the first semester, allowing for earlier intervention and more remediation prior to the NCLEX-RN.

3.1. Adaptive Quizzing

An adaptive quizzing and learning system is designed to provide students with an environment in which they can effectively and efficiently learn, practice, and master course content. Just as a computerized adaptive test (CAT) adjusts question difficulty according to each student's responses, so can an adaptive learning system adjust and personalize the learning experience based on how the student interacts with it. The adaptive quizzing system (AQS) used at the study school is an online, computer-based platform with a large database of calibrated NCLEX-RN-style test questions in multiple formats (multiple-choice, fill-in-the-blank, hot-spot, graphics, etc.).

Instructors can integrate the system into their courses in different ways to best align with current teaching practices and goals. For example, faculty can create class assignments using collections of questions chosen from the large AQS database, or set a Mastery Level (ML) target in a particular topic for students to achieve. Students can also use the system for their own independent studying and to practice test-taking skills. To do so, students select a nursing topic (or client need), the desired number of quiz questions (between 5 and 20) and the system builds an adaptive practice quiz using those parameters. Quizzes are delivered to students based on an adaptive algorithm which personalizes the experience for each student, optimizing learning potential by selecting quiz questions targeted at the current level of understanding. Students take the quiz and upon completion are shown a detailed answer key along with rationales and explanations of key concepts. Multiple versions of the AQS exist, each aligned to a particular nursing course or exam (e.g., Medical-Surgical or NCLEX-RN).
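As a concrete illustration of how such difficulty-targeted selection can work, the minimal Python sketch below builds a quiz from the unseen questions whose calibrated difficulty most closely matches the student's current ability estimate. This is a hypothetical illustration written for this discussion, not PassPoint's proprietary algorithm; the function name, the logit difficulty scale, and the tie-breaking jitter are all assumptions.

```python
import random

def build_adaptive_quiz(question_pool, ability, num_questions=10, seen=None):
    """Select quiz questions whose calibrated difficulty is closest to
    the student's current ability estimate (a simplified stand-in for
    the AQS's selection logic).

    question_pool: list of (question_id, difficulty) pairs, with
                   difficulty on the same scale as the ability estimate
    ability:       the student's current ability estimate
    num_questions: quiz length; the AQS allows between 5 and 20
    seen:          ids of questions already administered to the student
    """
    if not 5 <= num_questions <= 20:
        raise ValueError("quiz length must be between 5 and 20")
    seen = seen or set()
    candidates = [q for q in question_pool if q[0] not in seen]
    # Rank unseen questions by how closely their difficulty matches the
    # current ability estimate; random jitter breaks ties so repeated
    # quizzes at the same ability level are not identical.
    candidates.sort(key=lambda q: (abs(q[1] - ability), random.random()))
    return [qid for qid, _ in candidates[:num_questions]]
```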

Adaptive quizzing is efficient and helps focus each student's learning on content tailored to their estimated ability level [19]. Within the AQS, a student's ability level is determined and continuously updated based on responses to calibrated questions with known difficulty parameters. As students answer more difficult questions correctly, they are given increasingly challenging questions on subsequent quizzes, and as they answer those correctly, they move up in ML. ML is reported on a scale of 1 to 8; the levels provide students with a motivating measure of their progress and a clear indication of which topics or client needs they have mastered and which require more work. The AQS allows students to practice and learn in a low-stakes, authentic environment to help prepare for the high-stakes situation: the NCLEX-RN. This type of practice can also be invaluable for populations such as EL (English learner) or LEP (limited English proficient) students, as well as those requiring extra support in content mastery and test-taking strategies. In both cases, evidence suggests that allowing students to engage in more independent, self-paced learning (by way of interactive web-based tools) can help increase confidence as well as support student learning [20] [21].
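To make the calibrated-question mechanism concrete, the sketch below shows one standard way an ability estimate can be updated online under a Rasch (one-parameter logistic) model and mapped onto a 1-to-8 mastery scale. The learning rate, the logit bounds, and the linear ML mapping are illustrative assumptions; the AQS's actual estimation procedure is not published.

```python
import math

def update_ability(theta, difficulty, correct, lr=0.1):
    """One online update of a Rasch-model ability estimate: an
    illustrative stand-in for however the AQS re-estimates ability."""
    p_correct = 1.0 / (1.0 + math.exp(-(theta - difficulty)))
    # The estimate rises most after a correct answer to a hard item
    # (p_correct is small) and falls most after missing an easy one.
    return theta + lr * ((1.0 if correct else 0.0) - p_correct)

def to_mastery_level(theta, lo=-3.5, hi=3.5):
    """Map a logit-scale ability onto a 1-to-8 ML scale; the linear
    mapping and the (lo, hi) bounds are assumptions. A mid-scale
    ability (theta = 0) maps to ML 4.5 under these bounds."""
    clamped = max(lo, min(hi, theta))
    return round(1 + 7 * (clamped - lo) / (hi - lo), 1)
```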

The AQS provides students a forum to actively practice for exams and master course content. Research shows that active studying, defined here as answering practice questions on relevant course topics, leads to better retention of that content than simply reading over notes or other more passive study techniques [22] [23]. Most students study in a very passive way: they highlight their textbooks or read over class notes again and again. The research suggests that passive techniques such as these may be less effective than practicing what you will ultimately need to do to show you have learned something: retrieve the information from memory. Put more simply, rather than studying by repeatedly reading over notes or highlighting text, a more effective method is to review content and take a practice quiz on that material. In taking the quiz, one is actively trying to retrieve information from memory, and this changes how one can access it later. "When we use our memories by retrieving things, we change our access to that information. What we recall becomes more recallable in the future. In a sense you are practicing what you are going to need to do later" [24].

The research overwhelmingly indicates that retrieval practice leads to higher long-term retention of material than studying with more passive techniques. As Roediger and Butler (2011) state: "… testing, which is commonly conceptualized as an assessment tool, can be used as a learning tool as well" (p. 6). Some of the benefits of retrieval practice and frequent quizzing are even more salient in the college environment; with many students focusing their study efforts almost solely right before the few major course exams, the addition of more regular, spaced-out quizzes would be beneficial [23] [25].

The AQS also gives instructors a window into their students' performance through an assessment dashboard displaying individual and class-wide information on overall usage as well as content mastery. Instructors can monitor these usage and mastery data to determine how students are using the system, identify where additional instruction, practice, or remediation is required, and focus lectures or assignments on key concepts with which students are struggling.

Some have suggested that nursing schools implement comprehensive testing systems to help students prepare for the NCLEX-RN [26]. A piece of predictive information is only useful, however, if it can be acted upon. If the information comes too late, or if there is no clear path to remediation and change, the "predictive" information loses its value and does not help students prepare. In contrast, the AQS can be used throughout the nursing program to help inform, guide, and support student learning. As students engage with the AQS, they practice answering NCLEX-RN-style questions in a low-stakes environment, gain feedback on strengths and weaknesses, and can better focus their studying, with the potential to increase content mastery and continue learning.

3.2. Purpose

The current study was designed to explore the use of the AQS and its efficacy as a learning tool for nursing students. The AQS was implemented with the study cohort in two, consecutive nursing courses to help increase student practice, focus remediation efforts and determine the potential impact on NCLEX-RN outcomes. The purpose of the study was to explore:

1) Degree of student usage and engagement with the AQS.

2) The relationship between usage and content mastery level as measured by the AQS.

3) The correlation between AQS usage and mastery and HESI E2 and NCLEX-RN outcomes.

4) The relationship between scores on the HESI and course GPA.

5) The impact of the AQS on NCLEX-RN pass rates.

4. Methods

4.1. Participants and Design

The study used a convenience sample of senior nursing students (N = 55) enrolled in a senior-level BSN course at a nursing school in the eastern United States during the spring semester of 2016. The study implemented a retrospective, descriptive, correlational design to explore the relationships among usage and content mastery measured in the AQS, course outcome data, standardized testing scores, and NCLEX-RN outcomes. A follow-up survey was sent to students electronically.

4.2. Data Sources

Data were gathered from course records and included HESI E2 results along with usage and ML data from the AQS (which was used throughout the year). Usage and ML data were collected at two time points: once at graduation and again in early July. A survey was sent to participating students, who were asked questions about the AQS, their usage, and its perceived value and benefits. Students were given the opportunity to provide additional information for each question in an open-ended format. During the spring 2016 HESI E2 administration, a system error caused students to be locked out of the exam; because of this error, students were allowed to retake the exam (a second time) with no penalty.

4.3. AQS Implementation

The AQS was first implemented in the senior-level course in fall 2014. During the first semester, students were required to answer 25 questions per week. This number increased to 50 questions per week at the end of the semester and remained at that level during the spring semester. Students took Mastery Level (ML) quizzes and exams on different topics, and by the spring, students were required to have an ML of 4. If they had not achieved this level by a set point in the semester, the number of required questions increased to 100 per week.

The implementation strategy was revised for the second year of use: for the first two weeks, students had to answer 25 questions, and subsequently 50 per week. The target ML increased from 4 to 4.5. In the spring semester, students had to answer a minimum of 50 questions per week. If students were weaker in certain areas, faculty suggested they answer more questions in those areas. Faculty also encouraged all students to sit through at least one 265-question practice exam, matching the maximum length of the NCLEX-RN at the time, to become accustomed to the experience. Students received 5% of the course points (as a quiz grade) in both senior-level courses if they answered the required number of questions (450 to 1000, depending upon their ML). While this was a small incentive, faculty reported that it could make a difference for some students.

In the 2015-16 academic year, students were required to achieve an ML of 4.5 in the AQS, participate in capstone activities (gaming and ATI exams), and achieve 800 or above on the HESI E2 in order for the Dean to sign the NEV form required by the state for students to sit for the examination. Students graduated in late May 2016 and were not required to take quizzes in the AQS after early May. Many students continued to use the AQS following graduation and prior to taking the NCLEX-RN, as verified by student usage data. In the 2015-16 academic year, the study school also began using ATI as well as some alternative strategies (including games) to help students learn course concepts and answer questions correctly. Gaming was first introduced with the class of 2014 and continued in subsequent years.

5. Results

5.1. AQS Usage and Mastery

AQS usage and mastery data were collected at two time points: late April 2016 (Time 1) and early July 2016 (Time 2). Complete data were available for 54 of 55 students; one student in the initial cohort did not meet the general education graduation requirements. Overall student usage and ML at Time 1 are shown in Table 1. Student use varied: at Time 1, students had answered an average of 3091 questions (SD = 1031.64), taken an average of 253.93 quizzes (SD = 117.14), and achieved an average quizzing ML of 5.02 (SD = 0.63). Students logged into the AQS an average of 129.24 times (range 58 to 317). Given the large standard deviation for questions answered, the median (2849) is a more representative measure of overall usage. Remediation link usage was low, with 63% of the cohort not accessing any links.

Only 29 of 54 students took a practice NCLEX-RN-style exam within the AQS. Those who did took an average of 1.9 exams (SD = 1.11) and achieved an average exam ML of 6.4 (SD = 1.35).

At Time 2, students had answered an average of 3594.7 questions (SD = 1262.25), taken an average of 288.69 quizzes (SD = 131.52), and achieved an average quizzing ML of 5.27 (SD = 0.61). Students had logged into the AQS an average of 151.20 times (range 67 to 417; see Table 2). The median number of questions answered at Time 2 was 3316.50. Remediation link usage increased slightly from Time 1, but 55% of the students still did not access the links at all.

Table 1. AQS quizzing usage and mastery descriptive statistics at Time 1 (N = 54).

Table 2. AQS quizzing usage and mastery descriptive statistics at Time 2 (N = 54).

Data from the AQS indicate the most recent log-in date for each student, and many students were still using the AQS at the time of Time 2 data collection (seven weeks after graduation). Indeed, between Time 1 and Time 2, the median increase in questions answered was 355. This change is noteworthy because Time 1 data collection was close to the end of the course, so most of the usage between Time 1 and Time 2 was not required for the course but was independent quizzing undertaken by students as they prepared to take the NCLEX-RN.

5.2. Mastery Level Distribution

Overall ML distribution is shown in Figure 1. The distribution of final ML was narrow, with a standard deviation of 0.61 and a range of 4.5 to 7.2. ML data were not normally distributed, as assessed by the Shapiro-Wilk test (p < 0.05). Analysis of ML at Time 2 identified one outlier: a student who achieved an ML of 7.2. Based on this, we removed this student from subsequent analyses.
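For readers wishing to replicate this kind of screening, the sketch below runs a Shapiro-Wilk normality check and flags outliers using the conventional 1.5 × IQR boxplot rule. The paper does not state which software or outlier criterion was used, so both choices here (SciPy and the IQR rule) are assumptions.

```python
import numpy as np
from scipy import stats

def normality_and_outliers(values, alpha=0.05):
    """Shapiro-Wilk normality test plus boxplot-style outlier
    detection (1.5 x IQR rule; the criterion is an assumption)."""
    values = np.asarray(values, dtype=float)
    _, p = stats.shapiro(values)
    q1, q3 = np.percentile(values, [25, 75])
    fence = 1.5 * (q3 - q1)
    outliers = values[(values < q1 - fence) | (values > q3 + fence)]
    return {"normal": p >= alpha, "shapiro_p": p, "outliers": outliers}
```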

5.3. Overall Question Distribution

No student answered fewer than 2085 questions, and the maximum was 8890. The overall number of questions answered was not normally distributed, as assessed by the Shapiro-Wilk test (p < 0.05). Analysis of questions answered at Time 2 identified two outliers: a student who answered 6948 questions and one who answered 8890. Course requirements (described above) set the number of questions students had to answer, so we would expect all students to answer at least the minimum number. In this case, the minimum was between 450 and 1000 (depending on the ML achieved), and all students answered at least 1000 questions over the minimum.

5.4. Cumulative GPA

Average student cumulative GPA was 3.26 (SD = 0.24), with a minimum of 2.82 and a maximum of 3.83. There was a significant, positive correlation between cumulative GPA and overall student ML in the AQS, r(51) = 0.418, p < 0.01. Student GPA was also positively correlated with the student's highest HESI E2 score, r(51) = 0.479, p < 0.001.

Figure 1. Frequency of final ML.

5.5. HESI E2 Exam Groups

As described above, students took the HESI E2 as a course requirement. Considering each student's highest score, the class average was 887.83 (SD = 57.43). Of the 54 students in the cohort, 39 took the HESI E2 once, seven took it twice, seven took it three times, and one took it four times. Students were divided into two groups: those who took the HESI E2 two or fewer times (Group 1) and those who took it three or more times (Group 2). A comparison of outcomes for these two groups is shown in Table 3.

Table 3. HESI E2 group comparisons on course and AQS outcomes.

An independent-samples median test revealed a significant difference in overall AQS ML at Time 2 between Group 1 (M = 5.26, SD = 0.53) and Group 2 (M = 4.79, SD = 0.24), p < 0.05. We were also interested in whether students who did not achieve the target score on the HESI E2 on the first or second attempt answered more questions within the AQS. The number of questions at Time 1 was normally distributed for Group 2 but not Group 1, as assessed by the Shapiro-Wilk test (p < 0.05), and three outlier students were removed (for this analysis only). An independent-samples Mann-Whitney U test comparing the number of questions answered by Group 1 (M = 2870.75, SD = 711.27) and Group 2 (M = 3192.14, SD = 712.05) at Time 1 was not significant, nor was the difference in the number of questions answered at Time 2, although the average remained higher for Group 2.
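Group comparisons of this kind can be reproduced with standard nonparametric tests. The sketch below applies Mood's median test (used for the ML comparison) and a Mann-Whitney U test (used for the question-count comparison) to hypothetical per-student values; the study's raw data are not published, and the use of SciPy is our assumption.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student question counts, for illustration only.
group1 = np.array([2500, 2700, 2849, 3050, 3300, 2600, 2900])
group2 = np.array([2800, 3100, 3192, 3400, 3600, 3150, 3050])

# Mood's median test compares the groups against the grand median.
stat, p_median, grand_median, table = stats.median_test(group1, group2)

# The Mann-Whitney U test compares the two distributions by rank.
u_stat, p_u = stats.mannwhitneyu(group1, group2, alternative="two-sided")

print(f"median test p = {p_median:.3f}, Mann-Whitney U p = {p_u:.3f}")
```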

5.6. AQS Usage and Overall Content Mastery

Using a Pearson correlation analysis, we explored the relationship between AQS usage and mastery variables. Given the non-normal distributions, we used log-transformed data for both overall ML and questions answered. The analysis revealed a significant, mild positive correlation between the number of questions a student answered and overall ML, r(54) = 0.276, p < 0.05. Thus, as students continued answering questions within the AQS, they were able to answer increasingly difficult questions, indicating an increase in content mastery.
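A minimal sketch of this analysis, run on hypothetical (questions answered, overall ML) pairs since the raw study data are not published:

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations, for illustration only.
questions = np.array([2085, 2500, 2849, 3100, 3600, 4200, 5000])
mastery = np.array([4.5, 4.8, 5.0, 5.1, 5.4, 5.6, 6.0])

# Log-transform both variables to reduce skew, as described above,
# then compute the Pearson correlation on the transformed values.
r, p = stats.pearsonr(np.log(questions), np.log(mastery))
print(f"r = {r:.3f}, p = {p:.4f}")
```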

5.7. NCLEX-RN® Outcome

Of the 54 students in the cohort, 49 passed the NCLEX-RN on their first attempt (a 90.7% first-time pass rate), and an additional four passed on their second attempt. Only one student did not achieve the benchmark score on the HESI E2; this student passed the NCLEX-RN on the first attempt. The four students who did not pass the NCLEX-RN on their first attempt scored 802, 823, 888, and 913 on the HESI E2; of these four, three achieved the benchmark on the first attempt and one took the HESI E2 three times. Two students took the HESI E2 four times: the first passed the NCLEX-RN on the first attempt and the second did not.

5.8. NCLEX-RN® Pass Rates

NCLEX-RN pass rates by year and by cohort are shown in Figure 2. Pass rates are included for traditional BSN and accelerated BSN students, as both contribute to the overall pass rate. Some idiosyncrasies exist in the pass rate data owing to how they are compiled; the state compiles data from October 1 through September 30, and in 2013-2014 a number of traditional students (N = 2) and accelerated students (N = 6) tested after October 1. This accounts for the discrepancy seen in Figure 2 for 2013-2014, where the overall pass rate is greater than the pass rate for either traditional or accelerated students.

Figure 2. Study school's NCLEX-RN pass rates by year.

The study school implemented the AQS in the fall of 2014. In 2014-2015, the overall NCLEX-RN pass rate was 83% (this group included some students who had not passed in the previous year). The graduating BSN cohort had a pass rate of 86.95%, an increase of 11.55% over the previous year. In 2015-2016 (the second year of AQS implementation), the pass rate for the BSN cohort increased again, to 90.9% (the overall program pass rate was 90.41%).

5.9. Student Survey

The student survey completion rate was 19%, with ten students completing the post-course survey. The survey focused on students' self-reported use of the AQS as well as their opinions on current features and the need for improvement or change. Table 4 summarizes responses on AQS usage and perceived value.

Table 4. Student survey response summary (five items).

Students were also asked to indicate how much they valued certain features of the AQS. Over 80% of respondents indicated that the large number of practice questions, the ability to track progress throughout the year, web-based convenience, the easy-to-use interface, the connection to course content, and access to full-length practice exams were all "extremely important". Mastery levels were seen as extremely important by 70% of respondents, as was the adaptivity of the system. Only one student rated the features of the AQS as not important at all.

6. Discussion

Overall, findings from this retrospective study support the use of an adaptive quizzing system to augment student learning and exam preparation and to increase engagement within nursing programs. The study explored usage of the AQS during the final semesters of a BSN nursing program (fall 2015 and spring 2016). The study school implemented the AQS to address the issue of students not completing practice questions to prepare for the NCLEX-RN, despite being encouraged to do so. Given the evidence that answering practice test questions builds knowledge, the school required students to complete a designated number of questions for each clinical course during the senior year of the program. This policy had proven almost impossible to implement because there was no way to track and monitor students' practice. The introduction of the AQS allowed faculty to administer practice assignments and monitor student completion of practice NCLEX-RN-style questions.

The AQS was used by a group of 54 students, and instructor engagement with the system (defined as the number of student assignments created) was high. Students were required to take adaptive quizzes within the system in an ongoing, systematic way during the final two semesters of the BSN program. The overall average student quizzing ML was 5.23 (SD = 0.55) and ranged from 4.5 to 6.5, and students answered an average of 3585.94 questions within the AQS during the two-semester usage period. These data support the implementation of the AQS as a means of encouraging and monitoring student practice for the NCLEX-RN. Course requirements outlined the number of questions students were required to answer (between 450 and 1000), and all students answered at least 1000 practice questions above the minimum requirement.

At the study school, the HESI E2 is used to help evaluate student readiness for the NCLEX-RN. Students who scored below 800 on their first attempt were able to retake the exam; if they did not score above 800 on the second try, they were able to take the exam a third time. Students who were unsuccessful on the HESI E2 were required to meet with their gaming coach for individualized attention and remediation, which included using the AQS, NCLEX-RN Study Guides, and several other online study programs. Our results indicated that higher AQS ML and usage were not associated with scores on the HESI E2.

While all but one student achieved the target threshold on the HESI E2, students achieved a range of scores (from 772 to 1028). Despite this range, 90.7% of the students in the study group passed the NCLEX-RN on their first attempt, with a pass rate of 98% when including those who passed on the second attempt. NCLEX-RN pass rates at the study school increased by 11.55% following the implementation of the AQS and, in the second year of implementation (the data analyzed for this study), increased an additional 3.95% over the previous year. Student HESI E2 scores within the course had varying "correlations" with potential NCLEX-RN success, based on the categories provided by HESI. Because almost all of the students passed the NCLEX-RN (only five did not pass on the first attempt), there is insufficient variance in NCLEX-RN outcomes to support a logistic regression analysis measuring how well one variable (in this case, HESI E2 score) predicts another (NCLEX-RN success).

A subset of students (N = 10) provided feedback on their usage of, and opinions about, the AQS. Of the students who responded, the majority indicated that use of the AQS improved their performance in the course and on the NCLEX-RN. The majority also indicated that the AQS was helpful for preparing for exams, getting feedback on strengths and weaknesses, and increasing knowledge of course concepts. Students who commented on the perceived benefits of the AQS pointed to factors such as decreased test anxiety, identification of areas for remediation, and the opportunity for ongoing practice.

The sample in this study was a small convenience sample from one nursing program, which limits the generalizability of the findings. Generalizability may also be limited by sample diversity: all but eight of the study sample were female. These limitations, however, are not unusual for studies in this area; for example, in a recent literature review of strategies to improve NCLEX-RN success, half of the cited studies that reported sample sizes had samples of 60 students or fewer [27]. With many factors to consider, we cannot say unequivocally that using the AQS resulted in the increase in NCLEX-RN pass rates at the study school. Findings from this study do, however, provide additional support for findings from other studies in which the mastery level attained by students was a consistent factor among students who passed the NCLEX-RN [18]. The nature of the NCLEX-RN outcome data (pass/fail) and the high percentage of students passing on their first try render analysis complex, as there is little variation in student outcomes.

When comparing the use of large-scale standardized exams and adaptive quizzing systems, it is important to consider how and why these tools are implemented. Assessment tools can be used in both formative and summative ways, and an important distinction lies in how we interpret the data. Data from a summative assessment typically represent student mastery at the end of a course of study, with the assumption that not much will change based on the results; scores provide a measure of student accomplishment of course and/or program expectations. Assessment data used in a formative way provide information and feedback that can be used to adjust both teaching and learning in order to meet educational goals; the assumption in this case is that students are still in the midst of learning and progressing toward those goals. An adaptive system is particularly useful here because students are all at different levels and can benefit from a more personalized, tailored remediation experience.

The different purposes of assessments must be considered when evaluating their utility. The HESI E2 is a long (160-item) NCLEX-RN-style exam covering the breadth of content students will see on the actual NCLEX-RN. But giving one exam does not constitute an "educational intervention" or a strategy that can be used to help students prepare for the NCLEX-RN; it is one measure, providing one score that conveys information about performance in an NCLEX-RN-like (albeit non-adaptive) experience. Furthermore, we know from the literature that while the HESI E2 can be a useful tool within a school's assessment program, scores cannot be used to accurately predict NCLEX-RN failure, so we must be wary of using them as the sole predictor of NCLEX-RN outcomes [28]. There is certainly value in providing students the opportunity to take a longer NCLEX-RN-style exam to help gauge their current preparedness and identify those who may be struggling, and implementing a standardized practice test seems reasonable when the results are used to guide remediation. Information from a standardized test is valuable if used to inform students' subsequent studying and preparation for the NCLEX-RN, which in turn can positively affect success rates. But if used to deny progression, it merely serves as a punitive measure and does not constitute an educationally sound "strategy" to improve NCLEX-RN pass rates.

We do not intend to criticize standardized testing in nursing education; rather, our goal is to highlight the potential risks of implementing it as part of a high-stakes progression policy. We suggest that adaptive quizzing and learning can be used in concert with a standardized test, with the adaptive quizzing providing a forum for targeted practice, learning by testing, and remediation directed in part by the areas of need revealed by the standardized test results. Giving the HESI E2 in isolation surely results in missed opportunities to help students improve.

As discussed above, engaging more frequently in active studying and learning techniques leads to higher long-term retention of material than more passive techniques. Preparing for the NCLEX-RN should therefore focus on proactive measures in which information on students' current mastery levels can be acted upon in a formative, rather than punitive, way.

This study is part of an ongoing effort to better understand instructor implementation and student use of the AQS as part of an NCLEX-RN preparation strategy, and the impact of usage on NCLEX-RN outcomes. There is a clear tension between the need for schools to keep NCLEX-RN pass rates high and the need to adhere to sound educational practices. We will continue to explore the extent to which proactive, formative methods can help students better prepare for success on the NCLEX-RN. Earlier implementation in the program may further support students' learning, and faculty may want to consider introducing students to adaptive quizzing sooner rather than later to maximize the benefits. Earlier engagement with the AQS would provide more practice answering NCLEX-RN-style questions in a low-stakes environment, increased opportunities for feedback on strengths and weaknesses, and more focused studying, with the potential to increase content mastery and continue learning.

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Holstein, B.L., Zangrilli, B.F. and Taboas, P. (2006) Standardized Testing Tools to Support Quality Educational Outcomes. Quality Management in Health Care, 15, 300-308.
https://doi.org/10.1097/00019514-200610000-00014
[2] National League for Nursing Board of Governors (2012) The Fair Testing Imperative in Nursing Education: A Living Document from the National League for Nursing. New York.
[3] Pennington, T.D. and Spurlock, D. (2010) A Systematic Review of the Effectiveness of Remediation Interventions to Improve NCLEX-RN Pass Rates. Journal of Nursing Education, 49, 485-492.
https://doi.org/10.3928/01484834-20100630-05
[4] Shultz, C.M. (2010) High-Stakes Testing!? Help Is on the Way. Nursing Education Perspectives, 31, 205.
[5] Spurlock, D. (2012) The Imperative of Accuracy and Precision in High-Stakes Testing.
[6] Horton, C., Polek, C. and Hardie, T.L. (2012) The Relationship between Enhanced Remediation and NCLEX-RN Success. Teaching and Learning in Nursing, 7, 146-151.
https://doi.org/10.1016/j.teln.2012.06.002
[7] Crow, C.S., Handley, M., Morrison, R.S. and Shelton, M.M. (2004) Requirements and Interventions Used by BSN Programs to Promote and Predict NCLEX-RN Success: A National Study. Journal of Professional Nursing, 20, 174-186.
https://doi.org/10.1016/j.profnurs.2004.04.004
[8] Seldomridge, L.A. and DiBartolo, M.C. (2004) Can Success and Failure Be Predicted for Baccalaureate Graduates on the Computerized NCLEX-RN? Journal of Professional Nursing, 20, 361-368.
[9] McCarthy, M.A., Harris, D. and Tracz, S.M. (2014) Academic and Nursing Aptitude and the NCLEX-RN in Baccalaureate Programs. Journal of Nursing Education, 53, 151-159.
https://doi.org/10.3928/01484834-20140220-01
[10] Spurlock Jr., D. (2006) Do No Harm: Progression Policies and High-Stakes Testing in Nursing Education. The Journal of Nursing Education, 45, 297-302.
[11] Spurlock, D.R. and Hunt, L.A. (2008) A Study of the Usefulness of the HESI Exit Exam in Predicting NCLEX-RN Failure. Journal of Nursing Education, 47, 157-166.
https://doi.org/10.3928/01484834-20080401-07
[12] Sullivan, D. (2014) A Concept Analysis of “High Stakes Testing”. Nurse Educator, 39, 72-76.
[13] Lauchner, K.A., Newman, M. and Britt, R.B. (2005) Predicting Licensure Success with a Computerized Comprehensive Nursing Exam: The HESI Exit Exam. Nurse Educator, 30, 4S-9S.
[14] Zweighaft, E.L. (2013) Impact of HESI Specialty Exams: The Ninth HESI Exit Exam Validity Study. Journal of Professional Nursing, 29, S10-S16.
https://doi.org/10.1016/j.profnurs.2012.06.011
[15] Giddens, J.F. (2009) Changing Paradigms and Challenging Assumptions: Redefining Quality and NCLEX-RN Pass Rates. Journal of Nursing Education, 48, 123-124.
https://doi.org/10.3928/01484834-20090301-04
[16] Spurlock, D. (2013) The Promise and Peril of High-Stakes Tests in Nursing Education. Journal of Nursing Regulation, 4, 4-8.
https://doi.org/10.1016/S2155-8256(15)30172-1
[17] Molsbee, C.P. and Benton, B. (2016) A Move Away from High-Stakes Testing toward Comprehensive Competency. Teaching and Learning in Nursing, 11, 4-7.
https://doi.org/10.1016/j.teln.2015.10.003
[18] Cox-Davenport, R.A. and Phelan, J.C. (2015) Laying the Groundwork for NCLEX-RN Success: An Exploration of Adaptive Quizzing as an Examination Preparation Method. CIN: Computers, Informatics, Nursing, 33, 208-215.
[19] Newman, A., Stokes, P. and Bryant, G. (2013) Leaning to Adapt: A Case for Accelerating Adaptive Learning in Higher Education. Education Growth Advisors.
http://tytonpartners.com/library/accelerating-adaptive-learning-in-higher-education/
[20] Koch, J., Andrew, S., Salamonson, Y., Everett, B. and Davidson, P.M. (2010) Nursing Students’ Perception of a Web-Based Intervention to Support Learning. Nurse Education Today, 30, 584-590.
https://doi.org/10.1016/j.nedt.2009.12.005
[21] Koch, J., Salamonson, Y., Rolley, J.X. and Davidson, P.M. (2011) Learning Preference as a Predictor of Academic Performance in First Year Accelerated Graduate Entry Nursing Students: A Prospective Follow-Up Study. Nurse Education Today, 31, 611-616.
https://doi.org/10.1016/j.nedt.2010.10.019
[22] Karpicke, J.D. and Blunt, J.R. (2011) Retrieval Practice Produces More Learning than Elaborative Studying with Concept Mapping. Science, 331, 772-775.
https://doi.org/10.1126/science.1199327
[23] Roediger, H.L. and Butler, A.C. (2011) The Critical Role of Retrieval Practice in Long-Term Retention. Trends in Cognitive Sciences, 15, 20-27.
https://doi.org/10.1016/j.tics.2010.09.003
[24] Bjork, R.A. (2011) To Really Learn, Quit Studying and Take a Test. New York Times.
http://www.nytimes.com/2011/01/21/science/21memory.html?_r=0
[25] Mawhinney, V.T., Bostow, D.E., Laws, D.R., Blumenfeld, G.J. and Hopkins, B.L. (1971) A Comparison of Students Studying-Behavior Produced by Daily, Weekly, and Three-Week Testing Schedules. Journal of Applied Behavior Analysis, 4, 257-264.
https://doi.org/10.1901/jaba.1971.4-257
[26] Relf, M.V., Cox, C.W., Farley, J., et al. (2006) Ensuring NCLEX-RN Success for First-Time Test-Takers. Journal of Professional Nursing, 22, 322-326.
https://doi.org/10.1016/j.profnurs.2005.11.004
[27] Quinn, B.L., Smolinski, M. and Bostain Peters, A. (2017) Strategies to Improve NCLEX-RN Success: A Review. Teaching and Learning in Nursing. [In Press]
[28] Spurlock, D.R. and Hanks, C. (2004) Establishing Progression Policies with the HESI Exit Examination: A Review of the Evidence. Journal of Nursing Education, 43, 539-545.
