Creative Education
2012. Vol.3, No.8, 1336-1344
Published Online December 2012 in SciRes (http://www.SciRP.org/journal/ce) http://dx.doi.org/10.4236/ce.2012.38196
The Relationship between Program Evaluation Experiences and
Stakeholder Career Satisfaction
Saad F. Shawer, Saad A. Alkahtani
Department of Teacher Preparation, Arabic Language Institute, King Saud University, Riyadh, KSA
Email: saadshawer@wsu.edu, alkahtan@ksu.edu.sa
Received October 2nd, 2012; revised November 3rd, 2012; accepted November 18th, 2012
This paper examines the relationship between program evaluation experiences and stakeholder career sat-
isfaction. The study employs mixed paradigms, descriptive and correlational research, qualitative evalua-
tion, interviews, rating-scales and the parametric Pearson product-moment coefficient of correlation. Data
analysis reveals differences between the descriptive and correlational findings. The descriptive findings
show low faculty and program director career satisfaction at the beginning of program evaluation, while concluding program evaluation experiences show dramatically higher career satisfaction. Correlational results, however, indicate a relatively weak and negative correlation between initial and final program evaluation experiences in career satisfaction. The study concludes that a relationship exists between initial and final program evaluation experiences in stakeholder career satisfaction: the more program evaluation experiences stakeholders have, the fewer signs of career dissatisfaction they show. Supportive program evaluation contexts lower program stakeholder negativity and encourage effective implementation and use of program evaluation.
Keywords: Career Satisfaction; Professional Satisfaction; Language Program Evaluation
Introduction
Program evaluation, whether externally imposed or internally
motivated, is undertaken to help programs identify weaknesses
and strengths so that they can improve performance, demon-
strate they deliver what they promise and justify why they
should continue (Stake, 2011; Sullivan, 2006). Moreover, pro-
gram evaluation is not only conducted to improve programs and
services but also to create opportunities for stakeholders to
learn and develop in the workplace (Elder, 2009; Norris, 2009).
Unfortunately, some program administrators and many program
stakeholders see program evaluation otherwise, being a threat
rather than an opportunity to help advance their career. As a
result, program stakeholders form negative attitudes towards
their profession and the evaluation process as a whole. This has dramatic negative consequences not only for the program evaluation process itself but also for overall program performance and stakeholder career satisfaction (Norris, 2006).
Concerns have therefore been voiced regarding the influence
of program evaluation, especially externally imposed ones, on
program stakeholder career satisfaction and ultimately career
success (Byrnes, 2008; Carsten-Wickham, 2008). Other concerns have also been expressed about faculty welfare and de-
velopment opportunities in the workplace (Shawer, 2010a; Sha-
wer, Gilmore, & Banks-Joseph, 2008). As a result, many voices
have asked for a shift of focus from doing program evaluation
to assessing the consequences of the evaluation process for
program performance and stakeholders (Byrnes, 2008).
Several attempts have been made to decrease the negative in-
fluence of program evaluation on program stakeholders; for example, program evaluation can become a useful strategy for both program and stakeholder development when stakeholders embrace the evaluation process (Chase, 2006; Byrnes, 2008; Shawer,
2010b; Shawer, 2011). The present study, therefore, sought to
address career dissatisfaction concerns by examining whether
a relationship exists between language program evaluation ex-
periences and faculty and program director career satisfaction
in three language-education programs.
The Arabic Language Institute at King Saud University
offers three language programs: the Language and Culture Pro-
gram, the Teacher Training Program and the Teacher Prepara-
tion Program. The Language and Culture Program offers courses
about Arabic language and culture to nonnative speakers. This
program involves 32 courses at four levels over two years. Each
level is one semester long. The program serves students who seek to develop their Arabic language proficiency so that they can pursue academic study at universities and colleges where
Arabic is the medium of instruction. Students who complete 80
credits are awarded the Language Proficiency Diploma.
The Teacher Training Program is a one-year program for
training teachers of Arabic as a second or foreign language.
Students in this program are required to complete a total of 40
credit hours in two semesters. Students enrolled in this program
must have experience in teaching Arabic as a second or for-
eign language. Students who successfully complete this pro-
gram are awarded the postgraduate Diploma for training teach-
ers of Arabic to nonnative speakers. The Teacher Preparation
Program is designed for prospective teachers of Arabic as a
second or foreign language. This program comprises two levels,
one semester each, where students attend 15 credit hours per
week. Successful completion of the program entitles the teacher
candidates to the Postgraduate Diploma in the Teaching of
Arabic to nonnative speakers. The program covers a variety of
subject areas in the field of Applied Linguistics, especially those required of a professional teacher in Teaching Arabic to Speakers of Other Languages (TASOL). For example,
the program offers courses in methods and techniques of teach-
ing Arabic to speakers of other languages, second language ac-
quisition, contrastive and error analysis, language testing, lan-
guage learning strategies, and psychology of language learning.
Career Satisfaction and Development
Career satisfaction involves the positive feelings that individuals demonstrate about what they do and their profession (Shawer, 2010b). Faculty career satisfaction therefore posi-
tively correlates with career development in the workplace.
When professionals feel positive about their career, they are in
a position to reflect on practice and improve their career (Hu-
berman, 1993; Reynolds, Ross, & Rakow, 2002; Rosenholtz,
1991). When program stakeholders, for example, take assess-
ment as an integral part of program evaluation, imposed evalua-
tion will be an opportunity rather than a threat for institutional,
program and professional development (Norris, 2006). From
the very beginning, stakeholders will plan to make use of pro-
gram evaluation to improve program targets, content, teaching
and learning as well as assessment means and outcomes. Stu-
dent learning outcomes in particular will be key performance
indicators of program effectiveness (Lynch, 1996).
On the other hand, career or professional development is
where individuals continue to advance their knowledge and
skills during their careers (Beck & Kosnik, 2001; Cochran-
Smith, 2003; Shawer, 2010b). Career development is no longer
confined to traditional institution-initiated formal “interventions
and training to direct the evolution in professional behavior in a
more desirable way” (Kelchtermans & Vandenberghe, 1994: p.
45). It has become a lifelong process of learning in the work-
place. Career development, therefore, involves those “ongoing
formal and informal learning activities through which professionals continue to advance their professional competence so
that they can improve their practices and profession” (Shawer,
2010b: p. 598). Although professionals better advance their
career skills through learning from actual experiences in the
workplace, this depends largely upon what they feel about their
career (Carr & Kemmis, 1986; Schön, 1983).
Program Evaluation and Language-Education
Programs
Program evaluation is “an information-gathering and inter-
preting endeavor that attempts to answer a specified set of ques-
tions about a program’s performance and effectiveness” (Rossi,
Freeman, & Lipsey, 1999: p. 62). As such, it assesses program
strengths and weaknesses to determine program values so that
programs can address the needs of audience and plan for new
developments (Bernhardt, 2006; Patton, 1990; Sullivan, 2006).
On the other hand, a language-education program “generally
consists of a slate of courses designed to prepare students for
some language-related endeavor” (Lynch, 1996: p. 2). Like ge-
neric-education programs, language-education programs cannot
do without program evaluation to demonstrate the extent to
which they deliver what they promise and justify why they
should not shut down (Norris, 2009). Thanks to the information
program evaluation generates, language program stakeholders
are able to identify what works in terms of language proficiency
gains (Ross, 2003). Program evaluation is therefore essential
not only to improve programs but also to meet institutional
requirements. Through program evaluation, language-education
programs are able to set precise program objectives, instruc-
tional strategies, assessment targets and program resources
(Lynch, 1996).
Although program evaluation helps programs demonstrate
how far they address quality, accountability and accreditation
concerns, stakeholders consider imposed program evaluation a
threat rather than an opportunity for help and improvement
(Norris, 2006). As a result, stakeholders undertake program
evaluation as an end rather than a means of “knowing oneself
and taking action, support for faculty development, recognition
of valued institutional practice, collaborative inquiry turning
program review into valued work … improvement, and impetus
for innovation and ownership of programs” (Byrnes, 2006: p.
576).
Despite the crucial importance of program evaluation, most
programs focus on doing rather than using program evaluation
(Norris, 2006). How program evaluation impacts on program stakeholders remains largely unexamined (Elder, 2009; Kiely & Rea-Dickins, 2005). Although attention has recently turned to examining the value of program evaluation to programs and stakeholders, only a few studies have been concerned with the rela-
tionship between program evaluation and stakeholder career
satisfaction. Among such studies, some found program stake-
holders have negative attitudes towards their profession and the
program evaluation process (Byrnes, 2008; Gorsuch, 2009).
Other studies concluded that program stakeholders change
their negative attitudes toward program evaluation and their
career when they take ownership of the program evaluation
process (Byrnes, 2008; Carsten-Wickham, 2008). Some other
studies found positive concluding program evaluation experi-
ences result in positive changes in stakeholder attitudes towards
program evaluation and their career (Byrnes, 2008; Carsten-
Wickham, 2008; Chase, 2006). In light of the above review, the
present study sought to answer the following research ques-
tions:
1) How do initial program evaluation experiences influence
career satisfaction?
2) How do concluding program evaluation experiences in-
fluence career satisfaction?
3) Do program evaluation experiences and career satisfaction
correlate?
Method and Participants
Figure 1 shows that positivism and survey/descriptive research were used to answer the first two research questions regard-
ing the influence of initial and concluding program evaluation
experiences on faculty members and program directors’ career
satisfaction. A cross-sectional design was particularly used to
concurrently collect and describe faculty opinions (Cohen, Man-
ion, & Morrison, 2011; Lester & Lester, 2010; Sapsford, 1999).
In particular, the influence of program evaluation experiences
on career satisfaction was examined in terms of having positive
or negative program evaluation experiences and seeing program
evaluation as an opportunity for learning or a threat. Career
satisfaction was also examined in terms of faculty members and
program directors’ enthusiasm about program evaluation involvement, ability to cope with career stress and perceptions about the value of program evaluation.

Figure 1.
Research design. Paradigms: positivism (Strategies 1 and 2) and qualitative-interpretive (Strategy 3). Strategy 1: survey research (research questions 1 and 2); Strategy 2: correlational research (research question 3); Strategy 3: qualitative evaluation (research questions 1 and 2). Rating scales (questionnaires): simple random sampling, overall sample N = 50 (47 faculty members and 3 program directors); scale reliability: alpha coefficient of .84 (34 respondents); scale validity: content validated. Qualitative interviews: semi-structured, one-to-one, convenience sample N = 12 (9 faculty members and 3 program directors); interview reliability: repeated piloting; interview validity: content validated. Data analysis: qualitative (concept development, categorization, forming a narrative), descriptive statistical (sums, minimum and maximum possible scores, percentages and averages) and inferential statistical (Pearson product-moment coefficient of correlation).
To address the third research question, a correlational re-
search design was further used to examine the relationships
between program evaluation experiences and career satisfaction.
Although correlational research involves descriptions, it was used mainly to examine relationships between variables (Gall, Gall, & Borg, 2006). In this study, we examined whether
a bivariate linear relationship exists between initial and final
program evaluation experiences and career satisfaction. The
correlational design involved asking faculty members for their
opinions about program evaluation experiences during the first
four weeks of the program evaluation process. After a twenty-month interval between the two questionnaire administrations,
the same faculty members’ opinions were collected during the
final four weeks of the program evaluation process. We finally
correlated the scores of first and second administrations for the
same respondents (Coakes & Steed, 2007; Gall et al., 2006).
The survey design addressed the first research question by
testing this null hypothesis through descriptive statistical analy-
ses: initial program evaluation experiences do not influence
career satisfaction. Two alternative hypotheses were posed in
case the null hypothesis was not accepted: 1) initial program
evaluation experiences bring about career dissatisfaction and 2)
initial program evaluation experiences bring about career sat-
isfaction. The survey design also addressed the second research
question by testing this second null hypothesis through also
descriptive statistical analyses: final program evaluation ex-
periences do not influence career satisfaction. Two alternative
hypotheses were also posed in case the second null hypothesis
was rejected: 1) final program evaluation experiences bring
about career satisfaction and 2) final program evaluation ex-
periences bring about career dissatisfaction.

A correlation coefficient is a mathematical value whose magnitude lies between 0 and 1. A zero (.00) coefficient indicates no correlation, whereas a coefficient of 1.00 indicates a complete correlation; values between 0 and 1 reflect the strength of the relationship. A relationship is positive (+) when an increase in one variable is accompanied by an increase in the other. By contrast, a negative relationship (–) is one where an increase in one variable is accompanied by a decrease in the other (Coakes & Steed, 2007; Gall et al., 2006; Shawer, 2012).
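To make these sign and strength conventions concrete, the following is a minimal sketch (in Python, on hypothetical ratings rather than the study's data; the study itself used SPSS) of a clearly positive and a clearly negative Pearson coefficient:

```python
# Hedged sketch: sign and strength of Pearson's r on made-up ratings.
from scipy.stats import pearsonr

x = [10, 20, 30, 40, 50]        # hypothetical scores on one variable
y_up = [12, 18, 33, 41, 48]     # rises as x rises -> r close to +1
y_down = [47, 42, 28, 22, 11]   # falls as x rises -> r close to -1

r_pos, p_pos = pearsonr(x, y_up)
r_neg, p_neg = pearsonr(x, y_down)
print(f"positive relationship: r = {r_pos:+.2f} (p = {p_pos:.3f})")
print(f"negative relationship: r = {r_neg:+.2f} (p = {p_neg:.3f})")
```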
Although survey research could address the first two research
questions, the standardized data it yielded did not provide
enough understanding or highlight the context of faculty mem-
bers and program directors regarding the influence of program
evaluation on their career development. As also shown in Figure
1, a qualitative paradigm was particularly necessary to provide
convincing answers to the first two research questions. This is
because it allows the researchers to interact with the respon-
dents and understand their context. This involved using qualita-
tive evaluation to explore the influence of program evaluation
on program stakeholders’ career development through the col-
lection, analysis and interpretation of spoken and written dis-
course about program evaluation impact in order to use the
resulting information for improving career satisfaction (Shawer,
2012). Evaluation research assesses program effectiveness, in-
cluding planning, implementation, instructional methods, cur-
riculum materials, facilities, equipment, educators and students
better than other research strategies (Clarke, 1999; Patton, 1990).
The correlational design addressed the third research ques-
tion by testing this null hypothesis through inferential statistical
analyses: program evaluation experiences and career satisfac-
tion do not correlate at a .05 level of significance. Four alterna-
tive hypotheses were tested in case the null hypothesis was not
accepted: 1) initial program evaluation experiences and career
satisfaction negatively correlate at a .05 level of significance; 2)
initial program evaluation experiences and career satisfaction
positively correlate at a .05 level of significance; 3) final pro-
gram evaluation experiences and career satisfaction negatively
correlate at a .05 level of significance; and 4) final program
evaluation experiences and career satisfaction positively cor-
relate at a .05 level of significance.
Figure 1 further shows a simple random sampling strategy was used to select a sample of 50 language-
education faculty members at the Arabic Language Institute,
King Saud University. The respondents were assured that their names would not be mentioned, to maintain anonymity, and that no information about their identities would be revealed, to assure confidentiality (Lester & Lester, 2010; Sapsford, 1999). A questionnaire of 10
items was designed to collect the research data (see the Appen-
dix). This scale was administered to the 50 faculty members by the end of the first four weeks and was re-administered to the same members at the beginning of the final four weeks of the pro-
gram evaluation process that extended over two years. The
administration interval period was about 20 months.
Five language-education professors examined the question-
naire content and agreed it met the research purpose (Bell, 1993;
Blaikie, 2000; Shawer, 2012). Questionnaire reliability was then
checked for internal consistency to ensure the respondents’ performance on all of the scale’s items was consistent. Using SPSS
(version 18), the calculation of responses of 34 faculty mem-
bers resulted in a .84 Cronbach’s Alpha. Gall et al. (2006) con-
firm that scales which yield coefficients of .80 or above are
deemed reliable. The data were first analysed through descrip-
tive statistics, including averages, percentages and standard de-
viations. Having found mean differences between the pre- and post-questionnaire administrations, the Pearson product-moment coefficient of correlation was calculated to describe a simple bivariate and linear relationship (also a zero-order correlation) between two continuous sets of scores (interval data) (Coakes &
Steed, 2007). The results sections show the ways in which the
data were analysed.
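As an illustration only, the internal-consistency check reported above could be reproduced outside SPSS along the following lines; the response matrix is simulated, standing in for the actual 34 respondents × 10 items, and the formula is standard Cronbach's alpha:

```python
# Hedged sketch of Cronbach's alpha on simulated 1-5 ratings
# (34 respondents x 10 items, mirroring the reported check; not real data).
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)
    total_var = responses.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
factor = rng.normal(3, 1, size=(34, 1))  # a shared "satisfaction" component
items = np.clip(np.rint(factor + rng.normal(0, 0.8, size=(34, 10))), 1, 5)
print(f"alpha = {cronbach_alpha(items):.2f}")  # .80 or above deemed reliable
```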
Semi-structured one-to-one interviews were used to collect
qualitative data from the three program directors and three fac-
ulty members in each program. The average time of interviews
was 50 minutes. Interviews were qualitative to allow the re-
spondents to describe in their own terms the influence of pro-
gram evaluation on their career satisfaction. The interview data
were content validated through five professors who ensured the
questions addressed the research purpose (Patton, 1990). Inter-
views reliability was checked through piloting and accuracy of
transcribed tapes. Interviews were analysed through coding,
grouping similar concepts under exclusive categories and form-
ing a narrative (Kvale, 1996).
Quantitative Results
Initial Program Evaluation Experiences Impact on
Career Satisfaction
This section addressed the first research question (how do
initial program evaluation experiences influence career satis-
faction?). Before presenting the findings, we explain the proc-
ess of data analysis. Table 1 presents the responses of the 50 faculty members on the two variables. The responses were analyzed by calculating sums, the minimum and maximum possible scores, percentages and averages. Table 1 (row 1) shows the sum of responses to the first variable (initial program evaluation experiences) was 613. The minimum possible score was 500 (10 items × 1, the minimum response per item, × 50 respondents) while the maximum possible score was 2500 (10 items × 5, the maximum response per item, × 50 respondents). The percentage was 25 (613, the sum of responses, ÷ 2500, the maximum possible score, × 100).
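This arithmetic can be verified directly; the short sketch below reproduces the Table 1 figures for the first variable from the quantities reported in the text (the item-level responses themselves are not published):

```python
# Reproduces the Table 1 arithmetic from the quantities given in the text.
items, respondents = 10, 50
min_per_item, max_per_item = 1, 5

min_score = items * min_per_item * respondents   # 10 x 1 x 50 = 500
max_score = items * max_per_item * respondents   # 10 x 5 x 50 = 2500

total = 613                                      # reported sum of responses
percent = total / max_score * 100                # 24.52, reported as 25%
mean = total / respondents                       # 12.26 per respondent
print(min_score, max_score, f"{percent:.0f}%", mean)  # 500 2500 25% 12.26
```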
Table 1 shows faculty responses (25%) indicate initial pro-
gram evaluation experiences brought about faculty dissatisfac-
tion about their career and the program evaluation process.
They felt under threat, did not have good experiences, com-
plained of workloads and did not expect to benefit from the
program evaluation process. They not only felt they would not
learn from assigned tasks but also formed negative attitudes
toward the program evaluation process and the profession. Nei-
ther did they see the evaluation process as a learning opportu-
nity. Besides feeling reluctant to learn from the evaluation pro-
cess as a whole, faculty members expected they would not be
able to cope with the extra workload.
These findings, given the very low percentage (25%) and mean (12.26), indicate that initial program evaluation ex-
periences revealed faculty dissatisfaction about their career and
the program evaluation process. Such findings therefore pro-
vide evidence to reject the null hypothesis that indicates no
influence of initial program evaluation experiences on career
satisfaction while accepting the alternative hypothesis that
states initial program evaluation experiences bring about ca-
reer dissatisfaction. However, the second alternative hypothesis
stating that initial program evaluation experiences bring about
career satisfaction was not accepted.
Final Program Evaluation Experiences Impact on
Career Satisfaction
This section addressed the second research question (how do
concluding program evaluation experiences influence career
satisfaction?). Table 1 (row 2) shows the sum of responses to
the second variable (concluding program evaluation experi-
ences) was 2310. The minimum possible score was again 500 and the maximum possible score 2500, calculated as above. The percentage was 92 (2310, the sum of responses, ÷ 2500, the maximum possible score, × 100). Such high responses
(92%) indicate that subsequent positive program evaluation
experiences brought about faculty satisfaction about their career
and the evaluation process. The findings clearly indicated fac-
ulty members no longer felt threatened by the evaluation process and that their negative feelings turned positive. Not only did
they stop complaining about workloads but also felt they bene-
fited from the process. They even became convinced that the evaluation process was a learning opportunity. Further, their ability to cope with workloads increased.

Table 1.
Descriptive statistics (initial and final program evaluation experiences).

No. | Variable       | Sum  | Min. Score | Max. Score | Percent | Mean
1   | At the Start   | 613  | 500        | 2500       | 25%     | 12.26
2   | Toward the End | 2310 | 500        | 2500       | 92%     | 46.2
These findings, given the high percentage (92%) and mean (46.2), therefore provide enough evidence to refute the
null hypothesis that states final program evaluation experiences
do not influence career satisfaction while accepting the alterna-
tive hypothesis that indicates final program evaluation experi-
ences bring about career satisfaction. However, the second
alternative hypothesis stating that final program evaluation
experiences bring about career dissatisfaction was not accepted.
Having found clear differences between initial and concluding program evaluation experiences in faculty career satisfaction, the two sets of experiences were examined further in the following section to find out whether they correlate.
Relationship between Program Evaluation
Experiences and Career Satisfaction
This section addressed the third research question (Do pro-
gram evaluation experiences and career satisfaction correlate?).
Before presenting the findings, we explain how our research
design met the assumptions of correlational analysis. The pa-
rametric Pearson product-moment coefficient of correlation was
used to describe a simple bivariate and linear relationship (also
zero-order correlation) between two continuous variables (in-
terval data). Before the actual calculation of correlation, the data were screened to ensure they met the five assumptions required
for sound correlational analysis. The data were collected from
related pairs, where every score obtained on the X variable was paired with a score on the Y variable from the same respondent (first assumption) (Coakes & Steed, 2007).
The second assumption (scale of measurement) was also ad-
dressed through using interval data. Although the third assump-
tion (normal score distribution) could be examined graphically
through, for example, histograms and boxplots or statistically
through, for example, Kolmogorov-Smirnov, Shapiro-Wilk or
skewness and kurtosis calculations, the Shapiro-Wilk test was calculated because it is used with samples under 100. The insignificant Shapiro-Wilk ratio (p > .05) indicated normality. Both the fourth (linearity) and fifth (homoscedasticity) assumptions were also met. Linearity means the relationship between the two variables is linear. Homoscedasticity means score variability values for one variable are almost the same as those of the other.
In other words, the values of both variables show a uniform
cluster round the regression line. As shown in Figure 2, the
scatterplot reveals a linear relationship between the scores of
the two variables. This uniform cluster of scores around the re-
gression line indicates the linearity and homoscedasticity as-
sumptions were not violated (Coakes & Steed, 2007).
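A sketch of this screening might look as follows; the paired totals are simulated (the study's raw scores are not published), and scipy's Shapiro-Wilk test plus a matplotlib scatterplot with a fitted line stand in for the SPSS procedures:

```python
# Hedged sketch of the assumption screening: Shapiro-Wilk for normality
# (suited to samples under 100) plus a scatterplot to eyeball linearity
# and homoscedasticity. Simulated data, not the study's scores.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import shapiro

rng = np.random.default_rng(1)
at_start = rng.normal(12.26, 3, size=50)                      # stand-in totals
toward_end = 58 - 0.9 * at_start + rng.normal(0, 4, size=50)  # negative trend

for name, scores in (("At the Start", at_start), ("Toward the End", toward_end)):
    w, p = shapiro(scores)
    print(f"{name}: W = {w:.3f}, p = {p:.3f}  (p > .05 -> normality assumed)")

slope, intercept = np.polyfit(at_start, toward_end, 1)  # simple regression line
xs = np.array([at_start.min(), at_start.max()])
plt.scatter(at_start, toward_end)   # uniform cluster round the fitted line?
plt.plot(xs, slope * xs + intercept)
plt.xlabel("At the Start"); plt.ylabel("Toward the End")
plt.show()
```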
By looking at the coefficient (r = –.393) and its associated significance value (p < .05) in Table 2, the Pearson coefficient of correlation confirms the scatterplot results about the
existence of a significant negative relationship between the two
variables (initial and concluding program evaluation experi-
ences). Although this indicates the variables correlate, the rela-
tionship was relatively weak since the correlation value was
just –.393. This relationship indicates that the more program
evaluation experiences faculty members have (first variable),
the less dissatisfaction they show about their career and pro-
gram evaluation process (second variable). In other words, any
increase in faculty experiences in the program evaluation proc-
ess is accompanied by a decrease in faculty dissatisfaction
about their career and program evaluation experiences. Since
this correlation is relatively weak, the increase in the first vari-
able is not met with a similar decrease in the second.
Figure 2.
Scatterplot of two variables.
Table 2.
Correlation between the two variables.

               |                     | At the Start | Toward the End
At the Start   | Pearson Correlation | 1            | –.393**
               | Sig. (2-tailed)     |              | .005
               | N                   | 50           | 50
Toward the End | Pearson Correlation | –.393**      | 1
               | Sig. (2-tailed)     | .005         |
               | N                   | 50           | 50

**Correlation is significant at the .01 level (2-tailed).
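Outside SPSS, the quantities in a matrix like Table 2 come from a single paired-correlation computation; the sketch below shows this on simulated first- and second-administration totals for the same 50 respondents (the study's totals are not published):

```python
# Hedged sketch: r, two-tailed significance and N for paired administrations.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
at_start = rng.normal(12.26, 3, size=50)
toward_end = 51 - 0.4 * at_start + rng.normal(0, 3, size=50)

r, p = pearsonr(at_start, toward_end)  # two-tailed p-value by default
print(f"Pearson Correlation = {r:.3f}, Sig. (2-tailed) = {p:.3f}, N = {len(at_start)}")
```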
These findings indicated that program evaluation experiences
and faculty career satisfaction correlate, which provides evi-
dence to reject the null hypothesis stating no relationship be-
tween program evaluation experiences and career satisfaction.
In contrast, both the first alternative hypothesis (initial program
evaluation experiences and career satisfaction negatively cor-
relate) and third alternative hypothesis (final program evalua-
tion experiences and career satisfaction negatively correlate)
were accepted. The second alternative hypothesis (initial program evaluation experiences and career satisfaction positively correlate) and the fourth alternative hypothesis (final program evaluation experiences and career satisfaction positively correlate) were therefore rejected.
Qualitative Results
Impact of Initial Program Evaluation Experiences on
Career Satisfaction
To gain a deeper understanding about faculty career satisfac-
tion, qualitative data were also employed. The narrative analysis revealed that initial and concluding program evaluation experiences had different impacts on faculty career satisfaction and attitudes toward the evaluation process itself. “In the first weeks
of this program review, my colleagues and I were under huge stress. All of a sudden, we had to make everything perfect. We
had to meet not only good standards but very high standards of
good quality performance.” Faculty members had hard times in
the first weeks. “I felt this process would threaten my whole
career.” The reason was that “from time to time, we attend
lectures and receive jargon terms about accreditation and qual-
ity assurance. Several forms, manuals and brochures are circu-
lated. We have to understand and implement what we receive
alongside teaching 14 hours a week.” They felt program evalua-
tion had negative rather than positive consequences. “I will be
honest with you. I wish this process failed so that we get back
to normal. I mean I did not want to do my work but I was asked
to do many things that I do not understand. I felt my career was
on the line. This process seemed as if it was directed toward
assessing us.”
Faculty members did not feel comfortable with their profession. “I many times thought of searching for another place to
work. Unfortunately, my family suffered with me. In those
early weeks, I felt I was lost which affected my classroom tea-
ching in negative ways. I no longer had time to prepare extra
materials or give students enough time in my office hours. I had
to devote much time to keep up with the new developments.”
Faculty members agreed. “I expected no good from this review
process because those early experiences were extremely threat-
ening and destructive. In our meetings, we spent much time
trying to find our way through this invasion! In many cases, we
could not discuss how to do things because of complaining
about workloads.” The early program evaluation experiences were nega-
tive enough that “I did not expect good from the whole process.
We completed tasks without understanding why we did them.
Despite spending time in completing tasks, we made little use
of them.”
Even program directors expressed similar negative feelings
towards their career and the evaluation process. “The Deanship
of Quality at the university demanded that we should demon-
strate that our programs meet their standards otherwise they
would close us down.” These early experiences made them
feel bad. “I had to give up many things that we planned to do in
order to meet their standards. I was particularly under fire be-
cause I have to demonstrate my program deserves to continue.
We were overwhelmed by a new terminology and paper work. I
had to understand the process in the first place so that I can
guide program members.” Program directors shared this state-
ment. “I could not blame faculty members for complaining. I
felt what they felt but I was under far more pressure than them.
I did not think this process would have much benefit because
too many things had to be done. This made us become con-
cerned about the future of our career. We never thought of the
benefits.”
Impact of Concluding Program Evaluation
Experiences on Career Satisfaction
Similar to quantitative data, qualitative data showed differ-
ences between initial and concluding program evaluation experi-
ences. Although initial program evaluation experiences brought
about negative feelings of faculty members about their career
and the evaluation process, concluding experiences brought
about career satisfaction. “As we went through the process, we
began to understand. Thanks to the support provided by the
Vice-deanship of Quality in the Institute and the dean, things
became clear and possible.” Faculty members changed their negative feelings into positive ones because “we were assured the pro-
cess was not initiated to assess or punish us and that the whole
Institute, including the dean, is under the same pressure.” That
was the turning point. “Instead of being passive and indifferent
about the outcome of this process, we felt we were in one boat.
We either all sink or swim.” Moreover, faculty members be-
came positive because “we received help about the issues that
we did not understand. We were also paid for the extra work we
did. We wanted this process to succeed so that we succeed with
it.”
Faculty members started to feel positive about their work and
the evaluation process because “I learned many things. For
example, the new course specification template helped me learn
in action how to better plan my courses. I now set precise
course objectives and learning outcomes. I am able to deter-
mine course topics and assign them to the teaching hours. I
learned not only how to determine the knowledge and skills my
students should attain but also to think ahead of the teaching
strategies that would enable my students achieve target skills
and information.” The course specification experiences also
helped them “determine and design the instruments to be used
for assessing student learning and to align classroom teaching
with assessment targets. I learned many things.” Such positive
program evaluation experiences resulted in faculty professional
development and satisfaction. “As we went on the program
evaluation implementation, we understood what was required
from us and worked hard to be able to do it. This gave me the
feeling I am learning and developing while I do my job. I was
keen and committed to the work.”
The signs of career dissatisfaction also disappeared as a result of faculty members’ positive program evaluation experiences. “I no longer
have the right to complain about workload because I got sup-
port when I needed. Professional people were out there to ex-
plain what we should do and how to do it. We were also paid
for the extra work. Above all, I felt I was developing. I ac-
knowledge I was wrong about my initial feelings.” Faculty
members shared this statement. “I did things in unprofessional
ways in the past. Now, thanks to the new experiences, I have
become aware of many things. I learned how to design reliable
and valid tests, how to mark, analyze and interpret test results. I
am now able to survey, analyze and interpret student opinions. I
can design a whole course, many, many things. It was an in-
vestment.”
The concluding program evaluation experiences brought
about positive program director satisfaction in ways similar to
those of faculty members. “As we moved on through the proc-
ess, things got clear. This made it easy to assign roles and
monitor performance. I acknowledge that I learned many things
as we went further in the implementation process. I did not ex-
pect that at the beginning.” Program directors learned a new
management style. “I used to get involved in the planning and
monitoring of everything. As we had to let program stake-
holders have a say in planning processes, I formed a number of
committees where program members became responsible for all
program undertakings. This worked very well and made it easy
for me to make time for improvement and development issues.”
They learned because “I had to prepare the program specifica-
tion, program report and annual program self-study.” This in-
volved “revising program mission, goals and objectives to for-
mulate new and suitable ones. This also required me to define
in broad terms the program domains of skills and information
and develop my classroom research skills, particularly those re-
lating to learning assessment.”
Program directors learned because “I had also to set out pro-
gram learning outcomes and suggest assessment tools capable
of checking they have been achieved. Issues of faculty and staff
development alongside many other issues had to be addressed. I
learned along the way.” Such positive experiences resulted in a
real satisfaction. “I started to feel positive about my work as a
result of what I have been through. Program evaluation helped
improve the program, the skills of faculty and staff members as
well as my own skills. It was a real training course in the work-
place.” The final statements of program directors ranged be-
tween “thank you program evaluation,” “I am very happy about
myself, my faculty and staff and our work as a whole,” and “we
developed skills that we will definitely use over and over.”
Discussion
This study examined the relationship between program evalua-
tion experiences and faculty members’ and program directors’ career satisfaction. The quantitative findings answered the first research question in the negative: How do initial program evaluation experiences influence career satisfaction? Initial program
evaluation experiences brought about faculty dissatisfaction
about their career and the program evaluation process in several
ways. They felt under threat, did not have good experiences,
complained about workloads and did not expect to benefit from
the program evaluation process. They not only felt they would
not learn from assigned tasks but also formed negative attitudes
toward the program evaluation process and the profession.
Neither did they see the evaluation process as a learning op-
portunity. Besides feeling reluctant to learn from the evaluation
process as a whole, faculty members expected they would not
be able to cope with the extra workload. The qualitative find-
ings also indicated that initial program evaluation experiences
brought about faculty and program director career dissatisfac-
tion. These findings agreed to some extent with the conclusions
made by Byrnes (2008), Elder (2009), Gorsuch (2009) and
Kiely and Rea-Dickins (2005).
The present findings indicate the crucial importance of initial
program evaluation experiences to faculty members as they
perceive imposed program review as a threat to their career.
Although researchers may examine why faculty members form
negative attitudes toward their career and program review at the
start of program evaluation, the present study provided some
explanations. Faculty members view the process as an assess-
ment of them rather than the program. They also perceive it as a
process conducted to blame them rather than take evaluation re-
sults to improve their work and the program as a whole. More-
over, they develop such negative feelings because they fear the
extra burdens ahead and possible punitive consequences. Before
program evaluation commences, program stakeholders should receive an orientation to understand the process and have their roles clearly defined. More important still is to reassure stakeholders that the process seeks to help the program and stakeholders improve performance rather than to assess each member personally.
These explanations also concurred with the recommendations
made, for example, by Byrnes (2008) and Carsten-Wickham
(2008).
The findings answer the second research question in the positive:
How do concluding program evaluation experiences influence
career satisfaction? The quantitative findings clearly indicate
that subsequent program evaluation experiences brought about
faculty satisfaction about their career and the evaluation process.
For example, faculty members no longer felt threatened by the
evaluation process. The negative feelings even turned positive at
the end of program evaluation. Not only did they stop com-
plaining about workloads but they also benefited from the pro-
cess, perceived the evaluation process as a learning opportunity
and felt their ability to cope with workloads increased. The qua-
litative findings confirmed the quantitative results in that con-
cluding program evaluation experiences brought about faculty
and program director career satisfaction. These results very
much concurred with those of Byrnes (2008), Carsten-Wickham (2008) and Chase (2006).
Why, then, did the initially negative program evaluation experiences turn positive at the end? And why were the initial experiences negative in the first place? A possible explanation points to the program
evaluation context. In the present study, the context where fac-
ulty members worked seemed positive since program stake-
holders received extensive orientation before the process started
and assistance during the process through training courses on
several issues relating to the program evaluation, assessment
and effective teaching. Such training courses included, for ex-
ample, using the learning management system (Blackboard),
effective teaching strategies, classroom research and effective
means of assessing learning outcomes. Faculty members were
also assured the program review process was a challenge not
only to faculty members but also to the program and institution
administration as a whole. They were, therefore, encouraged to
cooperate as a team. Although the program context was positive
in many ways, the initial orientation was not effective since fac-
ulty members continued to have negative feelings about their
career despite receiving that orientation. Future researchers may
examine the influence of orientation on faculty satisfaction dur-
ing the initial weeks.
The quantitative findings (inferential part) answered the final research question, to some extent, in the positive: Do program
evaluation experiences and career satisfaction correlate? Al-
though the relationship was relatively weak, it indicates that the
more program evaluation experiences faculty members have,
the less dissatisfaction they show about their career and pro-
gram evaluation process. However, this correlation was relatively weak, in that an increase in one variable is not met with a similar decrease in the other. Although these findings do not
contradict the descriptive research design results and those of
the abovementioned previous research (e.g., Byrnes, 2008; Car-
sten-Wickham, 2008; Chase, 2006), they raise questions about generalizing this weak relationship to other contexts,
even similar ones. Researchers may examine this weak rela-
tionship further in various contexts. Moreover, this relationship was the outcome of various factors, including a positive
program context and availability of support. This means future
research should use partial correlation of such variables where
research designs should measure this linear association while
adjusting for the effects of other variables (e.g., program con-
text).
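A first-order partial correlation of the kind suggested here can be computed by correlating the residuals of each variable after regressing out the control variable; a minimal sketch follows, on simulated data, with "program context" as a hypothetical control:

```python
# Hedged sketch of the suggested partial correlation: correlate the residuals
# of X and Y after regressing each on a control variable Z (here a simulated
# "program context" score; none of these values come from the study).
import numpy as np
from scipy.stats import pearsonr

def residuals(v: np.ndarray, z: np.ndarray) -> np.ndarray:
    slope, intercept = np.polyfit(z, v, 1)   # simple linear regression on Z
    return v - (slope * z + intercept)

rng = np.random.default_rng(3)
context = rng.normal(0, 1, size=50)                      # control variable
initial = 12 + 2 * context + rng.normal(0, 2, size=50)
final = 46 - 1.5 * context + rng.normal(0, 2, size=50)

r_partial, p = pearsonr(residuals(initial, context), residuals(final, context))
print(f"partial r (controlling for context) = {r_partial:.3f}, p = {p:.3f}")
```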
Conclusions, Recommendations and Limitations
The present study concluded that the initial stage of the pro-
gram evaluation process brought about faculty and program
director career dissatisfaction while the concluding experiences
turned faculty and program director dissatisfaction into a pro-
fessional satisfaction. A relatively weak negative relationship was found between imposed initial and concluding program evaluation experiences in faculty career satisfaction. The study
recommended briefing faculty members about the opportunities that lie ahead in program evaluation. Faculty members should also be briefed about their roles in the process and assured that the review
process is initiated to help rather than blame them. Positive and
supportive program evaluation contexts not only result in a suc-
cessful implementation of program evaluation but also help pro-
grams and program stakeholders to make use of program eva-
luation. The study recommended program evaluation as a reflection-in-action strategy not only for faculty development but
also for institutional, program, staff and student development.
Future researchers may study the influence of program evalua-
tion on faculty and staff professional development as well as
institution improvement.
Authors and Affiliations
Saad Shawer has published in various journals, including Tea-
ching & Teacher Education, The Curriculum Journal, Quality &
Quantity, Journal of Further & Higher Education, Journal of Li-
teracy Research, Professional Development in Education and
several others. Saad Alkahtani is the dean of the Arabic Lan-
guage Institute, King Saud University. His research interests in-
clude computer-assisted language learning, computer applications in second language acquisition, and the use of computers in teaching Arabic as a second language.
Acknowledgements
The authors extend their appreciation to the Deanship of Sci-
entific Research at King Saud University for funding this work
through the research group grant number RGP-VPP-113. They
also thank faculty and staff members at the Arabic Language
Institute for their help with this work as part of the accreditation
process.
REFERENCES
Beck, C., & Kosnik, C. (2001). Reflection-in action: In defense of
thoughtful teaching. Curriculum Inquiry, 31, 217-227.
doi:10.1111/0362-6784.00193
Bell, J. (1993). Doing your research project (3rd ed.). Philadelphia:
Open University Press.
Bernhardt, E. B. (2006). Student learning outcomes as professional de-
velopment and public relations. Modern Language Journal, 90, 588-
590. doi:10.1111/j.1540-4781.2006.00466_5.x
Blaikie, N. (2000). Designing social research. Cambridge: Polity Press.
Byrnes, H. (2006). Perspectives. The Modern Language Journal, 90,
574-576. doi:10.1111/j.1540-4781.2006.00466_1.x
Byrnes, H. (2008). Owning up to ownership of foreign language pro-
gram outcomes assessment. ADFL Bulletin, 39, 28-30.
doi:10.1632/adfl.39.2.28
Carr, W., & Kemmis, S. (1986). Becoming critical: Education, knowl-
edge and action research. London: Falmer.
Carsten-Wickham, B. (2008). Assessment and foreign languages: A
chair’s perspective. ADFL Bulletin, 39, 36-43.
doi:10.1632/adfl.39.2.36
Chase, G. (2006). Focusing on learning: Reframing our roles. Modern
Language Journal, 90, 583-588.
doi:10.1111/j.1540-4781.2006.00466_3.x
Coakes, S. J., & Steed, L. (2007). SPSS Version 14.0 for Windows:
Analysis without anguish. Milton: John Wiley & Sons.
Cochran-Smith, M. (2003). Learning and unlearning: The education of
teacher educators. Teaching and Teacher Education, 19, 5-28.
doi:10.1016/S0742-051X(02)00091-4
Cohen, L., Manion, L., & Morrison, K. (2011). Research methods in
education (7th ed.). London: Routledge.
Elder, C. (2009). Reconciling accountability and development needs in
heritage language education: A communication challenge for the
evaluation consultant. Language Teaching Research, 13, 15-33.
doi:10.1177/1362168808095521
Gall, M. D., Gall, J. P., & Borg, W. R. (2006). Educational research:
An introduction (8th ed.). Boston: Allyn and Bacon.
Gorsuch, G. (2009). Investigating second language learner self-efficacy
and future expectancy of second language use for high-stakes pro-
gram evaluation. Foreign Language Annals, 42, 505-540.
doi:10.1111/j.1944-9720.2009.01034.x
Huberman, M. (1993). The lives of teachers. New York: Teachers Col-
lege Press.
Kelchtermans, G., & Vandenberghe, R. (1994). Teachers’ professional
development: A biographical perspective. Journal of Curriculum Studies, 26, 45-62. doi:10.1080/0022027940260103
Kiely, R., & Rea-Dickins, P. (2005). Program evaluation in language
education. New York: Palgrave Macmillan.
doi:10.1057/9780230511224
Kvale, S. (1996). Interviews: An introduction to qualitative research
interviewing. Thousand Oaks, CA: Sage.
Lester, J. D., & Lester, J. D. (2010). Writing research papers: A complete guide (13th ed.). Boston: Longman, Pearson.
Lynch, B. K. (1996). Language program evaluation: Theory and practice. Cambridge: Cambridge University Press.
Norris, J. M. (2006). The why (and how) of assessing student learning
outcomes in college foreign language programs. The Modern Lan-
guage Journal, 90, 576-583.
doi:10.1111/j.1540-4781.2006.00466_2.x
Norris, J. M. (2009). Understanding and improving language education
through program evaluation: Introduction to the special issue. Lan-
guage Teaching Research, 13, 7-13. doi:10.1177/1362168808095520
Patton, M. (1990). Qualitative evaluation and research methods (2nd
ed.). Newbury Park: Sage.
Reynolds, A., Ross, S. M., & Rakow, J. H. (2002). Teacher retention,
teaching effectiveness, and professional preparation: A comparison
of teacher professional development school and nonprofessional de-
velopment school graduates. Teaching and Teacher Education, 18,
289-303. doi:10.1016/S0742-051X(01)00070-1
Rosenholtz, S. (1991). Teachers’ workplace. New York & London:
Teachers College Press.
Ross, S. J. (2003). A diachronic coherence model for language program
evaluation. Language Learning, 53, 1-33.
doi:10.1111/1467-9922.00209
Rossi, P., Freeman, H., & Lipsey, M. (1999). Evaluation: A systematic
approach (6th ed.). Thousand Oaks, CA: Sage Publications.
Sapsford, R. (1999). Survey research. London: Sage.
Schön, D. (1983). The reflective practitioner: How professionals think
in action. Hants: Aldershot.
Shawer, S. F. (2010a). Classroom-level curriculum development: EFL
teachers as curriculum-developers, curriculum-makers and curricu-
lum-transmitters. Teaching and Teacher Education: An International
Journal of Research and Studies, 26, 173-184.
doi:10.1016/j.tate.2009.03.015
Shawer, S. F. (2010b). Classroom-level teacher professional develop-
ment and satisfaction: Teachers learn in the context of classroom-
level curriculum development. Professional Development in Educa-
tion, 36, 597-620. doi:10.1080/19415257.2010.489802
Shawer, S. F. (2011). Curriculum design. URL (last checked 3 March 2012). http://oxfordbibliographiesonline.com
Shawer, S. F. (2012). Standardized assessment and test construction
without anguish: The complete step-by-step guide to test design, ad-
ministration, scoring, analysis, and interpretation. New York: Nova
Science Publishers.
Shawer, S., Gilmore, D., & Banks-Joseph, S. (2008). Student cognitive
and affective development in the context of classroom-level curricu-
lum development. Journal of the Scholarship of Teaching and Learn-
ing, 8, 1-28.
Stake, R. E. (2011). Program evaluation, particularly responsive evaluation. Journal of MultiDisciplinary Evaluation, 7, 180-201.
Sullivan, J. H. (2006). The importance of program evaluation in colle-
giate foreign language programs. Modern Language Journal, 90,
590-593. doi:10.1111/j.1540-4781.2006.00466_6.x
Appendix: Program Evaluation Impact on
Career Satisfaction
First Scale Administration:
This scale is used to collect your opinion of the initial program review process’s influence on your career satisfaction.
You will find statements about each program element. Please
read each one and circle the response (1, 2, 3, 4 or 5) that tells
HOW TRUE OF YOU THE STATEMENT IS.
1 = Never or almost never true.
2 = Usually not true.
3 = Somewhat true.
4 = Usually true.
5 = Always or almost always true.
At the beginning of the program review process,
1) I felt I was not under threat. 1 2 3 4 5
2) I felt I would have good experiences. 1 2 3 4 5
3) I did not complain of workloads. 1 2 3 4 5
4) I thought I would benefit from it. 1 2 3 4 5
5) I learned from assigned tasks. 1 2 3 4 5
6) I had a positive attitude towards the process. 1 2 3 4 5
7) The process was an opportunity for learning. 1 2 3 4 5
8) I managed to cope with workloads. 1 2 3 4 5
9) I made use of the process. 1 2 3 4 5
10) I was keen to learn from assigned tasks. 1 2 3 4 5
Second Scale Administration:
This scale is used to collect your opinion of the concluding program review process’s influence on your career satisfaction.
You will find statements about each program element. Please
read each one and circle the response (1, 2, 3, 4 or 5) that tells
HOW TRUE OF YOU THE STATEMENT IS.
1 = Never or almost never true.
2 = Usually not true.
3 = Somewhat true.
4 = Usually true.
5 = Always or almost always true.
Towards the end of the program review process,
1) I felt I was not under threat. 1 2 3 4 5
2) I felt I would have good experiences. 1 2 3 4 5
3) I did not complain of workloads. 1 2 3 4 5
4) I thought I would benefit from it. 1 2 3 4 5
5) I learned from assigned tasks. 1 2 3 4 5
6) I had a positive attitude towards the process. 1 2 3 4 5
7) The process was an opportunity for learning. 1 2 3 4 5
8) I managed to cope with workloads. 1 2 3 4 5
9) I made use of the process. 1 2 3 4 5
10) I was keen to learn from assigned tasks. 1 2 3 4 5