Attitudes toward Surveillance: Personality, Belief and Value Correlates


Two hundred and fifty adults completed a number of questionnaires about their attitudes to surveillance. These included measures of paranoia, political cynicism, attitudes to authority, belief in conspiracy theories and the Big Five personality traits. The 25-item surveillance scale, developed for this study, factored neatly into pro- and anti-surveillance attitudes. The strongest and most consistent correlates were attitudes to authority and political cynicism. Regressions indicated that the most powerful correlates of pro-surveillance attitudes were attitudes to authority, trait Openness, conformity and right-wing authoritarianism. The most powerful anti-surveillance correlates were attitudes to authority, political cynicism, belief in conspiracy theories and paranoia. Implications and limitations of this study are considered.

Share and Cite:

Furnham, A. and Swami, V. (2019) Attitudes toward Surveillance: Personality, Belief and Value Correlates. Psychology, 10, 609-623. doi: 10.4236/psych.2019.105039.

1. Introduction

It is difficult to obtain accurate figures on the number and growth of CCTV cameras in any town or country, as these figures are not released by the various authorities that use them. Cameras are in both private and public hands, and both inside and outside buildings. Further, new technology has produced many cost-effective and easily available ways of monitoring employees and putting them under constant surveillance. Members of the public may or may not know how and when they are monitored by cameras and other sensors. As a consequence, academics from many different backgrounds, from ethics to sociology, have become interested in surveillance and monitoring (Lyon, 1994, 2001; Neyland, 2006). There are now journals dedicated to the topic, as well as edited books resulting from symposia (Danielson, 2005; Goold, 2003).

The dramatic increase in workplace surveillance and electronic monitoring has led to a number of books, conferences and papers that have looked at diverse aspects of the process (Adler & Tompkins, 1997; Botan & Vorvoreanu, 2005; Danielson, 2005; Kizza & Ssanyu, 2005; Moussa, 2015; Smith & Amick, 1989). There have also been various attempts to develop frameworks and theories to understand the whole surveillance process (Ball, 2002, 2009; Lund, 1992). Indeed, nearly twenty years ago, Stanton (2000) developed a model with various components (organisational context, monitoring characteristics, individual differences, trust in management and supervisor) to understand reactions to employee performance monitoring.

Inevitably, work psychologists and sociologists have been particularly interested in surveillance in the workplace (Furnham & Swami, 2015; Higgins & Grant, 1989; Sewell & Barker, 2006; Sewell & Wilkinson, 1992; Townsend, 2005). This can take many forms as the sophistication of the technology advances.

There have been many studies of the behavioural effects of being monitored (Kolb & Aiello, 1996; Amick & Smith, 1992). Many show negative consequences for work-related behaviour when people know they are being monitored by superiors and owners. However, there appears to be a dearth of psychometrically evaluated measures of surveillance attitudes; developing one is the first aim of this study. The second aim is to explore personality, belief and value correlates of these attitudes. This would allow researchers to understand when, where and why certain individuals are so hostile to surveillance while others appear quite unperturbed by the prospect of constant surveillance.

Oz, Glass and Behling (1999) noted eight methods of computer-assisted electronic monitoring at work: video cameras (such as CCTV), computer sampling, e-mail interception, access codes, expert systems, transaction audits, phone taps and hidden microphones. The growth in surveillance has ignited controversy over the ethical issues involved in surveillance at work.

There are a number of papers in this area, with specialist journals, but few are empirical or examine the correlates of attitudes to surveillance and monitoring (Goold, 2003; Harper, 2008; Monaghan, 2014; Powell & Edwards, 2005; Wright, Heynen, & van der Meulen, 2015). Some have looked at surveillance in specific industries like call centres (Ball & Margulis, 2011; Ellway, 2013), while others have considered the legal and ethical issues involved in surveillance (Halpern, Reville, & Grunewald, 2008; Martin & Freeman, 2003; West & Bowman, 2014). Stanton and Weiss (2000) reported an interview study whose content analysis suggested that behaviour was influenced by the capabilities of monitoring in combination with managerial expectations. Employees’ attitudes about monitoring were related to the uses to which monitoring information was put. Also, employees had assimilated managerial concerns about organizational reputation.

This study looks at people’s attitudes to surveillance. There appear to be few, if any, attitude or belief measures, except perhaps that of Furnham and Swami (2015), which was used in a recent study to test the hypothesis that attitudes towards surveillance moderate the relationship between the perceived level of surveillance and counterproductive work behaviour (Martin, Wellen, & Grimmer, 2016).

One attitudinal study, by Oz et al. (1999), examined the reactions of 823 employees to surveillance. They found supervisors supported electronic monitoring because they believed it would reduce theft. Subordinates believed that electronic monitoring would create tensions at work. Further, women were more likely than men to think that monitoring would reduce theft.

Samaranayake and Gamage (2011) found that job satisfaction was positively correlated with a positive opinion of electronic monitoring. This supports the idea that monitoring is seen as fair and unbiased, and as providing a fuller picture of the employee. The greater the perceived invasion of privacy, the lower the job satisfaction. Results also showed that the effect of surveillance on job satisfaction was weaker in workers with greater professional experience.

Furnham and Swami (2015) developed a new 16-item measure of surveillance at work. It factored into positive and negative attitudes to surveillance, and they found that negative attitudes to surveillance were correlated with lower job satisfaction and job autonomy, greater perceived discrimination at work, and more negative attitudes to authority. Those who were more positive about surveillance had higher job satisfaction and more positive attitudes toward authority.

This study looked at general, rather than work-based, attitudes to surveillance. One aim was to devise a comprehensive and useful measure of these attitudes for further use in this research area. We began by searching the literature for similar questionnaires but found very few. We did, however, find a number of statements from researchers and their participants on this topic. We also interviewed around a dozen people on their general attitudes to surveillance: what types they knew about, and when it was acceptable and when not. This generated a long list of around 50 statements. Pilot work reduced this number, and statements that were very similar were removed. We were left with 25 items that formed the basis of our questionnaire.

A second aim was to examine correlates of these beliefs. A number of attitude/belief variables were chosen based on previous papers in this area. We based this research strategy on our programmatic research into why people believe in conspiracy theories, as we thought some aspects of reactions to surveillance would be related to conspiracist thinking (Swami et al., 2010, 2011, 2017). Indeed, we chose many of the same attitudinal measures, and we set out our tentative hypotheses below.

We examined whether there were personality correlates of these beliefs. No hypotheses were formed, but the extensive literature on personality and social attitudes and beliefs suggests that there would be various relationships. We included four groups of measures, all described in detail in the Method section. First, we included a clinical measure of paranoia, as we hypothesized that those with sub-clinical paranoid perceptions would be wary of, and hold negative attitudes to, surveillance (H1). Second, we measured political attitudes, notably political cynicism, but also support for democratic values, as we assumed that those who were more cynical would be more hostile towards surveillance (H2a, H2b). Third, we measured belief in conspiracy theories, as we assumed from the extant literature that conspiracists would be more hostile to being monitored (H3). Finally, we measured various aspects of attitudes to authority, authoritarianism and conformity, as previous studies have shown these to be powerful indicators of attitudes to surveillance (H4).

2. Method

2.1. Participants

Of the 250 participants, the mean age was 37.15 years (SD = 12.32), and the sample comprised 117 men and 133 women. The majority were European Caucasian (77.60%), with the remainder identifying as Asian (13.20%) or other ethnicity/not indicated (9.20%). Over half the participants were Christian (58.00%), the rest reporting no religion/atheist (28.40%) or other religion/not indicated (13.60%). The sample was well educated, with 25% having completed schooling (12th Grade/A levels), 50% holding a Bachelor’s degree or equivalent, and 25% a Master’s degree or higher. Details of the demography of the sample are shown in Table 1.

Table 1. Demographics and descriptive statistics of participant sample (in percentages unless otherwise stated).

2.2. Measures

1) The Right Wing Authoritarianism scale (RWA short form; Zakrisson, 2005)

This is a 15-item measure of the willingness to submit to perceived legitimate authorities, including the strength of hostility towards those who oppose conventional norms. Higher scores indicate more positive authoritarian attitudes. Previous research has demonstrated that the scale has good reliability and validity (Zakrisson, 2005). This study found high reliability, α = .93.

2) The Ten-Item Personality Inventory (TIPI; Gosling, Rentfrow, & Swann, 2003)

This is a brief measure of the Big Five personality traits, with demonstrated convergent and divergent validity and test-retest reliability over a 6-week interval (Gosling et al., 2003). Reliability analysis for each trait revealed α = .69 for Extraversion, α = .55 for Agreeableness, α = .64 for Conscientiousness, α = .73 for Emotional Stability, and α = .50 for Openness. Although these reliabilities range from low to adequate, there are only two items per trait, and the values are similar to published standards (Gosling et al., 2003).

3) Belief in Conspiracy Theories Inventory (BCTI; Swami et al., 2011)

This is a 15-item scale composed of statements regarding popular conspiracy theories. A total score was computed, with higher scores indicating stronger belief in conspiracy theories. Previous research has demonstrated this scale to have high reliability (α = .86–.90; Swami et al., 2011). The current study found similarly high reliability, α = .92.

4) The Political Cynicism scale (PCS; Citrin & Elkins, 1975)

This is a 13-item inventory that assesses the extent to which individuals express dissatisfaction with politics and politicians. A total score was calculated, with higher scores indicating greater political cynicism. Previous research has demonstrated that the scale has excellent reliability and convergent validity (Swami et al., 2011). The current study found similarly high reliability, α = .86.

5) Support for the Democratic Principle scale (Kaase, 1971)

This is a 9-item measure of attitudes towards democratic systems and principles. A total score was computed, with higher scores indicating lower democratic sentiment. The measure has good reliability and convergent validity (Swami et al., 2011). In the current study we found poor reliability, α = .44, and the scale was therefore excluded from further analysis.

6) Attitudes to Authority Scale (AA; Reicher & Emler, 1985)

The modified version used by Swami, Chamorro-Premuzic, and Furnham (2010) and Swami et al. (2011) was used in this study. This is a 10-item measure of attitudes towards authority, with higher scores representing a more negative attitude towards authority. Previous research has shown good reliability and patterns of convergent validity (Swami et al., 2011). The current study found high reliability, α = .81.

7) Conformity Scale (Mehrabian, 2005)

This is an 11-item scale assessing willingness to identify with, and copy, others. An overall score is computed by summing the items. High scores are indicative of a strong propensity to conform to others. Previous research has found that the scale has good reliability. The current study found similarly high reliability, α = .80.

8) Paranoia Checklist (Freeman et al., 2005)

This is an 18-item scale assessing the prevalence of paranoid beliefs of a clinical nature. High total scores indicate greater experience of paranoid thoughts and beliefs. The scale has high reliability and validity (Freeman et al., 2005). The current study found similarly high reliability, α = .94.

9) Surveillance Attitudes Questionnaire (SAQ)

This is a 25-item scale measuring positive and negative sentiment towards surveillance. It was devised for this study as apparently few other measures exist. Devising it involved reading the salient, but limited, literature and writing clear attitudinal and belief statements, which were then piloted for clarity, overlap and comprehensiveness. Each item had a 7-point response scale, where 1 = Strongly Disagree and 7 = Strongly Agree.

10) Demographic Measures

Additionally, questions regarding the participants’ age, sex, education, sexuality and religion were included. They also indicated their political orientation on a 7-point scale (1 = “strongly right wing”, 7 = “strongly left wing”), and strength of their religious beliefs on a 7-point scale (1 = Not at all religious, 7 = Very religious). The demographics can be seen in Table 1.

2.3. Procedure

All data collection was conducted through Amazon Mechanical Turk (MTurk), which has been shown to be demographically diverse and to yield data of high quality compared to other online and offline data collection methods for social science research (Buhrmester, Kwang, & Gosling, 2011). After giving informed consent, the participants, who were all over 18 years old, were given information regarding the study and completed a battery of questionnaires. Two hundred participants, mainly from continental Europe, America and Asia, were recruited via MTurk. They were paid $3.00 each for their contribution. Additionally, a sample of 50 British participants was recruited in an offline setting, with the authors using their contacts. There were no significant differences between the two samples in demography or individual difference variables.

The task took on average 16.48 minutes (SD = 7.91). Participants were warned that the task would take around this long and were told to take a break if they so wished. Initial screening of the results meant that around 20 participants were rejected. Reasons for rejection were taking less than 10 minutes to complete the task, extreme outlier scores, or evidence of non-differentiation in responses.

3. Results

Our aim was first to examine the factor structure of our central measure of surveillance attitudes, then to study correlates of these attitudes. Finally, we used regressions to determine the extent to which demographic, attitudinal and personality factors predicted surveillance attitudes.

3.1. Surveillance Attitudes Questionnaire Exploratory and Confirmatory Factor Analysis

Our first aim was to examine the factor structure of the questionnaire, to assess whether it measured subtly different attitudes. To assess the factor structure of the SAQ, we used Maximum Likelihood (ML) Exploratory Factor Analysis (EFA), as our data met the normality criterion for the ML fitting procedure, judged by the Q-Q plot distribution of items (Ghasemi & Zahediasl, 2012). Furthermore, ML fitting provides factors with greater external validity than other fitting procedures and permits significance testing of factor loadings (Fabrigar, Wegener, MacCallum, & Strahan, 1999).

Items were selected on the basis of Clark and Watson’s (1995) guidelines: no items demonstrated skew above 10 or inter-item correlations below .40. The number of factors extracted was dictated by eigenvalues above 1.0 (EGV1 criterion), scree-plot analysis (Cattell, 1966) and Monte Carlo Parallel Analysis (MCPA). MCPA reduces the likelihood of factor over-retention by generating eigenvalues from random data sets with parameters comparable to the real data set; it is considered more accurate for determining the number of factors to extract than the EGV1 and scree-plot criteria alone (Hayton, Allen, & Scarpello, 2004).
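The MCPA retention rule described above can be sketched in a few lines. This is an illustrative implementation only, not the authors' analysis code; the function name and defaults are our own.

```python
import numpy as np

def parallel_analysis(data, n_iter=100, seed=0):
    """Monte Carlo Parallel Analysis: retain only those factors whose
    real-data eigenvalues exceed the mean eigenvalues obtained from
    random data sets of the same size (cf. Hayton et al., 2004)."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    # Eigenvalues of the real correlation matrix, largest first
    real = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    # Eigenvalues of n_iter random data sets with the same n and p
    rand = np.empty((n_iter, p))
    for i in range(n_iter):
        noise = rng.standard_normal((n, p))
        rand[i] = np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False))[::-1]
    threshold = rand.mean(axis=0)
    n_retain = int(np.sum(real > threshold))
    return n_retain, real, threshold
```

With 250 cases and 25 items, as here, random-data eigenvalues stay small (the paper reports a maximum of 1.71), so only the two large real eigenvalues (9.56 and 3.18) survive the comparison.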

Bartlett’s test of sphericity, χ2(300) = 3445.03, p < .001, the KMO measure of sampling adequacy, KMO = .93, and a 10:1 participant-to-item ratio demonstrated that the SAQ items had sufficient common variance for factor analysis. The EGV1 criterion suggested extracting four factors, and the scree-plot similarly suggested that three to four factors could be extracted after four iterations using the varimax rotation method. However, the maximum eigenvalue generated by the MCPA was 1.71, and only two eigenvalues from the real data set exceeded this value (9.56 and 3.18, respectively). The Goodness-of-Fit test, χ2(251) = 577.17, p < .001, demonstrated that the two-factor model fitted the data well. As such, these two factors were retained, together accounting for 47.50% of the variance.

The first factor (Table 2) included items related to negative attitudes towards surveillance in society and accounted for 26.31% of the variance. The second factor reflected more positive attitudes towards surveillance in society and accounted for 20.71% of the variance. As Item 19 did not load significantly onto either factor, it was removed. Items 2, 10, 21, 22 and 23 cross-loaded significantly on both factors; however, these items clarify the latent factors to which they belong, and positive and negative scales including them showed correlations with the other variables similar to those of the full-item scales. These items were therefore retained in the final positive and negative surveillance attitude scales, giving 24 items in total: 14 for negative attitudes and 10 for positive attitudes.

Table 2. Items in the novel surveillance attitudes questionnaire and factor loadings following exploratory factor analysis (values in bold indicate items which load onto a factor).

The two-factor model of surveillance attitudes was subjected to CFA testing on the participant sample. The fit of the model was low to adequate; χ2(240, n = 250) = 481.44, p < .001, GFI = .86, PGFI = .69, CFI = .93, RMSEA = .06, and RMR = .20.

Similar scale dimensionality for surveillance-related topics has been reported by Furnham and Swami (2015). It is quite probable that the factor structure was affected by the positive and negative wording of the items (Schmitt & Stults, 1985). The two factors were significantly negatively correlated, r = −.52, p < .001, suggesting that they are indeed reflective of opposing attitudes towards surveillance. However, although they share around 27% of the variance, they are different enough to be conceptualized as distinct latent factors. Therefore, an overall score for each factor was computed by taking the mean of that factor’s items. Both the negative and positive attitudes scales demonstrated high internal reliability (α = .92 and .90, respectively).
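The internal reliabilities reported throughout are Cronbach's α, which follows a standard formula; a minimal sketch (the function name is our own):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

When items covary strongly, the total-score variance dwarfs the summed item variances and α approaches 1; for unrelated items it falls towards zero (or below).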

Thus, for further analysis we had two criterion variables representing positive and negative attitudes to surveillance. A paired-sample t-test demonstrated that there was no significant difference between participants’ ratings of positive and negative attitudes towards surveillance, t(249) = .71, p = .48, M = .09, SD = 2.10, d = .07. Independent-sample t-tests indicated no sex difference in positive attitude scores, t(248) = −1.87, p = .06, d = −.24, or negative attitude scores, t(248) = 1.45, p = .14, d = .18. Further, neither participant age nor education was related to either factor. Results did show marginally significant correlations of positive attitudes with religious beliefs (r = .13, p < .05) and political orientation (r = −.13, p < .05), but no such relationships for negative attitudes. The results showed that less religious, more right-wing participants were more positive about surveillance.
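The paired comparison and effect size reported above can be computed as follows. This is a generic SciPy sketch, not the authors' analysis script, and it uses one common effect-size convention (d computed on the difference scores); pooled-SD conventions would give slightly different values.

```python
import numpy as np
from scipy import stats

def paired_comparison(a, b):
    """Paired-sample t-test plus Cohen's d computed on the
    difference scores (mean difference / SD of differences)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    t, p = stats.ttest_rel(a, b)   # repeated-measures t-test
    diff = a - b
    d = diff.mean() / diff.std(ddof=1)
    return t, p, d
```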

3.2. Inter-Scale Correlations

Next, we tested our hypotheses by simple correlational analyses relating the two surveillance factors to the scores from the other measures. In other words, correlates of positive and negative attitudes towards surveillance in society were computed using bivariate correlations with all other scale scores (Table 3). Higher scores on positive attitudes towards surveillance were significantly positively correlated with right-wing authoritarianism, conformity and surveillance acceptability beliefs, and negatively correlated with political cynicism and attitudes towards authority. Higher scores on negative attitudes towards surveillance were significantly positively correlated with belief in conspiracy theories, political cynicism, attitudes towards authority and paranoia, and negatively correlated with Extraversion and surveillance acceptability beliefs. This confirmed many of the above hypotheses.

Table 3. Inter-scale correlations.

Finally, we combined all predictor variables in an analysis to see which was the strongest correlate of the two criterion scores. A series of hierarchical multiple regressions was conducted (forced-entry method), with positive and negative attitudes towards surveillance and surveillance acceptability as criterion variables respectively, age and gender in Block 1 as controls, and the remaining scale variables in Block 2 (Table 4).

The hierarchical regression for positive surveillance attitudes was non-significant at Block 1, F(2, 249) = 2.59, p > .05, accounting for around 1% of the variance. The regression became statistically significant at Block 2, F(13, 249) = 6.57, p < .001, ΔR2 = .25, ΔF = 7.16, p < .001, accounting for around 23% of the variance. The most significant predictors were trait Openness, Beta = .21, p < .005; attitudes towards authority, Beta = −.34, p < .001; and conformity, Beta = .16, p < .05.

The hierarchical regression for negative surveillance attitudes was non-significant at Block 1, F(2, 249) = 1.05, p > .05, accounting for negligible variance. The regression became statistically significant at Block 2, F(13, 249) = 9.34, p < .001, ΔR2 = .30, ΔF = 10.77, p < .001, accounting for around 30% of the variance. The most significant predictors were belief in conspiracy theories, Beta = .15, p < .05; political cynicism, Beta = .17, p < .005; attitudes towards authority, Beta = .38, p < .001; and paranoia, Beta = .18, p < .005.
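The block structure of these regressions (controls first, scale scores second) amounts to comparing R² across nested OLS models, with ΔR² the increment attributable to Block 2. A plain-NumPy sketch, under our own naming and with illustrative (not the study's) data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1 - (resid ** 2).sum() / ss_tot

def hierarchical_r2(block1, block2, y):
    """Forced-entry hierarchical regression: R^2 for Block 1 alone
    (e.g. age and gender as controls), R^2 for Blocks 1 + 2 (adding
    the scale scores), and the increment delta-R^2 due to Block 2."""
    r2_b1 = r_squared(block1, y)
    r2_b2 = r_squared(np.column_stack([block1, block2]), y)
    return r2_b1, r2_b2, r2_b2 - r2_b1
```

An F-change test on ΔR² (as reported above) then asks whether the Block 2 increment exceeds chance given the added degrees of freedom.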

4. Discussion

This is, we believe, one of the few empirical studies of attitudes to surveillance. The study showed, similar to Furnham and Swami’s (2015) study of attitudes to surveillance at work, that attitudes factor into two clear, opposite sets of beliefs, essentially favourable (positive) or unfavourable (negative) to all forms of, and reasons for, surveillance. We had thought that beliefs would be more varied and complex, but this could be a function of the items we wrote.

Table 4. Hierarchical regression analyses surveillance attitudes and surveillance acceptability.

***p < .001, **p < .005, *p < .05.

Interestingly, the items which attracted most agreement were 19 (I probably could not tell if I were under surveillance), 7 (Surveillance systems are necessary because they help in the identification and apprehension of criminals) and 1 (Surveillance systems help protect society from terrorists and criminals), while those that attracted most disagreement were 16 (I sometimes feel as though I am constantly being watched or monitored by surveillance), 25 (Surveillance does or would motivate me to be a better citizen) and 18 (Surveillance alienates people because it makes them more likely to self-police each other).

The correlational results in Table 3 show three things. First, overall, personality factors were only weakly related to both positive and negative attitudes to surveillance. Second, some beliefs were correlated with only one of the two attitudes: those with right-wing authoritarian beliefs and those who were more conforming were significantly pro-surveillance but not anti-surveillance, while those who believed in conspiracy theories and those who were sub-clinically paranoid were significantly anti-surveillance but showed no relationship with positive attitudes to surveillance. Third, the two scales most clearly related to attitudes to surveillance were attitudes to authority and political cynicism: anti-authority, politically cynical people had low positive and high negative attitudes to surveillance.

The results of the regressions confirmed the correlational results in part. They showed that four belief/attitudinal variables were related to positive attitudes to surveillance, together accounting for nearly a quarter of the variance: open, conforming individuals who had positive attitudes to authority and right-wing authoritarian beliefs tended to be pro-surveillance. For negative attitudes, four scales accounted for nearly a third of the variance: those with negative attitudes to authority, who were marginally paranoid and politically cynical, and who believed in conspiracy theories held more negative views.

Overall, the results showed that demographic and personality factors were weakly related to attitudes to surveillance while general attitudes to authority were the strongest predictor. Attitudes to authority are associated with delinquency and general alienation from society.

This paper attempted to investigate a neglected topic. Like Furnham and Swami (2015), we expected attitudes to surveillance to be more complex and related to both different types of, and reasons for, surveillance. The question for future research remains which other intra- or inter-individual difference factors relate more strongly to attitudes to surveillance. Future studies may profitably consider how the explanations organisations give for the introduction of new surveillance technology are related to how it is received. Customer reactions to shop and street surveillance may also be worthy of investigation. If anything, there is likely to be a great increase in the use of surveillance technology, to include not only cameras but also heat and weight detectors.

A limitation of this research was that we used only self-report measures, which are open to dissimulation, though there is no reason to suspect the participants were faking. Further, this was not a representative sample, as participants were younger and better educated than the general population. Also, we have become aware that some of the items were UK-centric, in the sense that they seemed relevant only to those with experience of CCTV in Great Britain. Future work would do well to ensure that items are more universally applicable.

Conflicts of Interest

The authors declare no conflicts of interest.


[1] Adler, G., & Tompkins, P. (1997). Electronic Performance Monitoring: An Organizational Justice and Concertive Control Perspective. Management Communication Quarterly, 10, 259-288.
[2] Amick, B. C., & Smith, M. J. (1992). Stress, Computer-Based Work Monitoring and Measurement Systems: A Conceptual Overview. Applied Ergonomics, 23, 6-16.
[3] Ball, K. (2002). Elements of Surveillance: A New Framework and Future Directions. Information, Communication and Society, 5, 573-590.
[4] Ball, K. (2009). Exposure: Exploring the Subject of Surveillance. Information, Communication and Society, 12, 639-657.
[5] Ball, K., & Margulis, S. (2011). Electronic Monitoring and Surveillance in Call Centres. New Technology Work and Employment, 26, 113-126.
[6] Botan, C., & Vorvoreanu, M. (2005). What Do Employees Think about Electronic Surveillance at Work. In J. Weckert (Ed.), Electronic Monitoring in the Workplace: Controversies and Solutions (pp. 123-144). Melbourne: Idea Group Publishing.
[7] Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon’s Mechanical Turk: A New Source of Inexpensive, Yet High-Quality Data? Perspectives on Psychological Science, 6, 3-5.
[8] Cattell, R. B. (1966). The Scree Test for the Number of Factors. Multivariate Behavioral Research, 1, 245-276.
[9] Citrin, J., & Elkins, D. J. (1975). Political Disaffection among British University Students: Concepts, Measurement, and Causes. Institute of International Studies, University of California.
[10] Clark, L. A., & Watson, D. (1995). Constructing Validity: Basic Issues in Objective Scale Development. Psychological Assessment, 7, 309-320.
[11] Danielson, P. (2005). Ethics of Workplace Surveillance Games. In J. Weckert (Ed.), Electronic Monitoring in the Workplace: Controversies and Solutions (pp. 19-34). Melbourne: Idea Group Publishing.
[12] Ellway, B. (2013). Making It Personal in a Call Centre: Electronic Peer Surveillance. New Technology, Work and Employment, 28, 37-50.
[13] Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the Use of Exploratory Factor Analysis in Psychological Research. Psychological Methods, 4, 272-280.
[14] Freeman, D., Garety, P. A., Bebbington, P. E., Smith, B., Rollinson, R., Fowler, D., Dunn, G. et al. (2005). Psychological Investigation of the Structure of Paranoia in a Non-Clinical Population. British Journal of Psychiatry, 186, 427-435.
[15] Furnham, A., & Swami, V. (2015). An Investigation of Attitudes toward Surveillance at Work and Its Correlates. Psychology, 6, 1668.
[16] Ghasemi, A., & Zahediasl, S. (2012). Normality Tests for Statistical Analysis: A Guide for Non-Statisticians. International Journal of Endocrinology and Metabolism, 10, 486-489.
[17] Goold, B. (2003). Public Area Surveillance and Police Work. Surveillance and Society, 1, 191-203.
[18] Gosling, S. D., Rentfrow, P. J., & Swann, W. B. (2003). A Very Brief Measure of the Big-Five Personality Domains. Journal of Research in Personality, 37, 504-528.
[19] Halpern, D., Reville, P., & Grunewald, D. (2008). Management and Legal Issues Regarding Electronic Surveillance of Employers in the Workplace. Journal of Business Ethics, 80, 175-180.
[20] Harper, D. (2008). The Politics of Paranoia. Surveillance and Society, 5, 1-32.
[21] Hayton, J. C., Allen, D. G., & Scarpello, V. (2004). Factor Retention Decisions in Exploratory Factor Analysis: A Tutorial on Parallel Analysis. Organizational Research Methods, 7, 191-205.
[22] Higgins, C., & Grant, R. (1989). Monitoring Service Workers via the Computer: The Effect on Employees, Productivity and Service. National Productivity Review, 8, 101-112.
[23] Kaase, M. (1971). Demokratische Einstellungen in der Bundesrepublik Deutschland. Sozialwissenschaftliches Jahrbuch für Politik, 2, 119-326.
[24] Kizza, J., & Ssanyu, J. (2005). Workplace Surveillance. In J. Weckert (Ed.), Electronic Monitoring in the Workplace: Controversies and Solutions (pp. 1-18). Hershey, PA: Idea Group.
[25] Kolb, K. J., & Aiello, J. R. (1996). The Effects of Electronic Performance Monitoring on Stress: Locus of Control as a Moderator Variable. Computers in Human Behavior, 12, 407-423.
[26] Lund, J. (1992). Electronic Performance Monitoring: A Review of Research Issues. Applied Ergonomics, 23, 54-58.
[27] Lyon, D. (1994). The Electronic Eye: The Rise of the Surveillance Society. Cambridge: Polity.
[28] Lyon, D. (2001). Surveillance Society: Monitoring Everyday Life. Buckingham: Open University Press.
[29] Martin, K., & Freeman, R. (2003). Some Problems with Employee Monitoring. Journal of Business Ethics, 43, 353-361.
[30] Martin, A., Wellen, J., & Grimmer, M. (2016). An Eye on Your Work. International Journal of Human Resource Management, 27, 2635-2651.
[31] Mehrabian, A. (2005). Manual for the Conformity Scale. Monterey, CA: Author.
[32] Monaghan, J. (2014). Security Traps and Discourses of Radicalization. Surveillance and Society, 12, 485-501.
[33] Moussa, M. (2015). Monitoring Employee Behavior through the Use of Technology and Issues of Employee Privacy in America. SAGE Open, 5, 1-13.
[34] Neyland, D. (2006). Privacy, Surveillance and Public Trust. Basingstoke: Palgrave MacMillan.
[35] Oz, E., Glass, R., & Behling, R. (1999). Electronic Workplace Monitoring: What Employees Think. International Journal of Management Science, 27, 167-177.
[36] Powell, J., & Edwards, M. (2005). Surveillance and Morality. Surveillance and Society, 3, 96-106.
[37] Reicher, S., & Emler, N. (1985). Delinquent Behaviour and Attitudes to Formal Authority. British Journal of Social Psychology, 24, 161-168.
[38] Samaranayake, V., & Gamage, C. (2011). Employee Perception towards Electronic Monitoring at Work Place and Its Impact on Job Satisfaction of Software Professionals in Sri Lanka. Telematics and Informatics, 29, 233-244.
[39] Schmitt, N., & Stults, D. M. (1985). Factors Defined by Negatively Keyed Items: The Result of Careless Respondents? Applied Psychological Measurement, 9, 367-373.
[40] Sewell, G., & Barker, J. (2006). Coercion versus Care: Using Irony to Make Sense of Organizational Surveillance. Academy of Management Review, 31, 934-961.
[41] Sewell, G., & Wilkinson, B. (1992). Someone to Watch over Me: Surveillance, Discipline and the Just-in-Time Labour Process. Sociology, 26, 271-281.
[42] Smith, M., & Amick, B. (1989). Electronic Monitoring in the Workplace: Implications for Job Control and Worker Stress. In C. Cooper (Ed.), Job Control and Worker Health. Chichester: Wiley.
[43] Stanton, J. (2000). Reactions to Employee Performance Monitoring: Frameworks, Review, and Research Directions. Human Performance, 13, 85-113.
[44] Stanton, J. M., & Weiss, E. M. (2000). Electronic Monitoring in Their Own Words: An Exploratory Study of Employees’ Experience with New Types of Surveillance. Computers in Human Behavior, 16, 423-440.
[45] Swami, V., Chamorro-Premuzic, T., & Furnham, A. (2010). Unanswered Questions: A Preliminary Investigation of Personality and Individual Difference Predictors of 9/11 Conspiracist Beliefs. Applied Cognitive Psychology, 24, 749-761.
[46] Swami, V., Coles, R., Stieger, S., Pietschnig, J., Furnham, A., Rehim, S., & Voracek, M. (2011). Conspiracist Ideation in Britain and Austria: Evidence of a Monological Belief System and Associations between Individual Psychological Differences and Real-World and Fictitious Conspiracy Theories. British Journal of Psychology, 102, 443-463.
[47] Swami, V., Barron, D., Weis, L., Voracek, M., Stieger, S., & Furnham, A. (2017). An Examination of the Factorial and Convergent Validity of Four Measures of Conspiracist Ideation, with Recommendations for Researchers. PLoS ONE, 12, e0172617.
[48] Townsend, K. (2005). Electronic Surveillance and Cohesive Teams: Room for Resistance in an Australian Call Centre? New Technology, Work and Employment, 20, 47-59.
[49] West, J., & Bowman, J. (2014). Electronic Surveillance at Work: An Ethical Analysis. Administration and Society, 20, 1-14.
[50] Wright, J., Heynen, R., & van der Meulen, E. (2015). It Depends on Who You Are, What You Are. Surveillance and Society, 13, 265-282.
[51] Zakrisson, I. (2005). Construction of a Short Version of the Right-Wing Authoritarianism (RWA) Scale. Personality and Individual Differences, 39, 863-872.

Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.