A Conservative's View from the Academic Trenches: Reply to Duarte, Crawford, Stern, Haidt, Jussim, and Tetlock (2015)

Although conservative scholars may face various forms of discrimination and other challenges in academia, as elaborated in the first part of this comment, they may also enjoy a set of unique advantages that can facilitate more careful theoretical and empirical scientific work. In particular, they may be more sensitive to flawed methodologies in areas of controversy, and in such areas these assets of conservative scholars may be especially important.


Introduction
As a political conservative, I would like to supplement the discussion started by Duarte et al. [1] by commenting on some of their points and by pointing out some of the advantages of being a conservative in today's social science academic environment, an environment that often seems anti-conservative. This is important because the lack of diversity in social science may seem to make being an academic conservative an exercise in futility, a career without a future, or a position on the wrong side of history [1]. Furthermore, conservative students may be seen as obstacles to diversity rather than those most able to accept it [2]. Therefore, I will discuss some of the advantages conservative scholars may currently have in academia. However, I will begin by acknowledging some of the disadvantages.

Confirmation Bias
Confirmation bias is a very important matter, as the authors [1] noted. In particular, one statement was that confirmation bias could lead to "widely accepted claims that reflect the scientific community's blind spots more than they reflect justified conclusions". Research on lesbian, gay, bisexual, and transgender (LGBT) families is, in my opinion, an area with an abundance of such blind spots.

Topical Example: Parental and Child Sexual Orientation
There is no doubt that a majority of social scientists believe that, for example, parental sexual orientation has absolutely no correlation with children's sexual orientation, but there are dozens of research studies featuring substantial evidence to the contrary, as summarized elsewhere [3]-[5]. There are also several studies suggesting how parents might encourage their children to experiment with same-sex sexual behavior or to identify as LGBT [3] [4].

Biased Reviews
The authors [1] mentioned biased review processes. The bias of reviewers can be seen in several ways in controversial areas of research.
1) Example of how an unjustified estimate was incorrectly accepted as fact for decades in social science and legal articles in spite of presumably rigorous peer reviews. One example of how reviews failed is that over fifty studies concerning LGBT parenting [6] cited earlier works [7]-[11] that stated as fact that as many as 14 million children were being raised by LGBT parents in the USA, although Raley [12] and Selekman [13] appeared to estimate something closer to 28 million children (assuming there were 14 million gay/lesbian parents in the USA and two children per family). However, the origin of that "fact" was not a scientific study but merely an unexplained estimate from a 1984 USA Today newspaper article [14]. Many of the studies did not even cite the USA Today article with the correct page number (3D was correct; 3 and 30 were common errors). Interestingly, Patterson [15], who was one of the first scholars to cite the 14 million estimate from the USA Today article in a top tier social science journal [11], stated that "widely repeated estimates… tend to put the numbers of children of lesbian and gay parents at between six and fourteen million" and then wrote that "Although these estimates have been widely repeated, no empirical studies are cited in connection with them. Hence, it is difficult to be certain about the origin of these figures or to evaluate their reliability" (p. 242). Did Patterson forget that she had cited the same estimates (with no empirical support) herself in 1992 [11] as if there had been substance to them? Regardless, Patterson [15] concluded that the 14 million figure might have been correct after all (p. 258). However, more recent studies [16]-[21] indicate that perhaps no more than 200,000-250,000 children are being raised by LGBT parents, certainly fewer than 500,000, a far cry from the tens of millions cited by dozens of social science and legal articles despite presumably rigorous peer review that should have "caught" errors of fact. Peer review would appear to have failed a "common sense" test: with approximately 60 million children in the United States in the mid-1980s, having 14-28 million children of gay parents would have implied that 20%-50% of all children had LGBT parents (seriously?).
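The "common sense" arithmetic above is easy to verify; the only inputs are the approximate mid-1980s child population and the claimed estimates already cited.

```python
# Sanity check of the claimed estimates against the approximate
# U.S. child population in the mid-1980s (about 60 million).
total_children = 60_000_000

for claimed in (14_000_000, 28_000_000):
    share = claimed / total_children
    print(f"{claimed:,} children of LGBT parents -> {share:.0%} of all children")
# 14,000,000 -> 23% of all children; 28,000,000 -> 47% of all children
```

Even the lower claimed figure would have made nearly a quarter of all American children the children of LGBT parents, which is why the estimate fails the common-sense test.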
2) Despite peer reviews, citation frequency may not reflect article quality and may even reflect the opposite. Another flaw in peer review was apparent in my analysis of how often 12 studies on LGBT parenting had been cited as a function of the quality of their methodology; the association (r = 0.58, d = 1.4, p < 0.05) was such that the worse the methodological quality of the studies, the more likely they were to have been cited [22]. After controlling for year of publication, the partial correlation remained large, by Cohen's [23] definition of effect size, at 0.42 (d = 0.93), though not significant. If peer review were "working" well, one would expect studies of higher quality to be cited more often, not less often.
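For readers unfamiliar with partial correlation, controlling for a third variable such as publication year uses the standard first-order formula. The zero-order r = 0.58 below is the value reported above; the two correlations with publication year are purely hypothetical stand-ins chosen for illustration, not values from the original analysis.

```python
from math import sqrt

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation between x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))

# r_xy = 0.58: the quality-citation correlation reported above.
# r_xz, r_yz: hypothetical correlations of each variable with publication year.
print(round(partial_corr(0.58, 0.52, 0.52), 2))  # 0.42 under these assumed values
```

The point of the formula is that a substantial zero-order association can survive the adjustment largely intact even while the shrinking degrees of freedom push it out of statistical significance.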
3) Literature reviews (citation frequency) may reflect political acceptability more than research quality. As another example, which comes about as close to a natural experiment as one could hope for, three articles on lesbian mothering were published between 1979 and 1981 by some of the same authors, from the same institution, in the same journal, from the same data source and the same cohort [24]. Two of the articles [25] [26] reported favorable information about the lesbian mothers while one article [27] published adverse information. As of April 2016, according to Google Scholar, the two articles had been cited a total of 174 times compared to 10 citations (of which four were my own) for the third article, a ratio of over 14:1 per article, not including my citations. Such a ratio is difficult to explain as anything other than a bias in favor of citing research more supportive of lesbian parenting, given that the authors, journals, academic institution, time frame of publication, and data were essentially the same. More recently, in 2011, two studies [28] [29] were published by the same authors (Goldberg, Bos, Gartrell) using the same data set (U.S. National Longitudinal Lesbian Family Study) in highly ranked journals (Journal of Health Psychology, Archives of Sexual Behavior); one article reported adverse outcomes (higher drug use) for children of lesbian mothers while the other reported mixed, mostly favorable, results. According to ProQuest as of April 5, 2016, the former article had been cited 8 times while the latter had been cited 37 times, a difference significant (p < 0.001) by a one-sample chi-square test (18.6, df = 1). Google Scholar on the same date indicated 16 and 76 citations, respectively, significant (p < 0.001) by the same test (39.1, df = 1). Thus, there is both proximal and distal evidence of bias in citation frequency based not on the quality of a study's methodology but on its outcome relative to politically desirable objectives from a liberal/progressive viewpoint.
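The one-sample (goodness-of-fit) chi-square values reported above can be reproduced from the raw citation counts alone; the only assumption is an expected 50/50 split under the null hypothesis of no citation bias.

```python
def chi_square_even_split(a, b):
    """Goodness-of-fit chi-square for two counts against an even 50/50 split."""
    expected = (a + b) / 2
    return (a - expected) ** 2 / expected + (b - expected) ** 2 / expected

# ProQuest counts (8 vs 37) and Google Scholar counts (16 vs 76), April 2016.
print(round(chi_square_even_split(8, 37), 1))   # 18.7, close to the reported 18.6
print(round(chi_square_even_split(16, 76), 1))  # 39.1, matching the reported value
# Both exceed 10.83, the df = 1 critical value for p = 0.001.
```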
4) My experience as a journal editor suggests that peer review reflects political bias more than scientific evaluation. As an editor of the peer-reviewed journal Marriage & Family Review, I can testify that the most common reviewer response to a controversial submission is that the reviewers take sides, and their evaluations tend to reflect their political viewpoints, with liberals rejecting conservative ideas and conservatives rejecting liberal ideas. One time I published a critique [30] of editorials I had written [31] [32] (I had requested and welcomed such critiques), only to be called on the carpet by a liberal legal organization, which challenged my publisher (they did not contact me) to explain how a comment by someone on their hate watch list could have been accepted into an apparently otherwise professional scholarly journal. I suppose I was supposed to feel intimidated. Fortunately, my editorial policy has been to publish research from a wide diversity of perspectives, both in types of theory and research methodology, not to mention author characteristics such as gender, age, race/ethnicity, religious affiliation, sexual orientation, or political views. Sometimes I am told by liberals that they will not accept research as credible unless it is published in a liberal journal; that virtually guarantees they will never have to consider research with conservative results as credible, regardless of its actual methodological quality. On the other hand, I think that Marriage & Family Review has more diversity, in more ways, than many journals led by presumably more liberal editors.
5) The focus of literature reviews is often too narrow despite peer review. One might hope that conscientious peer review would mean that scientists would be evaluated on the entire scope of their research rather than on any one article, and that any one article would be evaluated in the context of the full scope of the author's research program. The work of Dr. Sotirios Sarantakos, an Australian professor, now retired, comes to mind. Sarantakos wrote favorably about gays and lesbians and was in favor of same-sex marriage [33] [34]. Yet he published a research study [35] in an Australian journal that found effect sizes as large as 3.75 (over four times the 0.80 that Cohen [23] indicated as a large effect in social science) between children of gay/lesbian parents and children of heterosexual parents. Not only did his critics [36]-[38] ignore Sarantakos's larger area of research, they did not appear to understand the internal details of his article [35]. While they blamed the results on teacher bias and relationship instability, they did not explain how control variables could account for such large effects, nor how such other variables were the only factors at work when at least one academic outcome in the study favored the children of same-sex parents, even given that many of the same-sex parents had come from previous heterosexual relationships that had broken up and some teacher bias may have been involved. None of his critics acknowledged the large size of many of the effects reported by Sarantakos [35]. Remarkably, almost no scholars other than Marks [39] and Allen [40] have recognized that Sarantakos had published a great deal of other work on same-sex families and their children [41]-[43], while his critics' focus seems to have been more on discrediting him on the basis of one article [35] rather than first considering the entire scope of his social science research on both heterosexual [44]-[68] and same-sex families [35] [41]-[43]. One might have hoped that sound peer review would have recognized the breadth of his research rather than maintaining a narrow focus on one particular published article, not to mention overlooking the strengths of Sarantakos's research [35] as well as its limitations, when his critics' comments received peer review prior to publication. More information on Sarantakos's research program with same-sex families is available elsewhere [5].

Serious Problems with Methodological Weaknesses Are Often Disregarded to Favor Politically Popular Research Outcomes

Why do we encourage children to engage in athletic training or sports? I would propose that one reason is to help them learn to live with rules of the game that members of all teams are expected to follow despite the competitive nature of the games. As an adult researcher, I was trained to believe that methodological rules were to be followed by scientists regardless of their political values. Now some are telling me things like "If you are not cheating in sports, you are not trying hard enough". Maybe that attitude has infected science. I have seen so many weak or improper methodologies accepted not only as valid but as methodologically superior that under normal circumstances (non-controversial research) would likely not have been tolerated or published. To make this point, I will describe sixteen examples of research studies on controversial topics that were published, usually in top tier journals, using neutral labels to make them non-controversial. I ask the reader how many of these examples, assuming rigorous peer review, should have been published, or whether they featured methodologies sound enough to be used to inform courts and government officials about needed policy changes. Please note that, to protect the guilty, I have hidden the identity of the studies and, in some cases, modified the details for that purpose as well. In one example, I combined the errors of two distinct studies into one case.
1) Example one: unbalanced designs. Groups A and B are compared on an outcome variable Y; Group A has 4 cases and Group B has 500 cases. The non-significant result is interpreted as proof that Group A is equivalent to Group B rather than as an artifact of the unbalanced design or very low statistical power. In this case, for there to have been a statistically significant result using a two-sided Fisher's Exact Test, if Group A showed a 25% rate, Group B would have had to show a rate of one percent or less. Methodological rules violated: use balanced designs; use larger samples.
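The power problem in example one can be made concrete. The sketch below implements a two-sided Fisher's exact test from first principles (using the usual "sum the probabilities of tables no more likely than the observed one" rule); the 4-versus-500 tables are invented to match the hypothetical design above. With only 4 cases in Group A, even a 25%-versus-2% difference fails to reach significance.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]."""
    (a, b), (c, d) = table
    n1, n2 = a + b, c + d          # row totals (the two group sizes)
    k, n = a + c, a + b + c + d    # first-column total, grand total
    denom = comb(n, k)
    p_obs = comb(n1, a) * comb(n2, c) / denom
    # Sum the hypergeometric probabilities of all tables as extreme as,
    # or more extreme than, the observed one.
    p = 0.0
    for x in range(max(0, k - n2), min(k, n1) + 1):
        px = comb(n1, x) * comb(n2, k - x) / denom
        if px <= p_obs * (1 + 1e-9):
            p += px
    return p

# 1 of 4 (25%) in Group A vs 5 of 500 (1%) in Group B: barely significant
print(fisher_exact_two_sided([[1, 3], [5, 495]]) < 0.05)   # True (p ~ 0.047)
# 1 of 4 (25%) vs 10 of 500 (2%): the unbalanced design misses the gap
print(fisher_exact_two_sided([[1, 3], [10, 490]]) < 0.05)  # False (p ~ 0.085)
```

This is why the non-significant result in the example is evidence of low power, not of equivalence.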
2) Example two: using dead persons to represent stable couples. The stability of romantic relationships is studied longitudinally over four years. The study counted as "stable" 96 couples in which one member of the couple died during the four years of the study, sometimes in year two or earlier. The use of such deceased persons is not mentioned in the study as a limitation. Methodological rules violated: do not mislead readers, even by omission, about the nature of your sample, especially with respect to sample attrition; clarify what is meant by concepts such as "stable" so the study can be replicated accurately.
3) Example three: comparison groups should be clearly defined with mutually exclusive membership criteria. Groups A and B are being compared on child outcomes. Group A consists of 44 families that clearly belong only to Group A. Group B consists of 44 families, but of those 44, at least 26, possibly 27, might actually belong to Group A. Although 90% of the results favored the children of Group A, with effect sizes as large as 0.27, despite the lack of clarity of group membership the results are treated as proof of equivalence between Groups A and B. In a different study, there are three groups, A, B, and C. Groups A and B are compared, but each group contains 10% Group C members, though Groups A (90%) and B (90%) could have been compared without the Group C members. Methodological rules violated: report effect sizes; be sure comparison groups are mutually exclusive in their membership and not contaminated with members of other groups.
4) Example four: comparison groups should be equivalent in terms of selection effects, design effects, or other potential biases in the research protocols. Groups A and B are being compared. Group B is a subsample from a random, national study in which participants were blinded to the research objectives, as were the original researchers leading the national study. Group A members, as well as the lead researchers of this study, were not blinded to the nature of the study; Group A was a convenience, non-random sample. The data from the two groups were collected at least ten years apart. No attempt was made to control for volunteer or selection effects, cohort effects, or social desirability response bias. Groups A and B are compared as if the pre-existing group differences were not meaningful or important limitations. Methodological rules violated: both sides of a study should be blinded equally; both sides of a study should be based on random data; where likely, social desirability effects should be measured and controlled statistically; cohort effects should be taken into consideration.
5) Example five: comparison groups should be equivalent in terms of background demographic characteristics and mental health characteristics. Groups A and B are being compared. Group A households have an average annual income of over $200,000, and most have hired full-time in-home childcare for an average of 1.3 children. Group B households have an average annual income of $70,000, and most have limited childcare for an average of 3.4 children. The mental health of the Group B mothers is significantly and substantially worse than that of the Group A mothers, while parental stress levels are significantly higher for the Group B mothers. Without statistical controls for any of the group differences (e.g., per capita household income, parental stress, mental health), Groups A and B are compared, and the results being non-significant, a conclusion is drawn that Groups A and B are equivalent in terms of child outcomes. Methodological rules violated: pre-existing group differences should be controlled through random assignment to groups or, if random assignment is not possible, through statistical controls.
6) Example six: using theory poorly when evaluating models statistically. Theory predicts that variable A predicts variable B, which predicts variable C, in logical causal order. Research finds that the three variables are correlated as expected. However, if one predicts C from A, controlling for B, a non-significant result is obtained. Had the indirect effect of A on C been tested, it would have been found to be statistically significant. Rather than concluding that the effect of A on C is fully mediated through variable B, it is concluded that variable A is not an important factor in understanding or predicting variable C, via direct or indirect effects. Methodological rules violated: theoretical models should be evaluated statistically based on sound causal theory, looking at both direct and indirect effects; endogenous (mediating/intervening) variables should not be used as control (exogenous) variables.
7) Example seven: overuse of control variables should be avoided. An outcome variable Y is predicted from a variable X. The relationship is substantial (effect size, 0.36) and statistically significant. After the researcher controls for 77 other variables, the relationship is still strong (effect size, 0.30) but no longer statistically significant (p < 0.06). In the same model, a control variable with an effect size of 0.31 is statistically significant (p < 0.05). The conclusion is drawn that variable X has no effect whatsoever on variable Y. Notably, in the first six models using fewer control variables, variable X remained statistically significant; only in the seventh model, with the most control variables, did it become non-significant. Methodological rules violated: in order to prove the null hypothesis, one should not simply keep adding control variables until the desired null effect is achieved; effect sizes should not be ignored, especially when levels of significance are in the vicinity of 0.05.
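One reason the pattern in example seven occurs is mechanical: each control variable consumes a degree of freedom, so a (partial) correlation of roughly the same size is tested with a much weaker test. The sample size of n = 100 below is a hypothetical choice; the effect sizes mirror those in the example.

```python
from math import sqrt

def t_for_partial_corr(r, n, k):
    """t statistic and degrees of freedom for a partial correlation r
    estimated from n cases with k control variables (df = n - k - 2)."""
    df = n - k - 2
    return r * sqrt(df) / sqrt(1 - r * r), df

for r, k in [(0.36, 0), (0.30, 77)]:
    t, df = t_for_partial_corr(r, 100, k)
    print(f"r = {r:.2f} with {k:2d} controls: t = {t:.2f}, df = {df}")
# r = 0.36, no controls:  t ~ 3.82 on df = 98 (critical value ~ 1.98)
# r = 0.30, 77 controls:  t ~ 1.44 on df = 21 (critical value ~ 2.08)
```

A nearly unchanged effect size paired with a collapsing t statistic is exactly the signature described above: significance was lost to spent degrees of freedom, not to a vanishing effect.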
8) Example eight: the ratio of cases to model variables should be at least 5:1. A researcher has a study with 153 cases. Y is predicted from X using 68 control variables. Even though no one recommends using statistical analyses when the ratio of cases to variables falls below 3:1 (5:1 or more is preferred), the results are deemed extremely important for legal and policy development. Methodological rules violated: the ratio of cases to variables should be at least 5:1; control variables should not be overused.
9) Example nine: speculation should not trump data when assessing research. A study finds differences between Groups A and B involving effect sizes of 3.0 or greater. Some unmeasured differences between the two groups are mentioned by the researcher but not controlled (since they had not been measured). An independent researcher shows that unless the unmeasured factors had effect sizes of 2.0 or greater, those factors could not completely explain away the group differences. Nonetheless, a critic concludes, on the basis of speculation alone, that any differences between the two groups were due entirely to the other factors. The critic says nothing about the effect sizes involved nor about the fact that some differences favored each group, regardless of the unmeasured factors. Methodological rules violated: discuss effect sizes when evaluating research; consider both positive and negative outcomes of research; speculation alone should not be used to entirely discredit research, especially research featuring medium to large or greater effect sizes.
10) Example ten: statistical tests should match theory. A researcher hypothesizes that variable Y should have a nonlinear, quadratic relationship with variable X. The data pattern of the 46 cases fits the hypothesized nonlinear relationship. The researcher tests the pattern with a zero-order (linear) correlation, finds a non-significant result, and concludes that variables X and Y have no relationship whatsoever. Had the researcher tested for a quadratic effect, a significant result would have been obtained. When a critic points this out to the journal's editor, the editor refuses to publish a comment or correction, allowing the incorrect result to stand. Methodological rules violated: statistical tests should match the theory (i.e., linear tests for linear theory; nonlinear tests for nonlinear theory); published errors should be correctable through errata or comments.
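Example ten is easy to simulate. With a U-shaped relationship, a linear (zero-order) correlation sees almost nothing, while correlating the outcome with the squared predictor recovers the relationship. The data below are simulated purely for illustration, with 46 cases as in the example.

```python
import random
from math import sqrt

def pearson_r(xs, ys):
    """Zero-order Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
xs = [(i - 22.5) / 10 for i in range(46)]        # 46 cases, symmetric about zero
ys = [x * x + random.gauss(0, 0.3) for x in xs]  # quadratic (U-shaped) pattern

print(pearson_r(xs, ys))                    # near zero: the linear test misses it
print(pearson_r([x * x for x in xs], ys))   # large: the quadratic test finds it
```

The same data thus yield "no relationship" or a strong relationship depending entirely on whether the test matches the theory.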
11) Literature reviews should accurately reflect the research studies reviewed. A researcher compares Groups A and B on six variables. For two variables, the results are not significant. For four variables, the results are significant and involve medium to large effect sizes favoring Group B. A scholar doing a literature review discusses the research and concludes that most of the results were not significant and that those that were significant favored Group A, without discussing any of the effect sizes or their specific direction. In another situation, a reviewer concludes that there are no differences between two groups of children; however, four of five studies show higher levels of substance use among children from Group A, and a meta-analysis of the five studies yields a significant effect, with significantly higher levels of substance use for children from Group A. An amici brief before the U.S. Supreme Court argues that the research evidence is clear that there are no differences in substance use between children from Groups A and B. Methodological rules violated: discuss effect sizes; accurately represent the outcomes of research when conducting a literature review.
12) Beware of the potential effects of multicollinearity in multivariate models. A researcher is comparing child academic outcomes across two groups of parents in a longitudinal study. From the start of the study to fifth grade, one group of parents is 95% stable, while the other group includes no couples whatsoever that remain stable through fifth grade (i.e., the group factor is highly correlated with stability/instability). The researcher predicts the child outcomes from group and from parental stability, finding no group difference effects once stability is controlled. The researcher does not report in the article what the different stability levels were for the two groups of parents. Methodological rules violated: report descriptive results for key control variables, as well as for demographic differences; when a control variable is nearly 100% correlated with an independent variable, be careful that results are not biased by multicollinearity; do not treat an endogenous (mediating/intervening) variable as if it were a control (exogenous) variable.
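The multicollinearity problem in example twelve can be quantified with the variance inflation factor, VIF = 1/(1 - R²), where R² is the squared correlation between the group factor and the control (here, parental stability). The correlations below are hypothetical, chosen to show how quickly the inflation grows as the two predictors become interchangeable.

```python
def vif(r):
    """Variance inflation factor for a predictor correlated r with the control."""
    return 1 / (1 - r * r)

# Hypothetical group-stability correlations, from modest to near-perfect
for r in (0.50, 0.90, 0.98, 0.999):
    print(f"r = {r}: VIF = {vif(r):.1f}, standard errors ~{vif(r) ** 0.5:.1f}x larger")
```

When the group factor and the stability control are almost interchangeable, the inflated standard errors virtually guarantee a "no group difference" finding, whatever the data say.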
13) Recognize that the conventional 0.05 level of significance is not sacred. A book is published on children of lesbian mothers. Because of the small sample size, a statistical significance level of 0.10 is adopted rather than the more conventional 0.05 level. When discussing the results in the book, a conservative scholar accepts the authors' decision to use a 0.10 level of significance, to maintain consistency when discussing the content of the book. His comments are "discredited" in a legal trial on the basis that no reputable social scientist would ever use or report significance levels other than 0.05, even though the American Psychological Association recommends reporting all levels of significance ( [69], p. 34) and the critics themselves had done so [70]. Methodological rules violated: adapt levels of significance in order to achieve greater statistical power if warranted for small samples; do not confuse "conventional" with methodologically "correct" or "incorrect", "sound" or "unsound".
14) Consider the types and implications of missing data. A study is conducted with adolescents. For some analyses, over seventy percent of the respondents did not answer the survey questions. Missing data are not assessed in terms of being missing completely at random, missing at random, or not missing at random, nor are the implications of such a large amount of missing data discussed. In another study, missing data (of unknown type) amount to only 50% but reduce the sample size for analysis to only 20 cases. Methodological rules violated: discuss the types of and limitations involved in missing data in one's research ( [69], p. 33); consider the impact of extensive missing data on sample size and the associated reduction in statistical power of one's analyses.
15) Report results accurately. A study compares two groups. The claim is made that none of the comparisons are statistically significant. In fact, some of the results were statistically significant, with effect sizes as large as 0.54; no effect sizes were reported, however. In another study, it is claimed that a result was not statistically significant when the result actually was statistically significant. Methodological rules violated: report statistical results accurately; report effect sizes.
16) Do not treat different things as if they were the same. A study purports to be a study of lesbian mother families. In some families, a child was born into a lesbian family and has lived with both lesbian mothers for up to nine years. In other families, the mother did not begin a lesbian relationship with another woman until the child was over nine years old, and the child has lived with two lesbian mothers for only a few months. In the statistical comparisons, these families are treated as if they were the same (stable lesbian mother families). In another study, a few lesbian families involved two stable lesbian mothers and a child from birth; others involved a child who lived with two lesbian mothers for a few years; others involved a child whose lesbian mother's partner never lived with the child. The "lesbian" families are treated as if they were the same, regardless of family history. One study receives extensive criticism for this, the other does not (guess which study featured a conservative researcher?). Methodological rules violated: do not overlook family history when defining family types; apply similar standards of research criticism to studies, regardless of the political views of the researchers.
17) Conclusion to section 2.1.3. Such are the kinds of methodological problems that seem to pass muster with scholarly peer review and are deemed strong enough, and of enough scholarly merit, to be used as sound scientific evidence for liberal causes before the U.S. Supreme Court. In some amici briefs one even hears about research such as that above conforming to the highest standards of social science research. Really? As a conservative, I must ask whether any of the above studies would ever have been accepted in virtually any journal if the subject matter were not politically sensitive and liberal bias were not involved. Many of these methodological problems clearly violate the research principles recommended by the American Psychological Association [69]-[72]. The normal result is that if a conservative scholar critiques the above sorts of methodological problems, the comments are dismissed as biased, unfounded, and a consequence of the scholar's personal values rather than of any sound reasoning, logic, or methodological expertise. The situation is so common that this author once wrote a parody of how one could prove that tobacco use was harmless (or maybe even good for your health) if one were allowed to make the same sorts of methodological errors [73]. One of my concerns as a scholar, a concern shared by others [1], is that once the public catches on to such methodological nonsense being passed off as sound, if not the best, scientific evidence, they may lose any faith they might have had in the social sciences.

Disadvantages of Being a Conservative Scholar
Jussim [74] has eloquently discussed how liberal privilege is enjoyed by most social scientists, a situation that Redding [75] has described as "prejudice and discrimination, straight up" (p. 512). Sadly, I can cite many incidents in my academic career as a conservative Republican that match Jussim's critique (although I began my career as a moderate Democrat). I will list some examples under several themes.

Rejection by Students
In my undergraduate and graduate classes, I often critique research as a way to help students understand how to interpret research or how to do research properly. However, in one case, some students dropped my class because I exposed some medical research as corrupt; the students didn't feel that an undergraduate class was the place for learning such things. In another class, I critiqued several journal articles, but one student took exception to my critique of one of the articles and complained to my department head, who sided with the student and then told me that because of this one complaint on one day, I would receive a substandard rating for the entire year in all areas (teaching, research, grantsmanship, and service), no matter how well I had done so far or would do for the rest of the year. And I was criticized for not being receptive emotionally to this sort of treatment. Speaking of turnabout [1] [74], how would you as a liberal feel if the same thing happened to you? Would you feel such treatment was fair? Another time, students at a rally on campus called for me to be fired for presenting research they didn't like to the local city council. Another time I was invited to present a lecture, a pretty vanilla one, at a major university in California, but my reputation had preceded me, and I noticed how most of the students in the lecture hall were glaring at me with what appeared to be hatred even before I had said anything.

Rejection by Other Scholars
One time I had been invited to submit a paper to a journal as part of an issues debate. I prepared the paper, and it was accepted by the editor as an article [76] that was published as part of the larger issue. Later, however, political pressure on the journal led the editor to retract the entire issue, for all authors on both sides of the debate. Since the liberal authors had submitted reprints of previously published articles, which could not truly be retracted, the retraction of the entire issue amounted to a retraction, or rather censorship, of only the conservative articles. Another time, I was promised that if I presented a paper at a conference, a law journal would publish the papers of all of the presenters, whether liberal or conservative. Once the editor of the journal realized that the issue would therefore include articles from conservatives, the idea of the special issue was cancelled for all potential authors, although I think some of the liberals' articles may have been published later on an ad hoc basis. At a national conference, I dared to raise methodological issues with a presenter's research. That led a scholar to stand up and denounce me for knowing nothing about any kind of research, which was pretty remarkable given that I had over 200 publications in refereed journals at that point. Afterwards, there was a reception at which no one dared sit with me or allowed me to sit with them, lest they fall under the same dark cloud of denunciation. Fortunately, the organization's president and a few colleagues who had done military research (as I have also done) eventually joined me, the academic "outcast", at a table, which was very reassuring and a class act in my opinion. On more than one occasion, I have detected substantial errors in published articles and have attempted to publish a comment or correction in the relevant journal. Many times, editors refuse to accept any such comments or to publish corrections of any form whatsoever. One time, a correction was published, with even more errors than I had detected [77], but my comments were only accepted elsewhere [78]. On several occasions, I have been lectured by others on the basis of my disagreement with policy opinions rendered by professional social science organizations, along the lines of "How dare you disagree with organization XYZ!" It is difficult for some folks to realize that professional organizations do not always have a corner on all truth, or even science in some cases, especially when politically hot topics are involved. Some such organizations seem to me to have become more dogmatic in some ways than many religious organizations.

Rejection by Lawyers
At one deposition, I found it interesting that I was not allowed to take restroom breaks when I needed them; I had to ask permission of the lawyers, who sometimes (on both sides) denied such privileges. It brought back memories of elementary school, when a teacher once denied me permission and I drenched myself, much to the amusement of the rest of the class. I have had lawyers grill me on my religious beliefs, which is an interesting exercise when you are not even sure of them yourself (in some details at least). It is even more interesting when such attempts to discredit you have been deemed illegal by state law, but such minor matters mean nothing to the lawyers (of either side). I have read of other conservative colleagues getting similar treatment, such as being asked by a lawyer whether homosexuals were going to hell (how would a scholar know the empirical answer to that question?). If you, as a conservative, publish an article in a lower-tier journal, you may be criticized by lawyers as if your research were useless, even though there is evidence that journal tiers and citation rates do not always correspond [5] [79]; yet if an article [25] published in a lower-tier journal supports a liberal objective, seldom will you hear those scholars or their research criticized by lawyers on that basis. I have had a legal association challenge my right as a journal editor to publish controversial comments or research articles. I have also been rejected by conservative lawyers for refusing to say bad things about homosexuals. Once I lost thousands of dollars because I would not say that research supported certain stereotypes about homosexuals, but my view was that if you will say anything for money, you lose your academic credibility. Another time, an ACLU speaker at a national conference said that I had betrayed the conservative opponents of same-sex adoption at a trial. My intent was not to side with anyone per se but to present evidence of some of the methodological problems associated with same-sex parenting research and some trends in the available data, of which there was relatively little with respect to gay fathers. As for the specifics of the case, I was not informed of them before or during the trial, so I declined to comment because I had no basis to judge. (As a side note, one of the gay fathers, a plaintiff in that case, defended me on an internet blog, telling some of my critics to back down since my testimony had done their family little to no harm; a class act on his part.) One time I was instructed by lawyers for a deposition not to bring anything with me: no books, no articles, no notes, no smartphone, no laptop computer, no internet connection. This frustrated both me and the other side's lawyers, because it was impossible for me to answer many of the questions without any supporting materials or devices that could connect me to them. In another case, lawyers asked me to evaluate matters well beyond my area of expertise, and I declined to do so, even though that may have cost me thousands of dollars. The net effect has been that most lawyers, regardless of their positions, hesitate to ask for my expert testimony, as my opinions seem to be unpredictable rather than ideologically based. From my perspective, I am not trying to be ideological or unpredictable; I am just trying to stick with what I can defend empirically, which means I often "see" things in gray rather than the black or white that many lawyers may prefer, since their goal is binary in nature (win or lose the case). Other professors, regardless of their political orientation, who wish to serve as expert witnesses need to keep such issues in mind as they negotiate how they will coordinate matters with their legal team.

Rejection by Potential Employers
Early in my career, I interviewed for a position in Missouri. I was grilled extensively by the faculty in an attempt to determine my "biases". My defense, that "everyone has biases; you need to be aware of them and not let them get the better of you", was not well received (I did not get the job). Another time, I was asked during an interview how I could be sure that the questions in my surveys about premarital counseling were being interpreted in the same way by all respondents, this question coming from a professor of post-modern theory. I said that I was not sure; that is part of the error and uncertainty involved in doing research. I went on to say that it reminded me of when I was a boy and went trout fishing in Vermont with my uncle. The audience, I am sure, was wondering with great curiosity how a story about a little boy fishing was going to tie into the issue of using advanced statistics to evaluate premarital counseling programs. So, I said that I could not speak "trout" and the trout could not understand English, but the trout ended up in my frying pan, not I in his. Which was to say that I must have understood something about trout and trout fishing, some small degree of shared variance, even if not everything. The graduate students in attendance found my response hilarious, which embarrassed the professor who had asked the question (no job there either). Another time I was told by a potential employer that I would have to censor my research to make it conform to that church's and the university's religious doctrines. I responded that I could not do so in good conscience because it would destroy any academic credibility I might have (I did not get that job either, though the person who did get the position almost immediately adopted most of the recommendations for the new military research center that I had made during the interview process).

Rejection by Judicial Authorities
My experience in court cases as a conservative witness has not been pleasant. In one case, I decided, with court approval, to drive to the trial because that way I could carry several boxes of references. However, when I was only two hours away after a two-day trip, I was told that the case had been postponed; how peculiar that expert witnesses for the other side, even those coming from overseas, were told about this before they departed! At the same trial, the judge ordered both sides to present their materials in writing before the trial so each side could read what the other was going to say. The conservative side prepared materials weeks before the trial, per the state's instructions, as rough working papers, not necessarily at the publishable level (to save taxpayer funds). Those papers were farmed out to the experts on the liberal side, and at least one such expert spent 70 hours (being paid, if I remember correctly, $14,000) looking over the working papers for flaws, which should not be hard to find in working papers even with only a few hours of review. The liberal side presented nothing in writing in advance but did present materials in writing at trial, in direct violation of the court's orders; however, this unfair advantage was deemed by the court to be no problem, even though I had no chance to evaluate the other side's testimony so carefully in advance, and they mentioned numerous issues that would have been avoided if my papers had been of higher quality than working papers. At the same trial, I was not allowed to read what the other experts had said about my working papers, but I was asked what I thought about what those experts had said about me. All I could honestly say was that they were known as good scholars and had many publications, even though I had been told they had savagely attacked my credibility in the days prior. On the other hand, even the judge, as well as the media, was unable to correctly identify my academic rank (full professor for 18 years) and labeled me as an associate professor or even an untenured assistant professor, suggesting they were not paying much attention to the most basic information about the conservative expert witnesses.
In other trials, liberal expert witnesses have been allowed to make arguments with virtually no serious rebuttal (e.g., the various methodological rule violations previously mentioned were not brought up by the opposing attorneys, and funding sources were not discussed), yet the conservative witnesses were examined or cross-examined in extreme detail about their research and funding sources. To my knowledge, only conservative witnesses have been asked questions like "Have you read every word of every research article cited in your paper?" If you answer "no", that will be used to imply you are a careless scholar with no credibility. If you answer "yes", it will be said that reading every word is impossible, so you are not telling the truth and cannot be trusted as an expert witness. Another trick question is to ask you in a deposition something such as "When did Professor X invite you to submit a paper to conference Y?" You cannot recall for sure, because the event was six years ago, so you say you think it was December 2002. At trial, now seven years later, the same question is posed, and you, having thought the question at the deposition was a harmless one and having forgotten what you said before, say that you are not sure, but maybe it was January 2003. Now the discrepancy you have just generated, between two points in time a year apart concerning an event seven years prior, will be used to attack your academic credibility (how can a man with such an imperfect memory, who cannot even remember the year something so important happened, be credible for any sort of expertise!). On the other hand, if you refuse to give an exact time, you will be criticized for your blatant ignorance, terrible memory, or stubborn refusal to answer simple questions. My point is that such questions were not being asked to get at some key truth, but to set you up for the destruction of your credibility, no matter how you answer. They are, in my view, not intellectually honest questions, but rather traps to try to destroy your credibility (Luke 20: 20-26; Matthew 22: 15-22; Mark 12: 13-17). Questions often concern funding sources, as if that alone might negate the value of one's testimony. I do not think it is fair for a court to conclude that a conservative scholar funded by a conservative organization is automatically biased while a liberal scholar funded by a liberal organization is automatically not at all biased, as seems from news reports to have occurred at some trials. Some conservative witnesses were told they had to answer all questions "yes" or "no", but social science and statistics are fuzzy enough (they involve random error) and complex enough that simplistic yes-or-no answers often cannot convey the truth, which, after all, is what the expert swears under oath to try to explain (you do not take an oath to give only yes-or-no answers). All in all, I question how well conservative lawyers have tended to prepare conservative expert witnesses for the rigors of being an expert witness in a hostile environment. I had hoped that courts would be given the opportunity to see a fair and even-handed discussion of scientific issues, as in an academic debate forum, but instead I have seen more of an attempt to keep the truth from coming out and to discredit experts on whatever minor issues or technicalities will suffice. I think that courts should demand that expert testimony include effect sizes as well as significance levels, and sample sizes as well, for all key contrasts. When such expectations are not set in advance for all expert witnesses, you can have an expert who hides his use of 96 dead people and his comparison of four cases against hundreds of cases, as if that were a meaningful statistical comparison. I think courts also need to be aware of the limitations of research; for example, at the time of the trial concerning two gay men adopting children, there were probably fewer than a dozen studies on gay fathers, fewer that involved comparisons of gay and heterosexual fathers, possibly even fewer that involved developmental outcomes for children (other than sexual orientation), and probably none that involved two gay fathers who had raised a child together from the birth of that child through age 18 as part of a random, nationally representative sample [39]. In other words, no expert witness could truthfully say very much directly about outcomes for children as a result of long-term, stable gay fathering across a wide socioeconomic spectrum of gay-father families, including relatively low-income gay-father families; furthermore, any adverse findings could always have been attributed to prior heterosexual divorce or related problems such as stigma or discrimination.
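To illustrate the point above about pairing effect sizes with significance levels and sample sizes, here is a minimal sketch in Python. All of the numbers are made up for illustration; none come from any study cited in this article.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d with a pooled standard deviation: a standardized
    difference between two group means, reported alongside the n's so a
    reader can judge practical importance, not just statistical significance."""
    na, nb = len(group_a), len(group_b)
    mean_a = sum(group_a) / na
    mean_b = sum(group_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in group_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Hypothetical scores for two small groups (illustrative only).
a = [10, 12, 11, 13, 12, 14]
b = [11, 12, 10, 13, 11, 12]
d = cohens_d(a, b)
print(f"n = {len(a)} and {len(b)}, Cohen's d = {d:.2f}")  # ~0.40
```

By Cohen's conventions, a d of about 0.2 is small, 0.5 medium, and 0.8 large, so a result like this would be described as small to medium regardless of whether the associated p-value happened to cross 0.05 in samples this tiny.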
For their part, I think courts need to ask themselves why liberal expert witnesses so often outnumber conservative expert witnesses on many social issues. Is it because there are no conservative arguments? Is it because there are no data that might support conservative viewpoints? Or is it perhaps because of liberal bias and the intimidation, even illegal intimidation, often experienced before, during, and after a trial by potential conservative expert witnesses? There have been cases where conservative expert witnesses arrived at the location of a trial in time for the trial but were intimidated into not testifying at all (they have never told me why, but one can easily guess what sorts of things might have gone on).
Being a conservative scholar who tries to stick to what can be verified empirically seems to cross wires with both liberals and conservatives when they are more interested in political agendas than in genuine science. In other words, I have found that many people from both sides of the political spectrum want to use science to support what they already believe emotionally rather than allowing science to refute what they already believe, especially if their beliefs have strong emotional connections.

Advantages of Being a Conservative Scholar
Despite the disadvantages previously mentioned, I can think of at least ten advantages of being a conservative scholar. Basically, many of the advantages revolve around being less blinded by liberal ideology and freer to consider alternative viewpoints, theories, or ways of doing research than many other scholars. I can rest assured that if my conservative biases are too evident or damaging, they will be challenged under peer review. On the other hand, a liberal might get a paper published without any challenges to its evident or implicit biases, perhaps not even to weak methodology, as long as its conclusions were desirable from a progressive perspective.

It Is Legitimate to Seek Truth through Science
First, one advantage is that I come from a background that believes, though we see it through imperfect lenses and with our own moral imperfections, that there is objective truth to be discovered. Therefore, I have confidence that truth can be discovered. Jesus said, "You shall know the truth and the truth shall set you free" (John 8: 32), which presupposes that there is truth out there to be discovered. This provides me with great motivation to seek new discoveries, while some of my colleagues are so cynical about ever finding anything really true that they publish mainly to keep their jobs rather than from the excitement of the chase after truth. For me, it is fun to come to work each day, feeling that today might be the day I discover something true and meaningful. Of course, this view differs considerably from post-positivistic thinking, in which everything, even gender, is socially constructed and we differ mostly in terms of our own narratives, so that "truth" becomes entirely relative. I have conservative colleagues who differ with me on this, thinking that truth belongs only to the realm of religion and philosophy, not science.

Importance of Conducting Research Very Carefully and Not Dogmatically
A second advantage is that the risk of attack from liberals virtually requires that I stick very close to my data and be very careful in my statistical analyses. I know in advance that if I cannot defend my science, I run a serious risk of getting "grilled" for it, whether in academic conferences or in courts of law (consider what happened to Professor Regnerus recently [80], even though similar research went unchallenged for years [81]). Not that I am unwilling to be discriminated against, but I do not want it to be deserved on the basis of my own incompetence or foolishness, if I can help it. So, I am very careful both to try to produce accurate statistical results and not to over-interpret them. Wilkinson et al. [71] noted that "Confession should not have the goal of disarming criticism" (1999, p. 602) and that acknowledging limitations is "for the purpose of qualifying results and avoiding pitfalls in future research" (p. 602). In other words, merely admitting limitations without using those limitations to limit the policy relevance or legal usefulness of your study is not appropriate; for example, results should not be generalized from a nonrandom sample to an entire population, and if you do not control for appropriate forms of social desirability, you should not accept respondent opinions at face value, especially if respondents have reasons to exaggerate their responses. Admitting limitations should not be a way to immunize your research from criticism and/or to make it more applicable to public policy or law. Some disagree: Herek [36] has argued that (liberal) research with major limitations should be considered relevant for public policy decision-making; however, he does not want research he dislikes (i.e., conservative research [35] [39] [40] [75]) to be taken seriously for public policy! I continue to make a distinction between religious-type dogma and science. A dogma may contend that something has been, is, and will be true in all places, under all conditions, and at all times, but science usually allows for variation, even anticipates variation. Without variation, one might not even be able to do much with statistics, and without statistics, social science at least would be very limited in its scope. Thus, when social scientists try to claim that no study ever, by any author, in any place, at any time, or under any conditions (not even as a matter of random statistical fluctuation) has found results different from "X", I worry that some confusion is occurring between science and dogma (often, I suspect, to please lawyers who want to see the world in black and white rather than the shades of gray that I usually observe in reality).

Opportunities to Find Interesting Results in Unexpected Places
Third, because truth can be found almost anywhere, I have a great deal of fun finding it in unusual places and at unusual times. After all, Jesus said "look around…" (Matthew 7: 26). For example, some years ago, some of my graduate students and I looked into the survival rates on the RMS Titanic and found that middle-class passengers were the ones who most closely followed the rule "women and children first", contrary to the media idea that the rich men left the poor women and children to drown. We also looked at Pearl Harbor and found some statistical evidence suggesting that the U.S. government knew more about the upcoming Japanese attack than has been acknowledged [82]. More recently, a graduate student and I looked at the survival rates of different classes of passengers on the Korean ferry MV Sewol. We found that adults on that vessel had a survival rate as high as the crew's, contrary to media reports that might have made one think the crew abandoned the ship and left everyone else to drown. I have often found that in areas of either great controversy or great political correctness (or great apathy), there are more opportunities to find truth in unexpected ways, because those in power, or on the more powerful side, have been effective at suppressing ideas or research (Romans 1: 18) that may be contrary to what those in control want to be heard. In the case of apathy, it may be that few have ever looked into what is going on in such an area, so there is often more to be uncovered than usual.

Willingness to Admit to Mistakes
Fourth, as a conservative who believes that all persons can make mistakes, including myself, I have the freedom to be wrong and to admit mistakes without feeling that I have betrayed the cause. As the Apostle John said, if we say we have no sin or never make mistakes, we are lying (I John 1: 8). Therefore, I remain open to, and even welcome, the opportunity of being proven wrong in what I have reported or said. This attitude feeds back into wanting to keep such mistakes to a minimum, since I feel an obligation to "fess up" if they occur. For example, some of my internet critics are fond of pointing out that one book [83] I used in one of my articles [3] on the sexual orientation of children of LGBT parents oversampled for LGBT children of LGBT parents, assuming that my error in not noticing that would entirely discredit the study. What my critics did not realize was that in the same article [3] I noted that even if 20 of my codings of children as LGBT were incorrect and those children were actually heterosexual, my results for that part of the article would have remained statistically significant. Thus, even if I had recoded all of the LGBT children in that book as heterosexual, or had I deleted the book entirely from the analysis, the results would have remained statistically significant.

Conservative Viewpoints May Lead to Different but Useful Ways of Evaluating Data
Fifth, I think it helps to have a different worldview from which to critique and evaluate theory and research that tends to come from a white, male, liberal, Western perspective. In the case of the Titanic, for example, I do not think there are any liberal Western social science theories (e.g., social exchange, systems, feminist, functionalist, developmental, Marxist, post-modern) that would have predicted the rate of survival as a function of social class in a way that corresponded to the actual facts (a nonlinear pattern between survival rates and social class), whereas there already was a sociological explanation of sorts, from over two thousand years ago, in Proverbs 30: 9 from that worldview perspective [82]. It may be that this is true of other worldviews, from outside the West, as well.
As another example, when I was a graduate student, one of my professors was making the case that people who held to a traditional sexual standard were intellectually challenged [84]. However, he had defined traditional as "no sex before marriage". I pointed out that rather than being the least cognitively complex sexual standard, it could be seen as the most complex, because it featured the most prerequisites for having sex (consent, openness, love, personal commitment, public commitment) relative to his view of the other standards [85]. I also think that as a conservative I am more open to the possible long-term risks of instant gratification, recalling that Luntz [86] stated that "two-thirds (66 percent) of nonreligious Americans agree with the statement 'If it feels good, do it,' despite its selfish, dangerous undertones. By comparison, fully 71% of religious Americans disagree with the concept of instant gratification. What we have here is a chasm between the value systems of these two American camps" (p. 261). Delay of gratification, which is tied to delay discounting, is a key component of modern civilization ([87] [88], pp. 38-43) and an important aspect of child socialization.

Recognition of Complexity and Requirement for Depth of Analysis
Sixth, my worldview suggests that life is complex and surface appearances are often misleading. Jesus told us (John 7: 24) not to judge by appearances but by the underlying truth. Therefore, I have all the more encouragement to dig deep to resolve scientific questions. Sometimes this means being willing to reject simplistic explanations of things [4]- [6]. For example, I have come across literature in which even hundreds of scholars have insisted that variable A was not related to variable B under any conditions whatsoever, such that no one had ever found such a relationship, in any journal, in any country, at any time, not even as a statistical artifact. It amazes me that anyone would be willing to take such an absolutist, dogmatic position, because it is much more easily refuted than a more cautious viewpoint.
Digging deep in practice means evaluating not only linear models but also nonlinear models, especially quadratic models in which extreme positions are riskier than moderate or centrist positions. Digging deep means evaluating interaction effects, on the theory that what works for one group may not work so well for another, because people and groups can differ in how they respond to the conditions of life. Digging deep means cross-checking published research for inconsistencies, as noted previously [78].
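As a sketch of the first point, here is a minimal Python example with simulated data (not from any study discussed here) in which the outcome is worst at both extremes. A straight-line fit largely misses the U-shaped pattern, while adding a squared term captures it:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated U-shaped relationship: "risk" rises at both extremes of x.
x = np.linspace(-2, 2, 41)
y = 1.0 + 0.8 * x**2 + rng.normal(0, 0.1, x.size)

linear = np.polyfit(x, y, 1)      # slope and intercept only
quadratic = np.polyfit(x, y, 2)   # adds the squared term

# Sum of squared errors for each model: lower means a better fit.
sse_linear = np.sum((np.polyval(linear, x) - y) ** 2)
sse_quadratic = np.sum((np.polyval(quadratic, x) - y) ** 2)
print(f"linear SSE = {sse_linear:.2f}, quadratic SSE = {sse_quadratic:.2f}")
```

The same logic extends to interaction effects: adding a product term such as x1 * x2 to a regression lets the effect of one predictor differ across groups, which is exactly the "what works for one group may not work for another" idea above.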

Dogmas or Viewpoints of Those in Power Can Be Incorrect
Seventh, my worldview suggests that people can develop entrenched but incorrect views of reality. Jesus ran into such problems with many religious and political leaders of his day, of course. Once I had a student who wanted to study abortion attitudes among religious students from religious high schools. I told her that she had to leave her own ideology about abortion behind in order to do a scientifically valuable study. I said that she had to be willing to test hypotheses no matter what the religious authorities might want to hear, and that those very opinions had to be subject to falsification, no matter how entrenched they were. What she found did not conform to what those authorities wanted to hear, and her willingness to rebuke them scientifically may mean she will never find employment within that particular religious system. If I were dealing, for example, with a Muslim graduate student, I would say that such a student would have to be willing to do research that had a chance of finding some aspect of Islam, even the Holy Qur'an itself, to be incorrect from a scientific perspective [89]. And the same should be true for a Jewish or Christian student, or even an atheist student, with respect to their own perspectives.

Importance of Giving Serious Consideration to Different Perspectives
Eighth, my worldview encourages looking seriously at both sides of any issue. This means that even if one side were "true" some of the time, or even most of the time, it might not be true all of the time, in all places, among all groups. In effect, the idea is that there is usually something valid in nearly any opinion, even one that is largely incorrect. For example, when Jesus met the Samaritan woman at the local well (John 4), he did not tell her she was totally wrong about everything but rather highlighted the things she was right about, even though he made it clear that he did not agree with her on everything. Rather than looking for points of disagreement, he looked for points of agreement and built their developing relationship on those, rather than burning bridges by focusing on their differences. In practice, this means I should not "write off" someone else's views but give them full consideration and be open to recognizing them as better than my own in at least some respects. This may be one of the more costly points, because I have lost funding from sponsors for being willing to look at both sides of a controversial issue and for refusing to assume that the "other side" was totally wrong. I have even been accused of being a traitor to conservative causes. However, I do not try to be loyal to anyone, but rather to the wisdom of looking at both sides as equally as I can, regardless of who might be offended by such an approach.

Refusal to Be Satisfied with Overly Simplistic Social Science Explanations
Ninth, I think that conservatives can contribute to theory development in what might seem to be new ways. For example, sexual minority stress theory is being used to explain discrepancies in health between gay youth and heterosexual youth [4]. The idea is that sexual minority youth feel stigmatized by peers and suffer as a result. However, might it not be possible that any stigma was caused not by sexual orientation per se but by known correlates (e.g., higher drug use, bringing weapons to school, juvenile delinquency)? I might not care if my teenage son had a gay friend, but I might be upset if he had a friend who was using cocaine, brought a loaded handgun to school every day, or was frequently in trouble with the law. If I told my son not to associate with the latter type of person, might not that person blame the rejection on his sexual orientation rather than on the drug use and the other issues? For example, Goldberg, Bos, and Gartrell [28] compared drug use (marijuana or hashish) by adolescents with lesbian mothers as a function of stigma reported by the adolescents; those who used those drugs monthly or more often were more likely to report stigma (52.2%) than those who used them less often (36.0%). Although the difference was not statistically significant, the effect size was in the small-to-medium range (0.31). Determining the causal direction here would not be easy, but my point is that a correlation does not have to signify a unidirectional effect; the effect might run in the opposite direction, or there might be reciprocal effects. To me it also seems too simplistic to think that all health discrepancies between any two groups could be explained away as a function of only one factor, regardless of what that factor might be.
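A note on the effect size just mentioned: the article does not say which measure produced the 0.31, but one common effect-size measure for a difference between two proportions is Cohen's h, based on an arcsine transformation. Plugging in the reported percentages gives a value in the same small-to-medium neighborhood; this is a sketch under that assumption, not a claim about how the original figure was computed.

```python
import math

def cohens_h(p1, p2):
    """Cohen's h: effect size for the difference between two
    proportions, via the arcsine (angular) transformation."""
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

# Proportions reported above: 52.2% vs. 36.0% of adolescents reporting stigma.
h = cohens_h(0.522, 0.360)
print(f"Cohen's h = {h:.2f}")  # ~0.33
```

Cohen's benchmarks for h mirror those for d (about 0.2 small, 0.5 medium, 0.8 large), which is consistent with the article's small-to-medium characterization even though this particular formula gives a value slightly different from the reported 0.31.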
Another example might be a reconsideration of theory regarding sexual orientation. Is it possible that for some persons sexual orientation is more a matter of behavioral opportunity than of sexual attraction per se? In other words, are there persons who will engage in sexual activity with whatever presents itself as an opportunity, regardless of the age or gender of the other? Perhaps they could be labeled "opt-sexuals", for opportunity sexuals. The key to this theory would be the use of boundaries regarding one's sexual behavior. Opt-sexuals would have very few boundaries. A homosexual man, for example, might have strongly closed boundaries vis-à-vis women or children but open boundaries vis-à-vis other adult men. A strongly religious person might have open boundaries only with respect to one partner, with closed boundaries for everyone else. Is it possible that homosexuals are sometimes blamed for the behavior of opt-sexuals who identify as gay but are functioning not so much on attraction as on liking sex with whomever it can be experienced?

Trying to Keep Emotions from Getting the Better of Our Cognitions
Lastly, it has been argued by some that decision-making is more an emotional process than an intellectual one [90] [91]. There is value in this viewpoint; even Adam and Eve (Genesis 3: 12-13) seemed better at using their cognition to justify a mistake afterward than at reasoning beforehand to avoid making a mistake driven by the strong emotional attractions of a particular choice they were offered. However, as a conservative, I still see value in trying to make decisions, particularly scientific decisions, more on the basis of intellectual issues than emotional ones. That is probably how I got into trouble with the scholar who said I knew nothing about research; my cognitive discourse represented an emotional threat to her family (thus, a defensive mama-bear response). This may go against human nature, according to some, and may seem insensitive to emotional needs, but I think it is a critical part of graduate education and of being a careful scientist, in social science or any other area of science. Though it is never easy, if anyone should be able to handle this challenge, it ought to be scientists.
Another aspect of this is that conservative scholars may be better able to detect confirmation bias in how research results are interpreted. As noted, a recent review of the literature on same-sex parenting [92] acknowledged one study [28] that had found higher rates of substance use among adolescents from same-sex families than among adolescents from heterosexual families; however, Manning et al. [92] appeared to minimize the importance of those results [28], stating that "at the bivariate level, adolescents from same-sex parent families have higher levels of occasional substance use, but similar levels of heavy substance use compared with children in the Monitoring the Future Data set" (p. 494). Did liberal bias obscure the problem that even occasional illegal drug use is, well, against the law, especially for children under the age of 18? Furthermore, Manning et al. [92] did not mention that several other studies have found higher use of illegal drugs among the children of same-sex families, as detailed elsewhere [5]. From a conservative point of view, one starts with several studies, which are then reduced to one study, whose results are then minimized as having little meaning or relevance; is this not a case of confirmation bias? I recognize that progressive scholars might fault me for looking harder for studies that reject the "no difference" hypothesis, but in the end, did not my "bias" help find more studies for consideration than were found or reported in a major review of the literature [92] designed to present the case for same-sex marriage before the U.S. Supreme Court?
Similarly, Biblarz and Stacey [93], in their extensive review of the literature, reported results for one study that found higher rates of relationship instability for lesbian mothers, but they did not discuss at least three other studies that featured similar results [94] [95]. As a conservative, I would probably not dare to conclude from one very small study (N = 14 lesbian couples) that lesbian mothers had more unstable relationships, but Biblarz and Stacey did so, stating that "Although research consistently indicates that such couples enjoy greater equality, compatibility, and satisfaction with their partners than their heterosexual counterparts, preliminary data hint that their relationships may prove less durable" (p. 11). While I have to credit Biblarz and Stacey with openness to finding unexpected (from a progressive perspective) results, my point here is that their review of the literature was far from complete, the same weakness found in the review by Manning et al. [92]. Another example: when they found a study in which sons of lesbian mothers scored more than a standard deviation higher on femininity than sons of heterosexual mothers (p. 14), they described the result as an indication of gender flexibility. With an effect size greater than 1.0, and with Cohen [23] labeling an effect size of 0.80 or greater as "large", it stretches my imagination to call such a huge difference mere flexibility. I might agree that a small to moderate effect size could suggest flexibility, but if a huge effect size is nothing but flexibility, then why not describe any huge effect size for any variable of interest in social science as merely a matter of flexibility?

Another example of manipulating words for good political effect was their discussion of research results indicating an emphasis on social conformity in children (p. 7). What had been found was that, in one study, heterosexual parents had expressed a higher value on teaching children "self-control", which has been predictive of better outcomes for children as adults [86]- [88]. From a conservative perspective, I would interpret teaching children better self-control as a positive parental goal, not a matter of oppressing them with "social conformity". From my perspective, this seems like an approach to research in which you start by ignoring the results of most of the studies whose outcomes you don't like, then minimize any adverse results of a study here or there (if you find only one study with adverse results, its findings can always be dismissed by other scholars as, after all, only one study), and finally, if you must, attach better-sounding labels to adverse results, magically wishing away any adverse political implications. It seems to me a recipe for political success, no matter what the actual research might show. I have to ask: would most scholars, even progressive scholars, accept such an approach to reviewing the literature in any other, less controversial area? To conclude, I think that being a conservative helps me do a more complete review of the literature and recognize possible attempts to marginalize research results that do not fit a progressive narrative very well.

Conclusion
While there are serious disadvantages to being a conservative scholar in today's academic environment, I think that conservative scholars may have some unique advantages in terms of intellectual flexibility and openness to new ideas that contradict culturally popular themes regarding the intellectual deficits of conservative scholars. My view of conservatism may be offensive to both liberals and conservatives, however, because I generally refuse to prejudge situations according to ideology and instead try to ferret out the facts as best I can, no matter what conclusions they might lead me to. I also think that some more open-minded liberal scholars might admit that at least some of the advantages I cite for conservative approaches to social science research could extend to all scientists, regardless of their political preferences.