A Conservative’s View from the Academic Trenches: Reply to Duarte, Crawford, Stern, Haidt, Jussim, and Tetlock (2015)

Abstract

Although conservative scholars may face a variety of forms of discrimination and other challenges in academia, as elaborated in the first part of this comment, they may also have a set of unique advantages that can facilitate more careful theoretical and empirical scientific work. In particular, they may be more sensitive to flawed methodologies in some areas of controversy, and in such areas these assets may be especially important.

Cite as:

Schumm, W. (2016) A Conservative’s View from the Academic Trenches: Reply to Duarte, Crawford, Stern, Haidt, Jussim, and Tetlock (2015). Journal of Behavioral and Brain Science, 6, 149-166. doi: 10.4236/jbbs.2016.64017.

Received 25 February 2016; accepted 9 April 2016; published 12 April 2016

1. Introduction

As a political conservative, I would like to supplement the discussion started by Duarte et al. [1] by commenting on some of their points and by also pointing out some of the advantages of being a conservative in today’s social science academic environment, an environment that often seems anti-conservative. This is important because the lack of diversity in social science may seem to make being an academic conservative an exercise in futility or a career without a future or on the wrong side of history [1] . Furthermore, conservative students may be seen as obstacles to diversity rather than those most able to accept it [2] . Therefore, I will discuss what some of the advantages may be for conservative scholars in academia currently. However, I will begin by acknowledging some of the disadvantages.

2. Important Points in Duarte et al. (2015)

2.1. Confirmation Bias

Confirmation bias is a very important matter, as the authors [1] noted. In particular, one statement was that confirmation bias could lead to “widely accepted claims that reflect the scientific community’s blind spots more than they reflect justified conclusions”. Research on lesbian, gay, bisexual, and transgender (LGBT) families is an area, in my opinion, with an abundance of such blind spots.

2.1.1. Topical Example: Parental and Child Sexual Orientation

There is no doubt that a majority of social scientists believe, for example, that parents’ sexual orientation has absolutely no correlation with their children’s sexual orientation, but there are dozens of research studies featuring substantial evidence to the contrary, as summarized elsewhere [3]-[5]. There are also several studies suggesting how parents might encourage their children to experiment with same-sex sexual behavior or to identify as LGBT [3] [4].

2.1.2. Biased Reviews

The authors [1] mentioned biased review processes. The bias of reviewers can be seen in several ways in controversial areas of research.

1) Example of how an unjustified estimate was incorrectly accepted as fact for decades in social science and legal articles in spite of presumably rigorous peer reviews

One example of how reviews failed involves more than fifty studies concerning LGBT parenting [6] that cited earlier works [7]-[11] stating as fact that as many as 14 million children were being raised by LGBT parents in the USA, although Raley [12] and Selekman [13] appeared to estimate something closer to 28 million children (assuming there were 14 million gay/lesbian parents in the USA and two children per family). However, the origin of that “fact” was not a scientific study but merely an unexplained estimate from a 1984 USA Today newspaper article [14]. Many of the studies did not even cite the USA Today article with the correct page number (3D was correct; 3 and 30 were common errors). Interestingly, Patterson [15], who was one of the first scholars to cite the 14 million estimate from the USA Today article in a top tier social science journal [11], stated that “widely repeated estimates… tend to put the numbers of children of lesbian and gay parents at between six and fourteen million” and then wrote that “Although these estimates have been widely repeated, no empirical studies are cited in connection with them. Hence, it is difficult to be certain about the origin of these figures or to evaluate their reliability” (p. 242). Did Patterson forget that she had cited the same estimates (with no empirical support) herself in 1992 [11] as if there had been substance to them? Regardless, Patterson [15] concluded that the 14 million figure might have been correct after all (p. 258). However, more recent studies [16]-[21] indicate that perhaps no more than 200,000 - 250,000 children are being raised by LGBT parents, certainly fewer than 500,000, a far cry from the tens of millions cited by dozens of social science and legal articles despite presumably rigorous peer review that should have “caught” such errors of fact. Peer review would also appear to have failed a “common sense” test: with approximately 60 million children in the United States in the mid-1980s, having 14 - 28 million children of gay parents would have implied that roughly 20% - 50% of all children were being raised by LGBT parents (seriously?).
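This common-sense check is simple arithmetic; the minimal Python sketch below (using the approximate figures quoted above, which are rough estimates rather than exact counts) makes the implied proportions explicit.

```python
# Rough "common sense" check on the 14 - 28 million estimates quoted above.
# The 60 million figure is the approximate number of U.S. children cited in the text.
total_children = 60_000_000

for estimate in (14_000_000, 28_000_000):
    share = estimate / total_children
    print(f"{estimate:,} children of LGBT parents -> {share:.0%} of all U.S. children")
# Prints roughly 23% and 47%, i.e., the 20% - 50% range described above.
```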

2) Despite peer review, citation frequency may not reflect article quality and may even reflect the opposite

Another flaw in peer review was apparent in my analysis of how often 12 studies on LGBT parenting had been cited as a function of the quality of their methodology; the worse the methodological quality of a study, the more often it had been cited (r = 0.58, d = 1.4, p < 0.05) [22]. After controlling for year of publication, the partial correlation remained large by Cohen’s [23] definition of effect size, at 0.42 (d = 0.93), though no longer statistically significant. If peer review were “working” well, one would expect that studies of higher quality would tend to be cited more often, not less often.
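For readers who wish to check the correspondence between the correlations and the d values reported above, one standard conversion, d = 2r / sqrt(1 - r^2), reproduces them; a minimal Python sketch using only the r values given in the text:

```python
from math import sqrt

def r_to_d(r: float) -> float:
    """Convert a Pearson correlation to Cohen's d via d = 2r / sqrt(1 - r^2)."""
    return 2 * r / sqrt(1 - r ** 2)

print(round(r_to_d(0.58), 2))  # about 1.42, i.e., the d of roughly 1.4 reported above
print(round(r_to_d(0.42), 2))  # about 0.93, matching the partial-correlation effect size
```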

3) Literature reviews (citation frequency) may reflect political acceptability more than research quality

As another example, which comes about as close to a natural experiment as one could hope for, three articles on lesbian mothering were published between 1979 and 1981 by some of the same authors, from the same institution, in the same journal, using the same data source and the same cohort [24]. Two of the articles [25] [26] reported favorable information about the lesbian mothers while one article [27] published adverse information. As of April 2016, according to Google Scholar, the two favorable articles had been cited a total of 174 times compared to 10 citations (of which four were by myself) for the third article, a ratio of over 14:1 per article once my own citations are excluded. Such a ratio is difficult to explain as anything other than a bias in favor of citing research more supportive of lesbian parenting, given that the authors, journal, academic institution, time frame of publication, and data were essentially the same. More recently, two studies [28] [29] by the same authors (Goldberg, Bos, Gartrell), using the same data set (the U.S. National Longitudinal Lesbian Family Study), were published in the same year (2011) in highly ranked journals (Journal of Health Psychology, Archives of Sexual Behavior); one article reported adverse outcomes (higher drug use) for children of lesbian mothers while the other reported mixed, mostly favorable results. According to ProQuest as of April 5, 2016, the former article had been cited 8 times while the latter had been cited 37 times, a difference significant (p < 0.001) by a one-sample chi-square test (chi-square = 18.6, df = 1). Google Scholar on the same date indicated 16 and 76 citations, respectively, also significant (p < 0.001) by the same test (chi-square = 39.1, df = 1). Thus, there is both proximal and distal evidence of bias in citation frequency based not on the quality of a study’s methodology but on its outcome relative to politically desirable objectives from a liberal/progressive viewpoint.
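The one-sample (goodness-of-fit) chi-square values above can be reproduced directly from the citation counts under the null hypothesis that the two articles from the same data set are equally likely to be cited; a minimal sketch using scipy, with the counts quoted above:

```python
from scipy.stats import chisquare

# Null hypothesis: the two articles are equally likely to be cited,
# so the expected counts are the observed total split 50/50 (scipy's default).
for source, counts in [("ProQuest", [8, 37]), ("Google Scholar", [16, 76])]:
    stat, p = chisquare(counts)
    print(f"{source}: chi-square = {stat:.1f}, df = 1, p = {p:.2g}")
# Yields values essentially matching the 18.6 and 39.1 reported above
# (any small difference reflects rounding), with p well below 0.001.
```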

4) My experience as a journal editor suggests that peer review reflects political bias more than scientific evaluation

As an editor of the peer-reviewed journal Marriage & Family Review, I can testify that the most common reviewer response to a controversial submission is that the reviewers take sides and their evaluations tend to reflect their political viewpoints, with liberals rejecting conservative ideas and conservatives rejecting liberal ideas. One time I published a critique [30] of editorials that I had written [31] [32] (I had requested and welcomed such critiques), only to be called on the carpet by a liberal legal organization that challenged my publisher (they did not contact me) to explain how someone on their hate watch list could have had a paper accepted by an apparently otherwise professional scholarly journal. I suppose I was supposed to feel intimidated. Fortunately, my editorial policy has been to publish research from a wide diversity of perspectives, both in types of theory and research methodology, not to mention author characteristics such as gender, age, race/ethnicity, religious affiliation, sexual orientation, or political views. Sometimes I am told by liberals that they will not accept research as credible unless it is published in a liberal journal; that virtually guarantees they will never have to consider research with conservative results as credible, regardless of its actual methodological quality. On the other hand, I think that Marriage & Family Review has more diversity in more ways than many journals led by presumably more liberal editors.

5) The focus of literature reviews is often too narrow despite peer review

One might hope that conscientious peer review would mean that scientists would be evaluated on the entire scope of their research, rather than on any one article, and that any one article would be evaluated in the context of the full scope of the author’s research program. The work of Dr. Sotirios Sarantakos, a now retired Australian professor, comes to mind. Sarantakos wrote favorably about gays and lesbians and was in favor of same-sex marriage [33] [34]. Yet he published a research study [35] in an Australian journal that found effect sizes as large as 3.75 (over four times the 0.80 that Cohen [23] indicated as a large effect in social science) between children of gay/lesbian parents and children of heterosexual parents. Not only did his critics [36]-[38] ignore Sarantakos’s larger area of research, they did not appear to understand the internal details of his article [35]. While they blamed the results on teacher bias and relationship instability, they did not explain how control variables could account for such large effects, nor how those other factors could be the only ones at work when at least one academic outcome in the study favored the children of same-sex parents, even granting that many of the same-sex parents had come from previous heterosexual relationships that had broken up and that some teacher bias may have been involved. None of his critics acknowledged the large size of many of the effects reported by Sarantakos [35]. Remarkably, almost no scholars other than Marks [39] and Allen [40] have recognized that Sarantakos had published a great deal of other work on same-sex families and their children [41]-[43]; his critics’ focus seems to have been more on discrediting him on the basis of one article [35] than on first considering the entire scope of his social science research on both heterosexual [44]-[68] and same-sex families [35] [41]-[43]. One might have hoped that sound peer review of his critics’ comments, prior to their publication, would have recognized the breadth of his research rather than maintaining a narrow focus on one particular article, and would have weighed the strengths of Sarantakos’s research [35] as well as its limitations. More information on Sarantakos’s research program with same-sex families is available elsewhere [5].

2.1.3. Serious Problems with Methodological Weaknesses Are Often Disregarded to Favor Politically Popular Research Outcomes

Why do we encourage children to engage in athletic training or sports? I would propose that one reason is to help them learn to live with rules of the game that members of all teams are expected to follow despite the competitive nature of the games. As a researcher, I was trained to believe that methodological rules were likewise to be followed by scientists, regardless of their political values. Now some are telling me things like “If you are not cheating in sports, you are not trying hard enough”. Maybe that attitude has infected science. I have seen many weak or improper methodologies, which under normal circumstances (non-controversial research) would likely not have been tolerated or published, accepted not only as valid but as methodologically superior. To make this point, I will describe sixteen examples of research studies on controversial topics that were published, usually in top tier journals, using neutral labels to keep the examples non-controversial. I ask the reader to consider how many of these examples, assuming rigorous peer review, should have been published at all, let alone featured methodologies sound enough to inform courts and government officials about needed policy changes. Please note that, to protect the guilty, I have hidden the identity of the studies and have, in some cases, modified the details for that purpose as well. In one example, I combined the errors of two distinct studies into one case.

1) Example one: unbalanced designs

Groups A and B are compared on an outcome variable Y; Group A has 4 cases and Group B has 500 cases. The non-significant result is interpreted as proof that Group A is equivalent to Group B rather than as an artifact of the unbalanced design or very low statistical power. In this case, for a two-sided Fisher’s Exact Test to yield a statistically significant result when Group A shows a 25% rate on the outcome, Group B would have to show a rate of about one percent or less. Methodological rules violated: Using balanced designs; using larger samples.
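To see how extreme this imbalance makes the comparison, here is a minimal sketch using scipy’s Fisher’s Exact Test with the hypothetical group sizes above (4 vs. 500) and a 25% rate in Group A; the Group B rates are illustrative values, not data from any actual study.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 tables: Group A has 1 of 4 cases (25%) with the outcome;
# Group B has 500 cases with varying numbers of outcome cases.
a_yes, a_no = 1, 3
for b_yes in (25, 10, 6, 5, 4):          # 5.0%, 2.0%, 1.2%, 1.0%, 0.8% of 500
    b_no = 500 - b_yes
    _, p = fisher_exact([[a_yes, a_no], [b_yes, b_no]])  # two-sided by default
    print(f"Group B rate {b_yes / 500:.1%}: p = {p:.3f}")
# The two-sided p value only drops to about 0.05 once Group B's rate falls to
# roughly 1% or below, illustrating the point made above about low power.
```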

2) Example two: using dead persons to represent stable couples

The stability of romantic relationships is studied longitudinally over four years. The study counts as “stable” 96 couples in which one member of the couple died during the four years of the study, sometimes in year two or earlier. The inclusion of such deceased partners is not mentioned in the study as a limitation. Methodological rules violated: Do not mislead readers, even by omission, about the nature of your sample, especially with respect to sample attrition; clarify what is meant by concepts such as “stable” so the study can be replicated accurately.

3) Example three: comparison groups should be clearly defined with mutually exclusive membership criteria

Groups A and B are being compared on child outcomes. Group A consists of 44 families that clearly belong only to Group A. Group B consists of 44 families, but of those 44, at least 26, possibly 27, might actually belong to Group A. Although 90% of the results favored the children of Group A, with effect sizes as large as 0.27, the results are treated as proof of equivalence between Groups A and B despite the lack of clarity about group membership. In a different study, there are three groups, A, B, and C. Groups A and B are compared, but each group contains 10% Group C members, even though Groups A (90%) and B (90%) could have been compared without the Group C members. Methodological rules violated: report effect sizes; be sure comparison groups are mutually exclusive in their membership and not contaminated with members of other groups.

4) Example four: comparison groups should be equivalent in terms of selection effects, design effects, or other potential biases in the research protocols

Groups A and B are being compared. Group B is a subsample from a random, national study in which participants were blinded to the research objectives of this study, as were the original researchers leading the national study. Group A members, as well as the lead researchers of this study, were not blinded to the nature of the study, and Group A was a convenience, non-random sample. The data from the two groups were collected at least ten years apart. No attempt was made to control for volunteer or selection effects, cohort effects, or social desirability response bias. Groups A and B are compared as if the pre-existing group differences were not meaningful or important limitations. Methodological rules violated: both sides of a study should be blinded equally; both sides of a study should be based on random sampling; where social desirability effects are likely, they should be measured and controlled statistically; cohort effects should be taken into consideration.

5) Example five: comparison groups should be equivalent in terms of background demographic characteristics and mental health characteristics

Groups A and B are being compared. Group A households have an average annual income of over $200,000 and most have hired full-time in-home childcare for an average of 1.3 children. Group B households have an average annual income of $70,000 and most have limited childcare for an average of 3.4 children. The mental health of the Group B mothers is significantly and substantially worse than that of the Group A mothers, while parental stress levels are significantly higher for the Group B mothers. Without statistical controls for any of these group differences (e.g., per capita household income, parental stress, mental health), Groups A and B are compared, and when the results are non-significant, the conclusion is drawn that Groups A and B are equivalent in terms of child outcomes. Methodological rules violated: pre-existing group differences should be controlled through random assignment to groups or, if random assignment is not possible, through statistical controls.

6) Example six: using theory poorly when evaluating models statistically

Theory predicts that variable A predicts variable B, which in turn predicts variable C, in logical causal order. Research finds that the three variables are correlated as expected. However, if one predicts C from A, controlling for B, a non-significant result is obtained. Had the indirect effect of A on C been tested, it would have been found to be statistically significant. Rather than concluding that the effect of A on C is fully mediated through variable B, it is concluded that variable A is not an important factor in understanding or predicting variable C, via direct or indirect effects. Methodological rule violated: theoretical models should be evaluated statistically based on sound causal theory, looking at both direct and indirect effects; endogenous (mediating/intervening) variables should not be used as control (exogenous) variables.
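A small simulation can illustrate how a fully mediated effect looks exactly like this; the sketch below uses made-up data and statsmodels, and the variable names and effect sizes are hypothetical, chosen only to mimic the A -> B -> C pattern described above.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical illustration of the A -> B -> C pattern (all numbers made up).
rng = np.random.default_rng(0)
n = 200
a = rng.normal(size=n)                 # exogenous variable A
b = 0.6 * a + rng.normal(size=n)       # mediator B, caused by A
c = 0.6 * b + rng.normal(size=n)       # outcome C, caused only by B

# Predicting C from A while controlling for B: A's coefficient is expected
# to be near zero (and usually non-significant), because B carries the effect.
full = sm.OLS(c, sm.add_constant(np.column_stack([a, b]))).fit()
print("p-value for A, controlling for B:", round(full.pvalues[1], 3))

# The indirect effect of A on C through B is nonetheless real; a simple
# Sobel test on the A -> B and B -> C (given A) coefficients shows it.
ab = sm.OLS(b, sm.add_constant(a)).fit()
indirect = ab.params[1] * full.params[2]
se = np.sqrt(full.params[2] ** 2 * ab.bse[1] ** 2 +
             ab.params[1] ** 2 * full.bse[2] ** 2)
print("Sobel z for the indirect effect:", round(indirect / se, 2))  # typically well above 1.96
```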

7) Example seven: overuse of control variables should be avoided

An outcome variable Y is predicted from a variable X. The relationship is substantial (effect size, 0.36) and significant statistically. After the researcher controls for 77 other variables, the relationship is still strong (effect size, 0.30) but no longer statistically significant (p < 0.06). In the same model, a control variable with an effect size of 0.31 is statistically significant (p < 0.05). The conclusion is drawn that variable X has no effect whatsoever on variable Y. Notably, in the first six models using fewer control variables, variable X remained statistically significant. Only in the seventh model, with the most control variables, did it become non-significant. Methodological rules violated: in order to prove the null hypothesis, one should not simply keep adding control variables until the desired null effect is achieved; effect sizes should not be ignored, especially when levels of significance are in the vicinity of 0.05.

8) Example eight: the ratio of cases to model variables should be at least 5:1

A researcher has a study with 153 cases. Y is predicted from X using 68 control variables, a ratio of barely more than two cases per variable. Even though no one recommends using statistical analyses when the ratio of cases to variables falls below 3:1 (5:1 or more is preferred), the results are deemed extremely important for legal and policy development. Methodological rules violated: the ratio of cases to variables should be at least 5:1 or more; control variables should not be overused.

9) Example nine: speculation should not trump data when assessing research

A study finds differences between Groups A and B involving effect sizes of 3.0 or greater. Some unmeasured differences between the two groups are mentioned by the researcher but not controlled (since they had not been measured). An independent researcher shows that unless the unmeasured factors had effect sizes of 2.0 or greater, those factors could not completely explain away the group differences. Nonetheless, a critic concludes, on the basis of speculation alone, that any differences between the two groups were due entirely to the other factors. The critic says nothing about the effect sizes involved nor about the fact that some differences favored each group, regardless of the unmeasured factors. Methodological rules violated: discuss effect sizes when evaluating research; consider both positive and negative outcomes of research; speculation alone should not be used to entirely discredit research, especially research featuring medium to large or greater effect sizes.

10) Example ten: statistical tests should match theory

A researcher theorizes that variable Y should have a nonlinear, quadratic relationship with variable X. The data pattern of the 46 cases fits the hypothesized nonlinear relationship. The researcher nevertheless tests the pattern with a zero-order (linear) correlation, finds a non-significant result, and concludes that variables X and Y have no relationship whatsoever. Had the researcher tested for a quadratic effect, a significant result would have been obtained. When a critic points this out to the journal’s editor, the editor refuses to publish a comment or correction, allowing the incorrect result to stand. Methodological rules violated: statistical tests should match the theory (i.e., linear tests for linear theory, nonlinear tests for nonlinear theory); published errors should be correctable through errata or comments.
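The mismatch between a linear test and a quadratic hypothesis is easy to demonstrate with simulated data; the sketch below uses hypothetical values (n = 46 to echo the example above) in which a U-shaped relationship is missed by a zero-order correlation but detected by a quadratic term.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import pearsonr

# Simulated U-shaped (quadratic) relationship with n = 46 hypothetical cases.
rng = np.random.default_rng(42)
n = 46
x = rng.uniform(-2, 2, size=n)
y = x ** 2 + rng.normal(scale=0.5, size=n)   # no linear trend, strong quadratic trend

r, p_linear = pearsonr(x, y)                 # zero-order (linear) test
print(f"linear r = {r:.2f}, p = {p_linear:.2f}")   # typically non-significant

quad = sm.OLS(y, sm.add_constant(np.column_stack([x, x ** 2]))).fit()
print(f"p-value for the quadratic term: {quad.pvalues[2]:.1g}")  # typically far below 0.05
```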

11) Example eleven: literature reviews should accurately reflect the research studies reviewed

A researcher compares Groups A and B on six variables. For two variables, the results are not significant. For four variables, the results are significant and involve medium to large effect sizes favoring Group B. A scholar doing a literature review discusses the research and concludes that most of the results were not significant and that those that were significant favored Group A, without discussing any of the effect sizes or their direction. In another situation, a reviewer concludes that there are no differences between two groups of children; however, four of the five studies reviewed show higher levels of substance use among children from Group A, and a meta-analysis of the five studies yields a significant effect, with significantly higher levels of substance use for children from Group A. An amicus brief before the U.S. Supreme Court then argues that the research evidence is clear that there are no differences in substance use between children from Groups A and B. Methodological rules violated: discuss effect sizes; accurately represent the outcomes of research when conducting a literature review.

12) Example twelve: beware of the potential effects of multicollinearity in multivariate models

A researcher is comparing child academic outcomes across two groups of parents in a longitudinal study. From the start of the study through fifth grade, one group of parents is 95% stable; the other group includes no couples whatsoever that remain stable through fifth grade (i.e., the group factor is very highly correlated with stability/instability). The researcher predicts the child outcomes from group and from parental stability, finding that there are no group effects once stability is controlled. The researcher does not report in the article what the stability levels were for the two groups of parents. Methodological rules violated: report descriptive results for key control variables, as well as for demographic differences; when a control variable is nearly 100% correlated with an independent variable, be careful that results are not distorted by multicollinearity; do not treat an endogenous (mediating/intervening) variable as if it were a control (exogenous) variable.
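A short simulation can show how “controlling” for a variable that is nearly confounded with group membership can erase a real group effect; the sketch below uses fabricated numbers (95% vs. 0% stability, as in the example above) and statsmodels, and is only an illustration, not a reanalysis of any actual study.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical longitudinal sample: group 0 parents are 95% stable,
# group 1 parents are never stable, so "stability" nearly duplicates "group".
rng = np.random.default_rng(1)
n = 200
group = rng.integers(0, 2, size=n).astype(float)
stable = np.where(group == 0, rng.random(n) < 0.95, False).astype(float)
outcome = -0.5 * group + rng.normal(size=n)   # a real group effect on the child outcome

# Group alone: the effect is detected.
m1 = sm.OLS(outcome, sm.add_constant(group)).fit()
print("group only:     coef = %.2f, p = %.3f" % (m1.params[1], m1.pvalues[1]))

# Group plus the nearly collinear "stability" control: the group coefficient's
# standard error balloons and the real effect can easily look non-significant.
m2 = sm.OLS(outcome, sm.add_constant(np.column_stack([group, stable]))).fit()
print("with stability: coef = %.2f, se = %.2f, p = %.3f"
      % (m2.params[1], m2.bse[1], m2.pvalues[1]))
```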

13) Example thirteen: recognize that the conventional 0.05 level of significance is not sacred

A book is published on children of lesbian mothers. Because of the small sample size, a statistical significance level of 0.10 is adopted rather than the more conventional 0.05 level. When discussing the results in the book, a conservative scholar accepts the authors’ decision to use a 0.10 level of significance, to maintain consistency when discussing the content of the book. His comments are “discredited” in a legal trial on the basis that no reputable social scientist would ever use or report significance levels other than 0.05, even though the American Psychological Association recommends reporting all levels of significance ([69], p. 34) and the critics themselves had done so [70]. Methodological rules violated: adapt levels of significance to achieve greater statistical power if warranted by small samples; do not confuse “conventional” with methodologically “correct” or “incorrect”, “sound” or “unsound”.

14) Example fourteen: consider the types and implications of missing data

A study is conducted with adolescents. For some analyses, over seventy percent of the respondents did not answer the relevant survey questions. The missing data are not assessed in terms of being missing completely at random, missing at random, or not missing at random, nor are the implications of such a large amount of missing data discussed. In another study, missing data (of unknown type) amount to only 50% but reduce the sample size for analysis to only 20 cases. Methodological rules violated: discuss the type of, and limitations imposed by, missing data in one’s research ([69], p. 33); consider the impact of extensive missing data on sample size and the associated reduction in the statistical power of one’s analyses.
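Even moderate per-item missingness can gut a sample once cases are dropped listwise across several variables; the sketch below uses a fabricated survey (300 respondents, six items, 30% missing completely at random on each item) purely to illustrate how the losses compound.

```python
import numpy as np
import pandas as pd

# Fabricated survey: 300 respondents, six items, each with 30% of values
# missing completely at random (independently across items).
rng = np.random.default_rng(7)
n_respondents, n_items, p_missing = 300, 6, 0.30

data = rng.normal(size=(n_respondents, n_items))
data[rng.random((n_respondents, n_items)) < p_missing] = np.nan
survey = pd.DataFrame(data, columns=[f"item{i}" for i in range(1, n_items + 1)])

complete = survey.dropna()   # listwise deletion, the default in many analyses
print("complete cases:", len(complete), "of", n_respondents)
# The expected fraction of complete cases is (1 - 0.30) ** 6, i.e., about 12%,
# so an analysis that began with 300 respondents runs on roughly 35 cases.
```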

15) Example fifteen: report results accurately

A study compares two groups. The claim is made that none of the comparisons are statistically significant. In fact, some of the results were statistically significant, with effect sizes as large as 0.54; no effect sizes were reported, however. In another study, it is claimed that a result was not statistically significant when the result actually was statistically significant. Methodological rules violated: report statistical results accurately; report effect sizes.

16) Example sixteen: do not treat different things as if they were the same

A study purports to be a study of lesbian mother families. In some families, a child was born into a lesbian family and has lived with both lesbian mothers for up to nine years. In other families, the mother did not begin a lesbian relationship with another woman until the child was over nine years old, and the child has lived with two lesbian mothers for only a few months. In the statistical comparisons, these families are treated as if they were the same (stable lesbian mother families). In another study, a few lesbian families involved two stable lesbian mothers and a child from birth; others involved a child who lived with two lesbian mothers for a few years; others involved a child whose lesbian mother’s partner never lived with the child. The “lesbian” families are treated as if they were the same, regardless of family history. One study receives extensive criticism for this; the other does not (guess which study featured a conservative researcher?). Methodological rules violated: do not overlook family history when defining family types; apply similar standards of research criticism to studies, regardless of the political views of the researchers.

17) Conclusion to section 2.1.3

Such are the kinds of methodological problems that not only seem to pass muster with scholarly peer review but are deemed strong enough, and of sufficient scholarly merit, to be used as sound scientific evidence for liberal causes before the U.S. Supreme Court. In some amici briefs one even hears about research such as the above conforming to the highest standards of social science research. Really? Yet as a conservative I must ask whether any of the above studies would ever have been accepted in virtually any journal if the subject matter had not been politically sensitive and liberal bias had not been involved. Many of these methodological problems clearly violate the research principles recommended by the American Psychological Association [69]-[72]. The usual result is that if a conservative scholar critiques the above sorts of methodological problems, the comments are dismissed as biased, unfounded, and a consequence of the scholar’s personal values rather than of any sound reasoning, logic, or methodological expertise. The situation is so common that this author once wrote a parody of how one could prove that tobacco use was harmless (or maybe even good for your health) if one were allowed to make the same sorts of methodological errors [73]. One of my concerns as a scholar, a concern shared by others [1], is that once the public catches on to such methodological nonsense being passed off as sound, if not the best, scientific evidence, they may lose any faith they might have had in the social sciences.

3. Disadvantages of Being a Conservative Scholar

Jussim [74] has eloquently discussed how liberal privilege is enjoyed by most social scientists, a situation that Redding [75] has described as “prejudice and discrimination, straight up” (p. 512). Sadly, I can cite many incidents in my academic career as a conservative Republican that match Jussim’s critique (although I began my career as a moderate Democrat). I will list some examples under several themes.

3.1. Rejection by Students

In my undergraduate and graduate classes, I often critique research as a way to help students understand how to interpret research or how to do research properly. However, in one case, some students dropped my class because I exposed some medical research as corrupt; the students didn’t feel that an undergraduate class was the place for learning such things. In another class, I critiqued several journal articles, but one student took exception to my critique of one of the articles and complained to my department head, who sided with the student and then told me that because of this one complaint on one day, I would receive a substandard rating for the entire year in all areas (teaching, research, grantsmanship, and service), no matter how well I had done so far or would do for the rest of the year. And I was criticized for not being emotionally receptive to this sort of treatment. Speaking of turnabout [1] [74], how would you as a liberal feel if the same thing happened to you? Would you feel such treatment was fair? Another time, students at a rally on campus called for my being fired for presenting research they didn’t like to the local city council. Another time, I was invited to present a lecture, a pretty vanilla one, at a major university in California, but my reputation had preceded me, and I noticed most of the students in the lecture hall glaring at me with what appeared to be hatred, even before I had said anything.

3.2. Rejection by Other Scholars

One time I had been invited to submit a paper to a journal as part of an issues debate. I prepared the paper and it was accepted by the editor as an article [76] that was published as part of the larger issue. Later, however, political pressure on the journal led the editor to retract the entire issue, for all authors on both sides of the issue. Since the liberal authors had submitted reprints of previously published articles, which could not truly be retracted, the retraction of the entire issue amounted to a retraction (or rather censorship) of only the conservative articles. Another time, I was promised that if I presented a paper at a conference, a law journal would publish the papers of all of the presenters, whether liberal or conservative. Once the editor of the journal realized that the issue would therefore include articles from conservatives, the idea of the special issue was cancelled for all potential authors, although I think some of the liberals’ articles might have been published later on an ad hoc basis. At a national conference, I dared to raise methodological issues with a presenter’s research. That led a scholar to stand up and denounce me for knowing nothing about any kind of research, which was pretty remarkable given that I had, at that point, over 200 publications in refereed journals. Afterwards, there was a reception at which no one dared sit with me or allowed me to sit with them lest they fall under the same dark cloud of denunciation. Fortunately, the organization’s president and a few colleagues who had done military research (as I have also done) eventually joined me, the academic “outcast”, at a table, which was very reassuring and a class act in my opinion. On more than one occasion, I have detected substantial errors in published articles and have attempted to publish a comment or correction in the relevant journal. Many times, editors refuse to accept any such comments or to publish corrections of any form whatsoever. One time, a correction was published that contained even more errors than those I had detected [77], and my comments were only accepted elsewhere [78]. On several occasions, I have been lectured by others for disagreeing with policy opinions rendered by professional social science organizations, along the lines of “how dare you disagree with organization XYZ!” It is difficult for some folks to realize that professional organizations do not always have a corner on all truth, or even on science in some cases, especially when politically hot topics are involved. Some such organizations seem to me to have become more dogmatic in some ways than many religious organizations.

3.3. Rejection by Lawyers

At one deposition, I found it interesting that I was not allowed to take restroom breaks when I needed them; I had to ask permission of the lawyers, who sometimes (on both sides) denied such privileges. It brought back memories of elementary school, when a teacher once didn’t give me permission and I drenched myself, much to the amusement of the rest of the class. I have had lawyers grill me on my religious beliefs, which is an interesting exercise when you are not even sure of them yourself (in some details at least). It is even more interesting when such attempts to discredit you have been deemed illegal by state law, but such minor matters mean nothing to the lawyers (of either side). I have read of other conservative colleagues getting similar treatment, such as being asked by a lawyer whether homosexuals were going to hell (how would a scholar know the empirical answer to that question?). If you, as a conservative, publish an article in a lower tier journal, you may be criticized by lawyers as if your research were useless, even though there is evidence that journal tiers and citation rates do not always correspond [5] [79]; yet, if an article [25] published in a lower tier journal supports a liberal objective, seldom will you hear those scholars or their research criticized by lawyers on that basis. I have had a legal association challenge my right as a journal editor to publish controversial comments or research articles. I have also been rejected by conservative lawyers for refusing to say bad things about homosexuals. Once I lost thousands of dollars because I would not say that research supported certain stereotypes about homosexuals; my view was that if you will say anything for money, you lose your academic credibility. Another time, an ACLU speaker at a national conference said that I had betrayed the conservative opponents of same-sex adoption at a trial. My intent was not to side with anyone per se but to present evidence of some of the methodological problems associated with same-sex parenting research and some trends in the available data, of which there was relatively little with respect to gay fathers. As for the specifics of the case, I was not informed of them before or during the trial, so I declined to comment on them because I had no basis for judgment (as a side note, one of the gay fathers, a plaintiff in that case, defended me on an internet blog, telling some of my critics to back down since my testimony had done his family little to no harm; a class act on his part). One time I was instructed by lawyers before a deposition not to bring anything with me: no books, no articles, no notes, no smart phone, no laptop computer, no internet connection. This frustrated both me and the other side’s lawyers because it was impossible for me to answer many of the questions, since I had been instructed not to bring any supporting materials or devices that could connect me to my supporting materials. In another case, lawyers asked me to evaluate matters well beyond my area of expertise and I declined to do so, even though that may have cost me thousands of dollars. The net effect has been that most lawyers, regardless of their positions, hesitate to ask for my expert testimony, as my opinions seem to be unpredictable rather than ideologically based.
From my perspective, I am not trying to be ideological or unpredictable; I am just trying to stick with what I can defend empirically, which means I often “see” things in gray rather than the black or white that many lawyers seem to prefer, since their goal is binary in nature (win or lose the case). Other professors, regardless of their political orientation, who wish to serve as expert witnesses need to keep such issues in mind as they negotiate how they will coordinate matters with their legal team.

3.4. Rejection by Potential Employers

Early in my career, I interviewed for a position in Missouri. I was grilled extensively by the faculty to try to determine my “biases”. My defense that “everyone has biases, you need to be aware of them and not let them get the better of you” was not well received (I didn’t get the job). Another time, I was asked during an interview how I could be sure that the questions in my surveys about premarital counseling were being interpreted in the same way by all respondents, this question coming from a post-modern theorist. I said that I wasn’t sure; that is part of the error and uncertainty involved in doing research. I went on to say that it reminded me of when I was a boy and went trout fishing in Vermont with my uncle. The audience, I am sure, was wondering with great curiosity how a story about a little boy fishing was going to tie into the issue of using advanced statistics to evaluate premarital counseling programs. So I said that I could not speak “trout” and the trout could not understand English, but the trout ended up in my frying pan, not I in his. Which was to say that I must have understood something about trout and trout fishing, some small degree of shared variance, even if not everything. The graduate students in attendance found my response hilarious, which embarrassed the professor who had asked the question (no job there either). Another time I was told by a potential employer that I would have to censor my research to make it conform to that church’s and the university’s religious doctrines. I responded that I could not do so in good conscience because it would destroy any academic credibility I might have (I didn’t get that job either, though the person who did get the position almost immediately adopted most of my recommendations for the new military research center, recommendations I had made during the interview process).

3.5. Rejection by Judicial Authorities

In other trials, liberal expert witnesses have been allowed to make arguments with virtually no serious rebuttal (e.g., the various methodological rule violations previously mentioned were not brought up by the opposing attorneys, and funding sources were not discussed), and yet the conservative witnesses were examined or cross-examined in extreme detail about their research and funding sources. To my knowledge, it is only conservative witnesses who have been asked questions like “Have you read every word of every research article cited in your paper?” If you answer “no”, then that will be used to imply you are a careless scholar with no credibility. If you answer “yes”, then it will be said that reading every word is impossible, that you are not telling the truth, and that you cannot be trusted as an expert witness. Another trick is to ask you in a deposition a question such as “When did Professor X invite you to submit a paper to conference Y?” You cannot recall for sure, because the event was six years ago, so you say you think it was December 2002. At trial, now seven years after the event, the same question is posed to you, and, having thought the question at the deposition was a harmless one and having forgotten what you said before, you say that you are not sure, but maybe it was January 2003. Now the discrepancy you have just generated, between two recollections a year apart about an event roughly seven years prior, will be used to attack your academic credibility (how can a man with such an imperfect memory, who cannot even remember the year something so important happened, be credible for any sort of expertise!). On the other hand, if you refuse to give an exact time, then you will be criticized for your blatant ignorance, terrible memory, or stubborn refusal to answer simple questions. My point is that such questions are not asked to get at some key truth but to set you up for the destruction of your credibility no matter how you answer. They are, in my view, not intellectually honest questions, but traps designed to destroy your credibility (Luke 20:20-26; Matthew 22:15-22; Mark 12:13-17). Questions often concern funding sources, as if that alone might negate the value of one’s testimony. I do not think it is fair for a court to conclude that a conservative scholar funded by a conservative organization is automatically biased while a liberal scholar funded by a liberal organization is automatically not at all biased, as seems from news reports to have occurred at some trials.

Some conservative witnesses were told they had to answer all questions “yes” or “no”, but social science and statistics are fuzzy enough (they involve random error) and complex enough that simplistic yes or no answers often cannot convey the truth, which, after all, is what the expert states under oath that he or she will try to explain (you don’t take an oath to give only yes or no answers). All in all, I question how well conservative lawyers have tended to prepare conservative expert witnesses for the rigors of being an expert witness in a hostile environment. I had hoped that courts would be given the opportunity to see a fair and even-handed discussion of scientific issues, as in an academic debate forum, but instead I have seen more of an attempt to keep the truth from coming out and to discredit experts on whatever minor issues or technicalities will suffice. I think that courts should demand that expert testimony include effect sizes, significance levels, and sample sizes for all key contrasts. When such expectations are not set in advance for all expert witnesses, you can have an expert who hides his use of 96 dead people and his comparison of four cases against hundreds of cases, as if that were a meaningful statistical comparison. I think courts also need to be aware of the limitations of research; for example, at the time of the trial concerning two gay men adopting children, there were probably fewer than a dozen studies on gay fathers, fewer that involved comparisons of gay and heterosexual fathers, possibly even fewer that involved developmental outcomes for children (other than sexual orientation), and probably none that involved two gay fathers who had fathered a child together and raised that child from birth through age 18 as part of a random, nationally representative sample [39]. In other words, no expert witness could truthfully say very much directly about outcomes for children as a result of long-term, stable gay fathering across a wide socioeconomic spectrum of gay father families, including relatively low income gay father families; furthermore, any adverse findings could always have been attributed to prior heterosexual divorce or related problems such as stigma or discrimination.

For their part, I think courts need to ask themselves why liberal expert witnesses so often outnumber conservative expert witnesses regarding many social issues. Is it because there are no conservative arguments? Is it because there are no data that might support conservative viewpoints? Or, is it perhaps because of liberal bias and the intimidation, even illegal intimidation, often experienced before, during, and after a trial by potential conservative expert witnesses? There have been cases where conservative expert witnesses arrived at the location of a trial in time for the trial but were intimidated into not testifying at all (and they have never told me why, but one can easily guess what sorts of things might have gone on).

Being a conservative scholar who tries to stick to what can be verified empirically seems to cross wires with both liberals and conservatives if they are more interested in political agendas than in genuine science. In other words, I have found that many people from both sides of the political spectrum want to use science to support what they already believe emotionally rather than allowing science to refute what they already believe, especially if their beliefs have strong emotional connections. Nevertheless, despite the many disadvantages described above, being a conservative scholar also carries advantages, to which I now turn.

4. Advantages of Being a Conservative Scholar

Despite the disadvantages previously mentioned, I can think of at least ten advantages of being a conservative scholar. Basically, many of the advantages revolve around being less blinded by liberal ideology and more free to consider alternative viewpoints, theories, or ways of doing research than many other scholars. I can rest assured that if my conservative biases are too evident or damaging, they will be challenged in peer review. On the other hand, a liberal might get a paper published without any challenge to its evident or implicit biases, perhaps not even to weak methodology, as long as its conclusions were desirable from a progressive perspective.

4.1. It Is Legitimate to Seek Truth through Science

First, I come from a background that believes, though we see it through imperfect lenses and with our own moral imperfections, that there is objective truth to be discovered. Therefore, I have confidence that truth can be discovered. Jesus said, “You shall know the truth and the truth shall set you free” (John 8: 32), which presupposes that there is truth out there to be discovered. This gives me great motivation to seek new discoveries, while some of my colleagues are so cynical about ever finding anything really true that they publish mainly to keep their jobs rather than from the excitement of the chase after truth. For me, it’s fun to come to work each day, feeling that today might be the day I discover something true and meaningful. Of course, this view differs considerably from post-positivistic thinking in which everything, even gender, is socially constructed and we differ mostly in terms of our own narratives, so that “truth” becomes entirely relative. I have conservative colleagues who differ with me on this, thinking that truth belongs only to the realm of religion and philosophy, not science.

4.2. Importance of Conducting Research Very Carefully and Not Dogmatically

A second advantage is that the risk of attack from liberals virtually requires that I stick very close to my data and be very careful in my statistical analyses. I know, in advance, that if I cannot defend my science, I run a serious risk of getting “grilled” for it (consider what happened to Professor Regnerus recently [80], even though similar research had gone unchallenged for years [81]), whether in academic conferences or in courts of law. Not that I am unwilling to be discriminated against, but I don’t want it to be deserved on the basis of my own incompetence or foolishness, if I can help it. So, I am very careful to try both to produce accurate statistical results and not to over-interpret them. Wilkinson et al. [71] noted that “Confession should not have the goal of disarming criticism” (1999, p. 602) and that acknowledging limitations is “for the purpose of qualifying results and avoiding pitfalls in future research” (p. 602). In other words, merely admitting limitations without letting those limitations constrain the policy relevance or legal usefulness claimed for your study is not appropriate; for example, results should not be generalized from a nonrandom sample to an entire population, and if you do not control for appropriate forms of social desirability, you should not accept respondent opinions at face value, especially if respondents have reasons to exaggerate their responses. Admitting limitations should not be a way to immunize your research from criticism while still presenting it as applicable to public policy or law. Some disagree: Herek [36] has argued that (liberal) research with major limitations should be considered relevant for public policy decision-making; however, he does not want research he doesn’t like (i.e., conservative research [35] [39] [40] [75]) to be taken seriously for public policy!

I continue to make a distinction between religious-type dogma and science. A dogma may contend that something has been, is, and will be true in all places, under all conditions, and at all times, but science usually allows for variation, even anticipates variation. Without variation, one might not be able to do much with statistics, and without statistics, social science at least would be very limited in its scope. Thus, when social scientists try to claim that no study ever, by any author, in any place, at any time, or under any conditions (not even as a matter of random statistical fluctuation) has found results different from “X”, I worry that some confusion is occurring between science and dogma (often, I suspect, to please lawyers, who in my view want to see the world in black and white rather than the shades of gray that I usually observe in reality).

4.3. Opportunities to Find Interesting Results in Unexpected Places

Third, because truth can be found almost anywhere, I have a great deal of fun finding it in unusual places and at unusual times. After all, Jesus said “look around…” (Matthew 7: 26). For example, some years ago, some of my graduate students and I looked into the survival rates on the RMS Titanic and found that middle class passengers were the ones who most closely followed the rule “women and children first”, contrary to the media idea that the rich men left the poor women and children to drown. We also looked at Pearl Harbor and found some statistical evidence suggesting that the U.S. government knew more about the upcoming Japanese attack than has been acknowledged [82]. More recently, a graduate student and I looked at the survival rates of different classes of passengers on the Korean ferry MV Sewol. We found that adults on that vessel had a survival rate as high as that of the crew, contrary to media reports that might have made one think the crew abandoned the ship and left everyone else to drown. I have often found that in areas of either great controversy or great political correctness (or great apathy), there are more opportunities to find truth in unexpected ways, because those in power or on the more powerful side have been effective at suppressing ideas or research (Romans 1: 18) contrary to what those in control want to be heard. In the case of apathy, it may be that few have ever looked into what is going on in such an area, so there is often more to be uncovered than usual.

4.4. Willingness to Admit to Mistakes

Fourth, as a conservative who believes that all persons can make mistakes, including myself, I have the freedom to be wrong and to admit mistakes without feeling like I have betrayed the cause or whatever. As the Apostle John said, if we say we have no sin or never make mistakes, we are lying (I John 1:8). Therefore, I remain open to, and even welcome, the opportunity of being proven wrong in what I have reported or said. This attitude feeds back into wanting to keep such mistakes to a minimum, since I feel an obligation to “fess up” if they occur. For example, some of my internet critics are fond of pointing out that one book [83] I used in one of my articles [3] on the sexual orientation of children of LGBT parents oversampled LGBT children of LGBT parents, assuming that my error in not noticing that would entirely discredit the analysis. What my critics didn’t realize was that in the same article [3] I noted that even if 20 of my codings of children as LGBT were incorrect and those children were actually heterosexual, my results for that part of the article would have remained statistically significant. Thus, even if I had recoded all of the LGBT children in that book as heterosexual, or had I deleted the book entirely from the analysis, the results would have remained statistically significant.

4.5. Conservative Viewpoints May Lead to Different but Useful Ways of Evaluating Data

Fifth, I think it helps to have a different worldview from which to critique and evaluate theory and research that tends to come from a white, male, liberal, Western perspective. In the case of the Titanic, for example, I don’t think there are any liberal Western social science theories (e.g., social exchange, systems, feminist, functionalist, developmental, Marxist, post-modern) that would have predicted the rate of survival as a function of social class in a way that corresponded to the actual facts (a nonlinear pattern between survival rates and social class), whereas there already was a sociological explanation of sorts, from over two thousand years ago, in Proverbs 30: 9 from that worldview perspective [82]. It may be that this is true of other worldviews, from outside the West, as well.

As another example, when I was a graduate student, one of my professors was making the case that people who held to a traditional sexual standard were intellectually challenged [84]. However, he had defined traditional as “no sex before marriage”. I pointed out that rather than being the least cognitively complex sexual standard, it could be seen as the most complex because it featured the most prerequisites for having sex (consent, openness, love, personal commitment, public commitment) relative to his view of the other standards [85]. I also think that as a conservative I am more open to the possible long-term risks of instant gratification, recalling that Luntz [86] stated that “two-thirds (66 percent) of nonreligious Americans agree with the statement ‘If it feels good, do it,’ despite its selfish, dangerous undertones. By comparison, fully 71% of religious Americans disagree with the concept of instant gratification. What we have here is a chasm between the value systems of these two American camps” (p. 261). Delay of gratification, which is tied to delay discounting, is a key component of modern civilization ([87] [88], pp. 38-43) and an important aspect of child socialization.

4.6. Recognition of Complexity and Requirement for Depth of Analysis

Sixth, my worldview suggests that life is complex and surface appearances are often misleading. Jesus told us (John 7: 24) not to judge by appearances but by the underlying truth. Therefore, I have even more encouragement to dig deep to resolve scientific questions. Sometimes, this means being willing to reject simplistic explanations of things [4] - [6] . For example, I have come across literature in which even hundreds of scholars have insisted that variable A was not related to variable B under any conditions whatsoever, such that no one had ever found such a relationship ever, in any journal, in any country, at any time, not even as a statistical artifact. It amazes me that anyone would be willing to take such an absolutist dogmatic position because it is much more easily refuted than a more cautious viewpoint.

Digging deep in practice means evaluating not only linear models but also nonlinear models, especially quadratic models in which extreme positions are riskier than moderate or centrist positions. Digging deep means evaluating interaction effects, on the theory that what works for one group may not work so well for another, because people and groups can differ in how they respond to the conditions of life. Digging deep means cross-checking published research for inconsistencies, as noted previously [78].
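
To make “digging deep” concrete, here is a minimal Python sketch of fitting a quadratic term and an interaction term alongside a purely linear model. The variable names and the simulated data are assumptions for illustration; nothing here comes from any dataset cited in this article.

    # Compare a purely linear model with one that adds a quadratic term
    # and a group interaction, using simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({"x": rng.normal(size=n),
                       "group": rng.integers(0, 2, size=n)})
    # Simulated outcome with a curvilinear effect of x and a group interaction.
    df["y"] = (1.0 + 0.5 * df["x"] - 0.4 * df["x"] ** 2
               + 0.6 * df["x"] * df["group"] + rng.normal(size=n))

    linear = smf.ols("y ~ x", data=df).fit()
    fuller = smf.ols("y ~ x + I(x ** 2) + group + x:group", data=df).fit()
    print(linear.rsquared, fuller.rsquared)  # the fuller model fits better here
    print(fuller.params)                     # quadratic and interaction coefficients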

4.7. Dogmas or Viewpoints of Those in Power Can Be Incorrect

Seventh, my worldview suggests that people can develop entrenched but incorrect views of reality. Jesus ran into such problems with many of the religious and political leaders of his day, of course. Once I had a student who wanted to study abortion attitudes among religious students from religious high schools. I told her that she had to leave her own ideology about abortion behind in order to do a scientifically valuable study. I said that she had to be willing to test hypotheses no matter what the religious authorities might want to hear, and that those very opinions had to be open to being falsified, no matter how entrenched they were. What she found did not conform to what those authorities wanted to hear, and her willingness to rebuke them scientifically may mean she will never find employment within that particular religious system. If I were dealing, for example, with a Muslim graduate student, I would say that such a student would have to be willing to do research that had a chance of finding some aspect of Islam, even the Holy Qur’an itself, to be incorrect from a scientific perspective [89]. The same should be true for a Jewish or Christian student, or even an atheist student, with respect to their own perspectives.

4.8. Importance of Giving Serious Consideration to Different Perspectives

Eighth, my worldview encourages looking seriously at both sides of any issue. This means that even if one side were “true” some of the time, or even most of the time, it might not be true all of the time, in all places, among all groups. In effect, the idea is that there is usually something valid about nearly any opinion, even if it is largely incorrect. For example, when Jesus met with the Samaritan woman at the local well (John 4), he did not tell her she was totally wrong about everything but rather highlighted the things she was right about, even though he made it clear that he did not agree with her on everything. Rather than looking for points of disagreement, he looked for points of agreement and built their developing relationship on those, rather than burning bridges by focusing on their differences. In practice, this means I should not “write off” someone else’s views but should give them full consideration and be open to recognizing them as better than my own in at least some respects. This may be one of the more costly points, because I have lost funding from sponsors for being willing to look at both sides of a controversial issue and for refusing to assume that the “other side” was totally wrong. I have even been accused of being a traitor to conservative causes. However, I try not to be loyal to anyone but rather to the wisdom of looking at both sides as evenhandedly as I can, regardless of who might be offended by such an approach.

4.9. Refusal to Be Satisfied with Overly Simplistic Social Science Explanations

Ninth, I think that conservatives can contribute to theory development in what might seem to be new ways. For example, sexual minority stress theory is being used to explain discrepancies in health between gay youth and heterosexual youth [4]. The idea is that sexual minority youth feel stigmatized by peers and suffer as a result. However, might it not be possible that any stigma was caused not by sexual orientation per se but by known correlates (e.g., higher drug use, bringing weapons to school, juvenile delinquency)? I might not care if my teenage son had a gay friend, but I might be upset if he had a friend who was using cocaine, brought a loaded handgun to school every day, or was frequently in trouble with the law. If I told my son not to associate with the latter type of person, might not that person blame the rejection on his sexual orientation rather than on the drug use and the other issues? For example, Goldberg, Bos, and Gartrell [28] compared drug use (marijuana or hashish) by adolescents with lesbian mothers as a function of the stigma those adolescents reported; those who used those drugs monthly or more often were more likely to report stigma (52.2%) than those who used them less often (36.0%). Although the difference was not statistically significant, the effect size was in the small to medium range (0.31). Determining the causal direction here would not be easy, but my point is that a correlation does not have to signify a unidirectional effect; the effect might run in the opposite direction, or there might be reciprocal effects. To me, it also seems too simplistic to think that all health discrepancies between any two groups could be explained away as a function of only one factor, regardless of what that factor might be.
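
For readers who want to see where a figure like 0.31 can come from, one common effect size for comparing two proportions is Cohen’s h; the short sketch below applies it to the 52.2% and 36.0% figures mentioned above. Whether this is the index behind the reported 0.31 is an assumption on my part, so the value should be read only as being of comparable magnitude.

    # Cohen's h for two proportions: the difference between their
    # arcsine-square-root transforms.
    import math

    def cohens_h(p1, p2):
        return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

    print(round(cohens_h(0.522, 0.360), 2))  # about 0.33, small-to-medium by Cohen's benchmarks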

Another example might be a reconsideration of theory about sexual orientation. Is it possible that for some persons sexual orientation is more a matter of behavioral opportunity than of sexual attraction per se? In other words, are there persons who will engage in sexual activity with whatever presents itself as an opportunity, regardless of the age or gender of the other? Perhaps they could be labeled “opt-sexuals”, for opportunity sexuals. The key to this theory would be the boundaries people maintain around their sexual behavior: opt-sexuals would have very few. A homosexual man, for example, might have firmly closed boundaries vis-à-vis women or children but open boundaries vis-à-vis other adult men. A strongly religious person might have open boundaries only with respect to their one partner, with closed boundaries toward everyone else. Is it possible that homosexuals are sometimes blamed for the behavior of opt-sexuals who identify as gay but who are acting less on attraction than on a liking for sex with whomever it can be experienced?

4.10. Trying to Keep Emotions from Getting the Better of Our Cognitions

Lastly, it has been argued by some that decision-making is more an emotional process than an intellectual one [90] [91]. There is value in this viewpoint; even Adam and Eve (Genesis 3:12-13) seemed better at using their cognition to justify a mistake after the fact than at reasoning beforehand to avoid it, given the strong emotional attractions of the particular choice they were offered. However, as a conservative, I still see value in trying to make decisions, particularly scientific decisions, more on the basis of intellectual considerations than emotional ones. That is probably how I got into trouble with the scholar who said I knew nothing about research; my cognitive discourse represented an emotional threat to her family (thus, a defensive mama-bear response). This may go against human nature, according to some, and may seem insensitive to emotional needs, but I think it is a critical part of graduate education and of being a careful scientist, in social science or any other area of science. Though it is never easy, if anyone should be able to handle this challenge, it ought to be scientists.

Another aspect of this is that conservative scholars may be better able to detect confirmation bias in how research results are interpreted. As noted, a recent review of the literature on same-sex parenting [92] noted that one study [28] had found higher rates of substance use among adolescents from same-sex families than among adolescents from heterosexual families; however, Manning et al. [92] appeared to minimize the importance of those results [28], stating that “at the bivariate level, adolescents from same-sex parent families have higher levels of occasional substance use, but similar levels of heavy substance use compared with children in the Monitoring the Future Data set” (p. 494). Did liberal bias obscure the problem that even occasional illegal drug use is, well, against the law, especially for children under the age of 18? Furthermore, Manning et al. [92] did not mention that several other studies have found higher use of illegal drugs among the children of same-sex families, as detailed elsewhere [5]. From a conservative point of view, one starts with several studies, which are then reduced to one study, whose results are then minimized as having little meaning or relevance. Is this not a case of confirmation bias? I recognize that progressive scholars might fault me for looking harder for studies that reject the “no difference” hypothesis, but in the end, did not my “bias” help find more studies for consideration than were found or reported in a major review of the literature [92] designed to present the case for same-sex marriage before the U.S. Supreme Court?

Similarly, Biblarz and Stacey [93], in their extensive review of the literature, reported results for one study that found higher rates of relationship instability for lesbian mothers, but they did not discuss at least three other studies that featured similar results [94] [95]. As a conservative, I would probably not dare to conclude from one very small study (N = 14 lesbian couples) that lesbian mothers had more unstable relationships, but Biblarz and Stacey did so, stating that “Although research consistently indicates that such couples enjoy greater equality, compatibility, and satisfaction with their partners than their heterosexual counterparts, preliminary data hint that their relationships may prove less durable” (p. 11). While I have to credit Biblarz and Stacey with openness to finding unexpected (from a progressive perspective) results, my point here is that their review of the literature was far from complete, the same weakness found in the review by Manning et al. [92].

Another example is that when Biblarz and Stacey found a study in which sons of lesbian mothers scored more than a standard deviation higher on femininity than sons of heterosexual mothers (p. 14), they described the result as an indication of gender flexibility. With an effect size greater than 1.0, and with Cohen [23] labeling an effect size of 0.80 or greater as “large”, it stretches my imagination to call such a huge difference mere flexibility (see the numerical sketch at the end of this section). I might agree that a small to moderate effect size might suggest flexibility, but if a huge effect size is nothing but flexibility, then why not describe any huge effect size for any variable of interest in social science as merely a matter of flexibility? Yet another example of manipulating words for political effect was their discussion of research results indicating an emphasis on social conformity in children (p. 7). What had actually been found was that, in one study, heterosexual parents had expressed a higher value on teaching children “self-control”, which has been predictive of better outcomes for children as adults [86] - [88]. From a conservative perspective, I would interpret teaching children better self-control as a positive parental goal, not a matter of oppressing them with “social conformity”.

From my perspective, this looks like an approach to research in which one starts by ignoring the results of most of the studies whose outcomes one does not like, then minimizes any adverse results of a study here or there (if only one study with adverse results is cited, its findings can always be dismissed by other scholars as, after all, only one study), and finally, if necessary, attaches better-sounding labels to adverse results, magically wishing away any adverse political implications. It seems to me a recipe for political success, no matter what the actual research might show. I have to ask: would most scholars, even progressive scholars, accept such an approach to reviewing the literature in any other, less controversial area? To conclude, I think that being a conservative helps me do a more complete review of the literature and recognize possible attempts to marginalize research results that do not fit a progressive narrative very well.
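
To put the size of a standardized difference above 1.0 in perspective, the sketch below uses the conventional normal-curve reading of Cohen’s d (sometimes called U3): the proportion of one group falling below the other group’s mean. This is a general interpretive device under an assumption of roughly normal, equal-variance distributions, not a calculation performed in Biblarz and Stacey’s review.

    # Cohen's U3: share of the comparison group scoring below the other group's mean,
    # assuming normal distributions with equal variances.
    from scipy.stats import norm

    for d in (0.2, 0.5, 0.8, 1.0):
        print(f"d = {d}: U3 = {norm.cdf(d):.1%}")
    # d = 1.0 implies roughly 84% of one group scores below the other group's mean.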

5. Conclusion

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] Duarte, J.L., Crawford, J.T., Stern, C., Haidt, J., Jussim, L. and Tetlock, P.E. (2015). Political Diversity Will Improve Social Psychological Science. Behavioral and Brain Sciences, 38, e164.
http://dx.doi.org/10.1017/S0140525X15000035
[2] Cook, A. and Callister, R. R. (2010) Increasing Positive Perceptions of Diversity for Religious Conservative Students. Creative Education, 2, 93-100.
http://dx.doi.org/10.4236/ce.2010.12014
[3] Schumm, W.R. (2010) Children of Homosexuals More Apt to Be Homosexuals? A Reply to Morrison and to Cameron Based on an Examination of Multiple Sources of Data. Journal of Biosocial Science, 42, 721-742.
http://dx.doi.org/10.1017/S0021932010000325
[4] Schumm, W.R. (2013) Intergenerational Transfer of Parental Sexual Orientation and Other Myths. International Journal of the Jurisprudence of the Family, 4, 267-742.
[5] Schumm, W.R. (2015) Sarantakos’s Research on Same-Sex Parenting in Australia and New Zealand: Importance, Substance, and Corroboration with Research from the United States. Comprehensive Psychology, 4, 1-29.
http://dx.doi.org/10.2466/17.cp.4.16
[6] Schumm, W.R. and Crawford, D.W. (2015) Violations of Fairness Norms in Social Science Research: The Case of Same-sex Marriage and Parenting. International Journal of the Jurisprudence of the Family, 6, 68-113.
[7] Bozett, F.W. (1987) Gay Fathers. In: Bozett, F.W., Ed., Gay and Lesbian Parents, Praeger, New York, 3-22.
[8] Bozett, F.W. (1987) Children of Gay Fathers. In: Bozett, F.W., Ed., Gay and Lesbian Parents, Praeger, New York, 39-57.
[9] Bozett, F.W. (1993) Gay Fathers: A Review of the Literature. In: Garnets, L.D. and Kimmel, D.C., Eds., Psychological Perspectives on Lesbian and Gay Male Experiences, Columbia University Press, New York, 437-457.
[10] Green, G.D. and Bozett, F.W. (1991) Lesbian Mothers and Gay Fathers. In: Gonsiorek, J.C. and Weinrich, J.D., Eds., Homosexuality: Research Implications for Public Policy, Sage Publications, Thousand Oaks, CA, 197-214.
http://dx.doi.org/10.4135/9781483325422.n13
[11] Patterson, C.J. (1992) Children of Gay and Lesbian Parents. Child Development, 63, 1025-1042.
http://dx.doi.org/10.2307/1131517
[12] Raley, J.A. (2013) Adolescents with Same Sex Parents: Does It Make a Difference? Adolescent Psychiatry, 3, 329-334.
[13] Selekman, J. (2007) Homosexuality in Children and/or Their Parents. Pediatric Nursing, 33, 453-457.
[14] Peterson, N. (1984) Coming to Terms with Gay Parents. USA Today, p. 3D.
[15] Patterson, C.J. and Friel, L.V. (2000) Sexual Orientation and Fertility. In: Bentley, G.R. and Mascie-Taylor, C.G.N., Eds., Infertility in the Modern World: Present and Future Prospects, Cambridge University Press, New York, 238-260.
http://dx.doi.org/10.1017/CBO9780511613036.007
[16] Meezan, W. and Rauch, J. (2005) Gay Marriage, Same-Sex Parenting, and America’s Children. The Future of Children, 15, 97-115.
http://dx.doi.org/10.1353/foc.2005.0018
[17] Miller, C.L. and Price, J. (2014) The Number of Children Being Raised by Gay or Lesbian Parents.
http://ssrn.com/abstract=2497095
[18] Gates, G.J. (2013) LGBT Parenting in the United States. The Williams Institute, University of California at Los Angeles (UCLA), Los Angeles.
[19] Gates, G.J. (2008) Diversity among Same-Sex Couples and Their Children. In: Coontz, S., Parson, M. and Raley, G., Eds., American Families: A Multicultural Reader, Routledge, New York, 394-399.
[20] Rosenfeld, M.J. (2014) Couple Longevity in the Era of Same-Sex Marriage in the United States. Journal of Marriage and Family, 76, 905-918.
http://dx.doi.org/10.1111/jomf.12141
[21] Brewster, K.L., Tillman, K.H. and Jokinen-Gordon, H. (2014) Demographic Characteristics of Lesbian Parents in the United States. Population Research and Policy Review, 33, 503-526.
http://dx.doi.org/10.1007/s11113-013-9296-3
[22] Schumm, W.R. (2008) Re-Evaluation of the “No Difference” Hypothesis Concerning Gay and Lesbian Parenting as Assessed in Eight Early (1979-1986) and Four Later (1997-1998) Dissertations. Psychological Reports, 103, 275-304.
http://dx.doi.org/10.2466/pr0.103.5.275-304
[23] Cohen, J. (1992) A Power Primer. Psychological Bulletin, 112, 155-159.
http://dx.doi.org/10.1037/0033-2909.112.1.155
[24] Schumm, W.R. (2010) Evidence of Pro-Homosexual Bias in Social Science: Citation Rates and Research on Lesbian Parenting. Psychological Reports, 106, 314-322.
http://dx.doi.org/10.2466/pr0.106.1.314-322
[25] Mucklow, B.M. and Phelan, G.K. (1979) Lesbian and Traditional Mothers’ Responses to Adult Response to Child Behavior and Self-Concept. Psychological Reports, 44, 880-882.
http://dx.doi.org/10.2466/pr0.1979.44.3.880
[26] Miller, J.A., Jacobsen, R.B. and Bigner, J.J. (1981) The Child’s Home Environment for Lesbian vs. Heterosexual Mothers: A Neglected Area of Research. Journal of Homosexuality, 7, 49-56.
http://dx.doi.org/10.1300/J082v07n01_05
[27] Miller, J.A., Mucklow, B.M., Jacobsen, R.B. and Bigner, J.J. (1980) Comparison of Family Relationships: Homosexual versus Heterosexual Women. Psychological Reports, 46, 1127-1132.
http://dx.doi.org/10.2466/pr0.1980.46.3c.1127
[28] Goldberg, N.G., Bos, H.M.W. and Gartrell, N.K. (2011) Substance Use by Adolescents of the USA National Longitudinal Lesbian Family Study. Journal of Health Psychology, 16, 1231-1240.
http://dx.doi.org/10.1177/1359105311403522
[29] Gartrell, N.K., Bos, H.M.W. and Goldberg, N.G. (2011) Adolescents of the U.S. National Longitudinal Lesbian Family Study: Sexual Orientation, Sexual Behavior, and Sexual Risk Exposure. Archives of Sexual Behavior, 40, 1199-1209.
http://dx.doi.org/10.1007/s10508-010-9692-2
[30] Cameron, P. and Cameron, K. (2012) Re-Examining Evelyn Hooker: Setting the Record Straight with Comments on Schumm’s (2012) Reanalysis. Marriage & Family Review, 48, 491-523.
http://dx.doi.org/10.1080/01494929.2012.700867
[31] Schumm, W.R. (2012) Reviewing the Reviews. Marriage & Family Review, 48, 415-417.
http://dx.doi.org/10.1080/01494929.2012.677390
[32] Schumm, W.R. (2012) Re-Examining a Landmark Research Study: A Teaching Editorial. Marriage & Family Review, 48, 465-489.
http://dx.doi.org/10.1080/01494929.2012.677388
[33] Sarantakos, S. (1998) Legal Recognition of Same-Sex Relationships. Alternative Law Journal, 23, 222-225.
[34] Sarantakos, S. (1999) Same-Sex Marriage: Which Way to Go? Alternative Law Journal, 24, 79-84, 107.
[35] Sarantakos, S. (1996) Children in Three Contexts: Family, Education, and Social Development. Children Australia, 21, 23-31.
http://dx.doi.org/10.1017/S1035077200007173
[36] Herek, G.M. (2014) Evaluating the Methodology of Social Science Research on Sexual Orientation and Parenting: A Tale of Three Studies. University of California-Davis Law Review, 48, 583-622.
[37] Patterson, C.J. (2005) Lesbian and Gay Parents and Their Children: Summary of Research Findings. In: American Psychological Association, Ed., Lesbian & Gay Parenting, American Psychological Association, Washington DC, 5-22.
[38] Wald, M.S. (2006) Adults’ Sexual Orientation and State Determinations Regarding Placement of Children. Family Law Quarterly, 40, 381-434.
http://dx.doi.org/10.2139/ssrn.920670
[39] Marks, L. (2012) Same-Sex Parenting and Children’s Outcomes: A Closer Examination of the American Psychological Association’s Brief on Lesbian and Gay Parenting. Social Science Research, 41, 735-751.
http://dx.doi.org/10.1016/j.ssresearch.2012.03.006
[40] Allen, D. (2015) More Heat than Light: A Critical Assessment of the Same-sex Parenting Literature, 1995-2013. Marriage & Family Review, 51, 154-182.
http://dx.doi.org/10.1080/01494929.2015.1033317
[41] Sarantakos, S. (2000) Same-Sex Couples. Harvard Press, Sydney.
[42] Sarantakos, S. (1996) Same-Sex Couples: Problems and Prospects. Journal of Family Studies, 2, 147-163.
http://dx.doi.org/10.5172/jfs.2.2.147
[43] Sarantakos, S. (1998) Sex and Power in Same-Sex Couples. Australian Journal of Social Issues, 33, 17-36.
[44] Sarantakos, S. (1975) Anatomy of Divorce. Australian Journal of Social Issues, 10, 169-178.
[45] Sarantakos, S. (1980) Marriage and the Family in Australia: A Multicultural Study. Budget Press, Sydney.
[46] Sarantakos, S. (1980) The Aged and Their Families: Towards Integration. Journal of Social Work, 33, 13-22.
http://dx.doi.org/10.1080/03124078008549666
[47] Sarantakos, S. (1982) Getting Married Unknowingly and Unwillingly. Australian Journal of Sex, Marriage, and Family, 3, 13-23.
[48] Sarantakos, S. (1982) To Marry or to Cohabit? Australian Social Work, 35, 3-8.
[49] Sarantakos, S. (1984) Living Together in Australia. Longman Cheshire, Melbourne.
[50] Sarantakos, S. (1985) Status of the Aged. Australian Journal of Aging, 4, 16-21.
http://dx.doi.org/10.1111/j.1741-6612.1985.tb00876.x
[51] Sarantakos, S. (1991) Cohabitation Revisited: Paths of Change among Cohabiting and Non-Cohabiting Couples. Australian Journal of Marriage & Family, 12, 144-155.
[52] Sarantakos, S. (1991) Unmarried Cohabitation: Perceptions of a Lifestyle. Australian Social Work, 44, 23-32.
http://dx.doi.org/10.1080/03124079108550160
[53] Sarantakos, S. (1992) Cohabitation in Transition. Keon Press, Sydney.
[54] Sarantakos, S. (1993) Social Research. MacMillan Education Australia, Brisbane.
[55] Sarantakos, S. (1994) Trial Cohabitation on Trial. Australian Social Work, 47, 13-25.
http://dx.doi.org/10.1080/03124079408410953
[56] Sarantakos, S. (1994) Unmarried Cohabitation: Options, Limits, and Possibilities. Australian Journal of Marriage & Family, 15, 148-160.
[57] Sarantakos, S. (1996) Modern Families: An Australian Text. MacMillan Education Australia, South Melbourne.
[58] Sarantakos, S. (1996) Troubled Children. APEX Publishing, Sydney.
[59] Sarantakos, S. (1997) Cohabitation, Marriage, and Delinquency: The Significance of Family Environment. Australian & New Zealand Journal of Criminology, 30, 187-199.
http://dx.doi.org/10.1177/000486589703000205
[60] Sarantakos, S. (1998) Working with Social Research. MacMillan Education Australia, South Melbourne.
[61] Sarantakos, S. (1998) Social Research. 2nd Edition, MacMillan Education Australia, South Melbourne.
[62] Sarantakos, S. (1999) Husband Abuse: Fact or Fiction? Australian Journal of Social Issues, 34, 231-252.
[63] Sarantakos, S. (2000) Marital Power and Quality of Marriage. Australian Social Work, 53, 43-50.
http://dx.doi.org/10.1080/03124070008415556
[64] Sarantakos, S. (2000) Quality of Life on the Farm. Journal of Family Studies, 6, 182-198.
http://dx.doi.org/10.5172/jfs.6.2.182
[65] Sarantakos, S. (2004) Deconstructing Self-Defense in Wife-to-Husband Violence. The Journal of Men’s Studies, 12, 277-296.
http://dx.doi.org/10.3149/jms.1203.277
[66] Sarantakos, S. (2005) Social Research. 3rd Edition, Palgrave MacMillan, New York.
[67] Sarantakos, S. (2007) Data Analysis. Vol. 1-4, Sage Publications, Thousand Oaks.
[68] Sarantakos, S. (2007) A Toolkit for Quantitative Data Analysis. Palgrave MacMillan, New York.
[69] American Psychological Association (2010) Publication Manual of the American Psychological Association. 6th Edition, American Psychological Association, Washington DC.
[70] Schumm, W.R., Pratt, K.K., Hartenstein, J.L., Jenkins, B.A. and Johnson, G.A. (2013) Determining Statistical Significance (Alpha) and Reporting Statistical Trends: Controversies, Issues, and Facts. Comprehensive Psychology, 2, 1-7.
http://dx.doi.org/10.2466/03.CP.2.10
[71] Wilkinson, L., and the Task Force on Statistical Inference, APA Board of Scientific Affairs (1999) Statistical Methods in Psychology Journals: Guidelines and Explanations. American Psychologist, 54, 594-604.
http://dx.doi.org/10.1037/0003-066X.54.8.594
[72] Cumming, G., Fidler, F., Kalinowski, P. and Lai, J. (2012) The Statistical Recommendations of the American Psychological Association Publication Manual: Effect Sizes, Confidence Intervals, and Meta-Analysis. Australian Journal of Psychology, 64, 138-146.
http://dx.doi.org/10.1111/j.1742-9536.2011.00037.x
[73] Schumm, W.R. (2012) Lessons for the “Devilish Statistical Obfuscator” or How to Argue for a Null Hypothesis: A Guide for Students, Attorneys, and Other Professionals. Innovative Teaching, 1, 1-13.
[74] Jussim, L. (2012) Liberal Privilege in Academic Psychology and the Social Sciences: Commentary on Inbar & Lammers (2012). Perspectives on Psychological Science, 7, 504-507.
http://dx.doi.org/10.1177/1745691612455205
[75] Redding, R.E. (2013) Likes Attract: The Sociopolitical Groupthink of (Social) Psychologists. Perspectives on Psychological Science, 7, 512-515.
[76] Schumm, W.R. (2009) Gay Marriage and Injustice. The Therapist, 21, 95-96.
[77] Sejvar, J.J., Labutta, R.J., Chapman, L.E., Grabenstein, J.D., Iskander, J. and Lane, J.M. (2005) Neurologic Adverse Events Associated with Smallpox Vaccination in the United States, 2002-2004. JAMA, 294, 2744-2750. (With Correction, JAMA, 2007, 298, 1864)
[78] Schumm, W.R., Nazarinia, R.R. and Bosch, K.R. (2009) Unanswered Questions and Ethical Issues Concerning U.S. Biodefence Research. Journal of Medical Ethics, 35, 594-598.
http://dx.doi.org/10.1136/jme.2008.025551
[79] Schumm, W.R. (2010) A Comparison of Citations across Multidisciplinary Psychology Journals: A Case Study of Two Independent Journals. Psychological Reports, 106, 314-322.
http://dx.doi.org/10.2466/pr0.106.1.314-322
[80] Redding, R.E. (2013) Politicized Science. Society, 50, 439-446.
http://dx.doi.org/10.1007/s12115-013-9686-5
[81] Schumm, W.R. (2014) Challenges in Predicting Child Outcomes from Different Family Structures. Comprehensive Psychology, 3, 1-12.
http://dx.doi.org/10.2466/03.17.49.CP.3.10
[82] Schumm, W.R., Webb, F.J., Castelo, C.C., Akagi, C.G., Jensen, E.J., Ditto, R.M., Spencer-Carver, E. and Brown, B. (2002) Enhancing Learning in Statistics Classes Through the Use of Concrete Historical Examples: The Space Shuttle Challenger, Pearl Harbor, and the RMS Titanic. Teaching Sociology, 30, 361-375.
http://dx.doi.org/10.2307/3211484
[83] Garner, A. (2005) Families Like Mine: Children of Gay Parents Tell It Like It Is. Perennial Currents, New York.
[84] Jurich, A.P. (1974) The Effect of Cognitive Moral Development upon the Selection of Premarital Sexual Standards. Journal of Marriage and the Family, 36, 736-741.
http://dx.doi.org/10.2307/350356
[85] Schumm, W.R. (1995) Non-Marital Sexual Behavior. In: Rekers, G.A., Ed., Handbook of Child and Adolescent Sexual Problems, Lexington Books, Lexington, 381-423.
[86] Luntz, F.I. (2009) What Americans Really Want…. Really. Hyperion, New York.
[87] Balter, M. (2008) Why We’re Different: Probing the Gap between Apes and Humans. Science, 319, 404-405.
http://dx.doi.org/10.1126/science.319.5862.404
[88] Nazarinia Roy, R., Schumm, W.R. and Britt, S.L. (2014) Transition to Parenthood. Springer, New York.
[89] Gibson, D. (2011) Qur’anic Geography. Independent Scholars Press, Saskatoon.
[90] Tang, Z. and Lin, Y. (2015) The Neural Mechanisms of Utility and Ethic in the Management of Moral Decision Making. Journal of Behavioral and Brain Science, 5, 157-162.
http://dx.doi.org/10.4236/jbbs.2015.54016
[91] Haidt, J. (2012) The Righteous Mind: Why Good People Are Divided by Politics and Religion. Pantheon Books, New York.
[92] Manning, W.D., Fettro, M.N. and Lamidi, E. (2014) Child Well-Being in Same-Sex Families: Review of Research Prepared for American Sociological Association Amicus Brief. Population Research and Policy Review, 33, 485-502.
http://dx.doi.org/10.1007/s11113-014-9329-6
[93] Biblarz, T.J. and Stacey, J. (2010) How Does the Gender of Parents Matter? Journal of Marriage and Family, 72, 3-22.
[94] Schumm, W.R. (2010) Comparative Relationship Stability of Lesbian Mother and Heterosexual Mother Families: A Review of Evidence. Marriage and Family Review, 46, 499-509.
http://dx.doi.org/10.1080/01494929.2010.543030
[95] Schumm, W.R. (2015) Navigating Treacherous Waters—One Researcher’s 40 Years of Experience with Controversial Scientific Research. Comprehensive Psychology, 4, 1-40.
http://dx.doi.org/10.2466/17.CP.4.24
