“Children of Homosexuals” Researcher More Apt To Ape Paul Cameron
October 17th, 2010
There’s a study out that’s causing quite a stir. It’s by Walter R. Schumm, a professor at Kansas State University, whose paper appears in the latest issue of the Journal of Biosocial Science. (JBS was published as The Eugenics Review from 1909 to 1968, when the Eugenics Society changed its journal’s name.) Schumm’s paper, titled “Children of homosexuals more apt to be homosexuals: A reply to Morrison and to Cameron based on an examination of multiple sources of data,” essentially picks up where a very similar 2006 paper by Paul Cameron left off; that paper claimed that 33% to 47% of children of gay parents wound up being gay. Schumm’s paper claims that children of gay parents were 1.7 to 12.1 times as likely to become gay as children of straight parents, “depending on the mix of child and parent genders.” The implication behind Schumm’s paper, as with Cameron’s, is that gay parenting can somehow influence a child’s sexuality, and, by extension, that homosexuality itself is not biological but determined by how a child is raised.
Schumm’s study is currently making a big splash on AOLNews, where, according to an article by Paul Kix, Schumm has supposedly conducted a new “robust” study examining whether Cameron was right: do gay parents make gay children? Cameron’s paper, also published in JBS, was just another example of the shoddy “scholarship” and deliberate distortion of other publications that we’ve come to expect from him. Schumm’s paper seeks to replicate Cameron’s work while acknowledging some of the criticisms of Cameron’s 2006 paper. It’s important to emphasize, however, that Schumm acknowledges only some of the criticisms. The most important criticism — the completely non-random nature of the so-called “dataset” that Cameron used — Schumm not only ignores; he repeats that same flaw and embellishes it in a grandly enlarged form.
Schumm, like Cameron, calls his study a “meta-analysis” of ten smaller samples. (Cameron used only three.) When researchers use the term “meta-analysis,” they mean that they have pooled data from a collection of other studies. And typically, those studies are drawn from what are called “convenience samples.”
To obtain a convenience sample, a researcher defines the type of population he’s looking for and recruits his sample according to eligibility requirements that he defined ahead of time from among people who are more or less conveniently available to him — hence the name. But critically, that researcher would have accepted everyone who volunteered and met the predefined criteria. While this isn’t a representative sample, it is, at least for the most part, a relatively random one, even if it is often very far from being a perfectly random one. Putting together nationally-representative samples is extremely costly and, therefore, extremely rare. Convenience samples are much more common. Good researchers, however, are very mindful of the limits of their sample and would never extrapolate their findings to the population as a whole.
Convenience samples have many weaknesses, and one of them is that they tend to be small. A “meta-analysis” is intended to correct that problem. To perform a meta-analysis, a researcher collects a bunch of other studies, combines all of the data from their samples, re-crunches the numbers, and sees which trends hold up in the much larger sample. This, too, is valuable, although it also has its pitfalls. It’s not important to go into them here, but for our purposes it’s fair to say that meta-analysis techniques are useful — as long as the studies gathered for the meta-analysis contain samples that were similarly constructed and were meant to examine the same set of questions. And that also means that the smaller samples were somewhat similarly random, even if they were not statistically representative. The larger meta-analysis retains the same weakness of the smaller random-but-not-representative samples, but the larger combined sample tends to diminish some of the quirks (or “outliers”) of the smaller samples. These kinds of studies can be useful in identifying trends and correlations, but they cannot be used to extrapolate behaviors or conditions to the population as a whole.
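That last point — pooling shrinks noise but cannot fix bias — can be made concrete with a quick simulation. The numbers here are illustrative assumptions only, not anyone’s actual data: suppose a trait occurs in 5% of the population, but a biased recruitment process over-samples it at 15%. Pooling ten small samples drawn that way steadies the estimate, but it steadies it around the biased rate, not the true one.

```python
import random

random.seed(42)

TRUE_RATE = 0.05          # hypothetical population rate of some trait
BIASED_RATE = 0.15        # assumed rate produced by a biased recruitment process

def draw_sample(n, rate):
    """Draw n subjects; each has the trait with probability `rate`."""
    return [random.random() < rate for _ in range(n)]

# Ten small samples, all drawn from the same biased recruitment process
samples = [draw_sample(30, BIASED_RATE) for _ in range(10)]

# Individual estimates bounce around a lot...
individual = [sum(s) / len(s) for s in samples]

# ...while the pooled ("meta-analytic") estimate is much steadier
pooled = sum(sum(s) for s in samples) / sum(len(s) for s in samples)

print(f"individual estimates: {[round(p, 2) for p in individual]}")
print(f"pooled estimate:      {pooled:.3f}")
print(f"true population rate: {TRUE_RATE}")
# The pooled estimate settles near the biased rate, not the true rate:
# combining samples reduces random noise, but it never removes bias.
```

The pooled figure looks more authoritative because it is based on 300 subjects instead of 30, but it inherits every flaw of how those subjects were recruited in the first place.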
But Schumm’s “meta-analysis” (and Cameron’s before him) doesn’t even have the benefit of being built off of random convenience samples. There were no convenience samples in any of the ten prior works that Schumm used for his meta-analysis. In fact, they weren’t even professional studies. They were popular books!
That’s right: each of the ten sources that Schumm used to construct his “meta-analysis” was a general-audience book about LGBT parenting and families, most of which are available on Amazon.com. Schumm read the books, took notes on each parent and child described in them, examined their histories, and counted up who was gay and who was straight among the kids. The ten books were:
- Abigail Garner’s Families Like Mine: Children of Gay Parents Tell It Like It Is
- Andrew Gotlieb’s Sons Talk About Their Gay Fathers: Life Curves
- Noelle Howey and Ellen Samuels’ Out of the Ordinary: Essays on Growing Up with Gay, Lesbian, and Transgender Parents
- Maureen Asten’s Lesbian Family Relationships in American Society: The Making of an Ethnographic Film
- Mary Boenke’s Trans Forming Families: Real Stories About Transgendered Loved Ones
- Jane Drucker’s Families Of Value: Gay and Lesbian Parents and their Children Speak Out
- Peggy Gillespie’s Love Makes a Family: Portraits of Lesbian, Gay, Bisexual, and Transgender Parents and Their Families
- Louise Rafkin’s Different Mothers: Sons and Daughters of Lesbians Talk About Their Lives
- Myra Hauschild and Pat Rosier’s Get Used to It!: Children of Gay and Lesbian Parents
- And Lisa Saffron’s What About the Children: Sons and Daughters of Lesbian and Gay Parents Talk About Their Lives
The first three were also used in Cameron’s 2006 paper. Schumm comments on these books, saying:
The authors of these ten books have done important data collection for the entire scientific community. While their samples may not be random, they may be no worse than the convenience and snowball samples used in much of previous research with gay and lesbian parents; certainly their combined dataset is far larger than that of the early studies on gay and lesbian parenting.
This is utter nonsense. None of the books contained any semblance of a sample — not even a convenience sample — and the authors certainly didn’t do anything approaching “important data collection” by any stretch of the imagination. What they did was tell stories, or, rather, help the families themselves tell their own stories. The people chosen in each of these volumes were not picked according to pre-defined criteria in the manner in which a researcher would construct a sample. They were chosen solely because the authors and editors thought their stories were compelling. In 2006, Abigail Garner, an advocate for children of LGBT parents, was particularly incensed at Cameron’s misuse of her book and his implication that the people selected to appear in it were in any way random. In fact, Abigail said that her book was intentionally non-random:
In fact, I had made a point of having a roughly even number of straight kids and second generation [gay, bisexual or transgender] kids so that both views would be evenly represented in the book. In other words, because of the goals of my book, I deliberately aimed to have 50% of the kids interviewed to be queer. Not because it is statistically reflective of the population, but to give it balance of perspective.
Schumm used Abigail’s book in precisely the same illegitimate way that Cameron did. Despite the fact that Abigail expressly said that she intentionally set the balance of gay kids to straight kids at about 50/50, Schumm used that sample as part of his “meta-analysis” to conclude that gay parents are more likely to produce gay kids. Schumm doesn’t say how many of his 262 “samples” he derived from Abigail’s book. Cameron said he used “over 50” of Abigail’s interviews, so it is likely a considerable chunk of Schumm’s “dataset” as well.
But even if the “dataset” from Abigail’s book were minimal, the other books don’t make up for the flaw. The books that Schumm chose are best characterized as literary works, many with essays and stories of kids “speaking out” about having gay parents. (Gotlieb’s Sons Talk About Their Gay Fathers is something of an exception. But here, too, his work is descriptive and not statistical. He also only talks about twelve young men.)
These stories were chosen for their literary and illustrative qualities, and for the compelling nature of each of their situations. The method for collecting the stories for these books is anything but random. In fact, the process is best described as anti-random. Sticking to a rule for randomness would likely have rendered these books both boring and unmarketable. The goal of these authors and editors was not to examine their subjects in a statistical sense, but in a literary sense — to explore issues and perspectives and different points of view, with each story chosen because it illustrates an issue that isn’t touched on by the other stories. And no matter how great or small the so-called “samples” were (Gotlieb’s consisted of only twelve young men), it’s a given that these authors and editors ensured that the experiences of LGBT children were well-represented alongside their straight compatriots, without regard to whether their numerical presence was in any way statistically representative.
That is how good stories are gathered, but it most certainly is not how a sample is collected for statistical purposes. To run statistics on this non-statistical (or anti-statistical) sample would be like judging the ratio of giraffes to chimpanzees in Africa by comparing the populations selected by the zookeepers at your local zoo. Whenever a non-random selection process is used, any attempt at statistics on that process is completely meaningless — and an abuse.
But to add further insult to that injury of statistics, Schumm needed a control sample of children from straight families. For that, he turned to a population-based representative sample from 1994: Edward O. Laumann et al.’s The Social Organization of Sexuality: Sexual Practices in the United States. That’s right. He used a deliberately anti-random sample of children of LGBT parents and compared that number with a population-based, nationally representative sample of children from households overall to conclude that gay parents are much, much more likely to cause their children to become gay.
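To see how mechanically this kind of comparison manufactures an “effect,” consider some back-of-the-envelope arithmetic. The figures below are assumptions for illustration, not Schumm’s actual numbers: if a book is curated to a roughly 50/50 gay/straight split, as Garner says hers was, and you compare it against a representative baseline of, say, 3%, you get an enormous ratio no matter what gay parents actually do.

```python
# Illustrative arithmetic only: both proportions below are assumed
# for the sake of the example, not drawn from Schumm's paper.

curated_gay_fraction = 0.50    # a book deliberately balanced 50/50
baseline_gay_fraction = 0.03   # rough representative-survey figure (assumed)

def odds(p):
    """Convert a proportion to odds (p against 1-p)."""
    return p / (1 - p)

odds_ratio = odds(curated_gay_fraction) / odds(baseline_gay_fraction)
risk_ratio = curated_gay_fraction / baseline_gay_fraction

print(f"odds ratio: {odds_ratio:.1f}")  # about 32x
print(f"risk ratio: {risk_ratio:.1f}")  # about 16.7x
```

The size of the resulting ratio is fixed entirely by the editors’ choice of balance and by the baseline figure — it tells you nothing about the families themselves.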
Which means that he’s now comparing elephants to oranges.
Before I end this critique, I have another surprise: In his JBS paper, Schumm actually cited me by name and included a complete block-quote from my 2006 criticism of Cameron’s study — while blithely ignoring the main point of that very criticism. Of course, he had to, because the main point of my criticism of Cameron’s work can be multiplied three-and-a-third-fold for Schumm’s: Cameron misused three books, while Schumm misused ten.
And having become a subject of Schumm’s highly selective citation, I can’t help but notice that Cameron often did the same thing. He was famous for picking out a small paragraph of other researchers’ work while ignoring that researcher’s primary findings in the hope that nobody would notice.
But I noticed with Cameron and I’m noticing it again with Schumm. And I’m not surprised. Back in 2007 when Cameron tried to launch an online “journal,” Schumm agreed to be part of Cameron’s editorial board. Cameron’s “journal” failed to get off the ground, but Schumm continues on. More recently he served as an “expert” witness alongside George “Rentboy” Rekers in Florida’s gay adoption trial. As far as I can tell, Schumm comes off appearing more “sciencey” than Cameron, but his methodology is exactly the same. And when you use the same methodology, you end up with the same result: junk science.
Stay tuned. I’ll have more later.