Yes, Virginia, the social and data sciences are “science”

Dedicated to the memory of Will H. Moore

This is the first in a two-part series on leveraging quantitative social science programs to provide training in data science, inspired by a recent invitation to provide input on that topic at an undisclosed location outside of Trumpland, where I may wish to seek asylum at some point, so I don’t want to mess it up.

In the process of working through my possible contributions, I realized my approach was predicated on the issue of whether these fields were sufficiently systematic and developed that the term “science” applied in the first place. They are—or at least the social science parts are—though as far as leveraging goes, as we shall see, the argument is more nuanced. [1] That’s for the next essay, however.

The “Is social science really science?” issue, on the other hand, never seems to go away, even as the arguments against it grow older and dumber with each passing year. In fact just a couple weeks ago I was out at a workshop in…well, let’s just be nice here: you know who you are…and once again had experiences that can best be compared to a situation where NASA has assembled a bunch of people to assess a forthcoming mission to a comet, and the conversation is going okay, if not great, and all of a sudden someone wearing the cap and gown of a medieval scholastic pipes up with “Excuse me, but your diagram has everything wrong: the Earth is at the center of the planetary system, not the Sun!” Harumph…thank you. Despite the interruption, the conversation continues, if more subdued, until about fifteen minutes later another distinguished gentleman—they are always male—also in cap and gown, exclaims “This is rank foolishness! Your so-called spacecraft can’t pierce the crystalline spheres carrying the planets, and it doesn’t need to: comets are an atmospheric phenomenon, and you should just use balloons!”

Protocol in these situations, of course, requires one to listen politely while others nod in agreement, all the while thinking “That is the dumbest f’ing thing I’ve heard since, well, since the last time I was in a meeting sponsored by this organization.” And hoping at least that the lunch-time sandwiches will be good. [they were]

The parade of nonsense invariably culminates when some august personage, who certainly hasn’t been exposed to anything remotely resembling philosophy of science since hearing a couple of apocryphal stories in 8th grade about Galileo, woefully intones the mantra “I’m sorry, but it is simply impossible to study humans scientifically: they are too unpredictable.”

Yeah, brilliantly original observation Sherlock! And that’s why every morning we see Sergey Brin and Mark Zuckerberg on opposite sides of a Hwy 101 freeway entrance in Palo Alto holding cardboard signs reading “Will code for food.” So sad.

So, before proceeding to my bolt-hole essay [2], let’s pursue this larger issue in some detail with, of course, seven observations. I’m going to focus most of my remarks on the scientific development of political science, which is the case I know best, but every one of these characterizations applies to all of the social sciences (in the case of economics, leading by a good fifty years, and demography, perhaps 75 years).

I will be the first to acknowledge, of course, that “political science” created a bit of trouble for itself from the start, when the “American Political Science Association” (APSA)—now merely a real estate venture that happens to sponsor academic conferences and journals—labelled itself in 1903 (!) as a “science” rather than, say, “government” or “politics.” This designation partially signaled common cause with the Progressive reaction to the corrupt machine politics of the day, but mostly reflected the fact that in the midst of the electrical and chemical revolutions of the late 19th century, “science” was a really cool label. Such was the zeitgeist, and the nascent APSA’s branding was no different than what its contemporary Mary Baker Eddy did in the field of religion. [3]

From this admittedly dodgy start, political science did, nonetheless, gradually develop a strong scientific tradition (as economics had about fifty years earlier), notably with Charles Merriam at the University of Chicago in the 1930s—though frankly, Aristotle’s mostly-lost empirical work on governance structures appears to have been pretty decent science even in the 4th century BCE—and from the 1960s onward, surfing a veritable technological tsunami of the computer and communications revolutions during the late 20th century. The results of these changes will be outlined below. [4]

From the perspective of formally developing a philosophy of social science, however, these developments hit at a rather bad time, as the classical logical positivist agenda, dating back to the 1920s, had ground to a slow and painful halt on the insurmountable issue of the infinite regress of ancillary assumptions: turtles all the way down. That program was replaced, for better or worse (from the perspective of systematic definitions, mostly the latter) by the historical-sociological approach of Thomas Kuhn. On the positive side, the technological imperatives/opportunities were so great that the scientific parts of the field simply barged ahead anyway and—large scale human behavior not being infinitely mutable—probably ended up about where they would have had the entire enterprise been carefully designed on consistent philosophical principles.

And thus, we see the following characteristics as defining the scientific study of human social behavior:

1. A consistent set of core methodological concepts accepted by virtually all researchers and institutions using the approach.

In the case of political science, these are conveniently embodied in the still widely used—for all intents and purposes canonical—text first published in 1994: King, Keohane and Verba, Designing Social Inquiry: Scientific Inference in Qualitative Research (KKV). [5] Apply the concepts and vocabulary of KKV (which, recursively, deal with the critical importance of concepts and vocabulary) anywhere in the world where the scientific approach to the study of political behavior is used, and you will be understood. KKV certainly has its critics, including me, because it defines the discipline’s methodology in a narrow frequentist statistical framework—albeit that approach probably still characterizes about 90% of published work in the discipline—but those debates occur on the foundations provided by KKV, who had adopted these from the earlier developments starting with Merriam and his contemporaries, and with massive input from elsewhere in the social and behavioral sciences, particularly economics and psychology.

2. A set of well-understood tools that have been applied to a wide variety of problems.

Okay, as I’ve argued extensively elsewhere [unpaywalled director’s cut here], this has probably become too narrow a set of tools, but at least the [momentously horrible in far too many applications] downsides of those tools are well understood and readily critiqued. These are taught through a very [again, probably too] standardized curriculum, both at elite graduate institutions and through specialized summer programs in the US and Europe, with the quantitative analysis program of the University of Michigan-based Inter-University Consortium for Political and Social Research dating back more than half a century. These core tool sets have experienced the expected forking and specialization through the creation of new and more advanced techniques. [6]

That core is not static: while published political science research is increasingly dominated by linear and logistic regression, [7] the past two decades saw the rapid introduction and application of, for example, both Bayesian model averaging for conventional numerical analysis and latent Dirichlet allocation for textual topic modeling, along with the gradual introduction of various standard machine learning methods, [8] and, I can say with certainty, we will soon see ample applications of various “deep learning” methods, at least from researchers at institutions that can afford to estimate these.

3. Theories and methods of inference

As with the “Science” part of “APSA,” this one’s a little complicated. Or it is if you are listening to me.

On the surface—and this is the canonical KKV line—quantitative political science has a very clear model of inference, the “null hypothesis significance testing” or “frequentist” statistical approach, which is nearly universally used. Unfortunately, frequentism is a philosophical mishmash that was simply the last-thing-standing at the end of vicious methodological debates over statistical inference during the first three decades of the 20th century, is just barely logically coherent while being endlessly misinterpreted, and its default assumptions are not really applicable to most [not all] of the problems to which it is applied. Except for that, frequentism is great.

The alternative to frequentism is Bayesian inference, which is coherent, corresponds to how most people actually think about the relationship between theories and data, and in the past forty or so years has become technically feasible, which was not true in the 1920s, when it was relegated to some intellectual netherworld by the victorious “ABBA”—anything but Bayesian analysis—faction.
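To make the contrast concrete, here is a minimal sketch (in Python, with entirely invented numbers; a coin-flip stand-in, not a claim about any actual political data): the frequentist p-value is a statement about the data assuming the null hypothesis, while the Bayesian posterior is a statement about the hypothesis given the data, which is the thing most people actually want.

```python
import math

# Invented example: 60 heads in 100 flips; is the coin biased toward heads?
n, k = 100, 60

def binom_pmf(i, n, p):
    return math.comb(n, i) * p**i * (1 - p)**(n - i)

# Frequentist: exact two-sided p-value under H0: p = 0.5, summing all
# outcomes at least as improbable as the one observed
ref = binom_pmf(k, n, 0.5)
p_value = sum(binom_pmf(i, n, 0.5) for i in range(n + 1)
              if binom_pmf(i, n, 0.5) <= ref)

# Bayesian: a uniform Beta(1,1) prior gives a Beta(k+1, n-k+1) posterior;
# integrate the posterior density to get P(theta > 0.5 | data)
def beta_pdf(x, a, b):
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(logc + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

def prob_theta_above(cut, a, b, steps=20000):
    h = (1.0 - cut) / steps                      # midpoint-rule integration
    return sum(beta_pdf(cut + (i + 0.5) * h, a, b) for i in range(steps)) * h

posterior = prob_theta_above(0.5, k + 1, n - k + 1)
print(f"p-value (data given H0):          {p_value:.3f}")
print(f"P(theta > 0.5 | data) (Bayesian): {posterior:.3f}")
```

Note the asymmetry: the same 60 heads yield a two-sided p-value just above the sacred 0.05 line, while the posterior probability of a head-biased coin is above 0.95. Neither number is wrong; they answer different questions.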

Finally, creeping in from the side, without—to date—any real underlying philosophy beyond sheer pragmatism, are the machine learning methods. Betting as ever on pragmatism, I’d wager that a yet-to-be-specified philosophical merging of Bayesian and machine learning approaches—which will not be particularly difficult to do—is likely to develop, and could well be the dominant approach in the field by, say, 2040. Just saying.

The important point here, however, is that the issue of inference is actively taught and debated, and in the case of the frequentist-Bayesian debate, this discussion goes back more than a century. There’s a lot going on here.

4. Theories and methods for assessing causality

Contrary to the incantations of the bozos who intone “correlation is not causality” at social scientists as though we’ve never heard the phrase before, causation is a fabulously complex problem: The Oxford Handbook of Causation alone is 800 pages in length, Judea Pearl’s book on the subject is 500 pages, and the Oxford Handbook series, perhaps noting that a copy of the Oxford Handbook of Causation is currently (19 April 2017) priced on Amazon’s secondary market at $2,592.22, [9] also offers an additional 750-page Oxford Handbook of Causal Reasoning.

Which is to say, the question of causality is complicated, and it’s always been complicated. It’s even more complicated when dealing with social behavior because one can’t even assume a strict temporal ordering of causes and effects. [10] But as with inference, we have frameworks and vocabularies for looking at the problem, about fifty years of work approaching it using a variety of different systematic empirical methods (invariably incorporating a variety of different sets of assumptions and trade-offs), and, throughout the development of the discipline, an increasingly sophisticated understanding of the issues involved.

5. Experimental methodologies, including laboratory, quasi- and synthetic

Classical experimental methods generally dropped out of political science for two or three decades: the issues of generalizing from laboratory studies—particularly those with subject pools consisting of reluctant middle-class white undergraduates—to more general behaviors seemed simply too great. But a couple decades ago the method got a second look and subsequently has developed a thriving research community using a variety of methods.

But even during the nadir of classical experiments, extensive analyses were done with “natural” or “quasi-” experiments, particularly in the policy realm. Starting in the 1990s, “synthetic” experiments using artificially matched controls (in the discipline, these are often classed in the “causation” literature) have also seen extensive work.
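Since “artificially matched controls” can sound like hand-waving, here is a hedged toy sketch of the core idea (all numbers invented; this is one-nearest-neighbor matching, the simplest member of the family, not anyone’s published estimator): treatment assignment depends on a confounding covariate, so the naive difference in means is badly biased, while matching each treated unit to its most similar control recovers something close to the true effect.

```python
import math
import random

random.seed(7)

# Invented data: outcome depends on covariate x plus a true treatment
# effect of +2.0, and treatment is more likely at high x (confounding)
units = []
for _ in range(2000):
    x = random.gauss(0, 1)
    treated = random.random() < 1.0 / (1.0 + math.exp(-x))  # P(T) rises with x
    y = 3.0 * x + (2.0 if treated else 0.0) + random.gauss(0, 1)
    units.append((x, treated, y))

treated_units = [(x, y) for x, t, y in units if t]
controls      = [(x, y) for x, t, y in units if not t]

# Naive difference in means: biased upward, since treated units have high x
naive = (sum(y for _, y in treated_units) / len(treated_units)
         - sum(y for _, y in controls) / len(controls))

# One-nearest-neighbor matching: compare each treated unit to the control
# whose covariate value is closest
def matched_estimate(treated, pool):
    gaps = []
    for x, y in treated:
        cx, cy = min(pool, key=lambda c: abs(c[0] - x))
        gaps.append(y - cy)
    return sum(gaps) / len(gaps)

matched = matched_estimate(treated_units, controls)
print(f"true effect:      2.00")
print(f"naive estimate:   {naive:.2f}")
print(f"matched estimate: {matched:.2f}")
```

In the actual literature, of course, matching is done on many covariates (or on a propensity score), with diagnostics for balance and overlap, but the logic is the same.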

It is certainly the case that compared to chemistry or mechanics, it’s hard to do a true experiment in political science (though with a suitably cooperative government and a suitably lax institutional review board, you can get pretty close). Last time I looked, however, it was also pretty hard to do these (outside a computer) in geology, astronomy and climatology. And last time I looked, no one (outside Trump administration appointees) was questioning the scientific status of those fields.

6. Consistent standards for the presentation and replication of data and results

Thanks in large part to the early and persistent efforts of Harvard’s Gary King, political science has been well ahead of the curve, by a good twenty years, in combating what is now called the “reproducibility crisis.” [11] Completely solving it, no—that won’t occur until we pry people away from thinking that loading lots of closely related variables into methods with the potential for providing wildly different results based on trivial differences in their methods of matrix inversion and gradient descent is a good idea [it isn’t]—but the checks are in place and, outside the profit-seeking journals (alas, most of them), the proper institutional expectations are as well.
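For the skeptics, a minimal sketch of the “trivial differences” problem (invented data, plain Python, not anyone’s actual model): two predictors that are almost perfectly collinear make the normal equations nearly singular, so the individual coefficient estimates become unstable and offsetting even though their sum, and the overall fit, is perfectly well behaved.

```python
import random

random.seed(1)

# Invented data: x2 is x1 plus microscopic noise, so X'X is nearly singular;
# the true model is simply y = x1 + noise
n = 200
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [a + random.gauss(0, 1e-6) for a in x1]
y  = [a + random.gauss(0, 0.1) for a in x1]

def ols_two_var(x1, x2, y):
    # Solve the 2x2 normal equations (X'X) b = X'y directly
    a11 = sum(a * a for a in x1)
    a12 = sum(a * b for a, b in zip(x1, x2))
    a22 = sum(b * b for b in x2)
    c1  = sum(a * v for a, v in zip(x1, y))
    c2  = sum(b * v for b, v in zip(x2, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * c1 - a12 * c2) / det, (a11 * c2 - a12 * c1) / det)

b1, b2 = ols_two_var(x1, x2, y)
# The individual coefficients are unstable and offsetting, while their
# sum stays near the true value of 1
print(f"b1 = {b1:.1f}, b2 = {b2:.1f}, b1 + b2 = {b1 + b2:.3f}")
```

Swap in a different matrix-inversion routine, or perturb a single observation by a hair, and those individual coefficients will move wildly; the near-zero determinant is doing all the damage.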

The critical observation here, however, is that we can even have a reproducibility crisis: for three or four decades now quantitative political science has had a sufficiently strong foundation in shared data and methodologies that it is possible to assess whether something has gone wrong. Usually these are innocent mistakes due to carelessness and/or covariance matrices that are a bit too close to being singular, but every once in a while, not so innocent, ending up as a story in every major publication in the country, as well as Nature and Science.

From the perspective of the scientific method, if not the individuals and institutions involved, that’s a good thing. Post-modernist approaches, I can assure you, will never experience a reproducibility crisis.

7. A mature international research community grounded in major universities and with active scientific communication through professional conferences and journals [12]

Which is to say, all of the features I’ve discussed in the previous sections have been thoroughly institutionalized, following exactly the model that goes back to the establishment of the Royal Society in London in 1660. Ironically, due to predictable institutional lock-in effects, it was necessary for the quantitative side of political science to actively break off from the APSA in the 1980s—a critical insight of John Jackson and Chris Achen, who founded and shepherded the development of the now-independent Society for Political Methodology in that period—but by the early 2000s the SPM’s journal, Political Analysis, had eclipsed the journals of the older organizations as measured by the usual impact factor scores. [13] Graduate students at [most] elite institutions can get state-of-the-art training in methodology, and go on to post-docs and, ever so occasionally, faculty positions at other elite institutions. [14] Faculty and grad students—well, prior to the outcome of the 2016 US election—move easily between institutions in North America and Europe. [15]


The upshot: the social “sciences” involve a complex community dealing with multiple and continually evolving approaches and debates at both the philosophical and methodological levels. Returning to our touchstone of the Oxford Handbook series, the Oxford Handbook of Political Methodology, whose three editors include two past presidents of the Society for Political Methodology, runs to 900 pages. Oh, and since we’re discussing the social sciences more generally, did I mention that the Oxford Handbooks relevant to statistical economic modeling will set you back about 3,000 additional pages?

Do you need to master every nuance of these volumes to productively participate in debates on the future applications of social science methodology? No, you don’t need to master all of it—no one does—but before making your next clever little remark generalizing upon issues whose history and complexity you are utterly clueless about, could you please at least get some of the basics down, maybe even at the level of a first-year graduate student, and let the rest of us who know the material at levels considerably above that of a first-year graduate student get some work done?

Just saying.

As promised, we will pick up on some of the more positive aspects of this issue in the near future.


1. So I’m not going to be using the phrase “data science” much in this essay, though in most of the “data sciences” one is interested in human individual and social behavior, so it’s the same thing.

2. Which is to say, now drawing to a close my tortured tale of the travails which I must endure to provide you, the reader, with a seemingly endless stream of self-righteously indignant snark.

3. In the domain of religion, L. Ron Hubbard would take this approach “to 11” in the 1950s, the Maharishi Mahesh Yogi would do the same in the 1970s and so on: history doesn’t repeat but it certainly rhymes. And by the way, anyone who thinks U.S. culture is relentlessly “anti-science” hasn’t spent much time looking at advertising for cosmetics and diet programs.

4. Those comments will focus only on the scientific aspects of political science, which retains a variety of distinct approaches—for example philosophical, legal-institutional, historical, observational, and, of course, in recognition of 4-20, we must acknowledge post-modernism—rather than being exclusively scientific. Let a hundred flowers bloom: my point is simply that if, because of organizational imperatives (and, perhaps, the ability to get consistent reproducible results), you want a “scientific” study of politics, these methods are extensive, sophisticated, and well developed, and this has been the case for decades.

5. Two of the authors of this now canonical text were, remarkably, from Harvard, a little institution just outside of Boston you may have heard of, and yet even so the book managed to influence the entire field: imagine that! Despite the title, it’s a guide to quantitative research: the word “qualitative” is just in there to gaslight you.

6. Which the organization-which-shall-not-be-named ignores entirely, preferring instead to pay people to develop their own independent methods no one else in the field takes seriously. Ain’t it great to have untold gobs of money? Organization-which-shall-not-be-named: pretty slick title for this essay, eh?

7. All too commonly over-specified logistic models applied, often as not, to problems which could be far more robustly studied using simple analysis-of-variance methods originally developed by Laplace in the 1770s, but no longer taught as part of the methodological canon…I digress…

8. Mind you, some of us were applying machine learning methods to conflict forecasting in the 1980s—see Valerie Hudson’s edited Artificial Intelligence and International Politics (1991)—though this didn’t catch on and, given the data and computational power we had at the time, was perhaps premature.

9. Seriously, but there’s a pretty mundane causal explanation for that sort of thing and it involves computers.

10. Classic case here in political science—the buzz-term is “endogeneity”—is determining the efficacy of political campaign expenditures: In general, more money will be spent on races that are anticipated as being close, so simple correlations will show high expenditures are associated with narrower victories, the opposite of the intended effects. There are ingenious ways for getting around this using statistical models—albeit they are particularly difficult on this problem because the true effects may actually be fairly weak, as Jeb Bush during the U.S. Republican Party primaries in 2016 was only the latest of many well-funded candidates to discover—and those methods are anything but intuitive.
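For anyone who wants to see the endogeneity problem rather than take my word for it, here is a minimal simulation (all numbers invented): spending responds to the forecast margin, spending has a small true positive effect of +0.2, and yet the raw correlation between spending and the margin of victory comes out strongly negative. Residualizing both variables on the forecast, a crude stand-in for those non-intuitive statistical fixes, recovers the positive relationship.

```python
import random

random.seed(42)

# Invented toy model: more money flows to races forecast to be close,
# and spending has a small TRUE POSITIVE effect (+0.2) on the margin
N = 5000
expected, spend, margin = [], [], []
for _ in range(N):
    e = random.uniform(0, 20)                        # forecast victory margin
    s = max(0.0, 10 - 0.4 * e + random.gauss(0, 1))  # close race -> more money
    m = e + 0.2 * s + random.gauss(0, 2)             # realized victory margin
    expected.append(e); spend.append(s); margin.append(m)

def corr(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def residuals(y, x):
    # Residuals from a simple one-variable least-squares fit of y on x
    mx, my = sum(x) / len(x), sum(y) / len(y)
    b = (sum((a - mx) * (c - my) for a, c in zip(x, y))
         / sum((a - mx) ** 2 for a in x))
    return [c - my - b * (a - mx) for a, c in zip(x, y)]

naive = corr(spend, margin)        # negative: money "causes" losing?
adjusted = corr(residuals(margin, expected), residuals(spend, expected))
print(f"naive correlation:    {naive:.3f}")
print(f"adjusted correlation: {adjusted:.3f}")  # positive, once the forecast is held fixed
```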

11. Shout-out to Charlottesville’s own Center for Open Science.

12. Okay, so too many of those journals publish lowest-common-denominator research that is five years out of date, and they all are fiercely resisting open access, but that’s just rent-seeking behavior that could be stopped overnight with a suitable collaborative assault by about twenty-five deans and five research funders. Someday. I’m a Madisonian: I study humans, not angels.

13. Meanwhile both the political science experimentalists and social network analysts have split off from the more traditionally statistical SPM, forming their own organizations, with the text analysts probably soon to follow. Which really pisses off some people in SPM but hey, things change: go for it. Also it’s not like the SPM summer meetings, heading for their 34th consecutive year, are exactly undersubscribed.

14. Most do not and get jobs at lower ranked institutions: this is true in all disciplines and there’s even a name for it, which I can’t readily locate.

15. For whatever reason, I’ve long been more comfortable professionally with the Nordic and Germanic Europeans than with my fellow Americans. Perhaps because I write stuff like this. Or because the Europeans don’t harbor the same preconceptions about Kansas as do Americans…

I would also note that I’m using the phrase “North American and European” simply to reflect where things stand at the moment: the conservatism of the well-funded Asian universities (e.g. Japan, South Korea, older Chinese institutions) and the lack of resources (and still more academic conservatism) in pretty much everywhere else in the world limit the possibilities for innovation. But in time this will happen: generations of graduate students from around the world have been getting trained in (and have made important contributions to) these methods, but the institutional barriers in transferring them, particularly when dealing with the study of potentially sensitive political issues (that’s you, China…), are substantial.
