What if a few grad programs were run for the benefit of the graduate students?

I’ve got a note in my calendar around the beginning of August—I was presumably in a really bad mood at [at least] some point over the past year—to retweet a link to my blog post discussing my fondness for math camps—not!—but in the lazy-hazy-crazy days of summer, I’m realizing this would be rather like sending Donald Trump to meet with the leaders of U.S. allies: gratuitously cruel and largely meaningless. Instead, and more productively, an article in Science brought to my attention a recent report [1] by the U.S. National Academies of Sciences, Engineering, and Medicine (NASEM)—these people, please note, are a really big deal. The title of the article—“Student-centered, modernized graduate STEM education”—provides the gist, but here’s a bit more detail from the summary of the report provided in the Science article:

[the report] lays out a vision of an ideal modern graduate education in any STEM field and a comprehensive plan to achieve that vision. The report emphasizes core competencies that all students should acquire, a rebalancing of incentives to better reward faculty teaching and mentoring of students, increased empowerment of graduate students, and the need for the system to better monitor and adapt to changing conditions over time.  … [in most institutions] graduate students are still too often seen as being primarily sources of inexpensive skilled labor for teaching undergraduates and for performing research. …  [and while] most students now pursue nonacademic careers, many institutions train them, basically, in the same way that they have for 100 years, to become academic researchers

Wow: reconfigure graduate programs not only for the 21st century but to benefit the students rather than the institutions. What…a…concept!

At this point my readership splits: those who have never been graduate students (a fairly small minority, I’m guessing) saying “What?!? Do you mean graduate programs aren’t run for the benefit of their students???” while most of those who have done time in graduate school roll their eyes and cynically say “Yeah, right…”, with the remainder rolling on the ground in uncontrollable hysterical laughter.[2]

But purely for the sake of argument, and because these are the lazy-hazy-crazy days of summer, and PolMeth is this week and I got my [application-focused!] paper finished on Friday (!!), let’s just play this out for a bit, at least as it applies to political methodology, the NASEM report being focused on STEM, and political methodology is most decidedly STEM. And in particular, given the continued abysmal—and worsening [3]—record for placement into tenure-track jobs in political science, let’s speculate for a bit about what a teaching-centered graduate-level program for methodologists, a.k.a. data scientists, intending to work outside of academia might look like. For once, I will return to my old framework of seven primary points:

1. It will basically look like a political methodology program

I wrote extensively on this topic about a year ago, taking as my starting point that experience in analyzing the heterogeneous and thoroughly sucky sorts of data quantitative political scientists routinely confront is absolutely ideal training for private sector “data science.” The only new observation I’d add, having sat through demonstrations of several absolutely horrible data “dashboards” in recent months, is formal training in UX—user interface/experience—in addition to the data visualization component. So while allowing some specialization, we’d basically want a program evenly split between the four skill domains of a data scientist:

  • computer programming and data wrangling
  • statistics
  • machine learning
  • data visualization and UX

2. Sophisticated problem-based approaches taught by instructors fully committed to teaching

One of the reasons I decided to leave academia was my increasing exposure to really good teaching methodologies combined with a realization that I had neither the time, energy, nor inclination to use these. “Sage on the stage” doesn’t cut it anymore, particularly in STEM.

Indeed, I’m too decrepit to do this sort of thing—leave me alone and just let me code (and, well, blog: I see from WordPress this is published blog post #50!)—but there are plenty of people who can enthusiastically do it and do it very well. The problem, as the NASEM report notes in some detail, is that in most graduate programs there are few if any rewards for doing so. But that’s an institutional issue, not an issue of the total lack of humans capable of doing the task, nor the absence of a reasonably decent body of research and best-practices—if periodically susceptible, like most everything social, to fads—on how to do it.

3. Real world problems solved using remote teaming

Toy problems and standardized data sets are fine for [some] instruction and [some] incremental journal publications, but if you want training applicable to the private sector, you need to be working with raw data that is [mostly] complete crap, digital offal requiring hours of tedious prep work before you can start applying glitzy new methods to it. Because that, buckeroos, is what data science in the private sector involves itself with, and that’s what pays the bills. Complete crap is, however, fairly difficult to simulate, so much better to find some real problems where you’ve got access to the raw data: associations with companies—the sorts of arrangements that are routine in engineering programs—will presumably help here, and as I’ve noted before, “data science” is really a form of engineering, not science. 

My relatively new suggestion is for these programs to establish links so that problem-solving can be done in teams working remotely. Attractive as the graduate student bullpen experience may be, it isn’t available once you leave a graduate program, and increasingly, it will not be duplicated in many of the best jobs that are available, as these are now done using temporary geographically decentralized teams. So get students accustomed to working with individuals they’ve never met in person who are a thousand or eight thousand or twelve thousand miles away and have funny accents and the video conferencing doesn’t always work but who nonetheless can be really effective partners. In the absence of some dramatic change in the economics and culture of data science, the future is going to look like the “fully-distributed team” approach of parse.ly, not the corporate headquarters gigantism of FAANG.

4. One or two courses on basic business skills

I’ve written a number of blog entries on the basics of self-employment—see here and here  and here—and for more information, read everything Paul Graham has ever written, and more prosaically, my neighbor and tech recruiter Ron Duplain always has a lot of smart stuff to say, but I’ll briefly reiterate a couple of core points here.

[Update 31 July: Also see the very useful EuroPython presentation from Ines Montani of explosion.ai, the great folks that brought you spaCy and prodigy. [9]]

Outside of MBA programs—which of course go to the opposite extreme—academic programs tend to treat anything related to business—beyond, of course, reconfiguring their curricula to satisfy the funding agendas of right-wing billionaires—as suspect at best and more generally utterly worthy of contempt. Practical knowledge of business methods also varies widely within academia: while the stereotype of the academic coddled by a dissertation-to-retirement bureaucracy handling their every need is undoubtedly true as the median case, I’ve known more than a few academics who are, effectively, running companies—they generally call them “labs”—of sometimes quite significant size.

You can pick up relevant business training—well, sort of—from selectively reading books and magazine articles but, as with computer programming, I suspect there are advantages to doing this systematically [and some of my friends who are accountants would definitely prefer if more people learned business methods more systematically]. And my pet peeve, of course, is getting people away from the expectations of the pervasive “start-up porn”: if you are reasonably sane, your objective should be not to create a “unicorn” but rather a stable and sustainable business (or set of business relationships) where you are compensated at roughly the level of your marginal economic contribution to the enterprise.[4]

That said, the business angle in data analytics is at present a rapidly moving target as the transition to the predominance of remote work—or if you prefer, “gig-economy”—plays out. In the past couple of weeks, there were articles on this transition in both The Economist’s “The World If…” feature and Science magazine’s “Science Careers” [6 July 2018][5]. But as The Economist makes clear, we’re not there yet, and things could play out in a number of different ways.[6] Still, most people in the software development and data analytics fields should probably at least plan for the contingency that they will not be spending their careers as coddled corporate drones and instead will find themselves in one of those “you only eat what you—or you and your ten-person foraging party of equals—kill” environments. Where some of us thrive. Grrrrrrrr. There are probably some great market niches for programs that can figure out what needs to be covered here and how to effectively teach it.

5. Publication only in open-access, contemporaneous venues

Not paywalled journals. Particularly not paywalled journals with three to five year publication lags. As I note in one of several snarky asides in my PolMeth XXXV paper:

Paywalled journals are virtually inaccessible outside universities so by publishing in these venues you might as well be burying your intellectual efforts beneath a glowing pile of nuclear waste somewhere in Antarctica. [italics in original]

Ideally, if a few of these student-centered programs get going, some university-sponsored open access servers could be established to get around the current proliferation of bogus open access sites: this is certainly going to happen sooner or later, so let’s try “sooner.” Bonus points: such papers can only be written using materials available from open access sources, since the minute you lose your university computer account, that’s the world you will live in.

It goes without saying that students in these programs should establish a track record of both individual and collective code on GitHub, GitHub (and StackOverflow) having already solved the open access collective action problem in the software domain.[7]

6. Yes, you can still use these students as GTAs and GRAs provided you compensate them fairly

Okay, I was in academia long enough to understand the basic business model of generating large numbers of tuition credit hours—typically about half—in massive introductory classes staffed largely by graduate students. I was also in academia long enough to know that graduate training is not required for students to be able to competently handle that material: You just need smart people (the material, remember, is introductory) and, ideally, some training and supervision/feedback on teaching methods. To the extent that student-centered graduate programs have at least some faculty strongly committed to teaching rather than to increasing the revenues of predatory publishers, you may find MA-level students are actually better GTAs than research-oriented PhD students.

As far as providing GRAs, my guess is that generating basic research—open access, please—out of such programs will also occur naturally, and again, because the programs have a focus on applications, these students may prove better (or at least, less distracted) than those focused on the desperate—and in political science, for three-quarters, inevitably futile—quest for a tenure-track position. You might even be able to get them to document their code!

In either role, however, please provide those students with full tuition, a living wage and decent benefits, eh? The first law of parasitism being, of course, “don’t kill the host.” If that doesn’t scare you, perhaps the law of karma will.

7. Open, transparent, unambiguous, and externally audited outcomes assessments

Face it, consumers have more reliable information on the contents of a $1.48 can of cat food than they have on the outcomes of $100,000 business and law school programs, and the information on professional programs is usually far better than the information on almost all graduate programs in the social sciences. In a student-centered program, that has to change, lest we find, well, programs oriented towards training for jobs that only a quarter of their graduates have any chance of getting.

In addition to figuring out standards and establishing record-keeping norms, making such information available is going to require quite the sea change in attitudes, and thus far deans, associate deans, assistant deans, deanlets, and deanlings have successfully resisted open accountability by using their cartel powers.[8] In an ideal world, however, one would think that market mechanisms would favor a set of programs with transparent and reliable accountability.

Well, a guy can dream, eh?

See y’all—well, some subset of y’all—in Provo.

Footnotes

1. Paywalled, of course. Because elite not-for-profit organizations sustained almost entirely by a combination of tax monies and grants from sources who are themselves tax-exempt couldn’t possibly be expected to make their work accessible, particularly since the marginal cost of doing so is roughly zero.

2. What’s that old joke from the experimental sciences? If you’re embarking on some procedure with potentially painful consequences, better to use graduate students rather than laboratory rats, because people are less likely to be emotionally attached to graduate students.

3. The record for tenure track placement has gotten even worse, down to 26.3%, which the APSA notes “is the lowest reported figure since systematic observation began in the 2009-2010 academic year.” 

4. Or if you want to try for the unicorn startup—which is to say, you are a white male from one of a half-dozen elite universities—you at least understand what you are getting into, along with the probabilities of success—which make the odds of a tenure-track job in political science look golden in comparison—and the actual consequences, in particular the tax consequences, of failure. If you are not a white male from one of a half-dozen elite universities, don’t even think about it.

5. Science would do well to hire a few remote workers to get their web page functioning again, as I’m finding it all but inoperable at the moment. Science is probably spending a bit too much of their efforts breathlessly documenting a project which, using a mere 1000 co-authors, has detected a single 4-billion-year-old neutrino.

6. And for what it’s worth, this is a place where Brett Kavanaugh could be writing a lot of important opinions. Like maybe decisions which result in throwing out the vast cruft of gratuitous licensing requirements that have accumulated—disproportionately in GOP-controlled states—solely for the benefit of generally bogus occupational schools.

7. And recently received a mere $7.5 billion from Microsoft for their troubles: damn hippies and open source, never’ll amount to anything!

8. Though speaking of cartels—and graduate higher education puts OPEC, though not the American Medical Association, to shame on this dimension—the whole point of a cartel is to restrict supply. So a properly functioning cartel should not find itself in a position of over-producing by a factor of three (2015-2016 APSA placements) or four (2016-2017 placements). Oh…principal-agent problems…yeah, that…never mind…

9. Watch the presentation, but for a quick summary, her main point is that the increasingly popular notion that a successful company has to be large, loss-making, and massively funded is bullshit: if you actually know what you are doing, and are producing something people want to buy, you can be self-financing and profitable pretty much from the get-go. “Winner-take-all” markets are only a small part of the available opportunities—though you wouldn’t know that from the emphasis on network effects and FOMO in start-up porn, now amplified by the suckers [10] who pursue the opportunities in data science tournaments rather than the discipline of real markets—and there are plenty of possibilities out there for small, complementary teams who create well-designed, right-sized software for markets they understand. Thanks to Andy Halterman for the pointer.

10. Okay, “suckers” is probably too strong a word: more likely these are mostly people—okay, bros—who already have the luxury of an elite background and an ample safety net provided by daddy’s and mommy’s upper 1% income and social networks so they can afford to blow off a couple years doing tournaments just for the experience. But compare, e.g., to Steve Wozniak and Steven Jobs—and to a large extent, even with their top-1% backgrounds, Bill Gates and Paul Allen—who created things people actually wanted to buy, not just burning through billions to manipulate markets (Uber, and increasingly it appears, Tesla).

Posted in Higher Education, Methodology

Witnessing a paradigm shift?

The philosopher of science Thomas Kuhn is famous—beyond an apparent penchant for throwing ashtrays [1]—for his vastly over-generalized concept of “paradigm shifts” in scientific understanding, where a set of ideas once thought unreasonable becomes the norm, exchanging this status with ideas on the same topic once almost universally accepted. [2] This typically involves a generational change—Max Planck famously observed that scientific progress occurs one funeral at a time—but can sometimes occur more quickly. And I think I’m watching one develop in the field of predictive models of conflict behavior.

The context here [3] was a recent workshop I attended in Europe on that topic. The details don’t matter but suffice it to say this involved an even mix of the usual suspects in quantitative conflict data and modeling—I’m realizing there are perhaps fifty of us in the world—and an assortment of NGOs and IGOs, mostly consumers of the information. [4]  Held amid the monumental-brutalist architecture housing the pan-European bureaucracy, presumably the model for the imperial capital in The Hunger Games, leading one to sympathize, at least to a degree, with European populist movements. And by the way, in two days of discussions no one mentioned Donald Orange-mop even once: we’re past that.

The promised paradigm change is on the issue of whether technical models for forecasting conflict are even possible—and as I’ve argued vociferously in the past, academic political science completely missed the boat on this—and it looks as though we’ve suddenly gone from “that’s impossible!” to “okay, where’s the model, and how can we improve it?” This new assessment being entirely due to the popularization over the past year of machine learning. The change, even taking into account that the Political Instability Task Force has been doing just this sort of thing, and doing it well, for at least fifteen years, has been stunningly rapid.

Not, of course, without more than a few bumps along the way. Per the persistent hype around “deep learning,” there’s a strong assumption that “artificial intelligence” is now best done with neural networks—and the more complex the better—whereas there’s consistent evidence, both from this workshop and a number of earlier efforts I’m familiar with, that because of the heterogeneity of the cases and the tiny number of positives, random forests are substantially better. There’s also an assumption that you can’t figure out which variables are important in a machine learning model: again, wrong, as this is routine in random forests and can be done to a degree even in neural nets, though it’s rather computationally intensive. One presenter—who had clearly consumed a bit too much of the TensorFlow Kool-Aid—noted these systems “learn on their own”: alas, that’s not true for this problem [6] and in fact we need lots of training cases, and in conflict forecasting models the aforementioned heterogeneity and rare positives still hugely complicate estimation.
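For readers who want to see what “routine” looks like here, the sketch below uses scikit-learn on synthetic data standing in for a rare-positive conflict panel; it is not anyone’s workshop model, and all names and settings are invented for illustration. The point is simply that class weighting copes with the rare positives and variable importances fall out of a random forest for free.

```python
# Toy illustration only: synthetic data standing in for a country-month panel
# with rare positives; not any workshop participant's actual model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# roughly 2% positives, mimicking rare conflict onsets
X, y = make_classification(n_samples=5000, n_features=20, n_informative=5,
                           weights=[0.98, 0.02], random_state=42)

rf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                            random_state=42)
# use AUC rather than raw accuracy, which is meaningless at a 2% base rate
print(cross_val_score(rf, X, y, cv=5, scoring="roc_auc").mean())

# "you can't tell which variables matter" -- routine in random forests:
rf.fit(X, y)
for i in np.argsort(rf.feature_importances_)[::-1][:5]:
    print(f"feature {i}: importance {rf.feature_importances_[i]:.3f}")
```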

So these models are not easy, but they are now considered possible, and there is an actual emerging paradigm: In the course of an hour I saw presentations by a PhD student in a joint program at the Universities of Stockholm and Iceland developing a resource-focused conflict forecasting model and a data scientist from the World Bank and FAO working on famine forecasting [7], both implementing essentially the same very complex protocols for training, calibration, and cross-validation of various machine learning models. [8][15]
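To give a flavor of what such a protocol involves (and only a flavor: the actual pipelines were far more elaborate, with temporal splits, multiple model families, and so on), here is a minimal scikit-learn sketch, again on synthetic stand-in data, that calibrates predicted probabilities and scores them out of sample.

```python
# Minimal sketch of a train / calibrate / evaluate protocol; synthetic data,
# and far simpler than the pipelines described above.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import brier_score_loss, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, n_informative=5,
                           weights=[0.95, 0.05], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

base = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                              random_state=0)
# calibrate the predicted probabilities via internal cross-validation
model = CalibratedClassifierCV(base, method="isotonic", cv=5)
model.fit(X_train, y_train)

p = model.predict_proba(X_test)[:, 1]
print("AUC:   ", roc_auc_score(y_test, p))
print("Brier: ", brier_score_loss(y_test, p))
```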

Well, we live in interesting times.

There’s a fairly standard rule-of-thumb in economic history stating it takes between one and two human generations—20 to 40 years—to effectively incorporate a major new technology into the production structure of organizations. The—yes, paradigmatic—cases are the steam engine, electricity, and computers. [9] I’ve sensed for quite some time that we’re in this situation, perhaps half-way through the process, with respect to technical forecasting models and foreign policy decision-making. [10] As Tetlock and others have copiously demonstrated, the accuracy of human assessments in this field is very low, and as Kahneman and others have copiously demonstrated, decision-making on high-risk, low-probability issues is subject to systematic biases. Until quite recently, however, data [11] and computational constraints meant there were no better alternatives. But there are now, so the issue is how to properly use this information. 

And not every new technology takes a generation before it is adopted: to take some examples most readers will be familiar with, word-processing, MP3 music files, flat-screen displays, and cell phones displaced their earlier rivals almost in a historical eye-blink, albeit except for word processing this was largely in a personal rather than organizational context. In the long-ago research phase of ICEWS—a full ten years ago now, wow…—I had a clever slide (well, I thought it was clever) showing a robot saying “We bomb Mindanao in six hours” and a medal-bedecked general responding “Yes, master” to illustrate what technical forecasting models are not designed to do. But with accuracy 20% to 30% better than human forecasts, one would think these approaches should have some impact on the process. That is going to take time and effort to figure out, particularly since human egos and status are involved, and the models will make mistakes. And present a new set of challenges, just as electrical power presents a different set of risks and opportunities than the steam and water power it replaced. But their eventual incorporation into policy-making seems inevitable.

Finally, this might have implications for the future demand for event data, as models customized for very specific organizational needs finally provide a “killer app” using event data as a critical input. As it happens, no one has yet come up with something that does the job of event data—recording the day-to-day interactions of political actors as reported in the open press—without simply looking pretty much like plain old event data: Both the CAMEO and PLOVER [12] event coding systems still have the basic structure of the 60-year-old WEIS, because WEIS incorporates most things in the news of interest to analysts (and their quantitative models). While the forecasting models I’m currently seeing primarily use annual (and state-level) structural data, as soon as one drops to the sub-annual level (and, increasingly, sub-state, as geocoding of event data improves) event data are really the only game in town. [13]
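For anyone who has never looked at event data, the sketch below shows the basic who-did-what-to-whom record structure that WEIS, CAMEO, and PLOVER all share; the field names and the example record are illustrative, not any dataset’s actual schema.

```python
# Deliberately simplified illustration of the who-did-what-to-whom structure
# shared by WEIS, CAMEO, and PLOVER; field names and the example record are
# illustrative, not an actual dataset schema.
from dataclasses import dataclass

@dataclass
class Event:
    date: str      # date of the reported interaction
    source: str    # actor initiating the action
    target: str    # actor on the receiving end
    code: str      # event category, e.g. CAMEO 042 = "Make a visit"
    location: str  # geocoded place, increasingly sub-state

example = Event("2018-06-12", "USA", "PRK", "042", "Singapore")
print(example)
```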

Footnotes

1. Recently back in the news…well, sort of…thanks to a thoroughly unflattering book by documentary film-maker Errol Morris, whose encounters with Kuhn when Morris was a graduate student left a traumatic impression of Kuhn being a horse’s ass of truly mythic proportions, though some have suggested parts of the book may themselves border on mythic…nonetheless, be civil to your grad students lest they become award-winning filmmakers and/or MacArthur Award recipients long after you and any of your friends are around to defend your reputation. Well, and perhaps because being nice to your grad students is simply the right thing to do.

2. And thus the hitherto obscure word “paradigm” entered popular parlance: a number of years ago, at the height of the dot-com bubble, social philosopher David Barry proposed simply making up a company name, posting this on the web, and seeing how much money would pour in. The name he proposed was “Gerbildigm”, combining “gerbil” and “paradigm.” Mind you, that’s scarcely different than what actual companies were doing in the late 1990s to generate funding. Nowadays, in contrast, they simply say they are exploring applications of deep learning.

3. And by the way, this isn’t the snark-fest promised in the previous blog entry; that’s still to come, though events are so completely depressing at the moment—okay, “Christian” conservatives, you won the right not to bake damn wedding cakes, but at the price of suborning tearing infants out of the arms of their mothers: you really think that tradeoff is a good deal? Will your god? You’ve got an exemption from Matthew 25:35-40 now, eh? You’re completely confident about this? You sure?—I’m having difficulty gearing up for a snark-fest even though it is half-written. Though stuff I have half-written would fill a not-inconsequentially sized bookshelf.

4. It is also notable that the gender ratio at this very technical workshop was basically 50/50, and this included the individuals developing the data and models, not just the consumers. In the U.S., that ratio would have been 80/20 or even 90/10. So, by chance, is the USA excluding some very talented potential contributors to this field? [5] And is this related to the work of Jayhawk economist Donna Ginther, highlighted on multiple occasions by The Economist over the past few months, showing that in the academic discipline of economics, gender discrimination appears to be considered a feature rather than a bug? Which cascaded over into the academic field of political methodology, though thanks to the efforts of people like Janet Box-Steffensmeier, Sara Mitchell, Caroline Tolbert, and institutions like VIM it is not as bad as it once was. But compared to my experiences in Europe, it could still improve.

5. I recently stumbled onto historian Marie Hicks’s study titled Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing.  Brogrammers take note: gender discrimination doesn’t necessarily have a happy ending.

6. Self-learning is, famously, possible for games like poker, chess and go, which have the further advantage that the average person can understand the application, thus providing ample fodder for breathless headlines, further leading to fears that our new Go-and-Texas-Hold’em neural network overlords will, like Daleks and Cylons, shortly lethally threaten us, even if they still can’t manage to control machines sufficiently well to align the doors to shut properly on a certain not-so-mass-produced electric vehicle produced by a company owned by one of the more notable alarmists concerned about the dangers of machine intelligence. Plus there’s the little issue of control of the power cord. I digress.

7.  Amusingly, for the World Bank work, the analyst then has to run comparable regression models because that’s apparently the only thing the economists there understand. At the moment.

8. Nor was this the standard protocol for producing a regression model which, gentle reader, I would remind you has the following steps (as Adam Smith pointed out in 1776, for maximal efficiency, assemble a large team of co-authors with specialists doing each task!):

  1. Develop some novel but vaguely plausible “theory”
  2. Assemble a set of 25 or so variables from easily available data sets
  3. Run transformations and subsets of these, ideally using automated scripts to save thought and labor, until one or more combinations emerge where the p-values on your pet variables are ≤0.05. Justify any superfluous variables required to achieve this via collinearity—say, parakeets-per-capita—as “controls.” Bonus points for using some new variant of regression for which the data do not remotely satisfy the assumptions and which mangles the coefficients beyond any hope of credible interpretation. Avoid, at all costs, out-of-sample assessments of any form.
  4. Report this in a standardized social science format 35 ± 5 pages in length (with a 100-page web appendix) starting with an update of the literature review from your dissertation[s], copiously citing your friends and any likely reviewers, and interpreting the coefficients as though they were generated using OLS estimation. Make sure the “Discussion” and “Conclusions” sections essentially duplicate each other and the statistical tables.
  5. Publish in a proprietary journal which will appear in print after a lag of at least three years, firewalled and thus inaccessible to the policy community, but no one will ever look at it anyway. Though previously you will have presented the problem, methodology, and results in approximately 500 seconds (you’re on a five paper panel, of course) at a major conference where your key slide will show 4 variants of the final 16-variable model with the coefficients to 6 decimal places and several p-values reported as “0.000.” The five people in the audience would be unable to read the resulting 3-point type except they are browsing the conference program instead of listening; the discussant asks why you didn’t include four additional controls.
  6. PROFIT!

I jest. I wish.

9. In fact quite a few people have suggested that computers still aren’t being used to their full capacity in corporations because they would render many middle managers irrelevant, and these individuals, unlike Yorkshire handloom weavers, are in a position to resist their own displacement: The Economist had a nice essay to this effect a couple weeks ago.

10. The concept of a systematic foreign policy is, of course, at present quaintly anachronistic in the U.S., where foreign policy, such as it is, is made on the basis of wild whims and fantasies gleaned from a steady if highly selective diet of cable TV, combined with a severe case of dictator-envy and the at least arguable proposition that poutine constitutes a threat to national security. But ever the optimist I can imagine the U.S. returning to a more civilized approach somewhere in the future, just as Rome recovered from both Nero and Caligula. Also as noted, this workshop was in Europe, which has suddenly been incentivized to get serious about foreign policy.

11. This is an important caveat: the data are every bit as important as the methods, and for many remote geographical areas under high conflict risk, we probably still don’t have all the data we need, even though we have a lot more than we once did. But data is hard, and data can be very boring—certainly it’s not going to attract the headlines that a glitzy new game-playing or kitten-identifying machine learning application can attract, and at the moment this field is dependent on a large number of generally underfunded small projects, the long-term Scandinavian commitments to PRIO and the Uppsala UCDP being exceptions. In the U.S., the continued funding of the ICEWS event data is very tenuous and the NSF RIDIR event data funding runs out in February-2018…just saying…

12. Speaking of PLOVER, at yet another little workshop, I was asked about the painfully slow progress towards implementing PLOVER, and it occurred to me that it’s currently trying to cross a technological “valley of death” [14] where PLOVER, properly implemented, would be clearly superior to CAMEO, but CAMEO already exists, and there is abundant CAMEO data (and software for coding it) available for free, and existing models already do a reasonably good job of accommodating the problems of CAMEO. “Free and already available” is a serious advantage if your fundamental interest is the model, not the data: This is precisely why WEIS, despite being proposed as a first-approximation to what would certainly be far better approaches, was used for about 25 years and CAMEO, which wasn’t even intended as a general-purpose coding scheme, is heading towards the two-decade mark, despite well-known issues with both.

13. Though the other thing to watch here is the emerging availability of low-cost, and frequently updated, remote sensing data. The annualized NASA night-light data is already being used increasingly to provide sub-state information with high geographical precision, and new private sector data, as well as new versions of night-lights, are likely to be available at a far greater frequency.

14. Googling this phrase to get a clean citation, I see it has been used to mean about twenty different things, but the one I’m employing here is a common variant.

15. And while I’m on the topic of unsolicited advice to grad students, yet another vital professional skill they don’t teach you in graduate school is flying to Europe and being completely alert the day after you arrive. My formula:

  1. Sleep as much as you can on the overnight flight (sleeping on planes, ideally without alcohol, is another general skill)
  2. Take at most a one hour nap before sunset, and spend most of the rest of the time outside walking;
  3. Live on the East Coast
  4. Don’t change planes (or at least terminals) at Heathrow
Posted in Methodology

Should an event coder be more like a baby?

Last evening, as is my wont, I was reading the current issue of Science [1]—nothing like a long article on, say, the latest findings on mantle convection beneath the Hawai’i hotspot to lull one to sleep—when an article titled “Basic Instincts: Some say AI needs to learn like a child” jolted me into one of those “OMG, this is exactly the issue I’ve been dealing with!” experiences.

That issue: whether there is any future to dictionary-based political event coders. Of late—welcome to my life, such as it is—I’ve been wrestling with whether to invest my time:

  • Writing a new coder based on universal dependency parsing and my mudflat proof-of-concept: seems like low-hanging fruit
  • Adapting an existing universal dependency coder (seems increasingly unlikely for an assortment of reasons)
  • Or just tossing the whole project since everybody—particularly every U.S. government funder—knows that dictionary-based coders are oh-so-1990s and from this point on everything will be done with machine learning (ML) classifiers

This article may tilt the scale back to the first option. At least for me.

The “baby” reference here and in the article comes from the almost irrefutable evidence that humans are born hard-wired to efficiently learn various skills, and probably the most complex of these is language. A normally developing human child picks up language, typically using sound, but sign language is learned with equal facility—and outside the United States, usually multiple languages, and keeps them distinct—at a phenomenal rate. Ask any three-year-old. And try to shut them up. Provide a chimpanzee with exactly the same stimuli—yes, the experiment has been tried, on multiple occasions—and they never achieve remotely similar abilities to that of humans.

However, there’s an attraction to ML classifiers in being, well, rather mindless. [2] But this comes with the [huge] problem of requiring an extraordinary number of labeled training cases, which we simply don’t have for event data, nor does anyone seem inclined to generate them, because that process is expensive and involves the recruitment, management, and, critically, successful retention of a large number of well-trained human coders. [3] Consequently event data coding is in a totally different situation from ML problems where vast numbers of labeled cases are available, typically from the web at little expense.

It’s completely possible, of course, to generate essentially unlimited labelled event data cases from the existing coding systems, and it is certainly conceivable that the magic of neural networks (or your classifier of choice) will produce some wonderful generalization that cannot be obtained from those coders. Or, more likely, will produce one interesting generalization that we will then see repeated endlessly, much like the man-woman-king-queen example for word embeddings. But another possibility is the classifiers will just sloppily approximate what the dictionary-based systems are already doing.

And doing reasonably well because dictionary-based automated event coding has been around for more than a quarter century, and now benefits from a wide range of on-going developments throughout the field of computational natural language processing. As a consequence, those programs start with a lot of “instinct.” Consider just how much comes embedded in a contemporary system:

  • The language model of the parser, which is the result of thousands of hours of experimentation across multiple major NLP research projects across decades
  • In some systems, notably VRA-Reader, PETRARCH-2 and Raytheon/BBN’s ACCENT/Serif, an explicit language model for political events
  • Models of language subcomponents such as dates, locations, and named entities
  • Two decades of human-coded dictionary development from the KEDS and TABARI projects [4]
  • The WordNet synonym sets, again the product of thousands of hours of effort, which have been incorporated into those dictionaries
  • A variety of very large data sets such as rulers.org, CIA World Leaders and Wikipedia for named-entity resolution
  • Extensive idiomatic human translation by native speakers of the Spanish and Arabic dictionaries currently being produced by the NSF RIDIR event data project

Okay, people, I know that your neural networks are cool—like they are really, really cool, fabulously cool, in fact you can’t even begin to tell me how cool they are, even if a four-variable logit model matches their performance out-of-sample—but frankly, I’ve just presented you with a rather extensive list of things that the dictionary-based coders are already starting with but which the ML programs have to learn on their own. [5] 

So in practical terms, for example, the VRA-Reader coder from the 1990s—now lost, alas, because it was proprietary…sad…—provided 128 templates for the possible structure of a sentence describing a political event. JABARI in the early 2010s—now lost, alas, because it was proprietary, and was successfully targeted by a duplicitous competitor…sad…—gained an additional 15% accuracy over TABARI using a set of very specific tweaks dealing with idiosyncratic characteristics of political events (e.g. the fact that the Red Cross rarely if ever engages in armed attacks). A dictionary-based system knows from the beginning that if A meets with B, B met with A, but if A arrests B, B didn’t arrest A. More generally, the failure—in numerous attempts across decades—of generic event “triple” coding systems to compete in this space is almost certainly due to the fact that domain-specific information provides a very significant boost to performance.
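As a toy illustration of that last point (this is not TABARI or PETRARCH code, and the codes are purely illustrative), here is roughly what that kind of baked-in domain knowledge looks like in a pattern dictionary:

```python
# Toy illustration (not TABARI/PETRARCH code) of domain knowledge a
# dictionary-based coder starts with: some verb patterns generate symmetric
# events, others are strictly directed.
VERB_PATTERNS = {
    "met with": {"code": "040", "symmetric": True},   # consult: A <-> B
    "arrested": {"code": "173", "symmetric": False},  # coerce:  A -> B only
}

def code_event(source, verb, target):
    entry = VERB_PATTERNS.get(verb)
    if entry is None:
        return []
    events = [(source, entry["code"], target)]
    if entry["symmetric"]:
        events.append((target, entry["code"], source))
    return events

print(code_event("FRA", "met with", "DEU"))   # two records, one in each direction
print(code_event("EGY", "arrested", "JOUR"))  # a single directed record
```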

Furthermore, the environment in which we are deploying dictionary-based coding programs is becoming increasingly friendly: In the 1990s KEDS and VRA-Reader only had the texts and small dictionaries to work with, and had to do this on very limited hardware. Contemporary systems, in contrast, have access to sophisticated parsers and huge dictionaries with hardware easily able to accommodate both. Continuing the childhood metaphor, this is the difference between riding a tricycle and riding a 20-speed bicycle. With an electric assist.

I don’t expect this simple metaphor to be the last word on the subject and I may, in the end, decide that classifiers are going to rule us all (and in any case, that seems to be pretty much where all of the funding is going at the moment anyway but if that’s the case, please, can’t someone, somewhere fund an open set of gold standard records??). But I’m also beginning to think dictionary-based approaches—or more probably, a hybrid of dictionary and classifier approaches—are more than an anachronistic “damn those neural nets: young whippersnappers don’t appreciate what it was like hacking into Nexis from the law school library account via an acoustical modem for weeks every morning from 2 a.m. to 5 a.m. [6]…get off my lawn”; rather, given the remarkable resources we can now deploy on the problem, dictionary-based coding represents a hugely more efficient approach than learning by example.

Time (and experimentation) will tell.

Footnotes

1. Okay, so it was actually last week’s issue: I wait for the paper version to arrive and hope it doesn’t get too soaked in the mailbox. The Economist I read electronically as soon as it is available.

2. The article quotes in passing Oregon State CS professor Thomas Dietterich that “[academic] computer scientists…have an aversion to debugging complex code.” Yeah, tell me about it…followed closely by their aversion to following quality control practices that have been common in industry since the 1990s. I digress.

3. The relatively new prodigy software is certainly a far more efficient approach to doing this than many earlier alternatives—I’ve also written a simple low-footprint variant of its annotation functions here—but human annotation remains vastly more labor intensive than, say, downloading millions of labeled images of cats and dogs.

4. Which, I’ve got pretty good empirical evidence, still provide most of the verb patterns for all of the CAMEO-based coding systems…figuring out the verb patterns used to generate any data where you know both the codings and the URL of the source text is relatively straightforward, at least for the frequent patterns.

5. The other fabulously cool recent application of deep learning, the ability to play Go at levels beyond that of the best human expert, depended on a closed environment with fixed rules: event data coding is nothing like this.

6. Not a joke: this is the KEDS project ca 1990.

Posted in Methodology, Programming

Entropy, Data Generating Processes and Event Data

Or more precisely, the Santa Fe Institute, Erin Simpson, and, well, event data. With a bit of evolutionary seasoning from Robert Wright, who is my current walking-commute listening.

Before we get going, let me make completely clear that there are perhaps ten people—if that—on this entire planet who will gain anything from reading this—particularly the footnotes, and particularly footnote 12—and probably only half of them will, and as for everyone else: this isn’t the blog you are looking for, you can go about your business, move along. Really: this isn’t even inside baseball, it’s inside curling.[1] TL;DR!

This blog is inspired by a series of synchronistic events over the past few days, during which I spent an appallingly large period—if during inclement weather—going through about 4,000 randomly-selected sentences to ascertain the degree to which some computer programs could accurately classify these into one of about 20 categories. Yes, welcome to my life.

The first 3,000 of these were from a corpus of Reuters lede sentences from 1979-2015 which are one of the sources for the Levant event data set which I’ve been maintaining for over a quarter-century. While the programs—both experimental—were hardly flawless, overall they were producing, as their multiple predecessors had produced, credible results, and even the classification errors were generally for sentences that contained political content.

I then switched to another more recent news corpus from 2017 which, while not ICEWS, was not dissimilar to ICEWS in the sense of encompassing a massive number of sources, essentially a data dump of the world’s English-language press. This resulted in a near total meltdown of the coding system, with most of the sentences themselves, never mind the codings, bordering on nonsense so far as meaningful political content was concerned. O…M…F…G… But, here and there, a nice clean little coding poked its little event datum head out of the detritus, as if to say “Hey, look, we’re still here!”, rather as seedlings sprout with the first rainfall in the wake of a forest fire.

So what gives? Synchronicity being what it is, up pops a link to Erin Simpson’s Monktoberfest talk where Dr. Simpson, as ever, pounds away at the importance of understanding the data generating process (DGP) before one blithely dives into any swamp presenting itself as “big data.” Particularly when that data is in the national security realm. Having wallowed in the coding of 4,000 randomly sequenced news ledes, particularly the last largely incoherent 1,000, I had an immediate AHA! on watching her presentation: the difference I observed is accounted for, almost totally, by the fact that international news sources such as Reuters [2] have an almost completely different DGP than that of the local sources.

Specifically, the international sources, since the advent of modern international journalism around the middle of the 19th century [3], have fundamentally served one purpose: providing people who have considerable money with information that they can use to get even more money. Yes, there are some positive externalities attached to this in terms of culture and literature, but reviews of how a Puccini opera was received at La Scala didn’t pay the bills: information valuable in predicting future events did.

This objective conveniently aligns the DGP of the wire services quite precisely with the interests of [the half-dozen or so…] consumers of political event data, since in the applied realm event data is used almost exclusively for forecasting.

Now, turning our attention to local papers in the waning years of the second decade of the 21st century: O…M…F…G: these can be—and almost certainly are—pretty much absolutely anything except—via the process of competitive economic exclusion—international wire services.[4] In the industrialized world, start with the massive economic changes that the internet has wrought on their old business models, leading to two decades of nearly continuous decline in staffing and bureaus. In the industrializing world, the curve goes the other way, with the internet (and cell phone networks) enabling access and outreach never before possible. Which can be a good thing—more on this below—but is not necessarily a good thing, as there is no guarantee of either the focus or, most certainly, the stability of these sources. The core point, however, is that the DGP of local sources is fundamentally different than the DGP of international sources.[5]

So, different DGPs, yeah, got that, in fact I had you at “Erin Simpson”, right? But what’s with the “entropy” stuff?

Well, I’m now really going to go weird—well, SFI/complexity theory weird, which is, well, pretty weird—on you here, so again, you’ve been warned, and thus you probably just want to break off here, and go read something about chief executives and dementia or somesuch. But if you are going to continue…

Last summer there was an article which despite including the phrase “Theory of reality” in the title—this is generally a signal to dive for the ditches—got me thinking—and by the way, I am probably about to utterly and completely distort the intent of the authors, who are likely a whole lot smarter than me even if they don’t realize that one should never, ever, under any circumstances put the phrase “theory of reality” on anything other than a Valentine’s Day candy heart or inside a fortune cookie…I digress…—on their concept of “effective information”:

With [Larissa] Albantakis and [Giulio] Tononi (both neuroscientists at Univ of Wisconsin-Madison), [Erik] Hoel (Columbia neuroscience)[6] formalized a measure of causal power called “effective information,” which indicates how effectively a particular state influences the future state of a system. … The researchers showed that in simple models of neural networks, the amount of effective information increases as you coarse-grain over the neurons in the network—that is, treat groups of them as single units. The possible states of these interlinked units form a causal structure, where transitions between states can be mathematically modeled using so-called Markov chains.[7] At a certain macroscopic scale, effective information peaks: This is the scale at which states of the system have the most causal power, predicting future states in the most reliable, effective manner. Coarse-grain further, and you start to lose important details about the system’s causal structure. Tononi and colleagues hypothesize that the scale of peak causation should correspond, in the brain, to the scale of conscious decisions; based on brain imaging studies, Albantakis guesses that this might happen at the scale of neuronal microcolumns, which consist of around 100 neurons.

With this block quote, we are moving from OMFG to your more basic WTF: why oh why would one have the slightest interest in this, or what oh what does this possibly have to do with event data???

Author pauses to take a long drink of Santa Fe Institute Kool-Aid…

The argument just presented is at the neural level but hey, self-organization is self-organization, right? Let’s just bump things up to the organizational level and suddenly we have answers to two (or three) puzzles:

  • why is the content of wire service news reports relatively stable across decades?
  • why can that information be used by both [some] humans and [most] machines to predict political events at a consistently high level?
  • why are reductionist approaches on modeling organizational behavior doomed to fail? [8]

My version of the Hoel-Albantakis-Tononi hypothesis is that there is a point in organizational structure where organizations, assuming they are under selective pressure, will settle on a scale (and mechanisms) which maximizes—or at least goes to some local maximum on an evolutionary landscape—the tradeoff between the utility of predictive power and the cost of information required to maintain that level of predictability. While the sorts of political organizations which are the focus of event data forecasts have costly private information, the international media provide the inexpensive shared public information that sustains this system. In particular, we can readily demonstrate through a variety of statistical and/or machine-learning models (or human “super-forecasters”) that this public information alone is sufficient to predict most of these political behaviors, typically to 80% to 85% accuracy. [9] Information to get the remaining 15% to 20%, however, is not going to be found in local sources (with one exception noted below) and, as I’ve argued elsewhere (for years…) most of the remaining random error is irreducible due to a set of about eight fundamental sources of uncertainty that will be found in any human (or, presumably, non-human) political system.[10][30]
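For the morbidly curious, here is my own toy numerical rendering of the neural-level "effective information" idea, almost certainly cruder than the Hoel-Albantakis-Tononi formulation: EI as the mutual information between the current and next state of a Markov chain when the current state is intervened on uniformly at random. The transition matrices are invented purely for the illustration.

```python
# Toy rendering of "effective information" (EI): mutual information between
# states at t and t+1 when the state at t is intervened on uniformly. Almost
# certainly cruder than the original formulation; matrices are invented.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def effective_information(T):
    """T[i, j] = P(next state = j | current state = i)."""
    avg_effect = T.mean(axis=0)  # effect distribution under a uniform intervention
    return entropy(avg_effect) - np.mean([entropy(row) for row in T])

# four micro-states: three that wander noisily among themselves, one absorbing
micro = np.array([[1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])

# coarse-grain: lump the first three states into a single macro-state
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(effective_information(micro))  # ~0.81 bits
print(effective_information(macro))  # 1.0 bit: higher at the macro scale
```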

In order to survive, organizations must be able to predict the consequences of their actions and the future state of the world with sufficient accuracy that they can take actions in the present that will have implications for their well-being far into the future: the feed-forward problem: Check. [11] You need information of some sort to do this: Check. Information is costly and information-collection detracts from other tasks required for the maintenance and perpetuation of the organization: Check.[12] Therefore, in the presence of noise and systems which are open and stochastic, a point is reached—and probably with a lot less information than we think we need [13]—where information at a disaggregated scale is more expensive than the benefits it provides for forecasting. QED. [14]

Take ICEWS. Please… The DARPA-sponsored research phase, not the operational version, such as it is. Consider the [in]famous ICEWS kick-off meeting in the fall of 2007, where the training data were breathlessly unveiled along with the fabulously difficult evaluation metrics for the prediction problems, vetted by no less than His Very Stable Genius Tony Tether.[15] Every social scientist in the room skipped the afternoon booze-up with the prime contractor staff, went back to their hotel rooms with their laptops and by, say, about 7 p.m. had auto-regressive models far exceeding the accuracy of the evaluation metrics. Can we just have the money and go home now? The subsequent history of the predictive modeling in ICEWS—leaving aside the fact that the Political Instability Task Force (PITF) modeling groups had already solved essentially the same problems five years earlier—was one of the social scientists finding straightforward models which passed the metrics, Tether and his minions (abetted, of course, by the prime contractors, who did not want the problems solved: there is no funding for solved problems) imposing additional and ever-more ludicrous constraints, and then new models developed which got around even those constraints.

But this only worked for the [perfectly plausible] ICEWS “events-of-interest,” which were at a very predictable nation-year scale. The ICEWS approach could be (and in the PITF research, to some degree has been) scaled downwards quite a bit, probably to geographical scales on the order of a typical province and temporal scales on the order of a month, but there is a point where the models will not scale down any further and, critically, adding the additional information available in local news sources will not reliably improve forecasting once the geo-temporal scale is below the level where organizations have optimized the Hoel-Albantakis-Tononi effective information. Below the Hoel-Albantakis-Tononi limit, more is not better: more is just noise, because organizations aren’t paying attention to this information and consequently it is having no effect on their future behaviors.

And in the case of event data, a particular type of noise. [Again, I warned you, we’re doing inside curling here.] There are basically three approaches to generating event data:

  • Codebook-based (used by human coders, and thus irrelevant to real-time coding)
  • Pattern-based (used by the various dictionary-based coding programs that are currently responsible for all of the existing data sets)
  • Example-based (used by the various machine-learning systems currently under development, though none, to my knowledge, currently produce operational data)

While at present there is great furor—mind you, among not a whole lot of frogs in not a particularly large pond—as to whether the pattern-based or example-based approaches will prove more effective [16], this turns out to be irrelevant to this issue of noise: Both the pattern-based and example-based systems, alas, are subject to the same weakness in generating noise [17] as each generates false positives when they encounter something in a politically-irrelevant news context that sufficiently resembles something from the politically-relevant cases they were originally developed to code that it triggers the production of an event. As more and more local data—which is almost but not quite always irrelevant—is thrown into the system, the false positive rate soars.[18][19]
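A toy example of the mechanism, with an invented pattern: a verb pattern written with political violence in mind will happily fire on sports and business prose once the incoming stream is an undifferentiated dump of local coverage.

```python
# Toy illustration of the false-positive mechanism: a naive "attack" pattern
# written for political violence fires on sports and business sentences too.
import re

ATTACK_PATTERN = re.compile(r"\battack(?:ed|s|ing)?\b", re.IGNORECASE)

sentences = [
    "Government forces attacked rebel positions near the border.",      # relevant
    "United attacked down the left wing for most of the second half.",  # football
    "Analysts attacked the firm's quarterly guidance as unrealistic.",  # business
]
for s in sentences:
    if ATTACK_PATTERN.search(s):
        print("coded as an armed-attack event:", s)
```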

CAVEAT: Yes, as I keep promising, there is a key caveat here: For a variety of reasons, most importantly institutional lag, language differences, and areas with a high risk and low interest (for example Darfur, South Sudan, or Mexican and Central American gang violence), the coverage of the international news sources is not uniform, and there are some places where local coverage can fill in gaps. Two places where I think this has been (or will be once the non-English coding systems come on-line) quite important are the “cell phone journalism” coverage of violence in Nigeria and southern Somalia, and Spanish and Portuguese language coverage in Latin America.[20] But by far, the vast bulk of the local sources currently used to generate event data do not have this characteristic.

Whew…so you’ve made it this far, what are the practical implications of this screed? I see five:

First, the contemporary mega-source data sets are a combination of two populations with radically different DGPs: the “thick head” of international sources, most of which are coded tolerably well by the techniques which, by and large, were originally developed for international sources, and the “thin tail” of local sources, which are generally neither coded particularly well, nor particularly relevant even when coded correctly.[21]

Second, as noted earlier, in event data, more is not necessarily better. “More” may be relatively harmless—well, for the consumers of the data; it remains at least somewhat costly to the producers [22]—when the models involve just central tendency (the Central Limit Theorem is, as ever, our friend) and the false positives are pretty much randomly distributed.[23] Models sensitive to measurement error, heterogeneous samples, and variance in the error terms—for example most regression-based approaches—are likely to experience problems.
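For the simulation-minded, here is a minimal sketch of that point, with all distributions and parameters invented: random false positives shift the mean by a stable, predictable amount, but they attenuate a regression slope in the classic errors-in-variables fashion:

```python
# Minimal simulation (all distributions and parameters invented) of why random
# false positives are fairly harmless for central tendency but bias
# regression-based estimates: classic attenuation from measurement error.
import numpy as np

rng = np.random.default_rng(42)
n = 5000
true_events = rng.poisson(10, n)                # "real" monthly event counts
outcome = 2.0 * true_events + rng.normal(0, 5, n)

noise = rng.poisson(6, n)                       # random false positives
observed = true_events + noise                  # contaminated counts

# Central tendency: the contaminated mean shifts by a stable, known constant
print(true_events.mean(), observed.mean())

# Regression: the slope on contaminated counts is attenuated toward zero
print(np.polyfit(true_events, outcome, 1)[0])   # ~2.0
print(np.polyfit(observed, outcome, 1)[0])      # noticeably smaller
```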

Third—sorry DARPA [24]—naive reductionism is not the answer, nor is blindly throwing more machine cycles and mindless data-dumps at a problem. Any problem.[25] Figuring out the scale and the content of the effective information is important [26], and this requires substantive knowledge. Some aspects of the effective information problem are what political and organizational theories have been dealing with for at least a century. Might think about taking a look at that sort of thing, eh? Trust in the Data Fairy alone has, once again, proven woefully misplaced.

Fourth, keep in mind my CAVEAT! above: it is not the case that all local data are useless. But because the DGPs differ so greatly between contemporary local and international sources, it is very likely that separate customized coding protocols will be needed for these: at the very least well-tested filters to eliminate irrelevant texts, and in many cases customized patterns/training cases. That said, the effective information scale can vary by process, and if, for example, one is focused on a localized conflict (say Boko Haram or al-Shabab), some of those sources could be quite useful, again possibly with customization. But the vast bulk of local sources are just generating noise.[28]

Finally, don’t listen to me: experiment! Most of the issues I’ve raised here can be readily tested in your approach of choice using existing event sequences: for your problem of choice, go out and actively test the extent to which the exclusion of local sources (or specific local sources) does or does not affect your results. And please publish these in some venue with a lag time of less than five years! [27]
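And since I am exhorting you to experiment, here is a stub of what that source-ablation test might look like; the file name, column names, and the set of “international” sources are placeholders for whatever your data actually contain:

```python
# Hedged sketch of a source-ablation experiment: re-run the downstream
# statistic of interest with and without the local sources. Everything named
# here is a placeholder.
import pandas as pd

events = pd.read_csv("events.csv")     # hypothetical event-data file
INTERNATIONAL = {"Reuters", "Xinhua", "AFP", "BBC Monitoring", "AP"}

intl_only = events[events["source"].isin(INTERNATIONAL)]

def monthly_counts(df):
    """Event counts aggregated to the month."""
    months = pd.to_datetime(df["date"]).dt.to_period("M")
    return df.groupby(months).size()

# How much does dropping the thin tail of local sources change the series?
print(monthly_counts(events).corr(monthly_counts(intl_only)))
```

Substitute whatever model or test you actually care about for the correlation of monthly counts; the point is simply that the comparison is cheap to run.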

Footnotes

1. As with everything in this blog, these opinions are mine and mine alone and no one I have ever worked for or with directly or indirectly anytime now or in the past or future bears any responsibility for them. And that includes Mrs. Chapman whose lawn I mowed when I was in seventh grade.

2.  And the other major news wires such as Xinhua, Agence France Press, assorted BBC monitoring services and the Associated Press, but that’s pretty much the list.

3. This largely coincided with the proliferation of telegraph connections, though older precedents existed in the age of sail once a sufficiently independent business class—and weakening of state control of communications—existed to sustain it. 

4. Just two or three decades ago, “newspapers of record” such as the Times of London and the New York Times served much the same role that the international wire services do today by focusing on international coverage for a political elite, using their vast networks of foreign correspondents, proverbially gin- and/or -whiskey-soaked Graham Greene wannabees hanging out in the bars of cheap hotels convenient to the Ministry of Information of—hey, Trump’s making me do this, I can’t miss my one chance to use this word!—shithole countries. Or colonies. Those days are long gone, though for this same reason historical time series data based on the NYT such as that produced by the Cline Center may be quite useful.

5. In contrast, your typical breathlessly hyped commercial big data project involves data generated by a relatively uniform DGP: people looking at, and then eventually buying [or not] products on Amazon are, well, all looking at, and then eventually making a decision about, products on Amazon, and they are also mostly people (Amazon presumably having learned to filter out the price-comparison bots and deal with them separately). Actually, except for the human-bot distinction, it is hard to think of a comparable case in data science where the data generators are as divergent as a Reuters editor and the reporters for a small city newspaper in Sumatra. Unless it is the difference between the DGP of news media, even local, and the DGP of social media…

6. Affiliations provided to indicate that the authors’ qualifications go beyond “Part-time cannabis sales rep living in parents’ basement in Ft. Collins and in top 20 percentile of Star Wars: Battlefront II players.” Which would be the credentials of the typical author of a discourse on “theory of reality.”

7. Otherwise known as “Markov chains.”

8. This last is a puzzle only if you’ve had to sit through multiple agonizingly stupid meetings over the years with people who believe otherwise.

9. Thanks to the work of Tetlock and his colleagues, the breadth of this accuracy across a wide variety of political domains is far more systematically established for human superforecasters than it is for statistical and machine-learning methods, but I’m fairly confident this will generalize.

10. Dunbar’s number is probably another example of this in social groups generally. I’m currently involved in a voluntary organization whose size historically was well below Dunbar’s number and consequently was run quite informally, but is now beginning to push up against it. The effects are, well, interesting.

11. For an extended discourse on counter-examples, see this. Which for reasons I do not understand is the most consistently viewed posting on this blog.

12. Descending first into the footnotes, then into three interesting additional rather deep rabbit holes on this:

  1. I’m phrasing this in terms of “organizations,” which have institutionalized information-processing structures. But certainly we are also interested in mass mobilization, where information processing exists but is much more diffuse (and in particular, is almost certainly influenced more by networks than formal rules, though some of those networks, e.g. in families, are sufficiently common that they might as well be rules). I think the core argument for a prediction-maximizing scale is still relevant—in mass mobilization in authoritarian systems the selection pressures are very intense but even non-authoritarian mobilizations have the potential costs of collective action problems—but they are likely to be different than the scale for organizations, as well as differing with the scale of the action. That said, the incentives for the international wire services to collect this information remain the same, and the combination of [frequent] anonymity and correspondents not being dependent on local elites for a paycheck (the extent to which this is true varies widely and has certainly changed in recent years with the introduction of the internet) may result in these international sources being considerably more accurate than local sources. Local media sources which are under the control of local political and/or economic elites may actually be at their least informative when the conditions in an area are ripe for mass mobilization. [29]
  2. An interesting corollary is that liberal democracies have an advantage in being generally robust against the manipulation of information—the rise of fascist groups in the inter-war period in the 20th century and, possibly, recent Russian manipulation of European and US elections through social media are possible exceptions—and consequently they don’t incur the costs of controlling information. This is in contrast to most authoritarian regimes, and specifically the rather major example of China, which spends a great deal of effort doing this, presumably due to a [quite probably well-founded] fear by its elites that the system is not robust against uncontrolled information. Even if the Chinese authorities can economically afford this control—heck, they can afford the bullet trains the US seems totally incapable of creating—this suggests a brittleness to the system which is not trivial. Particularly in light of a rapidly changing information environment. Much the same can probably be said of Saudi Arabia and Russia.
  3. A really deep rabbit hole here, but given the fact that the support for the international news media is very diffuse, what we probably see here is essentially a generally stable [Nash? product-possibility-frontier? co-evolution landscape?] equilibrium between information producers and consumers where the costs and benefits of supply and demand end up with a situation where the organizations (both public and private) can work fairly well with the available information and the producers have learned to focus on providing information that will be useful. From the perspective of political event data, for example, the changes between the WEIS and COPDAB ontologies from the 1960s and the 2016 PLOVER ontology—all focusing on activities relevant to forecasting political conflict—are relatively minor compared to total scope of human behaviors: political event ontologies have consistently used on the order of 10¹ primary categories, whereas ontologies covering all human behaviors tend to have on the order of 10². Furthermore, except for the introduction of idiomatic expressions like “ethnic cleansing” and “IED”, vocabulary developed for articles from the 1980s still works well for articles in the 2010s (and works vastly better than trying to cross levels of scale from international to local sources). Organizations, particularly those associated with large nation states, will of course have information beyond those public sources—this is the whole point of intelligence collection—but opinions vary widely—wow, do they ever vary widely!—as to the utility of such information at the “policy relevant forecasting interval” of 6 to 24 months.  Meanwhile, given its level of decentralization, the system that has depended on this information ecosystem is phenomenally stable compared to any other period in human history.

13. In the early “AI” work on human-crafted “expert systems” in the 1980s, “knowledge engineers”—a job title ensuring generous compensation at the time, rather like “data scientist” today—generally found that if an expert said they needed some information that couldn’t be objectively measured, but they knew it by “intuitive feelings” or something equivalent, when the models were finally constructed and tested, it turned out these variables were not needed: the required classification information was contained in variables that could be measured. The positive interpretation of this is that the sub-cognitive “intuition” was in fact integrating these other factors through a process similar to, well, maybe neural networks? Ya-think? The negative interpretation is that the individuals were trying to preserve their jobs.

14. With a bit more work, one could probably align this—and the Hoel-Albantakis-Tononi approach more generally—with Hayek’s conceptualization of markets as information processing systems, and certainly the organizational approach is consistent with Hayek’s critique of central planning. Even if Hayek is probably the second-most ill-used thinker in contemporary so-called conservative discourse, after Machiavelli (and followed by Madison). Seriously. I digress.

15. Tether, of course, was the model for Snoke in The Last Jedi, presiding over DARPA seated on a throne of skulls in a cavernous torch-lit room surrounded by black-robed guandao-armed guards. His successor at DARPA, of course, was the model for Miranda Priestly in The Devil Wears Prada.

16. The answer, needless to say, is that hybrid approaches using both will be best. Part of the reason I annotated 4000 sentences over the weekend and am planning to do a lot more on a couple of upcoming transoceanic flights.

17. This is similar to the argument that all machine-learning systems are effectively using the same technique—partitioning very high dimensional spaces—and therefore allowing for similar levels of complexity will have similar levels of accuracy, particularly out of sample.

18. Even worse: the coding of direct quotations, where the DGP varies not just with the reporter but with the speaker. These are the perfect storm for computer-assisted self-deception: as social animals, we have evolved to consider direct quotations to be the single most important source of information we can possibly have, and thus our primate brains are absolutely screaming: “Code the quotations! Code the comments! Code the affect! General Inquirer could do this in the 1960s using punch cards, why aren’t you coding comments???”

But our primate brains evolved to interpret quotations deeply embedded in a social context that, during most of evolution, usually involved individuals in a small group with whom we’d spent our entire lives. A rather different situation than trying to interpret quotations first non-randomly selected, then often as not translated, out of a barely-known set of circumstances—possibly including “paraphrased or simply made up”—that were spoken by an unknown individual operating in a culture and context we may understand only vaguely. And that’s before we get to the issues of machine coding. “Friends don’t let friends code quotations.”

For this reason, by the way, PLOVER  has eliminated the comment-based categories found in CAMEO, COPDAB, and WEIS.

19. Okay, it’s a bit more complicated: the false positive rate is obviously going to depend on the tradeoff a given coding system has made between precision and recall, and a system that was really optimized for precision could probably avoid this issue, or at least dramatically reduce it. But none of the systems I’m familiar with have done that.

20. Raising another split in the event data community, whether machine-translation combined with English-language coders will be sufficient or whether native-language coders are needed. Fortunately, once the appropriate multiple-language coding programs are available, this can be resolved empirically.

21. See this table which was generated from a sample of ICEWS sources from 2016. Of the sources which could be determined, 37.5% are from 10 international sources, and fully 27.2% from just four sources: Xinhua, Reuters, AFP and BBC. Incongruously, three Indian sources account for another 15.2%, then past this “thick head” we go to a “thin tail” of 372 sources accounting for the remaining 47.3% of the known sources (with an additional 25.8% of the total cases being unidentified: these could either be obscure local sources or garbled headers on the international sources, which occurs more frequently than one might expect).

22. Rather like bitcoin mining. I actually checked into the possibility that one could use bitcoin mining computers—which I suspect at some point will be flooding the market in vast quantities—to do, well, something, anything that could enhance the production or use of event data. Nope: they are specialized and optimized to such a degree that “door stop” and “boat anchor” seem to be about the only options.

23. They may or may not be: with a sufficiently representative set of gold standard records—which more than fifty years into event data coding we still don’t have—this becomes an empirical question. My own guess is that they are randomly distributed to a larger degree than one might expect, at least for pattern-based coders.

24. Yeah, right… “I feel a great disturbance in the Force, as if millions of agent-based-models suddenly cried out in terror and were suddenly silenced, their machine cycles repurposed to mining bitcoin. I fear something, well, really great, has happened. Except we all know it won’t.”

25. Sharon Weinberger’s Imagineers of War (2017)—coming soon to the Virginia Festival of the Book!—is a pretty sobering assessment of DARPA’s listless intellectual drift in the post-Cold War period, particularly in dealing with prediction problems and anything involving human behavior. Though its record on that front during the Vietnam War also left a bit to be desired. Also see this advice from the developer of spaCy.

26. “Data mining” may identify useful surrogate indicators that provide substitutes for other more complex and less-accessible data: PITF’s discovery of the robustness of infant mortality rate as a surrogate for economic development and state capacity is probably the best example of this. These, however, are quite rare, and tend to be structural rather than dynamic. The fate of the supposed correlation between Google searches for flu symptoms and subsequent flu outbreaks (and a zillion other post-hoc correlations discovered via data mining) is a useful cautionary tale. Not to mention the number of such correlations that appear to be urban legends.

27. I attempted to get a reference from the American Political Science Review for a colleague a couple days ago and found that their web site doesn’t even allow one to access the table of contents without paying for a membership! Cut these suckers off: NSF (and tenure committees) should not allow any references or publications that are not open access. Jeff Flake is rapidly becoming my hero. And I hope Francis Bacon is haunting their dreams in revenge for their subversion of the concept of “science.”

Addendum: Shortly after writing this I was at a fairly high level meeting in Europe discussing the prospects for developing yet another quantitative conflict early warning system, and got into an extended discussion with a couple of quite intelligent, technically knowledgeable and diligent fellows who, alas, had been trying to learn about the state of the art of political science applications of machine-learning techniques by reading “major” political science journals. And were generally appalled at what they found: an almost uninterrupted series of thoroughly dumbed-down—recalling Dave Barry’s observation that every addition you make to a group of ten-year-old boys drops their effective IQ by ten points, that’s the effect of peer review these days—logistic regressions with 20 highly correlated “controls,” and even these articles only available—well, paywalled—five years after the original research was done. So I tried to explain that there is plenty of state-of-the-art work going on in political science, but it’s mostly at the smaller specialized conferences like Political Methodology and Text-As-Data, though some of it will make it into a very small number of methodologically sophisticated journals like Political Analysis and Political Science Research and Methods. But if you are trying to impress people who are genuinely sympathetic to quantitative methods using the contents of the “major” journals, you’ll find that’s equivalent to demonstrating you can snare rats and cook them on a stick over an open fire, and using this as evidence that you should work in a kitchen that carries a Michelin star.

Alas, from the perspective of the typical department chair/head, dean, associate dean, assistant dean, deanlet and deanling, I suppose there is a certain rationale to encouraging this sort of thing, as it makes your faculty far less attractive for alternative employment, and maybe the explanation for the persistence of this problem is no more complicated than that. Though the 35% placement rate in political science may be an unfortunate side-effect. If it is indeed a side-effect. Another “side effect” may be the precipitous decline in public support for research universities.

Again, whoever is in charge of this circus needs to stop supporting the existing journal system, insist on publications which have roughly contemporaneous and open access—if people want to demonstrate their incompetence, let them do so where all can see, and the sooner the better—and let the folks currently trying to run journals get back to their core competency, managing urban real estate.

28. Also, as has been noted from the dawn of event data, “local” sources are incorporated into the international news services, as these depend heavily on local “stringers,” often among the most well-connected and politically savvy individuals in their region, and not necessarily full-time journalists. I was once in a conversation with the husband of a visiting academic from Morocco and asked what he thought about The Economist‘s coverage of that country. He gave me a quizzical look and then said “You’ll have to ask someone else: I am The Economist‘s correspondent for Morocco.”

29. In rare circumstances, this can be a signal: in 1979 the Soviet media source Pravda went suddenly quiet on the topic of Afghanistan a week or so before the invasion after having covered the country with increasing alarm for several months. This sort of thing, however, requires both stupidity and very tight editorial control, and I doubt that it is a common occurrence. At least the tight editorial control part.

30. Addendum (which probably deserves expansion into its own essay at some point) There’s an interesting confirmation of this from an [almost] entirely different domain in an article in Science (359:6373 19 Jan 2018, pg. 263; the original research is reported in Science Advances 10.1126/sciadv.aao5580 (2018)) which found that similar results on predicting criminal recidivism could be obtained from

  • A proprietary 137-variable black-box system costing $22,000 a year
  • Humans recruited from Mechanical Turk and provided with 7 variables
  • A two-variable regression model

It turns out that for this problem, there is a widely-recognized “speed limit” on accuracy of around 70%—the various methods in this specific study are a bit below that, particularly the non-expert humans—and, as with conflict forecasting, multiple methods can achieve this.

On reading this, I realize that there is effectively a “PITF predictive modeling approach” which evolved over the quarter-century of that organization’s existence (a rough sketch of the workflow, on synthetic data, follows the list):

  • Accumulate a large number of variables and exhaustively explore combinations of these using a variety of statistical and machine-learning approaches: this establishes the out-of-sample “speed limit”
  • The “speed limit” should be similar to the accuracy of human “super-forecasters”
  • Construct operational models with “speed limit” performance using very simple sets of variables—typically fewer than five—using the most robustly measured of the relevant independent variables
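Here is that sketch: a synthetic stand-in, assuming scikit-learn, for the two-step workflow of finding the speed limit with a kitchen-sink model and then matching it with a very simple one. Nothing below is PITF data or PITF code.

```python
# Sketch of the two-step workflow: (1) establish an out-of-sample "speed
# limit" with a kitchen-sink model over many candidate variables, (2) match
# it with a very simple model using the few variables that actually matter.
# Data and parameters are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 40))                      # 40 candidate indicators
signal = X[:, 0] - 0.8 * X[:, 1]                  # only two actually matter
y = signal + rng.logistic(scale=1.5, size=n) > 0  # irreducible noise => ceiling

# Step 1: kitchen-sink model over all 40 variables establishes the ceiling
ceiling = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0), X, y, cv=5).mean()

# Step 2: an "operational" model using just the two robust variables
simple = cross_val_score(LogisticRegression(), X[:, :2], y, cv=5).mean()

print(f"kitchen-sink accuracy: {ceiling:.2f}")
print(f"two-variable accuracy: {simple:.2f}")     # typically within a point or two
```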

This is, of course, quite a different approach than the modeling gigantism practiced by the organization-that-shall-not-be-named under the guidance of the clueless charlatans who have quite consistently been directing it down fruitless blind alleys—sort of a reverse Daniel Boone—for decades. Leaving aside the apparent principle that these folks don’t want to see the problems solved—there are no further contracting funds for a solved problem—I believe there are two take-aways here:

  • Anyone who is promising arbitrarily high levels of accuracy is either a fool or is getting ready to skin you. If government funding is involved, almost certainly the latter. There are “speed limits” to predictive accuracy in every open complex system.
  • Anyone who is trying to sell a black-boxed predictive model based on its complexity and data requirements is also either a fool or is getting ready to skin you: everything in our experience shows that simple models are the most robust models over the long term.

Violence in Charlottesville and what we might gain from the Heather Heyers of this world

As you’ve probably been aware, things have been rather, well, difficult in these parts over the past few days. Living in State College I quickly learned that when you find your town on the front page of the New York Times and Washington Post things are probably not going particularly well, and here in Charlottesville I’ve learned that if you get that attention plus the lead article on The Economist Espresso app, things are really not going well.

As my initial grist for this entry, I’d written a good 5500 words meticulously employing appropriate theories of collective action and the usual obtuse historical analogies—those to whom I owe a couple of reports, sorry—but with the utterly mind-boggling levels of craziness pouring out of the White House, the subtlety of any detailed analysis would be lost to the winds, to say nothing of potentially being misinterpreted. And on further reflection, this is not a time for analysis, but neither is it a time for silence, and consequently I’m just going to go with my gut.

A few hours ago I attended, along with about a thousand other people, the public funeral for Heather Heyer, the woman killed on Saturday in an act of domestic terrorism. And yes, terrorism is precisely what it was, by every known definition of the term. Coming out of that event, I can say with certainty that Heather is not someone who died because she was in the wrong place at the wrong time. No, Heather Heyer was murdered because she was exactly where she wanted to be, as a witness to the causes of justice, tolerance and equality to which she had been fiercely committed her entire life.

But that funeral brought home another message that I think is even more important, and too easily missed. That strength and commitment were shown not just by Heather but by the entire network which took to the microphone to speak in her memory: her father, pastor, friends, boss and, dramatically, her mother Susan Bro who transcended the pain and grief of losing her only daughter to make a powerful and impassioned statement for the values Heather had lived by.

And who were these people? Heather had only a high school education. She was raised by her single mom and her grandparents. The accents we heard were the soft tones of rural Virginia and the powerful cadences of African-American churches, not the carefully refined language of Ivy League eating clubs or the arrogant bombast of TED-X talks. Her African-American boss managed a bankruptcy firm and had hired Heather when she was a waitress, telling her that she had smarts and a work ethic, and he could teach her what she needed to know to become a paralegal. Smarts, a work ethic, and deep reserves of empathy, as he related a story of watching Heather gently work with a dual-career couple with multiple advanced degrees who, nonetheless, found themselves filing for bankruptcy.

Heather Heyer, with just a high school degree, and growing up in central Virginia, is the sort of person who would be completely invisible, totally beyond even the remotest consideration, to those of us in the tech community.

Heather’s funeral was held at the Paramount Theater, the largest venue in the downtown. As it happened, the last time I’d been in that theater I’d been listening to the rants of a foul-mouthed misogynistic venture capitalist, later exposed as one of the most notorious serial sexual harassers on the West Coast, who had been brought in with taxpayer assistance to be glorified as an exemplar on whom we should model our lives. The ersatz “tech festival” sponsoring him went on in a similar vein for days—perfect people with their perfect accents, perfect degrees, perfect bodies, perfect LinkedIn profiles and perfect access to Other People’s Money.  And yet in the one hour of Heather’s funeral, I heard more wisdom than I found in three days of that earlier celebration of education and entitlement.

Charlottesville is a wonderful place to be a tech developer, and I want that to continue. But there is more to life than technical acumen, advanced degrees, and knowing the right people. Charlottesville, and the world, is not just tech and venture capital, but people like Heather Heyer and her amazing family and friends, and their profoundly deep values, moral strength, and commitments. Let’s not forget that.

And to the heavily-armed jokers who descended upon our quiet community to march in Nuremberg-style torchlit parades chanting “You will not replace us”: we will replace you. Oh, most assuredly we will replace you.

And it is on the strength and convictions of people like Heather Heyer and Susan Bro that we will replace you.


So, punk, think ya can start a data science program??

This is the second part of a two-essay series addressing some of the features one might wish to include in a contemporary “data science” program using resources in existing quantitative “social science” programs. The first, a rather rambling polemic, addressed a series of more general questions about the state of the “social sciences”, concluding that, as sciences, they are quite mature, and have been so for decades: People who have played “bet your career” on the systematic study and prediction of human behavior are, shall we say, generally doing just fine.

This essay moves on to some more specific questions on how social science approaches might be adapted given the rapid developments in analytical methods that have been occurring to a large degree elsewhere, typically under the titles “machine learning” or “data science.”

These observations should be taken with, well, maybe a truckload of salt as they are based on an assortment of unsystematic primary and secondary observations of the current “data science” scene as viewed by an infinitesimally minor player located in the southernmost suburb of Washington, DC: Charlottesville, Virginia (or as we call it locally, CVille [1]). Despite having not a lot of skin in the game, dogs in the fight, or monkeys in the circus—merely an obnoxious penchant for clichéd metaphors—the rapid evolution of this “space”, as the contemporary phrasing goes, has been interesting to observe.

So, let’s get this show on the road, and our butt in gear. I’m generally addressing this to those who might be considering modifying one or more parts of an existing [implicitly academic] social science curriculum to be more [implicitly applied] “data science friendly.” As the number of people in that situation is rather limited—albeit I’m going to be talking to some in the near future—these observations may also be of some utility to the much larger number of people who are wondering “hey, what differentiates a well-trained social science statistician from a data scientist?” [Answer: mostly the title on their business card…and probably their compensation.]

As usual, seven observations, in this instance phrased as sometimes nuanced pairs moving from existing social science “statistics” approaches in the direction of a newer “data science.”  

1. “Science” vs “Engineering”

I am very much a child of the “Sputnik generation” and some of my earliest technical memories were discussions of the actual Sputnik, orbiting overhead, then watching U.S. rockets launching (and, presumably, blowing up) on our small television, then the whole space-race thing culminating in watching the first moon landing live. This was a [more civilized] period where elites, whether conservative or liberal, revered “science” and as a duly impressionable youngster, I imagined myself growing up to be “a scientist.”

Which I did, though in ways I definitely hadn’t imagined while watching rockets blow up, utilizing the fact that I was pretty good at math and was eventually able to combine this with my love of history and a good intuitive understanding of politics.  All in all, it made for a nice academic career, particularly during a period of massive developments in the field of quantitative political science.

But despite my technical credentials, I was always a bit uncomfortable with the “science” part of things, and all the more so because I don’t have an intuitive understanding of philosophy: when you hang around people who do, you quickly realize when you don’t. Sure, I can piece together an argument, much as one pieces together a grammatically correct sentence in a language one is just beginning to learn, but it’s not natural for me: I’m not fluent.

“Science” is nonetheless the prestige game in the academic world—particularly for Sputnik-generation Boomers and their slacker “Greatest Generation” overlords—and only after getting out of that world four years ago, and in particular into the thriving software development ecosystem in CVille, did I finally realize what I’m actually good at (and intuitive about): the problem-solving side of things. Which is to say, engineering rather than science. [2]

What’s the difference? Science is pursuing ultimate truths, engineering is solving immediate problems. I’m going to go seriously geek on you here, but consider the following two [trust me, more or less equivalent] issues

Considering the issues of both programmer and execution time, and software maintainability, are concurrent locks or transactional memory the better solution for handling multiple threads accessing memory?

or

If I need to scale access to this database, will I screw it up?

The first is a scientific problem—mind you, probably unanswerable—and the second is an engineering problem.

Everything in an academic culture is going to point towards solving scientific problems. [3] And very slowly. Students seeking employment outside of academia need to know general scientific principles, but ultimately they are going to be hired, compensated, and promoted as engineers.

2. Statistics vs Machine Learning

As the table below shows in some detail, the statistical and machine learning (ML) approaches appear to be worlds apart, and the prospect of merging these would appear at first to be daunting:

Feature | Statistics | Machine Learning
Primary objective | Determining if a variable has a “significant” effect | Classification
Theoretical basis | Probability theory | “Hey, I wonder if this will work??”
Feature space | Variable limited | Variable rich
Measurement | Should be careful and consistent | Whatever
Cases labeled? | Usually [4] | Maybe, maybe not (supervised vs unsupervised learning)
Heterogeneous cases? | Nooooo…. | Bring it on…
Explicit data generating process? | Ideally | Rarely
Evaluation method | Usually full sample | Usually split sample (training/test)
Evaluation metrics | Correlation | ROC AUC; accuracy/precision/recall
Importance of good fit? | Limited: objective is accurately assessing error given a model | How else will I get published?
Time series | Specialized models covered in 800-page books | Just another classification problem
Foundational result | Central limit theorem | Web scraping
Sainted ancestor | Carl Friedrich Gauss | Karen Spärck Jones
Distribution naming conventions | Long dead European males | Distributions?
Software naming conventions | Dysfunctionally abbreviated acronyms [7] impossible to track on Google | Annoyingly cute Millennial memes
Secret superpower | Client is totally dependent on us to interpret the numbers | Client doesn’t know we just downloaded the code from StackOverflow
Logistic regression is embarrassingly effective? | Yes | Yes

But from another perspective, these differences may actually be a good thing: it is quite possible that ML, like a rising tide, an invasive species finding empty ecological niches, coffee spilled on a keyboard, or whatever your preferred metaphor, has simply occupied the rather substantial low ground that the more constrained and generally analytically-derived statistical approaches have left vacant. Far from being competitors, they are complements.

Thus suggesting that the answer to “statistics or machine learning?” is “both.” I think this is particularly feasible because, while ML would be an addition to a statistical curriculum which has been carefully refined over the better part of a century [8], in the applied work I’m seeing the bulk of practical ML really comes down to four methods:

  • clustering, usually with k-means
  • support vector machines
  • random forests
  • neural networks, most recently in their “deep learning” modes

These are the general methods and do not cover more specialized niches such as speech and image recognition and textual topic modeling, but the degree of focus here is truly extraordinary given the time, effort and machine cycles that have been thrown at these problems over the past fifty years. Each method has, of course, a gadzillion variations—this is the “hyperparameter” issue for which there are also ML tools—but typically just going with the defaults will get you most of the way to the best solution you can obtain with any given set of data. Which is to say, the amount of ML one absolutely has to learn to do useful work in data science is quite finite.
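As a rough illustration, with synthetic data and near-default settings, here is what “most of the way with the defaults” looks like for those four methods in scikit-learn:

```python
# The four workhorse methods, run with near-default scikit-learn settings on
# a synthetic classification problem; the dataset and parameters are invented.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (SVC(), RandomForestClassifier(), MLPClassifier(max_iter=1000)):
    model.fit(X_train, y_train)
    print(type(model).__name__, round(model.score(X_test, y_test), 3))

# k-means is unsupervised: no labels are used in the fit
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X_train)
print("k-means cluster sizes:", [int((clusters == k).sum()) for k in (0, 1)])
```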

3. Analytical mathematics vs programming

I have [fruitlessly] addressed this topic in much greater detail elsewhere but the bottom line is that for data science applications, you certainly need to be able to comprehend algebraic notation, ideally at a point where you can rapidly and idiomatically skim equations (including matrix notation) in order to figure out where a proposed new method fits into the set of existing techniques. It certainly doesn’t hurt to have the equivalent of a couple semesters of undergraduate calculus [9], though [I think] I read somewhere that something like two-thirds of college graduates have this anyway (that figure probably includes a lot of AP credits). But beyond that, the return on investment in the standard analytical mathematical curriculum declines rapidly because most new methods are primarily developed as algorithms. [10]

The same can probably also be said for most of the formal coursework dealing with computer programming: it is very helpful to learn more than one computer language in some depth [11], learn something about data structures, including objects, and get some sense of how computers work at a level close to machine language (C is ideal for this, as well as useful more generally), but beyond that point, in the applied world, it’s really just practice, practice, practice.

While the rate of computer language and interface innovation has certainly slowed—C and Unix are now both nearing their half-century mark—it remains the case that one can be almost certain that in 2025 there will be some absolutely critical tool in wide use that only a handful (if that) of people today have even heard of. This is a completely different situation from analytical mathematics, where one could teach a perfectly acceptable calculus course using a textbook from 1850. As such, the value of intensive classroom instruction on the computer side is limited.

4. R vs Python

So, let the flame wars begin! Endlessly!

Well, maybe not. Really, you can manage just fine in data science in either R or Python (or more to the point, with the vast panoply of Python libraries for data analytics) but at least around here, the general situation is probably captured by a pitch I heard a couple weeks ago by someone informally recruiting programmers: “We’re not going to tell you what language to use, R or Python, just do good work. [long pause] Though we’re kinda transitioning to Python.”

So here’s the issue: Python is the offspring of a long and loving relationship between C and Lisp. Granted, C is the modest and dutiful daughter of the town merchant, and Lisp is the leader of a motorcycle gang, but it works for them, and we should be happy. Java, of course, is the guy who was president of student council and leaves anonymous notes on the doors of people who let their grass grow too tall. [12]

R is E.T. 

I will completely grant that R has been tremendously important in the rapid development of data analytics in the 21st century, through its sophistication, its incredible user community, and of course the fact that it is open source. The diffusion of statistical, and later machine learning, innovation made a dramatic leap when R emerged as a lingua franca to displace most proprietary systems except in legacy applications. [13]

But to most programmers, R is just plain weird, and at least I can never escape the sense that I’m writing scripts that run on top of some real programming language. [14] Whereas Python (and Java) are modern general purpose languages with all of the things you expect in a programming language—and then some. Even though, like R, both are scripted and also running on top of interpreters (in Python, once again C), when you are writing code it doesn’t feel like this. At least to me…and, as I watch the ever-increasing expansion of Python libraries into domains once dominated by R, I don’t think it’s just me.

Do not show any of this to any group of programmers without being aware that they will all disagree vehemently with all of it. Including the words “and” and “the.” Shall we move on?…

5. Small toy problems vs big novel problems

It’s taken me quite a while, and no small amount of frustration, to finally “get it” on this but yeah, I think I finally have: academic computer scientists like small “toy” problems (or occasionally, big “toy” problems) because those data sets are already calibrated, and the only way you can tell whether the new technique you are trying to get published is actually an improvement (or, more commonly, assess the various tradeoffs involved with the technique: rarely is anything an improvement on every dimension) is by testing it on data sets that lots of other methods already have been tested on, and which are thoroughly understood. Fair enough.

Unfortunately, that’s not what the applied world looks like—we’re back to that “science” vs “engineering” thing again—where the best opportunities, almost by definition, are likely to be those involving data that no one has looked at before and which are not well understood. If the data sets that are the focus of most computer science development and evaluation covered all of the possibilities in the novel data we’d still be okay, but almost by definition, they won’t, so we aren’t.

I realize I’m being a bit unreasonable here, as increasing the corpora of well-understood data sets is both difficult and, if computer science attitudes towards data collection are anything like those in the social sciences, largely unrewarded, but (easy for me to say…) could you at least try a bit more? Something other than irises and the emails between the long-incarcerated crooks at Enron?

6. Clean structured data vs messy unstructured data

This looks like the same problem as the above, but is a bit different, and more practical than theoretical. As everyone who has ever done applied data science work will confirm, most of one’s time (unless you’re at a shop where someone else does this for you) is spent on data preparation. Which is never exactly what you expect and, if you are dealing with data generated over an extended period of time, it probably has to be done for two or three subtly different formats (as well as coping with the two or three other format changes you missed). Very little of this can be done with standard tools, much of the work provides little or no intellectual rewards (beyond building software tools you at least imagine you might use at a later date), and it’s mostly just a long slow slog before you get to the interesting stuff.
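A trivial but representative sketch of the genre (the date formats here are invented for illustration) is the little function that quietly handles the two or three formats you know about and flags the one you missed:

```python
# Format-drift handling of the sort that eats most data-preparation time.
from datetime import datetime

# Three "subtly different" date formats encountered across an archive
FORMATS = ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y")

def parse_date(raw):
    """Try each known format; return None for records that fit none of them."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt)
        except ValueError:
            continue
    return None  # log these: they are usually the format change you missed

for raw in ("2014-07-08", "07/08/2014", "8 Jul 2014", "July 8th, 2014"):
    print(repr(raw), "->", parse_date(raw))
```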

This does not translate well to a classroom environment:

Welcome to my class. Before we begin, here’s a multi-gigabyte file of digital offal that is going to take a good six weeks of tedious work before you can even get it to the point where the groovy software you came here to learn can read it, and when you finally do, you’ll discover about ten assumptions you made in the initial data cleaning were incorrect, leading to two additional weeks of work, but since eight weeks is well past the drop date for this class, you’ll all be gone by that point, which is fine with me because I’d rather just be left alone in my office to work on my start-up. Enjoy.  

No, if you want to learn about technique, better to work with data you know is okay, and devote your class time to experimenting with the effects of changing the hyper-parameters.

Again, I don’t have an obvious solution to this: Extended senior and M.A. projects with real data, possibly with teams, may be a start, though even there you are probably wasting a lot of time unless the data are already fairly clean. Or perhaps the solution is just to adopt the approach of law schools, which gradually removed all of the boring stuff to the point where what is taught in law school is all but completely irrelevant to the actual practice of law. Worked for them! Well, used to…

7. Unicorn-aspiring start-up culture vs lean and sustainable business culture

This one isn’t really an academic issue, but a more general one dealing with practical preparation for students who will be venturing into the realm of applied—which is to say, commercial—data analytics. As those who follow my tweets know, here in innocent little CVille we recently completed an annual affair I referred to as the hip hipster fest, a multi-day taxpayer-subsidized [15] celebration of those who are born on third base and going through life acting like they hit a triple. It was, to a remarkable degree, a parody of itself [16]—the square-jawed hedge fund managers holding court in invitation-only lunches, the Martha Stewart wannabee (and well on her way) arguing the future of the city lay in the creation of large numbers of seasonal, minimum wage jobs catering to the fantasies of the 1%, the venture capitalist on stage in flip-flops who couldn’t complete a paragraph without using the F-word at least once. [17] Everywhere the same monotonously stereotypical message: aim big, don’t be afraid to fail!

Yeah right. Don’t be afraid to fail, so long as you come from a family of highly educated professionals, went to a private high school, Princeton, Oxford, Harvard, and married someone who had tenure at Harvard. Under those circumstances, yeah, I’d say you probably don’t need to be afraid to fail.

Everyone else: perhaps a little more caution is in order. And oh by the way, all those people telling you not to be afraid to fail?: if you succeed, they are going to get the lion’s share of the wealth you generate. And if you do fail—and in these ventures, the odds are absolutely overwhelming this will be the outcome—they’ll toss you aside like a used kleenex and head out looking for the next wide-eyed sucker “not afraid to fail.” Welcome to the real world.

So over the past few years I’ve come around to seeing all these things—endlessly celebrated in the popular media—as more than a bit of a scam, and have formulated my alternative: train people to create businesses where, as much as possible, you can sustainably be assured that you can extract compensation roughly at the level of your marginal contribution. [18] That’s not a unicorn-aspiring start-up—leave those for the sons and daughters of the 1%—and it is certainly not the “gig economy,” where the entire business plan involves some company making sure you are getting far less than your marginal contribution, buying low (you) and selling high (them). Stay away from both and just repeat “Lean and sustainable: get compensated at the rate of your marginal contribution.”

It’s an old argument—Monique Tilford and Vicki Robin’s 1990s best-seller Your Money or Your Life focused on exactly this principle, and in his own hypocritical fashion, it was the gist of Thomas Jefferson’s glorification of the [unenslaved] yeoman farmer as the foundation of a liberal democracy. Mind you, in point of fact Jefferson was born into wealth and married into even more wealth, and didn’t have the business acumen to run a 10-cent lemonade stand, whereas his nemesis Alexander Hamilton actually worked his way up from the utter dregs of poverty so, well, “it’s complicated,” but—as with so much of Jefferson—there is a lot useful in what he said, if not in what he actually did.

Getting back to the present, if you want to really help the data scientists you are planning to send out into the world, tell them not to get suckered into the fantasies of a fairy-tale start-up, and instead acquire the practical skills needed to create and run a business that can sustain itself—with minimum external funding, since banks and hedge funds are certainly not going to loan to the likes of you!—for a decade or more. Basic accounting, not venture capital; local marketing, not viral social networking; basic incorporation, payroll and tax law [19] , not outsourcing these tasks to guys in expensive suits doing lines of cocaine. And fundamentally, finding a viable business niche that you can occupy, hold, and with the right set of skills and luck, expand over a period of years or decades, not just selling some half-baked idea to the uncle of one of your frat brothers over vodka shots in a strip club.[20]

A completely different approach than promoting start-up culture, and it’s not going to get on the front pages of business magazines sold in airline terminals but, you know, it might just be what your students actually need. [21][22]

And such an approach might also begin to put a dent in the rise of economic inequality through a process more benign than revolution, war, or economic catastrophe. That would be sorta nice as well, eh?

Footnotes

1. Or C-Ville or Cville.

2. One of the [few] interesting conversations I had at the recent CVille hip hipster fest—mocked in extended detail below—was with a young—well, young compared to me, though that’s true of most things now other than sequoia trees—African-American woman working as a mechanical engineer at a local company. Well, contractor. Well, in a SCIF, and probably developing new cruise missile engines, but this is CVille, right? Still, it’s Hidden Figures, MMXVII. Anyway, our conversation was how great CVille was because of the large community of people who work on solving complex technical problems, and how helpful it was to be surrounded by people who understood doing that for a living, even though the applied domains might vary widely. A very different sort of conversation than I would have had as an academic.

3. More of an issue for the previous essay, as the organization-which-shall-not-be-named is obsessed with this, but a somewhat related issue here is the irrelevance of “grand theory” in applied social science.

Let’s start with a little thought experiment: You’re shopping for a car. What is “the grand theory of the car” of General Motors compared to Toyota? Honda versus Volkswagen? And just how much do these “grand theories of the car” factor into your buying decision?

Not in the least, right? Because there are no grand theories of the car, but instead multiple interlocking theories of the various components and systems that go into a car. Granted, the marketing people—most of whom probably couldn’t even handle the engineering challenges involved in changing a tire—would dearly love you to believe that, at astonishingly fundamental levels, a Toyota is fantastically distinct from a Honda and this is because of the deep cultural and, yea, spiritual differences between the Toyota way—the Dao of Toyota—and the Honda way, but that’s all it is: marketing. Which is to say, crap. They’re just making cars, and cars are made of parts.

And so it is with humans and human societies, which have evolved with a wide variety of components to solve a wide variety of problems. Some of these solutions are convergent—somewhere in one of Steven Pinker’s books, I think The Language Instinct, he has a list some anthropologist put together of characteristics of human societies that appear to be almost universal, and it goes for about four pages—and in other cases quite divergent (e.g. the dominant Eastern and Western religious traditions are diametrically opposed on both the existence of a single omnipotent deity and whether eternal life is something to be sought after or escaped from). There are very useful theories about individual components of human behavior, just as one can more or less identify a “theory of batteries”, “theory of tires”, or even—about as close as we come to something comprehensive—a “theory of drive trains”, notably those based on electricity and those based on internal combustion engines. These various theories overlap to limited degrees, but none is a “theory of the car,” and one doesn’t need a grand “theory of society” to systematically study human behavior.

Such theories are, in fact, generally highly counter-productive compared to the domain-specific mid-level theories. The other dirty little secret which I’ve found to be almost universally shared across disciplines involved in the study of humans—the humanities as well as the social sciences—is that individuals obsessed with grand theories are typically rather pompous, but fundamentally sad, people who don’t have the intelligence and/or experience to do anything except grand theory. And as a consequence eventually—or frequently after just one or two years in grad school—they don’t get to play in any reindeer games. Maybe for a dozen people in a generation this is not true—and the ideas of only half of even those survive beyond their generation—but for the rest: losers.

There’s a saying, I’m pretty sure shared by researchers on both sides of the political divide, about studies of the Israeli-Palestinian conflict: “People come for a week and they write a book. They come for a month and they write an article. They come for a year and they can’t write anything.” Yep.

4. There’s a fascinating article to be written—how’s that for a cop-out?—on the decline of clustering methodology (or what would now be called unsupervised models) in quantitative political science. Ironically, when computer analyses first became practical in the 1960s, one actually saw a fair amount of this because there had been extensive, and rather ingenious, methods developed in the 1930s and 1940s in the fields of psychology (notably Cattell) and educational testing in order to determine latent traits based on a large number of indicators (typically test questions). These techniques were ready and waiting to be applied in new fields once computers reduced the vast amount of labor involved, and for example some of the earliest work involving clustering nation-states based on their characteristics and interactions by the late Rudolph Rummel—ahead of the curve and out of the box throughout his long career—would now seem perfectly at home as an unsupervised “big data” problem.

But these methods didn’t persist, and instead were almost completely shunted aside by frequentism (which, one will observe throughout these two essays, I also suspect may be involved in causing warts, fungal infections in gym shoes, and the recent proliferation of stink bugs) and by the 1990s had essentially disappeared in the U.S. Why?

I suspect a lot of this was the ein Volk, ein Reich, ein Führer approach that many of the proponents of quantitative methods adopted to get the approach into the journals, graduate curricula and eventually undergraduate curricula. This approach required—or was certainly substantially enhanced by—simplifying the message to The One True Way of doing quantitative research: frequentist null hypothesis significance testing. Unsupervised methods, despite continuing extensive developments elsewhere [5], did not fit into that model. [6]

The other issue is probably that humans are really good at clustering without any assistance from machines—this seems to be one of the core features of our social cognition. As I noted in the previous essay, Aristotle’s empirically-based typology of governance structures holds up pretty well even 2,400 years later, and, considerably better than Aristotle’s observations on mechanics and physiology. Whereas human cognition is generally terrible at probabilistic inference, so in this domain systematic methods can provide substantial added value.

5. In 2000, I went to a large international quantitative sociology meeting in Cologne and was amazed to discover—along with the fact that evening cruises on the Rhine past endless chemical plants are fun provided you drink enough beer—a huge research tradition around correspondence analysis (CA), which is a clustering and reduction-of-dimensionality method originally developed in linguistics. It was almost like being in one of those fictional mirror worlds, where everything is almost the same except for a couple key differences, and in this case all of the sophistication, specialized variations and so forth that I was seeing in North America around regression-based methods were instead being seen here in the context of CA. I was actually quite familiar with CA thanks to a software development gig I’d done earlier—at the level of paying for a kitchen remodel no less—with a Chicago-based advertising firm, for which I’d written a fairly sophisticated CA system to do—of course—market segmentation, but few of the other Society for Political Methodology folks attending (as I recall, several of us were there on a quasi-diplomatic mission) had even heard of it. I never followed up on any of this, nor ever tried to publish any CA work in political science, though in my methodology classes I usually tossed in a couple examples to get across the point that there are more things in heaven and earth, Horatio, dudes, than are dreamt of in your philosophy copy of King, Keohane and Verba.

6. This may be an urban legend—though I’m guessing it is not—but factor analysis (easily the most widely used of these methods) took a hit when it was discovered that a far-too-popular routine in the very early BMDP statistical package had a bug which, effectively, allowed you to get whatever results you wanted from your data. Also making the point that the “reproducibility crisis” is not new.

7. “LDA”: latent Dirichlet allocation or linear discriminant analysis? “NLP”: natural language processing or nonlinear programming? “CA”: content analysis or correspondence analysis?

8. Even if still uniformly detested by students…hey, get over it!…and just last night I was at yet another dinner party where yet another nursing student was going into some detail as to how much she hated her required statistics class: I have a suspicion that medical personnel disproportionately prescribe unusually invasive and painful tests, with high false positive rates, to patients who self-identify as statisticians. Across the threshold of any medical facility, I’m a software developer.

9. Or more realistically, a semester of differential calculus and a semester of linear algebra, which some programs now are configured to offer.

10. The very ad hoc, but increasingly popular, t-SNE dimensionality-reduction algorithm is a good example of this transition when compared to earlier analytically-derived methods, such as principal components and correspondence analysis, which accomplished the same thing.
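
To make the contrast concrete, here is a sketch on synthetic data using scikit-learn, not any claim about how the methods should actually be applied: PCA gets its answer from a closed-form decomposition, while t-SNE iterates on a heuristic neighbor-embedding objective.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE

    X = np.random.default_rng(1).normal(size=(500, 20))      # stand-in for real indicator data

    pca_2d = PCA(n_components=2).fit_transform(X)             # analytic: eigenvectors of the covariance matrix
    tsne_2d = TSNE(n_components=2, perplexity=30.0,           # ad hoc: iterative optimization whose output
                   init="pca", random_state=1).fit_transform(X)  # depends on perplexity and the random seed

Both hand you 500 points in two dimensions; only the PCA coordinates have a closed-form interpretation in terms of the original variables.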

11. While not strictly needed for data science, I think there’s much to be said for getting reasonable competence in basic interface tools: currently this would be SQL, JavaScript and PHP. More generally, the fundamental split in “IT” jobs is “front-end”—user interfaces, or UX—and “back-end”, the analytics side that data science deals with; in most situations, databases (SQL and its successors) are the bridge between the two.
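
A toy illustration of that bridge, using nothing but the Python standard library and made-up table and country names: the back-end writes its aggregates into SQL, and the front-end (JavaScript/PHP, via whatever web framework) reads the same rows back out.

    import sqlite3

    con = sqlite3.connect("dashboard.db")             # hypothetical shared database
    con.execute("CREATE TABLE IF NOT EXISTS forecasts (country TEXT, month TEXT, risk REAL)")

    # Back-end: the analytics side writes its results into the database...
    con.execute("INSERT INTO forecasts VALUES (?, ?, ?)", ("Fredonia", "2018-08", 0.87))
    con.commit()

    # ...front-end: the UX side queries the same table to populate a display.
    for country, risk in con.execute(
            "SELECT country, risk FROM forecasts WHERE month = '2018-08' ORDER BY risk DESC"):
        print(country, risk)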

12. Encounters of this type are one of the reasons I no longer live in Pennsylvania, though the township minders, not the neighbors, were the culprit.

13. Like those government contracts where you sense that what they’d really like to require is that all computations be done in cuneiform on clay tablets. Because clay tablets have a track record for being very stable and were more than adequate for the government of Ashurbanipal.

But they don’t require cuneiform. Just MatLab.

14. Yes, sophisticated R programmers will just write critical code in C, effectively using R as a [still very weird] data management platform, but that’s basically C programming, not R.

15. Said subsidies furtively allocated by the city council in a process outside the normal review for such requests, which would have required the hip hipster fest to be audited, but that’s someone else’s story. Ideally someone doing investigative journalism.

16. Okay, some folks who clearly knew what they were doing put together an impressive 4-hour session on machine learning. Though I’m still not clear whether the relevant descriptor here was symbiosis or parasitism.

17. Dude, they’ve got drugs to treat that sort of condition now, don’t you know?

18. Caveat: I’ve now supported myself at a comfortable income level for almost four years as a software developer and data analyst—pretty much a successful business model by most reasonable definitions—but in that time I have not once managed to create a company whose capitalization exceeded a billion dollars! Alas, my perspective on business practice is undoubtedly affected by this.

19. But please do not expose students to the Pennsylvania corporate tax form[s] RCT-101  or they will immediately decide it would be way more pleasant to, say, become a street performer who chews glass light bulbs for spare change. Small business: just say no to Pennsylvania. I digress.

20. In light of the extended discussions over the past few days about just how psychologically challenging the academic world can be—teaching usually is rewarding but the rest of the job is largely one of nearly endless rejection and insult—another decided advantage of dealing with relatively short-term engineering problems—assuming, of course, that one can solve these successfully—is that one gets a lot of immediate gratification. And in the U.S. at least, running a small business has generally positive popular connotations even if, in practice, The Establishment, and both political parties (small business ≠ pay-to-play), and certainly Mr. Stiff-the-contractors President are very hostile, though probably no more so than they are towards academics. So individuals following this path are likely to be happier.

21. As it is partially paid for by tax dollars, I’d like to see the CVille hip hipster fest showcase some guy who graduated from the local community college and now runs his own welding shop, or some woman from similar circumstances who is starting up a landscaping company. I’d also like to see pigs fly, which I’m guessing will happen first.

22. A very concrete example of this problem arose about a week later when I was attending a local geek-fest whose purpose is largely, if unofficially, recruiting, and while I’m not [currently] recruiting, I had some interesting chats with some folks who expressed interest in what I’m doing (and I’ve realized that compared to what most people in data science end up doing, the tasks typically undertaken by Parus Analytics, a really, really small little operation, actually are quite interesting…), so I asked them whether they had business cards.

They didn’t have business cards.

Look, I am not going off on an extended screed about stupid Millennials, how could you not have business cards! (“Scotty, are the snark generators fully charged and engaged?” “Ay, Capt’n, and I’ve got them set at 11” “Excellent, so….GET OFF MY YARD!!!” I digress…) No, my point is that those sweet and helpful people who are telling Millennials to network, lean-in, and don’t-be-afraid-to-fail-because-you-have-a-large-trust-fund,-don’t-you? should also be advising “Don’t go to a networking/recruiting event without business cards,” and, while they’re on the topic, point out that 200 nicely individualized business cards (from the zillion available templates) cost about the same as a growler of craft beer, or, if you hit a sale (likely common around graduation time), about the cost of a pint, and take five minutes to set up and order.

But in the absence of a business card—yes, they seem very archaic but it’s hard to beat the combination of cost, bandwidth and specificity—the person you are trying to impress is far less likely to remember your name, and will probably misspell the URL of that web site you so carefully developed and instead get onto the home page of some Goth band whose leader bites off the heads of bats and whose drummer sells products for Amway.

Just saying…

Posted in Higher Education, Methodology, Programming | 4 Comments

Yes, Virginia, the social and data sciences are “science”

Dedicated to the memory of Will H. Moore

This is the first in a two-part series on leveraging quantitative social science programs to provide training in data science, inspired by a recent invitation to provide input on that topic at an undisclosed location outside of Trumpland. Where I may wish to seek asylum at some point, so I don’t want to mess it up.

In the process of working through my possible contributions, I realized my approach was predicated on the issue of whether these fields were sufficiently systematic and developed that the term “science” applied in the first place. They are—or at least the social science parts are—though as far as leveraging goes, as we shall see, the argument is more nuanced. [1] That’s for the next essay, however.

The “Is social science really science?” issue, on the other hand, never seems to go away, even as the arguments against it grow older and dumber with each passing year. In fact just a couple weeks ago I was out at a workshop in…well, let’s just be nice here: you know who you are…and once again had experiences that can best be compared to a situation where NASA has assembled a bunch of people to assess a forthcoming mission to a comet, and the conversation is going okay, if not great, and all of a sudden someone wearing the cap and gown of a medieval scholastic pipes up with “Excuse me, but your diagram has everything wrong: the Earth is at the center of the planetary system, not the Sun!” Harumph…thank you. Despite the interruption, the conversation continues, if more subdued, until about fifteen minutes later another distinguished gentleman—they are always male—also in cap and gown, exclaims “This is rank foolishness! Your so-called spacecraft can’t pierce the crystalline spheres carrying the planets, and it doesn’t need to: comets are an atmospheric phenomenon, and you should just use balloons!”

Protocol in these situations, of course, requires one to listen politely while others nod in agreement, all the while thinking “That is the dumbest f’ing thing I’ve heard since, well, since the last time I was in a meeting sponsored by this organization.” And hoping at least that the lunch-time sandwiches will be good. [they were]

The parade of nonsense invariably culminates in some august personage, who certainly hasn’t been exposed to anything remotely resembling philosophy of science since hearing a couple of apocryphal stories in 8th grade about Galileo, woefully intoning the mantra “I’m sorry, but it is simply impossible to study humans scientifically: they are too unpredictable.”

Yeah, brilliantly original observation Sherlock! And that’s why every morning we see Sergey Brin and Mark Zuckerberg on opposite sides of a Hwy 101 freeway entrance in Palo Alto holding cardboard signs reading “Will code for food.” So sad.

So, before proceeding to my bolt-hole essay [2], let’s pursue this larger issue in some detail with, of course, seven observations. I’m going to focus most of my remarks on the scientific development of political science, which is the case I know best, but every one of these characterizations applies to all of the social sciences (economics got there a good fifty years earlier, and demography perhaps 75).

I will be the first to acknowledge, of course, that “political science” created a bit of trouble for itself from the start, when the “American Political Science Association” (APSA)—now merely a real estate venture that happens to sponsor academic conferences and journals—labelled itself in 1903 (!) as “science” rather than, say, “government” or “politics.” This designation was partially signaling a common cause with the Progressive reaction to the corrupt machine politics of the day, but mostly because in the midst of the electrical and chemical revolutions of the late 19th century, “science” was a really cool label. Such was the zeitgeist, and the nascent APSA’s branding was no different than what its contemporary Mary Baker Eddy did in the field of religion. [3]

From this admittedly dodgy start, political science did, nonetheless, gradually develop a strong scientific tradition (as economics had about fifty years earlier), notably with Charles Merriam at the University of Chicago in the 1930s—though frankly, Aristotle’s mostly-lost empirical work on governance structures appears to have been pretty decent science even in the 4th century BCE—and from the 1960s onward, surfing a veritable technological tsunami of the computer and communications revolutions during the late 20th century. The results of these changes will be outlined below. [4]

From the perspective of formally developing a philosophy of social science, however, these developments hit at a rather bad time, as the classical logical positivist agenda, dating back to the 1920s, had ground to a slow and painful halt on the insurmountable issue of the infinite regress of ancillary assumptions: turtles all the way down. That program was replaced, for better or worse (from the perspective of systematic definitions, mostly the latter) by the historical-sociological approach of Thomas Kuhn. On the positive side, the technological imperatives/opportunities were so great that the scientific parts of the field simply barged ahead anyway and—large scale human behavior not being infinitely mutable—probably ended up about where they would have had the entire enterprise been carefully designed on consistent philosophical principles.

And thus, we see the following characteristics as defining the scientific study of human social behavior:

1. A consistent set of core methodological concepts accepted by virtually all researchers and institutions using the approach.

In the case of political science, these are conveniently embodied in the still widely used—for all intents and purposes canonical—text first published in 1994: King, Keohane and Verba, Designing Social Inquiry: Scientific Inference in Qualitative Research (KKV). [5] Apply the concepts and vocabulary of KKV (which, recursively, deal with the critical importance of concepts and vocabulary) anywhere in the world where the scientific approach to the study of political behavior is used, and you will be understood. KKV certainly has its critics, including me, because it defines the discipline’s methodology in a narrow frequentist statistical framework—albeit that approach probably still characterizes about 90% of published work in the discipline—but those debates occur on the foundations provided by KKV, who had adopted these from the earlier developments starting with Merriam and his contemporaries, and with massive input from elsewhere in the social and behavioral sciences, particularly economics and psychology.

2. A set of well-understood tools that have been applied to a wide variety of problems.

Okay, as I’ve argued extensively elsewhere [unpaywalled director’s cut here], this has probably become too narrow a set of tools, but at least the [momentously horrible in far too many applications] downsides of those tools are well understood and readily critiqued. These are taught through a very [again, probably too] standardized curriculum, both at elite graduate institutions and through specialized summer programs in the US and Europe, with the quantitative analysis program of the University of Michigan-based Inter-University Consortium for Political and Social Research dating back more than half a century. These core tool sets have experienced the expected forking and specialization through the creation of new and more advanced techniques. [6]

That core is not static: while published political science research is increasingly dominated by linear and logistic regression [7], the past two decades saw the rapid introduction and application, for example, of both Bayesian model averaging for conventional numerical analysis and latent Dirichlet allocation models for textual topics, along with the gradual introduction of various standard machine learning methods [8], and, I can say with certainty, we will soon see ample applications of various “deep learning” methods, at least from researchers at institutions that can afford to estimate these.
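
For the uninitiated, a topic model of the LDA variety is now a few lines of library code. This sketch uses a recent scikit-learn and a four-document toy corpus, so please don’t mistake it for anything resembling a serious application:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = ["budget deficit taxes spending vote",          # toy "documents"; real work would
            "troops border ceasefire conflict",            # start from thousands of texts
            "committee vote taxes budget",
            "conflict troops border casualties"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(docs)                      # document-term count matrix

    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    vocab = vectorizer.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        print("topic", k, [vocab[i] for i in weights.argsort()[-4:]])   # top words per topic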

3. Theories and methods of inference

As with the “Science” part of “APSA,” this one’s a little complicated. Or it is if you are listening to me.

On the surface—and this is the canonical KKV line—quantitative political science has a very clear model of inference, the “null hypothesis significance testing” or “frequentist” statistical approach, which is nearly universally used. Unfortunately, frequentism is a philosophical mishmash that was simply the last thing standing at the end of vicious methodological debates over statistical inference during the first three decades of the 20th century, is just barely logically coherent while being endlessly misinterpreted, and rests on default assumptions that are not really applicable to most [not all] of the problems to which it is applied. Except for that, frequentism is great.

The alternative to frequentism is Bayesian inference, which is coherent, corresponds to how most people actually think about the relationship between theories and data, and in the past forty or so years has become technically feasible, which it was not in the 1920s, when it was relegated to some intellectual netherworld by the victorious “ABBA”—anything but Bayesian analysis—faction.
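
The contrast in what the two approaches actually compute is easy to see on a toy problem: 14 successes in 20 trials, tested against a null of 0.5, using a recent SciPy and a flat prior. Purely a sketch:

    from scipy import stats

    k, n = 14, 20                                       # toy data: 14 "successes" in 20 trials

    # Frequentist: probability of data at least this extreme *assuming* the null p = 0.5 is true.
    p_value = stats.binomtest(k, n, p=0.5).pvalue

    # Bayesian: with a flat Beta(1, 1) prior the posterior for p is Beta(k + 1, n - k + 1),
    # so just ask directly for the probability that p exceeds 0.5 given the data.
    posterior = stats.beta(k + 1, n - k + 1)
    prob_gt_half = 1.0 - posterior.cdf(0.5)

    print(p_value, prob_gt_half)    # two numbers answering two quite different questions

The second number is the one people almost always think the first one means.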

Finally, creeping in from the side, without—to date—any real underlying philosophy beyond sheer pragmatism, are the machine learning methods. Though if I’m betting, as ever, on pragmatism, a yet-to-be-specified philosophical merging of Bayesian and machine learning approaches, which will not be particularly difficult to do, is likely to develop and could well be the dominant approach in the field by, say, 2040. Just saying.

The important point here, however, is that the issue of inference is actively taught and debated, and in the case of the frequentist-Bayesian debate, this discussion goes back more than a century. There’s a lot going on here.

4. Theories and methods for assessing causality

Contrary to the incantations of the bozos who intone “correlation is not causality” at social scientists like we’ve never heard the phrase before, causation is a fabulously complex problem: The Oxford Handbook of Causation alone is 800 pages in length, Judea Pearl’s book on the subject is 500 pages, and the Oxford Handbook series, perhaps noting that a copy of Oxford Handbook of Causation is currently (19 April 2017) priced on Amazon’s secondary market at $2,592.22 [9] also offers an additional 750-page Oxford Handbook of Causal Reasoning.

Which is to say, the question of causality is complicated, and it’s always been complicated. It’s even more complicated when dealing with social behavior because one can’t even assume a strict temporal ordering of causes and effects. [10]  But as with inference, we have frameworks and vocabularies for looking at the problem, about fifty years of work approaching it using a variety of different systematic empirical methods (invariably incorporating a variety of different sets of assumptions and trade-offs), and throughout the development of the discipline, an increasingly sophisticated understanding of the issues involved.

5. Experimental methodologies, including laboratory, quasi- and synthetic

Classical experimental methods generally dropped out of political science for two or three decades: the issues of generalizing from laboratory studies—particularly those with subject pools consisting of reluctant middle-class white undergraduates—to more general behaviors seemed simply too great. But a couple decades ago the method got a second look and subsequently has developed a thriving research community using a variety of methods.

But even during the nadir of classical experiments, extensive analyses were done with “natural” or “quasi-” experiments, particularly in the policy realm. Starting in the 1990s, “synthetic” experiments using artificially matched controls (in the discipline, these are often classed in the “causation” literature) have also seen extensive work.

It is certainly the case that compared to chemistry or mechanics, it’s hard to do a true experiment in political science (though with a suitably cooperative government and a suitably lax institutional review board, you can get pretty close). Last time I looked, however, it was also pretty hard to do these (outside a computer) in geology, astronomy and climatology. Last time I looked, no one (outside Trump administration appointees) was questioning the scientific status of those fields.

6. Consistent standards for the presentation and replication of data and results

Thanks in large part to the early and persistent efforts of Harvard’s Gary King, political science has been well ahead of the curve, by a good twenty years, in combating what is now called the “reproducibility crisis.” [11] Completely solving it, no—that won’t occur until we pry people away from thinking that loading lots of closely related variables into methods with the potential for providing wildly different results based on trivial differences in their methods of matrix inversion and gradient descent is a good idea [it isn’t]—but the checks are in place and, outside the profit-seeking journals (alas, most of them), the proper institutional expectations are as well.

The critical observation here, however, is that we can even have a reproducibility crisis: for three or four decades now quantitative political science has had a sufficiently strong foundation in shared data and methodologies that it is possible to assess whether something has gone wrong. Usually these are innocent mistakes due to carelessness and/or covariance matrices that are a bit too close to being singular, but every once in a while, not so innocent. Ending up as a story in every major publication in the country, as well as Nature and Science.

From the perspective of the scientific method, if not the individuals and institutions involved, that’s a good thing. Post-modernist approaches, I can assure you, will never experience a reproducibility crisis.

7. A mature international research community grounded in major universities and with active scientific communication through professional conferences and journals [12]

Which is to say, all of the features I’ve discussed in the previous sections have been thoroughly institutionalized, following exactly the model that goes back to the establishment of the Royal Society in London in 1660. Ironically, due to predictable institutional lock-in effects, it was necessary for the quantitative side of political science to actively break off from the APSA in the 1980s—a critical insight of John Jackson and Chris Achen, who founded and shepherded the development of the now-independent Society for Political Methodology in that period—but by the early 2000s the SPM’s journal, Political Analysis, had well eclipsed the journals of the older organizations as measured by the usual impact factor scores. [13] Graduate students at [most] elite institutions can get state-of-the-art training in methodology, and go on to post-docs and, ever so occasionally, faculty positions at other elite institutions. [14] Faculty and grad students—well, prior to the outcome of the 2016 US election—move easily between institutions in North America and Europe. [15]

 

The upshot: the social “sciences” involve a complex community dealing with multiple and continually evolving approaches and debates at both the philosophical and methodological levels. Returning to our touchstone of the Oxford Handbook series, the Oxford Handbook of Political Methodology, whose three editors include two past presidents of the Society for Political Methodology, runs to 900 pages. Oh, and since we’re discussing the social sciences more generally, did I mention that the Oxford Handbooks relevant to statistical economic modeling will set you back about 3,000 additional pages?

Do you need to master every nuance of these volumes to productively participate in debates on the future applications of social science methodology? No, you don’t need to master all of it—no one does—but before making your next clever little remark generalizing upon issues whose history and complexity you are utterly clueless about, could you please at least get some of the basics down, maybe even at the level of a first-year graduate student, and let the rest of us who know the material at levels considerably above that of a first-year graduate student get some work done?

Just saying.

As promised, we will pick up on some of the more positive aspects of this issue in the near future.

Footnotes

1. So I’m not going to be using the phrase “data science” much in this essay, though in most of the “data sciences” one is interested in human individual and social behavior, so it’s the same thing.

2. Which is to say, now drawing to a close my tortured tale of the travails which I must endure to provide you, the reader, with a seemingly endless stream of self-righteously indignant snark.

3. In the domain of religion, L. Ron Hubbard would take this approach “to 11” in the 1950s, the Maharishi Mahesh Yogi would do the same in the 1970s and so on: history doesn’t repeat but it certainly rhymes. And by the way, anyone who thinks U.S. culture is relentlessly “anti-science” hasn’t spent much time looking at advertising for cosmetics and diet programs.

4. Those comments will focus on only the scientific aspects of political science, which retains a variety of distinct approaches—for example philosophical, legal-institutional, historical, observational, and, of course, in recognition of 4-20, we must acknowledge post-modernism—rather than being exclusively scientific. Let a hundred flowers bloom: My point is simply that if because of organizational imperatives (and, perhaps, the ability to get consistent reproducible results) you want a “scientific” study of politics, these methods are extensive, sophisticated, and well developed, and this has been the case for decades.

5. Two of the authors of this now canonical text were, remarkably, from Harvard, a little institution just outside of Boston you may have heard of, and yet even so the book managed to influence the entire field: imagine that! Despite the title, it’s a guide to quantitative research: the word “qualitative” is just in there to gaslight you.

6. Which the organization-which-shall-not-be-named ignores entirely, preferring instead to pay people to develop their own independent methods no one else in the field takes seriously. Ain’t it great to have untold gobs of money? Organization-which-shall-not-be-named: pretty slick title for this essay, eh?

7. All too commonly over-specified logistic models applied, often as not, to problems which could be far more robustly studied using simple analysis-of-variance methods originally developed by Laplace in the 1770s, but no longer taught as part of the methodological canon…I digress…

8. Mind you, some of us were applying machine learning methods to conflict forecasting in the 1980s—see Valerie Hudson’s edited Artificial Intelligence and International Politics (1991)—though this didn’t catch on and, given the data and computational power we had at the time, was perhaps premature.

9. Seriously, but there’s a pretty mundane causal explanation for that sort of thing and it involves computers.

10. Classic case here in political science—the buzz-term is “endogeneity”—is determining the efficacy of political campaign expenditures: In general, more money will be spent on races that are anticipated as being close, so simple correlations will show high expenditures are associated with narrower victories, the opposite of the intended effects. There are ingenious ways for getting around this using statistical models—albeit they are particularly difficult on this problem because the true effects may actually be fairly weak, as Jeb Bush during the U.S. Republican Party primaries in 2016 was only the latest of many well-funded candidates to discover—and those methods are anything but intuitive.
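
A quick simulation makes the point. This is just the confounding story in the footnote, not the clever statistical fixes, and all the numbers are invented:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 5000

    closeness = rng.uniform(0, 1, n)                       # anticipated closeness of each race
    spending = 10 * closeness + rng.normal(0, 1, n)        # money flows to expected-close races
    margin = 20 * (1 - closeness) + 0.3 * spending + rng.normal(0, 2, n)   # true effect is +0.3

    print(np.corrcoef(spending, margin)[0, 1])             # negative: money looks like it narrows victories

    X = np.column_stack([np.ones(n), spending, closeness])
    beta, *_ = np.linalg.lstsq(X, margin, rcond=None)
    print(beta[1])                                         # conditioning on closeness recovers roughly +0.3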

11.  Shout-out to Charlottesville’s own Center for Open Science

12. Okay, so too many of those journals publish lowest-common-denominator research that is five years out of date, and they all are fiercely resisting open access, but that’s just rent-seeking behavior that could be stopped overnight with a suitable collaborative assault by about twenty-five deans and five research funders. Someday. I’m a Madisonian: I study humans, not angels.

13. Meanwhile both the political science experimentalists and social network analysts have split off from the more traditionally statistical SPM, forming their own organizations, with the text analysts probably soon to follow. Which really pisses off some people in SPM but hey, things change: go for it. Also it’s not like the SPM summer meetings, heading for their 34th consecutive year, are exactly undersubscribed.

14. Most do not and get jobs at lower ranked institutions: this is true in all disciplines and there’s even a name for it, which I can’t readily locate.

15. For whatever reason, I’ve long been more comfortable professionally with the Nordic and Germanic Europeans than with my fellow Americans. Perhaps because I write stuff like this. Or because the Europeans don’t harbor the same preconceptions about Kansas as do Americans…

I would also note that I’m using the phrase “North American and European” simply to reflect where things stand at the moment: the conservatism of the well-funded Asian universities (e.g. Japan, South Korea, older Chinese institutions) and the lack of resources  (and still more academic conservatism) in pretty much everywhere else in the world limit the possibilities for innovation. But in time this will happen: generations of graduate students from around the world have been getting trained in (and have made important contributions to) these methods, but the institutional barriers in transferring them, particularly when dealing with the study of potentially sensitive political issues (that’s you, China…), are substantial.

Posted in Uncategorized | 1 Comment

Reflecting on the suicide of Will Moore

I spent most of today working on a new blog post motivated in part by a re-tweet of a teaser for same by, well, Will Moore. Who I had seen in Phoenix only six weeks ago where he introduced my talk with—and I am now so glad I told him this at the time—one of the most thoughtful and eloquent introductions I’ve ever received. This was followed by dinner at an appropriately sleazy Tex-Mex place which Will, being Will, hoped would at least begin to match the appropriately sleazy zydeco joint he took me to outside Tallahassee a few years earlier.

Then, via Twitter, the village well of the political set, the news, and then reading through his final blog post. There’s more to say than really works on Twitter, and so that other blog post is going to wait a day or two, even though I suspect Will would be particularly fond of it. In this one, three thoughts for the living:

1. It is certainly the case that our relatively small community of experts on violent political conflict—of which Will was a part—does not have the most stressful of jobs: those go to the people sent, nowadays often repeatedly, into conflict zones by “leaders” who simply think it will send “a message”, who would never in a million years ask members of their own families to do this, yet mindlessly send fellow citizens into regions and cultures they don’t understand and have been given little practical preparation for, and who on returning are told simply to “get over it” or, at best, to wait at the end of a really long line. People on the receiving end of these cynical and soulless political “statements” don’t fare too well either. I’m not equating our academic and research experience to that.

Yet at the same time, I suspect, particularly over the long run, this work begins to take a toll. I left academia before “trigger warnings” came into vogue, but in my courses on conflict and on defense policy, there were books where every other chapter probably deserved a trigger warning. And those were just the ones I assigned: the background reading could be far worse. In the process of coding the PITF atrocities data, every month I get to read every story from anywhere in the world of journalists getting gunned down in front of their children, people desperately searching for the bodies of their wives or husbands in the debris of marketplace car bombings, and joyful wedding parties suddenly reduced to bloody carnage because some shell went astray or someone misinterpreted the video from a distant drone. You can’t take it all in, but you can’t, and shouldn’t, just ignore it either. When I’m doing this coding, my mood is invariably somber, and every once in a while, I’ll say something and realize no, most people don’t think like this.

And so, to those in this business: we’re too small a population to ever study, but this could well have effects, and I’m sure they are not positive. Be careful, eh?

2. Since the Deaton and Case study, there’s been a lot of concern—though of course, little real action—on the rise in “deaths by despair” in the white middle class. Is this another such situation?: perhaps, and Will’s final blog certainly points in that direction.

My grandfather and great-uncle in a down-and-out corner of nowhere in southwestern Indiana both killed themselves late in life: the family saying—Hoosiers, just a bundle of laughs they are—was “The Schrodt treatment for depression is drinking a pint of Drano.” So the prospect of depression—which I got close enough to a couple of times much earlier in my life to get a sense of the possibilities—is always there. I self-medicate a bit—St. John’s wort during the winter months, and exercising caution in my consumption of recreational depressants—and self-meditate a lot. Both seem effective at keeping the beast at bay.

If not, there are people who understand these things way better than an affected individual can as an amateur, and please avail yourself of them: when it was needed—induced in my case by stresses in the academic world—I certainly did. The Hollywood/Woody Allen/New Yorker cartoon image of endless years of talking with some balding and bearded guy while on a couch: no, it’s not like that at all (or at least it wasn’t for me); instead, just a lot of focused and sensible discussions with one or more smart and empathetic people for a few weeks or months, aimed at figuring out what is setting you off and how you might change this. My grandfather and great-uncle didn’t have that sort of help available, whereas I—and I would guess virtually everyone reading this blog—did have it, easily. Use it.

Your chances of needing therapy, however, will decrease if you’ve got some community. Any community: book group, Renaissance fairs, Saturday bicycling, roller derby, helping kids learn to code. Whatever: just some group of people who will occasionally inconvenience and aggravate you but which you trust could be occasionally inconvenienced and aggravated in return. And which will for a period of time get you away from staring at screens filled with pixels and wondering how you’ll find time to read (or write) yet another article. We’re social animals: we have no more evolved to be alone than we have evolved to live underwater. Bob Putnam pointed this out twenty years ago; little changed, and the outcome was Deaton and Case.

3. My final point is one I’ve made before, and is addressed specifically to aging academics: if you are feeling like you’ve done your time, as Will clearly did based on what he said in his final blog posting, get out! Voluntarily, not [just] for the sake of the next generation, but for your own sake. I did this four years ago and can quite literally say I have not for a second regretted the decision: there’s another world out there, new things to explore, new opportunities, things you never thought you’d do, go for them. If there is one thing I wish I could have said to Will on that trip in January—though I probably did, just not with sufficient effect—that would be it.

If you are happy in academia, great!—keep at it. But don’t keep at it if you are not happy: you’ve got golden handcuffs, and they may be golden, but they are still handcuffs. In something like Will’s situation, with family responsibilities in the past, the world is open to you, and you won’t be able to even imagine some of those opportunities until you let go of the routines and grasp the freedom. I could go on and on (and have) about how academia, with its rigid schedules, suffocating bureaucratic complexity, repetitive debates, endless preening and hustling, and vast time horizons stretching far into the future, is not a country for old men (nor, generally, women of any age): there’s more to life than another stack of blue books and another faculty meeting considering reversing policies you had endless faculty meetings putting into place fifteen years ago, those in turn reversing policies established fifteen years before that.

Really.

When we were living in Norway, we noticed a common phrase on gravestones in little rural cemeteries: Takk for alt. Fred.  Thanks for everything. Peace.

Thanks for everything, Will. Peace.

 

Posted in Higher Education, Ramblings | 2 Comments

Reflections on Uber, brogrammers, and the effectiveness of working class programmers

The toxic “brogrammer” culture has been in the news again, initially with engineer Susan J. Fowler’s blog post about her year-long experience with sexual harassment at Uber, then the reaction of Uber’s CEO [1] Travis Kalanick, who was shocked, shocked to discover he is surrounded by utter assholes, then new video proof that, shock, shock, Travis Kalanick himself is an utter asshole (as well as exhibiting a charmingly naive lack of awareness of the capabilities of dash-cams), followed by still more revelations of nefarious deeds [2], reaching a point where I wouldn’t be surprised to find the Trump administration simply deciding to hire Uber’s entire management team to fill their thousands of empty appointments. Oh, and did I mention that despite all this, Uber is losing fantastic sums of money? Hey, you go guys!

But while Uber is perhaps unusually bad given the visibility and nominal value of the company, it is hardly unique, and in fact the bad-boy programmer is a long-running cultural meme: see for example the 1993 movie Jurassic Park, where you get a guaranteed applause line when the “programmer”—subtly named “Dennis Nedry” and played as Jabba the Hutt minus table manners—becomes Dilophosaurus chow. Two books I read last summer on the current programming start-up culture—Disrupted: My Misadventure in the Start-Up Bubble by Dan Lyons and Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley by Antonio Garcia Martinez—are essentially book-length renditions of the same story, and the racist, misogynistic culture of Silicon Valley generally is a story dating back two or three decades. [3][22]

So, what gives here? These stories are so pervasive that those outside the real world of programming probably are asking why there is even grist for a blog entry here. But at least some of those of us on the inside find it puzzling: how could people with the personality disorders it apparently takes to get ahead in these companies ever write decent code? Fundamentally, programming requires intense concentration for extended periods, a high level of attention to unforgiving detail, and a willingness to set aside your ego when trying to improve a product. Not, shall we say, exactly consistent with the nonstop Animal House bacchanalia that characterizes these companies. How are these people possibly writing decent code?

The answer, I would submit: they aren’t. And the intensity of their craziness and exclusivity is simply a smokescreen for that fact.

Or to be a bit more nuanced, what the brogrammers are riding on—and their success will almost certainly be temporary—is the fact that the contemporary programming environment allows an individual, or a relatively small group of individuals, to do absolutely extraordinary things with relatively little effort. Hence a team of brogrammers with only average skill can ship a reasonably workable product more or less on time while spending, max, perhaps 20 hours a week doing something with code while not hung-over or otherwise incapacitated [4], and spending the rest of the time on office politics, talking sports, violating every known workplace discrimination law, and ingesting every imaginable intoxicant. All the while claiming, of course, to be working 80-hour weeks. The sheer aggressiveness with which they work to exclude people from outside the brogrammer culture is born of the necessity of keeping this fact from becoming common knowledge.

But, no, no, it can’t be!: our entire computer infrastructure, the very fact VCs are throwing billions  of dollars at us, is dependent on hiring arrogant misogynistic assholes and party animals! They must be tolerated, lest we be reduced to a state where our iPhones are useful only for killing rats for food! Nooooo…

Well, bunko, let me explain: it ain’t like that any more. Mind you, it probably never was like that—and I do hope that Hidden Figures, both the book [5] and the movie, begin to correct the historical misconceptions—but it certainly isn’t that way now, because getting computers to do incredibly impressive things is really easy now. [6]

This, in turn, is primarily due to two innovations which only came into play in the past decade (conveniently just beyond the learning horizon of a substantial number of people who are investing money in brogrammer teams): open-source toolkits, and web-based crowd-sourced documentation [20], in particular a site called Stack Overflow. In a nutshell, the toolkits mean that almost any general problem you need to solve, unless it is really recent—typically meaning weeks—will already have been coded, and that code is available for free. If you have any problems getting that code to work, and it’s been around more than six months or so, the solutions for those issues are on the web as well. [7] You basically just fill in the gaps relevant to your specific application, and you’re done. 

Really, it’s that simple. As William Gibson puts it, “The future is already here—it’s just not evenly distributed.”

To explore this a bit further, consider the recent story about a Nigerian programmer being stopped at the US border by the Stasi…errr, ICE…and “asked to write a function to balance a binary tree.” [8][19] As some wag on Twitter noted, the correct response to that question is “I’d look it up on Stack Overflow.” [10] A slightly better question would have been “Tell me the circumstances under which you would need to balance a binary tree,” but even then the correct answer would be “You first tell me a situation where I could possibly justify the time involved in setting up a binary tree rather than doing equivalent tasks with existing data structures in Python.” By that point, of course, anyone giving those responses would be blindfolded, in cuffs, and in the cargo hold heading back to Lagos, but still, those are the correct answers. Assuming that a working programmer needs to be able to balance a binary tree is like refusing to hire a house painter unless they know how to make their own brushes and mix their own pigments.
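
For the record, here is roughly what “existing data structures in Python” means in practice (a sketch, with made-up values): no balancing required, blindfold optional.

    import bisect

    # A dict is a hash table: constant-time-ish lookups, nothing to balance, ever.
    counts = {"assault": 18, "consult": 4}
    counts["protest"] = 14
    print(counts.get("protest"))

    # Need ordered data? Keep a plain list sorted with bisect.
    ordered = []
    for value in [42, 7, 19, 3, 88]:
        bisect.insort(ordered, value)                   # insertion keeps the list sorted
    print(ordered)                                      # [3, 7, 19, 42, 88]
    print(ordered[bisect.bisect_left(ordered, 19)])     # ordered lookup, no tree in sight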

With decided speed [16] the world of programming has evolved into one where people with decent training but generally average skills can do absolutely extraordinary things in very short amounts of time. This is the world of self-documenting open-source software, increasingly running in effectively universal hardware environments, which is the consequence of a series of relatively recent developments about which I will have an absolutely fantastic blog entry if I can ever get around to finishing it. [11] The necessity of retaining the arrogant irreplaceable toxic genius asshole and the swarms of party animal brogrammers indulged with a “work” environment characterized by substantial investments in toys and endless keggers was a gamble with the odds stacked against it in the best of times, and is completely unnecessary now.

Which brings us to my final point, the possibility of the emergence of the working class programmer. Working class in two respects. First, recognition that programming projects can be quite competently done by suitably trained, responsible and, well, ordinary individuals. The phrase “programmer” as a generic job description was probably last relevant in the early 1970s, before the emergence of time-sharing, or at best in the early 1980s before the emergence of graphical user interfaces. [21] After those points in time, the field fragmented—in a perfectly normal and organic fashion—into ever-increasing sub-specialties which require different sets of skills [12] but, like all professional specializations, can be mastered to a level where one can produce very competent code after about 500 to 1000 hours of training and practical experience. The notion that only a tiny number of people can achieve this is the much-maligned “talent myth,” [13] which leads clueless managers and VCs to seek out “the best people”—who just happen to always be white, male, and great companions, particularly while downing vodka shots at strip clubs [14]—rather than putting together a calm, predictable, and competent team with the appropriate skill sets.

These jobs are also, perhaps a bit anachronistically, “working class” in the sense that the salaries required to get these people are the sort associated, with the usual adjustments for inflation, with those of workers with specialized skills in the industrial age (with health insurance, nowadays without a pension or union), which means in the $60K to $100K range, and working 40-hour weeks or something reasonably approximating those. But also with the expectation that one will follow contemporary professional standards. There are plenty of such people around.

[Shameless self-promotion alert!!] The other thing you might at least consider is getting a few people with a bit—maybe even more than a bit?—of experience, because about 90% to 100% of the stuff you need to do on any project will be routine, not the stuff that can (in theory, though rarely in practice) only be accomplished by arrogant irreplaceable toxic genius assholes. And when it comes to “routine”, or even some things that aren’t routine, experience makes things easy. In particular, take this quote from 2017 Super Bowl champion quarterback Tom Brady:

“I have the answers to the test now,” Brady said. “You can’t surprise me on defense. I’ve seen it all. I’ve processed 261 games, I’ve played them all. It’s an incredibly hard sport, but because the processes are right and are in place, for anyone with experience in their job, it’s not as hard as it used to be.”

Now move that into the programming realm: you’ve got a brilliant new idea that you want to pitch to—or may already have been funded by—DARPA or IARPA. But I know the same thing has been tried five times over the past forty years, and based on the current state of the technology, I know which parts look easy but aren’t, which parts can in fact be efficiently solved now but couldn’t be earlier, and I’ve got a pretty good idea where things are going to take much longer than you think. I know which of the data sets you think exist in fact don’t, and the details of twenty more data sets which would actually be more useful but which you’ve never heard of. I can’t do this with every project, just as I’m guessing Tom Brady’s tennis game isn’t particularly exceptional, but in my field, I can do it with a lot, and in other fields, there are plenty of other people just like me, for whom “it’s not as hard as it used to be.”

Just sayin…

So where do you go from here? If your shop is festering with brogrammers, follow the advice of Nellie Forbush in South Pacific and “show’em what the door is for.” Though nine times out of ten—well, unless you are Uber [17]—just enforcing the rules in the employee handbook will be sufficient. Based on what I’ve seen, these brogrammers with no discernible talents beyond office politics,  partying, and harassing women and minorities would fit in pretty well as roofers—hanging dry-wall takes too much attention to detail—and will enjoy the camaraderie of following hailstorms across the Great Plains. [15]

But, but, I can’t do that! I have to lose money! Lots of money! In order to be successful and attract vast sums of investment capital, I have to make absolutely sure my start-up is horribly inefficient, misses deadlines and ships crappy code! If I don’t do that, the VCs won’t take my company seriously, and pretty soon I’ll be hanging drywall! You…just…don’t…understand!!!

Calm down, calm down…surely we can work this out. Yes, there are vast pots of dumb money floating around, and only so many good ideas, though if you knew how to assemble a competent team for programming, not just tail-gating, more good ideas than you might think. So, how about this:  just put a bunch of Syrian refugees on your payroll, tell them just to take the money and spend their time getting their lives back together and watch while their kids rapidly learn English to a level of fluency superior to that of the average Middlebury undergraduate, and tell the VCs the Syrians have brought you secret algorithms stolen from the Russians and smuggled out of Aleppo in the final days of the siege. You know, Rogue One. Key thing is they’ll be burning money, which the VCs want to see, and unlike the brogrammers, the refugees won’t be getting in the way of people actually adding value to your product. 

This, bunko, is a pathway to being hailed as a management genius and pretty soon, a TedX talk!

Footnotes

1. Chief enabling offender

2. That’s not to say Uber doesn’t have an eventually viable business model, just as sending around a bunch of large guys carrying lead pipes to threaten to break the kneecaps of pizza parlor owners is a viable business model. [18] But recognize it for what it is, and stop pretending that it encompasses some sort of fantastic breakthrough in software and systems design: all Uber has done is implement a pretty obvious and easily duplicated idea with a reasonable level of competence. Albeit by most accounts, the sorts of operations that dispatch guys with lead pipes to engage pizza parlor owners turn a profit. Which Uber, as noted above, does not.

3. Though within the community, the Bastard [Systems] Operator from Hell—BOFH—was a long-standing meme dating to the early 1990s. Though unlike Uber managers, the BOFH was not presumed to be disproportionately harassing women, or at least that wasn’t part of the memes I was hearing—it probably does exist somewhere in the genre.

4. From an outsider’s perspective, one of the absolutely weirdest management fads (more like a cult) over the past couple of decades is “pair programming,” where you hire two programmers to do the work of one. It has its very strong advocates (and detractors)—just Google it—and doubtlessly worked in some instances. But note that this is the unscrupulous manager’s dream: “Wow, I get to hire all of my otherwise unemployable party-boy friends, and pair each one with a nerd who will not only do all of the work, but who I’ll allow them to terrorize with impunity, and things will be golden forever!” Can’t imagine that this model hasn’t been deployed in more than one situation.

5. Shout-out to Charlottesville author Margot Lee Shetterly!

6. What is programming? In my mind, if what you are doing uses logical loops and branches, you’re programming. Ada Lovelace wrote the first loop: that’s when programming started. If this statement makes no sense to you, you aren’t programming. Data structures also matter; the rest is pretty much just appropriately using an ever-changing collection of libraries and knowing how to efficiently debug code. Mostly the last.
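
In that spirit, and since Lovelace’s Note G computed Bernoulli numbers, here is the whole definition in a dozen lines, a sketch with the loop, the branch, and a data structure explicitly labeled; everything else really is libraries and debugging.

    from fractions import Fraction
    from math import comb

    def bernoulli(n):
        # Standard recurrence: B_0 = 1,  B_m = -(1/(m+1)) * sum_{k<m} C(m+1, k) * B_k
        B = [Fraction(1)]                                    # the data structure
        for m in range(1, n + 1):                            # the loop
            total = sum(comb(m + 1, k) * B[k] for k in range(m))
            B.append(-total / (m + 1))
        return B

    for i, b in enumerate(bernoulli(8)):
        if b:                                                # the branch: skip the zero entries
            print(f"B_{i} = {b}")                            # B_0 = 1, B_1 = -1/2, B_2 = 1/6, ...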

7. We’re still on the leading edge on this, but almost all serious production applications will now be configured to run in the cloud, thus standardizing hardware and to some degree, operating systems. REST interfaces to Docker containers also seem to be emerging—at least for now—as a standard for hiding diversity in the underlying software.

8. The Stasi agent allegedly said “You don’t look like a programmer.” Which is rather making my point, isn’t it?: “programmers” may be white, South Asian or Chinese, but never African. To say nothing of African-American. Which is to say, in terms of racial integration we’ve actually gone backwards from the opening scene in the movie Hidden Figures, where an archetypical redneck cop in tidewater Virginia could still be persuaded that African-American women were helping us beat the Russians. [9]

9. To say nothing of the anachronism that a [now] presumably Republican southern cop would view the Russian government with suspicion, rather than embracing them as BFF. My, how things do change…

10. For those unacquainted with Stack Overflow, it provides the equivalent of the fact that one can enter the phrase “which dinosaur killed programmer in Jurassic Park” into Google and instantly get a thorough answer, bypassing the inconvenience of watching the movie.

11. Or more precisely, editing it, as I’ve written 16 pages, and these entries should be about half that. 10 pages already written on the identity/economics crisis in the Democratic Party, 35 pages on the future of Western democracy…someday…

12. Generally not including balancing binary trees.

13. “Talent myth” links: originally the concept was Malcolm Gladwell’s in 2002; here’s a 2015 update from Huffington Post http://www.huffingtonpost.com/amol-sarva/talent-is-a-myth_b_6793870.html and here’s a fairly influential version specifically directed at the programming community https://lwn.net/Articles/641779/ But really, any task whose effectiveness comes from summing a series of sub-tasks for which skill levels are randomly distributed, according to pretty much any distribution (anything with finite variance), will end up with the composite effectiveness having a Gaussian (bell-shaped) distribution: that’s the law! (specifically the Central Limit Theorem). Whereas it is nearly impossible to come up with a data-generating process that would produce the U-shaped curves assumed by the talent myth.
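
If you want to see the law in action, a five-line simulation will do: skewed, ugly sub-task skills in, bell curve out. All numbers invented.

    import numpy as np

    rng = np.random.default_rng(0)
    # 100,000 hypothetical programmers, 30 sub-tasks each, skills drawn from a
    # deliberately skewed (exponential) distribution rather than anything bell-shaped.
    subtask_skill = rng.exponential(scale=1.0, size=(100_000, 30))

    effectiveness = subtask_skill.sum(axis=1)           # composite effectiveness = sum over sub-tasks
    hist, _ = np.histogram(effectiveness, bins=50)
    print(hist)                                         # one central hump, roughly Gaussian; no U shape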

14. The vast amounts of money funding keggers and tolerance of behaviors that keep HR’s lawyers awake at night are proof positive that high-income tax rates are too low, not too high. Another reliable rule-of-thumb: If you are spending more than a quarter of your time dealing with personnel conflicts or office politics, you are over-staffed.

15. Of course, where they will actually end up is in finance, not roofing—again, unless under-secretary positions are still available in the Trump administration—though that apparently doesn’t provide quite the number of jobs for the well-connected party-boy incompetents as it once did. Though it is unsurprising that VCs and hedge funds find this model attractive, as it is essentially the same as their own model: put up with large amounts of stuff that will fail (as no less than Warren Buffett has pointed out, hedge funds are actually a terrible deal) in the hope that one or two bets will pay off big time. It is true that a few arrogant genius asshole programmers actually might, in the right set of circumstances, add value to a project, and it is just possible that maybe one of them will make your company a unicorn. But the chances are low that you’ve actually managed to hire such an individual, whereas the chances are quite good that the ones you actually did hire will create a sufficiently dysfunctional environment that your company will fail, or at least not do as well as it would otherwise.

16. Doing my background research, I was astonished to see that Stack Overflow dates only to 2008: it feels as though it has been around forever.

17. Or Kay and Jared Jewelers

18. As it happens—I’m in meetings where this sort of thing is discussed seriously in the context of political instability—extortion, rather than dealing drugs, is actually the most straightforward way for most gangs to make money. They also don’t make nearly as much money as you probably imagine. Then again, neither does Uber.

19. Presumably the Stasi/ICE guy saw this question somewhere on LinkedIn.

20. I’m pretty sure this seemingly magical process of self-documentation accounts for the almost cult-like infatuation seen in the tech community for a forthcoming “singularity” where networked machines effectively become conscious. But no, it’s just people trying to be helpful in exchange for a bit of recognition, and that is a very fundamentally human thing—in fact quite possibly one of the single most important things that makes us human—not a machine thing.

21. Not long after the Macintosh went on the market, I wrote a commercially successful ancillary for what became a best-selling political science textbook. “Commercial” in the sense that it was for a major textbook publisher, and I was paid reasonably well for my efforts. “Successful” in the sense that the textbook sold well—not necessarily for reasons related to my program—and we won a couple of awards for the program, and the publisher continued it into multiple editions, eventually taking the work in-house. Writing for the Mac was a real eye-opener: less than ten years earlier, Kernighan and Ritchie had defined not only the C language, but major elements of structured programming, in only 200 pages. The books showing how to program the Mac ran to four—eventually five—thick volumes, with “every one assuming you had already mastered the others.” It was a completely new world.

22. I also should note that this rant is not directed at any people or projects I’ve worked with directly: it is motivated by the very consistent set of stories coming through in the autobiographical accounts of others. I’ve certainly worked on teams that had programmers who were handsomely paid while contributing absolutely nothing to the project—and am embarrassed to say that on at least a couple of occasions (none recent!!) I’ve been that person—but they’ve been pleasant about it. I’ve also run into plenty of the arrogant irreplaceable toxic genius asshole programmer types in academic settings and, come to think of it, on one occasion made the very serious mistake of letting one into a project I was directing, but in my government and commercial work I have successfully avoided—or perhaps more accurately, my various project managers have created teams that have avoided—encountering them. Then again, I’ve generally worked on projects that have produced pretty decent code that does what it is supposed to do, rather than merely blowing through billions of dollars of dumb money. Funny, that.


Seven Conjectures on the State of Event Data

[This essay was originally prepared as a memo in advance of the “Workshop on the Future of the CAMEO Ontology”, Washington DC, 11 October 2016, said workshop leading to the new PLOVER specification. I had intended to post the memo to the blog as well, but, obviously, neglected to do so at the time. Better late than never, and I’ve subsequently updated it a bit. It gets rather technical in places and assumes a fairly high familiarity with event data coding and methods. Which is to say, most people encountering this blog will probably want to skip past this one.]

The purpose of this somewhat unorthodox and opinionated document [1] is to put on the table an assortment of issues dealing with event data that have been floating around over the past year in various emails, discussions over beer and the like. None of these observations are definitive: please note the word “conjecture”.

1. The world according to CAMEO will look pretty much the same using any automated event coder and any global news source

The graph below shows the CAMEO frequencies across its major categories (two-digit) using three different coders, PETRARCH 1 and 2 [2], and Raytheon/BBN’s ACCENT (from the ICEWS data available on Dataverse) for the year 2014. This also reflects two different news sources: the two PETRARCH cases are Lexis-Nexis; ICEWS/ACCENT is Factiva, though of course there’s a lot of overlap between those.

[Figure “cameo_compare”: CAMEO two-digit category frequencies for PETRARCH-1, PETRARCH-2, and ICEWS/ACCENT, 2014]

Basically, “CAMEO-World” looks pretty much the same whichever coder and news source you use: the between-coder variances are completely swamped by the between-category variances. What large differences we do see are probably due to changes in definitions: for example, PETRARCH-2 uses a more expansive definition of “express intent to cooperate” (CAMEO 03) than PETRARCH-1; I’m guessing BBN/ACCENT did a bunch of focused development on IEDs and/or suicide bombings, so it has a very large spike in “Assault” (18), and they seem to have pretty much defined away the admittedly rather amorphous “Engage in material cooperation” (06).
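If you want to replicate this sort of comparison, the computation is trivial; a minimal sketch, with the file and column names being hypothetical placeholders for whatever your event files actually contain:

```python
# Collapse each coder's events to two-digit CAMEO cue categories and compare
# the resulting proportions across coders.
import pandas as pd

# hypothetical combined file: one row per coded event, with a "coder" column
events = pd.read_csv("events_2014.csv", dtype=str)
events["cue"] = events["cameo_code"].str[:2]   # e.g. "043" -> cue category "04"

props = (events.groupby("coder")["cue"]
               .value_counts(normalize=True)
               .unstack(fill_value=0.0))

print(props.round(3))           # rows: coders; columns: cue categories 01..20
print(props.T.corr().round(2))  # between-coder correlations of the category profiles
```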

I think this convergence is due to a combination of three factors:

  1. News source interest, particularly the tendency of news agencies (which all of the event data projects are now getting largely unfiltered) to always produce something, so if the only thing going on in some country on a given day is a sister-city cultural exchange, that will be reported (hence the preponderance of events in the low categories). Also the age-old “when it bleeds, it leads” accounts for the spike in reports of violence (CAMEO categories 17, 18, 19).
  2. In terms of the less frequent categories, the diversity of sources the event data community is using now—as opposed to the 1990s, when the only stories the KEDS and IDEA/PANDA projects coded were from Reuters, which is tightly edited—means that as you try to get more precise language models using parsing (ACCENT and PETRARCH-2), you start missing stories written in non-standard English that would be caught by looser systems (PETRARCH-1 and TABARI). Or at least this is true proportionally: on a case-by-case basis, ACCENT could well be getting a lot more stories than PETRARCH-2 (alas, without access to the corpus they are coding, I don’t know), but for whatever reason, once you look at proportions, nothing really changes except where there is a really concentrated effort (e.g. category 18), or changes in definitions (ACCENT on category 06; PETRARCH-2 on category 03).
  3. I’m guessing (again, we’d need the ICEWS corpus to check, and that is unavailable due to the usual IP constraints) all of the systems have similar performance in not coding sports stories, wedding announcements, recipes, etc.: I know PETRARCH-1 and PETRARCH-2 have about a 95% agreement on whether a story contains an event, but a much lower agreement on exactly what the event is. The various coding systems probably also have a fairly high agreement, at least at the nation-state level, on which actors are involved.

2. There is no point in coding an indicator unless it is reproducible, has utility, and can be coded from literal text

IMHO, a lot of the apparent disagreements within the event data community about coding of specific texts, as well as the differences between the coding systems more generally stem from trying to code things that either can’t be consistently coded at all—by human or automated systems—or which will never be used. We should really not try to code anything unless it satisfies the following criteria:

  • It can be consistently categorized by human coders on multiple projects working with material from multiple sources who are guided solely by the written documentation. I.e. no project-level “coding culture” or “I know it when I see it.”; also see the discussion below on how little we know about true human coding accuracy.
  • The coded indicators are useful to someone in some model (which probably also puts a lower bound on the frequency with which a code will be found in the news texts). In particular, CAMEO has over 200 categories but I don’t think I’ve ever seen a published analysis that doesn’t either collapse these into the two-digit top-level cue categories, or more frequently the even more general “quad” or “penta” categories (“verbal cooperation” etc.), or else pick out one or two very specific categories. [3]
  • It can be derived from the literal text of a story (or, ideally, a sentence): the coding of the indicators should not require background knowledge except for information explicitly embedded in the patterns, dictionaries, models or whatever ancillary information is used by the automated system. Ideally, this information should be available in open source files that can be examined by users of the data.

If an indicator satisfies those criteria, I think we usually will find we have the ability to create automated extractors/classifiers for it, and to do so without a lot of customized development: picking a number out of the air, one should be able to develop a coder/extractor using pre-existing code (and models or dictionaries, if needed) for at least 75% of the system.

3. There is a rapidly diminishing return on additional English-language news sources beyond the major international sources

Back in the 1990s, with the expanding availability of news sources in aggregators and on the Web, the KEDS project at the University of Kansas was finally able to start using some local English-language sources in addition to Reuters, where we’d done our initial development. We were very surprised to find that while these occasionally contributed new events, they did not do so uniformly, and in most instances, the international sources (Reuters and AFP at the time) actually gave us substantially more events, and event streams more consistent with what we’d expected to see (we were coding conflicts in the former Yugoslavia, eastern Mediterranean, and West Africa). This is probably due to the following:

  1. The best “international” reporters and the best “local” reporters are typically the same people: the international agencies don’t hire some whiskey-soaked character from a Graham Greene novel to sit in the bar of a fleabag hotel near the national palace, but instead hire local “stringers” who are established journalists, often the best in the country and delighted to be paid in hard currency. [19]
  2. Even if they don’t have stringers in place, international sources will reprint salient local stories, and this is probably even more true now that most of those print sources have web pages.
  3. The local media sources are frequently owned by elites who do not like to report bad news (or spin their own alt-fact version of it), and/or are subject to explicit or implicit government censorship.
  4. Wire-service sourcing is usually anonymous, which substantially enhances the life expectancy of reporters in areas where local interests have been known to react violently to coverage they do not like.
  5. The English and reporting style in local papers often differs significantly from international style, so even when these local stories contain nuggets of relevant information, automated systems that have been trained on international sources—or are dependent on components so trained: the Stanford CoreNLP system was trained on a Wall Street Journal corpus—will not extract these correctly.

This is not to say that some selected local sources could not provide useful information, particularly if the automated extractor was explicitly trained to work with them. There is also quite a bit of evidence that in areas where a language other than English predominates, even among elites, non-English local sources may be very important: this is almost certainly true for Latin America and probably also true for parts of the Arabic-speaking world. But generally “more is better” doesn’t work, or at least it doesn’t have the sort of payoff people originally expected.

4. “One-a-day” (OAD) duplicate filtering is a really bad idea, but so is the absence of any duplicate filtering

I’m happy to trash OAD filtering without fear of attack by its inventor because I invented it. To the extent it was ever invented: like most things in this field, it was “in the air” and pretty obvious in the 1990s, when we first started using it.

But for reasons I’ve recently become painfully aware of, and have discussed in an assortment of papers over the past eighteen months (see http://eventdata.parusanalytics.com/papers.dir/Schrodt.TAD-NYU.EventData.pdf for the most recent rendition), OAD amplifies, rather than attenuates, the inevitable coding errors found in any system, automated or manual.
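For readers who have not encountered it, this is all OAD amounts to mechanically, sketched here with hypothetical column names; note that it also cheerfully keeps any coding error that happens to produce a unique date-source-target-event combination:

```python
# One-a-day filtering: keep at most one record per (date, source, target, event
# code) combination. Column names are illustrative assumptions, not a standard.
import pandas as pd

events = pd.read_csv("coded_events.csv")  # hypothetical raw coder output

oad = events.drop_duplicates(subset=["date", "source", "target", "cameo"])

print(len(events), "raw codings ->", len(oad), "after one-a-day filtering")
```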

Unfortunately, the alternative of not filtering duplicates carries a different set of issues. While those unfamiliar with international coverage typically assume that an article which occurs multiple times will be somehow “more important” than an article that appears only once (or a small number of times), my experience is that this is swamped by the effects of:

  • The number of competing news stories on a given day: on a slow news day, even a very trivial story will get substantial replications; when there is a major competing story, events which otherwise would get lots of repetition will get limited mentions.
  • Urban and capital city bias. For example, when Boko Haram set off a car bomb in a market in Nigeria’s capital Abuja, the event generated in excess of 400 stories. Events of comparable magnitude in northeastern regional cities such as Maiduguri, Biu or Damaturu would get a dozen or so, if that. Coverage of terrorist attacks over the past year in Paris, Nice, Istanbul and Bangkok—if not Bowling Green—shows similar patterns.
  • Type of event. Official meetings generate a lot of events. Car bombings generate a lot of events, particularly from sources such as Agence France-Presse (AFP), which broadcasts frequent updates.[4] Protracted low-level conflicts only generate events on slow news days and when a reporter is in the area. Low-level agreements generate very few events compared to their likely true frequency. “Routine” occurrences, by definition, generate no reports—they are not “newsworthy”—or generate these on an almost random basis.
  • Editorial policy: AFP updates very frequently; the New York Times typically summarizes events outside the US and Western Europe in a single story at the end of the day; Reuters and BBC are in between. Local sources generally are published only daily, but there are a lot of them.
  • Media fatigue: Unusual events—notably the outbreak of political instability or violence in a previously quiet area—get lots of repetitions. As the media become accustomed to the new pattern, stories drop off.[18] This probably could be modeled—it likely follows an exponential decay—but I’ve rarely seen this applied systematically.
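That decay claim is easy enough to sketch; a minimal curve fit on invented daily story counts (the numbers below are simulated, not real coverage data):

```python
# Fit an exponential decay to daily story counts following the onset of a new
# conflict. Everything here is simulated purely to illustrate the shape.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
days = np.arange(30)

# invented counts: an initial burst decaying toward a small background rate
counts = rng.poisson(120 * np.exp(-0.2 * days) + 5)

def decay(t, a, b, c):
    return a * np.exp(-b * t) + c

(a, b, c), _ = curve_fit(decay, days, counts, p0=(100.0, 0.1, 1.0))
print(f"initial burst ~{a:.0f} stories/day, decay rate {b:.2f}/day, background ~{c:.1f}/day")
```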

So, what is to be done? IMHO, we need to do de-duplication at the level of the source texts, not at the level of the coded events. In fact, go beyond that: start by clustering stories, ideally run these through multiple coders—as noted earlier, I don’t think any of our existing coders are optimal for everything from a Reuters story written and edited by people trained at Oxford to a BBC radio transcript of a static-filled French radio report out of Goma, DRC, which is then quickly translated by a non-native speaker of either language—and then base the coded events on those that occur frequently in that cluster of reports. Document clustering is one of the oldest applications in automated text analysis, and there are methods that could be applied here.
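As a sketch of one such approach, and emphatically not the only one: TF-IDF vectors plus a cosine-similarity threshold, where the threshold and the toy stories are arbitrary choices of mine:

```python
# Group near-duplicate reports by cosine similarity of their TF-IDF vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

stories = [
    "Rebels attacked an army convoy near Goma on Tuesday, officials said.",
    "An army convoy was attacked by rebels outside Goma, officials said Tuesday.",
    "Talks on a sister-city cultural exchange opened in the capital.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(stories)
sim = cosine_similarity(tfidf)

# greedy single-pass clustering: join the first earlier story that is similar
# enough (the 0.6 threshold is an arbitrary choice), otherwise start a new cluster
clusters = []
for i in range(len(stories)):
    for j in range(i):
        if sim[i, j] > 0.6:
            clusters.append(clusters[j])
            break
    else:
        clusters.append(len(set(clusters)))

print(clusters)  # the two convoy reports collapse into one cluster: [0, 0, 1]
```

At the scale of millions of stories one would presumably swap the all-pairs similarity matrix for something like minhash or locality-sensitive hashing, but the logic is the same.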

5. Human inter-coder reliability is really bad on event data, and actually we don’t even know how bad it is.

We’ve got about fifty years of evidence that the human coding [5] on this material doesn’t have a particularly high correlation when you start, for example, comparing across projects, over time, and in the more ambiguous categories.[6] While the human coding projects typically started with coders at an 80% or 85% agreement at the end of their training (as measured, typically, by Cronbach’s alpha) [7], no one realistically believes that was maintained over time (“coding drift”) and across a large group of coders who, as the semester rolled on, were usually on the verge of quitting. [9] And that is just within a single project.

The human-coded WEIS event data project [10] started out being coded by surfers [11] at UC Santa Barbara in the 1960s. During the 1980s, WEIS was coded by Johns Hopkins SAIS graduate students working for CACI, and in Rodney Tomlinson’s final rendition of the project in the early 1990s [12], by undergraduate cadets at the U.S. Naval Academy. It defies belief that these disparate coding groups had 80% agreement, particularly when the canonical codebook for WEIS at the Inter-university Consortium for Political and Social Research was only about five (mimeographed) pages in length.

Cross-project correlations are probably more like 60% to 70% (if that) and, for example, a study of reliability on (I think [20]) some of the Uppsala (Sweden) Conflict Data Program’s conflict data a couple years ago found only 40% agreement on several variables, and 25% on one of them (which, obviously, must have been poorly defined).

The real kicker here is that because there is no commonly shared annotated corpus, we have no idea what these accuracy rates actually are, nor measures of how widely they vary across event categories. The human-coded projects rarely published any figures beyond a cursory invocation of the 0.8 Cronbach’s alpha for their newly-trained cohorts of human coders; the NSF-funded projects focusing on automated coding were simply not able to afford the huge cost of generating the large-scale samples of human-coded data required to get accurate measures, and various IP and corporate policy constraints have thus far precluded getting verifiable information on these measures for the proprietary coders.
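For what it’s worth, the statistics themselves are not the hard part once you have doubly coded sentences; a minimal sketch using invented codes, with Cohen’s kappa standing in for whichever agreement coefficient a project prefers:

```python
# Percent agreement and a chance-corrected agreement measure between two coders
# assigning cue categories to the same sentences. The codes below are invented.
from sklearn.metrics import cohen_kappa_score

coder_a = ["04", "04", "18", "03", "17", "06", "04", "19"]
coder_b = ["04", "03", "18", "03", "17", "04", "04", "17"]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
kappa = cohen_kappa_score(coder_a, coder_b)

print(f"percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")
```

The hard (and expensive) part, as noted above, is getting a large shared set of doubly coded sentences in the first place.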

6. Ten possible measures of coder accuracy

This isn’t a conjecture, just a point of reference. These are from https://asecondmouse.wordpress.com/2013/05/10/seven-guidelines-for-generating-data-using-automated-coding-1/

  1. Accuracy of the source actor code
  2. Accuracy of the source agent code
  3. Accuracy of the target actor code: note that this will likely be very different from the accuracy of the source, as the object of a complex verb phrase is more difficult to correctly identify than the subject of a sentence.
  4. Accuracy of the target agent code
  5. Accuracy of the event code
  6. Accuracy of the event quad code: verbal/material cooperation/conflict [13]
  7. Absolute deviation of the “Goldstein score” on the event code [14]
  8. False positives: event is coded when no event is actually present in the sentence
  9. False negatives: no event is coded despite one or more events in the sentence
  10. Global false negatives: an event occurs which is not coded in any of the multiple reports of the event

This list is by no means comprehensive, but it is a start.
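A sketch of how a few of these might be computed against a gold standard set; the record layout and the quad mapping here are invented for illustration and are not part of any existing coder:

```python
# Compare coder output to a gold standard on a few of the measures above:
# source actor accuracy, event code accuracy, quad accuracy, false positives
# and false negatives. Record layout: sentence id -> {source actor, event code},
# with None meaning "no event in this sentence".
gold  = {1: {"src": "NGA", "code": "18"}, 2: {"src": "FRA", "code": "04"}, 3: None}
coded = {1: {"src": "NGA", "code": "17"}, 2: {"src": "FRA", "code": "04"},
         3: {"src": "USA", "code": "01"}}

def quad(code):
    # one common cue-category -> quad mapping (verbal/material cooperation/conflict)
    return ("v-coop" if code <= "05" else "m-coop" if code <= "09"
            else "v-conf" if code <= "14" else "m-conf")

matched = [k for k in gold if gold[k] and coded.get(k)]
src_acc   = sum(gold[k]["src"] == coded[k]["src"] for k in matched) / len(matched)
event_acc = sum(gold[k]["code"] == coded[k]["code"] for k in matched) / len(matched)
quad_acc  = sum(quad(gold[k]["code"]) == quad(coded[k]["code"]) for k in matched) / len(matched)
false_pos = sum(1 for k in coded if coded[k] and not gold.get(k))
false_neg = sum(1 for k in gold if gold[k] and not coded.get(k))

print(f"source actor {src_acc:.2f}, event code {event_acc:.2f}, quad {quad_acc:.2f}")
print(f"false positives {false_pos}, false negatives {false_neg}")
```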

7. If event data were a start-up, it would be poised for success

Antonio Garcia Martinez’s highly entertaining, if somewhat misogynistic, Chaos Monkeys: Obscene Fortune and Random Failure in Silicon Valley quotes a Silicon Valley rule-of-thumb that a successful start-up at the “unicorn” level—at least a temporary billion-dollar-plus valuation—can rely on only a single “miracle.” That is, a unicorn needs to solve only a single heretofore unsolved problem. So for Amazon (and Etsy), it was persuading people that nearly unlimited choice was better than being able to examine something before they bought it; for AirBNB, persuading amateurs to rent space to strangers; for DocuSign [21], realizing that signing documents was such a pain that you could attain a $3-billion valuation just by providing a credible alternative [22].  If your idea requires multiple miracles, you are doomed.[15]

In the production of event data, as of 2016, we have open source solutions—or at least can see the necessary technology in open source—to solve all of the following parts for the low-cost near-real-time provision of event data:

  • Near-real-time acquisition and coding of news reports for a global set of sources
  • Automated updating of actor dictionaries through named-entity-recognition/resolution algorithms and global sources such as Wikipedia, the European Commission’s open source JRC-Names database, CIA World Leaders and rulers.org
  • Geolocation of texts using open gazetteers, famously geonames.org and resolution systems such as the Open Event Data Alliance’s mordecai.
  • Inexpensive cloud based servers (and processors) and the lingua franca of Linux-based systems and software
  • Multiple automated coders (open source and proprietary) that probably well exceed the inter-coder agreement of multi-institution human coding teams

More generally, in the past ten years an entire open source software ecosystem has developed that is relevant to this problem (though typically in contexts far removed from event data): general-purpose parsers, named-entity-recognition/resolution systems, geolocation gazetteers and text-to-location algorithms, near-duplicate text detection methods, phrase-proximity models (word2vec, etc.), and so forth.

The remaining required miracle:

  • Automated generation of event models, patterns or dictionaries: that is, generating and updating software to handle new event categories and refine the performance on existing categories.

This last would also be far easier if we had an open reference set of annotated texts, and even Garcia Martinez allows that things don’t require exactly one miracle. And we don’t need a unicorn (or a start-up): we just need something that is more robust and flexible than what we’ve got at the moment.

SO…what happened???

The main result of the workshop—which covered a lot of issues beyond those discussed here—was the decision to develop the PLOVER coding and data interchange specification, which basically simplifies CAMEO to focus on the levels of detail people actually use (the CAMEO cue categories with some minor modifications [16]), as well as providing a systematic means—“modes” and “contexts”—for accommodating politically-significant behaviors not incorporated into CAMEO such as natural disasters, legislative and electoral behavior, and cyber events. This is being coordinated by the Open Event Data Alliance and involves an assortment of stakeholders (okay, mostly the usual suspects) from academia, government and the private sector. John Beieler and I are writing a paper on PLOVER that will be presented at the European Political Science Association meetings in Milan in June, but in the meantime you can track various outputs of this project at https://github.com/openeventdata/PLOVER. A second effort, funded by the National Science Foundation, will be producing a really large—it is aiming for about 20,000 cases, in Spanish and Arabic as well as English—set of PLOVER-coded “gold standard cases” which will both clearly define the coding system [17] and simplify the task of developing and evaluating coding programs. Exciting times.[23]

Footnotes:

1. Unorthodox and opinionated for a workshop memo. Pretty routine for a blog.

2. The blue bar shows the count of codings where PETRARCH-1 and PETRARCH-2 produce the same result; despite the common name, they are essentially two distinct coders with distinct verb phrase dictionaries.

3. Typically with no attention as to whether these were really implemented in the dictionaries: I cringe when I see someone trying to use the “kidnapping” category in our data, as we never paid attention to this in our own work because it wasn’t relevant to our research questions.

4. I read a lot of car bomb stories: http://eventdata.parusanalytics.com/data.dir/atrocities.html

5. When such things existed for event data: There really hasn’t been a major human coded project since Maryland’s GEDS event project shut down about 15 years ago. Also keep in mind that if one is generating on the order of two to four thousand events per day—the frequency of events in the ICEWS and Phoenix systems—human coding is completely out of the picture.

6. In some long-lost slide deck (or paper) from maybe five or ten years ago, I contrasted the requirements of human event data coding with research—this may have been out of Kahneman’s Thinking, Fast and Slow—on what the human brain is naturally good at. The upshot is that it would be hard to design a more difficult and tedious task for humans than event data coding.

7. Small specialized groups operating for a shorter period, of course, can sustain a higher agreement, but small groups cannot code large corpora.

9. In our long experience at Kansas, we found that even after the best selection and training we knew how to do, about a third of our coders—actually, people developing coding dictionaries, but that’s a similar set of tasks—would quit in the first few weeks, and another sixth by the end of the semester. A project currently underway at the University of Oklahoma is finding exactly the same thing.

10. The WEIS (World Events Interactions Survey) ontology, developed in the 1960s by Charles McClelland, formed the basis of CAMEO and was the de facto standard for DARPA projects from about 1965 to 2005.

11. Okay, “students” but at UCSB, particularly in the 1960s, that was the same thing.

12. Tomlinson actually wrote an entirely new, and more extensive, codebook for his implementation of WEIS, as well as adding a few minor categories and otherwise incrementally tweaking the system, much as we’ve been seeing happening to CAMEO. Just as CAMEO was a major re-boot of WEIS, PLOVER is intended to be a major modification of CAMEO, not merely a few incremental changes.

13. More recently, researchers have started pulling the high-frequency (and hence low-information) “Make public statement” and “Appeal” categories out of “verbal cooperation”, leading to a “pentacode” system. PLOVER drops these.

14. The “Goldstein scale” actually applies to WEIS, not CAMEO: the CAMEO scale typically referred to as “Goldstein” was actually an ad hoc effort around 2002 by a University of Kansas political science grad student named Uwe Reising, with some additional little tweaks by his advisor to accommodate later changes in CAMEO. Which is to say, a process about as random as that which was used to develop the original Goldstein scale by an assistant professor and a few buddies on a Friday afternoon in the basement of the political science department at the University of Southern California. Friends don’t let friends use event scales: Event data should be treated as counts.

15. Another of my favorite aphorisms from Garcia Martinez: “If you think your idea needs an NDA, you might as well tattoo ‘LOSER’ on your forehead to save people the trouble of talking to you. Truly original ideas in Silicon Valley aren’t copied: they require absolutely gargantuan efforts to get anyone to pay serious attention to them.” I’m guessing DocuSign went through this experience: it couldn’t possibly be worth billions of dollars.

16. To spare you the suspense, we eliminated the two purely verbal “comment” and “agree” categories, split “yield” into separate verbal and material categories, combined the two categories dealing with potentially lethal violence, and added a new category for various criminal behaviors. Much of the 3- and 4-digit detail is still retained in the “mode” variable, but this is optional. PLOVER also specifies an extensive JSON-based data interchange standard in hopes that we can get a common set of tools that will work across multiple data sets, rather than having to count fields in various tab-delimited formats.

17. CAMEO, in contrast, had only about 350 gold standard cases: these have been used to generate the initial cases for PLOVER and are available at the GitHub site.

18. For example, a recent UN report covering Afghanistan 2016 concluded there had been about 4,000 civilian casualties for the year. I would be very surprised if the major international news sources—which I monitor systematically for this area—got even 20% of these, and those covered were mostly major bombings in Kabul and a couple other major cities.

19. Which they may use to buy exported whiskey, but at least that’s not the only thing they do.

20. Because, of course, the article is paywalled. One can buy 24-hour access for a mere $42 and 30-day access for the bargain rate of $401. Worth every penny since, in my experience, the publisher’s editing probably involved moving three commas in the bibliography, and insisting that the abstract be so abbreviated one needs to buy the article.

21. The original example here was Uber, until I read this. Which you should as well. Then #DeleteUber. This is the same company, of course, where just a couple years ago one of their senior executives was threatening a [coincidentally, of course…] female journalist. #DeleteUber. Really, people, this whole brogrammer culture has gotten totally out of control, on multiple dimensions.

Besides, conventional cabs can be, well, interesting: just last week I took a Yellow Cab from the Charlottesville airport around midnight, and the driver—from a family of twelve in Nelson County, Virginia, and sporting very impressive dreadlocks—was extolling his personal religious philosophy, which happened to coincide almost precisely with the core beliefs of 2nd-century Gnosticism. Which is apparently experiencing a revival in Nelson County: Irenaeus of Lyon would be, like, so unbelievably pissed off at this.

22. Arguably the miracle here was simply this insight, though presumably there is some really clever security technology behind the curtains. Never heard of DocuSign? Right, that’s because they not only had a good idea but they didn’t screw it up. Having purchased houses in distant cities both before and after DocuSign, I am inordinately fond of this company.

23. PLOVER isn’t the required “miracle” alluded to in item 7, but almost certainly will provide a better foundation (and motivation) for the additional work needed in order for that to occur. Like WEIS, CAMEO became a de facto “standard” more or less by accident—it was originally developed largely as an experiment in support of some quantitative studies of mediation—whereas PLOVER is explicitly intended as a sustainable (and extendible) standard. That sort of baseline should make it easier to justify the development of further general tools.
