Yeah, I blog…

A while back I realized I’d hit fifty blog posts, and particularly as recent entries have averaged—with some variance—about 4000 words, that’s heading towards 200,000 words, or two short paperbacks, or about the length of one of the later volumes of the Harry Potter opus, or 60%-70% of a volume of A Song of Ice and Fire. So despite my general admonishment to publishers that I am where book projects go to die, maybe at this point I have something to say on the topic of blog writing.

That and I recently received an email—I’m suspicious that it comes from a bot, though I’m having trouble figuring out what the objectives of the bot might be (homework exercise?)—asking for advice on blogging. Oh, and this blog has received a total of 88,000 views, unquestionably vastly exceeding anything I’ve published in a paywalled journal. [1] And finally, I’ve recently been reading/listening, for reasons that will almost certainly never see the light of day [2], to works on the process of writing: Bradbury (magical) [3], Forster (not aging well unless you are thoroughly versed in the popular literature of a century ago), James Hynes’s Great Courses series on writing fiction, as well as various “rules for writing” lists by successful authors.

So, in my own style, seven observations.

1. Write, write, write

Yes, write, write, write: that’s one of the two consistent bits of advice every writer gives. [4] The best consistently write anywhere from 500 to 1500 words a day, which I’ve never managed (I’ve tried: it just doesn’t work for me), but you just have to keep writing. And if something doesn’t really flow, keep writing until it does (or drop it and try something else). And expect to throw away your first million words. [5]

But keep your day job: I’ve never made a dime off this, nor expect to. I suppose I’ve missed opportunities to earn some beer money by making some deal with Amazon for the occasional links to books, but it doesn’t seem worth the trouble or the conflicts of interest, and you’ve probably also noticed the blog isn’t littered with advertisements for tactical flashlights and amazing herbal weight-loss potions. [6] Far from making money, for all I know my public display of bad attitude has lost me some funding opportunities, though those would have driven me (and some poor program manager) crazy anyway.

2. Edit, edit, edit

Yes, in a blog you are freed from the tyranny of Reviewer #2, but with great power comes great responsibility, so edit ruthlessly. This has been easy for me, as Deborah Gerner and I did just that on the papers we wrote jointly for some twenty years, and at least some people noticed. [7] And as the saying goes, variously attributed to Justice Louis Brandeis and writer Robert Graves, “There’s no great writing, only great rewriting.”

In most cases these blog entries are assembled over a period of days from disjointed chunks—in only the rarest of cases will I start from the proverbial blank page/screen and write something from beginning to end—which gradually come together into what I eventually convince myself is a coherent whole, and then it’s edit, edit, edit. And meanwhile I’ll be writing down new candidate sentences, phrases, and snark on note cards as these occur to me in the shower or making coffee or walking or weeding: some of them work, some don’t. For some reason WordPress intimidates me—probably the automatic formatting, I note as I’m doing final editing here—so now I start with a Google Doc—thus ensuring an interesting selection of advertisements subsequently presented to me by the Google omniverse—and only transfer to WordPress in the last few steps. Typically I spend about 8 to 10 hours on an entry, and having carefully proofread it multiple times before hitting “Publish,” I invariably find a half-dozen or so additional typos afterwards. I’ll usually continue to edit and add material for a couple days after “publication,” while the work is still in my head, then move on.

3. Be patient and experiment

And particularly at first: It took some time for me to find the voice where I was most comfortable, which is the 3000 – 5000 word long form—this one finally settled in at about 3100 words, the previous was 4100 words—rather than the 600-900 words typical of an essay or op-ed, to say nothing of the 140/280 characters of a Tweet. [8] My signature “Seven…” format works more often than not, though not always, and I realized after a while it could be a straitjacket. [9] Then there is the early commenter—I get very occasional comments, since by now people have figured out I’m not going to approve most and I’m not particularly interested in most feedback, a few people excepted [4]—who didn’t like how I handled footnotes, but I ignored this and it is now probably the most distinctive aspect of my style.

4. Find a niche

I didn’t have a clear idea of where the blog would go when I started it six years ago beyond the subtitle “Reflections on social science, politics and education.” It’s ended up in that general vicinity, though “Reflections on political methodology, conflict forecasting and politics” is probably more accurate now. I’ve pulled back on the politics over the last year or so since the blogosphere is utterly awash in political rants these days, and the opportunities to provide anything original are limited: For example I recently started and then abandoned an entry on “The New Old Left” which reflected on segments of the Democratic Party returning to classical economic materialist agendas following a generation or more focused on identity but, like, well, duh… [10]  More generally, I’ve got probably half as much in draft that hasn’t gone in as that which has, and some topics start out promising and never complete themselves: you really have to listen to your subject. With a couple exceptions, it’s the technical material that generates the most interest, probably because no one else is saying it.

5. It usually involves a fair amount of effort. But occasionally it doesn’t.

The one entry that essentially wrote itself was the remembrance of Heather Heyer, who was murdered in the white-supremacist violence in Charlottesville on 12 August 2017. The commentary following Will Moore’s suicide was a close second, and in both of these cases I felt I was writing things that needed to be said for a community. “Feral…”, which after five years invariably still gets a couple views a day [11], in contrast gestated over the better part of two years, and its followup, originally intended to be written after one year, waited for three.

Successful writers of fiction often speak of times when their characters—which is to say, their subconscious—take hold of a plot and drive it in unexpected but delightful ways. For the non-fiction writer, I think the equivalent is when you capture a short-term zeitgeist and suddenly find relevant material everywhere you look [18], as well as waking up and dashing off to your desk to sketch out some phrases before you forget them. [12]

6. Yeah, I’m repetitive and I’m technical

Repetitive: see Krugman, P., Friedman, T., Collins, G., Pournelle, J., and Hanh, T. N. Or, OMG, the Sutta Pitaka. And yes, there is a not-so-secret 64-character catch-phrase that is in pretty much every single entry irrespective of topic.[13] As in music, I like to play with motifs, and when things are working well, it’s nice to resolve back to the opening chord.

Using the blog as a technical outlet, notably on issues dealing with event data, has been quite useful, even if that wasn’t in the original plan. Event data, of course, is a comparatively tiny niche—at most a couple hundred people around the world watch it closely—but as I’ve recently been telling myself (and anyone else who will listen), the puzzle with event data is that it never takes off but it also never goes away. And the speed with which the technology has changed over the past ten years in particular makes it monumentally unsuited to the standard outlets of paywalled journals, with their dumbing-down during the review process and massive publication delays. [14] Two entries, “Seven observations on the [then] newly released ICEWS data” and “The legal status of event data,” have essentially become canonical: I’ve seen them cited in formal research papers, and they fairly reliably get at least one or two views a week, and more as one approaches the APSA and ISA conferences or NSF proposal deadlines. [15]

7. The journey must be the reward

Again, I’ve never made a dime off this directly [16], nor do I ever expect to unless somehow enough things accumulate that they could be assembled into a book, and people buy it. [17] But it is an outlet that I enjoy, and I have also become aware, from various comments over the years, that it has made my views known to people, particularly on the technical side in government, whom I wouldn’t otherwise have direct access to: They will mention they read my blog, and a couple times I believe they’ve deliberately done so within earshot of people who probably wish they didn’t. But fundamentally, just as [some] sharks have to keep moving to stay alive and salmon are driven to return upstream, I gotta write—both of my parents were journalists, so maybe as with the salmon it’s genetic?—and you, dear reader, get the opportunity to read some of it.

Footnotes

1. But speaking of paywalled journals, the major European research funders are stomping down big-time!  No embargo period, no “hybrid models”, publish research funded by these folks in paywalled venues and you have to return your grant money. Though if health care is any model, this trend will make it across the Atlantic in a mere fifty to seventy-five years.

2. A heartfelt 897-page Updike-inspired novel centered on the angst of an aging computer programmer in a mid-Atlantic university town obsessed with declining funding opportunities and the unjust vicissitudes of old age, sickness, and death.

Uh, no.

African-Americans, long free in the mid-Atlantic colonies due to a successful slave revolt in 1711-1715 coordinated with Native Americans—hey, how come every fictional re-working of U.S. history has to have the Confederacy winning the Civil War?—working as paid laborers on the ever-financially-struggling Monticello properties with their hapless politician-owner, now attacked by British forces seeking to reimpose Caribbean slavery (as well as being upset over the unpleasantness in Boston and Philadelphia). Plus some possible bits involving dragons, alternative dimensions most people experience only as dark energy, and of course Nordic—friendly and intelligent—trolls.

Or—totally different story—a Catalonian Jesuit herbalist—yeah, yeah, I’m ripping off Edith Pargeter (who started the relevant series at age 64!), but if there is the village mystery genre (Christie, Sayers (sort of…), Robinson) and the noir genre (Hammett, Chandler, Ellroy), there’s the herbalist monk genre—working in the Santa Maria della Scala in the proud if politically defeated and marginalized Siena in the winter of 1575 who encounters a young and impulsive English earl of a literary bent who may or may not be seeking to negotiate the return of England to Catholicism, thus totally, like totally!!! changing the entire course of European history (oops, no, that’s Dan Brown’s schtick…besides, those sorts of machinations were going on constantly during that era. No dragons or trolls in this one.) but then a shot rings out on the Piazza del Campo, some strolling friars pull off their cloaks to reveal themselves as Swiss Guards, and a cardinal lies mortally wounded?

Nah…I’m the place where book projects go to die…

3. Ah, Ray Bradbury: Growing up in fly-over country before it was flown over, writing 1,000 words a day since the age of twelve, imitating various pulp genres until his own literary voice came in his early 20s. A friend persuades him to travel across the country by train to visit NYC, where after numerous meetings with uninterested publishers, an editor notes that his Martian and circus short stories were, in fact, the grist for two publishable books—which I of course later devoured as a teenager—and he returns home to his wife and child in LA with checks covering a year’s food and rent. Then Bradbury, with only a high-school education, receives a note that Christopher Isherwood would like to talk with him, and then Isherwood says they really should talk to his friend Aldous Huxley. And by 1953, John Huston asks him to write a screenplay for Moby Dick, provided he do this while living in the gloom of Ireland.

4. And—beyond edit, edit, edit—about the only one. For example, Bradbury felt that a massive diet of movies in his youth fueled his imagination; Stephen King says if you can’t give up television, you’re not serious about writing. About half of successful writers apparently never show unfinished drafts to anyone; the other half absolutely depend on feedback from a few trusted readers, typically agents and/or partners.

Come to think of it, two other near-universal bits of advice: don’t listen to critics, and, closely related, don’t take writers’ workshops very seriously (even if you are being paid to teach in them).

5. Which I’d read first from Jerry Pournelle, but it seems to be general folklore: Karen Woodward has a nice gloss on this.

6. Or ads for amazing herbal potions for certain male body functions. I actually drafted a [serious] entry for “The Feral Diet” I’d followed with some success for a while but, alas, like all diet regimes, it only worked for weight loss for a while (weight maintenance has been fine): I now ignore my own details and just follow Michael Pollan and Gary Taubes.

7. High point was when we were asked by an NSF program director if it would be okay to share one of our [needless to say, funded] proposals with people who wanted an example of what a good proposal looked like.

8. Twitter is weird, eh? I avoided Twitter for quite some time, then hopped—hey, bird motifs, right?—in for about a year and a half, then hopped out again, using it now only a couple times a week. What is interesting is the number of people who are quite effectively producing short-form essays using 10 to 20 linked tweets, which probably not coincidentally translates to the standard op-ed length of around 700 – 800 words, but the mechanism is awkward, and certainly wouldn’t work for a long-form presentation. If Twitter bites the dust due to an unsustainable financial model—please, please, please, if only for the elimination of one user’s tweets in particular—that might open a niche for that essay form, though said niche might already be WordPress.

While we’re on the topic of alternative media, I’ve got the technology to be doing YouTube—works for Jordan Peterson and, by inference, presumably appeals to lobsters—but I suspect that medium wouldn’t last for me, both because of the technological limitations—WordPress may not be stable, but the underlying text (it’s UTF-8 HTML!) is—and because the video form itself is more conversational and hence more transient. Plus I rarely watch YouTube: I can read a lot faster than most people speak.

9. Same with restricting the length, which I tried for a while, and usually putting constraints around a form improves it. But editing for length is a lot of work, as any op-ed columnist will tell you, and this is an informal endeavor. The “beyond the snark” reference section I employed for a while also didn’t last—in-line links work fine, and the ability to use hyperlinks in a blog is wonderful, one of the defining characteristics of the medium.

10. I’ve got a “Beyond Democracy” file of 25,000 words and probably a couple hundred links reflecting on the emergence of a post-democratic plutocracy and how we might cope with it: several unfinished essays have been stashed in this file. Possibly that could someday jell as a book, but, alas, have I mentioned that I am the place where book projects go to die? Are you tired of this motif yet?

11. The other entry which is consistently on the “Viewed” list on the WordPress dashboard—mind you, I only look at this for the two or three days after I post something to get a sense of whether it is getting circulated—is “History’s seven dumbest self-inflicted political disasters,” whose popularity—this is Schrodt doing his mad and disreputable William McNeill imitation (badly…)—I absolutely cannot figure out: someone is linking it somewhere? Or some bot is just messing with me?

12. Dreaming of a topic for [seemingly] half the night: I hate that. The only thing worse—of course, beyond the standard dreams of being chased through a dank urban or forested landscape by a menacing evil while your legs turn to molasses and you simply can’t run fast enough—is dreaming about programming problems. If your dreams have you obsessing with some bit of writing, get out of bed and write it down: it will usually go away, and usually in the morning your nocturnal insight won’t seem very useful. Except when it is. Same with code.

13. Not this one: that would make it too easy.

14. I recently reviewed a paper—okay, that was my next-to-last review, honest, and a revise-and-resubmit, and really, I’m getting out of the reviewing business, and Reviewer #2 is not me (!!)—which attempted to survey the state of the art in automated event coding, and I’d say got probably two-thirds of the major features wrong. But the unfortunate author had actually done a perfectly competent review of the published literature, the problem being that what’s been published on this topic is the tip of the proverbial iceberg in a rapidly changing field and has a massive lag time. This has long been a problem, but is clearly getting worse.

15. Two others are also fairly useful, if both a bit dated: “Seven conjectures on the state of event data” and [quite old as this field goes] “Seven guidelines for generating data using automated coding”.

16. It’s funny how many people will question why one writes when there is no prospect of financial reward, yet I’ve never heard someone exclaim to a golfer: “What, you play golf for free?? And you even have to pay places to let you play golf? And spend hours and hours doing it? Why, that’s so stupid: Arnold Palmer, Jack Nicklaus, and Tiger Woods made millions playing golf! If you can’t, just stop trying!”

17. As distinct from Beyond Democracy, the fiction, and a still-to-jell work on contemporary Western Buddhism—like the world needs yet another book by a Boomer on Buddhism?—all of which are intended as books. Someday…maybe…but you know…

18. Like the Economist Espresso‘s quote of the day: “A person is a fool to become a writer. His [sic] only compensation is absolute freedom.” Roald Dahl (Charlie and the Chocolate Factory, Matilda, The Fantastic Mr. Fox). Yep.


Happy 60th Birthday, DARPA: you’re doomed

Today marks the mid-point of a massive self-congratulatory 60th anniversary celebration by DARPA [1]. So, DARPA, happy birthday! And many happy returns!! YEA!!!

That’s a joke, right? Why yes, how did you guess?

A 60th anniversary, of course, is a very important landmark, but not in a good way: Chinese folklore says that neither fortune nor misfortune persist for more than three generations,[2] and the 14th century historian and political theorist Ibn Khaldun pegged three generations as the time it took a dynasty to go from triumph to decay. Calculate a human generation as 20 years and, gulp, that makes 60.

Vignette #1:

DARPA, perhaps aware of some of the issues I will be raising here, has embarked on some programs with “simplified” proposal processes (e.g. this https://www.darpa.mil/news-events/2018-07-20a). In DARPA-speak, “simplified” means a 20 to 30 page program description with at least 7 required file templates, the first being an obligatory PowerPoint™ slide. In industry-speak, this is referred to as “seven friggin’ PDF files and WTF a friggin’ required PowerPoint™ slide??—in 2018 who TF uses friggin’ PowerPoint™???” [3]

Vignette #2:

A few months back, I’d been alerted to an interesting DARPA DSO BAA under the aforementioned program, and concocted an approach involving another Charlottesville-based tech outfit (well, their CTO is in CVille: the company is 100% remote on technical work, across a number of countries) with access to vast amounts of relevant data. The CTO and I had lunch on a Friday—during which I learned the company had developed out of an earlier DARPA-funded project—and he was all ready to move ahead with this.

On Monday the project was dead, vetoed by their CFO: they have plenty of work to do already, and it is simply too expensive to work with DARPA as DARPA involves an entirely different set of contracting and collaboration norms than the rest of the industry. Sad.

Arlington, we have a problem.

But before we go any further, I already know what y’all are thinking: “Hey, Schrodt, so things have finally caught up with your obnoxious little feral strategy, eh? Left academia, no longer have access to an Office of Sponsored Research [5][6] so you can’t apply for DARPA funding any more. Nah, nah, nah! LOSER! LOSER!! LOSER!!!”

Well, yeah, elements of that: per vignette #2, there are definitely DARPA [7] programs I’d like to be participating in, but no longer can, or rather, cannot assemble any conceivable rationale for attempting. Having sketched out this diatribe [8], I was on the verge of abandoning it as mere sour grapes when The Economist [1 September 2018] arrived with a cover story based on almost precisely the same complex social systems argument I’d already outlined for DARPA, albeit about Silicon Valley generally. So maybe I’m on to something. Thus we will continue.

As I was reminded at a recent workshop, DARPA was inspired by the scientific/engineering crisis of Sputnik. [9] DARPA’s challenge in the 21st century, however, is that it continues to presuppose the corporate laboratory structures of the Sputnik era, where business requirements and incentives were [almost] completely reversed from what they are today: the days of the technical supremacy of Bell Labs and Xerox PARC are gone, and they aren’t coming back. [10]

As The Economist points out in the context of the demise of Silicon Valley as an attractive geographical destination, Silicon Valley’s very technological advances—many originally funded by DARPA—have sown the seeds of its geographical destruction. DARPA faces bureaucratic rather than geographical challenges, but is essentially in the same situation, at least in the world of artificial intelligence/machine learning/data science (AI/ML/DS), where DARPA appears to be desperately trying to play catch-up.

A few of the insurmountable social/economic changes DARPA is facing:

  • AI/ML/DS innovations can be implemented almost instantly with essentially no capital investment.[11] As The Economist [25 August 2018] notes, in 1975 only 17% of the value of the S&P 500 companies was in intangibles; by 2015 this was 84%.
  • The bifurcation/concentration of the economy, particularly in technical areas: the rate of start-ups has slowed, and those that exist quickly get snatched up by the monsters. Consider for example the evolution of the Borg-like SAIC/Leidos [12], which first gobbled up hundreds of once-independent defense consulting firms, then split in two, and now Leidos has absorbed Lockheed’s information systems business. You will be assimilated!
  • As some recent well-publicized instances have demonstrated, working with DARPA—or the defense/intelligence community more generally—will be actively opposed by some not insignificant fraction of the all-too-mobile employees of the technology behemoths. Good luck changing that.

As I’ve documented in quite an assortment of posts in this blog—I’ve been successfully walking this particular walk for more than five years now—these changes have led to an accelerating rise, particularly in the AI/ML/DS field, of the independent remote contractor—either an individual or a small self-managing team—due to at least five factors:

  • Ubiquity of open source software which has zero monetary cost of entry and provides a standard platform across potential clients.
  • Cloud computing resources which can be purchased and cast aside in a microsecond with no more infrastructure than a credit card.
  • StackOverflow and GitHub putting the answers to almost any technical question a few keystrokes away: the relative advantage of having access to local and/or internal company expertise has diminished markedly.
  • A variety of web-based collaborative environments such as free audio and video conferencing, shared document environments, and collaboration-oriented communication platforms such as Slack, Dropbox and the like.
  • Legitimation of the “gig economy” from both the demand and supply side: freelancers are vastly less expensive to hire and are now viewed as entrepreneurial trailblazers rather than as losers who can’t get real jobs. In fact, because of its autonomy, remote work is now considered highly desirable.

The upshot, as explosion.ai’s (spaCy, prodigy) Ines Montani explains in a recent EuroPython talk, is that small companies are now fully capable of doing what only massive companies could do a decade or so ago. Except, of course, dealing with seven friggin’ PDF files including a required friggin’ PowerPoint™ slide to even bid on a project with some indeterminate chance of being funded following a six to nine month delay. More shit sandwiches? Oh, so sorry, just pass the plate as I’ve already had my share.

As those who follow my blog are aware, I spend my days in a pleasant little office in a converted Victorian three blocks from the Charlottesville, Virginia pedestrian mall [13] in the foothills of the Blue Ridge, uninterrupted except by the occasional teleconference. I have nearly complete control of my tasks and my time, and as an introvert whose work requires a high level of concentration, this is heaven. My indirect costs are around 15%. In the five years I’ve supported myself in this fashion, my agreements with clients typically involve a few conversations, a one or two page SOW, and then we get to work.  

DARPA-compatible alternatives to this sort of remote work, of course, would involve transitioning to some open-office-plan hellhole beset with constant interruptions and “supervision” by clueless middle-managers who spend their days calling meetings and writing corporate mission statements because, well, that’s just what clueless middle managers are paid to do.[14] These work environments are horribly soul-sapping and inefficient—with indirect costs far exceeding mine—except for that rather sizable proportion of the employees who are in fact not adding any value to the enterprise but are enjoying an indefinitely extended adolescent experience where, with any luck at all, they can continue terrorizing the introverts who actually are writing quality code, just as they did in junior high school, which is pretty much what open-office-plan hellholes try to replicate. I digress.

So, I suppose, indeed I am irritated because there are opportunities out there I can’t even compete for without radically downgrading my situation, even though I, and the contemporary independent contractor community more generally, could probably do these tasks at lower cost and higher quality than the corporate behemoths who will invariably end up with all that money, and this despite the fact that a migration to remote teams with lower costs and higher output is precisely what we are seeing in the commercial sector. Says no less than The Economist.

Okay, okay, so the FAANG are leery about even talking to DARPA, and we’ve already established that the existing contractors aren’t giving DARPA what it is looking for [15], but you’ve still got academic computer science to fall back on, right? Right?

Uh, not really.

Once again, any reliance on academia has DARPA doing the time warp again and heading back to the glory days of the Sputnik crisis when, in fact, academic research was probably a pretty good deal. But now:

  • Tuition—which a grant will end up covering, directly or indirectly—at all research universities has soared as the public funding readily available in the 1950s has collapsed.
  • Universities no longer have the newest and shiniest toys: those are in the private sector.
  • The best and brightest students zip through their graduate programs in record time, with FOMO private sector opportunities nipping at their tails. The ones who stick around…well, you decide.
  • The best and brightest professors have far more to gain from their startups and consultancies than from filling out seven friggin’ PDF files including one friggin’ required PowerPoint™ slide. Those with no such prospects, and the people building empires for the sake of empire building and/or aspirations to become deans, associate deans, assistant deans, deanlings or deanlets, yeah, you might get some of those. Congrats. Or something.

And these are impediments before we consider the highly dysfunctional publication incentives which have reduced academic computer science to only a single true challenge, the academic Turing Test—probably passed several years ago but the reality of this still hidden—for who will be the first to write a bot which can successfully generate unlimited publishable AI/ML/DS papers.[16] This and the fact that computer science graduate students tend to be like small birds, spending most of their time flitting around in pursuit of novelties in the form of software packages and frameworks with lifespans comparable to that of a dime-store goldfish. And all graduate students, on entering even the most lowly M.S.-level program, are sworn to a dark oath, enforced with the thoroughness of a Mafia pizza parlor protection racket, to never, ever, under any circumstances, comment, document or test their code. [21]

Academia will not save you. No one will save you.

HARUMPHHH!!! So if you bad-attitude remote contractors are so damn smart and so damn efficient, there’s an obvious market failure/arbitrage opportunity here which will self-correct because, as we all know, markets always work perfectly.

Well, maybe, but I’d suggest this is going to be tough, for at least three reasons.

The first issue, for the remote independent contractors as well as the FAANG, is simply “why bother?”: I’m not seeing a whole lot of press about AI/ML/DS unemployment, and if you can get work with a couple of phone calls and one-page SOW, why deal with seven friggin’ PDF forms and a friggin’ required PowerPoint™ slide?

Then there’s the unpleasant fact that anyone attempting to arbitrage the inefficiencies here is wandering into the arena with the likes of SAIC, Lockheed and BBN, massive established players more than happy to destroy you, and they consider seven friggin’ PDFs and all other barriers to entry a feature rather than a bug, as well as deploying legions of Armani-shod lobbyists to make damn sure things stay that way. But mostly, they’ll come after any threats to their lucrative niche faster than a rattlesnake chasing a pocket gopher. I suppose it could be done, but it is not for the faint of heart. Or bank account.

The final issue is that because DARPA [famously, and probably apocryphally] expects its projects to fail 80% of the time, there’s a frog-in-boiling-water aspect where DARPA won’t notice—until it is too late—structural problems which now cause projects to fail where they would have succeeded in the absence of those new conditions. Well, until the Chinese get there first.[17]

There is, in the end, a [delightful?] irony here: one of the four foci within the DARPA Defense Sciences Office, those folks whose idea of “simple” is a 30 page BAA and seven friggin’ PDF files starting with a friggin’ obligatory PowerPoint™ slide, is called “complex social systems,” which in most definitions would include self-modifying systems.[18] And a second of those foci deals with “anticipating surprise.”

Well, buckeroos, you’ve got both of these phenomena going on right there in the sweet little River City of Arlington, VA: a complex self-modifying system that’s dropped a big surprise, and in all likelihood there’s nothing you can do about it.

Okay, maybe a tad too dramatic: at the most basic level, all that is going on here is a case of the well-understood phenomenon of disruptive innovation (please note my clever use of a link to that leftist-hippy-commy rag, the Harvard Business Review), where new technologies enable the development of an alternative to the established/entrenched order that, in its initial stages, is typically not in fact “better” than the prevailing technology, but attains a foothold by being faster, cheaper and/or easier to use, thus appealing to those who don’t actually need “the best.”

Project proposals provided by remote independent contractors with 15% IDC will—assuming they even try—be inferior to those of the entrenched contractors with 60% IDC, since in addition to employing legions of Armani-shod lobbyists they also employ platoons of PowerPoint™ artistes, echelons of document editors, managers overseeing countless layers of internal reviews, and probably the occasional partridge in a pear tree.[19] You want a great proposal? Wow, can these folks ever produce a great proposal!

They just can’t deliver on the final product [20] for the reasons noted above. Leaving us in this situation:

[Images: “What they propose” vs. “What they deliver”]

In contrast, the coin of the realm for the independent shops is their open code on GitHub: even if the contracted work will be proprietary, you’ve got to have code out where people can look at it, and that’s why contemporary companies are comfortable hiring people they’ve never met in person—and may never meet—and who will be working eight time zones away: it’s the difference between hiring someone to remodel your kitchen based on the number of glossy architectural magazines they bring to a meeting versus hiring them based on other kitchens they’ve remodeled. All of which is to say that in the contemporary AI/ML/DS environment, assembling an effective team is much more Ocean’s 11 or Rogue One than The Office.

So on a marginally optimistic note, I’ll modify my title slightly: DARPA, until you find a structure that rewards people for writing solid code, not PowerPoint™ slides, you’re doomed.

Happy 60th.

Footnotes

1. If you don’t know what DARPA is, stop right here as the cultural nuances permeating the remainder of this diatribe will make absolutely no sense. I’m also obnoxiously refraining from defining acronyms such as DSO, BAA, SOW, PM, FAR, FOMO, FAANG, CFO, CTO, ACLED, ICEWS, F1, AUC, MAE, and IARPA because refraining from defining acronyms is like so totally intrinsic to this world.

2. This is apparently an actual Chinese proverb, though it is typically rendered as “Wealth does not pass three generations,” along with many variants on the same theme. There’s a nice exposition, including an appropriate reference to Ibn Khaldun, to be found, of all places, on this martial arts site.

3. A couple weeks ago the social media in CVille—ya gotta love this place—got into an extended tiff over whether the use of the word “fuck”—or more generally “FUCK!” or “FUCK!!” or “THAT’S TOTALLY FUCKING FUCKED, YOU FUCKING FUCKWIT!!!”—was or was not a form of cultural appropriation. Of course, it’s not entirely clear what “culture” is being appropriated, and thus offended, as the word has been in common use for centuries, but presumably something local as the latter phrase is pretty much representative of contemporary public discourse, such as it is, in our fair city.[4] Okay, not quite. Fuckwit.  So to avoid offense—not from the repetitive use of an obscenity, but the possibility of that indeterminate variant on cultural appropriation—I will continue to refer to the “seven friggin’ PDFs and one friggin’ required PowerPoint™ slide.”

4. Browsing the Politics and Prose bookstore in DC last weekend—this at the new DC Wharf, where the hoi polloi can gaze upon the yachts of the lobbyists for DARPA contractors—I noticed that if you would like to write a book, but really don’t have anything to say, adding FUCK to the title is a popular contemporary marketing approach. Unfortunately, these tomes—mind you, they are typically exceedingly short, so perhaps technically they are not really “tomes”—will probably all be pulped or repurposed as mulch, but should a few escape we can foresee archeologists of the future—probably sentient cockroaches—using telepathic powers to record in [cockroach-DARPA-funded] holographic crystals “Today our excavation reached the well-documented FUCK-titled-tome layer, reliably dated to 2015-2020 CE.” Though they will more likely have to be content with the “Keurig capsule layer”, which far less precisely locates accompanying artifacts only to 1995-2025 CE.

5. Or as Mike Ward eloquently puts it: OSP == Office for the Suppression of Research.

6. As Schrodt puts it, the OSP mascot is the tapeworm.

7. And IARPA: same set of issues, less money.

8. Though inspired in part after listening to some folks at the 2018 summer Society for Political Methodology meetings—unlike the three-quarters of political science Ph.D.s who will not find tenure track positions, political methodologists are eminently employable, albeit not necessarily in academia—literally laughing out loud—and this conference being in Provo, Utah, laughing out loud while stone-cold sober—about Dept of Defense attempts to recruit high-quality collaborators in the AI/ML/DS fields.

9. In this presentation, we were told “I’m sure no one here remembers Sputnik.” Dude, I not only remember Sputnik—vividly—I can even remember when the typical Republican thought the Russians were an insidious and unscrupulous enemy!

10. From the recent obituary of game theorist Martin Shubik:

After earning his doctorate at Princeton, he worked as a consultant for General Electric and for IBM, whose thinking about research scientists he later described to The New York Times: “Well, these are like giant pandas in a zoo. You don’t really quite know what a giant panda is, but you sure as hell know (1) you paid a lot of money for it, and (2) other people want it; therefore it is valuable and therefore it’s got to be well fed.”

11. Capital intensity is a key caveat here: as the price of the shiniest new toys increases, so does the competitiveness of DARPA compared to the commercial sector. So, for example, in areas such as quantum computing, nanotechnologies and most work on sensors, DARPA will do just fine. AI/ML/DS: not so much. So despite my dramatic title—hey, it’s a blog!—DARPA is probably not doomed in endeavors involving bashing metals or molecules. 

12. I wasn’t really sure how to find that Vanity Fair article on SAIC—which got quite the attention when it first came out more than a decade ago—but it popped right up when I entered the search term “SAIC is evil”. Also see this.

The sordid history of the likes of SAIC and Lockheed raises the topic/straw-man of whether DARPA PMs, in comparison to private sector research managers who can contract with a few phone calls and a short SOW, “must” be hemmed in by mountains of FARs and bureaucracy lest they be irresponsible with funds from the public purse. Yet these same managers routinely are expected—all but required thanks to the legions of Armani-shod lobbyists—to dole out billions to outfits like SAIC and Lockheed which have long—like really, really long—rap sheets on totally wasting public moneys. Sad.

13. Six coffee shops and counting.

14. Okay, your typical tech middle manager is also paid to knock back vodka shots in strip clubs while exchanging pointers on how to evade HR’s efforts to reduce sexual harassment, a phenomenon I have explored in greater detail here.

15. See Sharon Weinberger’s superbly researched history of DARPA, Imagineers of War, for further discussion, particularly her analysis of DARPA’s seemingly terminal intellectual and technical drift in the post-Cold-War period.

16. Academic computer science has basically run itself into a publications cul-de-sac—mind you, possibly quite deliberately, as said cul-de-sac guarantees their faculty can spend virtually all of their time working on their start-ups and consultancies—where publication has become defined solely by the ability to get some marginal increase in standardized metrics on standardized data sets.

Vignette: I’m generally avoiding reviewing journal articles now—I have only limited access to paywalled journals, and in any case don’t want to encourage that sort of thing—but a few weeks ago finally agreed to do so (for a proprietary journal I’d never heard of) after being incessantly harangued by an editor, presumably because I was one of about five people in the world who had worked in the past with all of the technologies used in the paper, and I decided to reward the effort that must have been involved to establish this connection. The domain, of course, was forecasting political conflict, and the authors had assembled a conflict time series from the usual suspects—ACLED, Cline Center, or ICEWS—and applied four different ML methods, which produced modestly decent results—as computer scientists, they felt no obligation, of course, to look at the existing literature which extends back a mere four or five decades—with a bit of variation in the usual metrics, probably F1, AUC and MAE. There was a serious discussion of these differences, discussions of the relative level of resources required for each estimator, blah blah blah. So far, a typical ML paper.

Until I got to a graphical display of the results. The conflict time series, of course, was a complex saw-toothed sequence. Every single one of the ML “models”: a constant! THE [fuckwit] IDIOTS HAD SUBMITTED A PAPER WHERE THE PREDICTED TIME SERIES HAD ZERO VARIANCE! And those various estimators didn’t even converge to the mean, hence the differences in the measures of fit!
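
Just to make the arithmetic concrete (a minimal sketch with made-up numbers, nothing from the paper under review), here is how two zero-variance “forecasts” can still produce different error metrics:

```python
import numpy as np

# Hypothetical saw-toothed conflict series (made-up numbers, not the reviewed paper's data)
y = np.array([3.0, 9.0, 2.0, 11.0, 4.0, 14.0, 1.0, 10.0, 5.0, 12.0])

# Two "models" that each predict a single constant, i.e. a zero-variance forecast
for label, const in [("model A", 4.0), ("model B", 8.0)]:
    pred = np.full_like(y, const)      # flat line across the whole series
    mae = np.mean(np.abs(y - pred))    # mean absolute error
    print(f"{label}: constant = {const}, MAE = {mae:.2f}")

# The MAEs differ (4.30 vs 4.10) even though neither "model" predicts any of
# the variation in the series: different constants, different "fit."
```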

I politely told the editor, in all sincerity, that this was the stupidest thing I had ever read in my life, and in political science it would have never gone out for review. The somewhat apologetic response allowed that it might not be the finest contribution from the field, as the journal was new (and, I’m sure, expensive: gotta make the percentage of library budgets that go to serials asymptotic to 100%!) and was being submitted for a special issue. Right.

After completing the review, I tracked down the piece (I follow the political science norm of respecting double-blind review processes): it was from one of the top computer science shops in the country, and the final “author” (who I presume had never even glanced at the piece) was the director of a research institute with vast levels of government funding. Such is the degeneracy of contemporary academic computer science. I’m hardly the only person to notice this issue: see this from Science.

17. This, of course, being the dominant issue in political-economy for the first half of the 21st century: the Chinese have created a highly economically successful competitor to liberal market polities, and we have also seen a convergence in market concentration in the new companies dominating the heights of markets in both systems. However, we’ve got 200 years of theorizing—once called “conservative” (and before that “liberal”) in the era before “conservative” became equated with following the constantly changing whims of a deranged maniac—arguing that decentralized economic political-economic systems should provide long-term advantages over authoritarian systems. But that sure the heck isn’t clear at the moment.

18. Hegel, of course, had similar ideas 200 years ago, but wasn’t very good with PowerPoint™: sad.

19. Do these elaborate proposal preparation shops figure into the high indirect costs of the established contractors? Nah…of course not, because we know proposals are done by legions of proposal fairies who subsist purely on dewdrops and sunlight, costing nothing. Or if they did, those costs would be reimbursed by the legions of proposal leprechauns and their endless supplies of gold. None of this ever figures into indirect cost rates, right?

20. As distinct from providing 200+ slide PowerPoint™ decks for day-long monthly program reviews: they’ll be great on that as well!

21. Turns out astrophysics has a wonderful name for the undocumented code people write figuring no one will ever look at it only to find it’s still in use twenty years later: “dinosource.”


What if a few grad programs were run for the benefit of the graduate students?

I’ve got a note in my calendar around the beginning of August—I was presumably in a really bad mood at [at least] some point over the past year—to retweet a link to my blog post discussing my fondness for math camps—not!—but in the lazy-hazy-crazy days of summer, I’m realizing this would be rather like sending Donald Trump to meet with the leaders of U.S. allies: gratuitously cruel and largely meaningless. Instead, and more productively, an article in Science brought to my attention a recent report [1] by the U.S. National Academies of Sciences, Engineering, and Medicine (NASEM)—these people, please note, are a really big deal. The title of the article—”Student-centered, modernized graduate STEM education”—provides the gist, but here’s a bit more detail from the summary of the report provided in the Science article:

[the report] lays out a vision of an ideal modern graduate education in any STEM field and a comprehensive plan to achieve that vision. The report emphasizes core competencies that all students should acquire, a rebalancing of incentives to better reward faculty teaching and mentoring of students, increased empowerment of graduate students, and the need for the system to better monitor and adapt to changing conditions over time.  … [in most institutions] graduate students are still too often seen as being primarily sources of inexpensive skilled labor for teaching undergraduates and for performing research. …  [and while] most students now pursue nonacademic careers, many institutions train them, basically, in the same way that they have for 100 years, to become academic researchers

Wow: reconfigure graduate programs not only for the 21st century but to benefit the students rather than the institutions. What…a…concept!

At this point my readership now splits: those who have never been graduate students (a fairly small minority, I’m guessing) saying “What?!? Do you mean graduate programs aren’t run for the benefit of their students???” while everyone who has done time in graduate school is rolling their eyes and cynically saying “Yeah, right…” With the remainder rolling on the ground in uncontrollable hysterical laughter.[2]

But purely for the sake of argument, and because these are the lazy-hazy-crazy days of summer, and PolMeth is this week and I got my [application-focused!] paper finished on Friday (!!), let’s just play this out for a bit, at least as it applies to political methodology, the NASEM report being focused on STEM, and political methodology is most decidedly STEM. And in particular, given the continued abysmal—and worsening [3]—record for placement into tenure-track jobs in political science, let’s speculate for a bit what a teaching-centered graduate-level program for methodologists, a.k.a. data scientists, intending to work outside of academia might look like. For once, I will return to my old framework of seven primary points:

1. It will basically look like a political methodology program

I wrote extensively on this topic about a year ago, taking as my starting point that experience in analyzing the heterogeneous and thoroughly sucky sorts of data quantitative political scientists routinely confront is absolutely ideal training for private sector “data science.” The only new observation I’d add, having sat through demonstrations of several absolutely horrible data “dashboards” in recent months, is formal training in UX—user interface/experience—in addition to the data visualization component. So while allowing some specialization, we’d basically want a program evenly split between the four skill domains of a data scientist:

  • computer programming and data wrangling
  • statistics
  • machine learning
  • data visualization and UX

2. Sophisticated problem-based approaches taught by instructors fully committed to teaching

One of the reasons I decided to leave academia was my increasing exposure to really good teaching methodologies combined with a realization that I had neither the time, energy, nor inclination to use these. “Sage on the stage” doesn’t cut it anymore, particularly in STEM.

Indeed, I’m too decrepit to do this sort of thing—leave me alone and just let me code (and, well, blog: I see from WordPress this is published blog post #50!)—but there are plenty of people who can enthusiastically do it and do it very well. The problem, as the NASEM report notes in some detail, is that in most graduate programs there are few if any rewards for doing so. But that’s an institutional issue, not an issue of the total lack of humans capable of doing the task, nor the absence of a reasonably decent body of research and best-practices—if periodically susceptible, like most everything social, to fads—on how to do it.

3. Real world problems solved using remote teaming

Toy problems and standardized data sets are fine for [some] instruction and [some] incremental journal publications, but if you want training applicable to the private sector, you need to be working with raw data that is [mostly] complete crap, digital offal requiring hours of tedious prep work before you can start applying glitzy new methods to it. Because that, buckeroos, is what data science in the private sector involves itself with, and that’s what pays the bills. Complete crap is, however, fairly difficult to simulate, so much better to find some real problems where you’ve got access to the raw data: associations with companies—the sorts of arrangements that are routine in engineering programs—will presumably help here, and as I’ve noted before, “data science” is really a form of engineering, not science. 

My relatively new suggestion is for these programs to establish links so that problem-solving can be done in teams working remotely. Attractive as the graduate student bullpen experience may be, it isn’t available once you leave a graduate program, and increasingly, it will not be duplicated in many of the best jobs that are available, as these are now done using temporary geographically decentralized teams. So get students accustomed to working with individuals they’ve never met in person, who are a thousand or eight thousand or twelve thousand miles away and have funny accents, and the video conferencing doesn’t always work, but who nonetheless can be really effective partners. In the absence of some dramatic change in the economics and culture of data science, the future is going to look like the “fully-distributed team” approach of parse.ly, not the corporate headquarters gigantism of FAANG.

4. One or two courses on basic business skills

I’ve written a number of blog entries on the basics of self-employment—see here and here and here—and for more information, read everything Paul Graham has ever written, and more prosaically, my neighbor and tech recruiter Ron Duplain always has a lot of smart stuff to say, but I’ll briefly reiterate a couple of core points here.

[Update 31 July: Also see the very useful EuroPython presentation from Ines Montani of explosion.ai, the great folks that brought you spaCy and prodigy. [9]]

Outside of MBA programs—which of course go to the opposite extreme—academic programs tend to treat anything related to business—beyond, of course, reconfiguring their curricula to satisfy the funding agendas of right-wing billionaires—as suspect at best and more generally utterly worthy of contempt. Practical knowledge of business methods also varies widely within academia: while the stereotype of the academic coddled by a dissertation-to-retirement bureaucracy handling their every need is undoubtedly true as the median case, I’ve known more than a few academics who are, effectively, running companies—they generally call them “labs”—of sometimes quite significant size.

You can pick up relevant business training—well, sort of—from selectively reading books and magazine articles but, as with computer programming, I suspect there are advantages to doing this systematically [and some of my friends who are accountants would definitely prefer if more people learned business methods more systematically]. And my pet peeve, of course, is getting people away from the expectations of the pervasive “start-up porn”: if you are reasonably sane, your objective should be not to create a “unicorn” but rather a stable and sustainable business (or set of business relationships) where you are compensated at roughly the level of your marginal economic contribution to the enterprise.[4]

That said, the business angle in data analytics is at present a rapidly moving target as the transition to the predominance of remote work—or if you prefer, the “gig economy”—plays out. In the past couple of weeks, there were articles on this transition in both The Economist’s “The World If…” feature and Science magazine’s “Science Careers” [6 July 2018][5]. But as The Economist makes clear, we’re not there yet, and things could play out in a number of different ways.[6] Still, it is likely that most people in the software development and data analytics fields should probably at least plan for the contingency that they will not be spending their careers as coddled corporate drones and instead will find themselves in one of those “you only eat what you—or you and your ten-person foraging party of equals—kill” environments. Where some of us thrive. Grrrrrrrr. There are probably some great market niches for programs that can figure out what needs to be covered here and how to effectively teach it.

5. Publication only in open-access, contemporaneous venues

Not paywalled journals. Particularly not paywalled journals with three to five year publication lags. As I note in one of several snarky asides in my PolMeth XXXV paper:

Paywalled journals are virtually inaccessible outside universities so by publishing in these venues you might as well be burying your intellectual efforts beneath a glowing pile of nuclear waste somewhere in Antarctica. [italics in original]

Ideally, if a few of these student-centered programs get going, some university-sponsored open access servers could be established to get around the current proliferation of bogus open access sites: this is certainly going to happen sooner or later, so let’s try “sooner.” Bonus points: such papers can only be written using materials available from open access sources, since the minute you lose your university computer account, that’s the world you will live in.

It goes without saying that students in these programs should establish a track record of both individual and collective code on GitHub; GitHub (and StackOverflow) have already solved the open access collective action problem in the software domain.[7]

6. Yes, you can still use these students as GTAs and GRAs provided you compensate them fairly

Okay, I was in academia long enough to understand the basic business model of generating large amounts of tuition credit hours—typically about half—in massive introductory classes staffed largely by graduate students. I was also in academia long enough to know that graduate training is not required for students to be able to competently handle that material: you just need smart people (the material, remember, is introductory) and, ideally, some training and supervision/feedback on teaching methods. To the extent that student-centered graduate programs have at least some faculty strongly committed to teaching rather than to increasing the revenues of predatory publishers, you may find MA-level students are actually better GTAs than research-oriented PhD students.

As far as providing GRAs, my guess is that generating basic research—open access, please—out of such programs will also occur naturally, and again, because the programs have a focus on applications, these students may prove better (or at least, less distracted) than those focused on the desperate—and in political science, for three-quarters, inevitably futile—quest for a tenure-track position. You might even be able to get them to document their code!

In either role, however, please provide those students with full tuition, a living wage and decent benefits, eh? The first law of parasitism being, of course, “don’t kill the host.” If that doesn’t scare you, perhaps the law of karma will.

7. Open, transparent, unambiguous, and externally audited outcomes assessments

Face it, consumers have more reliable information on the contents of a $1.48 can of cat food than they have on the outcomes of $100,000 business and law school programs, and the information on professional programs is usually far better than the information on almost all graduate programs in the social sciences. In a student-centered program, that has to change, lest we find, well, programs oriented towards training for jobs that only a quarter of their graduates have any chance of getting.

In addition to figuring out standards and establishing record-keeping norms, making such information available is going to require quite the sea change in attitudes, and thus far deans, associate deans, assistant deans, deanlets, and deanlings have successfully resisted open accountability by using their cartel powers.[8] In an ideal world, however, one would think that market mechanisms would favor a set of programs with transparent and reliable accountability.

Well, a guy can dream, eh?

See y’all—well, some subset of y’all—in Provo.

Footnotes

1. Paywalled, of course. Because elite not-for-profit organizations sustained almost entirely by a combination of tax monies and grants from sources who are themselves tax-exempt couldn’t possibly be expected to make their work accessible, particularly since the marginal cost of doing so is roughly zero.

2. What’s that old joke from the experimental sciences?: if you’re embarking on some procedure with potentially painful consequences, better to use graduate students rather than laboratory rats because people are less likely to be emotionally attached to graduate students.

3. The record for tenure track placement has gotten even worse, down to 26.3%, which the APSA notes “is the lowest reported figure since systematic observation began in the 2009-2010 academic year.” 

4. Or if you want to try for the unicorn startup—which is to say, you are a white male from one of a half-dozen elite universities—you at least understand what you are getting into, along with the probabilities of success—which make the odds of a tenure-track job in political science look golden in comparison—and the actual consequences, in particular the tax consequences, of failure. If you are not a white male from one of a half-dozen elite universities, don’t even think about it.

5. Science would do well to hire a few remote workers to get their web page functioning again, as I’m finding it all but inoperable at the moment. Science is probably spending a bit too much of their efforts breathlessly documenting a project which, using a mere 1000 co-authors, has detected a single 4-billion-year-old neutrino.

6. And for what it’s worth, this is a place where Brett Kavanaugh could be writing a lot of important opinions. Like maybe decisions which result in throwing out the vast cruft of gratuitous licensing requirements that have accumulated—disproportionately in GOP-controlled states—solely for the benefit of generally bogus occupational schools.

7. And recently received a mere $7.5-billion from Microsoft for their troubles: damn hippies and open source, never’ll amount to anything!

8. Though speaking of cartels—and graduate higher education puts OPEC, though not the American Medical Association, to shame on this dimension—the whole point of a cartel is to restrict supply. So a properly functioning cartel should not find itself in a position of over-producing by a factor of three (2015-2016 APSA placements) or four (2016-2017 placements). Oh…principal-agent problems…yeah, that…never mind…

9. Watch the presentation, but for a quick summary, her main point is that the increasingly popular notion that a successful company has to be large, loss-making, and massively funded is bullshit: if you actually know what you are doing, and are producing something people want to buy, you can be self-financing and profitable pretty much from the get-go. “Winner-take-all” markets are only a small part of the available opportunities—though you wouldn’t know that from the emphasis on network effects and FOMO in start-up porn, now amplified by the suckers [10] who pursue the opportunities in data science tournaments rather than the discipline of real markets—and there are plenty of possibilities out there for small, complementary teams who create well-designed, right-sized software for markets they understand. Thanks to Andy Halterman for the pointer.

10. Okay, “suckers” is probably too strong a word: more likely these are mostly people—okay, bros—who already have the luxury of an elite background and an ample safety net provided by daddy’s and mommy’s upper 1% income and social networks so they can afford to blow off a couple years doing tournaments just for the experience. But compare, e.g., to Steve Wozniak and Steve Jobs—and to a large extent, even with their top-1% backgrounds, Bill Gates and Paul Allen—who created things people actually wanted to buy, not just burning through billions to manipulate markets (Uber, and increasingly it appears, Tesla).

Posted in Higher Education, Methodology

Witnessing a paradigm shift?

The philosopher of science Thomas Kuhn is famous—beyond an apparent penchant for throwing ashtrays [1]—for his vastly over-generalized concept of “paradigm shifts” in scientific understanding, where a set of ideas once thought unreasonable becomes the norm, exchanging this status with ideas on the same topic once almost universally accepted. [2] This typically involves a generational change—Max Planck famously observed that scientific progress occurs one funeral at a time—but can sometimes occur more quickly. And I think I’m watching one develop in the field of predictive models of conflict behavior.

The context here [3] was a recent workshop I attended in Europe on that topic. The details don’t matter but suffice it to say this involved an even mix of the usual suspects in quantitative conflict data and modeling—I’m realizing there are perhaps fifty of us in the world—and an assortment of NGOs and IGOs, mostly consumers of the information. [4]  Held amid the monumental-brutalist architecture housing the pan-European bureaucracy, presumably the model for the imperial capital in The Hunger Games, leading one to sympathize, at least to a degree, with European populist movements. And by the way, in two days of discussions no one mentioned Donald Orange-mop even once: we’re past that.

The promised paradigm change is on the issue of whether technical models for forecasting conflict are even possible—and as I’ve argued vociferously in the past, academic political science completely missed the boat on this—and it looks as though we’ve suddenly gone from “that’s impossible!” to “okay, where’s the model, and how can we improve it?” This new assessment being entirely due to the popularization over the past year of machine learning. The change, even taking into account that the Political Instability Task Force has been doing just this sort of thing, and doing it well, for at least fifteen years, has been stunningly rapid.

Not, of course, without more than a few bumps along the way. Per the persistent hype around “deep learning,” there’s a strong assumption that “artificial intelligence” is now best done with neural networks—and the more complex the better—whereas there’s consistent evidence both from this workshop and a number of earlier efforts I’m familiar with that because of the heterogeneity of the cases and the tiny number of positives, random forests are substantially better. There’s also an assumption that you can’t figure out which variables are important in a machine learning model: again, wrong, as this is routine in random forests and can be done to a degree even in neural nets, though it’s rather computationally intensive. One presenter—who had clearly consumed a bit too much of the Tensorflow Kool-Aid—noted these systems “learn on their own”: alas, that’s not true for this problem [6] and in fact we need lots of training cases, and in conflict forecasting models the aforementioned heterogeneity and rare positives still hugely complicate estimation.
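To make that concrete, here is roughly what I mean, as a minimal scikit-learn sketch rather than anyone's actual workshop code; the data file and column names are hypothetical, but note both the class weighting needed for those rare positives and the variable importances that supposedly cannot be extracted from a machine learning model:

    # Minimal sketch, not anyone's actual workshop code: a random forest on a
    # rare-positive conflict data set, plus the variable importances that
    # supposedly can't be extracted from machine learning models.
    # "country_month_panel.csv" and its columns are hypothetical.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("country_month_panel.csv")   # numeric covariates plus a 0/1 "onset" column
    X = df.drop(columns=["onset"])
    y = df["onset"]                               # typically only a few percent positives
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

    rf = RandomForestClassifier(n_estimators=500,
                                class_weight="balanced_subsample",   # compensate for rare positives
                                random_state=42).fit(X_tr, y_tr)

    print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
    print(pd.Series(rf.feature_importances_, index=X.columns)
            .sort_values(ascending=False).head(10))               # the "impossible" variable importances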

So these models are not easy, but they are now considered possible, and there is an actual emerging paradigm: In the course of an hour I saw presentations by a PhD student in a joint program at Universities of Stockholm and Iceland developing a resource-focused conflict forecasting model and a data scientist from the World Bank and FAO working on famine forecasting [7] both implementing essentially the same very complex protocols for training, calibration, and cross-validation of various machine learning models. [8][15]
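I won't reproduce anyone's pipeline, but the generic shape of those protocols (fit the model, calibrate the predicted probabilities, score everything strictly out-of-sample) looks roughly like the following sketch, reusing the hypothetical X and y from above:

    # Generic train / calibrate / cross-validate shape for a rare-positive
    # forecasting model; a sketch of the common protocol, not anyone's pipeline.
    # Reuses the hypothetical X, y from the previous sketch.
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    base = RandomForestClassifier(n_estimators=500, class_weight="balanced_subsample")
    model = CalibratedClassifierCV(base, method="isotonic", cv=3)   # calibrated probabilities

    folds = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    auc = cross_val_score(model, X, y, cv=folds, scoring="roc_auc")
    brier = -cross_val_score(model, X, y, cv=folds, scoring="neg_brier_score")
    print(f"AUC {auc.mean():.3f} +/- {auc.std():.3f}; Brier {brier.mean():.3f}")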

Well, we live in interesting times.

There’s a fairly standard rule-of-thumb in economic history stating it takes between one and two human generations—20 to 40 years—to effectively incorporate a major new technology into the production structure of organizations. The—yes, paradigmatic—cases are the steam engine, electricity, and computers. [9] I’ve sensed for quite some time that we’re in this situation, perhaps half-way through the process, with respect to technical forecasting models and foreign policy decision-making. [10] As Tetlock and others have copiously demonstrated, the accuracy of human assessments in this field is very low, and as Kahneman and others have copiously demonstrated, decision-making on high-risk, low-probability issues is subject to systematic biases. Until quite recently, however, data [11] and computational constraints meant there were no better alternatives. But there are now, so the issue is how to properly use this information. 

And not every new technology takes a generation before it is adopted: to take some examples most readers will be familiar with, word-processing, MP3 music files, flat-screen displays, and cell phones displaced their earlier rivals almost in a historical eye-blink, albeit except for word processing this was largely in a personal rather than organizational context. In the long-ago research phase of ICEWS—a full ten years ago now, wow…—I had a clever slide (well, I thought it was clever) showing a robot saying “We bomb Mindanao in six hours” and a medal-bedecked general responding “Yes, master” to illustrate what technical forecasting models are not designed to do. But with accuracy 20% to 30% better than human forecasts, one would think these approaches should have some impact on the process. It is going to take time and effort to figure out how, particularly since human egos and status are involved, and the models will make mistakes. And present a new set of challenges, just as electrical power presents a different set of risks and opportunities than the steam and water power it replaced. But their eventual incorporation into policy-making seems inevitable.

Finally, this might have implications for the future demand for event data, as models customized for very specific organizational needs finally provide a “killer app” using event data as a critical input. As it happens, no one has yet come up with something that does the job of event data—recording day-to-day interactions of political actors as reported in the open press—without simply looking pretty much like plain old event data: Both the CAMEO and PLOVER [12] event coding systems still have the basic structure of the 60-year-old WEIS, because WEIS incorporates most things in the news of interest to analysts (and their quantitative models). While the forecasting models I’m currently seeing primarily use annual (and state-level) structural data, as soon as one drops to the sub-annual level (and, increasingly, sub-state, as geocoding of event data improves) event data are really the only game in town. [13]
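For anyone who has never actually looked at an event record, the structure in question really is that minimal: a date, a source actor, a target actor, and an event type, plus increasingly a geolocation. A made-up illustration; the codes follow the CAMEO style but this particular record is invented:

    # A single event record, invented for illustration; actor and event codes
    # follow the CAMEO style but do not come from any actual data set.
    event = {
        "date":   "2018-06-14",
        "source": "ISRMIL",        # Israeli military
        "target": "PSEREB",        # Palestinian rebel group
        "event":  "190",           # CAMEO 19x: use of conventional military force
        "geo":    (31.50, 34.47),  # latitude/longitude, where geocoding is available
        "url":    "https://example.com/story",   # link back to the source text
    }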

Footnotes

1. Recently back in the news…well, sort of…thanks to a thoroughly unflattering book by documentary film-maker Errol Morris, whose encounters with Kuhn when Morris was a graduate student left a traumatic impression of Kuhn being a horse’s ass of truly mythic proportions, though some have suggested parts of the book may themselves border on mythic…nonetheless, be civil to your grad students lest they become award winning film makers and/or MacArthur Award recipients long after you and any of your friends are around to defend your reputation. Well, and perhaps because being nice to your grad students is simply the right thing to do.

2. And thus the hitherto obscure word “paradigm” entered popular parlance: a number of years ago, at the height of the dot-com bubble, social philosopher David Barry proposed simply making up a company name, posting this on the web, and seeing how much money would pour in. The name he proposed was “Gerbildigm”, combining “gerbil” and “paradigm.” Mind you, that’s scarcely different than what actual companies were doing in the late 1990s to generate funding. Nowadays, in contrast, they simply say they are exploring applications of deep learning.

3. And by the way, this isn’t the snark-fest promised in the previous blog entry; that’s still to come, though events are so completely depressing at the moment—okay, “Christian” conservatives, you won the right not to bake damn wedding cakes, but at the price of suborning tearing infants out of the arms of their mothers: you really think that tradeoff is a good deal? Will your god? You’ve got an exemption from Matthew 25:35-40 now, eh? You’re completely confident about this? You sure?—I’m having difficulty gearing up for a snark-fest even though it is half-written. Though stuff I have half-written would fill a not-inconsequentially sized bookshelf.

4. It is also notable that the gender ratio at this very technical workshop was basically 50/50, and this included the individuals developing the data and models, not just the consumers. In the U.S., that ratio would have been 80/20 or even 90/10. So is the USA, by any chance, excluding some very talented potential contributors to this field? [5] And is this related to the work of Jayhawk economist Donna Ginther, highlighted on multiple occasions by The Economist over the past few months, that in the academic discipline of economics, gender discrimination appears to be considered a feature rather than a bug? Which cascaded over into the academic field of political methodology, though thanks to the efforts of people like Janet Box-Steffensmeier, Sara Mitchell, Caroline Tolbert, and institutions like VIM, the situation is not as bad as it once was. But compared to my experiences in Europe, it could still improve.

5. I recently stumbled onto historian Marie Hicks’s study titled Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing.  Brogrammers take note: gender discrimination doesn’t necessarily have a happy ending.

6. Self-learning is, famously, possible for games like poker, chess and go, which have the further advantage that the average person can understand the application, thus providing ample fodder for breathless headlines, further leading to fears that our new Go-and-Texas-Hold’em neural network overlords will, like Daleks and Cylons, shortly lethally threaten us, even if they still can’t manage to control machines sufficiently well to align the doors to shut properly on a certain not-so-mass-produced electric vehicle produced by a company owned by one of the more notable alarmists concerned about the dangers of machine intelligence. Plus there’s the little issue of control of the power cord. I digress.

7.  Amusingly, for the World Bank work, the analyst then has to run comparable regression models because that’s apparently the only thing the economists there understand. At the moment.

8. Nor was this the standard protocol for producing a regression model which, gentle reader, I would remind you has the following steps (as Adam Smith pointed out in 1776, for maximal efficiency, assemble a large team of co-authors with specialists doing each task!):

  1. Develop some novel but vaguely plausible “theory”
  2. Assemble a set of 25 or so variables from easily available data sets
  3. Run transformations and subsets of these, ideally using automated scripts to save thought and labor, until one or more combinations emerge where the p-values on your pet variables are ≤0.05. Justify any superfluous variables required to achieve this via collinearity—say, parakeets-per-capita—as “controls.” Bonus points for using some new variant of regression for which the data do not remotely satisfy the assumptions and which mangles the coefficients beyond any hope of credible interpretation. Avoid, at all costs, out-of-sample assessments of any form.
  4. Report this in a standardized social science format 35 ± 5 pages in length (with a 100-page web appendix) starting with an update of the literature review from your dissertation[s], copiously citing your friends and any likely reviewers, and interpreting the coefficients as though they were generated using OLS estimation. Make sure the “Discussion” and “Conclusions” sections essentially duplicate each other and the statistical tables.
  5. Publish in a proprietary journal which will appear in print after a lag of at least three years, firewalled and thus inaccessible to the policy community, but no one will ever look at it anyway. Though previously you will have presented the problem, methodology, and results in approximately 500 seconds (you’re on a five paper panel, of course) at a major conference where your key slide will show 4 variants of the final 16-variable model with the coefficients to 6 decimal places and several p-values reported as “0.000.” The five people in the audience would be unable to read the resulting 3-point type except they are browsing the conference program instead of listening; the discussant asks why you didn’t include four additional controls.
  6. PROFIT!

I jest. I wish.

9. In fact quite a few people have suggested that computers still aren’t being used to their full capacity in corporations because they would render many middle managers irrelevant, and these individuals, unlike Yorkshire handloom weavers, are in a position to resist their own displacement: The Economist had a nice essay to this effect a couple weeks ago.

10. The concept of a systematic foreign policy is, of course, at present quaintly anachronistic in the U.S., where foreign policy, such as it is, is made on the basis of wild whims and fantasies gleaned from a steady if highly selective diet of cable TV, combined with a severe case of dictator-envy and the at least arguable proposition that poutine constitutes a threat to national security. But ever the optimist I can imagine the U.S. returning to a more civilized approach somewhere in the future, just as Rome recovered from both Nero and Caligula. Also as noted, this workshop was in Europe, which has suddenly been incentivized to get serious about foreign policy.

11. This is an important caveat: the data are every bit as important as the methods, and for many remote geographical areas under high conflict risk, we probably still don’t have all the data we need, even though we have a lot more than we once did. But data is hard, and data can be very boring—certainly it’s not going to attract the headlines that a glitzy new game-playing or kitten-identifying machine learning application can attract, and at the moment this field is dependent on a large number of generally underfunded small projects, the long-term Scandinavian commitments to PRIO and the Uppsala UCDP being exceptions. In the U.S., the continued funding of the ICEWS event data is very tenuous and the NSF RIDIR event data funding runs out in February 2018…just saying…

12. Speaking of PLOVER, at yet another little workshop, I was asked about the painfully slow progress towards implementing PLOVER, and it occurred to me that it’s currently trying to cross a technological “valley of death” [14] where PLOVER, properly implemented, would be clearly superior to CAMEO, but CAMEO already exists, and there is abundant CAMEO data (and software for coding it) available for free, and existing models already do a reasonably good job of accommodating the problems of CAMEO. “Free and already available” is a serious advantage if your fundamental interest is the model, not the data: This is precisely why WEIS, despite being proposed as a first-approximation to what would certainly be far better approaches, was used for about 25 years and CAMEO, which wasn’t even intended as a general-purpose coding scheme, is heading towards the two-decade mark, despite well-known issues with both.

13. Though the other thing to watch here is the emerging availability of low-cost, and frequently updated, remote sensing data. The annualized NASA night-light data is already being used increasingly to provide sub-state information with high geographical precision, and new private sector data, as well as new versions of night-lights, are likely to be available at a far greater frequency.

14. Googling this phrase to get a clean citation, I see it has been used to mean about twenty different things, but the one I’m employing here is a common variant.

15. And while I’m on the topic of unsolicited advice to grad students, yet another vital professional skill they don’t teach you in graduate school is flying to Europe and being completely alert the day after you arrive. My formula:

  1. Sleep as much as you can on the overnight flight (sleeping on planes, ideally without alcohol, is another general skill)
  2. Take at most a one hour nap before sunset, and spend most of the rest of the time outside walking;
  3. Live on the East Coast
  4. Don’t change planes (or at least terminals) at Heathrow
Posted in Methodology

Should an event coder be more like a baby?

Last evening, as is my wont, I was reading the current issue of Science [1]—nothing like a long article on, say, the latest findings on mantle convection beneath the Hawai’i hotspot to lull one to sleep—when an article titled “Basic Instincts: Some say AI needs to learn like a child” jolted me into one of those “OMG, this is exactly the issue I’ve been dealing with!” experiences.

That issue: whether there is any future to dictionary-based political event coders. Of late—welcome to my life, such as it is—I’ve been wrestling with whether to invest my time:

  • Writing a new coder based on universal dependency parsing and my mudflat proof-of-concept: seems like low-hanging fruit
  • Adapting an existing universal dependency coder (seems increasingly unlikely for an assortment of reasons)
  • Or just tossing the whole project since everybody—particularly every U.S. government funder—knows that dictionary-based coders are oh-so-1990s and from this point on everything will be done with machine learning (ML) classifiers

This article may tilt the scale back to the first option. At least for me.
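Part of why that first option looks like low-hanging fruit: with a modern dependency parser, pulling out the who-did-what-to-whom skeleton that the verb dictionaries would then classify takes only a few lines. A toy sketch using spaCy (mudflat does considerably more than this, and a real coder still needs the dictionaries):

    # Toy sketch: extract a subject-verb-object skeleton from a dependency parse.
    # A dictionary-based coder would then match the verb and its patterns against
    # event dictionaries and resolve the actors; this is only the skeleton.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Israeli police arrested two Palestinian protesters in Hebron on Tuesday.")

    for token in doc:
        if token.pos_ == "VERB":
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
            if subjects and objects:
                print(subjects[0].text, token.lemma_, objects[0].text)
    # -> police arrest protesters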

The “baby” reference here and in the article comes from the almost irrefutable evidence that humans are born hard-wired to efficiently learn various skills, and probably the most complex of these is language. A normally developing human child picks up language, typically using sound, but sign language is learned with equal facility—and outside the United States, usually multiple languages, and keeps them distinct—at a phenomenal rate. Ask any three-year-old. And try to shut them up. Provide a chimpanzee with exactly the same stimuli—yes, the experiment has been tried, on multiple occasions—and they never achieve remotely similar abilities to those of humans.

However, there’s an attraction to ML classifiers in being, well, rather mindless. [2] But this comes with the [huge] problem of requiring an extraordinary number of labeled training cases, which we simply don’t have for event data, nor does anyone seem inclined to generate them, because that process is expensive and involves the recruitment, management, and, critically, successful retention of a large number of well-trained human coders. [3] Consequently event data coding is in a totally different situation from ML problems where vast numbers of labeled cases are available, typically from the web at little expense.

It’s completely possible, of course, to generate essentially unlimited labelled event data cases from the existing coding systems, and it is certainly conceivable that the magic of neural networks (or your classifier of choice) will produce some wonderful generalization that cannot be obtained from those coders. Or, more likely, will produce one interesting generalization that we will then see repeated endlessly, much like the man-woman-king-queen example for word embeddings. But another possibility is the classifiers will just sloppily approximate what the dictionary-based systems are already doing.

And doing reasonably well because dictionary-based automated event coding has been around for more than a quarter century, and now benefits from a wide range of on-going developments throughout the field of computational natural language processing. As a consequence, those programs start with a lot of “instinct.” Consider just how much comes embedded in a contemporary system:

  • The language model of the parser, which is the result of thousands of hours of experimentation across multiple major NLP research projects across decades
  • In some systems, notably VRA-Reader, PETRARCH-2 and Raytheon/BBN’s ACCENT/Serif, an explicit language model for political events
  • Models of language subcomponents such as dates, locations, and named entities
  • Two decades of human-coded dictionary development from the KEDS and TABARI projects [4]
  • The WordNet synonym sets, again the product of thousands of hours of effort, which have been incorporated into those dictionaries
  • A variety of very large data sets such as rulers.org, CIA World Leaders and Wikipedia for named-entity resolution
  • Extensive idiomatic human translation by native speakers of the Spanish and Arabic dictionaries currently being produced by the NSF RIDIR event data project

Okay, people, I know that your neural networks are cool—like they are really, really cool, fabulously cool, in fact you can’t even begin to tell me how cool they are, even if a four-variable logit model matches their performance out-of-sample—but frankly, I’ve just presented you with a rather extensive list of things that the dictionary-based coders are already starting with but which the ML programs have to learn on their own. [5] 

So in practical terms, for example, the VRA-Reader coder from the 1990s—now lost, alas, because it was proprietary…sad…—provided 128 templates for the possible structure of a sentence describing a political event. JABARI in the early 2010s—now lost, alas, because it was proprietary, and was successfully targeted by a duplicitous competitor…sad…—gained an additional 15% accuracy over TABARI using a set of very specific tweaks dealing with idiosyncratic characteristics of political events (e.g. the fact that the Red Cross rarely if ever engages in armed attacks). A dictionary-based system knows from the beginning that if A meets with B, B met with A, but if A arrests B, B didn’t arrest A. More generally, the failure—in numerous attempts across decades—of generic event “triple” coding systems to compete in this space is almost certainly due to the fact that domain-specific information provides a very significant boost to performance.
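That meets/arrests asymmetry, by the way, is the sort of thing a dictionary encodes with a single flag and an ML system has to infer from thousands of examples. Schematically (the patterns and codes here are illustrative, not lifted from any actual dictionary):

    # Schematic verb patterns: the dictionary simply declares which interactions
    # are symmetric, so "A met with B" also yields the reciprocal event while
    # "A arrested B" does not. Illustrative only, not from an actual dictionary.
    VERB_PATTERNS = {
        "meet with": {"code": "040", "symmetric": True},    # consult
        "arrest":    {"code": "173", "symmetric": False},   # arrest, detain
        "accuse":    {"code": "112", "symmetric": False},
    }

    def expand(source, target, verb):
        pattern = VERB_PATTERNS[verb]
        events = [(source, target, pattern["code"])]
        if pattern["symmetric"]:
            events.append((target, source, pattern["code"]))
        return events

    print(expand("FRA", "DEU", "meet with"))   # two events, one in each direction
    print(expand("ISRPOL", "PSE", "arrest"))   # a single directed event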

Furthermore, the environment in which we are deploying dictionary-based coding programs is becoming increasingly friendly: In the 1990s KEDS and VRA-Reader only had the texts and small dictionaries to work with, and had to do this on very limited hardware. Contemporary systems, in contrast, have access to sophisticated parsers and huge dictionaries with hardware easily able to accommodate both. Continuing the childhood metaphor, this is the difference between riding a tricycle and riding a 20-speed bicycle. With an electric assist.

I don’t expect this simple metaphor to be the last word on the subject and I may, in the end, decide that classifiers are going to rule us all (and in any case, that seems to be pretty much where all of the funding is going at the moment anyway but if that’s the case, please, can’t someone, somewhere fund an open set of gold standard records??). But I’m also beginning to think dictionary-based approaches—or more probably, a hybrid of dictionary and classifier approaches—are more than an anachronistic “damn those neural nets: young whippersnappers don’t appreciate what it was like hacking into Nexis from the law school library account via an acoustical modem for weeks every morning from 2 a.m. to 5 a.m. [6]…get off my lawn” reaction: rather, given the remarkable resources we can now deploy on the problem, dictionary-based coding represents a hugely more efficient approach than learning by example.

Time (and experimentation) will tell.

Footnotes

1. Okay, so it was actually last week’s issue: I wait for the paper version to arrive and hope it doesn’t get too soaked in the mailbox. The Economist I read electronically as soon as it is available.

2. The article quotes in passing Oregon State CS professor Thomas Dietterich that “[academic] computer scientists…have an aversion to debugging complex code.” Yeah, tell me about it…followed closely by their aversion to following quality control practices that have been common in industry since the 1990s. I digress.

3. The relatively new prodigy software is certainly a far more efficient approach to doing this than many earlier alternatives—I’ve also written a simple low-footprint variant of its annotation functions here—but human annotation remains vastly more labor intensive than, say, downloading millions of labeled images of cats and dogs.

4. Which I’ve got pretty good empirical evidence still provide most of the verb patterns for all of the CAMEO-based coding systems…figuring out verb patterns used to generate any data where you know both the codings and the URL of the source text is relatively straightforward, at least for the frequent patterns.

5. The other fabulously cool recent application of deep learning, the ability to play Go at levels beyond that of the best human expert, depended on a closed environment with fixed rules: event data coding is nothing like this.

6. Not a joke: this is the KEDS project ca 1990.

Posted in Methodology, Programming

Entropy, Data Generating Processes and Event Data

Or more precisely, the Santa Fe Institute, Erin Simpson, and, well, event data. With a bit of evolutionary seasoning from Robert Wright, who is my current walking-commute listening.

Before we get going, let me make completely clear that there are perhaps ten people—if that—on this entire planet who will gain anything from reading this—particularly the footnotes, and particularly footnote 12—and probably only half of them will, and as for everyone else: this isn’t the blog you are looking for, you can go about your business, move along. Really: this isn’t even inside baseball, it’s inside curling.[1] TL;DR!

This blog is inspired by a series of synchronistic events over the past few days, during which I spent an appallingly large period of time—albeit during inclement weather—going through about 4,000 randomly-selected sentences to ascertain the degree to which some computer programs could accurately classify these into one of about 20 categories. Yes, welcome to my life.

The first 3,000 of these were from a corpus of Reuters lede sentences from 1979-2015 which are one of the sources for the Levant event data set which I’ve been maintaining for over a quarter-century. While the programs—both experimental—were hardly flawless, overall they were producing, as their multiple predecessors had produced, credible results, and even the classification errors were generally for sentences that contained political content.

I then switched to another more recent news corpus from 2017 which, while not ICEWS, was not dissimilar to ICEWS in the sense of encompassing a massive number of sources, essentially a data dump of the world’s English-language press. This resulted in a near total meltdown of the coding system, with most of the sentences themselves, never mind the codings, bordering on nonsense so far as meaningful political content was concerned. O…M…F…G… But, here and there, a nice clean little coding poked its little event datum head out of the detritus, as if to say “Hey, look, we’re still here!”, rather as seedlings sprout with the first rainfall in the wake of a forest fire.

So what gives? Synchronicity being what it is, up pops a link to Erin Simpson’s Monktoberfest talk where Dr. Simpson, as ever, pounds away at the importance of understanding the data generating process (DGP) before one blithely dives into any swamp presenting itself as “big data.” Particularly when that data is in the national security realm. Having wallowed in the coding of 4,000 randomly sequenced news ledes, particularly the last largely incoherent 1,000, I came away from her presentation with an immediate AHA!: the difference I observed is accounted for, almost totally, by the fact that international news sources such as Reuters [2] have an almost completely different DGP than that of the local sources.

Specifically, the international sources, since the advent of modern international journalism around the middle of the 19th century [3] have fundamentally served one purpose: providing people who have considerable money with information that they can use to get even more money. Yes, there are some positive externalities attached to this in terms of culture and literature, but reviews of how a Puccini opera was received at La Scala didn’t pay the bills: information valuable in predicting future events did.

This objective conveniently aligns the DGP of the wire services quite precisely with the interests of [the half-dozen or so…] consumers of political event data, since in the applied realm event data is used almost exclusively for forecasting.

Now, turning our attention to local papers in the waning years of the second decade of the 21st century: O…M…F…G: these can be—and almost certainly are—pretty much absolutely anything except—via the process of competitive economic exclusion—they aren’t international wire services.[4] In the industrialized world, start with the massive economic changes that the internet has wrought on their old business models, leading to two decades of nearly continuous decline in staffing and bureaus. In the industrializing world, the curve goes the other way, with the internet (and cell phone networks) enabling access and outreach never before possible. Which can be a good thing—more on this below—but is not necessarily a good thing, as there is no guarantee of either the focus or, most certainly, the stability of these sources. The core point, however, is that the DGP of local sources is fundamentally different than the DGP of international sources.[5]

So, different DGPs, yeah, got that, in fact I had you at “Erin Simpson”, right? But what’s with the “entropy” stuff?

Well, I’m now really going to go weird—well, SFI/complexity theory weird, which is, well, pretty weird—on you here, so again, you’ve been warned, and thus you probably just want to break off here, and go read something about chief executives and dementia or somesuch. But if you are going to continue…

Last summer there was an article which despite including the phrase “Theory of reality” in the title—this is generally a signal to dive for the ditches—got me thinking—and by the way, I am probably about to utterly and completely distort the intent of the authors, who are likely a whole lot smarter than me even if they don’t realize that one should never, ever, under any circumstances put the phrase “theory of reality” on anything other than a Valentine’s Day candy heart or inside a fortune cookie…I digress…—on their concept of “effective information”:

With [Larissa] Albantakis and [Guilio] Tononi (both neuroscientists at Univ of Wisconsin-Madison),  [Erik] Hoel (Columbia neuroscience)[6] formalized a measure of causal power called “effective information,” which indicates how effectively a particular state influences the future state of a system. … The researchers showed that in simple models of neural networks, the amount of effective information increases as you coarse-grain over the neurons in the network—that is, treat groups of them as single units. The possible states of these interlinked units form a causal structure, where transitions between states can be mathematically modeled using so-called Markov chains.[7] At a certain macroscopic scale, effective information peaks: This is the scale at which states of the system have the most causal power, predicting future states in the most reliable, effective manner. Coarse-grain further, and you start to lose important details about the system’s causal structure. Tononi and colleagues hypothesize that the scale of peak causation should correspond, in the brain, to the scale of conscious decisions; based on brain imaging studies, Albantakis guesses that this might happen at the scale of neuronal microcolumns, which consist of around 100 neurons.
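If you would rather see “effective information peaks at a coarser scale” as numbers instead of neuroscience prose, the toy version fits in a dozen lines. This is my reconstruction of the standard illustrative example, not the authors' code: EI is just the entropy of the average transition row minus the average row entropy.

    # Toy "effective information": for a transition matrix T, EI is the entropy
    # of the average row minus the average row entropy (i.e. mutual information
    # under uniform interventions). My reconstruction of the standard toy
    # example, not the authors' code.
    import numpy as np

    def effective_information(T):
        T = np.asarray(T, dtype=float)
        entropy = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
        return entropy(T.mean(axis=0)) - np.mean([entropy(row) for row in T])

    # Micro scale: states 1-3 wander among themselves at random; state 4 is fixed.
    micro = [[1/3, 1/3, 1/3, 0],
             [1/3, 1/3, 1/3, 0],
             [1/3, 1/3, 1/3, 0],
             [0,   0,   0,   1]]

    # Macro scale: lump states 1-3 into a single state and the noise disappears.
    macro = [[1, 0],
             [0, 1]]

    print(effective_information(micro))   # about 0.81 bits
    print(effective_information(macro))   # 1.0 bit: the coarser scale has more causal power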

With this block quote, we are moving from OMFG to your more basic WTF: why oh why would one have the slightest interest in this, or what oh what does this possibly have to do with event data???

Author pauses to take a long drink of Santa Fe Institute Kool-Aid…

The argument just presented is at the neural level but hey, self-organization is self-organization, right? Let’s just bump things up to the organizational level and suddenly we have answers to two (or three) puzzles:

  • why is the content of wire service news reports relatively stable across decades?
  • why can that information be used by both [some] humans and [most] machines to predict political events at a consistently high level?
  • why are reductionist approaches on modeling organizational behavior doomed to fail? [8]

My version of the Hoel-Albantakis-Tonini hypothesis is that there is a point in organizational structure where organizations, assuming they are under selective pressure, will settle on a scale (and mechanisms) which maximizes—or at least goes to some local maximum on an evolutionary landscape—the tradeoff between the utility of predictive power and the cost of information required to maintain that level of predictability. While the sorts of political organizations which are the focus of event data forecasts have costly private information, the international media provide the inexpensive shared public information that sustains this system. In particular, we can readily demonstrate through a variety of statistical and/or machine-learning models (or human “super-forecasters”) that this public information alone is sufficient to predict most of these political behaviors, typically to 80% to 85% accuracy. [9] Information to get the remaining 15% to 20%, however, is not going to be found in local sources (with one exception noted below) and, as I’ve argued elsewhere (for years…) most of the remaining random error is irreducible due to a set of about eight fundamental sources of uncertainty that will be found in any human (or, presumably, non-human) political system.[10][30]

In order to survive, organizations must be able to predict the consequences of their actions and the future state of the world with sufficient accuracy that they can take actions in the present that will have implications for their well-being far into the future: the feed-forward problem: Check. [11] You need information of some sort to do this: Check. Information is costly and information-collection detracts from other tasks required for the maintenance and perpetuation of the organization: Check.[12] Therefore, in the presence of noise and systems which are open and stochastic, a point is reached—and probably with a lot less information than we think we need [13]—where information at a disaggregated scale is more expensive than the benefits it provides for forecasting. QED. [14]

Take ICEWS. Please… The DARPA-sponsored research phase, not the operational version, such as it is. Consider the [in]famous ICEWS kick-off meeting in the fall of 2007, where the training data were breathlessly unveiled along with the fabulously difficult evaluation metrics for the prediction problems, vetted by no less than His Very Stable Genius Tony Tether.[15] Every social scientist in the room skipped the afternoon booze-up with the prime contractor staff, went back to their hotel rooms with their laptops and by, say, about 7 p.m. had auto-regressive models far exceeding the accuracy of the evaluation metrics. Can we just have the money and go home now? The subsequent history of the predictive modeling in ICEWS—leaving aside the fact that the Political Instability Task Force (PITF) modeling groups had already solved essentially the same problems five years earlier—was one of the social scientists finding straightforward models which passed the metrics, Tether and his minions (abetted, of course, by the prime contractors, who did not want the problems solved: there is no funding for solved problems) imposing additional and ever-more ludicrous constraints, and then new models being developed which got around even those constraints.
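For the uninitiated, “auto-regressive” here means nothing fancier than predicting this period from the last couple of periods of the same indicator, which for highly persistent country-level instability measures gets you a very long way. A generic sketch of the hotel-room-by-7-p.m. variety, not the actual ICEWS or PITF models, using a hypothetical country-year panel:

    # Generic autoregressive baseline: predict instability this year from the
    # previous two years of the same indicator. A sketch, not the actual ICEWS
    # or PITF models; "panel.csv" and its columns are hypothetical.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    panel = pd.read_csv("panel.csv").sort_values(["country", "year"])
    panel["lag1"] = panel.groupby("country")["instability"].shift(1)
    panel["lag2"] = panel.groupby("country")["instability"].shift(2)
    panel = panel.dropna()

    train = panel[panel["year"] < 2005]
    test = panel[panel["year"] >= 2005]

    model = LogisticRegression().fit(train[["lag1", "lag2"]], train["instability"])
    predicted = model.predict_proba(test[["lag1", "lag2"]])[:, 1]
    print(roc_auc_score(test["instability"], predicted))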

But this only worked for the [perfectly plausible] ICEWS “events-of-interest,” which were at a very predictable nation-year scale. The ICEWS approach could be (and in the PITF research, to some degree has been) scaled downwards quite a bit, probably to geographical scales on the order of a typical province and temporal scales on the order of a month, but there is a point where the models will not scale down any further and, critically, adding the additional information available in local news sources will not reliably improve forecasting once the geo-temporal scale is below the level where organizations have optimized the Hoel-Albantakis-Tonini effective information. Below the Hoel-Albantakis-Tonini limit, more is not better: more is just noise, because organizations aren’t paying attention to this information and consequently it is having no effect on their future behaviors.

And in the case of event data, a particular type of noise. [Again, I warned you, we’re doing inside curling here.] There are basically three approaches to generating event data:

  • Codebook-based (used by human coders, and thus irrelevant to real-time coding)
  • Pattern-based (used by the various dictionary-based coding programs that are currently responsible for all of the existing data sets)
  • Example-based (used by the various machine-learning systems currently under development, though none, to my knowledge, currently produce operational data)

While at present there is great furor—mind you, among not a whole lot of frogs in not a particularly large pond—as to whether the pattern-based or example-based approaches will prove more effective [16], this turns out to be irrelevant to this issue of noise: Both the pattern-based and example-based systems, alas, are subject to the same weakness in generating noise [17], as each generates false positives whenever it encounters something in a politically-irrelevant news context that sufficiently resembles the politically-relevant cases it was originally developed to code to trigger the production of an event. As more and more local data—which is almost but not quite always irrelevant—is thrown into the system, the false positive rate soars.[18][19]

CAVEAT: Yes, as I keep promising, there is a key caveat here: For a variety of reasons, most importantly institutional lag, language differences, and areas with a high risk and low interest (for example Darfur, South Sudan, or Mexican and Central American gang violence), the coverage of the international news sources is not uniform, and there are some places where local coverage can fill in gaps. Two places where I think this has been (or will be once the non-English coding systems come on-line) quite important are the “cell phone journalism” coverage of violence in Nigeria and southern Somalia, and Spanish and Portuguese language coverage in Latin America.[20] But by far, the vast bulk of the local sources currently used to generate event data do not have this characteristic.

Whew…so you’ve made it this far, what are the practical implications of this screed? I see five:

First, the contemporary mega-source data sets are a combination of two populations with radically different DGPs: the “thick head” of international sources, most of which are coded tolerably well by the techniques which, by and large, were originally developed for international sources, and the “thin tail” of local sources, which are generally neither coded particularly well, nor particularly relevant even when coded correctly.[21]

Second, as noted earlier, in event data, more is not necessarily better. “More” may be relatively harmless—well, for the consumers of the data; it remains at least somewhat costly to the producers [22]—when the models involve just central tendency (the Central Limit Theorem is, as ever, our friend) and the false positives are pretty much randomly distributed.[23] Models sensitive to measurement error, heterogeneous samples, and variance in the error terms—for example most regression-based approaches—are likely to experience problems.
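A quick simulated illustration of the difference, since this trips people up: random false positives just offset an event count by a roughly constant amount, which washes out when you are comparing central tendencies, but the same noise attenuates a regression slope when the contaminated count sits on the right-hand side of the model.

    # Simulated illustration: random false positives offset the mean of an event
    # count but attenuate a regression slope that uses the count as a predictor.
    import numpy as np

    rng = np.random.default_rng(42)
    true_counts = rng.poisson(10, size=5000)                    # "true" monthly event counts
    outcome = 2.0 * true_counts + rng.normal(0, 5, size=5000)   # something they help predict
    observed = true_counts + rng.poisson(3, size=5000)          # plus random false positives

    print("means:", true_counts.mean(), observed.mean())                         # offset by ~3, nothing more
    print("slope, true counts:    ", np.polyfit(true_counts, outcome, 1)[0])     # ~2.0
    print("slope, observed counts:", np.polyfit(observed, outcome, 1)[0])        # attenuated toward zero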

Third—sorry DARPA [24]—naive reductionism is not the answer, nor is blindly throwing more machine cycles and mindless data-dumps at a problem. Any problem.[25] Figuring out the scale and the content of the effective information is important [26], and this requires substantive knowledge. Some aspects of the effective information problem are what political and organizational theories have been dealing with for at least a century. Might think about taking a look at that sort of thing, eh? Trust in the Data Fairy alone has, once again, proven woefully misplaced.

Fourth, keep in mind my CAVEAT! above: it is not the case that all local data are useless. But it is almost certainly the case that because the DGPs differ so greatly between contemporary local sources and international sources, it is very likely that separate customized coding protocols will be needed for these, at the very least well-tested filters to eliminate irrelevant texts and in many cases customized patterns/training cases. That said, the effective information scale can vary by process, and if, for example, one is focused on a localized conflict (say Boko Haram or al-Shabab) some of those sources could be quite useful, again possibly with customization. But the vast bulk of local sources are just generating noise.[27]
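And to be clear, the “well-tested filters” need not be exotic: even a crude first pass that discards stories mentioning nothing on a watch list of political actors and actions will remove a large share of the noise before anything reaches the coder. The sketch below is illustrative only; a production filter would more likely be a trained classifier.

    # Crude first-pass relevance filter: discard stories that never mention
    # anything on a watch list of political actors and actions. The watch list
    # here is illustrative; a production filter would be a trained classifier.
    import re

    WATCH = re.compile(
        r"\b(minist(?:er|ry)|parliament|troops|police|protest(?:ers)?|rebels?|election)\b",
        re.IGNORECASE)

    def looks_relevant(story):
        return bool(WATCH.search(story))

    stories = [
        "City council honors local bakery on its 50th anniversary.",
        "Police fired tear gas at protesters outside the ministry on Friday.",
    ]
    print([looks_relevant(s) for s in stories])   # [False, True]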

Finally, don’t listen to me: experiment! Most of the issues I’ve raised here can be readily tested in your approach of choice using existing event sequences: for your problem of choice, go out and actively test the extent to which the exclusion of local sources (or specific local sources) does or does not affect your results. And please publish these in some venue with a lag time of less than five years! [28]  

Footnotes

1. As with everything in this blog, these opinions are mine and mine alone and no one I have ever worked for or with directly or indirectly anytime now or in the past or future bears any responsibility for them. And that includes Mrs. Chapman whose lawn I mowed when I was in seventh grade.

2.  And the other major news wires such as Xinhua, Agence France Press, assorted BBC monitoring services and the Associated Press, but that’s pretty much the list.

3. This largely coincided with the proliferation of telegraph connections, though older precedents existed in the age of sail once a sufficiently independent business class—and weakening of state control of communications—existed to sustain it. 

4. Just two or three decades ago, “newspapers of record” such as the Times of London and the New York Times served much the same role that the international wire services do today by focusing on international coverage for a political elite, using their vast networks of foreign correspondents, proverbially gin- and/or whiskey-soaked Graham Greene wannabees hanging out in the bars of cheap hotels convenient to the Ministry of Information of—hey, Trump’s making me do this, I can’t miss my one chance to use this word!—shithole countries. Or colonies. Those days are long gone, though for this same reason historical time series data based on the NYT such as that produced by the Cline Center may be quite useful.

5. In contrast, your typical breathlessly hyped commercial big data project involves data generated by a relatively uniform DGP: people looking at, and then eventually buying [or not] products on Amazon are, well, all looking at, and then eventually making a decision about, products on Amazon, and they are also mostly people (Amazon presumably having learned to filter out the price-comparison bots and deal with them separately). Actually, except for the human-bot distinction, it is hard to think of a comparable case in data science where the data generators are as divergent as a Reuters editor and the reporters for a small city newspaper in Sumatra. Unless it is the difference between the DGP of news media, even local, and the DGP of social media…

6. Affiliations provided to indicate that the authors’ qualifications go beyond “Part-time cannabis sales rep living in parents’ basement in Ft. Collins and in top 20 percentile of Star Wars: Battlefront II players.” Which would be the credentials of the typical author of a discourse on “theory of reality.”

7. Otherwise known as “Markov chains.”

8. This last is a puzzle only if you’ve had to sit through multiple agonizingly stupid meetings over the years with people who believe otherwise.

9. Thanks to the work of Tetlock and his colleagues, the breadth of this accuracy across a wide variety of political domains is far more systematically established for human superforecasters than it is for statistical and machine-learning methods, but I’m fairly confident this will generalize.

10. Dunbar’s number is probably another example of this in social groups generally. I’m currently involved in a voluntary organization whose size historically was well below Dunbar’s number and consequently was run quite informally, but is now beginning to push up against it. The effects are, well, interesting.

11. For an extended discourse on counter-examples, see this. Which for reasons I do not understand is the most consistently viewed posting on this blog.

12. Descending first into the footnotes, then into three interesting additional rather deep rabbit holes on this:

  1. I’m phrasing this in terms of “organizations,” which have institutionalized information-processing structures. But certainly we are also interested in mass mobilization, where information processing exists but is much more diffuse (and in particular, is almost certainly influenced more by networks than formal rules, though some of those networks, e.g. in families, are sufficiently common that they might as well be rules). I think the core argument for a prediction-maximizing scale is still relevant—in mass mobilization in authoritarian systems the selection pressures are very intense but even non-authoritarian mobilizations have the potential costs of collective action problems—but they are likely to be different than the scale for organizations, as well as differing with the scale of the action. That said, the incentives for the international wire services to collect this information remain the same, and the combination of [frequent] anonymity and correspondents not being dependent on local elites for a paycheck (the extent to which this is true varies widely and has certainly changed in recent years with the introduction of the internet) may result in these international sources being considerably more accurate than local sources. Local media sources which are under the control of local political and/or economic elites may actually be at their least informative when the conditions in an area are ripe for mass mobilization. [29]
  2. An interesting corollary is that liberal democracies have an advantage in being generally robust against the manipulation of information—the rise of fascist groups in the inter-war period in the 20th century and, possibly, recent Russian manipulation of European and US elections through social media are possible exceptions—and consequently they don’t incur the costs of controlling information. This is in contrast to most authoritarian regimes, and specifically the rather major example of China, which spends a great deal of effort doing this, presumably due to a [quite probably well-founded] fear by its elites that the system is not robust against uncontrolled information. Even if the Chinese authorities can economically afford this control—heck, they can afford the bullet trains the US seems totally incapable of creating—this suggests a brittleness to the system which is not trivial. Particularly in light of a rapidly changing information environment. Much the same can probably be said of Saudi Arabia and Russia.
  3. A really deep rabbit hole here, but given the fact that the support for the international news media is very diffuse, what we probably see here is essentially a generally stable [Nash? product-possibility-frontier? co-evolution landscape?] equilibrium between information producers and consumers where the costs and benefits of supply and demand end up with a situation where the organizations (both public and private) can work fairly well with the available information and the producers have learned to focus on providing information that will be useful. From the perspective of political event data, for example, the changes between the WEIS and COPDAB ontologies from the 1960s and the 2016 PLOVER ontology—all focusing on activities relevant to forecasting political conflict—are relatively minor compared to total scope of human behaviors: political event ontologies have consistently used on the order of 10¹ primary categories, whereas ontologies covering all human behaviors tend to have on the order of 10². Furthermore, except for the introduction of idiomatic expressions like “ethnic cleansing” and “IED”, vocabulary developed for articles from the 1980s still works well for articles in the 2010s (and works vastly better than trying to cross levels of scale from international to local sources). Organizations, particularly those associated with large nation states, will of course have information beyond those public sources—this is the whole point of intelligence collection—but opinions vary widely—wow, do they ever vary widely!—as to the utility of such information at the “policy relevant forecasting interval” of 6 to 24 months.  Meanwhile, given its level of decentralization, the system that has depended on this information ecosystem is phenomenally stable compared to any other period in human history.

13. In the early “AI” work in human-crafted “expert systems” in 1980s, “knowledge engineers”—a job title insuring generous compensation at the time, rather like “data scientist” today—generally found that if an expert said they needed some information that couldn’t be objectively measured, but they knew it by “intuitive feelings” or something equivalent, when the models were finally constructed and tested, it turned out these variables were not needed: the required classification information was contained in variables that could be measured. The positive interpretation of this is that the sub-cognitive “intuition” was in fact integrating these other factors through a process similar to, well, maybe neural networks? Ya-think? The negative interpretation is that the individuals were trying to preserve their jobs. 

14. With a bit more work, one could probably align this—and the Hoel-Albantakis-Tonini approach more generally—with Hayek’s conceptualization of markets as information processing systems, and certainly the organizational approach is consistent with Hayek’s critique of central planning. Even if Hayek is probably the second-most ill-used thinker in contemporary so-called conservative discourse, after Machiavelli (and followed by Madison). Seriously. I digress.

15. Tether, of course, was the model for Snoke in The Last Jedi, presiding over DARPA seated on a throne of skulls in a cavernous torch-lit room surrounded by black-robed guandao-armed guards. His successor at DARPA, of course, was the model for Miranda Priestly in The Devil Wears Prada.

16. The answer, needless to say, is that hybrid approaches using both will be best. Part of the reason I annotated 4000 sentences over the weekend and am planning to do a lot more on a couple of upcoming transoceanic flights.

17. This is similar to the argument that all machine-learning systems are effectively using the same technique—partitioning very high dimensional spaces—and that, allowed similar levels of complexity, they will therefore have similar levels of accuracy, particularly out of sample.
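
As a rough illustration of that argument (synthetic data and standard scikit-learn classifiers, not any particular event-coding system), several learners of comparable capacity land at roughly the same out-of-sample accuracy on the same noisy problem:

```python
# Different partitioning machines, same feature space, similar out-of-sample accuracy.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# A noisy problem: label noise (flip_y) puts a ceiling on achievable accuracy.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           flip_y=0.1, random_state=1)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("gradient boosting", GradientBoostingClassifier(random_state=0)),
                  ("RBF-kernel SVM", SVC())]:
    print(f"{name:20s} {cross_val_score(clf, X, y, cv=5).mean():.3f}")
```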

18. Even worse: the coding of direct quotations, where the DGP varies not just with the reporter but with the speaker. These are the perfect storm for computer-assisted self-deception: as social animals, we have evolved to consider direct quotations to be the single most important source of information we can possibly have, and thus our primate brains are absolutely screaming: “Code the quotations! Code the comments! Code the affect! General Inquirer could do this in the 1960s using punch cards, why aren’t you coding comments???”

But our primate brains evolved to interpret quotations deeply embedded in a social context that, during most of evolution, usually involved individuals in a small group with whom we’d spent our entire lives. A rather different situation than trying to interpret quotations first non-randomly selected, then often as not translated, out of a barely-known set of circumstances—possibly including “paraphrased or simply made up”—that were spoken by an unknown individual operating in a culture and context we may understand only vaguely. And that’s before we get to the issues of machine coding. “Friends don’t let friends code quotations.”

For this reason, by the way, PLOVER has eliminated the comment-based categories found in CAMEO, COPDAB, and WEIS.

19. Okay, it’s a bit more complicated: the false positive rate is obviously going to depend on the tradeoff a given coding system has made between precision and recall, and a system that was really optimized for precision could probably avoid this issue, or at least dramatically reduce it. But none of the systems I’m familiar with have done that.
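
To make that tradeoff concrete, here is a minimal sketch on synthetic data (an ordinary logistic regression, not any actual event coder): pushing the decision threshold toward precision drives the false-positive rate down, at the cost of recall.

```python
# Precision/recall tradeoff: raising the decision threshold cuts false positives
# but sacrifices recall.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=10, n_informative=5,
                           weights=[0.8, 0.2], flip_y=0.05, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
probs = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

for threshold in (0.3, 0.5, 0.7, 0.9):
    pred = (probs >= threshold).astype(int)
    fp_rate = ((pred == 1) & (y_te == 0)).sum() / (y_te == 0).sum()
    print(f"threshold {threshold:.1f}  precision {precision_score(y_te, pred):.2f}  "
          f"recall {recall_score(y_te, pred):.2f}  false-positive rate {fp_rate:.3f}")
```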

20. This raises another split in the event data community: whether machine translation combined with English-language coders will be sufficient, or whether native-language coders are needed. Fortunately, once the appropriate multiple-language coding programs are available, this can be resolved empirically.

21. See this table, which was generated from a sample of ICEWS sources from 2016. Of the sources which could be determined, 37.5% are from 10 international sources, and fully 27.2% from just four sources: Xinhua, Reuters, AFP and BBC. Incongruously, three Indian sources account for another 15.2%; past this “thick head” we go to a “thin tail” of 372 sources accounting for the remaining 47.3% of the known sources (with an additional 25.8% of the total cases being unidentified: these could be either obscure local sources or garbled headers on the international sources, which occurs more frequently than one might expect).
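
The head/tail arithmetic is easy to reproduce from a simple tally of stories per source; the counts below are made up for illustration, not the actual ICEWS numbers.

```python
# Thick head / thin tail: share of stories from the top sources vs. everyone else.
from collections import Counter

source_counts = Counter({"Xinhua": 900, "Reuters": 700, "AFP": 600, "BBC": 500,
                         "PTI": 400, "IANS": 300, "Times of India": 250})
source_counts.update({f"local-{i}": 10 for i in range(372)})   # a long tail of small sources

total = sum(source_counts.values())
head_share = sum(n for _, n in source_counts.most_common(10)) / total
print(f"top 10 sources: {head_share:.1%} of stories; "
      f"remaining {len(source_counts) - 10} sources: {1 - head_share:.1%}")
```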

22. Rather like bitcoin mining. I actually checked into the possibility that one could use bitcoin mining computers—which I suspect at some point will be flooding the market in vast quantities—to do, well, something, anything that could enhance the production or use of event data. Nope: they are specialized and optimized to such a degree that “door stop” and “boat anchor” seem to be about the only options.

23. They may or may not be: with a sufficiently representative set of gold standard records—which, more than fifty years into event data coding, we still don’t have—this becomes an empirical question. My own guess is that they are randomly distributed to a larger degree than one might expect, at least for pattern-based coders.

24. Yeah, right… “I feel a great disturbance in the Force, as if millions of agent-based-models suddenly cried out in terror and were suddenly silenced, their machine cycles repurposed to mining bitcoin. I fear something, well, really great, has happened. Except we all know it won’t.”

25. Sharon Weinberger’s Imagineers of War (2017)—coming soon to the Virginia Festival of the Book!—is a pretty sobering assessment of DARPA’s listless intellectual drift in the post-Cold War period, particularly in dealing with prediction problems and anything involving human behavior. Though its record on that front during the Vietnam War also left a bit to be desired. Also see this advice from the developer of spaCy.

26. “Data mining” may identify useful surrogate indicators that provide substitutes for other more complex and less-accessible data: PITF’s discovery of the robustness of infant mortality rate as a surrogate for economic development and state capacity is probably the best example of this. These, however, are quite rare, and tend to be structural rather than dynamic. The fate of the supposed correlation between Google searches for flu symptoms and subsequent flu outbreaks (and a zillion other post-hoc correlations discovered via data mining) is a useful cautionary tale. Not to mention the number of such correlations that appear to be urban legends.
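
The multiple-comparisons mechanism behind those cautionary tales is easy to demonstrate: mine enough unrelated candidate indicators against a target series and a few will correlate impressively by chance alone. A small sketch using purely random data:

```python
# Post-hoc correlation mining in miniature: with 10,000 random "indicators",
# some are guaranteed to correlate with the target purely by chance.
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=100)                 # e.g., 100 weeks of flu cases
candidates = rng.normal(size=(10_000, 100))   # 10,000 unrelated candidate series

corrs = np.array([np.corrcoef(target, c)[0, 1] for c in candidates])
print("largest spurious |r|:", round(float(np.abs(corrs).max()), 2))
print("candidates with |r| > 0.3:", int((np.abs(corrs) > 0.3).sum()))
```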

27. I attempted to get a reference from the American Political Science Review for a colleague a couple days ago and found that their web site doesn’t even allow one to access the table of contents without paying for a membership! Cut these suckers off: NSF (and tenure committees) should not allow any references or publications that are not open access. Jeff Flake is rapidly becoming my hero. And I hope Francis Bacon is haunting their dreams in revenge for their subversion of the concept of “science.”

Addendum: Shortly after writing this I was at a fairly high-level meeting in Europe discussing the prospects for developing yet another quantitative conflict early warning system, and got into an extended discussion with a couple of quite intelligent, technically knowledgeable and diligent fellows who, alas, had been trying to learn about the state of the art of political science applications of machine-learning techniques by reading “major” political science journals. And were generally appalled at what they found: an almost uninterrupted series of thoroughly dumbed-down—recalling Dave Barry’s observation that every addition you make to a group of ten-year-old boys drops their effective IQ by ten points, that’s the effect of peer review these days—logistic regressions with 20 highly correlated “controls,” and even these articles only available—well, paywalled—five years after the original research was done. So I tried to explain that there is plenty of state-of-the-art work going on in political science, but it’s mostly at the smaller specialized conferences like Political Methodology and Text-As-Data, though some of it will make it into a very small number of methodologically sophisticated journals like Political Analysis and Political Science Research and Methods. But if you are trying to impress people who are genuinely sympathetic to quantitative methods using the contents of the “major” journals, you’ll find that’s equivalent to demonstrating you can snare rats and cook them on a stick over an open fire, and using this as evidence that you should work in a kitchen that carries a Michelin star.

Alas, from the perspective of the typical department chair/head, dean, associate dean, assistant dean, deanlet and deanling, I suppose there is a certain rationale to encouraging this sort of thing, as it makes your faculty far less attractive for alternative employment, and maybe the explanation for the persistence of this problem is no more complicated than that. Though the 35% placement rate in political science may be an unfortunate side-effect. If it is indeed a side-effect. Another “side effect” may be the precipitous decline in public support for research universities.

Again, whoever is in charge of this circus needs to stop supporting the existing journal system, insist on publications with roughly contemporaneous and open access—if people want to demonstrate their incompetence, let them do so where all can see, and the sooner the better—and let the folks currently trying to run journals get back to their core competency, managing urban real estate.

28. Also, as has been noted from the dawn of event data, “local” sources are incorporated into the international news services, as these depend heavily on local “stringers,” often among the most well-connected and politically savvy individuals in their region, and not necessarily full-time journalists. I was once in a conversation with the husband of a visiting academic from Morocco and asked what he thought about The Economist‘s coverage of that country. He gave me a quizzical look and then said “You’ll have to ask someone else: I am The Economist‘s correspondent for Morocco.”

29. In rare circumstances, this can be a signal: in 1979 the Soviet media source Pravda went suddenly quiet on the topic of Afghanistan a week or so before the invasion after having covered the country with increasing alarm for several months. This sort of thing, however, requires both stupidity and very tight editorial control, and I doubt that it is a common occurrence. At least the tight editorial control part.

30. Addendum (which probably deserves expansion into its own essay at some point): There’s an interesting confirmation of this from an [almost] entirely different domain in an article in Science (359:6373, 19 Jan 2018, pg. 263; the original research is reported in Science Advances 10.1126/sciadv.aao5580 (2018)), which found that similar results on predicting criminal recidivism could be obtained from:

  • A proprietary 137-variable black-box system costing $22,000 a year
  • Humans recruited from Mechanical Turk and provided with 7 variables
  • A two-variable regression model

It turns out that for this problem, there is a widely-recognized “speed limit” on accuracy of around 70%—the various methods in this specific study are a bit below that, particularly the non-expert humans—and, as with conflict forecasting, multiple methods can achieve this.

On reading this, I realize that there is effectively a “PITF predictive modeling approach” which evolved over the quarter-century of that organization’s existence (a rough sketch in code follows the list):

  • Accumulate a large number of variables and exhaustively explore combinations of these using a variety of statistical and machine-learning approaches: this establishes the out-of-sample “speed limit”
  • The “speed limit” should be similar to the accuracy of human “super-forecasters”
  • Construct operational models with “speed limit” performance from very simple sets of variables—typically fewer than five—drawn from the most robustly measured of the relevant independent variables
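
A rough sketch of that recipe on synthetic data (standard scikit-learn tools, nothing from the actual PITF variable set): a kitchen-sink model over a hundred variables establishes the out-of-sample ceiling, and a simple model on the four most robust variables lands at essentially the same number.

```python
# Step 1: throw ~100 variables at a flexible learner to find the out-of-sample ceiling.
# Step 3: keep a handful of the most important variables in a simple model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=4000, n_features=100, n_informative=4,
                           flip_y=0.2, random_state=3)   # label noise sets the "speed limit"

big = GradientBoostingClassifier(random_state=0)
ceiling = cross_val_score(big, X, y, cv=5).mean()

top4 = np.argsort(big.fit(X, y).feature_importances_)[-4:]   # the few variables that matter
operational = cross_val_score(LogisticRegression(max_iter=1000), X[:, top4], y, cv=5).mean()

print(f"100-variable ceiling: {ceiling:.3f}   4-variable operational model: {operational:.3f}")
```

Step 2 in the list is simply the check that this ceiling is in the neighborhood of what human forecasters manage on the same cases.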

This is, of course, quite a different approach than the modeling gigantism practiced by the organization-that-shall-not-be-named under the guidance of the clueless charlatans who have quite consistently been directing it down fruitless blind alleys—sort of a reverse Daniel Boone—for decades. Leaving aside the apparent principle that these folks don’t want to see the problems solved—there are no further contracting funds for a solved problem—I believe there are two take-aways here:

  • Anyone who is promising arbitrarily high levels of accuracy is either a fool or is getting ready to skin you. If government funding is involved, almost certainly the latter. There are “speed limits” to predictive accuracy in every open complex system.
  • Anyone who is trying to sell a black-boxed predictive model based on its complexity and data requirements is also either a fool or is getting ready to skin you: everything in our experience shows that simple models are the most robust models over the long term.
Posted in Methodology | 5 Comments

Violence in Charlottesville and what we might gain from the Heather Heyers of this world

As you’re probably aware, things have been rather, well, difficult in these parts over the past few days. Living in State College I quickly learned that when you find your town on the front page of the New York Times and Washington Post things are probably not going particularly well, and here in Charlottesville I’ve learned that if you get that attention plus the lead article on The Economist Espresso app, things are really not going well.

As my initial grist for this entry, I’d written a good 5500 words meticulously employing appropriate theories of collective action and the usual obtuse historical analogies—those to whom I owe a couple of reports, sorry—but with the utterly mind-boggling levels of craziness pouring out of the White House, the subtlety of any detailed analysis would be lost to the winds, to say nothing of potentially being misinterpreted. And on further reflection, this is not a time for analysis, but neither is it a time for silence, and consequently I’m just going to go with my gut.

A few hours ago I attended, along with about a thousand other people, the public funeral for Heather Heyer, the woman killed on Saturday in an act of domestic terrorism. And yes, terrorism is precisely what it was, by every known definition of the term. Coming out of that event, I can say with certainty that Heather is not someone who died because she was in the wrong place at the wrong time. No, Heather Heyer was murdered because she was exactly where she wanted to be, as a witness to the causes of justice, tolerance and equality to which she had been fiercely committed her entire life.

But that funeral brought home another message that I think is even more important, and too easily missed. That strength and commitment were shown not just by Heather but by the entire network of people who took to the microphone to speak in her memory: her father, pastor, friends, boss and, dramatically, her mother, Susan Bro, who transcended the pain and grief of losing her only daughter to make a powerful and impassioned statement for the values Heather had lived by.

And who were these people? Heather had only a high school education. She was raised by her single mom and her grandparents. The accents we heard were the soft tones of rural Virginia and the powerful cadences of African-American churches, not the carefully refined language of Ivy League eating clubs or the arrogant bombast of TED-X talks. Her African-American boss managed a bankruptcy firm and had hired Heather when she was a waitress, telling her that she had smarts and a work ethic, and he could teach her what she needed to know to become a paralegal. Smarts, a work ethic, and deep reserves of empathy, as he related a story of watching Heather gently work with a dual-career couple with multiple advanced degrees who, nonetheless, found themselves filing for bankruptcy.

Heather Heyer, with just a high school degree, and growing up in central Virginia, is the sort of person who would be completely invisible, totally beyond even the remotest consideration, to those of us in the tech community.

Heather’s funeral was held at the Paramount Theater, the largest venue in the downtown. As it happened, the last time I’d been in that theater I’d been listening to the rants of a foul-mouthed misogynistic venture capitalist, later exposed as one of the most notorious serial sexual harassers on the West Coast, who had been brought in with taxpayer assistance to be glorified as an exemplar on whom we should model our lives. The ersatz “tech festival” sponsoring him went on in a similar vein for days—perfect people with their perfect accents, perfect degrees, perfect bodies, perfect LinkedIn profiles and perfect access to Other People’s Money.  And yet in the one hour of Heather’s funeral, I heard more wisdom than I found in three days of that earlier celebration of education and entitlement.

Charlottesville is a wonderful place to be a tech developer, and I want that to continue. But there is more to life than technical acumen, advanced degrees, and knowing the right people. Charlottesville, and the world, is not just tech and venture capital, but people like Heather Heyer and her amazing family and friends, and their profoundly deep values, moral strength, and commitments. Let’s not forget that.

And to the heavily-armed jokers who descended upon our quiet community to march in Nuremberg-style torchlit parades chanting “You will not replace us”: we will replace you. Oh, most assuredly we will replace you.

And it is on the strength and convictions of people like Heather Heyer and Susan Bro that we will replace you.

Posted in Politics | 2 Comments