This is the second part of a two-essay series addressing some of the features one might wish to include in a contemporary “data science” program using resources in existing quantitative “social science” programs. The first, a rather rambling polemic, addressed a series of more general questions about the state of the “social sciences”, concluding that, as sciences, they are quite mature, and have been so for decades: People who have played “bet your career” on the systematic study and prediction of human behavior are, shall we say, generally doing just fine.
This essay moves on to some more specific questions on how social science approaches might be adapted given the rapid developments in analytical methods that have been occurring to a large degree elsewhere, typically under the titles “machine learning” or “data science.”
These observations should be taken with, well, maybe a truckload of salt as they are based on an assortment of unsystematic primary and secondary observations of the current “data science” scene as viewed from an infinitesimally minor player located in the southernmost suburb of Washington, DC: Charlottesville, Virginia (or as we call it locally, CVille). Despite having not a lot of skin in the game, dogs in the fight, or monkeys in the circus—merely an obnoxious penchant for clichéd metaphors—the rapid evolution of this “space”, as the contemporary phrasing goes, has been interesting to observe.
So, let’s get this show on the road, and our butt in gear. I’m generally addressing this to those who might be considering modifying one or more parts of an existing [implicitly academic] social science curriculum to be more [implicitly applied] “data science friendly.” As the number of people in that situation is rather limited—albeit I’m going to be talking to some in the near future—these observations may also be of some utility to the much larger number of people who are wondering “hey, what differentiates a well-trained social science statistician from a data scientist?” [Answer: mostly the title on their business card…and probably their compensation.]
As usual, seven observations, in this instance phrased as sometimes nuanced pairs moving from existing social science “statistics” approaches in the direction of a newer “data science.”
1. “Science” vs “Engineering”
I am very much a child of the “Sputnik generation” and some of my earliest technical memories were discussions of the actual Sputnik, orbiting overhead, then watching U.S. rockets launching (and, presumably, blowing up) on our small television, then the whole space-race thing culminating in watching the first moon landing live. This was a [more civilized] period where elites, whether conservative or liberal, revered “science” and as a duly impressionable youngster, I imagined myself growing up to be “a scientist.”
Which I did, though in ways I definitely hadn’t imagined while watching rockets blow up, utilizing the fact that I was pretty good at math and was eventually able to combine this with my love of history and a good intuitive understanding of politics. All in all, it made for a nice academic career, particularly during a period of massive developments in the field of quantitative political science.
But despite my technical credentials, I was always a bit uncomfortable with the “science” part of things, and all the more so because I don’t have an intuitive understanding of philosophy: when you hang around people who do, you quickly realize when you don’t. Sure, I can piece together an argument, much as one pieces together a grammatically correct sentence in a language one is just beginning to learn, but it’s not natural for me: I’m not fluent.
“Science” is nonetheless the prestige game in the academic world—particularly for Sputnik-generation Boomers and their slacker “Greatest Generation” overlords—and it was only after getting out of that world four years ago, and in particular into the thriving software development ecosystem in CVille, that I finally realized what I’m actually good at (and intuitive about): the problem-solving side of things. Which is to say, engineering rather than science.
What’s the difference? Science is pursuing ultimate truths; engineering is solving immediate problems. I’m going to go seriously geek on you here, but consider the following two [trust me, more or less equivalent] issues:
- Considering the issues of both programmer and execution time, and software maintainability, are concurrent locks or transactional memory the better solution for handling multiple threads accessing memory?
- If I need to scale access to this database, will I screw it up?
The first is a scientific problem—mind you, probably unanswerable—and the second is an engineering problem.
Everything in an academic culture is going to point towards solving scientific problems.  And very slowly. Students seeking employment outside of academia need to know general scientific principles, but ultimately they are going to be hired, compensated, and promoted as engineers.
2. Statistics vs Machine Learning
As the table below shows in some detail, the statistical and machine learning (ML) approaches appear to be worlds apart, and the prospect of merging these would appear at first to be daunting:
| | Statistics | Machine learning |
|---|---|---|
| Primary objective | Determining if a variable has a “significant” effect | Classification |
| Theoretical basis | Probability theory | “Hey, I wonder if this will work??” |
| Feature space | Variable limited | Variable rich |
| Measurement | Should be careful and consistent | Whatever |
| Cases labeled? | Usually | Maybe, maybe not (supervised vs unsupervised learning) |
| Heterogeneous cases? | Nooooo…. | Bring it on… |
| Explicit data generating process? | Ideally | Rarely |
| Evaluation method | Usually full sample | Usually split sample (training/test) |
| Evaluation metrics | Correlation | ROC AUC; accuracy/precision/recall |
| Importance of good fit? | Limited: objective is accurately assessing error given a model | How else will I get published? |
| Time series | Specialized models covered in 800-page books | Just another classification problem |
| Foundational result | Central limit theorem | Web scraping |
| Sainted ancestor | Carl Friedrich Gauss | Karen Spärck Jones |
| Distribution naming conventions | Long dead European males | Distributions? |
| Software naming conventions | Dysfunctionally abbreviated acronyms impossible to track on Google | Annoyingly cute Millennial memes |
| Secret superpower | Client is totally dependent on us to interpret the numbers | Client doesn’t know we just downloaded the code from StackOverflow |
| Logistic regression is embarrassingly effective? | Yes | Yes |
But from another perspective, these differences may actually be a good thing: it is quite possible that ML, like a rising tide, an invasive species finding empty ecological niches, coffee spilled on a keyboard, or whatever your preferred metaphor, has simply occupied the rather substantial low ground that the more constrained and generally analytically-derived statistical approaches have left vacant. Far from being competitors, they are complements.
This suggests that the answer to “statistics or machine learning?” is “both.” I think this is particularly feasible because, while ML would be in addition to the statistical curriculum that has been carefully refined over the better part of a century, in the applied work I’m seeing, the bulk of practical ML really comes down to four methods:
- clustering, usually with k-means
- support vector machines
- random forests
- neural networks, most recently in their “deep learning” modes
These are the general methods and do not cover more specialized niches such as speech and image recognition and textual topic modeling, but the degree of focus here is truly extraordinary given the time, effort and machine cycles that have been thrown at these problems over the past fifty years. Each method has, of course, a gadzillion variations—this is the “hyperparameter” issue for which there are also ML tools—but typically just going with the defaults will get you most of the way to the best solution you can obtain with any given set of data. Which is to say, the amount of ML one absolutely has to learn to do useful work in data science is quite finite.
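To give a sense of just how finite that is, here is a minimal sketch of all four workhorse methods using scikit-learn defaults; the synthetic data and names below are purely illustrative, not drawn from any real project.

```python
# The four workhorse ML methods, run with (mostly) scikit-learn defaults
# on synthetic data. Illustrative only.
from sklearn.datasets import make_classification, make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC                       # support vector machine
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier  # a small neural network
from sklearn.cluster import KMeans

# Labeled data for the three supervised methods
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

scores = {}
for model in (SVC(),
              RandomForestClassifier(random_state=42),
              MLPClassifier(max_iter=2000, random_state=42)):
    model.fit(X_train, y_train)
    scores[type(model).__name__] = model.score(X_test, y_test)

# Unlabeled data for clustering, usually with k-means
X_blobs, _ = make_blobs(n_samples=300, centers=3, random_state=42)
clusters = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(X_blobs)
```

Each of these has a gadzillion tunable knobs, but out of the box they will usually land within shouting distance of anything a lengthy hyperparameter search produces.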
3. Analytical mathematics vs programming
I have [fruitlessly] addressed this topic in much greater detail elsewhere but the bottom line is that for data science applications, you certainly need to be able to comprehend algebraic notation, ideally to the point where you can rapidly and idiomatically skim equations (including matrix notation) in order to figure out where a proposed new method fits into the set of existing techniques. It certainly doesn’t hurt to have the equivalent of a couple semesters of undergraduate calculus, though [I think] I read somewhere that something like two-thirds of college graduates have this anyway (that figure probably includes a lot of AP credits). But beyond that, the return on investment in the standard analytical mathematical curriculum declines rapidly because most new methods are primarily developed as algorithms.
The same can probably also be said for most of the formal coursework dealing with computer programming: it is very helpful to learn more than one computer language in some depth, learn something about data structures, including objects, and get some sense of how computers work at a level close to machine language (C is ideal for this, as well as useful more generally), but beyond that point, in the applied world, it’s really just practice, practice, practice.
While the rate of computer language and interface innovation has certainly slowed—C and Unix are now both nearing their half-century mark—it remains the case that one can be almost certain that in 2025 there will be some absolutely critical tool in wide use that only a handful (if that) of people today have even heard of. This is a completely different situation from analytical mathematics, where one could teach a perfectly acceptable calculus course using a textbook from 1850. As such, the value of intensive classroom instruction on the computer side is limited.
4. R vs Python
So, let the flame wars begin! Endlessly!
Well, maybe not. Really, you can manage just fine in data science in either R or Python (or more to the point, with the vast panoply of Python libraries for data analytics) but at least around here, the general situation is probably captured by a pitch I heard a couple weeks ago by someone informally recruiting programmers: “We’re not going to tell you what language to use, R or Python, just do good work. [long pause] Though we’re kinda transitioning to Python.”
So here’s the issue: Python is the offspring of a long and loving relationship between C and Lisp. Granted, C is the modest and dutiful daughter of the town merchant, and Lisp is the leader of a motorcycle gang, but it works for them, and we should be happy. Java, of course, is the guy who was president of student council and leaves anonymous notes on the doors of people who let their grass grow too tall. 
R is E.T.
I will completely grant that R has been tremendously important in the rapid development of data analytics in the 21st century, through its sophistication, its incredible user community, and of course the fact that it is open source. The diffusion of statistical, and later machine learning, innovation made a dramatic leap when R emerged as a lingua franca to displace most proprietary systems except in legacy applications.
But to most programmers, R is just plain weird, and at least I can never escape the sense that I’m writing scripts that run on top of some real programming language.  Whereas Python (and Java) are modern general purpose languages with all of the things you expect in a programming language—and then some. Even though, like R, both are scripted and also running on top of interpreters (in Python, once again C), when you are writing code it doesn’t feel like this. At least to me…and, as I watch the ever-increasing expansion of Python libraries into domains once dominated by R, I don’t think it’s just me.
Do not show any of this to any group of programmers without being aware that they will all disagree vehemently with all of it. Including the words “and” and “the.” Shall we move on?…
5. Small toy problems vs big novel problems
It’s taken me quite a while, and no small amount of frustration, to finally “get it” on this, but yeah, I think I finally have: academic computer scientists like small “toy” problems (or occasionally, big “toy” problems) because those data sets are already calibrated. The only way you can tell whether the new technique you are trying to get published is actually an improvement (or, more commonly, assess the various tradeoffs involved with the technique: rarely is anything an improvement on every dimension) is by testing it on data sets that lots of other methods have already been tested on, and which are thoroughly understood. Fair enough.
Unfortunately, that’s not what the applied world looks like—we’re back to that “science” vs “engineering” thing again—where the best opportunities, almost by definition, are likely to be those involving data that no one has looked at before and which are not well understood. If the data sets that are the focus of most computer science development and evaluation covered all of the possibilities in the novel data we’d still be okay, but almost by definition, they won’t, so we aren’t.
I realize I’m being a bit unreasonable here, as increasing the corpora of well-understood data sets is both difficult and, if computer science attitudes towards data collection are anything like those in the social sciences, largely unrewarded, but (easy for me to say…) could you at least try a bit more? Something other than irises and the emails between the long-incarcerated crooks at Enron?
6. Clean structured data vs messy unstructured data
This looks like the same problem as the above, but is a bit different, and more practical than theoretical. As everyone who has ever done applied data science work will confirm, most of one’s time (unless you’re at a shop where someone else does this for you) is spent on data preparation. Which is never exactly what you expect and, if you are dealing with data generated over an extended period of time, it probably has to be done for two or three subtly different formats (as well as coping with the two or three other format changes you missed). Very little of this can be done with standard tools, much of the work provides little or no intellectual rewards (beyond building software tools you at least imagine you might use at a later date), and it’s mostly just a long slow slog before you get to the interesting stuff.
This does not translate well to a classroom environment:
Welcome to my class. Before we begin, here’s a multi-gigabyte file of digital offal that is going to take a good six weeks of tedious work before you can even get it to the point where the groovy software you came here to learn can read it, and when you finally do, you’ll discover about ten assumptions you made in the initial data cleaning were incorrect, leading to two additional weeks of work, but since eight weeks is well past the drop date for this class, you’ll all be gone by that point, which is fine with me because I’d rather just be left alone in my office to work on my start-up. Enjoy.
No, if you want to learn about technique, better to work with data you know is okay, and devote your class time to experimenting with the effects of changing the hyper-parameters.
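On known-good data, that sort of hyperparameter experimentation is a one-screen exercise. A sketch (using, yes, the irises, which at least qualify as “data you know is okay”; the grid values are illustrative):

```python
# Grid-search a couple of random forest hyperparameters on a clean,
# thoroughly understood data set; the grid itself is illustrative.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [10, 100], "max_depth": [2, None]},
    cv=5,  # 5-fold cross-validation scores each combination
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Students can then argue about why the winning combination won, which is a far better use of class time than six weeks of file wrangling.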
Again, I don’t have an obvious solution to this: Extended senior and M.A. projects with real data, possibly with teams, may be a start, though even there you are probably wasting a lot of time unless the data are already fairly clean. Or perhaps the solution is just to adopt the approach of law schools, which gradually removed all of the boring stuff to the point where what is taught in law school is all but completely irrelevant to the actual practice of law. Worked for them! Well, used to…
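As for those “two or three subtly different formats”: a minimal illustration of the normalization slog in pandas, where the column names and layouts below are hypothetical, not from any real data set.

```python
# Two hypothetical vintages of "the same" data: columns renamed and a
# field added partway through the collection period. Illustrative only.
import io
import pandas as pd

old_vintage = io.StringIO("date,actor,event\n1995-01-15,USA,190\n")
new_vintage = io.StringIO("Date,Source,EventCode,Target\n2012-01-15,USA,190,IRQ\n")

def normalize(raw: pd.DataFrame) -> pd.DataFrame:
    """Map whichever layout arrived onto one canonical schema."""
    df = raw.rename(columns=str.lower).rename(
        columns={"source": "actor", "eventcode": "event"})
    df["date"] = pd.to_datetime(df["date"])
    # Fields absent from older vintages become explicit missing values
    return df.reindex(columns=["date", "actor", "event", "target"])

combined = pd.concat(
    [normalize(pd.read_csv(f)) for f in (old_vintage, new_vintage)],
    ignore_index=True)
```

Multiply this by every undocumented format change you didn’t know about, and the six-weeks-of-tedium estimate starts to look optimistic.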
7. Unicorn-aspiring start-up culture vs lean and sustainable business culture
This one isn’t really an academic issue, but a more general one dealing with practical preparation for students who will be venturing into the realm of applied—which is to say, commercial—data analytics. As those who follow my tweets know, here in innocent little CVille we recently completed an annual affair I referred to as the hip hipster fest, a multi-day taxpayer-subsidized celebration of those who were born on third base and go through life acting like they hit a triple. It was, to a remarkable degree, a parody of itself—the square-jawed hedge fund managers holding court in invitation-only lunches, the Martha Stewart wannabe (and well on her way) arguing the future of the city lay in the creation of large numbers of seasonal, minimum wage jobs catering to the fantasies of the 1%, the venture capitalist on stage in flip-flops who couldn’t complete a paragraph without using the F-word at least once. Everywhere the same monotonously stereotypical message: aim big, don’t be afraid to fail!
Yeah right. Don’t be afraid to fail, so long as you come from a family of highly educated professionals, went to a private high school, Princeton, Oxford, Harvard, and married someone who had tenure at Harvard. Under those circumstances, yeah, I’d say you probably don’t need to be afraid to fail.
Everyone else: perhaps a little more caution is in order. And oh by the way, all those people telling you not to be afraid to fail?: if you succeed, they are going to get the lion’s share of the wealth you generate. And if you do fail—and in these ventures, the odds are absolutely overwhelming this will be the outcome—they’ll toss you aside like a used kleenex and head out looking for the next wide-eyed sucker “not afraid to fail.” Welcome to the real world.
So over the past few years I’ve come around to seeing all these things—endlessly celebrated in the popular media—as more than a bit of a scam, and have formulated my alternative: train people to create businesses where, as much as possible, you can sustainably be assured that you can extract compensation roughly at the level of your marginal contribution.  That’s not a unicorn-aspiring start-up—leave those for the sons and daughters of the 1%—and it is certainly not the “gig economy,” where the entire business plan involves some company making sure you are getting far less than your marginal contribution, buying low (you) and selling high (them). Stay away from both and just repeat “Lean and sustainable: get compensated at the rate of your marginal contribution.”
It’s an old argument—Vicki Robin and Joe Dominguez’s 1990s best-seller Your Money or Your Life focused on exactly this principle, and in his own hypocritical fashion, it was the gist of Thomas Jefferson’s glorification of the [unenslaved] yeoman farmer as the foundation of a liberal democracy. Mind you, in point of fact Jefferson was born into wealth and married into even more wealth, and didn’t have the business acumen to run a 10-cent lemonade stand, whereas his nemesis Alexander Hamilton actually worked his way up from the utter dregs of poverty so, well, “it’s complicated,” but—as with so much of Jefferson—there is a lot useful in what he said, if not in what he actually did.
Getting back to the present, if you want to really help the data scientists you are planning to send out into the world, tell them not to get suckered into the fantasies of a fairy-tale start-up, and instead acquire the practical skills needed to create and run a business that can sustain itself—with minimum external funding, since banks and hedge funds are certainly not going to loan to the likes of you!—for a decade or more. Basic accounting, not venture capital; local marketing, not viral social networking; basic incorporation, payroll and tax law, not outsourcing these tasks to guys in expensive suits doing lines of cocaine. And fundamentally, finding a viable business niche that you can occupy, hold, and with the right set of skills and luck, expand over a period of years or decades, not just selling some half-baked idea to the uncle of one of your frat brothers over vodka shots in a strip club.
A completely different approach than promoting start-up culture, and it’s not going to get on the front pages of business magazines sold in airline terminals but, you know, it might just be what your students actually need. 
And such an approach might also begin to put a dent in the rise of economic inequality through a process more benign than revolution, war, or economic catastrophe. That would be sorta nice as well, eh?
1. Or C-Ville or Cville.
2. One of the [few] interesting conversations I had at the recent CVille hip hipster fest—mocked in extended detail below—was with a young—well, young compared to me, though that’s true of most things now other than sequoia trees—African-American woman working as a mechanical engineer at a local company. Well, contractor. Well, in a SCIF, and probably developing new cruise missile engines, but this is CVille, right? Still, it’s Hidden Figures, MMXVII. Anyway, our conversation was how great CVille was because of the large community of people who work on solving complex technical problems, and how helpful it was to be surrounded by people who understood doing that for a living, even though the applied domains might vary widely. A very different sort of conversation than I would have had as an academic.
3. More of an issue for the previous essay, as the organization-which-shall-not-be-named is obsessed with this, but a somewhat related issue here is the irrelevance of “grand theory” in applied social science.
Let’s start with a little thought experiment: You’re shopping for a car. What is “the grand theory of the car” of General Motors compared to Toyota? Honda versus Volkswagen? And just how much do these “grand theories of the car” factor into your buying decision?
Not in the least, right? Because there are no grand theories of the car, but instead multiple interlocking theories of the various components and systems that go into a car. Granted, the marketing people—most of whom probably couldn’t even handle the engineering challenges involved in changing a tire—would dearly love you to believe that, at astonishingly fundamental levels, a Toyota is fantastically distinct from a Honda and this is because of the deep cultural and, yea, spiritual differences between the Toyota way—the Dao of Toyota—and the Honda way, but that’s all it is: marketing. Which is to say, crap. They’re just making cars, and cars are made of parts.
And so it is with humans and human societies, which have evolved with a wide variety of components to solve a wide variety of problems. Some of these solutions are convergent—somewhere in one of Steven Pinker’s books, I think The Language Instinct, he has a list some anthropologist put together of characteristics of human societies that appear to be almost universal, and it goes for about four pages—and in other cases quite divergent (e.g. the dominant Eastern and Western religious traditions are diametrically opposed on both the existence of a single omnipotent deity and whether eternal life is something to be sought after or escaped from). There are very useful theories about individual components of human behavior, just as one can more or less identify a “theory of batteries”, “theory of tires”, or even—about as close as we come to something comprehensive—a “theory of drive trains”, notably those based on electricity and those based on internal combustion engines. These various theories overlap to limited degrees, but none is a “theory of the car,” and one doesn’t need a grand “theory of society” to systematically study human behavior.
Such theories are, in fact, generally highly counter-productive compared to the domain-specific mid-level theories. The other dirty little secret which I’ve found to be almost universally shared across disciplines involved in the study of humans—the humanities as well as the social sciences—is that individuals obsessed with grand theories are typically rather pompous, but fundamentally sad, people who don’t have the intelligence and/or experience to do anything except grand theory. And as a consequence eventually—or frequently after just one or two years in grad school—they don’t get to play in any reindeer games. Maybe for a dozen people in a generation this is not true—and the ideas of only half of even those survive beyond their generation—but for the rest: losers.
There’s a saying, I’m pretty sure shared by researchers on both sides of the political divide, about studies of the Israeli-Palestinian conflict: “People come for a week and they write a book. They come for a month and they write an article. They come for a year and they can’t write anything.” Yep.
4. There’s a fascinating article to be written—how’s that for a cop-out?—on the decline of clustering methodology (or what would now be called unsupervised models) in quantitative political science. Ironically, when computer analyses first became practical in the 1960s, one actually saw a fair amount of this because there had been extensive, and rather ingenious, methods developed in the 1930s and 1940s in the fields of psychology (notably Cattell) and educational testing in order to determine latent traits based on a large number of indicators (typically test questions). These techniques were ready and waiting to be applied in new fields once computers reduced the vast amount of labor involved, and for example some of the earliest work involving clustering nation-states based on their characteristics and interactions by the late Rudolph Rummel—ahead of the curve and out of the box throughout his long career—would now seem perfectly at home as an unsupervised “big data” problem.
But these methods didn’t persist, and instead were almost completely shunted aside by frequentism (which, one will observe throughout these two essays, I also suspect may be involved in causing warts, fungal infections in gym shoes, and the recent proliferation of stink bugs) and by the 1990s had essentially disappeared in the U.S. Why?
I suspect a lot of this was the ein Volk, ein Reich, ein Führer approach that many of the proponents of quantitative methods adopted to get the approach into the journals, graduate curricula and eventually undergraduate curricula. This approach required—or was certainly substantially enhanced by—simplifying the message to The One True Way of doing quantitative research: frequentist null hypothesis significance testing. Unsupervised methods, despite continuing extensive developments elsewhere, did not fit into that model.
The other issue is probably that humans are really good at clustering without any assistance from machines—this seems to be one of the core features of our social cognition. As I noted in the previous essay, Aristotle’s empirically-based typology of governance structures holds up pretty well even 2,400 years later, and considerably better than Aristotle’s observations on mechanics and physiology. Whereas human cognition is generally terrible at probabilistic inference, so in this domain systematic methods can provide substantial added value.
5. In 2000, I went to a large international quantitative sociology meeting in Cologne and was amazed to discover—along with the fact that evening cruises on the Rhine past endless chemical plants are fun provided you drink enough beer—a huge research tradition around correspondence analysis (CA), which is a clustering and reduction-of-dimensionality method originally developed in linguistics. It was almost like being in one of those fictional mirror worlds, where everything is almost the same except for a couple key differences, and in this case all of the sophistication, specialized variations and so forth that I was seeing in North America around regression-based methods were instead being seen here in the context of CA. I was actually quite familiar with CA thanks to a software development gig I’d done earlier—at the level of paying for a kitchen remodel no less—with a Chicago-based advertising firm, for which I’d written a fairly sophisticated CA system to do—of course—market segmentation, but few of the other Society for Political Methodology folks attending (as I recall, several of us were there on a quasi-diplomatic mission) had even heard of it. I never followed up on any of this, nor ever tried to publish any CA work in political science, though in my methodology classes I usually tossed in a couple examples to get across the point that there are more things in heaven and earth, Horatio, dudes, than are dreamt of in your philosophy copy of King, Keohane and Verba.
6. This may be an urban legend—though I’m guessing it is not—but factor analysis (easily the most widely used of these methods) took a hit when it was discovered that a far-too-popular routine in the very early BMDP statistical package had a bug which, effectively, allowed you to get whatever results you wanted from your data. Also making the point that the “reproducibility crisis” is not new.
7. “LDA”: latent Dirichlet allocation or linear discriminant analysis? “NLP”: natural language processing or nonlinear programming? “CA”: content analysis or correspondence analysis?
8. Even if still uniformly detested by students…hey, get over it!…and just last night I was at yet another dinner party where yet another nursing student was going into some detail as to how much she hated her required statistics class: I have a suspicion that medical personnel disproportionately prescribe unusually invasive and painful tests, with high false positive rates, to patients who self-identify as statisticians. Across the threshold of any medical facility, I’m a software developer.
9. Or more realistically, a semester of differential calculus and a semester of linear algebra, which some programs now are configured to offer.
10. The very ad hoc, but increasingly popular, t-SNE reduction-of-dimensionality algorithm is a good example of this transition when compared to earlier analytically-derived methods such as principal components and correspondence analysis, which accomplished the same thing.
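To make the contrast concrete, a minimal sketch: PCA is a closed-form linear projection, while t-SNE is an iterative, stochastic algorithm with no analytical solution. The data here is random noise, purely for illustration.

```python
# Analytically-derived vs algorithmic dimensionality reduction.
import numpy as np
from sklearn.decomposition import PCA  # closed-form linear projection
from sklearn.manifold import TSNE      # iterative, stochastic algorithm

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))  # illustrative random data

X_pca = PCA(n_components=2).fit_transform(X)                      # deterministic
X_tsne = TSNE(n_components=2, random_state=42).fit_transform(X)   # run-dependent
```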
12. Encounters of this type are one of the reasons I no longer live in Pennsylvania, though the township minders, not the neighbors, were the culprit.
13. Like those government contracts where you sense that what they’d really like to require is that all computations be done in cuneiform on clay tablets. Because clay tablets have a track record for being very stable and were more than adequate for the government of Ashurbanipal.
But they don’t require cuneiform. Just MatLab.
14. Yes, sophisticated R programmers will just write critical code in C, effectively using R as a [still very weird] data management platform, but that’s basically C programming, not R.
15. Said subsidies furtively allocated by the city council in a process outside the normal review for such requests, which would have required the hip hipster fest to be audited, but that’s someone else’s story. Ideally someone doing investigative journalism.
16. Okay, some folks who clearly knew what they were doing put together an impressive 4-hour session on machine learning. Though I’m still not clear whether the relevant descriptor here was symbiosis or parasitism.
17. Dude, they’ve got drugs to treat that sort of condition now, don’t you know?
18. Caveat: I’ve now supported myself at a comfortable income level for almost four years as a software developer and data analyst—pretty much a successful business model by most reasonable definitions—but in that time I have not once managed to create a company whose capitalization exceeded a billion dollars! Alas, my perspective on business practice is undoubtedly affected by this.
19. But please do not expose students to the Pennsylvania corporate tax form[s] RCT-101 or they will immediately decide it would be way more pleasant to, say, become a street performer who chews glass light bulbs for spare change. Small business: just say no to Pennsylvania. I digress.
20. In light of the extended discussions over the past few days about just how psychologically challenging the academic world can be—teaching usually is rewarding but the rest of the job is largely one of nearly endless rejection and insult—another decided advantage of dealing with relatively short-term engineering problems—assuming, of course, that one can solve these successfully—is that one gets a lot of immediate gratification. And in the U.S. at least, running a small business has generally positive popular connotations even if, in practice, The Establishment, and both political parties (small business ≠ pay-to-play), and certainly Mr. Stiff-the-contractors President are very hostile, though probably no more so than they are towards academics. So individuals following this path are likely to be happier.
21. As it is partially paid for by tax dollars, I’d like to see the CVille hip hipster fest showcase some guy who graduated from the local community college and now runs his own welding shop, or some women from similar circumstances who are starting up a landscaping company. I’d also like to see pigs fly, which I’m guessing will happen first.
22. A very concrete example of this problem arose about a week later when I was attending a local geek-fest whose purpose is largely, if unofficially, recruiting. While I’m not [currently] recruiting, I had some interesting chats with some folks who expressed interest in what I’m doing (and I’ve realized that, compared to what most people in data science end up doing, the tasks typically undertaken by Parus Analytics, a really, really small little operation, actually are quite interesting…), so I asked them whether they had business cards.
They didn’t have business cards.
Look, I am not going off on an extended screed about stupid Millennials: how could you not have business cards?! (“Scotty, are the snark generators fully charged and engaged?” “Ay, Capt’n, and I’ve got them set at 11.” “Excellent, so….GET OFF MY YARD!!!” I digress…) No, my point is that those sweet and helpful people who are telling Millennials to network, lean-in, and don’t-be-afraid-to-fail-because-you-have-a-large-trust-fund,-don’t-you? should also be advising “Don’t go to a networking/recruiting event without business cards,” and, while on the topic, point out that 200 nicely individualized (from the zillion available templates) business cards cost about the same as a growler of craft beer, and take five minutes to set up and order; if you hit a sale (likely common around graduation time), they run about the cost of a pint of craft beer.
But in the absence of a business card—yes, they seem very archaic but it’s hard to beat the combination of cost, bandwidth and specificity—the person you are trying to impress is far less likely to remember your name, and will probably misspell the URL of that web site you so carefully developed and instead get onto the home page of some Goth band whose leader bites off the heads of bats and whose drummer sells products for Amway.
Dear Professor Schrodt,
Thanks, as always, for this brilliant blog post! Keep ’em up, we’re big fans and lovers of it here in Geneva, Zurich and Switzerland in general.
One question arose among us, though: in the ROTFL passage on Python vs. R and the town merchant daughter C & motorcycle gang leader Lisp bit, we were wondering if you meant C or C++ – and whether things would change if it were C++ instead (and if so, what C++ would be, if not also the town merchant’s daughter…)
Greetings from over here!
Georg von Kalckreuth (and others)
Good question, as C++ is quite a different “person” than C, and probably comes closer to the merchant’s dutiful daughter. C, I suppose, would more accurately be a feral child raised by wolves (yeah, come to think of it, Bell Labs in the late 1960s was about as close as you can get to being raised by wolves…), for whom, of course, there would be an obvious mutual attraction to the leader of a motorcycle gang. Which, in fact, is probably why Python (in particular as contrasted with perl) has worked as well as it has. C++ “civilized” C (maybe the relevant reference here is Pygmalion/My Fair Lady?), though I sense that the Cython crowd (and C spin-offs such as the C-influenced Go language out of Google) are attracted by the relative simplicity of C compared to all of the extras continually piled onto C++. The same could be said for Guido van Rossum’s strong preference for keeping the vocabulary of Python simple (probably too simple, e.g. the fact that the “else” keyword is used in three very different contexts): that’s C thinking, not C++ thinking.
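For readers who haven’t run into all three, the overloaded `else` mentioned above can be shown in a few lines; the variable names here are purely illustrative:

```python
# Python's "else" keyword in its three distinct constructs.

# 1. The familiar conditional else.
x = 5
if x > 3:
    label = "big"
else:
    label = "small"

# 2. A loop else: runs only if the loop completed without hitting "break".
for n in [4, 6, 8]:
    if n % 2 != 0:
        break
else:
    all_even = True  # reached here because no break occurred

# 3. A try/except else: runs only if no exception was raised in the try body.
try:
    value = int("42")
except ValueError:
    value = None
else:
    parsed_ok = True

print(label, all_even, parsed_ok)  # → big True True
```

The conditional `else` behaves as in C; the loop and `try` versions have no C counterpart at all, which is arguably the “too simple” vocabulary reuse being complained about.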
Pingback: What if a few grad programs were run for the benefit of the graduate students? | asecondmouse
Pingback: Happy 60th Birthday, DARPA: you’re doomed | asecondmouse