A still from Corridor Digital’s parody of robotics company Boston Dynamics.

The Smart, the Stupid, and the Catastrophically Scary: An Interview with an Anonymous Data Scientist

We’re constantly inundated with stories about how data science, machine learning, deep learning, and artificial intelligence are revolutionizing everything. But what do these terms even mean? And are they likely to have anywhere near the impact that the media hype would lead us to believe?

We sat down with a veteran data scientist to help us answer these questions. Over beer and Chinese food, we spent hours discussing a wide array of subjects, ranging from neural networks to algorithmic racism.

Later, when we transcribed and edited our conversation, we realized we had way more than we needed for a single piece. So we broke our conversation into four installments, each organized around a different theme—“Introduction,” “Deep Learning,” “FinTech,” and “The Future”—and decided to distribute them throughout the issue. For the web version, we’ve included each of those installments below.

This interview was conducted in November 2016.

LOGIC: Alright, let’s get started with the basics. What is a data scientist? Do you self-identify as one?

DATA SCIENTIST: I would say the people who are the most confident about self-identifying as data scientists are almost universally frauds. They are not people that you would voluntarily spend a lot of time with. There are a lot of people in this category that have only been exposed to a little bit of real stuff—they’re sort of peripheral. You actually see a lot of this with these strong AI companies: companies that claim to be able to build human intelligence using some inventive “Neural Pathway Connector Machine System,” or something. You can look at the profiles of every single one of these companies. They are always people who have strong technical credentials, and they are in a field that is just slightly adjacent to AI, like physics or electrical engineering.

And that’s close, but the issue is that no person with a PhD in AI starts one of these companies, because if you get a PhD in AI, you’ve spent years building a bunch of really shitty models, or you see robots fall over again and again and again. You become so acutely aware of the limitations of what you’re doing that the interest just gets beaten out of you. You would never go and say, “Oh yeah, I know the secret to building human-level AI.”

In a way it’s sort of like my Dad, who has a PhD in biology and is a researcher back East, and I told him a little bit about the Theranos story. I told him their shtick: “Okay, you remove this small amount of blood, and run these tests…” He asked me what the credentials were of the person starting it, and I was like, “She dropped out of Stanford undergrad.” And he was like, “Yeah, I was wondering, since the science is just not there.” Only somebody who never actually killed hundreds of mice and looked at their blood—like my Dad did—would ever be crazy enough to think that was a viable idea.

So I think a lot of the strong AI stuff is like that. A lot of data science is like that too. Another way of looking at it is that it’s a bunch of people who got PhDs in the wrong thing, and realized they wanted to have a job. Another way of looking at it—I think the most positive way, which is maybe a bit contrarian—is that it’s really, really good marketing.

As someone who tries not to sell fraudulent solutions to people, I’ve found it actually has made my life significantly better, because you can say “big data machine learning,” and people will be like, “Oh, I’ve heard of that, I want that.” It makes it way easier to sell them something than having to explain a complex series of mathematical operations. The hype around it—and the fact that there’s so much hype—has made the actual sales process so much easier. The fact that there is a thing with a label is really good for me professionally.

But that doesn’t mean there’s not a lot of ridiculous hype around the discipline.

I’m curious about the origins of the term “data science”—do you think it came internally from people marketing themselves, or was it a random job title used to describe someone, or what?

As far as I know, the term “data science” was invented by Jeff Hammerbacher at Facebook.

The Cloudera guy?

Yeah, the Cloudera guy. As I understand it, “data science” originally came from the gathering of data on his team at Facebook.

If there was no hype and no money to make, essentially what I would say data science is, is that the data sets have gotten large enough that you can start to consider variable interactions in a way that’s becoming increasingly predictive. And there are a number of problems where the actual individual variables themselves don’t have a lot of meaning, or they are kind of ambiguous, or they are only very weak signals. There’s information in the correlation structure of the variables that can be revealed, but only through really huge amounts of data.

So essentially: there are N variables, right? So there’s N-squared potential correlations, and N-cubed potential cubic interactions or whatever. Right? There’s a ton of interactions. The only way you can solve that is by having massive amounts of data.
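
To make that combinatorial blow-up concrete, here is a minimal Python sketch (an editorial illustration, not something from the interview) that just counts the distinct pairwise and three-way interactions among N variables; every one of those terms is something you would need enough data to estimate.

```python
from math import comb

# Among N variables there are on the order of N^2 pairwise interactions
# and N^3 three-way interactions, and each one needs data to estimate.
for n in (10, 100, 1000):
    pairs = comb(n, 2)    # distinct pairs of variables
    triples = comb(n, 3)  # distinct triples of variables
    print(f"{n:>5} variables: {pairs:>12,} pairs, {triples:>15,} triples")
```

Even at a thousand variables, the three-way interactions already number well over a hundred million, which is why this kind of modeling is tied to very large data sets.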

So the data scientist role emphasizes the data part first. It’s like, we have so much data, and so this new role arises using previous disciplines or skills applied to a new context?

You can start to see new things emerge that would not emerge from more standard ways of looking at problems. That’s probably the most charitable way of putting it without any hype. But I should also say that the hype is just ferocious.

And even as recently as last year, there were massive bugs in the machine learning libraries that come bundled with Spark. It’s so bizarre, because you go to Caltrain, and there’s a giant banner showing a cool-looking data scientist peering at computers in some cool ways, advertising Spark—a platform that, in my day job, I know is just barely usable at best, or at worst, actively misleading.

I don’t know. I’m not sure that you can tell a clean story that’s completely apart from the hype.

For people who are less familiar with these terms, how would you define data science, machine learning, and artificial intelligence? Because as you mentioned, these are terms that float around a lot in the media and that people absorb, but it’s unclear how they fit together.

It’s a really good question. I’m not even sure if those terms that you referenced are on solid ground themselves.

I’m friends with a venture capitalist who became famous for coining the phrase “machine intelligence,” which is pretty much just the first word of “machine learning” with the second word of “artificial intelligence,” and which, as far as I can tell, is essentially impossible to distinguish from either of those.

I would say, again, “data science” is really shifty. If you wanted a pure definition, I would say data science is much closer to statistics. “Machine learning” is much more predictive optimization, and “artificial intelligence” has increasingly been hijacked by a bunch of yahoos and Elon Musk types who think robots are going to kill us. I think artificial intelligence has gotten too hot as a term. It has a consistent history, since the dawn of computing, of over-promising and substantially under-delivering.

So do you think when most people think of artificial intelligence, they think of strong AI?

They think of the film Artificial Intelligence level of AI, yeah. And as a result, I think people who are familiar with bad robots falling over shy away from using that term, just because they’re like, “We are nowhere near that.” Whereas a lot of people who are less familiar with shitty robots falling over will say, “Oh, yeah, that’s exactly what we’re doing.”

The narrative around automation is so present right now in the media, as you know. I feel like all I read about AI is how self-driving trucks are going to put all these truckers out of business. I know there’s that Oxford study that came out a few years ago that said some insane percentage of our jobs are vulnerable to automation. How should we view that? Is that just the outgrowth of a really successful marketing campaign? Does it have any basis in science or is it just hype?

Can I say the truth is halfway there? I mean, again, I want to emphasize that historically, from the very first moment somebody thought of computers, there has been a notion of: “Oh, can the computer talk to me, can it learn to love?” And somebody, some yahoo, will be like, “Oh absolutely!” And then a bunch of people will put money into it, and then they’ll be disappointed.

And that’s happened so many times. In the late 1980s, there was a huge Department of Defense research effort towards building a Siri-like interface for fighter pilots. And of course this was thirty years ago and they just massively failed. They failed so hard that DARPA was like, “We’re not going to fund any more AI projects.” That’s how bad they fucked up. I think they actually killed Lisp as a programming language—it died because of that. There are very few projects that have failed so completely that they actually killed the programming language associated with them.

The other one that did that was the—what was it, the Club of Rome or something? Where they had those growth projections in the 1970s about how we were all going to die by now. And it killed the modeling language they used for the simulation. Nobody can use that anymore because the earth has been salted with how shitty their predictions were.

It’s like the name Benedict.

Yes, exactly, or the name Adolf. Like you just don’t go there.

So, I mean, that needs to be kept in mind. Anytime anybody promises you an outlandish vision about what artificial intelligence is, you just absolutely have to take it with a grain of salt, because this time is not different.

I’m actually less optimistic about the future than I maybe should be. Because it’s hard for me to see a way out of the lump of labor fallacy—even conscious of the fact that it’s a fallacy—when it comes to something like truckers. Because our truckers are not going to become JavaScript web devs. Maybe a fraction of them will, but I don’t know.

I was talking about this with my friend who has a completely different point of view. His brother works as a video game designer, and that’s a job that didn’t exist a hundred years ago, and now he makes a really good salary doing that.

That said, his brother went to Harvard, is super smart, and frankly is probably a lot more intellectually talented and curious than a lot of the truck drivers they’re going to put out of business. And so there might be awesome jobs for people who really enjoy computers, but I kind of worry about what that looks like when computers start climbing further and further up the cognitive chain.

Part of me likes being a programmer—because we’re the last job. I can see a future—if we don’t manage to blow ourselves up first—in the robot paradise where people are either robot engineers or programmers, or I guess do marketing. Or maybe bake pies, or smell things? Those are essentially the hardest things for a computer to do. But computers do everything else.

And I don’t know. What does humanity look like? What do jobs look like in that future? I have no idea. I think it’s not going to get there on the same timeline that the Oxford people think it’s going to get there. But yeah, we’ll get there, and I don’t know, as optimistic as I want to be about it, it is really scary.

What is the “lump of labor” fallacy?

That there’s a fixed pool of jobs, and that if computers take more of them, there’s a slice of people who get laid off and nothing new for them to do. That’s the fallacy.

Like the idea of taking a slice of the pie, versus enlarging the pie?

Yeah, exactly. And I think we’ve proven in our economy that we’re very comfortable with the idea that, yeah, a bunch of specific people are going to suffer, but other people are going to benefit and we’re okay with that. And that’s our economic progress and our economic growth. But I think it might just accelerate violently as computers keep getting better and better at what they do.

Deep Learning

Is there a point at which a piece of software or a robot officially becomes “intelligent”? Does it have to pass a certain threshold to qualify as intelligent? Or are we just making a judgment call about when it’s intelligent?

I think it’s irrelevant in our lifetimes and in our grandchildren’s lifetimes. It’s a very good philosophical question, but I don’t think it really matters. I think that we are going to be stuck with specific AI for a very, very long time.

And what is specific AI?

Optimization around a specific problem, as opposed to optimization on every problem.

So like driving a car would be a specific problem?

Yeah. Whereas if we invented a brain that we can teach to do anything we want, and we have chosen to have it focus on the specific vertical of driving a car, but it can be applied to anything, that would be general AI. But I think that would be literally making a mind, and that’s almost irresponsible to speculate about. It’s just not going to happen in any of our lifetimes, or probably within the next hundred years. So I think I would describe it as philosophy. I don’t know, I don’t have an educated opinion about that.

Although I do really like Westworld.

I was going to ask you about that. It’s like there’s this particular media moment right now—there’s a lot of good television that revolves around these questions, it’s science fiction but it’s increasingly closer to reality, at least in the popular imagination. So like Westworld, or Black Mirror. There was Ex Machina not too long ago. I’m curious what your thoughts are on that.

The rate of progress in AI over the past decade has been astounding. Ten years ago, Go was something that would never be solved by anybody, and now it’s there. That required tremendous leaps forward. And so I think that although the popular imagination is always going to be leaps and bounds ahead of what’s realistic, a lot of that is a reflection of the progress that has in fact been made in the past decade. Whether that’s because the actual technology itself is in the golden age and will soon revert back is a good question.

I enjoyed Her and Ex Machina—those are great films. Westworld has been fun to watch. I just don’t think they’re a realistic portrayal of what things are going to look like in our lifetime.

Do you think that part of the fascination comes from the successful marketing of tools like Siri and Alexa and Cortana, which give this texture of interacting with an AI like that, even though it really is just something like speech-to-text piped into a search algorithm?

Yeah, I think it’s easier for people to see it as possible, and I think that Black Mirror is really interesting too because it’s just right on the edge in some episodes, which is really neat.

But I think people are typically pretty aware of the flaws in Siri, right? You know it will get better, in the same way that with the first versions of Google, you needed to learn how to Google: how to use certain types of words to identify things. Google is getting better, to the point where you can just type in random shit and it comes up with the right answer.

You don’t even bother to spell things correctly anymore.

It’s pretty astounding. I’m sure in elementary school you had to learn the Dewey Decimal System, and that crazy library and reference language where you have the Boolean qualifiers on terms that might appear in an item summary or something. That is just completely out of the window now. So yeah, it will get better.

But I think that the flaws will always be there. And, back to the hype cycle, a lot of the current AI assistants are just humans on the back end solving those problems.

Whenever I get an invite from an x.ai system or something I always fuck with them. They do a great job, but they do a great job because it’s a human on the back end and not a computer.

Human on the back end—so does that mean it’s going overseas?

In the Philippines or in the Midwest, somebody is tagging parts of the speech, and correcting things, and actually parsing whether I misspelled something or meant to write something else. It’s becoming training data for the machine, and eventually it will get incorporated.

It’s the MVP.

Yeah, it is just so much easier to build a web app that connects someone in the Philippines to a series of database questions, and has them do the work, than it is to build an AI that can handle arbitrary responses to a calendar invite. So you just tell your venture capitalists that you’re working on AI, but that some of it still needs to be labeled by hand. That just seems way easier. That’s the business I would start if I were doing that.

It seems like wages would have to be higher for it to be profitable to invest in the R&D required to successfully automate these forms of labor. Do you think wage levels make a difference?

I’ll say this—maybe it’s a little bit of a diversion from your question—but one of the things I’ve noticed about AWS prices is that a few months ago, the spot prices on their GPU compute instances were $26 an hour for a four-GPU machine, and $6.50 an hour for a one-GPU machine.

That’s the first time I’ve seen a computer that has human wages. This is something that can run twenty-four hours a day, does not need vacation time, does not need benefits. I mean, depending on how you want to do the math, you can easily make the argument that this machine is the equivalent of a $200,000-a-year person. I’ve never seen that before and that’s kind of frightening.

Whoever is using these machines must be someone pretty smart, because the ability to use a GPU effectively requires a bunch of smart data scientists… You would need to give me a team, and a bunch of time, to build something that could use that hardware effectively to solve some problem. At the same time, they’re paying $26 an hour to rent a server, and at that rate they are paying the full price of the hardware every two weeks, so they have to be really dumb too.
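
Taking the quoted spot prices at face value, the arithmetic behind the comparison runs roughly as follows (a back-of-the-envelope sketch, not part of the conversation):

```python
# Back-of-the-envelope math on the quoted $26/hour spot price,
# taking the interview's figures at face value.
hourly_rate = 26.00                    # dollars per hour for the four-GPU machine
per_year = hourly_rate * 24 * 365      # runs 24/7, no vacation, no benefits
per_two_weeks = hourly_rate * 24 * 14  # cost of two weeks of continuous use

print(f"Per year:      ${per_year:,.0f}")      # ~$228,000, i.e. roughly the
                                               # $200k-a-year salary comparison
print(f"Per two weeks: ${per_two_weeks:,.0f}") # ~$8,700, which the interviewee
                                               # says is about the hardware cost
```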

That’s fascinating. When you talk about the difficulties in building general AI, or even very sophisticated specific AI, how much of that is due to the engineering problems, and how much of it is because we currently have relatively low levels of investment in basic research? Does political economy play a role in this? I mean, if we were investing in basic research at mid-century Cold War levels, would we be automating things faster? Or would the technical problems still be so great that the amount of money that you threw at it wouldn’t matter?

I think it’s a really interesting question. So I can say that Japan—I need to pull a little more data on this, I feel a little bit dumb just putting things out there—but I know Japan made a huge push in the late 1990s on AI robotics. They’re the reason we have those weird dancing robots like Honda’s ASIMO; that was the result of billions of dollars being put into AI. And I think we can say now that investment didn’t pay off for the companies. If it had, it would be unbelievable what they would have. But it didn’t.

So I don’t know. Nobody would deny that the technology is getting better and better year after year. But one of the interesting things about the recent push in AI around neural networks is that none of the technology there is particularly new. In fact, perceptrons, which I believe are the simplest neural networks, go back to the 1950s. What’s changed is that the hardware we can run them on has gotten so much faster and so much more efficient and so much more powerful, and the data sizes that we can work with have gotten so much bigger. So now we can solve these problems, and it’s kind of awesome what we can do.

Deep learning feels like it’s having a marketing moment. We’ve had neural networks forever though—can you talk a bit more about why now for this technology?

So a neural network is… I’m trying to think of the most concise way of thinking about this. The basic idea of a neural network is that the output of one layer can be the input to another layer. And I believe once you get to two layers of a neural network you have a universal approximator, so it can learn any function, which is quite powerful—at least in theory. Once you have more than two layers, it becomes “deep learning.” So if you have that, then why not twenty? Why not thirty? Why not 600 layers?
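
As a purely illustrative sketch of that layering idea, assuming nothing beyond NumPy, here is a tiny feedforward network in which the output of one layer is fed in as the input to the next:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def layer(x, w, b, activation=relu):
    # One layer: a linear map followed by a nonlinearity.
    return activation(x @ w + b)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))  # a batch of 4 examples with 8 input features each

# Two stacked layers: the output of the first is the input to the second.
# "Deep" learning is just more of these blocks stacked on top of each other.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

hidden = layer(x, w1, b1)                               # first layer
output = layer(hidden, w2, b2, activation=lambda z: z)  # second, linear layer
print(output.shape)  # (4, 1)
```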

They didn’t try that many layers before because it was too slow. It’s incredible how even now you run a normal multi-layer neural network on a CPU machine and it’s quite slow. The GPU is a big advancement—and the other thing is there have been some algorithmic advances on the vision side of deep learning. It’s been incredible for vision applications because when computer vision started—I think literally at this Dartmouth conference they thought it was a project for a grad student over the summer, because you just hook a camera up to a computer and it gets the pixel values.

It turned out there was a lot more to computer vision, as we now know. Sixty years later, we’ve actually managed to achieve a lot of the original goals. But this is what I’m saying in terms of the over-promise of AI: we thought it would be one grad student over one summer, and it turns out to be dozens of research labs over sixty years.

So that’s the multiplier. When people say, “Strong AI is a decade away,” I think you can apply a similar multiplier to that.

I feel like the Hollywood version of invention is: Thomas Edison goes into a lab, and comes out with a light bulb. And what you’re describing is that there are breakthroughs that happen, either at a conceptual level or a technological level, that people don’t have the capacity to take full advantage of yet, but which are later layered onto new advances.

Yeah, I think that’s a very coherent philosophy about how science advances. Thomas Edison was really fucking good at making money and keeping the IP for himself, so obviously he’s going to promulgate the view that it was a single genius, a loner working super hard in a room, who owns everything that came from it. Of course that’s going to be his mission.

That’s the startup founder.

Right. Exactly. Who needs no help from anyone—except from all these open-source packages.

But yeah, I think a lot of scientific discoveries are like that. Aluminum is the classic example. We had this metal that was super cool, but it required so much energy to produce. And you know, it has all these awesome properties that we take for granted. It never corrodes—it’s amazing as a metal. But in terms of actual industrial use—first we discovered this metal, then we figured out how to make it cheaply, and then energy got a lot cheaper. And decades later, it was like, oh shit, we have this new device called an aircraft and we need a metal to build it out of, and aluminum was there. So it moved in fits and starts.

For neural networks, I remember when I started graduate school I literally went to a presentation that was making fun of neural networks. They had a slide that was like, “If you don’t know what a neural network is, it was this thing that was really popular around the year 2000 but is now discredited.”

To their credit, there were a few guys from a bunch of random schools that kept neural networks alive, like a bunch of random Canadian schools and NYU. They kept this idea alive, like, “Hey, this could be a thing, at least we have this approximation theorem.” And they were right. Everyone else was totally wrong, and I think all of those guys are pretty rich now, which they deserve because they spent twenty years in the wilderness.

So yeah, I think it’s a lot sloppier than people make it seem.

I most associate deep learning now with Google’s open-source TensorFlow, because it seems that if anyone wants to use deep learning, they’re using an existing package like that.

And it’s just a nightmare to use in practice. It’s getting there—I’m sure in another three years it will be much more usable, maybe—but it’s bad. It’s really bad to use. There’s so much hype around it, but the number of people who are actually using it to build real things that make a difference is probably very low.

Are there any other popular deep learning libraries?

There are like six different packages that are all competing. There’s TensorFlow, Theano, Caffe, Torch—then a bunch of other libraries that build on those libraries as primitives, like Keras and Lasagne. But outside of a handful of corporate research labs, nobody is using these tools to actually solve anything, because they’re just so hard to configure.

And frankly, a lot of the difficulties are exactly the spot where a lot of the data scientists are weakest. To bring the conversation back to the beginning, you have a bunch of people with physics PhDs who maybe wrote some R code in graduate school. And they suddenly have to compile all these packages with GPU support so they can get CUDA running, and they’re just like, “We can’t do that.” I think there’s just a huge gap right now between theory and practice.

That feels to me like the magic of AI marketing: you label something as AI and it sounds impressive, but under the hood it’s Naive Bayes—it’s whatever simple thing you can get up and running. And there’s a mysticism around the difficulty of the technology, even though the simplest thing gets you most of the way there.

For a number of applications, I think that’s completely correct. I think there are some—and I think these are the most interesting ones—where the information is in the correlation structure of the variables as opposed to the variables themselves. And those are applications where Naive Bayes, which is essentially counting, does poorly.
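
For readers wondering what “essentially counting” means here: a heavily simplified Naive Bayes classifier, sketched hypothetically below on toy data, really does reduce to tallying how often each feature appears within each class and combining those frequencies.

```python
import math
from collections import Counter

def train_naive_bayes(examples, labels):
    """'Training' is just counting: how often each feature shows up per class."""
    class_counts = Counter(labels)
    feature_counts = {c: Counter() for c in class_counts}
    for features, label in zip(examples, labels):
        feature_counts[label].update(features)
    return class_counts, feature_counts

def predict(features, class_counts, feature_counts):
    """Score each class by log prior plus log likelihoods (add-one smoothing)."""
    total = sum(class_counts.values())
    scores = {}
    for c, n_c in class_counts.items():
        score = math.log(n_c / total)
        for f in features:
            score += math.log((feature_counts[c][f] + 1) / (n_c + 2))
        scores[c] = score
    return max(scores, key=scores.get)

# Toy data: sets of words labeled "spam" or "ham".
docs = [{"win", "money"}, {"meeting", "notes"}, {"win", "prize"}, {"lunch", "notes"}]
labels = ["spam", "ham", "spam", "ham"]
model = train_naive_bayes(docs, labels)
print(predict({"win", "cash"}, *model))  # -> "spam"
```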

I mean, I know from experience that lending and credit ratings are examples where you get crazy out-performance by using multidimensional algorithms, as opposed to just simple counting-based algorithms. So there are genuine advances that you can see from this stuff.

But I mean, I feel a little bit guilty taking the position that, oh yeah, it’s all a bunch of shitty marketers, or whatever. Because they made me a bunch of money. It’s been easier for me to sell myself because of the marketing that other people have done and the tremendous hype—it’s just easier to sell things. So it would be kind of disingenuous of me to completely disavow it. “Oh yeah, it’s all a bunch of bullshit.”

Because it might be bullshit, but there is some real shit that happens, and as a practitioner, it’s made my job a lot easier. I couldn’t imagine selling what I sell as a day job without having that. It would just be so difficult.

You’d go and you’d be like, “Oh there’s this crazy complex stuff!”

And they’re like, “Wouldn’t that take a lot of data?”

And you’re like, “Yeah, it’s really hard, and you know what, it probably won’t work.”

And they’re like, “Why would I pay you a bunch of money?”

And you’re like, “Well, I don’t know.”

So yeah, I don’t want to speak too ill of the marketing hype.

FinTech

One hears a lot about algorithmic finance, and things like robo-advisers. And I’m wondering, does that fall into the same category of stuff that seems pretty over-hyped?

I would say that robo-advisers are not doing anything special. It’s AI only in the loosest sense of the word. They’re not really doing anything advanced—they’re applying a formula. And it’s a reasonable formula, it’s not a magic formula, but they’re not quantitatively assessing markets and trying to make predictions. They’re applying a formula about what stock and bond allocations to make. It’s not a bad service, but it’s super hyped. It’s indicative of a bubble in AI that you can have something like that where you say, “It’s AI!” and people are like, “Okay, cool!”

There’s a function that’s being optimized—which is, at some level, what a neural net is doing. But it’s not really AI.
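
To give a sense of what “applying a formula” can look like, here is a deliberately simple, hypothetical allocation rule; it is not any actual robo-adviser’s methodology, just the kind of fixed rule being described.

```python
def target_allocation(age, risk_tolerance=1.0):
    """A toy allocation rule: the stock share shrinks as age rises.

    Hypothetical illustration of a fixed formula; not any real product's method.
    """
    stock = max(0.0, min(1.0, (110 - age) / 100 * risk_tolerance))
    return {"stocks": round(stock, 2), "bonds": round(1.0 - stock, 2)}

print(target_allocation(30))  # {'stocks': 0.8, 'bonds': 0.2}
print(target_allocation(65))  # {'stocks': 0.45, 'bonds': 0.55}
```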

I think one of the big tensions in data science that is going to unfold in the next ten years involves companies like SoFi, or Earnest, or pretty much any company whose shtick is, “We’re using big data technology and machine learning to do better credit score assessments.”

I actually think this is going to be a huge point of contention moving forward. I talked to a guy who used to work for one of these companies. Not one of the ones I mentioned, a different one. And one of their shticks was, “Oh, we’re going to use social media data to figure out if you’re a great credit risk or not.” And people are like, “Oh, are they going to look at my Facebook posts to see whether I’ve been drinking out late on a Saturday night? Is that going to affect my credit score?”

And I can tell you exactly what happened, and why they actually killed that. It’s because with your social media profile, they know your name, they know the name of your friends, and they can tell if you’re black or not. They can tell how wealthy you are, they can tell if you’re a credit risk. That’s the shtick.

And my consistent point of view is that any of these companies should be presumed to be incredibly racist unless presenting you with mountains of evidence otherwise. Anybody that says, “We’re an AI company that’s making smarter loans”: racist. Absolutely, 100%.

I was actually floored when, during the last Super Bowl, I saw this SoFi ad that said, “We discriminate.” I was just sitting there watching this game, like, I cannot believe it—either they don’t know, which is terrifying, or they know and they don’t give a shit, which is also terrifying.

I don’t know how that court case is going to work out, but I can tell you in the next ten years, there’s going to be a court case about it. And I would not be surprised if SoFi lost for discrimination. And in general, I think it’s going to be an increasingly important question about the way that we handle protected classes generally, and maybe race specifically, in data science models of this type. Because otherwise it’s like: okay, you can’t directly model if a person is black. Can you use their zip code? Can you use the racial demographics for the zip code? Can you use things that correlate with the racial demographics of their zip code? And at what level do you draw the line?

And we know what we’re doing for mortgage lending—and the answer there is, frankly, as a data scientist, a little bit offensive—which is that we don’t give a shit where your house is. We just lend. That’s what Rocket Mortgage does. It’s a fucking app, and you’re like, “How can I get a million-dollar loan with an app?” And the answer is that they legally can’t tell where your house is. And the algorithm that you use to do mortgages has to be vetted by a federal agency.

That’s an extreme, but that might be the extreme we go down, where every single time anybody gets assessed for anything, the actual algorithm and the inputs are assessed by a federal regulator. So maybe that’s going to be what happens. I actually view it a lot like the debates around divestment. You can say, “Okay, we don’t want to invest in any oil companies,” but then do you want to invest in things that are positively correlated with oil companies, like oil field services companies? What about things that in general have some degree of correlation? How much is enough?

I think it’s the same thing where it’s like, okay, you can’t look at race, but can you look at correlates of race? Can you look at correlates of correlates of race? How far do you go down before you say, “Okay, that’s okay to look at?”
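
One way to see the correlates-of-correlates problem is on entirely synthetic data: drop the protected attribute from a model and the signal reappears through a correlated proxy such as zip-code demographics. The sketch below is an editorial illustration under made-up assumptions, not a claim about any real lender’s model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Entirely synthetic data: a protected attribute, a zip-code demographic that
# correlates with it, and an outcome that (standing in for historical bias in
# the training data) partly depends on the protected attribute itself.
protected = rng.integers(0, 2, size=n)
zip_demographic = 0.8 * protected + rng.normal(scale=0.3, size=n)
income = rng.normal(size=n)
outcome = 1.2 * income - 1.0 * protected + rng.normal(scale=0.5, size=n)

# Fit a least-squares model that "doesn't use" the protected attribute.
X = np.column_stack([np.ones(n), income, zip_demographic])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print("intercept, income, zip coefficients:", coefs.round(2))
# The zip-demographic coefficient comes out strongly negative (around -0.8):
# the model has rebuilt much of the protected attribute's effect via its proxy.
```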

I’m reminded a bit of Cathy O’Neil’s new book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016). One of her arguments, which it seems like you’re echoing, is that the popular perception is that algorithms provide a more objective, more complete view of reality, but that they often just reinforce existing inequities.

That’s right. And the part that I find offensive as a mathematician is the idea that somehow the machines are doing something wrong. We as a society have not chosen to optimize for the thing that we’re telling the machine to optimize for. That’s what is actually happening when the machine does something illegal. The machine isn’t doing anything wrong, and the algorithms are not doing anything wrong. It’s just that they’re literally amoral, and if we told them the things that are okay to optimize against, they would optimize against those instead. It’s a frightening, almost Black Mirror-esque view of reality that comes from the machines, because a lot of them are completely stripped of—not to sound too Trumpian—liberal pieties. It’s completely stripped.

They’re not “politically correct.”

They are massively not politically correct, and it’s disturbing. You can load in tons and tons of demographic data, and it’s disturbing when you see percent black in a zip code and percent Hispanic in a zip code be more important than borrower debt-to-income ratio when you run a credit model. When you see something like that, you’re like, “Ooh, that’s not good.” Because the frightening thing is that even if you remove those specific variables, if the signal is there, you’re going to find correlates with it all the time, and you either need to have a regulator that says, “You can use these variables, you can’t use these variables,” or, I don’t know, we need to change the law.

As a data scientist I would prefer if that did not come out in the data. I think it’s a question of how we deal with it. But I feel sensitive toward the machines, because we’re telling them to optimize, and that’s what they’re coming up with.

They’re describing our society.

Yeah. That’s right, that’s right. That’s exactly what they’re doing. I think it’s scary. I can tell you that a lot of the opportunity those FinTech companies are finding is derived from that kind of discrimination, because if you are a large enough lender, you are going to be very highly vetted, and if you’re a very small lender you’re not.

Take SoFi, for example. They refinance the loans of people who went to good colleges. They probably did not set up their business to be super racist, but I guarantee you they are super racist in the way they’re making loans, in the way they’re making lending decisions.

Is that okay? Should a company like that exist?

I don’t know. I can see it both ways. You could say, “They’re a company, they’re providing a service for people, people want it, that’s good.” But at the same time, we have such a shitty legacy of racist lending in this country. It’s very hard not to view this as yet another racist lending policy, but now it’s got an app. I don’t know. I just think that there is going to be a court case in the next ten years, and whatever the result is, it’s going to be interesting.

When we talk about FinTech in general, does that refer to something broader than advising investors when to buy and sell stocks, and this new sort of loaning behavior? Or is that the main substance of it?

FinTech may most accurately be described as regulatory arbitrage: startups are picking up pieces of business that a big bank can’t do, won’t do, or that are just too small for it to bother with. And I think FinTech is going to suffer over the next five years. If there’s a single sector that people are going to be less enamored with in five years than they are now, FinTech is definitely the one.

The other side of it is that they’re exploiting a hack in the way venture capitalists think. Venture capital as an industry is actually incredibly small relative to the financial system. So if you were starting, I don’t know, a company that used big data to make intelligent decisions on home loans—which is probably illegal, but whatever, you’re small enough that it’s no big deal—and you say, “Hey, we’re doing ten million dollars a year in business,” a venture capitalist will look at you like, “Holy shit, I’ve never seen a company get up to ten million dollars in business that fast.” The venture capitalist has no idea that the mortgage market is worth trillions of dollars and the startup essentially has none of it. The founder gives a market projection like, “Oh, this is a trillion-dollar industry,” and the venture capitalist is like, “Oh, that market is enormous. I’ve never seen numbers like that before.”

It’s much more of a clever hack than an actual, sustainable, lasting, value-creating enterprise. One of the biggest flagship FinTech companies, Lending Club, is in a ton of trouble. SoFi is probably illegal. And those are the flag bearers for the sector.

The other thing that happened recently was the San Bernardino shootings—apparently the guns that were used were financed by a loan from Prosper, which is another peer-to-peer lender. And you just think about where this is going to go. Are we eventually going to get to the point where we have the credit models to assess and not give that guy a loan because of the risk that he could be a Muslim terrorist? Is that the society that we will be living in?

Maybe. But we’re going to get there with the data.

The Future

If you had to give a non-technical layperson one piece of advice for thinking about these questions, what would it be? Is it to be more skeptical? Is it to be less credulous when confronted with hype? Because it seems like there’s a fairly small number of people who understand these technologies well, and yet they appear to have the potential to make a pretty big impact on a lot of people’s lives.

I think the most realistic way of looking at it is that it’s not all hype. The technical advances are real—and even if they’re not real today, the relentless drumbeat of progress on hardware and algorithms will make them real eventually. It will take longer than you think—potentially a lot longer than you think—but it will happen. So everything you’re hearing is an early warning sign of what the future is going to look like. Maybe not even in our lifetimes, but yeah, it’ll get there. And the questions that we’re going over, they’re going to be real. It’s just not there yet.

So yeah, I would recommend some skepticism, but not complete skepticism. Because the advances underlying this are real. And the rate of progress has kept up, and I don’t see a reason why it’s going to stop.

It’s funny, because I think the biggest concern that I have for the future is that a bunch of people like me are going to make a bunch of money. And a bunch of people are going to lose their jobs. And a bunch of people are going to get new jobs that are crazy and cool. But I don’t know on net how great it’s going to be for society moving forward, though I want to be optimistic about it.

I don’t know if it’s just the tweets that I read in the last election, but the automation of trucking with self-driving cars seems like the most tangible disruptive application of this sort of AI technology. And that impacts such a huge number of people in our economy.

It’s the typical thing, where people overrate the effects of technological innovation in the short term, and underrate it in the long term. The automation of trucking is not going to happen overnight. It’s going to take years. But I believe “truck driver” is the single most common occupation in the U.S. Yeah, it’s not going to be next year. Maybe there will be some automation in five years. But in twenty or thirty years that might not even be a job that people do. And what’s going to happen? I don’t know. That’s a great question. It may involve becoming video game designers.

That’s the thing I have the hardest problem with. I often have this kind of discussion with people who are algorithmically minded, and they view capitalism as an optimizing function. And all questions about technological change go through this filter of, well, we’re glad we have cars instead of horse and buggies. And everything else will sort itself out. But everything else doesn’t just “sort itself out.”

I mean, I try to not fall into the Y Combinator autistic Stanford guy thing, but I actually do think that universal basic income is going to be the endgame. I think that is what society will look like long-term, because I think universal basic income is the welfare that everyone can get behind.

But it’s such a weighty question and technology’s impact on the economy changes so quickly that I don’t know if any of us have ever really had the chance to take a breath. You look at some of the strikes a hundred years ago, like at the Homestead plant, where the workers held out and had fucking gun boats come down the river with Pinkertons and shot the shit out of people. There’s a Costco there now, and a bunch of smokestacks where the plant used to be. And it’s like, was that whole thing just this ridiculous farce? I don’t even know.

I feel like it’s always a question of, what are you optimizing for? There is this fetish of capitalism as supremely rational. And it does optimize for certain things, like technological innovation. But if you think about it from another perspective, it’s also catastrophically irrational, because what could be less rational than wasting the potential of the millions of people whom capitalism exploits, or, worse, excludes by rendering permanently redundant? Capitalism doesn’t optimize for that.

I feel like I encounter this a lot in conversations with people who are in tech—my colleagues—who often have good intentions, but sometimes it’s hard for them to move their frame of mind from the technical to the political, to move from the technical question of how to optimize the process in front of you to the political question of what are you optimizing for.

It can also be hard for them to consider solutions to political problems that might be extremely low-tech, and for that reason are probably even more difficult—like the question of what happens in the future, which may not be a hundred or two hundred years away, when robots or algorithms can do 90% of the jobs. How do you prevent that future from becoming a brutal neo-feudal nightmare? That seems to me like a political question rather than a technical one.

I think the strangest thing about being out here in the Bay Area is that the Aspy worldview has just completely saturated everything, to the point that people think that everything is a technical problem that should be solved technologically. It’s a very privileged view held by very smart people who just want everything to be sink or swim. It’s troubling.

On the one hand, there’s no better shepherd for the economy than an engineer; on the other hand, there’s no worse shepherd for an economy than an engineer. Because that kind of machine thinking is very good at producing some things, and very, very bad at producing other things.

On the one hand, I don’t view any of the Silicon Valley startup economies as producing any kind of sustainable growth or ways of employing all these people. On the other hand, I do think that the basic income idea eventually will be the future. One of the most interesting things is the amount of leverage that individual people in Silicon Valley are getting—you look at the WhatsApp acquisition or whatever, with so few people being worth so much money.

That may have been a little bit irrational, but longer-term, it’s hard to argue against. And I don’t see another endgame other than pretty high taxes plus basic income as the way of making that okay, because I don’t think that’s going to go away. I’m not even totally sure that we should discourage it from happening.

But the leverage is just astounding. To go back to that GPU example, there might be some guy who’s running some quant hedge fund somewhere who’s just sitting on the backs of thousands of these $26-an-hour machines making tons of money off of them. That kind of leverage was just unimaginable even ten years ago, and now there’s presumably one guy—I’m assuming it’s a guy, it’s probably a guy—just making shitloads of money, and that opportunity didn’t even exist ten years ago.

This may be a tangent, but I think the technical mindset is very compatible with the technocratic mindset. In both cases, it’s an evasion of politics, because just as the person who designs the racist algorithm presumably does not think of what they’re doing as political, neither does the technocrat who crafts the free trade agreement because all the mainstream economists in the room told him it would be good for the economy, full stop.

I think both approaches are connected to this overwhelming need to see political problems as technical ones, whether from an engineering perspective or from a technocratic governance perspective. To me those feel totally compatible. What you’re describing—the Silicon Valley view of the world—feels to me like a very technocratic view of the world, where if you can just solve certain problems then it will benefit everyone.

In defense of it, it’s also a hopeful view of the world, because you’re at least trying to describe problems that you can solve. It’s a very optimistic way of looking at things, and I’m hesitant to abandon that, because I think ultimately… It’s hard, grappling with this idea of the enormous amount of individual leverage and the crazy rate of change.

On the other hand, it’s hard not to be a kind of technical utopian. It’s hard to bet against the innovation that this country has produced, and maybe that’s a function of survivorship bias or looking back and saying we just happened to get lucky. But you know, airplanes, the elevator—we just invented that stuff, and that’s kind of cool. And so it seems sort of melancholy—or maybe this is my own limitation as a technical thinker to see it as melancholy—to be like, “Yeah, there’s some stuff we can’t solve.”

I’m not sure I want to live in that world. I always want to live in a world where we’re at least trying. But we’ll see.

This piece appears in Logic's issue 1, "Intelligence". To order the issue, head on over to our store. To receive future issues, subscribe.