Warning: 21st Century Mythology Employs Retrograde Vocabulary

There is a new presentation up over at Edge.org featuring Jaron Lanier, a genius of computer science, that I want to recommend. It is super helpful in putting the concept of the Singularity into context. Because I think this information is so important to be aware of, I have quoted below what I think are his key points for reflection, but you can (should!) listen to the SoundCloud interview or read the unedited transcript by clicking here: The Myth Of AI

Do you feel incompetent when it comes to comprehending and navigating the digital database economy in which we now live? I wonder how much of that is down to a reasonable hesitancy in dealing with the new and unfamiliar that has morphed into ‘the Great and Terrible Unknowable’. Lanier’s statements offer a reset button of sorts for our attitudes towards Artificial Intelligence. We can use the frictional energy of the times to our everyday advantage in positive ways, and that starts with re-imagining and re-visualizing current trends and trajectories, and our rightful place in them.

*********

JARON LANIER:
The idea that computers are people has a long and storied history. It goes back to the very origins of computers, and even from before. There’s always been a question about whether a program is something alive or not since it intrinsically has some kind of autonomy at the very least, or it wouldn’t be a program. There has been a domineering subculture—that’s been the most wealthy, prolific, and influential subculture in the technical world—that for a long time has not only promoted the idea that there’s an equivalence between algorithms and life, and certain algorithms and people, but a historical determinism that we’re inevitably making computers that will be smarter and better than us and will take over from us. Some like the idea of the computers taking over, and some of them don’t. What I’d like to do here today is propose that the whole basis of the conversation is itself askew, and confuses us, and does real harm to society and to our skills as engineers and scientists…

Let’s go to another layer of how it’s dysfunctional. And this has to do with just clarity of user interface, and then that turns into an economic effect. People are social creatures. We want to be pleasant, we want to get along. We’ve all spent many years as children learning how to adjust ourselves so that we can get along in the world. If a program tells you, well, this is how things are, this is who you are, this is what you like, or this is what you should do, we have a tendency to accept that. I’ll give you a few examples of what I mean by that. Maybe I’ll start with Netflix. The thing about Netflix is that there isn’t much on it. There’s a paucity of content on it. If you think of any particular movie you might want to see, the chances are it’s not available for streaming, that is; that’s what I’m talking about. And yet there’s this recommendation engine, and the recommendation engine has the effect of serving as a cover to distract you from the fact that there’s very little available from it. And yet people accept it as being intelligent, because a lot of what’s available is perfectly fine. Dating always has an element of manipulation; shopping always has an element of manipulation; in a sense, a lot of the things that people use these things for have always been a little manipulative. There’s always been a little bit of nonsense. And that’s not necessarily a terrible thing, or the end of the world. But it’s important to understand it if this is becoming the basis of the whole economy and the whole civilization. If people are deciding what books to read based on a momentum within the recommendation engine that isn’t going back to a virgin population, that hasn’t been manipulated, then the whole thing is spun out of control and doesn’t mean anything anymore. It’s not so much a rise of evil as a rise of nonsense. It’s a mass incompetence, as opposed to Skynet from the Terminator movies. That’s what this type of AI turns into…

If you talk about AI as a set of techniques, as a field of study in mathematics or engineering, it brings benefits. If we talk about AI as a mythology of creating a post-human species, it creates a series of problems that I’ve just gone over, which include acceptance of bad user interfaces, where you can’t tell if you’re being manipulated or not, and everything is ambiguous. It creates incompetence, because you don’t know whether recommendations are coming from anything real or just self-fulfilling prophecies from a manipulative system that spun off on its own, and economic negativity, because you’re gradually pulling formal economic benefits away from the people who supply the data that makes the scheme work….the mythology is the problem, not the algorithms. To back up again, I’ve given two reasons why the mythology of AI is stupid, even if the actual stuff is great. The first one is that it results in periodic disappointments that cause damage to careers and startups, and it’s a ridiculous, seasonal disappointment and devastation that we shouldn’t be randomly imposing on people according to when they happen to hit the cycle. That’s the AI winter problem. The second one is that it causes unnecessary negative benefits to society for technologies that are useful and good. The mythology brings the problems, not the technology…

This idea that some lab somewhere is making these autonomous algorithms that can take over the world is a way of avoiding the profoundly uncomfortable political problem, which is that if there’s some actuator that can do harm, we have to figure out some way that people don’t do harm with it. There are about to be a whole bunch of those. And that’ll involve some kind of new societal structure that isn’t perfect anarchy. Nobody in the tech world wants to face that, so we lose ourselves in these fantasies of AI. But if you could somehow prevent AI from ever happening, it would have nothing to do with the actual problem that we fear, and that’s the sad thing, the difficult thing we have to face…

To my mind, the mythology around AI is a re-creation of some of the traditional ideas about religion, but applied to the technical world…There’s an anticipation of a threshold, an end of days. This thing we call artificial intelligence, or a new kind of personhood… If it were to come into existence it would soon gain all power, supreme power, and exceed people. The notion of this particular threshold—which is sometimes called the singularity, or super-intelligence, or all sorts of different terms in different periods—is similar to divinity. Not all ideas about divinity, but a certain kind of superstitious idea about divinity, that there’s this entity that will run the world, that maybe you can pray to, maybe you can influence, but it runs the world, and you should be in terrified awe of it. That particular idea has been dysfunctional in human history. It’s dysfunctional now, in distorting our relationship to our technology. It’s been dysfunctional in the past in exactly the same way. Only the words have changed…

If AI means this mythology of this new creature we’re creating, then it’s just a stupid mess that’s confusing everybody, and harming the future of the economy. If what we’re talking about is a set of algorithms and actuators that we can improve and apply in useful ways, then I’m very interested, and I’m very much a participant in the community that’s improving those things. Unfortunately, the standard vocabulary that people use doesn’t give us a great way to distinguish those two entirely different items that one might reference. …this vocabulary problem is entirely retrograde and entirely characteristic of traditional religions…In the history of organized religion, it’s often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity…That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, “Well, but they’re helping the AI, it’s not us, they’re helping the AI.” It reminds me of somebody saying, “Oh, build these pyramids, it’s in the service of this deity,” but, on the ground, it’s in the service of an elite. It’s an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.

**********

Interesting side note: Jaron Lanier has no social media accounts at all; any purporting to be his are fake.

Comment here or reach me at elemental.living@yahoo.com


5 thoughts on “Warning: 21st Century Mythology Employs Retrograde Vocabulary”

  1. Facebook AI Director’s no-hype vantage point: http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning

    More, from the interview: ‘Although I do believe we will eventually build machines that will rival humans in intelligence, I don’t really believe in the singularity. We feel like we are on an exponentially growing curve of progress. But we could just as well be on a sigmoid curve. Sigmoids very much feel like exponentials at first. Also, the singularity assumes more than an exponential, it assumes an asymptote. The difference between dynamic evolutions that follow linear, quadratic, exponential, asymptotic, or sigmoidal shapes are damping or friction factors. Futurists seem to assume that there will be no such damping or friction terms. Futurists have an incentive to make bold predictions, particularly when they really want them to be true, perhaps in the hope that they will be self-fulfilling.’
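    LeCun’s point that “sigmoids very much feel like exponentials at first” is easy to see numerically. Here is a minimal sketch comparing an exponential with a logistic (sigmoid) curve; the growth rate `r` and ceiling `K` are arbitrary illustrative values, not anything from the interview:

    ```python
    import math

    def exponential(t, r=1.0):
        """Pure exponential growth: no damping term."""
        return math.exp(r * t)

    def logistic(t, r=1.0, K=1000.0):
        """Logistic (sigmoid) growth toward a ceiling K.
        For small t it grows like e^(r*t); the damping only
        shows up as the curve approaches K."""
        return K / (1 + (K - 1) * math.exp(-r * t))

    for t in range(0, 11, 2):
        e, s = exponential(t), logistic(t)
        print(f"t={t:2d}  exp={e:10.2f}  logistic={s:8.2f}  ratio={s / e:.3f}")
    ```

    Early on the ratio stays near 1 (the two curves are nearly indistinguishable), and only later does the logistic curve flatten out, which is exactly the friction term LeCun says futurists tend to ignore.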


  2. Outstanding article for further reading:
    ‘Early neural networks were limited to dozens or hundreds of neurons, usually organised as a single layer. The latest, used by the likes of Google, can simulate billions. With that many ersatz neurons available, researchers can afford to take another cue from the brain and organise them in distinct, hierarchical layers (see diagram). It is this use of interlinked layers that puts the “deep” into deep learning.’
    http://www.economist.com/news/briefing/21650526-artificial-intelligence-scares-peopleexcessively-so-rise-machines
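    The “interlinked layers” idea from that excerpt can be sketched in a few lines: each layer is just a set of neurons whose inputs are the previous layer’s outputs, and “deep” simply means more such layers stacked up. Everything below (the layer sizes, random weights, and the tanh activation) is an illustrative toy, not any real system’s architecture:

    ```python
    import math
    import random

    random.seed(0)

    def dense_layer(inputs, weights, biases):
        """One fully connected layer: every output neuron sums all inputs,
        adds a bias, and passes the result through a tanh activation."""
        return [
            math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)
        ]

    def init(n_out, n_in):
        """Random weights and zero biases for a layer of n_out neurons."""
        weights = [[random.uniform(-1, 1) for _ in range(n_in)]
                   for _ in range(n_out)]
        return weights, [0.0] * n_out

    # Toy network: 3 inputs -> 4 hidden neurons -> 2 outputs.
    w1, b1 = init(4, 3)
    w2, b2 = init(2, 4)

    x = [0.5, -0.2, 0.1]
    hidden = dense_layer(x, w1, b1)        # first layer of "ersatz neurons"
    output = dense_layer(hidden, w2, b2)   # second layer consumes the first's output
    print(output)
    ```

    The hierarchy in the article’s diagram is just this pattern repeated: each layer’s outputs become the next layer’s inputs, which is what lets deeper layers represent progressively more abstract features.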

