More than three years ago, this editor sat down with Sam Altman at a small event in San Francisco, shortly after he stepped down as president of Y Combinator to become CEO of OpenAI, the artificial intelligence company he co-founded with Elon Musk and others in 2015.
At the time, the language Altman used to describe OpenAI's potential struck some as odd. For example, he said the opportunity with artificial general intelligence (machine intelligence that can solve problems as humans do) is so great that if OpenAI managed to crack it, the outfit could "capture the light cone of all future value in the universe." He said the company was "going to have to not release research" because it was too powerful. Asked whether OpenAI was guilty of fear-mongering (co-founder Elon Musk has repeatedly called for all organizations developing AI to be regulated), Altman spoke of the dangers of not thinking through the "societal consequences" when "you're building something on an exponential curve."
The audience laughed at various points in the conversation, not certain how seriously to take Altman. No one is laughing now, however. While machines are not yet as intelligent as people, the technology OpenAI has since released to the world is close enough that some critics fear it could be our undoing, and reportedly, more sophisticated technology is on the way.
Indeed, though heavy users insist it is not so smart, the ChatGPT model that OpenAI made available to the public last week can answer questions so much like a person that professionals across a range of industries are grappling with the implications. Educators, for example, wonder how they will distinguish original work from the algorithmically generated essays they are bound to receive, essays that can also evade anti-plagiarism software.
Paul Kedrosky is not an educator per se. He is an economist, venture capitalist and MIT fellow who describes himself as a "frustrated normal with a penchant for thinking about risks and unintended consequences in complex systems." But he is among those suddenly worried about our collective future, tweeting yesterday: "[S]hame on OpenAI for launching this pocket nuke without restrictions into an unprepared society." "I obviously feel ChatGPT (and its ilk) should be withdrawn immediately," Kedrosky wrote. "And, if ever reintroduced, only with tight restrictions."
We spoke with him yesterday about some of his concerns, and why he thinks OpenAI is driving what he calls "the most disruptive change the U.S. economy has seen in 100 years," and not in a good way.
Our chat has been edited for length and clarity.
TC: ChatGPT was released last Wednesday. What triggered your reaction on Twitter?
PK: I’ve played with these conversational UIs and AI services in the past, and this is obviously a huge leap beyond. What particularly troubles me here is the casual brutality of it, with massive consequences for a host of different activities. It’s not just the obvious ones, like high school essay writing, but it cuts across pretty much any domain that has a grammar — [meaning] an organized, methodical way of expressing yourself. That could be software engineering, high school essays, legal documents. All of them are easily devoured by this voracious beast and spat out again, without any compensation to whatever was used to train it.
I heard from a colleague at UCLA who told me they have no idea what to do with the essays at the end of this term, where they’re receiving hundreds per course and thousands per department, because they no longer know what’s fake and what’s not. So to do this so casually — as someone said to me earlier today — is reminiscent of the so-called [ethical] white hat hacker who finds a bug in a widely used product and notifies the developer before the broader public knows, so the developer can patch the product and we don’t have mass devastation and grid blackouts. This is the opposite: a virus has been released into the wild with no concern for the consequences.
It does feel like it could eat the whole world.
Some people might say, ‘Well, did you feel the same way when automation arrived in auto plants and auto workers lost their jobs?’ Because this is a broader phenomenon. But this is very different. These specific learning technologies are self-catalyzing; they’re learning from the requests. So robots in a manufacturing plant, while disruptive to the people working there and carrying incredible economic consequences, didn’t then turn around and start absorbing everything going on inside the factory, moving sector by sector. Whereas here, that’s not only what we can expect but what we should expect.
Musk left OpenAI partly over disagreements about the company’s direction, he said in 2019, and he has long described AI as an existential threat. But people scoffed that he didn’t know what he was talking about. Now we’re confronted with this powerful technology, and it’s not clear who steps in to address it.
I think it’s going to unspool in lots of places at once, much of it looking really clumsy, and people will [then] sneer, because that’s what technologists do. But too bad, because we’ve walked ourselves into this by creating something so consequential. So in the same way the FTC required bloggers years ago [to make clear they] have affiliate links and make money from them, I think at a trivial level people are going to be forced to disclose, ‘We wrote none of this. This is all machine generated.’
I also think we’re going to see new energy for the ongoing lawsuit against Microsoft and OpenAI over copyright infringement in the context of training machine learning algorithms. And I think there’s going to be a broader DMCA issue here with this service.
And I think there’s the potential for [massive] lawsuits and settlements eventually over the consequences of these services — which, you know, will probably take too long and not help enough people, but I don’t see how we don’t end up in [that place] with these technologies.
What’s the thinking at MIT?
Andy McAfee and his group over there are more sanguine and hold the more orthodox view that whenever we see disruption, other opportunities get created; people are mobile, they move from place to place and from occupation to occupation, and we shouldn’t be so hidebound as to think this particular evolution of technology is the one we can’t adapt and migrate around. I think that’s broadly true.
But the lesson of the last five years in particular has been that these changes can take a long time. Free trade, for example, was a deeply disruptive, economy-wide experience, and as economists we all told ourselves the economy would adapt and people in general would benefit from lower prices. What no one anticipated was that someone would organize all the angry people and elect Donald Trump. So there’s this idea that we can anticipate and predict what the consequences will be, but [we can’t].
You talked about high school and college essay writing. One of our kids has already asked — theoretically! — whether using ChatGPT to author a paper counts as plagiarism.
The purpose of writing an essay is to prove that you can think, so this short-circuits the process and defeats the purpose. Again, in terms of consequences and externalities, if we can’t assign homework because we no longer know whether students are cheating, then everything has to happen in the classroom and be supervised. Nothing can be taken home. More has to be done orally, and what does that mean? It means school just got much more expensive, much more artisanal and much smaller, at the exact moment we’re trying to do the opposite. The consequences for higher education are devastating in terms of actually delivering a service.
What do you think of the idea of a universal basic income, or of enabling everyone to share in the gains from AI?
I’m a much smaller proponent than I was pre-COVID. The reason is that COVID was, in a sense, an experiment with universal basic income. We paid people to stay home, and they came up with QAnon. So I’m really concerned about what happens when people no longer have to hop in a car, drive somewhere, do a job they hate and come home again, because the devil finds work for idle hands, and there will be a lot of idle hands and a lot of deviltry.