- May 24, 2023
- Posted by: Art Berman
- Category: The Petroleum Truth Report
A deep-fake image of an explosion at the U.S. Pentagon triggered a 200-point stock market sell-off yesterday. This is a relatively benign example of the capabilities of artificial intelligence (A.I.).
Yesterday, I finished listening to my friend Nate Hagens’ podcast interview with Daniel Schmachtenberger, “Artificial Intelligence and The Superorganism.” It was a provocative and sometimes scary discussion of the potential promise and danger of A.I.
It’s long and all of it is worth hearing, but the A.I. part begins almost two hours (01:50:12) into the podcast.
Here, Schmachtenberger distinguishes between the sorts of narrow A.I.—which include deep fakes, machines that play chess, and ChatGPT—and artificial general intelligence (AGI). Google’s AlphaZero is a narrow A.I. game system that was not programmed with any human games. In 3 hours and a trillion runs, it was able to beat all previous A.I. chess programs.
Artificial general intelligence is far more than that. It’s a system that can learn to accomplish any intellectual task that human beings or other animals can perform. It doesn’t exist yet, but it’s where A.I. is going.
“So the AI, because all the other tools are made by the kind of human intelligence that makes tools and AI is that kind of human intelligence externalized as a tool itself, it has a capacity to be omni-modal, right? Not dual use, omni-use more than anything else is, and omni-combinatorial.”
This means that AGI could plausibly result in the creation of an intelligent agent that could outcompete humans. This is what A.I. experts like Eliezer Yudkowsky call the singularity.
In a recent editorial in Time, Yudkowsky wrote,
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter…Shut it all down…We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.”
I’ve followed artificial intelligence casually for a while, but listening to Schmachtenberger made me think about it differently. I can convince myself that he and Yudkowsky are perhaps imagining a worst-case scenario that has a very low probability of happening.
At the same time, I am unwilling to dismiss their concerns. There are few people who know more about A.I. than Yudkowsky, and Schmachtenberger is an expert in wide-boundary systems analysis. What they are each describing, after all, is a black swan event. We’ve seen a few of those just in the last 15 years.