AI Safety. What's to worry about? And what's not?
X-Risks have caught the attention of the world's press and politicians, but people are only just waking up to the daily risks of AI.
Hi folks, in this week’s newsletter, I’m exploring AI risks, from eXistential risks (X-risks) to the more everyday risks of using AI systems like ChatGPT, Bard and others.
It seems like just yesterday that the idea of superintelligent AI was relegated to the pages of science fiction, but today we stand on the brink of what could be either an awe-inspiring future or an episode straight out of 'Black Mirror'.
Are we inching towards a utopia where AI solves our grandest challenges, or are we blindly dancing on the edge of an existential precipice?
Amidst the cacophony of alarm bells rung by 'AI doomers', can we find a tune that rings true? Or is it all just noise, or even a great conspiracy?
I'll wade into the great AI safety debate, sift through the doom and gloom, and unpack the grim prophecies to find the kernel of truth within. Along the way, I'll explore the grandstanding of tech prophets and ask whether AI dooming is just a lucrative industry preying on our innate fear of the unknown.
And as the world’s policymakers gather to pen declarations and orders, I ask - are we architecting the foundations of AI safety, or merely drafting our own tech-induced eulogy?
From the nuanced perspectives of AI pundits to the latest governmental strides towards AI regulation, we've got a lot to cover. So, let’s buckle up and assess whether our AI-powered companions are the harbingers of progress or the heralds of our demise.
Let’s dive in! 🚀🔍
Okay, to state my position clearly upfront, I’m an AI optimist.
I guess you’d expect that.
Not to the same extent as venture capitalist Marc Andreessen, with his teenage-boy-like zealotry embodied in his “Techno-Optimist Manifesto”. That is, I don’t believe that technology can solve all humanity’s problems.
Good policy can solve a lot of things, and so can being nice, although both are in short supply.
Overall, though, I believe that, up to now, humans have, on balance, used technology for good rather than harm.
However, a single global nuclear war or genetically engineered virus could drastically tip the balance in the other direction.
Since the release of ChatGPT, a year ago this month, the volume from “AI doomers” has been cranked up to eleven, grabbing the headlines of the mainstream media, not to mention the attention of politicians around the world.
Doomers believe that AI might (some go as far as to say will) pose an existential risk to humanity, the so-called X-risk, at some point in the future. They argue we had better take steps to avoid it now, because we appear to be on an exponential climb in AI capability, and no one knows whether super-intelligence might suddenly emerge from the “giant inscrutable matrices” (thanks, Eliezer) of the underlying neural nets.
The problem is, nobody knows whether this concern is real or imagined, and if it is real, when it might happen or under what circumstances.
But some very serious AI experts are believers and have signed various letters to governments urging them to regulate AI now, “before it’s too late!”.
“Dooming” as a business
Before we get more into the specifics of AI doomers, let’s not forget that “dooming” is a big and very old business model, tracing its roots back to ancient times.
This practice, coupled with its profitable counterpart—offering salvation from impending doom, often for a fee—has been a fundamental element of numerous movements, cults and religions throughout history.