OpenAI: Do you still trust them? And the FT AI Conference Ushers in the Age of "Augmented Professionals"
Exploring the turbulent leadership at OpenAI and the transformative role of AI in various industries, as highlighted in the FT Future of AI Summit.
Hi folks, this week’s newsletter comes in two parts.
In the first part, I’m exploring the dramatic saga at OpenAI. The company, once a beacon of AI innovation, found itself in the eye of a storm that could rewrite the playbook on corporate governance and executive management.
Sam Altman, the CEO, was fired and then rehired, a board decision seemingly taken with little legal or PR counsel, and one whose ripple effects left customers, investors and partners like Microsoft reeling. But what could possibly lead to such a drastic series of events at a company riding the crest of AI advancement?
The return of Sam Altman as CEO, Greg Brockman's reinstatement, and the near-mass exodus of employees paint a picture of loyalty and leadership turmoil. But maybe there’s more to this story – did Effective Altruism, a movement popular in Silicon Valley circles, influence some of the board decisions?
On a different yet equally compelling note, in part 2, the FT Future of AI Summit I attended shone a spotlight on AI's impact across various corporations. Imagine a world where AI is not just a tool for tech specialists but a ubiquitous force among “augmented professionals”. What did this summit reveal about AI's potential to transform traditional roles in legal and financial services? How is AI reshaping our approach to workforce training, ethical deployment, and public perception?
As you navigate through these revelations and insights, the overarching theme becomes clear: we are at a critical juncture in the AI odyssey. From the boardrooms of OpenAI to the conference halls of the FT Summit, the AI landscape is evolving rapidly, fraught with governance challenges but still brimming with potential.
Meanwhile, OpenAI’s competitors are not standing still: they are releasing new, even more powerful models, which I cover in brief.
Let’s dive in! 🚀🔍
P.S. If you’re reading this from the USA, happy Thanksgiving holiday!
What an absolute 💩 show the last seven days have been for OpenAI! The board have humiliated themselves on the world stage.
It looks like we are witnessing a textbook lesson in how to destroy nearly $90 billion in shareholder value through what, in retrospect, may turn out to be an incredibly naive board decision to fire (and subsequently re-hire) the CEO, Sam Altman, apparently with zero PR and zero legal counsel, not to mention blindsiding customers, investors and their biggest partner, Microsoft!
Unless a good explanation is given soon, this is going to become classic MBA business-school case study fodder on how not to fire your CEO or run an executive board for decades to come.
In fact, it might easily rank up there for sheer leadership incompetence with the Charge of the Light Brigade, when Lord Cardigan’s British light cavalry were ordered to charge entrenched Russian artillery positions (needless to say, it didn’t end well for the Brits).
That being said, the situation as I write is that Sam Altman is back at OpenAI in his old position as CEO—Greg Brockman, who resigned as President and Chairman in protest, is back too, and the 710-odd employees (out of 770) who threatened to resign from OpenAI and follow Altman and Brockman over to Microsoft have stayed put.
Ilya Sutskever, OpenAI’s Chief Scientist, co-founder, board member and alleged co-conspirator, and reportedly the person who told Mr Altman he was fired, has now put his hands up and asked for forgiveness in a post on X, after Mr Brockman’s wife (also allegedly) tearfully convinced him to reverse his position, which he subsequently did.
To say no one (outside the board members involved) saw it coming is an understatement.
Sam Altman has presided over a company with meteoric growth in the last 12 months, the likes of which we have never seen before. He is literally the last founder you’d expect to be fired.
OpenAI has gone from $28 million in revenue the previous year to well over a billion in the current year, and from zero to over 100 million active users for its main product, ChatGPT. It has also built a developer and creator community numbering in the millions.
Name another CEO who has achieved that kind of growth within the space of 12 months. Yep, I’m hearing those crickets too.
I’m not going to go over the minutiae of every twist and turn again here, as what matters is where we are now, but if you somehow took a week-long digital detox break and missed what happened, I recommend these links to catch up:
FT - Who were the OpenAI board members that sacked Sam Altman?
Semafor - Sam Altman is returning to OpenAI
The Verge - Sam Altman to return as CEO of OpenAI
NYTimes - Before Altman’s Ouster, OpenAI’s Board Was Divided and Feuding
NYTimes - Explaining OpenAI’s Board Shake-Up
Did Effective Altruism play a part in OpenAI’s bust-up?
Before I leave the OpenAI soap opera, there’s an interesting theory, posited by Semafor and doing the rounds, which suggests that thinking from the Effective Altruism (EA) movement might have influenced the decisions of some board members.
Read about it here,
The AI industry turns against its favorite philosophy
And here,
What we’ve learned about the robot apocalypse from the OpenAI debacle
In short, EA-funded organisations and individuals have been behind some of the most extreme letters to governments about imagined future AI existential risk (AI x-risk), one going so far as to ask for a 6-month pause in the release of new AI models.
Although, as the old saying goes, correlation does not imply causation, I have personally been suspicious of the EA movement’s motives for some time now. Its adherents seem intent on conjuring up headline-grabbing yet unsubstantiated claims about world-destroying AI demons to capture the attention of governments and politicians, whilst ignoring the harm that AI can (and does) do today.
I’m not saying people who follow EA are bad people. They purport to want to do the best for humanity. But EA does seem to attract a certain type of intellectual acolyte and tech billionaire who, through their advocacy of utilitarianism, seem to completely miss the point about what it means to be human and live in the now.
Anyway, I’m not going further down the EA rabbit hole in this article.
Thankfully, other journalists, much more skilled than I, are waking up to the movement’s influence in Silicon Valley and academic circles, and I’m sure we’ll all hear a lot more about it in the future.
So, where does that leave us?
Nothing has changed except the board of directors…so far.
This clever fake video created using AI (of course!) gives the lowdown on everything else you need to know,
Where does that leave businesses relying on OpenAI products?
One thing we’ve all learnt from this fiasco is not to put all our ‘AI eggs’ in one basket.
The now ex-board broke our contract of trust with OpenAI, and it’ll take a long time to re-establish it. OpenAI may well have the best models today, but as we’ve just witnessed, they could be gone tomorrow.
If 710 out of the 770 employees said they would leave OpenAI to go to Microsoft, what effect would that have had on the service levels of ChatGPT and its APIs?
You’d like to think there would have been an orderly transition orchestrated by Microsoft if that had happened, but in a worst-case scenario, say the staff had gone to a competitor instead, ChatGPT and its developer infrastructure might have been out of action for days, weeks, or maybe even forever.
No one wants to build on a bed of shifting sand. This whole event happened due to the much overlooked ‘G’ in corporate ESG—Environmental, Social, and, you guessed it, Governance.
Let that be a lesson to us all: diversify, and abstract away the third-party services that power our core businesses as much as possible.
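In practice, “abstracting away” a provider can be as simple as putting a thin interface between your application and any vendor SDK, with a client that fails over to a second provider. Here’s a minimal Python sketch of that idea; all class and method names are hypothetical, and the vendor calls are stubbed rather than real API calls:

```python
from abc import ABC, abstractmethod

class ChatBackend(ABC):
    """The narrow interface our app depends on, instead of any one vendor SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class PrimaryBackend(ChatBackend):
    """Stand-in for a wrapper around your primary provider's SDK."""
    def complete(self, prompt: str) -> str:
        # A real implementation would call the vendor API here;
        # we simulate an outage to show the failover path.
        raise RuntimeError("primary provider is down")

class FallbackBackend(ChatBackend):
    """Stand-in for a second provider, or a self-hosted model."""
    def complete(self, prompt: str) -> str:
        return f"[fallback] {prompt}"

class ResilientClient:
    """Tries each configured backend in order, so one vendor's outage
    doesn't take the whole product down with it."""
    def __init__(self, backends):
        self.backends = backends

    def complete(self, prompt: str) -> str:
        last_error = None
        for backend in self.backends:
            try:
                return backend.complete(prompt)
            except Exception as exc:
                last_error = exc  # note the failure, try the next provider
        raise RuntimeError("all configured backends failed") from last_error

# With the primary "down", the client transparently falls over:
client = ResilientClient([PrimaryBackend(), FallbackBackend()])
print(client.complete("Summarise this contract"))
```

The point isn’t the specific code, it’s that only the thin `ChatBackend` interface knows which vendor is behind it, so swapping or adding a provider is a one-line configuration change rather than a rewrite.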
Thoughts and themes from the Financial Times “Future of AI” Conference
Last week, before all the OpenAI shenanigans, I was kindly invited as a VIP guest to the FT’s Future of AI Conference.
It was two days of networking and finding out what, mostly, large corporations were doing with generative AI.
Reflecting on what was said during the summit, here are some thoughts and themes I have about AI in the corporate world - this summary is generated by GPT-4 from my written notes.
The Financial Times Future of AI Summit illuminated the profound impact that generative AI will have on the professional landscape.
It is evident that AI is moving beyond the realm of technology specialists and becoming an integral tool for a wide range of professions, effectively creating an era of “augmented professionals.”