How Real is the Threat of Artificial Intelligence?
It seems everybody is writing about Artificial Intelligence these days, with positions ranging from great excitement to grave trepidation. Whatever the stance, there is no doubt that AI is a transformative breakthrough, a technology that promises to revolutionize nearly every aspect of our lives. Yet it threatens to be the quintessential double-edged sword: while it offers immense potential to enhance our lives and solve complex problems, it also introduces profound risks that could ultimately threaten our very existence.
To understand the full extent of these risks, we need to pay attention to the perspectives of leading thinkers whose insights reveal the complex and urgent challenges posed by this rapidly advancing technology.
Eliezer Yudkowsky, a prominent AI researcher and co-founder of the Machine Intelligence Research Institute (MIRI), offers some of the most urgent warnings in the discourse on AI. His view is stark: aligning Artificial General Intelligence (AGI) with human values is a near-insurmountable challenge, with failure potentially leading to human extinction.
In his 2022 article, AGI Ruin: A List of Lethalities, Yudkowsky outlines why AGI poses such an existential risk, cautioning that even AGI systems with seemingly ‘safe’ goals could act catastrophically if their objectives misalign with human interests. He emphasizes that humanity only has one opportunity to solve this problem, as an AGI’s intelligence could quickly surpass human control, enabling it to manipulate systems—or even people—to dangerous ends. He argues current ‘alignment techniques’ fall short, lacking the scalability needed to prevent catastrophic actions as AGI capabilities grow. As he starkly puts it,
Losing a conflict with a high-powered cognitive system looks at least as deadly as ‘everybody on the face of the Earth suddenly falls over dead within the same second’.
Echoing Yudkowsky’s warnings is tech and space pioneer Elon Musk, who has frequently raised concerns about AI’s dangers, stating back in 2014 that it could be “potentially more dangerous than nukes”, and that “with artificial intelligence, we are summoning the demon”.
In a recent discussion with Jordan Peterson on X, Musk described the race toward ‘digital superintelligence’, an intelligence far smarter than any human and, ultimately, than humanity as a whole. He remarked, ‘No one can say, “Is this a wise thing to do? Isn’t this dangerous?” Unfortunately, whether we think that or not, it is being done.’
He went on to express his view that his involvement through the xAI (Grok) platform was essential for guiding AI development toward outcomes beneficial to humanity. Indeed, when Google co-founder Larry Page accused him in 2013 of being a “specist”, someone biased in favor of their own species, Musk responded, “Well, yes, I am pro-human. I f-cking like humanity, dude!”
Peterson himself holds a grim view of AI’s potential pitfalls. In a January 2024 discussion on Piers Morgan Uncensored, he shared his experience with AI tools like ChatGPT and Grok, noting how remarkably capable they’ve become, comparable to “a team of competent master students in multiple disciplines.” However, Peterson cautioned against their “constant lies” and “unconscionable” biases, shaped by politically skewed training data. It seems even Musk’s ability to control bias in AI is tenuous.
Peterson warns of a future where automated systems could magnify humanity’s tendencies toward ‘slavishness and tyranny’, threatening individual autonomy. According to Peterson, the rapid deployment of AI, especially in areas like facial recognition and digital identity control, poses a profound risk as these systems evolve beyond our understanding and reach. His perspective underscores the need for a thoughtful, cautious approach to integrating AI into society.
Building on these concerns, Tristan Harris, co-founder of the Center for Humane Technology, offers a pathway he hopes will steer AI in a positive direction. At the 2024 AI for Good Global Summit, Harris outlined the crucial need for thoughtful, ethical development in his talk, The AI Dilemma: Navigating the Road Ahead. He emphasized that while AI holds immense potential to empower humanity, it also poses profound risks when wielded without adequate safeguards.
“AI gives us kind of superpowers”, Harris explained, noting that AI amplifies humanity’s strengths and weaknesses alike. However, current AI development is driven by a “race to the bottom” among companies that prioritizes rapid deployment over safety, an approach reminiscent of early social media and its unintended societal harms.
Harris warned of the dangers inherent in this unchecked progress, such as deepfakes, increased fraud, and amplified biases. He called for robust regulatory standards that align with human values and underscored the need for international cooperation to establish protective frameworks.
“No matter how high the skyscraper of benefits that AI assembles,” Harris warned, “if it can also be used to undermine the foundation of society…it won’t matter how many benefits there are.” His message underscores the urgency of proactive governance to ensure AI serves humanity’s long-term interests.
Biologist Jeremy Griffith broadens the discourse, examining AI within the context of human immaturity and a holistic view of the universe. In his 2023 book AI & ALIENS: the truthful analysis, he warns against the dangers of empowering embattled “psychotic humans” with advanced AI systems, arguing that pursuing technological advancement without confronting our psychological issues is fraught with peril. As Griffith notes,
Playing God with ever more sophisticated technology while we couldn’t even confront God [which he describes as ‘Integrative Meaning’]… was a potentially very dangerous pursuit.
He emphasizes that while humanity has focused on the pursuit of technological advancements like AI and the frontier of outer space, the true challenge lies within—understanding and addressing our human condition.
Griffith’s insights echo the concerns raised by Yudkowsky and others, particularly regarding the need for caution in AI development. He holds a strong view that the world is not ready to handle the immense power of AI, advocating a halt to its advancement until we confront and heal our psychological state. He argues, “Giving us mad, ego-crazed humans superpowers is ridiculously irresponsible!” This call to pause AI development aligns with Yudkowsky’s plea for restraint, underscoring the urgency of addressing the underlying issues that could lead to catastrophic outcomes.
Finally, a grim warning comes from Israeli historian and philosopher Yuval Noah Harari, who in his latest book Nexus emphasizes the profound implications of AI on human decision-making. He asserts,
Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI can process information by itself, and thereby replace humans in decision making. AI isn’t a tool—it’s an agent.
This distinction highlights the critical nature of the challenge we face: as AI systems evolve into autonomous agents capable of influencing outcomes, the ethical and societal ramifications become increasingly complex and dire.
As we navigate the multifaceted landscape of AI development, we must acknowledge the intertwined threads of opportunity and peril this technology presents. The diverse perspectives of thought leaders like Eliezer Yudkowsky, Elon Musk, Jordan Peterson, Tristan Harris, Jeremy Griffith, and Yuval Noah Harari converge on a fundamental truth: the evolution of AI demands a collective and conscientious approach that prioritizes ethical governance, introspection, and the human experience. The stakes are high, and the responsibility to steer AI toward a beneficial future lies with us. If we do not confront our psychological complexities and establish robust frameworks for AI’s development and application, we risk yielding our agency to systems that may not align with our values. The path forward must balance innovation with vigilance, ensuring that we harness the power of AI as a tool for good rather than allowing it to become a force of unchecked consequence.
As a closing remark, I suggest watching Yudkowsky’s TED Talk, Will Superintelligent AI End the World? As he warns that humanity is not taking the threat of AI remotely seriously, the audience, disturbingly, responds with laughter.