Damon Isherwood

The Center for Humane Technology: Fighting the Human Downgrade

Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology (Screenshot from video, 'The AI Dilemma', March 9, 2023)

A few weeks ago, I examined some of the biggest existential threats to humanity—nuclear conflict, the rise of artificial intelligence, and the mental health epidemic to name but a few. Then last week, I took a deeper dive into the specific dangers posed by AI, with a particularly bleak outlook forecast by the likes of Eliezer Yudkowsky, Elon Musk, Yuval Harari and others.

This week, however, I want to shift focus to organizations that are working to shape a more humane and positive future, particularly in the context of AI. One organization has been at the forefront of the public discourse: the Center for Humane Technology (CHT).

Logo for the Center for Humane Technology. (By Center for Humane Technology – humanetech.com, Public Domain)

A Google Presentation That Sparked a Movement

The CHT was co-founded by Tristan Harris, a former Google design ethicist (and childhood magician), and Aza Raskin, an inventor and entrepreneur who created the 'infinite scroll' (something he now evidently regrets).

The organization was born of a movement dubbed 'Time Well Spent', which in turn was sparked by a presentation Harris gave at Google, in which he voiced growing concerns about how technology was designed to exploit human psychology for engagement, often at the cost of user well-being. Harris's presentation and subsequent TED talks struck a nerve with many in the industry, inspiring him to take his mission beyond Google and, with Raskin, co-found CHT.

Tristan Harris, Co-Founder of the Center for Humane Technology. (Photo by Stephen McCarthy, CC BY 2.0 via Wikimedia)

The movement and the establishment of CHT emerged in response to an industry prioritizing engagement and profits over human welfare. Social media, smartphones, and digital platforms increasingly capture our attention, manipulate our choices, and distract us from what, arguably, really matters. CHT’s mission emerged to influence how technology is built—guiding it toward a model that aligns with what’s best for society, rather than what’s best for revenue.

Harris references a quote by biologist E.O. Wilson which lies at the core of CHT’s mission:

The real problem of humanity is the following: we have Palaeolithic emotions, medieval institutions, and god-like technology.

Biologist and conservationist Edward O. Wilson (Photo by Ragesoss, CC BY-SA 3.0 via Wikimedia)

This perspective underscores the delicate balancing act CHT advocates for—a recognition that while technology has evolved at an astonishing rate, the psychological state of our species has been stalled in the stone age, and our social infrastructures have not kept pace. Our “Palaeolithic emotions” are often overwhelmed by the relentless flow of digital stimuli, leading to what CHT calls the “human downgrade”—a state in which people feel less capable of deep focus, genuine connection, and autonomy over their digital lives.

Attention Economy, Misinformation and Mental Health

The attention economy is one of the key issues that CHT raises. Social media platforms use algorithms that prioritize sensational or emotionally charged content to keep users engaged. These algorithms, however, contribute to the spread of misinformation and deepen political polarization. By keeping people locked in a cycle of outrage and sensationalism, tech companies boost their advertising revenue at the cost of civil discourse and accurate information.

Cell phone with social media applications. Photo by Brian Ramirez (Pexels)

Another concern is the impact of this technology on mental health. Studies have shown that excessive social media use correlates with higher rates of anxiety, depression, and loneliness, especially among teenagers and young adults. CHT believes that technology should be designed to support psychological well-being, not undermine it. This extends beyond social media to the pervasive effects of digital platforms on self-worth, attention span, and focus. CHT has catalogued these effects in tools like the "Ledger of Harms", a comprehensive list of the documented negative effects of technology on society, which it uses to raise awareness and urge tech companies toward reform.

The Success of The Social Dilemma

One of CHT’s most significant achievements is its involvement in the documentary The Social Dilemma, which brought the issues of technology’s impact on society to a broad, mainstream audience. This hugely popular Netflix documentary combined interviews with former tech insiders and dramatized real-life scenarios to illustrate how social media platforms manipulate users’ behavior, often without their awareness. The film entered the public discourse, prompting many to re-evaluate their digital habits and question the ethics of big tech’s practices.

Screenshot from ‘The Social Dilemma’ trailer.

Beyond the documentary, CHT has developed a suite of resources aimed at different groups—tech industry professionals, educators, policymakers, and the general public. For example, its “Design Guide for Humane Technology” provides a template to help tech companies prioritize user well-being in their products. CHT also champions the “Digital Well-Being” movement, which inspired features like Apple’s Screen Time and Google’s Digital Wellbeing, giving users tools to monitor and moderate their screen time.

Apple iPhone’s Screen Time tool.

While these initiatives have made an impact, CHT recognizes that these are only initial steps toward broader change. They continue to push for systemic reform within tech companies, calling for a fundamental redesign of algorithms and content delivery systems. Through a combination of public awareness campaigns and advocacy (their podcast ‘Your Undivided Attention’ has been hugely successful), CHT is working to transform the very structure of how digital platforms operate—making them more humane, transparent, and accountable.

GPT-4 and The Rise of the Golem

A big focus of CHT is not only the risks of social media, but also the rise and impact of AI. In a 2023 episode of their podcast titled ‘The AI Dilemma’, Harris and Raskin argued that with the release of GPT-4 in early 2023, advanced AI models had become “Golem-class AIs,” drawing a comparison to the golem of Jewish folklore.

Rabbi Loew and Golem by Mikoláš Aleš, 1899 (Public domain via Wikimedia)

Traditionally, a golem is an inanimate creature crafted from clay and brought to life through mystical means, symbolizing both protection and potential peril when its powers grow beyond its creator’s control. In this sense, Harris and Raskin warn that today’s AI has taken on a golem-like quality: it possesses emergent capabilities that we don’t fully understand or control, potentially leading to unintended consequences. They argue that, like a golem, AI is now more than the sum of its parts, capable of actions its creators may not foresee, raising urgent questions about responsibility, safety, and the rapid development of these technologies.

An Uphill Battle, but Not Without Hope

Despite its growing influence, CHT—and humanity more broadly—faces a significant challenge in shaping and controlling the development and use of technologies like social media and AI. Some argue CHT’s approach is idealistic, with limited power to sway tech giants whose business models are rooted in the very practices CHT seeks to change. While CHT has managed to influence public discourse, the shift toward humane and safer technology within companies has been slow, and profit motives remain a formidable barrier. As Harris himself notes:

there is a 30-to-one gap in people building AIs and the people who work on safety.

He emphasizes, however, that the cultural shift they’re advocating for won’t happen overnight—but there is precedent that offers hope. Harris and Raskin draw a parallel to humanity’s response to the threat of nuclear annihilation during the Cold War, which led to the establishment of protocols, treaties, and institutions to regulate nuclear weapons. Though this shift took time, initiatives like The Day After—the most-watched made-for-TV film of its time, broadcast to American and, later, Soviet audiences during the 1980s—helped foster a shared understanding of the catastrophic consequences of full-scale nuclear war. (Soberingly, even with these advances, the threat of a nuclear conflagration remains very real.)

Screenshot from ‘The Day After’ (film).

Hope for the Future?

With Harris quoting statistics like

50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI,

it’s easy to feel overwhelmed. However, CHT offers a potential counterbalance to the breakneck pace of technological development, underscoring the ethical imperative to ensure that new technologies enhance human potential rather than diminish it. By advocating for tools that empower rather than undermine humanity, CHT champions an urgent and essential mission.

Biologist Jeremy Griffith captures the sentiment well:

Giving us mad, ego-crazed humans superpowers is ridiculously irresponsible!

Next week, I’ll take a closer look at the initiatives he’s leading to help shape a more positive, healthy society.

Finally, Raskin’s update of a famous parable offers a stark insight into the danger of the power of AI:

Give a man a fish, and you feed him for a day. Teach a man to fish, and you feed him for a lifetime. But teach an AI to fish, and it’ll teach itself biology, chemistry, oceanography, evolutionary theory—and then fish all the fish to extinction.

Don’t teach an AI to fish… (Image AI-generated by Author)
About the Author
Growing up in Sydney, Australia meant I was unquestioningly secular, as perhaps only an Anglo Australian can be. It followed that my vehicle for answering the whys and wherefores of existence was science. Recently I discovered that my great-grandmother on my mother's side was Jewish; and moreover, Judaism was matrilineal! With this aspect of my heritage revealed, a great need was awakened in me to reconcile the scientific and religious approaches.