Sharon Gal Or
Ethics in AI

Universities are Dinosaurs?

Image: The AI meteorite is fast approaching, and only those who evolve survive.

How Higher Education Risks Extinction in the Age of AI – And How We Can Evolve.

“The real task of education is not to fill minds with facts, but to light a fire of inquiry.” – Socrates (attributed)


In every great extinction event, the warning signs were there – just ignored.

Today, universities face their own K–T extinction moment. The impactor is not a rock from space, but a technology: artificial intelligence. And unless higher education evolves rapidly, it risks going the way of the dinosaurs – slow, proud, and extinct.

This metaphor struck me back when I was studying for my MSc in Plant Sciences. I was learning about Rubisco – the most abundant protein on Earth, essential to photosynthesis, present in every green plant cell. Yet despite its abundance, Rubisco is remarkably inefficient. Evolution kept it around not because it was the best solution, but because plants couldn’t adapt fast enough to replace it. Nature tolerated suboptimal performance – until crisis hit.
Today’s universities are at a similar Rubisco moment: abundant, vital – but slow to evolve in a rapidly changing environment.

In every technological revolution, there are missteps.
The printing press spread both wisdom and propaganda.
The internet democratized knowledge – and amplified division.
And today, as artificial intelligence integrates into higher education, we are once again witnessing the double-edged nature of innovation.

The truth is, we won’t fail because AI is too smart. We will fail because we weren’t wise enough.

Last month, I was invited to San Salvador, where I gave a series of talks and workshops on the integration of AI into higher education. The visit, hosted by the Honorary Consulate of El Salvador in Israel, gave me the opportunity to engage with university leaders, faculty, students, and policymakers.
It was the students’ questions – sharp, courageous, and future-facing – that encouraged me to continue my personal exploration of AI’s role in reshaping higher education.

Universities, under pressure to modernize, rushed to deploy AI: automated grading, predictive admissions algorithms, AI-powered proctoring systems. But instead of deepening learning, many of these early experiments exposed and amplified the systemic weaknesses already festering inside academia: bias, surveillance, inequality, and a shallow understanding of human development.

As historian Yuval Noah Harari reminds us,

“Technology is never deterministic. Its impact depends on how we use it.”

The early failures of AI in universities are not technical glitches – they are philosophical failures. They reveal how little thought many institutions gave to fairness, trust, creativity, and human dignity before jumping into the future.

If education is truly about cultivating human potential, then the first wave of AI integration is a stark reminder of how easily that mission can be undermined – when tools are adopted without rethinking the deeper purpose of learning itself.

Before we look forward, we must first understand how and where things went wrong.

Image: “You want to deploy AI in education?” → Start here: Ethics → Transparency → Co-Design with Students → If not? → “STOP. Rethink.”

Rethinking Deployment: A New Ethical Model for AI in Higher Education

Before rushing to integrate AI technologies into educational ecosystems, institutions must pause and rethink the fundamental process. AI is not a plug-and-play solution; it is a systemic intervention that shapes not just how we teach, but who we become as a learning society.

A simple but powerful model can guide universities through this transformation:

“You want to deploy AI in education? Start here: Ethics → Transparency → Co-Design with Students. If not? STOP. Rethink.”

Ethics First:
Every AI deployment must begin with a robust ethical framework. Universities must articulate clear principles: fairness, inclusivity, privacy, dignity, and human flourishing. Without this foundation, even the most sophisticated technologies risk perpetuating harm or exacerbating inequalities.

Transparency Always:
Students, faculty, and stakeholders must know how AI systems function, what data they collect, how decisions are made, and how accountability is ensured. Transparency transforms AI from a black box into a shared tool of trust and learning.

Co-Design with Students:
Education is not a service delivered to passive recipients; it is a collaborative ecosystem. Students must not merely be subjects of AI experimentation – they must be co-creators. Participatory design processes that involve learners in shaping AI tools lead to higher relevance, acceptance, and alignment with educational values.

If these steps cannot be fulfilled, the message is clear:
“STOP. Rethink.”

Deploying AI without an ethical and participatory foundation is not modernization – it is institutional negligence. Universities that aspire to thrive in the next era must not only innovate faster; they must innovate wiser.
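To make the model concrete, here is a minimal sketch of how the three checkpoints might be encoded as a pre-deployment readiness review. The structure, field names, and criteria below are illustrative assumptions of mine, not an existing standard or tool; any real review would be richer and would involve human deliberation, not a script.

```python
from dataclasses import dataclass

# Minimal sketch only: the checkpoint criteria below are illustrative
# assumptions, not a standard instrument. They encode the model above:
# Ethics -> Transparency -> Co-Design with Students; if any checkpoint
# fails, the recommendation is "STOP. Rethink."

@dataclass
class ReadinessReview:
    # Ethics first
    ethical_framework_published: bool   # fairness, inclusivity, privacy, dignity
    harms_and_bias_assessed: bool
    # Transparency always
    data_collection_disclosed: bool
    decision_logic_explained: bool
    accountability_owner_named: bool
    # Co-design with students
    students_on_design_team: bool
    pilot_feedback_incorporated: bool

    def checkpoints(self):
        return {
            "Ethics": self.ethical_framework_published and self.harms_and_bias_assessed,
            "Transparency": (self.data_collection_disclosed
                             and self.decision_logic_explained
                             and self.accountability_owner_named),
            "Co-Design with Students": (self.students_on_design_team
                                        and self.pilot_feedback_incorporated),
        }

    def recommendation(self) -> str:
        failed = [name for name, passed in self.checkpoints().items() if not passed]
        if failed:
            return "STOP. Rethink. Unmet checkpoints: " + ", ".join(failed)
        return "Proceed – but only to a limited, reviewable pilot."


if __name__ == "__main__":
    review = ReadinessReview(
        ethical_framework_published=True,
        harms_and_bias_assessed=True,
        data_collection_disclosed=True,
        decision_logic_explained=False,   # e.g. the vendor's model is a black box
        accountability_owner_named=True,
        students_on_design_team=False,    # learners were never consulted
        pilot_feedback_incorporated=False,
    )
    print(review.recommendation())
    # -> STOP. Rethink. Unmet checkpoints: Transparency, Co-Design with Students
```

The point of such a checklist is not automation of judgment but forcing the conversation: every unchecked box is a meeting that has to happen before a single student's data is touched.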

Case Studies: Where It Went Wrong

Before we can dream boldly of the future, we must first reckon honestly with the present.

The early waves of AI integration into higher education have revealed a brutal and uncomfortable truth: without ethics, foresight, and humility, innovation can do more harm than good.

Like the dinosaurs facing a sudden extinction event, universities that move too slowly – or that remain blind to their own structural biases – are not just lagging behind. They are positioning themselves for extinction.

The collapse will not come because AI is too intelligent. It will come because institutions failed to ask the right questions when it mattered most.

In 2020, the UK’s Ofqual agency deployed an algorithm to predict final exam grades during COVID-19 disruptions (BBC News, 2020). Intended to bring fairness and objectivity, it instead downgraded thousands of students from disadvantaged schools while protecting those from elite institutions. Public outrage forced the government to abandon the system. The lesson was stark: AI does not remove human bias; it mirrors and magnifies it unless explicitly designed otherwise. An algorithm is only as fair as its inputs.
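A deliberately simplified toy model (my own illustration, not Ofqual's actual algorithm) shows how this happens: once individual predictions are anchored to a school's historical results, a top student at a historically low-performing school is downgraded no matter how strong their teacher's assessment.

```python
# Toy illustration only – not Ofqual's real algorithm. It shows how
# standardizing individual grades against a school's historical results
# penalizes strong students at historically low-performing schools.

def standardized_grade(teacher_assessment: int, school_historical_best: int) -> int:
    """Cap the teacher's assessment at the best grade the school has
    historically achieved (grades on a 1-9 scale, 9 highest)."""
    return min(teacher_assessment, school_historical_best)

students = [
    {"name": "A", "teacher_assessment": 9, "school_historical_best": 7},  # under-resourced school
    {"name": "B", "teacher_assessment": 9, "school_historical_best": 9},  # elite school
]

for s in students:
    awarded = standardized_grade(s["teacher_assessment"], s["school_historical_best"])
    print(f"Student {s['name']}: assessed {s['teacher_assessment']}, awarded {awarded}")
# Identical individual performance, unequal outcomes: the bias lives in the inputs.
```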

Around the same time, universities scrambling to preserve academic integrity during remote learning turned to AI-driven proctoring tools like Proctorio and ExamSoft (The New York Times, 2020). These systems, designed to monitor keystrokes, eye movements, and ambient noise, unleashed a wave of protests from students and faculty alike. Privacy was violated. Racial and socioeconomic biases were exposed. Anxiety skyrocketed. The backlash revealed a fundamental principle: you cannot teach trust through surveillance. Education rooted in suspicion breeds resentment, not resilience.

Even in less visible arenas, cracks were showing. Studies from MIT demonstrated that AI essay graders, adopted by several US school districts, could be easily fooled by students using complex vocabulary and formulaic structures – even if their arguments were incoherent (MIT Technology Review, 2019). The machine rewarded style over substance. The lesson was simple but profound: AI cannot measure depth. Only humans can nurture true understanding.

In the admissions process, where the stakes are even higher, some universities began experimenting with predictive algorithms to aid in selection (Reuters, 2020). These tools promised objectivity but instead encoded and perpetuated historical injustices related to race, income, and geography. By training on flawed historical data, AI systems risked cementing exclusion rather than dismantling it. The truth became undeniable: if you train on injustice, you replicate injustice.

And finally, in an act of institutional panic, many universities and school districts moved to ban ChatGPT and other generative AI tools in 2023 (The Guardian, 2023). They sought to prevent cheating, but only succeeded in driving innovation underground. Students adapted anyway, learning faster and more creatively outside official channels. The real failure was not technical, but pedagogical: banning innovation does not stop it. It merely disconnects institutions from the future.

Across all these cases, one pattern is clear. Technology does not save institutions from their own shortcomings. Values must come first: before machines decide, humans must decide the values. And when they fail to do so, the cost is measured not in broken systems, but in broken trust – and broken futures. For readers wanting to dive deeper, detailed case reports on the Ofqual grading scandal and the AI proctoring backlash highlight the complexities and consequences of these early missteps.

Universities stand today at a crossroads.
They can either learn from these early failures, embracing ethics, transparency, and co-creation as their foundation – or they can continue along the slow, blind path toward irrelevance.

The choice is urgent. The clock is ticking.

An overview of key incidents where AI integration backfired, highlighting lessons learned for future educational resilience.

Image: Universities 2025? Dinosaurs vs. Agile Mammals?
Image: Bias by Design

Root Cause Analysis: Why We Failed

If artificial intelligence is a mirror, then its early use in higher education reflected something far more unsettling than technical immaturity. It revealed deep, systemic fractures – fractures that threaten the very soul of education if left unresolved.

When universities first welcomed AI onto their campuses, they treated it as a tool for efficiency rather than a catalyst for reimagining learning. Automation was applied to grading, surveillance, and administration, but rarely to the deeper mission of cultivating wisdom, creativity, or ethical courage. AI became a shortcut – a way to process students faster, not to help them grow deeper. As scholar Ruha Benjamin warned, “When we outsource responsibility to algorithms, we also automate injustice.” Instead of liberating education, technology risked entrenching its oldest biases in new digital forms.

In the rush to modernize, universities optimized what they could easily measure: grades, attendance records, time-on-task. Yet education’s most profound outcomes – empathy, ethical reasoning, original thought – defy neat quantification. Data dashboards could not capture the messy, relational work of becoming human. Projects like Harvard’s Project Zero have long demonstrated that creativity, curiosity, and moral reasoning can be nurtured and assessed, but they require different methods: portfolios, reflections, conversations. In mistaking what was measurable for what was meaningful, institutions hollowed out the purpose of learning itself.

The adoption of AI-driven proctoring and monitoring tools exposed an even deeper fracture: a culture of suspicion. Students were treated not as apprentices of knowledge but as potential cheats to be surveilled and punished. AI flagged different cultural behaviors, neurodivergent traits, and even skin tones as “anomalies.” The psychological safety essential for learning eroded under constant scrutiny. Research from the Electronic Frontier Foundation shows that surveillance-heavy environments diminish trust, stifle innovation, and disproportionately harm marginalized students. Education rooted in suspicion cannot grow minds; it only cages them.

Underlying all these failures was a more profound shortcoming: the abandonment of long-term thinking. Most institutions deployed AI reactively – rushing tools into classrooms during crises like COVID-19, chasing enrollment numbers without building systemic resilience. They forgot that true innovation demands foresight, not panic. As ethicist Wendell Wallach reminds us, “Ethics must be designed into AI, not bolted on afterward.” True resilience lies not in reacting faster, but in imagining farther – decades into the future, not just to the next enrollment cycle. For institutions seeking to embed ethical foresight into AI initiatives, Virginia Dignum’s work on Responsible Artificial Intelligence offers a foundational guide on how to design values into technical systems.

The path forward is clear. Universities must move beyond technical fixes and commit to ethical re-foundation. Initiatives like Oxford’s Ethical AI in Education project show that it is possible to design educational systems where technology amplifies human dignity rather than diminishing it. This re-foundation demands more than new tools; it demands a new philosophy of learning – one rooted in creativity, trust, and responsibility.

If we fail to make this shift, we risk repeating the fate of the dinosaurs: magnificent, powerful, but ultimately undone not by external forces alone – but by their inability to evolve when it mattered most.

Call to Action:


Education has always been about the future. Now more than ever, that future demands a reawakening of purpose. Institutions must move beyond adopting AI reactively. They must design new ecosystems where intelligence, ethics, and imagination co-evolve. The next generation is not asking for better machines. They are asking for wiser, more human systems.

In the words of Sir Ken Robinson:

“Creativity is as important in education as literacy. And we should treat it with the same status.”

The same is true today of ethical AI literacy.

If we want the next chapter of higher education to be worthy of the term education at all – it’s time to rewrite the script.

“Ethical thinking must precede technological deployment.” – Wendell Wallach, Moral Machines

Time to Rewrite the Script

Higher education stands at a fork in the road.

One path continues as before: treating AI as a tool to automate outdated systems, a course that leads to deeper alienation, inequality, and irrelevance. The other path sees AI not as a crutch for a broken model, but as a catalyst for human flourishing.

The early failures of AI in universities are not the end of the story. They are the first drafts. They show us – urgently and vividly – what must change:

  • AI must be designed into education, not bolted onto it.
  • Ethics must be lived, not legislated after the fact.
  • Students must be partners, not products of the system. 

The future demands not just new technologies – it demands new relationships: to learning, to wisdom, and to each other. Let’s ensure that our educational systems honor the mission for which they were created in the first place.

If you want to predict the future of AI in education, first ask: What kind of humans are we trying to create?

Read the full article on Medium.

About the Author
Sharon Gal Or – Pioneer of Transformation; Israeli Ambassador at the U.S. Transhumanist Party. An innovation, sustainability, and leadership strategist advising government, non-profit, education, and arts bodies on creative education. He lectures in international circles and leads and hosts training programs globally.