Could AI be the reason we haven’t encountered alien civilizations?

Artificial intelligence (AI) is swiftly emerging as one of the most transformative technologies in human history. Its potential impacts, whether positive or negative, have sparked heated debate.

However, a recent paper authored by Michael Garrett, a professor of astrophysics at the University of Manchester, suggests that AI might be the explanation for why we have yet to encounter technologically advanced civilizations elsewhere in the Universe.

There are many hypotheses to explain the Fermi Paradox, the puzzle of why astronomers have not detected any signs of alien life in the vast array of astronomical data amassed over the last 60 years. Among them are the Zoo Hypothesis and the Dark Forest Theory.

But Garrett argues that AI could be “a great filter”, a concept describing a universal barrier or insurmountable challenge that prevents nearly every civilization from surviving long enough to become detectable.

According to Garrett, AI could be responsible for the scarcity of advanced technological civilizations, as this filter emerges before any civilization can achieve multi-planetary travel. Perhaps even more disheartening, this implies that the typical longevity of any technical civilization is less than 200 years.

But not all is doom and gloom…

Benefits, risks, and artificial superintelligence

AI has developed on unprecedented timescales compared to other disruptive technologies, becoming integrated into the fabric of our daily lives and fundamentally altering how we engage with technology.

Seemingly no sector has been left untouched, from healthcare to commerce and education, to policing and national security. “It is difficult to think of an area of human pursuit that is still untouched by the rise of AI,” wrote Garrett.

As with any technology, for all the potential good it might do, in the wrong hands it has enormous potential to do harm. “Not surprisingly, the AI revolution has also raised serious concerns over societal issues such as workforce displacement, biases in algorithms, discrimination, transparency, social upheaval, accountability, data privacy, and ethical decision making,” stated Garrett.

While these ethical and societal concerns warrant serious consideration, Garrett argues that they may not be the factors leading to the downfall of civilization. In 2014, the late Stephen Hawking famously warned that AI could spell the end of humankind if it gains the ability to evolve independently, redesigning itself at an ever-increasing rate and evolving into artificial superintelligence. “Very rapidly, human society has been thrust into uncharted territory,” noted Garrett.

Beyond fears around the weaponization of artificial superintelligence (Garrett argues that the rapidity of AI’s decision-making processes could escalate conflicts in ways that far surpass the original intentions), a deeper question looms: what happens when these AI systems no longer require the support of “biological civilizations” to exist?

“Upon reaching a technological singularity, [artificial super-intelligent] systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics,” wrote Garrett.

“The practicality of sustaining biological entities, with their extensive resource needs such as energy and space, may not appeal to an [entity] focused on computational efficiency — potentially viewing them as a nuisance rather than beneficial,” he continued.

This is indeed a worrisome idea, and AI doomsayers will lean in this direction, but it currently represents a worst-case scenario.

The vital role that regulation will play

The truth is, we’re navigating unknown territory with AI, and its ultimate trajectory remains uncertain. The AI we currently encounter in everyday life still operates within human-established constraints, and it’s essential to recognize that we still have agency and the ability to shape its course.

Garrett argues that a possible safety net against the perils of a superintelligent AI overlord is multi-planetary travel, where “a multi-planetary biological species could take advantage of independent experiences on different planets, diversifying their survival strategies, and possibly avoiding the single-point failure that a planetary-bound civilization faces.”

Given the speed with which AI is outpacing space travel, this doesn’t seem like a likely (or realistic) solution. “This naturally leads us to the thorny matter of AI regulation and control,” wrote Garrett. “Without practical regulation, there is every reason to believe that AI could represent a major threat to the future course of not only our technical civilization but all technical civilizations.

“While industry stakeholders, policymakers, individual experts, and their governments already warn that regulation is necessary, establishing a regulatory framework that can be globally acceptable is going to be challenging.”

But not impossible. Recently, the implications of autonomous AI decision-making have led to calls for a moratorium on the development of AI until a responsible form of control and regulation can be introduced.

There will be difficulties in developing a universal set of regulations, given the varying cultural, economic, and societal priorities of different nations. In addition, rapid advances in AI will likely outpace any agreed regulatory frameworks, raising concerns that regulation will always lag well behind new and unanticipated advances in the field.

These are unprecedented times, but by looking beyond the stars, we might be able to gain better perspective on our own present challenges.

“By examining the possibilities of alien civilizations, [this thought experiment] helps us contemplate the long-term sustainability of our own civilization, the potential risks we face, and how we might navigate and overcome future challenges,” wrote Garrett.

Maybe we’re the one planet in the cosmos that gets it right.

Reference: Michael A. Garrett, Is artificial intelligence the great filter that makes advanced technical civilisations rare in the universe? Acta Astronautica (2024). DOI: 10.1016/j.actaastro.2024.03.052

Feature image credit: Vincentiu Solomon on Unsplash