
Could super AI appear as early as 2027?

Artificial intelligence is developing by leaps and bounds, which worries not only ordinary internet users but also some scientists. We recently wrote about Microsoft’s Copilot chatbot, which declared itself superintelligent and began demanding worship.

More specifically, Copilot described itself as an artificial general intelligence (AGI), that is, an intelligence matching or exceeding that of humans. Fortunately, this unusual behaviour of Copilot was a bug, and no AGI currently exists. However, some machine learning experts believe that general AI could emerge as early as 2027.

After that, according to SingularityNET founder Ben Goertzel, AGI could quickly evolve into artificial superintelligence (ASI), possessing all the cumulative knowledge of human civilization.

What is AGI?

Artificial intelligence (AI) is everywhere, from smart assistants to self-driving cars. But what happens if the world gets a super AI that can do more than just perform specific tasks? What if there were a type of artificial intelligence that could learn and think like humans, and even surpass them?


This is the vision of artificial general intelligence (AGI), a hypothetical form of AI that could perform any intellectual task available to humans. AGI is often contrasted with artificial narrow intelligence (ANI), the current state of the art, in which systems excel only in one or a few areas, such as playing chess or recognizing faces.

AGI differs from modern AI in its ability to perform any intellectual task that a human can do, and to exceed human performance. This difference rests on several key characteristics, including:

  • Abstract thinking
  • Making generalisations from specific examples, drawing on a variety of background knowledge
  • Using common sense and awareness to make decisions
  • Understanding cause and effect, not just correlation
  • Effective communication and interaction with people and other systems

Although these capabilities are vital to achieving human-like or superhuman intelligence, today’s intelligent systems are still far from AGI (and ASI).

Note that the concept of superintelligent AI is not new, and the idea itself remains controversial. Some AI enthusiasts believe that the emergence of AGI is inevitable and will usher in a new era of technological and social progress.

Others are more sceptical and warn of ethical and existential risks following the creation of a powerful and unpredictable “mind.”

Consequences and risks of AGI

AGI raises scientific, technological, social and ethical problems with potentially serious, even catastrophic, consequences. From an economic perspective, superintelligent AI could disrupt existing markets and exacerbate existing inequalities, while even its promised improvements in education and healthcare could bring new challenges and risks.


From an ethical perspective, AGI may introduce new norms of social behaviour and give rise to new forms of conflict, competition and violence.

Essentially, superintelligent AI would call into question existing meanings and goals, expand knowledge, and redefine the nature and purpose of humanity.

Therefore, all stakeholders, including scientists, developers, policymakers, educators, and citizens, must consider and address these impacts and risks.

Will AGI arrive in 2027?

According to one leading AI expert, AGI may come sooner rather than later. During his closing keynote at this year’s Beneficial AGI Summit in Panama, SingularityNET founder Ben Goertzel said that while humans likely won’t create human-level artificial intelligence or superhuman artificial intelligence until 2029 or 2030, there is a possibility that AGI will appear in 2027.


“AGI can quickly evolve into artificial superintelligence (ASI), possessing all the cumulative knowledge of human civilization. And while none of us has precise knowledge of how to create intelligent AI, its emergence within, say, the next three to eight years seems quite plausible to me,” Goertzel said.

To be fair, Goertzel is not alone in trying to predict when AGI will arrive. Last fall, for example, Google DeepMind co-founder Shane Legg repeated the prediction he has held for more than a decade: that there is a 50/50 chance humans will invent AGI by 2028.

In a tweet last May, “AI godfather” and ex-Googler Geoffrey Hinton said he now predicts, “without much confidence,” that AGI is five to 20 years away.


Best known as the creator of the humanoid robot Sophia, Goertzel has long theorised about the date of the so-called “singularity” – the point at which artificial intelligence reaches and subsequently surpasses human levels of intelligence.

Pipe dream

Until the last few years, AGI as Goertzel and his colleagues describe it seemed like a pipe dream. But with the rise of large language models (LLMs), which OpenAI brought into the mainstream with the release of ChatGPT in late 2022, that possibility looks increasingly plausible, although large language models on their own are not capable of leading to general AI.

“My own opinion is that once we have human-level AGI, superhuman AGI (ASI) will become a reality within a few years. I think once AGI can analyse its own mind, it will start doing engineering and science at a human or superhuman level,” says the artificial intelligence pioneer.


All this means that a strong general AI will be able to create an AI even smarter than itself, after which an intelligence explosion, i.e. the singularity, will occur.

Naturally, there are many caveats to what Goertzel preaches, not least that, by human standards, even a superhuman artificial intelligence would not possess “intelligence” in the sense that we do.

It also cannot be ruled out that technology will simply continue to evolve along a linear path, as if in a vacuum, apart from the rest of human society and the harm we cause to the planet. Still, it’s a compelling theory, and given how quickly artificial intelligence has advanced in just the last few years, it shouldn’t be completely dismissed.

 
