AI Models Pose No Existential Threat to Humanity: Researchers


AI has become one of the most discussed topics in the tech sector. Some people worry it could be misused, while others see it as a major aid in medicine, science, and creative work.

So, does AI threaten humanity? According to a large-scale study that tested thousands of Large Language Models (LLMs), the technology behind apps like ChatGPT, the answer is no.


Researchers from the University of Bath in the UK and the Technical University of Darmstadt in Germany found that LLMs’ skills come from their ability to follow instructions, remember information, and use language proficiently. These traits explain both what the models can and cannot do.

The study tested whether LLMs could handle tasks they had never encountered before; such capabilities are known as emergent abilities.

Past research showed that LLMs can answer questions about social situations even though they weren’t specifically trained for them.

It was previously thought that this meant the models “understood” social situations. However, the researchers showed that it was actually due to a technique called “in-context learning” (ICL), in which a model picks up a task from a few examples included in the prompt, according to the University of Bath.
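To make ICL concrete, here is a minimal sketch of how a few-shot prompt is assembled. The sentiment-labelling task and the helper function below are illustrative placeholders, not part of the study; the point is that the model is shown worked examples at inference time rather than being retrained.

```python
def build_icl_prompt(examples, query):
    """Assemble a few-shot prompt from (input, label) example pairs."""
    lines = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The final entry leaves the label blank for the model to complete.
    lines.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(lines)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]

prompt = build_icl_prompt(examples, "A film I will happily watch again.")
print(prompt)
# Sent to an LLM, this prompt is typically completed with "positive" --
# the pattern is inferred from the two examples, with no new training.
```

This is what the researchers mean when they attribute apparently emergent behaviour to ICL: the ability comes from pattern-matching on examples in the prompt, not from new knowledge acquired spontaneously.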

The study found that although LLMs can follow instructions and use language well, they still need specific training to learn new skills.

This means they are controllable, predictable, and safe, leading to the conclusion that LLMs do not pose an existential threat to humanity.

The research team noted that even as LLMs are trained on ever-larger datasets, they can still be deployed safely. However, the potential for misuse remains.

In the future, these AI models might become better at understanding and generating sophisticated language and at following detailed prompts, but they are unlikely to develop complex reasoning skills, according to the researchers.

Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study, said that the fear of AI acting in unexpected or dangerous ways is unfounded.

He added that while it’s important to manage the risks of AI misuse, like fake news and fraud, creating regulations based on imagined threats is premature.
