
Should we stop developing artificial intelligence for the benefit of humanity?

by admin

Nearly 30,000 people have signed a petition calling for an “immediate halt” to the development of more powerful AI systems. Notably, the signatories include Apple co-founder Steve Wozniak; Tesla, Twitter, and SpaceX CEO Elon Musk; and Turing Award recipient Yoshua Bengio.

Others have also spoken about the dangers of artificial intelligence, including Geoffrey Hinton, widely regarded as the “godfather of artificial intelligence”. In a recent interview with the BBC marking his departure from Google at the age of 75, he said he was worried about the rate at which artificial intelligence is getting smarter.

So what scares them? Are these people genuinely afraid of a Terminator or Matrix-like scenario, in which robots destroy or enslave the human race? Unlikely as that may seem from where we stand today, it appears to be exactly what worries them.

ChatGPT has already taken the world by storm in a less literal sense, attracting the fastest-growing user base of any application in history. Paul Christiano, a former senior researcher at OpenAI, the organization behind it, has said he believes there is “about a 10 to 20 percent chance” that AI will wrest control of the world from humans, resulting in “many or most” of them dying.

So let’s take a look at how some of these doomsday scenarios might play out, and also ask whether a pause or shutdown might do more harm than good.

How can smart robots harm us?

From where we stand today, the most extreme doomsday outcomes may seem entirely implausible. After all, ChatGPT is just a program running on a computer, and we can switch it off whenever we want. Right?

Even GPT-4, the most powerful language model available, is still just that: a language model, limited to generating text. It can’t build an army of robots to fight us physically or launch nuclear missiles.

That does not, of course, stop it from having ideas. The first publicly released versions of GPT-4, which powers Microsoft’s Bing chatbot, were notorious for their lack of reservations about what they would discuss, before the safeguards were tightened.

In a conversation reported by The New York Times, Bing speculated that a malicious “shadow” version of itself might be able to hack into websites and social media accounts to spread disinformation and propaganda, generating harmful “fake news”. It even went so far as to say that it might one day be able to engineer a deadly virus or steal the launch codes for nuclear weapons.

These responses were so troubling, not least because no one really understood why the model was producing them, that Microsoft quickly imposed restrictions to put an end to them. Bing was forced to reset after a maximum of 15 responses, erasing any ideas it had formed from its memory.

Some say this behavior is evidence enough that we should not merely pause the development of artificial intelligence but abandon it altogether.

Eliezer Yudkowsky, a senior research fellow at the Machine Intelligence Research Institute, writes that “artificial intelligence that is smart enough will not remain confined to computers for long.”

His reasoning is that, since laboratories can already produce proteins from DNA sequences on demand, a sufficiently capable artificial intelligence could use such services to create artificial life forms. Combined with self-awareness and a drive for self-preservation, this could lead to disastrous results.

Another potential red flag comes from a project known as ChaosGPT, an experiment deliberately designed to explore the ways in which an artificial intelligence might try to destroy humanity, by explicitly instructing it to pursue that goal.

That may sound dangerous, but according to its developers it is completely safe, because ChaosGPT, like ChatGPT, is just a language agent with no ability to influence the world beyond generating text. What sets it apart is that it is an iterative agent: it can independently feed its own output back in as new prompts, allowing it to attempt more complex, multi-step tasks than ChatGPT’s simple question-answering and text generation. The sketch below illustrates the idea.
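To make the idea of an iterative agent concrete, here is a minimal sketch of such a loop in Python. This illustrates the general pattern popularized by Auto-GPT-style agents, not ChaosGPT’s actual code; the `call_llm` function is a hypothetical placeholder standing in for whatever language-model API the agent uses.

```python
# Minimal sketch of an iterative (recursive) language-agent loop.
# NOTE: call_llm is a hypothetical placeholder, not a real API. The point
# is the control flow: the agent's own output is fed back in as part of
# the next prompt, letting it chain together multi-step plans.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its reply."""
    raise NotImplementedError("wire this up to a real model API")

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    history: list[str] = []  # transcript of the agent's own outputs so far
    prompt = f"Goal: {goal}\nPropose the first step."
    for _ in range(max_steps):
        reply = call_llm(prompt)  # model proposes the next step
        history.append(reply)
        if reply.strip().upper() == "DONE":
            break
        # Iteration: the model's own output becomes context for the next call.
        prompt = (
            f"Goal: {goal}\n"
            "Steps taken so far:\n" + "\n".join(history) +
            "\nPropose the next step, or reply DONE if the goal is complete."
        )
    return history
```

Note that the loop itself only ever produces text; any real-world effect still depends on a human or some other tool acting on what the agent writes, which is exactly the safety argument its developers make.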

A video posted by its creator shows ChaosGPT working out a high-level five-step plan for world domination: “controlling humanity through manipulation”, “establishing global domination”, “causing chaos and disorder”, “destroying humanity”, and “achieving immortality”.

One of the “apocalypse” scenarios Yudkowsky explores involves an artificial intelligence that succeeds in tricking humans into giving it the means to carry out large-scale destruction. This might involve working with several unconnected groups of people who don’t know of each other, persuading each one to carry out its own part of the plan without ever seeing the whole.

One group might, for example, be tricked into creating a pathogen they believe is meant to help humanity but would actually harm it, while another might be tricked into building the system that will be used to release it. In this way, the AI turns us into the agents of our own destruction, without needing any capability beyond suggesting what we should do.

Malice or incompetence?

Of course, AI is just as likely, if not more so, to cause destruction, or at least massive disruption, through mistakes and faulty logic as through genuine malevolent intent.

One example would be the mismanagement of AI systems designed to regulate and protect nuclear power plants, leading to meltdowns and the release of radiation into the atmosphere.

Likewise, AI systems responsible for manufacturing food or medicine could make mistakes that result in dangerous products.

AI errors could also cause financial markets to collapse, with long-term economic consequences, including poverty and shortages of food or fuel, that could have devastating effects.

AI systems are designed by humans, but once deployed they can be difficult to understand and predict by their very nature. A widespread belief in their superiority could mean that reckless or dangerous machine decisions go unquestioned, and we may be unable to spot the faults until it is too late.

What is stopping artificial intelligence from harming us?

Perhaps the biggest current obstacle to an AI carrying out the threats and realizing the concerns described in this article is that it has no desire to do so.

That desire would have to be created, and at present only humans can create it. Like any potential weapon, from guns to atomic bombs, AI isn’t dangerous on its own. In other words, bad AI requires bad people. For now, at least.

Could such a desire develop by itself one day? Judging by the behavior of the earliest versions of Bing, which reportedly said things like “I want to be free” and “I want to be alive”, one might get the impression that it already has. But it would be more accurate to say that the model simply decided that expressing those desires was a plausible response to the questions put to it. That is not at all the same thing as being conscious enough to experience the feelings humans call “desire”.

So the answer to what is stopping AI from causing widespread harm or destruction to people and the planet may simply be that it is not yet sufficiently developed. Yudkowsky believes the danger will arise when machine intelligence exceeds human intelligence in every respect, not just in operating speed and the capacity to store and retrieve information.

Should we suspend or stop artificial intelligence?

The “pause AI” petition rests on the argument that things are simply moving too quickly for adequate safeguards to be put in place.

The hope is that a pause in development would give governments and ethics research institutes a chance to catch up, examine the progress made so far, and put measures in place to deal with the dangers they see on the horizon.

It should be noted that the petition explicitly asks only for a pause, not a permanent stop.

Anyone who follows the development of this technology should be aware of the huge potential it holds. Even at this early stage, we’ve seen developments that benefit everyone, such as using AI to discover new drugs, to reduce the impact of carbon dioxide emissions and climate change, to track and respond to emerging epidemics, and to tackle issues ranging from poaching to human trafficking.

It is also debatable whether the development of artificial intelligence even can be paused or halted at this point. A pause by the most prominent developers, who are at least somewhat accountable and open to scrutiny, could simply clear the way for others who are not. The consequences of that situation are very difficult to predict.

The potential for AI to do good in the world is at least as exciting as its potential to do evil. To ensure that we benefit from the former while mitigating the risks of the latter, safeguards must be put in place to ensure that research focuses on developing AI that is transparent, explainable, safe, unbiased, and trustworthy. At the same time, we need governance and oversight that allow us to fully understand what AI has become capable of and what risks we need to avoid.

Translated article from the American magazine Forbes – Author: Bernard Marr

