Suspending AI development? Elon Musk and Steve Wozniak sign a petition

by Tim

Elon Musk, Steve Wozniak and more than 1,000 other entrepreneurs, scientists and professors have signed a Future of Life Institute petition calling for a temporary pause in the development of AI systems such as GPT-4, the model behind OpenAI's famous chatbot. What is driving these concerns?

Does Elon Musk want to stop the development of artificial intelligence (AI)?

Elon Musk, Steve Wozniak and other entrepreneurs, scientists and professors have signed a petition put forward by the Future of Life Institute, which calls for a temporary halt to the development of AI:

A total of 1,377 signatures have been collected as of this writing. The letter asks for a pause of at least six months in the training of systems such as GPT-4, the new version of the famous chatbot from OpenAI, a company that Elon Musk himself co-founded.

The main concern is humanity's place in the face of these developments. Several questions are raised along these lines, such as algorithms deciding what we see on social networks, the automation of various tasks, and the development of minds more intelligent than human beings:

"Such decisions should not be delegated to unelected technology leaders. Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks manageable. This confidence must be well justified and increase with the scale of a system's potential effects."

The petition also calls for refocusing AI development on systems that are "safe, interpretable, transparent, robust, aligned, trustworthy and fair."

Is AI a real danger?

Like any technological innovation, AI raises questions. As early as the Industrial Revolution, people worried about being replaced by machines, and works of fiction like Terminator have stoked fears of a doomsday scenario. But are we really on the doorstep of the Singularity? Opinions are divided.

These questions long predate AI's recent surge in popularity on social networks.

That said, there are genuine ethical issues to highlight, particularly around how these systems are trained. As with everything we do on the web, every interaction with an artificial intelligence gives it information about us, and we feed the algorithms of private companies for free.

It is therefore worth asking how this data is, and will be, used by these companies; for now, that may be a more concrete danger than the fantasy of a fully autonomous super-AI.

As for the replacement of human beings: as in other technological fields, while AI may well eliminate some jobs, it will also create others. In short, it is a morally neutral tool whose dangers or benefits depend on how it is used.
