Sam Altman, CEO of OpenAI, admits: GPT-4 now scares us, too

Elizabeth Smith

Sam Altman, CEO of OpenAI, warned that humanity must be prepared for the negative consequences of AI. Disinformation and cyber attacks are his main concerns.

Artificial intelligence is once again taking center stage worldwide with the recent arrival of GPT-4, the latest language model powering ChatGPT.

The new tool impresses with its capabilities. But it also raises major dilemmas, acknowledged by its own creator, Sam Altman, current CEO of OpenAI.

Sam Altman: ChatGPT-4 now scares us, too

In a recent interview with U.S. media outlet ABC News, Altman acknowledged that there is great potential yet to be developed, but that this technology could also bring real dangers to humanity.

"We have to be careful, and at the same time understand that it is useless to keep everything in a lab. It's a product that we have to release into the world, let it come into contact with reality, and make mistakes while the stakes are low. Having said that, I think people should be happy that we are a little bit afraid of this. If I said it didn't scare me, you shouldn't trust me."

Disinformation and cyber attacks, Altman’s main concerns

The OpenAI CEO had previously expressed his fears in a post published on his personal Twitter account.

On that occasion, he stated that one of his main concerns is that ChatGPT could be used to generate content aimed at misinforming people.

“I am particularly concerned that these models could be used for large-scale disinformation. Now that they are getting better at writing computer code, they could be used for offensive cyber attacks,” he said in the interview.

Altman also noted that the ability of these tools to write code in various programming languages could create cybersecurity risks. "Now that they have improved their programming capabilities, they could be used to execute more aggressive cyber attacks," he said.

However, Altman reflected, "This is a tool that is largely under human control." In that sense, he noted that GPT-4 "waits for someone to give it input", and that what is really worrisome is who controls those inputs.

Humanity needs time to adapt to AI

Altman maintained that the changes brought about by advances in artificial intelligence can be considered positive, but that humanity needs time to adapt to them.

He insisted that OpenAI must also go through this adaptation process, correcting inappropriate uses or harmful consequences that this type of system may have.

"We will definitely make corrections as these negative events occur. Now, while the risks are low, we are learning as much as we can and establishing constant feedback to improve the system and avoid the most dangerous scenarios," the company's CEO said.

