Sam Altman, CEO of OpenAI, admits: GPT-4 now scares us, too

Sam Altman, CEO of OpenAI, warned that humanity must be prepared for the negative consequences of AI. Disinformation and cyber attacks are his main concerns.

Artificial intelligence is once again taking center stage worldwide with the recent arrival of GPT-4, the latest version of the language model behind ChatGPT.

The new tool amazes with its capabilities, but it also poses major dilemmas, even for its creator, Sam Altman, the current CEO of OpenAI.

Sam Altman: ChatGPT-4 now scares us, too

In a recent interview with the U.S. media outlet ABC News, Altman acknowledged that the technology has great potential yet to be developed, but that it could also bring real dangers to humanity.

We have to be careful, and at the same time understand that it is useless to keep everything in a lab. It’s a product that we have to release so it comes into contact with reality, making errors while the risks are low. Having said that, I think people should be happy that we are a little bit afraid of this. If I said it didn’t scare me, you shouldn’t trust me.

Disinformation and cyber attacks, Altman’s main concerns

The OpenAI CEO had previously expressed his fears in a post published on his personal Twitter account.

On this occasion, he stated that one of his main concerns is that ChatGPT could be used to generate content aimed at misinforming people.

“I am particularly concerned that these models could be used for large-scale disinformation. Now that they are getting better at writing computer code, they could be used for offensive cyber attacks,” he said in the interview.

Altman also pointed out that the ability of these models to write code in various programming languages could create additional cybersecurity risks beyond disinformation.

However, Altman reflected, “This is a tool that is largely under human control.” In that sense, he noted that GPT-4 “waits for someone to give it input”, and that what is really worrisome is who controls those inputs.


Humanity needs time to adapt to AI

Altman said that the changes brought about by advances in artificial intelligence technology can, on the whole, be considered positive, but that humanity needs time to adapt to them.

He insisted that OpenAI must also carry out this adaptation process to correct inappropriate use or harmful consequences that this type of system may have.

“We will definitely make corrections as these negative events occur. While the risks are low, we are learning as much as we can and establishing constant feedback to improve the system and avoid the most dangerous scenarios,” the company’s CEO assured.

