Sam Altman, CEO of OpenAI, admits: GPT-4 now scares us, too

Sam Altman, CEO of OpenAI, warned that humanity must be prepared for the negative consequences of AI. Disinformation and cyber attacks are his main concerns.

Artificial intelligence is once again taking center stage worldwide, this time with the recent arrival of GPT-4, the language model behind the latest version of ChatGPT.

The new tool impresses with its capabilities, but it also raises major dilemmas acknowledged by its own creator, Sam Altman, the current CEO of OpenAI.

Sam Altman: GPT-4 now scares us, too

In a recent interview with U.S. media outlet ABC News, Altman acknowledged that the technology still holds great untapped potential, but also that it could pose real dangers to humanity.

“We have to be careful and, at the same time, understand that it is useless to keep everything in a lab. It is a product that we have to release, bring into contact with reality, and make mistakes with while the risks are low. That said, I think people should be happy that we are a little bit afraid of this. If I said it didn’t scare me, you shouldn’t trust me,” he said.

Disinformation and cyber attacks, Altman’s main concerns

The OpenAI CEO had previously expressed his fears in a post published on his personal Twitter account.

On this occasion, he stated that one of his main concerns is that ChatGPT could be used to generate content aimed at misinforming people.

“I am particularly concerned that these models could be used for large-scale disinformation. Now that they are getting better at writing computer code, they could be used for offensive cyber attacks,” he said in the interview.

Altman also noted that the ability of these tools to write code in various programming languages could create cybersecurity risks. “Now that they have improved their programming capabilities, they could be used to execute more aggressive cyber attacks,” he said.

However, Altman reflected, “This is a tool that is largely under human control.” In that sense, he noted that GPT-4 “waits for someone to give it input”. What is really worrisome, he added, is who is in control of those inputs.

Read also: GPT-4, the new version of ChatGPT: the main innovative features

Humanity needs time to adapt to AI

Altman maintained that the changes brought about by advances in artificial intelligence technology can be considered positive, but that humanity needs time to adapt to them.

He insisted that OpenAI must also go through this adaptation process in order to correct misuse and the harmful consequences that this type of system may have.

“We will definitely make corrections as these negative events occur. Now, while the risks are low, we are learning as much as we can and establishing constant feedback to improve the system and avoid the most dangerous scenarios,” the company’s CEO assured.

Read also: Midjourney, what is the software that creates super realistic images and why it could be a danger of misinformation
