How OpenAI will transform ChatGPT into Siri and Alexa

Elizabeth Smith

ChatGPT now speaks and sees. With its new features, OpenAI's chatbot will be able to converse in its own voice, like Apple's Siri, Amazon's Alexa, or Google Assistant, as well as analyze photos.

Artificial intelligence getting "smarter": ChatGPT can now see, hear and speak

OpenAI has continued to update its generative AI chatbot since its launch last November, when it immediately became a phenomenon.

The company explained that the chatbot will soon be able to converse with users by voice, mimicking that of a real person, and to analyze photos that users upload to the platform.

The new features make the chatbot more useful right now, and they point to a future in which artificial intelligence tools understand the world around them, not just the online data they were trained on.

This brings ChatGPT closer to similar AI services such as Apple's Siri, Google Assistant, and Amazon's Alexa. The updates affect the official Android and iOS apps and will be available within two weeks to customers paying for a Plus subscription or an Enterprise subscription, the latter aimed exclusively at businesses.

The announcement comes on the same day that Amazon pledged to invest up to $4 billion in OpenAI rival Anthropic.

The move is part of a broader battle over generative artificial intelligence among global tech giants: Google is trying to catch up with its Bard chatbot, Meta is betting on a robust open-source ethos to gain an edge, and Microsoft is allied with OpenAI itself.

ChatGPT's new features

From now on, ChatGPT can also tell bedtime stories, settle debates at the dinner table, and read users' text input aloud.

In a demo of the new update shared by OpenAI, a user asks ChatGPT to make up a story about a "super hedgehog sunflower named Larry." The chatbot tells the story aloud in a human-sounding voice and can also answer follow-up questions such as "What was his house like?" and "Who is his best friend?", CNN reports.

The voice feature “opens the door to many creative and accessibility-focused applications,” OpenAI pointed out.

Reading photos, like Google Lens

As for the second new feature, "reading" images, OpenAI says users will soon be able to upload photos to the ChatGPT conversation box and have the chatbot analyze them to provide detailed guidance.

A user could, for example, take a picture of a set of ingredients and have the AI suggest a dish to make from them, complete with step-by-step instructions.

Currently, a popular service for obtaining image information is Alphabet’s Google Lens.

In addition, OpenAI said last week that ChatGPT will also soon be able to generate images, thanks to its integration with DALL-E 3, reports The Verge.

The collaboration with Spotify

At the same time, OpenAI announced a collaboration with Spotify to translate original English-language podcasts into Spanish and French using its AI.

Specifically, podcasters will be able to have their voices sampled and their shows translated while retaining their original voice.

Read also: Anthropic, Google, Microsoft and OpenAI team up in Frontier Model Forum for safe AI development. A bluff?
