OpenAI Begins Training New Flagship AI Model to Succeed GPT-4 Technology
OpenAI, a leading artificial intelligence company based in San Francisco, announced on Tuesday that it has started training a new flagship AI model to succeed its popular GPT-4 technology, which powers the ChatGPT online chatbot.
In a blog post, OpenAI said the new model is expected to bring “the next level of capabilities” as the company pursues artificial general intelligence (A.G.I.), a machine that can do anything the human brain can do. The new model would drive a range of AI products, including chatbots, digital assistants akin to Apple’s Siri, search engines, and image generators.
To address the potential risks posed by the new model and future technologies, OpenAI also announced the creation of a Safety and Security Committee. The company said it welcomed a robust debate about AI safety.
While OpenAI strives to advance AI technology ahead of its competitors, it also aims to address concerns raised by critics about the dangers of AI, including the spread of disinformation, the displacement of jobs, and threats to humanity. The company, along with other tech giants like Google, Meta, and Microsoft, has steadily increased the power of AI technologies over the past decade.
GPT-4, released in March 2023, enables chatbots and other software applications to answer questions, write emails, generate content, and analyze data. OpenAI recently unveiled an updated version, GPT-4o, which adds capabilities such as generating images and responding in a highly conversational voice.
However, OpenAI faced criticism from actress Scarlett Johansson, who claimed that the voice used in the GPT-4o model sounded similar to hers without her consent. The company denied the allegations and stated that the voice was not Ms. Johansson’s.
As OpenAI continues to train its new AI model, the Safety and Security Committee will work to hone policies and processes for safeguarding the technology. The company expects to put the new policies in place by late summer or fall.
Despite the recent departures of key personnel, including Ilya Sutskever, a co-founder and safety leader at OpenAI, the company says it remains committed to addressing the potential risks of AI. Dr. Sutskever’s exit raised concerns about OpenAI’s approach to AI safety, but the company has sought to reassure the public that it is dedicated to the safe development and deployment of its technologies.