The ChatGPT chatbot from OpenAI has taken the world by storm, but a more potent successor may be on its way.
Following GPT-3.5, the model that underpins ChatGPT, the next large language model in the company’s GPT series, GPT-4, is reportedly arriving next week. That’s according to Andreas Braun, CTO of Microsoft Germany, who made the announcement at an event on March 9. Earlier reports had suggested GPT-4 could arrive as soon as this spring.
The new AI model’s specifics have largely been kept a secret, but Braun claims that GPT-4 will have multimodal capabilities, such as the capacity to convert text into video. That makes sense considering that rivals Meta and Google both introduced text-to-video generators in the fall of last year.
It is probably no coincidence that Microsoft is holding a special event on March 16. Satya Nadella, the tech giant’s CEO, will be on hand as the company shows off the “future of AI,” including how it will work in productivity applications such as Teams, Word, and Outlook.
What, then, is GPT-4 capable of? If it is anything like the technology Google and Meta showed off, it will be able to make short videos from rough text descriptions of a scene. It will also be able to upscale those clips with other AI models to make them look sharper. The systems have limitations, though: they tend to produce videos that look fake, with blurry subjects and distorted motion, and they have no sound.
However, moving from AI that generates images to AI that generates video is a significant development in and of itself.
Mark Zuckerberg, CEO of Meta, has previously explained that generating video is much harder than generating images because, in addition to producing each pixel correctly, the system must also predict how those pixels will change over time.
OpenAI already offers DALL-E, a tool that generates images from natural language descriptions.
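For developers, DALL-E image generation is exposed through OpenAI’s API. The snippet below is an illustrative sketch only: it assumes the official `openai` Python package (v1+) and an `OPENAI_API_KEY` environment variable, and the prompt, model name, and image size are arbitrary examples rather than anything tied to this announcement.

```python
# Illustrative sketch: assumes the `openai` Python package (v1+) is installed
# and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# Ask DALL-E to turn a natural language description into an image.
response = client.images.generate(
    model="dall-e-2",  # model name used here purely for illustration
    prompt="a watercolor painting of a lighthouse at dawn",
    n=1,
    size="512x512",
)

print(response.data[0].url)  # URL of the generated image
```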
Language models are, in a nutshell, algorithms that can recognize, summarize, and generate text based on patterns learned from large datasets, such as the text of Wikipedia. The values a model learns during training, referred to as parameters, are a rough indicator of its capacity to solve a problem such as generating text.
The largest model in the GPT-3.5 family has 175 billion parameters. By contrast, Meta’s own language model, which was recently made available to researchers, has up to 65 billion parameters.
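To make the notion of parameters concrete, the short sketch below counts a model’s learned weights. GPT-3.5’s weights are not publicly available, so the example uses the openly released GPT-2 model via the Hugging Face `transformers` library purely as a stand-in.

```python
# Illustrative sketch: GPT-3.5's weights are not public, so the openly
# available GPT-2 (small) model stands in to show what a "parameter" is.
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")

# Parameters are the numerical weights learned during training; summing the
# elements of every weight tensor gives the headline figure (e.g. 175 billion
# for GPT-3's largest model).
num_params = sum(p.numel() for p in model.parameters())
print(f"GPT-2 (small): {num_params:,} parameters")  # roughly 124 million
```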
How many parameters GPT-4 will have remains a mystery. OpenAI CEO Sam Altman previously dismissed as “complete bulls**t” rumors that GPT-4 would balloon the parameter count from GPT-3’s 175 billion to 100 trillion.
Altman reportedly demonstrated GPT-4 to members of Congress in January to allay their concerns about the risks posed by AI, including the technology’s capacity to mimic human writers and produce convincing images of fictitious events, known as deepfakes. According to reports, he showed off the new system’s improved security controls.
A number of chatbots, including OpenAI’s own ChatGPT as well as tools from Microsoft (OpenAI’s investor), Snapchat, Discord, and Slack, are already powered by the GPT-3.5 language model.
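Those products reach GPT-3.5 through OpenAI’s API. As a rough sketch of how a chatbot might call the model, the example below assumes the official `openai` Python package and the `gpt-3.5-turbo` chat model; the system prompt and question are placeholders, not anything used by the products above.

```python
# Rough sketch of how a chatbot might call GPT-3.5 via the OpenAI API.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the chat-tuned GPT-3.5 model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what a large language model is."},
    ],
)

print(response.choices[0].message.content)
```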
At the Microsoft Germany event, the company also reiterated some of the features it had previously revealed, including the technology’s ability to answer and summarize calls for salespeople.