OpenAI’s Spring Update Event
OpenAI held its much-anticipated Spring Update event on Monday, where it made several significant announcements regarding its ChatGPT and GPT-4 models. The event was streamed online on YouTube and was held in front of a small live audience. Here are the key highlights from the event:
New Features and Announcements
- GPT-4o Model: OpenAI announced a new flagship artificial intelligence (AI) model called GPT-4o, where the “o” stands for “omni,” referring to the model’s ability to handle text, speech, and video. The model is set to roll out iteratively across the company’s developer and consumer-facing products over the coming weeks. GPT-4o provides GPT-4-level intelligence while improving on GPT-4’s capabilities across multiple modalities, including voice, text, and vision.
- ChatGPT Desktop App: OpenAI’s Chief Technology Officer, Mira Murati, launched the new ChatGPT desktop app during the event. The app comes with computer vision and can look at the user’s screen, allowing the AI to analyze and assist with whatever is shown. Users will have the option to turn this feature on and off.
- Interface Refresh: The web version of ChatGPT is getting a minor interface refresh, featuring a minimalist appearance, suggestion cards, smaller icons, and a hidden side panel, making a larger portion of the screen available for conversations. Additionally, ChatGPT can now browse the web to provide real-time search results.
Accessibility and Availability
- Free Access to GPT-4 Features: OpenAI announced that all GPT-4 features, previously available only to premium subscribers, will now be available to everyone for free. This move aligns with the company’s mission to make advanced AI tools broadly accessible.
- Voice and Vision Capabilities: GPT-4o will be able to speak in emotive voices, react to human emotions, and reason across audio, vision, and text in real time. This includes the ability to hold realistic voice conversations and interact across text and images.