OpenAI kicked off its inaugural “12 Days of OpenAI” media blitz on December 5, each day unveiling new features, models, subscription tiers, and capabilities for its growing ChatGPT product ecosystem during a series of live-stream events.
Here’s a quick rundown of everything the company announced.
OpenAI kicked off the festivities with a couple of major announcements. First, the company revealed the full version of its new o1 family of reasoning models and announced that they would be immediately available, albeit with usage limits, to its $20/month Plus tier subscribers. To get unrestricted use of the new model (as well as every other model OpenAI offers, plus unlimited access to Advanced Voice Mode), users will need to spring for OpenAI’s newest, and priciest, subscription package: the $200/month Pro tier.
On the event’s second day, the OpenAI development team announced that it is expanding its Reinforcement Fine-Tuning Research program, which allows developers to train the company’s models as subject matter experts that “excel at specific sets of complex, domain-specific tasks,” according to the program’s website. Though it is geared more toward institutes, universities, and enterprises than individual users, the company plans to make the program’s API available to the public early next year.
On the third day of OpenAI, Sam Altman gave to me: Sora video generation. Yeah, OK, so the cadence for that doesn’t quite work but hey, neither does Sora. OpenAI’s long-awaited video generation model, which has been heavily hyped since its February preview, made its official debut on December 9 to middling reviews. Turns out that two years into the AI boom, being the leading company in the space and rolling out only 20-second clips at 1080p doesn’t really move the needle, especially when many of its competitors already offer similar performance without requiring a $20- or $200-per-month subscription.
OpenAI followed up its Sora revelations with a set of improvements to its recently released Canvas feature, the company’s answer to Anthropic’s Artifacts. During its Day 4 live stream, the OpenAI development team revealed that Canvas is now integrated directly into the GPT-4o model, making it natively available to users at all price tiers, including free. You can now run Python code within the Canvas space, letting the chatbot analyze your code and suggest improvements, and you can also use the feature to build custom GPTs.
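To give a feel for how in-canvas Python execution might be used, here is a minimal, purely illustrative snippet of the sort you could paste into a canvas, run in place, and then ask ChatGPT to critique; the function and sample data are hypothetical, not anything OpenAI shipped.

```python
# Illustrative snippet: the kind of code a user might run inside a ChatGPT canvas
# and then ask the model to review for bugs or improvements.
# The function and sample readings are hypothetical examples.

def moving_average(values, window=3):
    """Return the simple moving average of `values` over `window`-sized slices."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    averages = []
    for i in range(len(values) - window + 1):
        averages.append(sum(values[i:i + window]) / window)
    return averages

if __name__ == "__main__":
    readings = [12.0, 13.5, 11.8, 14.2, 15.0, 13.9]
    print(moving_average(readings))  # [12.43..., 13.16..., 13.66..., 14.36...]
```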
On day five, OpenAI announced that it is working with Apple to integrate ChatGPT into Apple Intelligence, specifically Siri, allowing users to invoke the chatbot directly through iOS. Apple had announced that this would be a thing back when it first unveiled Apple Intelligence but, with the release of iOS 18.2, that functionality is now a reality. If only Apple’s users actually wanted to use Apple’s AI.
2024 was the year that Advanced Voice Mode got its eyes. OpenAI announced on Day 6 of its live-stream event that its conversational chatbot model can now view the world around it through a mobile device’s video camera or via screen sharing. This will enable users to ask the AI questions about their surroundings without having to describe the scene or upload a photo of what they’re looking at. The company also released a seasonal voice for AVM which mimics Jolly Old St. Nick, just in case you don’t have time to drive your kids to the mall and meet the real one in person.
OpenAI closed out the first week of announcements with one that is sure to bring a smile to the face of every boy and girl: folders! Specifically, the company revealed its new smart folder system, dubbed “Projects,” which allows users to better organize their chat histories and uploaded documents by subject.
OpenAI’s ChatGPT Search function, which debuted in October, is now available to all logged-in users, regardless of their subscription tier. The feature works by searching the internet for information about the user’s query, scraping the info it finds from relevant websites, and then synthesizing that data into a conversational answer. It essentially eliminates the need to click through a search results page and is functionally similar to what Perplexity AI offers, allowing ChatGPT to compete with the increasingly popular app. Be warned, however: a recent study has shown the feature to be “confidently wrong” in many of its answers.
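For readers curious what that search-scrape-synthesize flow looks like in the abstract, here is a rough conceptual sketch; it is not OpenAI's implementation, and the search results and summarizer below are stubbed placeholders used only to illustrate the pattern.

```python
# Conceptual sketch of a retrieve-then-synthesize pipeline (not OpenAI's code).
# Real systems would call a web search API and a language model; here both steps
# are stubbed so the flow is runnable end to end.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str

def search(query: str) -> list[Page]:
    # Stand-in for a real web search; returns canned snippets for illustration.
    return [
        Page("https://example.com/a", "The feature launched in October for paid users."),
        Page("https://example.com/b", "It is now available to all logged-in users."),
    ]

def synthesize(query: str, pages: list[Page]) -> str:
    # Stand-in for the model call that would weave the scraped text into one answer.
    cited = " ".join(f"{p.text} [{p.url}]" for p in pages)
    return f"Q: {query}\nA (from {len(pages)} sources): {cited}"

if __name__ == "__main__":
    print(synthesize("When did ChatGPT Search launch?", search("ChatGPT Search launch date")))
```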
Like being gifted a sweater from not one but two aunts, OpenAI revealed on day nine that it is allowing select developers to access the full version of its o1 reasoning model through the API. The company is also rolling out updates to its Realtime API, a new model-customization technique called Preference Fine-Tuning, and new SDKs for Go and Java.
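For developers with access, calling the full o1 model looks much like any other chat completion through OpenAI's Python SDK; the snippet below is a minimal sketch, and the exact model identifier and prompt are illustrative assumptions rather than details from the announcement.

```python
# Minimal sketch of calling a reasoning model via the OpenAI Python SDK.
# Assumes your account has API access; the model name and prompt are illustrative.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1",  # assumed identifier for the full o1 reasoning model
    messages=[
        {"role": "user", "content": "Walk through a proof that the square root of 2 is irrational."},
    ],
)

print(response.choices[0].message.content)
```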
In an effort to capture that final market segment that it couldn’t already reach — specifically, people without internet access — OpenAI has released the 1-800-ChatGPT (1-800-242-8478) chat line. Dial in from any landline or mobile number within the U.S. to speak with the AI’s Advanced Voice Mode for up to 15 minutes, free of charge.
Last month, OpenAI granted the Mac-based desktop version of ChatGPT the ability to interface directly with a number of popular coding applications, allowing its AI to pull snippets directly from them rather than requiring users to copy and paste code into the chatbot’s prompt window. On Thursday, the company announced that it is drastically expanding the number of coding apps and IDEs that ChatGPT can collaborate with. And it’s not just coding apps; ChatGPT now also works with conventional text programs like Apple Notes, Notion, and Quip. You can even launch Advanced Voice Mode in a separate window as you work, asking questions and getting suggestions from the AI about your current project.
For the 12th day of OpenAI’s live-stream event, CEO Sam Altman made a final appearance to discuss what the company has in store for the new year — specifically, its next-generation reasoning models, o3 and o3-mini. The naming scheme is a bit odd (and reportedly chosen to sidestep trademark conflicts with the U.K. telecom O2), but the upcoming models reportedly offer superior performance on some of the industry’s most challenging math, science, and coding benchmarks — even compared to o1, the full version of which was formally released less than a fortnight ago. The company is currently offering o3-mini as a preview to researchers for safety testing and red-teaming trials, though there’s no word yet on when everyday users will be able to try the models for themselves.
Curiously, the live streams did not feature anything solid on the next generation of GPT. Don’t worry — we’re keeping an eye on everything you need to know about GPT-5.