OpenAI is reportedly gearing up for the release of its highly anticipated AI model, Orion, potentially set to launch in December. Unlike previous rollouts, such as GPT-4, Orion is expected to be restricted to OpenAI’s close business partners and won’t initially be available to general ChatGPT users. According to a recent report from The Verge, citing insiders, Microsoft engineers are already preparing to host Orion on the Azure cloud platform, targeting deployment by the end of November. The new model is rumored to be the next evolution of OpenAI’s language models, possibly even dubbed GPT-5 internally, although the official name remains unconfirmed.
Hints about Orion’s capabilities and scale have been circulating for some time. In a post on X (formerly Twitter), an OpenAI executive mentioned that Orion could deliver “100x more computation volume” than GPT-4, suggesting a massive leap in processing power. The post also hinted that, while Orion uses a compact “Strawberry” variant during training, it still leverages considerable computational resources. This implies that Orion may bring sophisticated advancements, enabling it to tackle complex problem-solving and reasoning tasks more effectively than its predecessors.
Last month, OpenAI’s CEO, Sam Altman, added to the intrigue with a cryptic tweet: “excited for the winter constellations to rise soon; they are so great.” When asked, ChatGPT interpreted the reference as a nod to the Orion constellation, possibly hinting at the model’s imminent arrival. The timing of this release also coincides with OpenAI’s recent $6.6 billion funding round, which has necessitated a shift toward a for-profit structure.
The development of Orion follows OpenAI’s recent internal restructuring. The company dissolved its ‘AGI Readiness’ team, which was responsible for evaluating OpenAI’s capability to handle models with potential artificial general intelligence (AGI) capabilities, or those approaching human-level understanding. Prior to this, OpenAI had already disbanded its Superalignment team, which had been tasked with ensuring future models remain aligned with human values.