In December 2015, OpenAI was founded. Just a month later, in January 2016, I launched ThinkingFolio. Despite our similar beginnings, OpenAI skyrocketed while ThinkingFolio faltered. Reflecting on the reasons behind ThinkingFolio’s failure and comparing it with other AI pioneers like Numenta, DeepMind, and Ben Goertzel’s ventures, I’ve gained valuable insights into what set OpenAI apart. This article offers my perspective as a former Artificial General Intelligence (AGI) entrepreneur, shedding light on OpenAI’s success and the challenges it faces today.

The Turning Point

OpenAI’s defining moment arrived in February 2019 with the release of their paper, Language Models are Unsupervised Multitask Learners—better known as the GPT-2 paper. This work revealed the potential of large language models (LLMs) to generate coherent, task-relevant responses across a wide range of subjects, marking a leap toward general-purpose AI.

Just a month later, Sam Altman left his role as president of Y Combinator to become OpenAI’s full-time CEO, coinciding with the company’s transition to a for-profit subsidiary. From that point forward, OpenAI embarked on a meteoric rise, launching GPT-3, GPT-3.5, ChatGPT, and GPT-4—bringing the idea of AGI closer to reality.

In March 2023, Microsoft published “Sparks of Artificial General Intelligence: Early Experiments with GPT-4,” signaling that AGI was no longer a distant dream. By this point, OpenAI had become a dominant force in AI, outpacing many of its rivals.

The Strategy Behind OpenAI’s Success

OpenAI’s early success can be attributed to two key strategies:

  1. Focusing on high-impact, low-complexity tasks
  2. Iterating rapidly

By prioritizing tasks that were both scalable and high-impact, OpenAI made significant strides through heavy investment in scaling up LLMs. Scaling, while resource-intensive, is a low-complexity tactic: it requires far fewer research breakthroughs than deeper fundamental innovations, so OpenAI could achieve measurable performance gains quickly. Their rapid iteration process, meanwhile, enabled them to release products quickly, gather feedback, and continuously refine their models, a strategy that contributed to both their technological success and their growing public visibility.

These strategies are strongly aligned with Y Combinator’s principles of prioritization and speed. As OpenAI’s CEO, Sam Altman brought these same principles to the company’s operational playbook, enabling its rapid rise. However, this approach also planted the seeds for OpenAI’s current struggles.

The Roots of OpenAI’s Current Challenges

In its early days, OpenAI’s priority was to demonstrate progress in both AI technology and practical applications, especially at a time when skepticism around AGI was high. This progress kept its engineers motivated and maintained belief in the AGI vision.

Today, with the widespread success of ChatGPT and increasing optimism about AGI (Nvidia’s CEO Jensen Huang has even predicted AGI within five years), OpenAI’s challenges have shifted. The very strategies that fueled its early success are now sources of tension, both within and outside the company.

OpenAI’s rapid-iteration model, reminiscent of Facebook’s “move fast and break things” approach, worked when AI models were simpler and experimental. But now, with ChatGPT reaching millions of users, concerns around safety and alignment with human values have taken center stage. “Move fast and break things” is no longer an option; the stakes are much higher.

Moreover, OpenAI’s internal researchers, once energized by the rapid progress in LLMs, are growing restless. Scaling up models and feeding them more data is no longer enough; they want to push the boundaries with deeper, more fundamental advances in AI technology. But with heightened public scrutiny and regulatory concerns, OpenAI must tread carefully: safety, privacy, and alignment with human values must become top priorities if the company is to retain its leadership in AGI development. This shift has sparked frustration and discontent within OpenAI, leading to the departure of key figures such as Chief Scientist Ilya Sutskever, alignment lead Jan Leike, and co-founder John Schulman. More recently, Chief Technology Officer Mira Murati, Chief Research Officer Bob McGrew, and Vice President of Research Barret Zoph have also resigned.

Looking Ahead: The Path Forward for OpenAI

OpenAI has firmly established itself in the commercial AI market. New talent will continue to flow into the company, and it will remain a competitive player in general-purpose AI, particularly with products like ChatGPT. However, the road to true AGI, the kind of AI capable of human-level cognition, demands breakthroughs in high-complexity research areas that go beyond simply scaling up existing models. While OpenAI has made real technical progress, achieving true AGI will require deeper innovations and fundamental advances in AI research.

If OpenAI cannot balance the needs of its commercial products with its AGI ambitions, it risks falling behind in the race for true AGI. The internal conflict between pushing for deeper innovation and managing external responsibilities such as safety and alignment could further strain its ability to innovate.

There are still opportunities for future innovators to make meaningful contributions, and the field is far from being won.