AI Must-Reads: OpenAI + Google DeepMind
A “no fluff” list of official posts worth reading end-to-end.
OpenAI (must-read)
- Evaluating chain-of-thought monitorability (Dec 18, 2025)
  - References: Monitoring Monitorability (PDF)
- Introducing gpt-oss (Aug 5, 2025)
  - References: gpt-oss Model Card, Model Card PDF, Open models hub, gpt-oss cookbook
- Measuring the performance of our models on real-world tasks (GDPval) (Sep 25, 2025)
  - References: GDPval paper (PDF), OpenAI Evals
- Learning to reason with LLMs (Sep 12, 2024)
  - References: OpenAI o1, OpenAI o1 System Card
- Hello GPT-4o (May 13, 2024)
  - References: GPT-4o System Card
- GPT-4o System Card (Aug 8, 2024)
  - References: PDF version
- Introducing Structured Outputs in the API (Aug 6, 2024)
  - References: Structured Outputs intro (Cookbook)
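If you want a feel for what Structured Outputs looks like before reading the post: the feature lets you attach a JSON Schema to a request so the model's reply is guaranteed to match it. Below is a minimal sketch of the request payload shape, assuming the Chat Completions `response_format` parameter as described at launch; the schema contents and the `release_info` name are made-up examples, and no API call is made here.

```python
import json

# A strict JSON Schema for the object we want the model to return.
# Structured Outputs' strict mode requires "additionalProperties": False
# and every property listed in "required".
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "year": {"type": "integer"},
    },
    "required": ["name", "year"],
    "additionalProperties": False,
}

# The request body you would send to the Chat Completions endpoint.
payload = {
    "model": "gpt-4o-2024-08-06",
    "messages": [{"role": "user", "content": "When was GPT-4o announced?"}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "release_info", "strict": True, "schema": schema},
    },
}

print(json.dumps(payload, indent=2))
```

The Cookbook reference above walks through the same idea with live calls and shows how the returned message content parses directly as JSON matching the schema.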
- Introducing the Model Spec (May 8, 2024; updated Feb 12, 2025)
  - References: Model Spec (latest), Model Spec (2025-02-12 snapshot)
- Function calling and other API updates (Jun 13, 2023)
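For context on the function-calling post: the core idea is that you describe callable functions to the model as JSON Schema, and it replies with the name and arguments of the function it wants invoked. A minimal sketch of a tool definition is below, assuming the `tools` parameter shape that later generalized the original `functions` parameter; `get_weather` is a hypothetical function, not part of any API.

```python
import json

# One tool definition: the model sees the name, description, and parameter
# schema, and can respond with a call like get_weather(city="Paris").
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

print(json.dumps(tools, indent=2))
```

Your application code then executes the requested function itself and sends the result back in a follow-up message; the model never runs anything directly.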
- Sora 2 is here (Sep 30, 2025)
Google AI / Google DeepMind (must-read)
- Introducing Gemini 1.5, Google’s next-generation AI model (Feb 15, 2024)
- What is a long context window? (Feb 16, 2024)
- Gemini breaks new ground with a faster model, longer context, AI agents and more (May 14, 2024)
  - References: Google I/O 2024 announcement collection
- Gemma: Introducing new state-of-the-art open models (Feb 21, 2024)
- Gemma 2 is now available to researchers and developers (Jun 27, 2024)
- Genie 2: A large-scale foundation world model (Dec 4, 2024)
- Start building with Gemini 2.0 Flash and Flash-Lite (Feb 25, 2025)
  - References: Gemini 2.0 is now available to everyone
- Our vision for building a universal AI assistant (May 20, 2025)
  - References: Google I/O 2025 announcement collection
- Gemini Robotics brings AI into the physical world (Mar 12, 2025)
- FunSearch: Making new discoveries in mathematical sciences using Large Language Models (Dec 14, 2023)
  - References: Paper (PDF), Nature article