Senior Research Scientist @Google DeepMind

Type 1 / 2 Hard Problems

An underdeveloped meta-skill in AI research is distinguishing between "Type 1 Hard" and "Type 2 Hard" problems. Both consume immense amounts of time and compute, but only one leads to meaningful progress.

Type 1 Hard problems are hard due to incidental complexity. This is the slog of wrangling a messy dataset, fighting a poorly designed RL environment, or eking out a 0.1% gain on a saturated benchmark. The difficulty doesn't teach you anything fundamental; it's a tar pit. You spend months debugging a complex training pipeline only to realize the core idea was flawed from the start.

Type 2 Hard problems are hard due to inherent, foundational challenges. This is the difficulty of building the first version of a system that does something novel, like getting an agent to use tools reliably or figuring out the right reward spec to prevent a new form of reward hacking. When you solve a piece of a Type 2 problem, you don't just fix a bug; you unlock a new capability or a deep insight that compounds over time.

Developing the "taste" to tell these two apart is what separates consistently impactful researchers from those who are merely busy. It's an intuition built from watching hundreds of wandb charts die, internalizing The Bitter Lesson, and recognizing the patterns that signal a dead end versus a promising frontier. In a world of near-infinite research directions, the most valuable resource isn't just compute; it's the wisdom to point that compute at the right mountain.