How I Stay Current on AI
My rule for all of it: do not over-collect and under-synthesize.
Start where the future is still unpolished: Research
If you want to know where AI will be in two years, do not start with product launches. Start with research labs.
I consistently read work from Google Research, DeepMind, and Berkeley AI Research (BAIR).
This is where core ideas surface first: reasoning over long horizons, agentic planning, multimodal systems, inference efficiency, and evaluation gaps. These blogs explain tradeoffs. They show failure modes. They do not oversell readiness.
Then watch how ideas become reality
Research tells you what might work. Translation tells you what survives contact with real customers.
I selectively read TechCrunch, TopBots, ScienceDaily, and Towards Data Science—not for announcements, but for patterns.
Across sectors, the same themes repeat. AI pilots scale slower than expected. Data quality, not models, becomes the bottleneck. Costs move from training to inference. Trust becomes a limiter in customer-facing experiences.
Stay close to builders. They surface friction first.
Some of the most valuable AI insight comes from disagreement, not consensus.
I read Hacker News daily. Not because it is polished. Because it is honest. Engineers debate cost curves. Founders argue about reliability. Operators share what broke in production.
Friction is predictive. When many practitioners complain about the same failure modes (hallucinations in decision support, brittle agents, runaway inference costs), the problem usually becomes a board-level issue twelve to eighteen months later.
Learn from operators who ship, not just theorize
AI changes organizations only when it changes how work flows.
For that lens, I spend time with Lenny’s Newsletter, First Round Review, Andreessen Horowitz (a16z) News, and SVPG.
These sources focus on execution realities: where human judgment must remain in the loop, where automation increases speed but erodes accountability, and where AI shifts incentives inside teams.
Slow down for the second-order effects
I rely on MIT Technology Review for long-term thinking on governance, power, and unintended consequences.
