#299 – April 02, 2026
how powerful technologies can bring both great benefits and serious risks
Which future?
32 minutes by Michael Nielsen
Michael explores how powerful technologies, especially artificial intelligence, can bring both great benefits and serious risks. Using historical examples, he shows how unintended consequences can cause harm. He argues that simple safety measures are not enough, and society must build better systems and institutions to manage these risks. The goal is to guide technological progress toward a safe and positive future.
Your AI shouldn't grade its own homework
sponsored by CodeRabbit
Claude Code and Codex write beautiful code, but they shouldn't review it. Asking an AI to review its own work is like a student grading their own exam—they always pass. CodeRabbit CLI acts as an external reviewer with different architecture, 40+ static analyzers, and zero emotional attachment. The agent writes, CodeRabbit reviews, and the agent fixes. You only show up for the final approval. The AI still does the work; it just doesn't decide if it's good anymore. Free tier available. Try CodeRabbit's CLI.
What about juniors?
6 minutes by Marc Brooker
Junior software engineers have an advantage in a changing field because they are ready to learn and adapt. As automation reduces routine coding work, Marc suggests juniors need to engage earlier with real engineering problems—understanding users, constraints, and business needs. Their role shifts from mainly learning coding skills to owning projects, thinking creatively, and applying both practical knowledge and core computer science to build effective solutions.
We have learned nothing
23 minutes by Jerry Neumann
Popular startup advice has failed. Despite millions of books sold and widespread adoption of frameworks like Lean Startup and customer development, startup survival rates have remained unchanged for 30 years. Jerry argues that the core problem is simple: when every founder uses the same methods, they build the same companies and compete head to head. Like evolution's Red Queen, where species must keep changing just to survive, startups must differentiate to win, so any widely adopted playbook quickly becomes useless.
Why alignment beats autonomy
12 minutes by Maarten Dalmijn
Autonomy is not about working independently; it means choosing to act with intent. In this post, Maarten argues that teams can be highly autonomous and still depend on each other. Alignment must come first, because without it, teams retreat into silos and autonomy disappears for everyone. Clear goals and shared constraints are what make real autonomy possible when teams need each other to succeed.
The alignment tax: What a real C-level relationship looks like
10 minutes by Stephanie Leue
When a CPO and CTO avoid honest conflict, the whole team slows down. Mixed signals spread, people protect themselves, and progress stalls without anyone knowing why. Real alignment means having uncomfortable conversations directly, not staying polite while problems grow. The relationship between top leaders sets the pace for everyone below them.
And the most popular article from the last issue was: