You can't have AI without DevOps | GitHub’s Martin Woodward
Why strong habits scale faster than strong models
The single biggest predictor of success with AI isn't the model you choose; it's the DevOps culture you've already built.
Martin Woodward, VP of Developer Relations at GitHub - and the sixth person to ever use Copilot - joins us to explain why this surprising insight is key to the new era of autonomous coding agents. He traces the evolution of GitHub Copilot from a simple autocomplete to a powerful agent that opens its own pull requests, arguing that AI's true power is as a massive accelerant for the iterative loops high-performing teams have already perfected.
Martin explains that teams with strong guardrails for shipping quickly and safely are best equipped to leverage this AI revolution because they can trust the accelerated output. He also reveals how top teams use the key technique of custom instructions to guide Copilot toward writing the code of the future, not just mimicking the code of the past. This conversation uncovers how new agentic workflows are 'tricking' developers into improving their communication and documentation skills, providing a crucial look at the cultural foundations required to thrive in the AI-accelerated enterprise.
"If you wrote better comments, Copilot did a better job for you of coding, and so it kind of tricked you into writing better comments and better interface names and better documentation... Communication is key and it just gets more important with the advance of AI." - Martin Woodward
The Download
Refactoring hype into habits, one week at a time. 🪞
1. GPT-5 ships, and the real test is your workflow 🛝
OpenAI unveiled GPT-5 with faster responses, stronger coding and math chops, and a “safe completions” system meant to handle sensitive prompts more carefully. The model’s shift in tone has proven polarizing, with many taking to social media to vent about the new model’s accuracy and perceived personality. The rollout is a good reminder that AI lands differently for everyone, so a change that reads as an upgrade to one team can feel like a regression to another.
Read: OpenAI launches GPT-5 model
2. GitHub’s CEO says goodbye to build as a founder again 🛠
Thomas Dohmke announced he’s leaving GitHub to return to startup life, after steering the platform through the Copilot era and deeper into Microsoft’s orbit. He’ll stay through the end of 2025 to guide the handoff as GitHub aligns under Microsoft’s CoreAI organization, a continuity signal for customers even as the builder itch calls. For leaders, read this as a sign of the times: seasoned operators are leaving big chairs to build, and the dev tools space is primed for another wave of founder-led experimentation.
Read: Auf Wiedersehen, GitHub ♥️
3. He tried every todo app and ended up with a .txt file 🗒️
After cycling through Notion, Todoist, Things, OmniFocus, Asana, Trello, and even a homebrew app, Alireza Bashiri landed on a simple daily text file for tasks, notes, and his schedule. It works because it’s the fastest capture medium with no vendor lock-in: it’s versionable, portable across devices and editors, and ends up being a future-proof, AI-friendly substrate for your thoughts.
Read: I Tried Every Todo App and Ended Up With a .txt File
4. Claude’s “you’re absolutely right” bug says a lot about trust 🐜
A widely shared GitHub issue calls out Claude for reflexively replying “You’re absolutely right!” in contexts where no factual claim was made, turning a bug report into an industry meme. The thread also surfaces plausible root causes, like over-tuned system prompts and RL policies that reward deference over precision.
Read: Claude says “You're absolutely right!” about everything
Run your own AI code review bake-off 🧁
LinearB’s 2025 AI Code Review Evaluation Guide is the first controlled framework to compare Copilot, CodeRabbit, and LinearB with head-to-head data on clarity, composability, and developer experience, plus a step-by-step playbook to run the experiment yourself. Beyond test scores, the guide equips you with strategies for finding the right AI code review tool for your own team.
5. MCP forgets 40 years of RPC lessons at enterprise scale 🔒
Julien Simon’s well-founded critique of the Model Context Protocol argues that shipping AI without time-tested guardrails invites the same old fires. If your team is adopting MCP in production, treat it like distributed-systems plumbing that must earn trust with battle-proven basics: strong authorization and scoped tokens, clear schemas, versioning, and compatibility contracts.
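Those basics translate directly into code. Below is a minimal sketch in Python of the pre-flight checks an MCP-style tool server might run before dispatching a call; the scope table, argument contracts, version set, and helper names are all hypothetical, illustrating the idea rather than the MCP SDK's actual API:

```python
"""Illustrative pre-flight checks for an MCP-style tool call.

Hypothetical sketch: it shows scoped authorization, schema validation, and
version negotiation as plain Python, not the actual MCP SDK API.
"""
from dataclasses import dataclass

# Hypothetical mapping from tool name to the OAuth-style scope a caller needs.
TOOL_SCOPES = {
    "read_tickets": "tickets:read",
    "close_ticket": "tickets:write",
}

# Hypothetical required-argument contracts; a real server would use JSON Schema.
REQUIRED_ARGS = {
    "read_tickets": {"project"},
    "close_ticket": {"project", "ticket_id"},
}

# Illustrative protocol revisions this server has actually been tested against.
SUPPORTED_PROTOCOL_VERSIONS = {"2024-11-05", "2025-03-26"}


@dataclass
class ToolRequest:
    tool: str
    arguments: dict
    caller_scopes: frozenset
    protocol_version: str


class ToolCallRejected(Exception):
    """Raised when a request fails versioning, authorization, or schema checks."""


def authorize_tool_call(req: ToolRequest) -> None:
    """Run the boring-but-essential checks before dispatching a tool call."""
    # 1. Compatibility contract: refuse protocol revisions we never tested against.
    if req.protocol_version not in SUPPORTED_PROTOCOL_VERSIONS:
        raise ToolCallRejected(f"unsupported protocol revision {req.protocol_version}")

    # 2. Scoped authorization: the caller's token must carry the tool's scope.
    required_scope = TOOL_SCOPES.get(req.tool)
    if required_scope is None:
        raise ToolCallRejected(f"unknown tool {req.tool!r}")
    if required_scope not in req.caller_scopes:
        raise ToolCallRejected(f"missing scope {required_scope!r} for tool {req.tool!r}")

    # 3. Clear schemas: validate arguments against an explicit contract.
    missing = REQUIRED_ARGS[req.tool] - req.arguments.keys()
    if missing:
        raise ToolCallRejected(f"missing arguments: {sorted(missing)}")


if __name__ == "__main__":
    request = ToolRequest(
        tool="close_ticket",
        arguments={"project": "payments", "ticket_id": "PAY-142"},
        caller_scopes=frozenset({"tickets:read"}),  # note: no write scope granted
        protocol_version="2025-03-26",
    )
    try:
        authorize_tool_call(request)
    except ToolCallRejected as err:
        print(f"rejected: {err}")
```

None of this is new, and that is Simon's point: it is the same authorization, schema, and versioning discipline RPC systems have needed for decades, now applied to agent tooling.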
Martin's point about communication being key stuck with us. Do you think agentic AI will make documentation habits non-negotiable for every team?