How to Build AI Workflows You Can Trust | Infactory’s Brooke Hartley Moy
Would you give AI a hammer if it’s hallucinating nails?
Is your company rushing headlong into AI, only to find it's a 'square peg in a round hole'?
This week, Andrew tackles the critical issue of building trust in AI systems with Brooke Hartley Moy, CEO and co-founder of Infactory.
Brooke, with her experience at companies like Google and Samsung, cuts through the hype and reveals the biggest misconceptions businesses have about AI. We dive into the 'black box' problem, the importance of high-quality data, and why not all AI is created equal.
From seating Matthew McConaughey in the rain to high-stakes medical decisions, we explore the crucial role of domain expertise and the need to move beyond LLM-centric thinking. If you're an engineering leader grappling with AI implementation, this episode is your essential guide to building trustworthy and impactful AI systems.
"Trust is the only thing that is really going to matter at a foundational level." — Brooke Hartley Moy, CEO of Infactory
The Download
The Download is your weekly guilt-free slice of engineering pie. 🥧
1. The AI copyright debate just got louder 📢
OpenAI is pushing the US government to declare that training AI on copyrighted content is fair use, arguing that anything less would cede the AI race to China. While devs have reaped real benefits from AI with minimal disruption, creatives are caught in a lopsided equation: all the upheaval, little of the upside. That imbalance may get worse before it gets better, especially with no international consensus on copyright. Until then, the fight over fair use is both legal and cultural.
Read: OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
2. AI still needs a chaperone on commit night 🪩
Past Dev Interrupted guest Birgitta Böckeler just dropped a practical memo-turned-field-guide for AI-assisted development, and it’s a must-read for any team experimenting with copilots. From broken builds to bloated CSS, she shows how AI can quietly derail time to commit, team flow, and maintainability if left unchecked. The good news? With the right discipline and guardrails, you can hit 80% AI-assisted coding. Just don’t expect it to drive itself home.
Read: The role of developer skills in agentic coding from Thoughtworks
3. What’s your AI collaboration style?
Some devs use AI to crank out code. Others use it to steer sprints. Most of us? Somewhere in between.
If you’re experimenting with copilots or trying to make AI help your team ship faster, this 30-second quiz will show you where you stand on the AI Collaboration Matrix. You’ll get a read on your current style—Newbie, Vibe Coder, AI Orchestrator, Pragmatist, or Fully AI-Driven—plus:
Your AI strengths
What you might be missing
Where to focus next
Take the quiz, get your map, and see how your style stacks up in the era of agentic AI.
Take the quiz: Discover Your AI Collaboration Style
Stop debating AI and start shipping faster 🏗️ (sponsored)
AI can unblock your developers, but only if you can focus it. From auto-generating PR descriptions to summarizing iterations for better retros, LinearB’s new AI features are built to eliminate the grunt work and boost your team’s flow.
We’ve rolled them out internally and seen faster delivery results, sharper habits, and less team-wide burnout.
Join our upcoming webinar to see how it works in real life.
4. Nobody asked for another chatbot 🙅‍♀️
A new survey reveals a growing AI culture gap:
“…while 75% of execs think their rollout’s going great, less than half of employees agree…”
and many are quietly hiding their AI use out of fear. This disconnect is a recipe for sabotage, burnout, and bad tooling. The real issue? Leaders keep pushing chatbots instead of purpose-built tools that actually support how teams work. If you're not pairing rollout with strategy, training, and guardrails, then you're not shipping with purpose.