AI isn't for cutting costs, it's for multiplying impact | Super.com's Matt Culver
Plus, trolling-as-a-marketing playbook, model poisoning, and AI intelligence drift
Is your company using AI to trim your budget, or to multiply your team’s impact? We’re joined by Matt Culver, a senior engineering leader at Super.com, to discuss why the common view of AI as a tool for cost-cutting is a misguided “accounting mindset” that ultimately destroys trust. He argues that leaders should instead see efficiency gains from AI as a powerful opportunity to reinvest in their teams. This conversation reframes the AI debate by urging leaders to look beyond the coding loop to improve the entire product development lifecycle—from ideation to delivery.
Matt explains that the key to successful AI adoption is aligning new initiatives with developers’ core incentives: removing friction and enabling the creative flow state that makes the job enjoyable. He offers a human-centric approach for channeling AI’s power toward upstream problems in product planning and market research, rather than just generating more code. Learn how to use AI not as an end in itself, but as a means to empower your developers, generate more value, and build a high-trust engineering culture.
“I think when you’re getting these efficiency gains, you should always think about them as reinvestment opportunities where you can improve the quality of the product, you can explore more new possibilities for your product.”
The Download
Your weekly dose of code, chaos, and clarity. 🦑
1. AI models poisoned by just 250 malicious docs? 🤯
Anthropic’s latest report reveals that as few as 250 malicious documents can create a “backdoor” in large language models, regardless of their size or training data volume. This challenges the assumption that attackers need to control a percentage of training data, highlighting a fundamental shift in software security. With information itself becoming an attack vector, the importance of AI observability and data provenance is more critical than ever.
Read: A small number of samples can poison LLMs of any size
2. Are LLMs getting dumber, or are we just expecting too much? 🤔
Charlie Guo’s article revisits the debate on whether large language models are losing their edge. While some theories suggest cost-cutting or stale training data as culprits, others point to user perception and shifting expectations. Despite mixed opinions on new models like GPT-5, many users report improved workflows. The real challenge lies in adapting to these changes and leveraging AI effectively, especially when data sources are murky.
Read: Revisiting “Intelligence Drift” - by Charlie Guo
3. Silicon Valley’s AI marketing: genius or just trolling? 🧌
The linked piece points out that AI companies are blurring the line between marketing and trolling with campaigns like “STOP HIRING HUMANS.” While these tactics generate buzz, they also reflect a disconnect from AI’s true capabilities, which are more about augmenting human work than replacing it. The backlash against such marketing highlights the need for a more nuanced understanding of AI’s role in the workplace, where human expertise remains crucial.
Read: AI profiteering is now indistinguishable from trolling
4. AI’s water consumption debate: missing the point? 💧
The idea that AI’s water usage is a serious national environmental issue is a “fake problem,” according to Masley. While data centers’ resource consumption is real, he argues the concerns are rooted in contextless large numbers and misleading media framings (like conflating construction issues with operational water use). Masley points out that AI currently uses less than 0.04% of America’s freshwater, and that AI-driven leak detection is already saving billions of gallons of water, a benefit that will likely continue to outweigh its environmental footprint.
Read: The AI water issue is fake
Tired of slow, inconsistent code reviews? (sponsored)
Meet LinearB AI, your new AI-powered workflow assistant built to supercharge your team’s review process. With LinearB AI, you’ll get automatic PR descriptions, AI-generated review suggestions, and instant insights that help your developers fix issues before a human even looks at the code. No more waiting. No more guesswork. Just faster, smarter, and higher-quality reviews powered by AI.
Read: Optimize Code Reviews with LinearB
5. Rediscovering HTML’s forgotten gem: the <output> tag 💡
Den Odell shines a light on the overlooked <output> tag in HTML, which offers a semantic solution for dynamic content updates announced to screen readers. Unlike the commonly used <div>s and ARIA regions, <output> provides a native, accessible way to communicate changes, enhancing web accessibility. This long-ignored tag could play a key role in making AI-driven interfaces more inclusive.
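As a hedged sketch of the pattern Odell describes (the form fields here are hypothetical, but the `<output>` element and its `for` attribute are standard HTML): because `<output>` maps to the `status` role with implicit polite live-region semantics, its updates are announced to screen readers without any ARIA attributes.

```html
<!-- Minimal illustration: updating an <output> is announced to
     assistive technology automatically, unlike a plain <div>. -->
<form oninput="total.value = Number(price.value) * Number(qty.value)">
  <input id="price" name="price" type="number" value="10"> ×
  <input id="qty" name="qty" type="number" value="2"> =
  <output id="total" name="total" for="price qty">20</output>
</form>
```

The `for` attribute links the result back to the inputs that produced it, giving you the semantics that would otherwise require a `<div>` plus `aria-live="polite"` plumbing.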