How Capital One supports 14,000 technologists with one pipeline | Ameesh Paleja
The "Ironman suit" for engineers, the $500B AI line item, and murdering your deploy freeze.
Capital One operates less like a traditional bank and more like a “technology company that happens to do banking.” Ameesh Paleja, EVP of Enterprise Platforms, joins the show to explain how this philosophy empowers their 14,000 technologists to innovate at the speed of a startup despite operating in a highly regulated industry.
Ameesh breaks down how standardization serves as the unsung hero of enterprise scale, revealing how consolidating build processes removes undifferentiated heavy lifting so engineers can focus on creative problem-solving. He also details how his team automates SRE tasks to prevent burnout and outlines a unique funnel strategy for AI adoption that balances cutting-edge experimentation with strict security and governance.
1. 2026: The year AI gets procured
Big Tech capital expenditures on AI infrastructure are projected to exceed $500 billion in 2026. But the real story isn’t just the spending; it’s the shift in mindset: we are moving from cool demos to budget line items. Interestingly, predictions suggest context windows will plateau around one million tokens. It seems we have more to gain from applying models to new workflows than from endlessly expanding their memory.
Read: 17 predictions for AI in 2026
2. Your LLM is gaslighting you about the news
If you ask ChatGPT about a breaking event, it might confidently tell you it didn’t happen. Large Language Models are frozen in time, forever playing catch-up with the real world. Ben Lloyd Pearson experienced this firsthand with his fantasy football team: the LLMs were incredible at drafting, analyzing historical data with ease, but terrible at weekly roster changes that required reacting to real-time injuries. The issue is twofold: websites aren’t optimized for LLM consumption, and models lack the ability to form opinions on developing stories. Trust, but verify, especially with breaking news.
Read: Why ChatGPT can’t be trusted with breaking news
3. Why Big Tech turns everything into a knife fight
A new article argues that internal competition inevitably intensifies at scale, creating a culture of persistent infighting. This reflects the jarring transition many engineers face when moving from startups to giants. In a startup, the politics are externally focused (beating the competition). In Big Tech, the politics are internal. It is easy for newcomers to cross “invisible lines” established by tenured teams. Understanding these dynamics is crucial for survival: sometimes the hardest part of the job isn’t the code, but navigating the siloed friction of a massive org.
Read: Why Big Tech Turns Everything Into a Knife Fight
4. What does elite engineering look like in 2026?
LinearB analyzed 8.1 million pull requests across 4,800 engineering teams to find the answer. In this on-demand workshop, leaders from CircleCI, Apollo GraphQL, and LinearB break down the results.
You’ll get a clear view of the market with 20 core SDLC metrics. Plus, three brand-new AI metrics designed to show you exactly how AI tools are impacting delivery velocity and code quality. No fluff. Just data. Watch the workshop to see how your team stacks up.
5. Sometimes that puppy needs murdering
Charity Majors is back with a hot take on everyone’s least favorite activity: Friday deploys. Her argument is simple: pausing deploys without pausing merges leads to an accumulation of untested changes, which actually increases risk. Think of Friday deploys as fire drills for your codebase: you build confidence by practicing when it is slightly inconvenient, and real resilience means handling a curveball without panicking. If you are returning from the holidays to a mountain of undeployed PRs, this is your sign to stop letting fear dictate your release schedule.
Read: On Friday Deploys: Sometimes that Puppy Needs Murdering