The biggest time sink for engineering teams: testing
A guest post from Daniel Mauno Pettersson, CTO & co-founder of QA.tech.
Engineering leaders often find that testing and QA (Quality Assurance) are the most significant time sink in software delivery. This is especially true in mid-sized to enterprise teams of 30–200 engineers (companies with 250–5,000 employees). You might develop new features quickly, but getting them reliably tested and released can take longer than expected. The culprit isn’t the engineers or QA professionals themselves; it’s the traditional QA workflow that slows everything down. In many organizations, testing is a gating factor for releases, introducing days or weeks of delay due to manual processes and bottlenecks.
QA bottlenecks in traditional workflows
In a typical enterprise setup, code enters a QA phase after developers finish a new feature or bug fix. Often, small QA teams validate changes from dozens of developers, executing lengthy test plans by hand or with brittle scripts. This process doesn’t scale. When code changes frequently, manual testing becomes a severe bottleneck – QA simply can’t keep up with the volume and velocity of development. As a result, organizations face a choice: delay releases until testing is done (hurting time-to-market) or cut corners on testing (risking quality issues). Neither option is appealing for a business that needs to move fast and stay reliable.
It’s telling that many teams feel they spend more time waiting on tests or fixing issues found late than actually building new features. Regression testing of existing functionality, in particular, becomes a time sink. Every change needs to be verified against a suite of existing features to ensure nothing else broke – a tedious task when done manually. Enterprise products also tend to have complex integration points and user flows that must be validated across different environments, browsers, or data sets. All this adds up to long QA cycles that stretch the overall development cycle time.
The problem is the system and process – not the QA professionals. QA engineers and testers are usually very capable, but when they’re forced into a mostly manual, reactive role, their throughput is limited. One tech CEO described how his team struggled with this: “We were releasing new updates almost daily, but it became hard to keep up. Our process was tedious and we needed more coverage without hiring more staff. Ultimately we wanted to rely less on human QA resources.” In other words, the traditional approach was unsustainable for the pace of development. The bottleneck wasn’t the people – it was the fact that so much of the QA process relied on humans executing repetitive tests step by step.
It’s the process, not the people
It’s important to emphasize that QA bottlenecks are a process issue, not a personnel issue. QA professionals in enterprise teams are usually juggling huge workloads, managing test environments, writing scripts, and manually checking critical flows. If bugs slip through or testing takes too long, it’s not because the QA folks aren’t skilled or working hard – it’s because the manual process they’re stuck with is inherently slow and prone to oversight. In many companies, QA is under-resourced relative to development. You might have 50 developers and 5 testers, for example, which means without augmenting their efforts, testing will inevitably lag behind development. The result is often a scramble at the end of each release cycle, with QA working overtime to run tests and developers context-switching back to fix bugs found late. This scramble is a symptom of a broken system.
Modern engineering organizations are recognizing that to speed up delivery, they must fix the QA process itself. That means introducing more automation, smarter tools, and better integration of testing into the development pipeline. The goal is to let QA focus on strategy (what to test and why) rather than being human test execution machines. That’s where a new generation of AI-driven testing agents is entering the picture.
How AI agents are changing the QA process
AI-powered QA agents are autonomous testing tools that simulate a human tester’s actions. Think of them as tireless bot testers that can click through your application, fill forms, validate outputs, and report bugs – but much faster and more consistently than a human. These AI agents are changing the QA landscape by attacking the very inefficiencies that slow traditional QA down:
Continuous regression testing: AI agents can run regression tests on every code change or every night, covering critical user flows to ensure nothing that worked before was broken by the latest updates. This means you get immediate feedback if a new commit caused an issue, rather than discovering it days or weeks later. Teams using AI QA have been able to run daily test suites across core features without dedicating human resources, something impractical to do with a small manual QA team.
Faster test execution and coverage: What might take a tester several days to go through manually, an AI-driven test bot can do in hours or minutes, and it can run tests in parallel. For instance, I’ve worked with an e-commerce platform that was able to automate over 5,000 test executions covering their critical flows, saving 110 hours of manual QA effort. This freed their team to release updates faster, since they no longer had to wait on lengthy manual regression cycles.
Flaky test detection and resilience: AI agents can detect patterns like flaky tests (tests that sometimes pass, sometimes fail due to timing or environment issues) by analyzing test run data. Some advanced tools automatically retry or adjust to flaky tests, or flag them for engineers to fix. They can also be more resilient to minor application changes – for example, if a button moved or was renamed, an AI-driven test might adapt using computer vision or contextual selectors, whereas a traditional scripted test would just break. This reduces the maintenance burden on QA to constantly update test scripts.
24/7 testing integrated with CI/CD: AI QA tools integrate with continuous integration pipelines (Jenkins, GitHub Actions, etc.), so tests run automatically with each build or deployment. There is no waiting for the next morning’s manual test run – the feedback is ready as soon as the code is deployed to a test environment. This tight integration was key for teams like Shoplab, who hooked an AI QA agent into their GitHub CI workflow. “Every time we push a release, [the AI agent] runs the tests… ensuring immediate feedback on potential issues,” one engineering lead noted. The outcome is fewer post-release bugs and a smoother user experience because problems are caught early. A minimal sketch of this kind of per-push check follows below.
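The exact setup depends on the agent or platform you choose. Purely as an illustration, and assuming a Playwright-style test runner as the execution layer (a common open-source option, not something prescribed by the teams quoted above), a per-push smoke check of one critical flow could look like the sketch below. The URLs, selectors, and environment variables are hypothetical placeholders.

```typescript
// Hypothetical smoke test for one critical flow, run by CI on every push.
// All URLs, selectors, and credentials here are illustrative placeholders.
import { test, expect } from '@playwright/test';

test('checkout smoke: a logged-in user can reach the payment step', async ({ page }) => {
  // Point at whatever environment the pipeline just deployed to.
  await page.goto(process.env.STAGING_URL ?? 'https://staging.example.com');

  // Log in with a dedicated test account, never real user credentials.
  await page.getByLabel('Email').fill(process.env.TEST_USER ?? 'qa-bot@example.com');
  await page.getByLabel('Password').fill(process.env.TEST_PASSWORD ?? '');
  await page.getByRole('button', { name: 'Log in' }).click();

  // Walk the happy path far enough to confirm the flow still works end to end.
  await page.getByRole('button', { name: 'Add to cart' }).first().click();
  await page.getByRole('link', { name: 'Checkout' }).click();

  // A failed assertion becomes a failed build within minutes of the push,
  // instead of a bug report days later.
  await expect(page.getByRole('heading', { name: 'Payment' })).toBeVisible();
});
```

Wired into the pipeline, this is exactly the kind of immediate feedback described above: a broken flow shows up as a red build, not as a support ticket.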
Crucially, these improvements don’t come by working QA engineers harder; they come by changing the process and tools. The AI does the heavy lifting of executing tests, so the humans can focus on higher-level quality strategies. However, it’s also important to know where the current capabilities end. AI agents aren’t magical silver bullets for every kind of testing. Let’s break down what AI QA agents can already handle today, and what they cannot (at least not yet).
What AI QA agents can automate today
Repetitive regression tests across builds and environments (UI clicks, form submissions, basic validations).
Flaky test detection and retry logic to handle intermittent failures in test scripts.
Continuous smoke testing of key user flows (login, checkout, etc.) on every deployment, catching breakages immediately.
Auto-generation of simple test cases from specifications or past user journeys (for well-defined scenarios).
Self-healing test scripts that adjust to minor UI changes (e.g., an updated button text or layout) using AI vision/NLP techniques – a simplified sketch of this fallback idea follows below.
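Vendors implement self-healing in proprietary ways (vision models, DOM similarity, learned locators), so the snippet below is only a heavily simplified sketch of the underlying fallback idea, with made-up selectors, to make the concept concrete.

```typescript
// Heavily simplified illustration of a "self-healing" locator strategy:
// try the original selector first, then fall back to more semantic alternatives
// if the UI changed. Real AI agents use much richer signals than this.
import { Page, Locator } from '@playwright/test';

async function findSubmitButton(page: Page): Promise<Locator> {
  const candidates: Locator[] = [
    page.locator('#submit-order'),                      // original, brittle id
    page.getByRole('button', { name: /place order/i }), // semantic fallback
    page.getByRole('button', { name: /submit/i }),      // broader fallback
  ];

  for (const candidate of candidates) {
    if (await candidate.count() > 0) {
      return candidate.first();
    }
  }
  throw new Error('Submit button not found by any known strategy; flag for human review');
}
```

The point is not the three hard-coded fallbacks, but that the test keeps running through a cosmetic UI change and only escalates to a human when every strategy fails.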
What they can’t (yet) automate
Performance and load testing at scale: simulating thousands of concurrent users and measuring system behavior under stress still calls for specialized load-testing tools and human analysis.
Security penetration testing beyond known patterns: identifying novel vulnerabilities or performing deep security audits still needs human expertise and specialized tools.
Exploratory testing and UX assessment: understanding the nuances of user experience and usability, or doing unscripted exploration of the app, is an area where human creativity is still superior.
Complex logic validation without guidance: an AI agent can’t confirm that a new feature meets business requirements if those rules aren’t clearly defined for it. (AI can’t read your mind for expected behavior; it follows the patterns or goals you give it.)
Compliance and domain-specific testing: regulatory compliance checks and industry-specific test scenarios often require expert knowledge that generic AI agents don’t yet possess.
AI QA agents today are very effective at automating repetitive functional tests and catching regressions, which addresses the biggest time sinks in QA. Tasks like running hundreds of login/purchase scenarios or validating calculations can be handed off to an AI agent with confidence. These are areas where tools are mature today. For example, it’s already feasible to have an AI agent auto-click through an app’s critical paths every night and alert the team if anything fails – many companies are doing this right now and drastically reducing the number of escaped bugs.
On the other hand, certain testing domains remain challenging for AI. Performance testing involves simulating complex load patterns and interpreting system metrics – while AI can assist, you still need specialized load testing frameworks and human analysis to truly vet performance. Similarly, security testing (like penetration testing or code security audits) is only partially automatable; AI might help generate test inputs or identify common vulnerabilities, but it’s not reliably catching all security issues by itself. And when it comes to exploratory testing – the free-form, unscripted exploration that experienced testers do to find edge cases – AI is still largely an assistant rather than a replacement. In short, AI can do the heavy lifting on routine tests, but human insight is still critical for the tricky stuff.
One example of AI-powered QA comes from Telgea, a startup that provides local mobile plans globally. With a small QA team, they began by having an AI agent continuously test their web and mobile apps across regional variants. This allowed them to catch locale-specific issues early, speed up delivery, and keep quality high, without increasing headcount. Their QA lead now spends time crafting new test strategies instead of manually running regression test suites.
Getting started with AI-powered QA
For engineering teams eager to shorten release cycles and ease their QA bottleneck, the question is: How do you transition toward AI-powered QA in practice? Even if you still have a small (or stretched-thin) QA team, there are pragmatic steps you can take right now:
Identify high-value automation targets: Start by pinpointing the tests that consume the most time each release. These are usually regression tests or smoke tests that run through core user flows (login, CRUD operations, checkout, etc.). If your QA team is running the same test script or checklist over and over, that’s a prime candidate for an AI agent to automate.
Pick a pilot project and tool: There are several AI-driven testing tools and platforms available (ranging from open-source frameworks to enterprise services). Choose one that fits your tech stack and try a pilot on a contained project or a single module of your application. The goal is to get a feel for how the AI agent works with your app. For example, set it up to automate the login flow and a few critical screens initially.
Integrate it with your CI/CD pipeline: Treat the AI tests just like any other automated test suite and integrate them into your continuous integration or deployment pipeline. This way, whenever developers push code, the AI agent automatically kicks off tests. Early integration is key; you want the AI-driven tests to run in staging environments on every build if possible, providing quick feedback. This also helps build trust in the tool’s results, because you’ll see test reports regularly. (A minimal configuration sketch follows after this list.)
Train and involve your QA team: Rather than viewing AI as a replacement for your QA engineers, involve them in configuring and improving the AI agent. The QA team’s domain knowledge is crucial because they know the edge cases and expected behavior. They might need to train the AI agent on how to navigate certain workflows or verify certain outputs. Encourage QA engineers to see the agent as an assistant: for instance, they can teach it the basic flows, then let it handle the bulk of execution while they supervise and write new high-level test scenarios. This transition may require learning new tools or scripting for the QA team, but it will pay off in multiplied productivity.
Start small, then iterate and expand: Begin with a few automated scenarios and once those are running reliably, add more. Perhaps you start with automating regression tests for last quarter’s features; next you expand to cover new features as they are developed. Use the results to guide you; if the AI tests catch a bug, do a post-mortem to refine the test or the code. If the AI misses something, treat it as feedback to improve either the test coverage or to adjust the agent’s configuration. Over a few iterations, you’ll gradually increase coverage and confidence. It’s often effective to have the AI tests run in parallel with manual testing for a couple of cycles, so you can compare results and fine-tune before fully relying on them.
Measure and celebrate the wins: Track metrics to show the impact. How much did automation reduce testing time per release? Are you releasing more frequently or with fewer bugs? For example, measure “QA cycle time” before and after (how long it takes from code complete to release), and track any reduction in bugs found in production. When you can quantitatively show, say, a 30% faster release cadence or a drop in QA hours spent, it builds momentum and buy-in to invest further in AI QA. It also highlights the QA team’s strategic value – they enabled those improvements by leveraging smarter tools.
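To make the pilot and CI steps above a bit more concrete: if the execution layer underneath your chosen tool is an open-source runner such as Playwright (one possible choice, used here only for illustration), the CI wiring is often little more than a configuration like the sketch below plus a pipeline step that runs the suite on every push. The directory names, environment variables, and values are assumptions, not recommendations.

```typescript
// playwright.config.ts: a minimal pilot configuration, assuming a Playwright-based
// runner. Directory names, env vars, and values are illustrative only.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests/smoke',          // start small: only the pilot scenarios
  retries: process.env.CI ? 2 : 0,   // absorb occasional flakiness in CI runs
  workers: 4,                        // execute scenarios in parallel
  reporter: [['list'], ['html', { open: 'never' }]], // a readable report per build
  use: {
    baseURL: process.env.STAGING_URL ?? 'https://staging.example.com',
    trace: 'on-first-retry',         // keep evidence whenever a test has to retry
  },
});
```

The pipeline step itself is then a single command (for example, npx playwright test) that runs after each deploy to staging, so a broken flow surfaces as a failed build rather than as a late bug report.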
By following these steps, teams can gradually shift from a manual-intensive QA process to an automation-augmented process. Even if you only have one or two QA engineers, augmenting them with an AI agent can amplify their impact dramatically. The key is to start with manageable pieces and grow from there, all the while involving your team so they develop expertise with the new tools.
How QA testing is evolving from test execution to quality strategy
As AI agents take over the rote execution work, the role of QA professionals is inevitably shifting. Forward-looking engineering teams should prepare for QA to become more about strategy and oversight, and less about clicking through tests. In practice, this means QA engineers will spend more time on activities like:
Designing test scenarios and strategies: Deciding what needs to be tested and under what conditions. With AI able to execute many tests, the bottleneck becomes identifying the right cases. QA’s insight into user journeys and risk areas is crucial here – they direct the AI to focus on the important things. For example, a QA lead might outline a set of high-risk user flows for the AI agent to prioritize, or define the acceptance criteria for new features that the AI should verify.
Maintaining and improving AI test suites: Instead of manually running tests, QA engineers will monitor automated test results, investigate failures, and fine-tune the test scripts or AI agent’s behavior. If an AI-driven test fails due to a changed requirement or a false alarm, the QA engineer adjusts it. They become the curator of the automated test suite, ensuring it stays aligned with the evolving product.
Analyzing quality trends: With more automated testing, QA can also take on a more analytical role, looking at patterns in failures, spotting areas of the application that break frequently, and feeding that information back to the development team. This strategic feedback loop helps improve both the product and the development process (for instance, if a certain module has a lot of regression failures, the dev team may need to add unit tests or refactor that module). A toy example of this kind of failure analysis follows after this list.
Exploratory and creative testing: Freed from running basic regression checklists, QA can spend time on exploratory testing – the kind of creative, unscripted testing that finds the unknown unknowns. This is where human intellect complements the AI. While the AI is busy churning through predefined scenarios, a QA engineer can manually explore new features, trying odd combinations or adversarial inputs that a scripted approach might miss. Together, the two cast a wider net of coverage.
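As a small illustration of that analytical role, the toy snippet below groups automated-test failures by module to surface regression hotspots. The data shape and module names are made up; real platforms typically expose this through dashboards or APIs rather than ad-hoc scripts.

```typescript
// Toy example: find which modules fail most often across automated test runs,
// so QA can feed hotspots back to the development team. All data is made up.
type Failure = { module: string; test: string };

function failureHotspots(failures: Failure[]): [string, number][] {
  const counts = new Map<string, number>();
  for (const f of failures) {
    counts.set(f.module, (counts.get(f.module) ?? 0) + 1);
  }
  // Most frequently failing modules first: candidates for refactoring or extra unit tests.
  return [...counts.entries()].sort((a, b) => b[1] - a[1]);
}

// Example run: checkout surfaces as the hotspot worth a closer look.
console.log(failureHotspots([
  { module: 'checkout', test: 'applies discount code' },
  { module: 'checkout', test: 'computes VAT' },
  { module: 'profile',  test: 'updates email' },
]));
```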
In essence, the QA role is moving from execution to supervision and strategy. QA professionals become the architects of quality, not just the laborers of testing. This is a positive development: it makes the work more interesting and impactful. Teams should prepare for this change by upskilling QA staff in using automation and AI tools, and by adjusting processes to give QA a voice early in development. Rather than throwing code over the wall to QA at the end, the best results come when QA is involved from the design phase, identifying how new features will be tested (often with AI assistance) as they are being built.
Finally, it’s worth noting that none of this means QA engineers disappear – far from it. In fact, many companies adopting AI for QA find they want more QA talent, but the profile shifts. They look for QA engineers who can write automation code, who understand machine learning outputs, or who can act as quality coaches within development squads. The value of QA expertise rises as quality moves “left” in the process. AI agents are powerful tools in the QA toolkit, but tools need skilled people to wield them effectively.
Use AI to reclaim QA time
Testing has long been one of the most significant time sinks for engineering teams, but it doesn’t have to stay that way. By recognizing that the slow, manual QA process is the real bottleneck, organizations can take action to improve their process. AI QA agents represent a breakthrough in speeding up and scaling out testing. They tackle the mundane and time-consuming parts of QA, from regression runs to repetitive validations, in a fraction of the time, and enable engineering teams to ship faster, with confidence, as evidenced by the real-world results (faster releases, higher coverage, lower costs).
For technical decision-makers, the takeaway is clear: investing in smarter QA processes yields real ROI in productivity and product quality. Starting small with AI-driven testing and gradually expanding can transform QA from a roadblock into a competitive advantage. As you embark on this journey, remember to bring your QA team along and redefine their roles to leverage these new capabilities. The future of QA is not manual testers vs. AI bots; it’s a collaborative model where AI handles the heavy lifting and humans steer the strategy. By transitioning to AI-powered QA, engineering teams can reclaim the hours sunk into testing fire drills and reinvest them in innovation, ultimately delivering value to users faster without sacrificing quality. The tools are ready; the system needs to catch up.