Striking the balance - trust, quality and security in the age of AI
23 June 2025
Software development is accelerating. Code is now being written, tested and even reviewed by machines faster than ever. As more organisations join the race to embed machine learning into their products and workflows, one question persists: Are we trading security for speed?

For engineering teams and business leaders alike, one thing is clear - in our new AI-enhanced world, quality assurance (QA) and cyber security are not optional; they are essential.
Productivity vs risk: understanding AI’s dual edge
AI has quickly become a developer’s assistant. From generating boilerplate code to suggesting optimisations, AI tools accelerate delivery and reduce cognitive load. They’re helping teams solve problems faster than ever.
But speed isn’t everything.
AI-generated code can also introduce vulnerabilities. Tools like GitHub Copilot learn from large public codebases, many of which contain flaws. These tools don't reason; they replicate patterns. If a pattern includes a security flaw, it can be reproduced without warning.
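As a simple illustration (not the output of any particular tool), the snippet below shows the kind of pattern that appears throughout public codebases - a database lookup built by string concatenation - alongside the parameterised version a reviewer should insist on:

    import sqlite3

    # The kind of pattern an assistant can reproduce from public code:
    # building SQL by concatenating user input leaves the query open to injection.
    def find_user_unsafe(conn: sqlite3.Connection, username: str):
        query = "SELECT id, email FROM users WHERE username = '" + username + "'"
        return conn.execute(query).fetchall()

    # The same lookup as a parameterised query; the driver handles the input
    # safely regardless of what it contains.
    def find_user_safe(conn: sqlite3.Connection, username: str):
        return conn.execute(
            "SELECT id, email FROM users WHERE username = ?", (username,)
        ).fetchall()

Static analysis and human review both catch this kind of flaw quickly; the point is that neither can be skipped just because the code arrived faster.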
This is why automated assistance must be paired with rigorous QA and security review. AI might accelerate the build, but only humans can ensure what's built is fit for purpose. The developer remains responsible for the output of any AI-assisted development, and it's important that this accountability stays in place.
Similarly, AI is changing how we test software. Intelligent tools can generate and run thousands of test cases in minutes. Yet testing AI-driven systems brings new challenges. Traditional bugs still matter, but we must now also look for model bias, fairness issues and unpredictable behaviour under real-world data shifts. We've seen some tools starting to identify traditional security issues - how well will they cope with AI-generated code?
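To give a flavour of what that looks like in practice, here is a minimal sketch of a behavioural check, assuming a hypothetical classify function that wraps the model under test - the helper and inputs are illustrative, not taken from any specific framework:

    # Behavioural check: the verdict should not flip when an input is perturbed
    # in ways that leave its meaning intact. 'classify' is a hypothetical
    # wrapper around the model under test.
    PERTURBATIONS = [
        lambda s: s.upper(),                   # shouting should not change the verdict
        lambda s: s + "   ",                   # trailing whitespace
        lambda s: s.replace("movie", "film"),  # harmless synonym swap
    ]

    def check_stability(classify, text="I really enjoyed this movie"):
        baseline = classify(text)
        for perturb in PERTURBATIONS:
            variant = perturb(text)
            assert classify(variant) == baseline, f"Verdict changed for: {variant!r}"

Checks like this sit alongside conventional functional and security testing, not instead of it.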
The question has changed from “does it work?” to “does it work reliably, securely and fairly – even as it learns and evolves?”
AI-enhanced attacks: the growing threat landscape
It’s not just development that’s evolving. So is cybercrime.
AI is also now in the hands of attackers, enabling sophisticated phishing, dynamic malware and high-volume scanning at a scale beyond human capability. Attackers are crafting more convincing emails, mimicking executive voices and probing systems for weaknesses using AI-driven reconnaissance.
Recent data shows a massive spike in phishing and credential theft since 2022, driven by AI’s ability to personalise and automate attacks.
Security, therefore, must keep pace. The same technology that improves our workflows is being used to undermine them. Defensive strategies must now include anomaly detection, threat modelling tailored for AI systems and continuous monitoring.
AI is no longer an experimental tool. It’s a key player on both sides of the cyber security divide.
Securing AI systems across the full lifecycle
Securing an AI-powered application isn’t the same as securing traditional software. It requires us to think and act differently, across every phase of the lifecycle:
Design: Start with threat modelling. How could this model be misused? Could it be manipulated via input or exposed via output?
Development: Test not only the code but the model. Use adversarial inputs. Check for bias. Validate outputs under edge cases.
Deployment: Apply access control, encrypt sensitive flows and monitor for drift (a simple example follows this list) – because models degrade and threats evolve.
Decommissioning: Treat the retirement of a model as a critical step in its own right. Preserve or destroy artefacts securely and prevent legacy systems from introducing new risks.
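As a flavour of what "monitor for drift" can mean in practice, here is a minimal sketch that compares a production feature's distribution with the one the model was trained on; the feature name and alerting hook are illustrative assumptions, and real drift monitoring typically spans many features and the model's outputs too:

    from scipy.stats import ks_2samp

    # Minimal drift check: compare the distribution of a feature seen in
    # production against the distribution the model was trained on. A small
    # p-value suggests the live data has shifted away from the baseline.
    def feature_has_drifted(training_values, live_values, alpha=0.01) -> bool:
        return ks_2samp(training_values, live_values).pvalue < alpha

    # Illustrative use - 'alert_on_call_team' is a placeholder, not a real API:
    # if feature_has_drifted(baseline_amounts, this_weeks_amounts):
    #     alert_on_call_team("Input drift detected - review model behaviour")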
The quality of an AI system's outputs is directly tied to the quality of the data it learns from. If that data is compromised, so is the model. Clean, validated and securely sourced data is essential - not just for functionality but for safety.
The role of QA and cyber security: more vital than ever
In this new AI era, QA engineers and security professionals are more than gatekeepers - they’re partners in innovation.
QA teams help ensure AI features meet expectations, handle edge cases gracefully and behave reliably under pressure. They think about the user journey, the data inputs and, increasingly, the adversarial prompts that could expose weaknesses.
Security professionals assess the full attack surface - from training data to inference endpoints. They work with data scientists to secure datasets, with developers to harden code and with leadership to embed governance.
And crucially, these roles must now collaborate from day one. AI doesn’t wait until the end of the pipeline. Neither should quality or security.
Think AI DevSecOps - an integrated end-to-end approach that bakes in trust and resilience from the first line of code.
Building resilient AI systems: practical steps forward
For teams embracing AI, here’s what best practice looks like:
Secure by design - embed security at the design stage. Protect data at every point. Use proven frameworks and secure your AI architecture from day one.
Test for more than correctness - move beyond functional tests. Introduce adversarial testing. Validate fairness. Expect the unexpected and test for it.
Monitor continuously - set up telemetry on model behaviour. Watch for drift. Detect anomalies early and be ready to respond.
Protect the pipeline - secure every stage – from dataset to model deployment. Track model provenance. Control access. Guard the integrity of the AI supply chain.
Train your people - invest in awareness. Educate your teams on AI-specific risks. Encourage collaboration between disciplines.
Establish governance - leaders must define clear policies on data use, model deployment and AI ethics. Provide clarity and direction. Move fast – but move safely.
Build in guardrails - define boundaries for acceptable behaviour. Use policy constraints, rate limiting and fallback mechanisms to keep models aligned with intent. Prevent misuse and ensure safe operation (a simple sketch follows this list).
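As a minimal sketch of the guardrails idea - assuming a hypothetical generate function that calls the model, with placeholder limits and policy terms rather than recommendations:

    import time

    MAX_CALLS_PER_MINUTE = 30                    # placeholder limit
    BLOCKED_TERMS = {"password", "card number"}  # placeholder policy list
    FALLBACK_RESPONSE = "Sorry, I can't help with that request."

    _call_times: list[float] = []

    # Wraps a hypothetical 'generate' call to the model with three guardrails:
    # a rate limit, a simple output policy check and a safe fallback.
    def guarded_generate(generate, prompt: str) -> str:
        now = time.monotonic()
        _call_times[:] = [t for t in _call_times if now - t < 60]
        if len(_call_times) >= MAX_CALLS_PER_MINUTE:
            return FALLBACK_RESPONSE
        _call_times.append(now)

        response = generate(prompt)
        if any(term in response.lower() for term in BLOCKED_TERMS):
            return FALLBACK_RESPONSE
        return response

The specifics will vary by product; the design point is that limits, policy checks and a safe fallback live in code around the model, not inside the model itself.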
Looking ahead: responsible AI, built on trust
AI will shape the future of software, but how we build and secure it will determine whether that future is trusted.
The organisations that succeed will be those who recognise that speed must be matched with rigour. That innovation is only as valuable as it is secure. And that AI, for all its promise, doesn’t replace the need for thoughtful, human-led software engineering.
At Instil, we believe quality and security are non-negotiable, and that applies tenfold in an AI-enabled world. Because trust isn't something you retrofit; it's something you bake in from the start.

Simon Whittaker
Head of Cyber Security