ISO 27001
Information security management
Cyber Essentials Plus
NCSC-accredited cyber baseline
AWS Advanced Tier Partner
Independently certified capability

Data Protection and Privacy

How does the solution handle personal data?
The solution processes customer personal data in order to identify callers, access account information, and resolve interactions. All personal data processing is conducted in accordance with UK GDPR and, for European deployments, the EU GDPR. Data is processed only for the purposes defined in the service agreement and is not used for any other purpose; in particular, customer data is never used for AI model training without the customer’s explicit consent.
Where is data stored and processed?
For UK customers, data is stored and processed within the UK and EU. For European customers requiring full data sovereignty, the solution can be deployed entirely within the EU, hosted on AWS EU infrastructure. This is a critical requirement for an increasing number of public sector and regulated organisations and is built into our standard European deployment architecture.
Is customer data used to train the AI model?
Customer interaction data — transcripts, call recordings, and chat logs — is used solely to improve the performance of the AI for that specific customer’s deployment (customer-specific fine-tuning), not to train a shared or generic model. Data from one customer’s deployment is never used to train or improve the model for another customer. This is a key differentiator from many generic AI platforms.
Who can access customer interaction data?
Access to interaction data is strictly controlled by role-based access controls (RBAC). Within the customer organisation, access is limited to authorised personnel defined during the security and guardrails workshop. StableLogic accesses interaction data only for performance monitoring, quality assurance, and improvement purposes, under the terms of the data processing agreement.
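The deny-by-default role-based access described above can be illustrated with a minimal sketch. The role names and permissions here are hypothetical, not StableLogic’s actual schema; in a real deployment the roles are those agreed during the security and guardrails workshop.

```python
from dataclasses import dataclass

# Illustrative role-to-permission mapping (not the actual production schema).
ROLE_PERMISSIONS = {
    "qa_reviewer": {"read_transcripts"},
    "team_lead": {"read_transcripts", "read_recordings"},
    "admin": {"read_transcripts", "read_recordings", "export_data"},
}

@dataclass
class User:
    name: str
    role: str

def can_access(user: User, permission: str) -> bool:
    """Deny by default: grant access only if the user's role carries the permission."""
    return permission in ROLE_PERMISSIONS.get(user.role, set())
```

The key property is that an unknown role, or an unlisted permission, resolves to a denial rather than an error or an implicit grant.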
Is a Data Processing Agreement (DPA) available?
Yes. StableLogic provides a full Data Processing Agreement as standard, covering the roles of data controller and data processor, the categories of data processed, retention periods, sub-processor details, and the rights of data subjects. This is provided at contract stage and can be reviewed by your legal and information governance teams prior to signature.
How long is interaction data retained?
Default retention periods are defined in the service agreement and are configurable to meet your organisation’s data retention policy. Interaction transcripts, recordings, and summaries can be automatically deleted after a defined period. Audit logs are retained for a minimum period to support compliance and complaints handling requirements.
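A configurable retention policy of this kind might look like the following sketch. The periods shown are placeholders; actual values are set per customer in the service agreement.

```python
from datetime import datetime, timedelta, timezone

# Illustrative retention windows only; real periods are agreed per customer.
RETENTION = {
    "transcript": timedelta(days=90),
    "recording": timedelta(days=30),
    "audit_log": timedelta(days=365 * 6),  # audit logs kept longer for compliance
}

def is_expired(record_type: str, created_at: datetime, now: datetime) -> bool:
    """A record is expired once its age exceeds the configured window."""
    return now - created_at > RETENTION[record_type]

def purge(records: list[dict], now: datetime) -> list[dict]:
    """Return only the records still inside their retention window."""
    return [r for r in records if not is_expired(r["type"], r["created_at"], now)]
```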

AI Transparency and Explainability

Can we see what the AI is doing and why?
Yes. Every action taken by the AI is logged in full — what the customer said, what the AI understood, what it accessed, what decision it made, and what action it took. This full audit trail is available through the management dashboard and can be reviewed at any time. There are no black-box decisions: every interaction is explainable, reviewable, and auditable.
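The five elements listed above (what was said, what was understood, what was accessed, the decision, the action) can be captured as one structured audit entry per AI action. This is a hypothetical record shape for illustration, not the product’s actual log format.

```python
import json
from datetime import datetime, timezone

def log_ai_action(interaction_id: str, utterance: str, intent: str,
                  data_accessed: list[str], decision: str, action: str) -> str:
    """Serialise one AI action as a structured, reviewable audit entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "interaction_id": interaction_id,
        "customer_said": utterance,      # what the customer said
        "ai_understood": intent,         # what the AI understood
        "data_accessed": data_accessed,  # what it accessed
        "decision": decision,            # what decision it made
        "action_taken": action,          # what action it took
    }
    return json.dumps(entry)
```

Because every field is explicit, each interaction can be replayed and explained after the fact rather than reconstructed from model internals.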
How does the AI make decisions?
The AI uses a combination of large language model (LLM) reasoning, retrieval-augmented generation (RAG) from your organisation’s knowledge base, and defined policy rules to understand customer intent and determine the appropriate response or action. All actions are constrained by the policy guardrails defined and agreed during the Security and Guardrails workshop before go-live.
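The intent → retrieval → policy-check flow described above can be sketched as follows. The function parameters `classify_intent`, `retrieve`, and `generate` stand in for the underlying LLM and RAG components, and the permitted-intent list is illustrative.

```python
# Hypothetical allowlist of intents the AI may handle autonomously.
PERMITTED_INTENTS = {"balance_query", "book_repair", "update_contact_details"}

def handle(utterance: str, classify_intent, retrieve, generate) -> dict:
    intent = classify_intent(utterance)       # LLM reasoning: understand intent
    if intent not in PERMITTED_INTENTS:       # policy rule: outside scope -> escalate
        return {"action": "escalate_to_human",
                "reason": f"intent '{intent}' not permitted"}
    context = retrieve(utterance)             # RAG: pull from the knowledge base
    reply = generate(utterance, context)      # response grounded in retrieved context
    return {"action": "respond", "intent": intent, "reply": reply}
```

The point of the sketch is the ordering: the policy check sits between understanding and generation, so an out-of-scope request never reaches the generation step.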
What happens if the AI makes a mistake?
All AI interactions are logged, and any errors or unexpected outcomes can be reviewed in full. Where an AI action results in a customer complaint or requires correction, the full interaction transcript is available to support resolution. The continuous improvement process — built into the operational model — identifies patterns of error and uses them to refine the AI’s responses and guardrails over time.
Can customers tell they are talking to an AI?
Yes, and they must be told. The solution is designed to be transparent with customers about the fact that they are interacting with an AI system. This is both an ethical requirement and, for deployments subject to the EU AI Act, a legal obligation. The AI will clearly identify itself as an AI at the start of every interaction. Customers are always offered the option to speak with a human agent.
Does the AI system fall under the EU AI Act?
AI systems used in customer-facing contact centre contexts are generally classified as limited-risk under the EU AI Act, which imposes transparency obligations — specifically, disclosing to users that they are interacting with an AI. Full enforcement of the EU AI Act’s high-risk provisions is planned from August 2026. StableLogic’s solution is designed to meet these transparency requirements as standard. For deployments in healthcare or other sectors where the AI may be classified as higher-risk, we provide a full regulatory compliance assessment as part of the discovery process.
What is your approach to algorithmic bias?
The AI is trained on your organisation’s own data and knowledge base, reducing the risk of generic model biases affecting your customers. Before go-live, StableLogic conducts a bias and fairness review as part of the Security and Guardrails workshop, examining the AI’s behaviour across customer segments, languages, and interaction types. Ongoing sentiment and resolution monitoring provides early warning of any differential outcomes by customer group.

Safeguarding and Vulnerable Customers

How does the solution handle vulnerable customers?
Safeguarding is built into the solution by design, not as an afterthought. The AI uses real-time sentiment analysis to identify customers who may be distressed, confused, or vulnerable. When these signals are detected, the AI adjusts its behaviour — slowing the interaction, offering clearer options, and proactively offering human escalation. The specific triggers and escalation pathways are defined with each customer during the Security and Guardrails workshop.
Can the AI recognise if a customer is in crisis?
The AI is configured with sector-specific safeguarding triggers — for example, references to domestic abuse, mental health crisis, or self-harm — that result in immediate escalation to a human agent and, where appropriate, signposting to relevant support services. These triggers are defined in collaboration with the customer’s safeguarding team before go-live.
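As a simplified illustration of trigger-based escalation, a keyword layer might look like the sketch below. Real deployments combine this with sentiment analysis, and the actual triggers and signposting destinations are defined with the customer’s safeguarding team, not hard-coded like this.

```python
# Illustrative safeguarding triggers and signposting actions (placeholders only).
CRISIS_TRIGGERS = {
    "domestic abuse": "signpost_domestic_abuse_support",
    "self-harm": "signpost_crisis_line",
    "can't cope": "signpost_crisis_line",
}

def check_safeguarding(utterance: str) -> dict:
    """Return an immediate human-escalation decision if a crisis trigger is present."""
    text = utterance.lower()
    for phrase, signpost in CRISIS_TRIGGERS.items():
        if phrase in text:
            return {"escalate": True, "to": "human_agent", "signpost": signpost}
    return {"escalate": False}
```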
What training or oversight is in place for vulnerable customer interactions?
All escalation pathways for vulnerable customers are defined, documented, and tested before go-live. The AI’s behaviour in sensitive scenarios is tested as part of the testing phase and reviewed in the first weeks of operation. Human agents who receive escalated vulnerable customer interactions are trained in the context they will receive from the AI handoff.
How does the solution ensure customers can always reach a human?
The option to speak with a human agent is always available and is never suppressed. The AI is designed to offer human escalation proactively in any interaction where it detects complexity, distress, or customer preference. There is no point in any AI-handled interaction where the customer is unable to reach a human if they need to.

Want the full governance documentation?

Talk to our team →

Security and Cyber Risk

How is the solution secured?
The solution is built on AWS, one of the world’s most secure cloud platforms, with configuration compliance monitored by AWS Config and access managed through AWS IAM (Identity and Access Management). All communications are encrypted in transit and at rest. Access controls are enforced at platform level and are configured to the principle of least privilege. StableLogic holds ISO 27001 certification and Cyber Essentials Plus, and applies the same security standards to all customer deployments.
Can the AI be manipulated by malicious users (prompt injection)?
This is a known risk with large language model-based systems and is addressed through a combination of input filtering, output guardrails, and policy constraints defined during the Security and Guardrails workshop. The AI operates within a defined scope of permitted actions and responses. Any attempt to manipulate the AI outside this scope is logged, flagged, and — where it represents a security risk — escalated for review.
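The input-filtering layer mentioned above can be sketched as a simple pattern screen. This is a deliberately minimal illustration: the patterns are examples, and in practice such a filter is only one layer alongside model-side guardrails and output checks.

```python
import re

# Example patterns associated with prompt-injection attempts (illustrative only).
INJECTION_PATTERNS = [
    r"ignore (all |your )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
]

def screen_input(utterance: str) -> dict:
    """Flag suspected injection attempts for logging and review instead of processing them."""
    for pattern in INJECTION_PATTERNS:
        if re.search(pattern, utterance, re.IGNORECASE):
            return {"allowed": False,
                    "flag": "possible_prompt_injection",
                    "pattern": pattern}
    return {"allowed": True}
```

Note that a flagged input is logged and reviewed rather than silently dropped, matching the escalation behaviour described above.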
What happens if the AWS infrastructure goes down?
The solution is architected for high availability within AWS, using redundant components and multi-availability zone deployment. In the event of a service disruption, calls and contacts can be routed to human agents via defined fallback procedures. StableLogic provides a service level agreement (SLA) covering availability and incident response times.
Who is responsible for security in the event of a breach?
Responsibilities are clearly defined in the contract and data processing agreement. StableLogic is responsible for the security of the platform infrastructure and for notifying the customer of any security incidents within the required regulatory timeframes. The customer retains responsibility for security within their own systems and for the actions of authorised users.

Governance and Compliance

What governance framework does the solution operate within?
The solution operates within a governance framework that is defined and agreed with each customer before go-live. This includes a documented set of policies, guardrails, escalation procedures, and performance metrics. The framework is reviewed at regular intervals and updated as the solution evolves, the customer’s requirements change, or the regulatory environment shifts.
How are guardrails defined and enforced?
Guardrails are defined during a dedicated Security and Guardrails workshop, which takes place before implementation begins. This workshop identifies the actions the AI is permitted to take, the scenarios in which it must escalate, the language and tone it is permitted to use, and the boundaries of its knowledge and authority. These guardrails are implemented as hard constraints in the system configuration — not as suggestions to the AI.
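The distinction between hard constraints and suggestions can be made concrete: in the sketch below (an assumption about implementation style, not the product’s actual code), the action allowlist is checked in application code, outside the model, so the LLM cannot talk its way past it.

```python
# Illustrative guardrail configuration agreed at the workshop (placeholder values).
GUARDRAILS = {
    "permitted_actions": {"send_balance", "book_appointment"},
}

def execute(action: str, dispatch, **params):
    """Enforce the allowlist in code: the model proposes, the platform disposes."""
    if action not in GUARDRAILS["permitted_actions"]:
        raise PermissionError(f"action '{action}' is outside the agreed guardrails")
    return dispatch(action, **params)
```

Because the check runs before dispatch, a prompt that convinces the model to attempt a forbidden action still results in a refusal at the platform layer.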
Can the AI take actions that are irreversible — such as cancelling a contract or processing a refund?
The AI can take actions of this type only where they are explicitly within the guardrails agreed during the Security and Guardrails workshop, and only within the authorisation boundaries defined by the customer. For high-value or high-risk actions, additional confirmation steps can be built into the workflow — for example, requiring the customer to confirm by SMS before a refund is processed.
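A confirmation step of the kind described (e.g. SMS before a refund) amounts to a two-phase commit: queue the action, challenge the customer out of band, and execute only on a matching response. The sketch below is hypothetical; `send_sms` and `execute_refund` stand in for real messaging and payment services.

```python
import secrets

# In-memory store of refunds awaiting customer confirmation (illustrative only).
PENDING: dict[str, dict] = {}

def request_refund(refund_id: str, amount: float, send_sms) -> None:
    """Phase 1: queue the refund and send a one-time code to the customer."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    PENDING[refund_id] = {"amount": amount, "code": code}
    send_sms(code)  # out-of-band challenge to the customer's registered number

def confirm_refund(refund_id: str, submitted_code: str, execute_refund) -> dict:
    """Phase 2: execute only if the customer echoes the correct code back."""
    pending = PENDING.get(refund_id)
    if pending is None or submitted_code != pending["code"]:
        return {"status": "rejected", "reason": "confirmation failed"}
    del PENDING[refund_id]
    return execute_refund(pending["amount"])
```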
How does the solution support regulatory compliance in specific sectors?
For Social Housing, the solution is configured to support compliance with the Housing Ombudsman’s complaint handling requirements, the Regulator of Social Housing’s Transparency, Influence and Accountability standard, and consumer regulation requirements. For Healthcare, the solution is designed to support CQC-aligned service delivery and relevant NHS frameworks. For Public Sector, the solution is designed to meet Government Digital Service (GDS) accessibility standards, including WCAG 2.1 AA. For all regulated sectors, a sector-specific compliance review is included in the discovery phase.
How does the UK regulatory landscape affect this solution?
The UK government operates a principles-based approach to AI regulation, built around five cross-sectoral principles: safety, security, fairness, accountability, and transparency. StableLogic’s governance framework is explicitly designed to address all five principles. We monitor developments in UK AI regulation, including the AI Safety Institute’s guidance, and update our governance framework accordingly.

Workforce and Change Management

How will this affect our existing contact centre staff?
This is one of the most important questions to address early, and honestly. The solution is designed to remove routine and repetitive interactions from human agents, not to eliminate the human contact centre entirely. The realistic outcome for most deployments is a reduction in required headcount over time — through natural attrition and redeployment, not immediate redundancy. StableLogic provides workforce change management support as part of the Tier 2 deployment, including communication planning, union engagement guidance, and redeployment frameworks.
How do we manage union concerns?
Union engagement is addressed as a specific workstream in the Tier 2 deployment plan. StableLogic has experience supporting organisations through this process. Our recommended approach is to engage unions early, be transparent about the intended scope and timeline of automation, and work collaboratively on the workforce transition plan. Early engagement consistently produces better outcomes than attempting to manage union concerns reactively after deployment begins.
What training do our staff need?
Human agents working alongside the AI require training in three areas: how to handle escalations from the AI (including reading the context summary provided by the AI), how to use the management dashboard, and how to provide feedback on AI performance to support continuous improvement. This training is included in the Tier 2 deployment plan and is designed to be completed in a single day for most agents.

Have a governance or compliance question that isn’t answered here?

Our team works with procurement, legal and information governance leads every week. Bring the question — we’ll give you a precise, verifiable answer.

Talk to our team