STOCKHOLM, SWEDEN, March 17, 2026 /EINPresswire.com/ — Sorena AI says companies adopting generative AI for governance, risk, and compliance are running into two connected problems: confident answers that cannot prove coverage, and agentic systems that can be influenced by untrusted inputs.
The company argues that those risks are becoming more important as legal, security, privacy, and compliance teams try to use AI to move faster on audits, customer questionnaires, regulatory change, and internal reviews. In Sorena’s view, the issue is no longer whether AI can produce useful drafts. The issue is whether teams can trust those outputs when the work has to stand up to auditors, regulators, and customers.
Two risks, Sorena says, are converging
“In compliance, the failure mode is not always obvious nonsense,” a Sorena AI spokesperson said. “It is partial work that sounds complete, or an agent that treats untrusted content as something it should follow. Both create exposure.”
False confidence in GRC work
Sorena says its own benchmark work illustrates the first problem. In an internal January 2026 evaluation, two auditors scored Sorena Research Copilot and a baseline general-purpose AI assistant across 43 real-world compliance and regulatory research sessions covering 4,332 requirements. According to Sorena, its system achieved 100% requirement coverage with 0 factual errors, while the baseline assistant averaged 25% coverage and 183 factual errors. The company says the sessions included work such as privacy audits, AI governance, regulatory timeline analysis, sustainability compliance, and technical reviews. Sorena notes that the benchmark was internal and that results may vary by use case.
The company says those results point to what it calls a false-confidence problem in GRC. A model can summarize quickly, draft fluently, and still miss important obligations, controls, or timing requirements that only show up later when another stakeholder asks to see the evidence. That, Sorena says, is why teams often mistake speed at the answer layer for completeness at the execution layer.
Prompt injection in agentic systems
The second problem, in Sorena’s telling, is security. As more organizations experiment with AI agents that read websites, uploaded documents, emails, chats, and tickets, the company says they are exposing those systems to content that the agents did not write and cannot fully trust. Hidden instructions, poisoned context, and malicious prompts can all be mixed into what appears to be normal content. Sorena says that once an agent is allowed to both read untrusted material and take meaningful action, prompt injection becomes a design problem rather than a minor edge case.
How Sorena says teams should respond
Governed sources and reviewable outputs
That is why the company says guardrails alone are not enough. Sorena argues that the stronger control is to reduce exposure to uncontrolled inputs and to make provenance visible throughout the workflow. It says the core question for any agentic system is not simply how smart the model appears, but what the system is allowed to trust.
Sorena describes its AI-powered compliance platform as an attempt to solve that problem operationally rather than cosmetically. The product combines a Research Copilot for source-linked regulatory answers, an Assessment Autopilot for turning documents and frameworks into structured, reviewable assessments, and SSOT, a single source of truth layer that centralizes governed regulatory content, standards, security datasets, and customer documents. The company says that architecture is meant to keep outputs tied to approved and traceable sources instead of letting AI roam freely across mixed-trust material by default.
Sorena says customers use the platform across programs tied to regulations and standards such as the EU AI Act, GDPR, NIS2, DORA, CSRD, ISO/IEC 42001, and ISO 27001, as well as customer due diligence, internal audit, and questionnaire workflows. The company says the common requirement across all of those domains is not just automation, but verifiable automation: answers linked to sources, requirements mapped to evidence, and outputs that can be reviewed without starting from scratch every time.
A wider shift in how compliance tools are judged
The company also frames the shift as part of a broader change in how organizations evaluate compliance tools. Instead of asking whether AI can draft a response, Sorena says buyers are increasingly asking whether the system can show its work, maintain trust boundaries, and reduce rework. That standard, the company says, is what will separate demo-quality AI from software that can operate inside audit-critical environments.
Sorena’s position is that the compliance market is moving away from generic assistance and toward systems built for coverage, traceability, and control. Whether that view spreads beyond early adopters may depend on whether more buyers start seeing false confidence and prompt-injection exposure as operational risks rather than purely technical ones.
Media Relations
Sorena AI
email us here
Legal Disclaimer:
EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability
for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this
article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
![]()

























