How FinTechs are turning GRC into a strategic enabler
In this Help Net Security interview, Alexander Clemm, Corp GRC Lead, Group CISO, and BCO at Riverty, shares how the GRC landscape for FinTechs has matured in response to tighter regulations and global growth. He discusses the impact of frameworks like DORA and the EU AI Act, and reflects on building a culture where compliance supports, rather than slows, business progress.
How has the GRC landscape evolved for FinTechs in the last few years, particularly as they scale or expand globally?
Over the past few years, the GRC landscape for FinTechs has undergone a significant transformation, one marked by both increasing regulatory rigor and a shift toward harmonization across jurisdictions.
In the European Union, the Digital Operational Resilience Act has become a defining regulatory milestone. Unlike its predecessors, DORA comes with detailed Regulatory Technical Standards that substantially narrow the room for interpretation. This level of granularity represents a step-change: it doesn’t just signal intent, it operationalizes compliance. As a result, DORA has become more than just another regulation – it’s the gravitational center around which many operational resilience strategies now revolve.
On a global scale, regulatory frameworks may differ in name and nuance, but their underlying goals are remarkably aligned: secure digital operations, risk-aware expansion, and resilient infrastructures. What we increasingly see is the emergence of a global compliance “meta-language” – and regulations like DORA, with their prescriptive clarity, provide a robust baseline for navigating international expansion. For FinTechs looking to scale, this convergence actually offers strategic leverage. Rather than reinventing GRC with each new market entry, companies can anchor their frameworks in DORA-aligned principles and adapt efficiently from there.
That said, complexity still looms large. The key to mastering modern GRC is the creation of a scalable and interoperable internal framework, i.e., one that maps disparate external requirements into a unified, well-governed internal control environment.
At Riverty, we do this by defining clear ownership (RACIs), proxy models, and terminology harmonization layers.
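To make that mapping idea tangible, here is a minimal Python sketch – the control ID, owner role, and regulatory references are invented for illustration, not Riverty's actual registry. Each external requirement resolves to one internally owned control with an accountable owner, so entering a new market means extending the mapping rather than building a parallel framework.

```python
from dataclasses import dataclass, field

@dataclass
class InternalControl:
    """One internally owned control; external requirements map onto it."""
    control_id: str
    description: str
    owner: str                                   # accountable role from the RACI
    external_refs: list[str] = field(default_factory=list)

# A unified internal registry: each external requirement resolves to one
# well-governed internal control instead of one framework per regulation.
registry = {
    "IC-BCM-01": InternalControl(
        control_id="IC-BCM-01",
        description="Documented and tested ICT business continuity plan",
        owner="Head of Business Continuity",
        external_refs=["DORA Art. 11", "ISO 27001 A.5.29"],  # illustrative refs
    ),
}

def controls_for(external_ref: str) -> list[InternalControl]:
    """Resolve an incoming regulatory requirement to internal controls."""
    return [c for c in registry.values() if external_ref in c.external_refs]

print([c.control_id for c in controls_for("DORA Art. 11")])  # -> ['IC-BCM-01']
```

The design choice is the point: the lookup runs in one direction, from external requirement to internal control, so adding a jurisdiction only ever grows the `external_refs` lists.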
Looking at the broader evolution, regulation has also become a strategic equalizer. Since PSD2, and now with DORA, there’s been a notable uptick in regulatory maturity. Rather than stifling innovation, this actually levels the playing field – favoring those who invest in resilience and structured compliance. It’s no longer viable to operate as a FinTech without demonstrating that you can uphold trust, security, and continuity.
At Riverty, we’ve long seen resilience management not as a checkbox exercise, but as a core competency. In fact, our proactive stance on GRC and resilience is becoming a competitive advantage. As highlighted in Fintech 2040, the future of finance will reward those who can embed trust, agility, and transparency into their operational DNA.
How can FinTechs integrate cybersecurity risk into enterprise risk management frameworks without slowing innovation?
The misconception that risk management and innovation exist in tension is one that modern FinTechs must move beyond. At its core, cybersecurity – when thoughtfully integrated – serves not as a brake but as an enabler of innovation. The key is to design governance structures that are intelligent, adaptive, and resilient in themselves.
The foundation lies in aligning cybersecurity risk management with the broader business objective: enablement. This means integrating security thinking early in the innovation cycle, using standardized interfaces, expectations, and frameworks that don’t obstruct, but rather channel innovation safely. For instance, when risk statements are defined consistently across teams, decisions can be made faster and with greater confidence.
Critically, it starts with the threat model. A well-defined, enterprise-level threat model is the compass that guides risk assessments and controls where they matter most. Yet many companies still operate without a clear articulation of their own threat landscape, leaving their enterprise risk strategies untethered from reality. Without this grounding, risk management becomes either overly cautious or blindly permissive, or a bit of both.
We place a strong emphasis on bridging the traditional silos between GRC, IT Security, Red Teaming, and Operational teams. This collaborative posture ensures cybersecurity risks are not treated as external compliance obligations, but as integral dimensions of product and platform design. A good example: even though Riverty isn’t formally obliged under DORA to run Threat-Led Penetration Testing (TLPT), we’ve chosen to internalize those principles. Our penetration testing program is grounded in a specific threat model that reflects the threats that matter most to our systems, processes, and customer interfaces.
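As a hedged illustration of that grounding – the threat names, assets, and relevance scores below are invented, not Riverty's actual model – a threat model can be as simple as an explicit, ranked artifact that downstream work, such as penetration-test scoping, consumes directly:

```python
# Hypothetical enterprise threat model; entries and scores are invented.
# The point: an explicit, ranked artifact that other activities consume.
THREAT_MODEL = [
    # (threat, targeted asset, relevance 1-5)
    ("payment fraud via account takeover", "customer accounts",  5),
    ("ransomware on core platform",        "collection systems", 4),
    ("third-party API compromise",         "partner interfaces", 3),
    ("insider data exfiltration",          "customer PII",       3),
]

def pentest_scope(top_n: int = 2) -> list[str]:
    """Scope testing around the highest-relevance threats, so the program
    follows the threat model rather than a generic checklist."""
    ranked = sorted(THREAT_MODEL, key=lambda t: t[2], reverse=True)
    return [f"{threat} -> {asset}" for threat, asset, _ in ranked[:top_n]]

print(pentest_scope())
# ['payment fraud via account takeover -> customer accounts',
#  'ransomware on core platform -> collection systems']
```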
Another powerful mechanism is defining a clear, business-aligned risk appetite, supported by a standardized decision-making framework. This enables teams to make informed choices in real time: knowing when risk warrants intervention and when it can be accepted as part of progress. This clarity accelerates development rather than hindering it, because the rules are known in advance and adapted to your strategic posture.
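A minimal sketch of what such a standardized decision rule could look like – the scoring scale and thresholds are invented for illustration, not Riverty's actual appetite. The value is that the rule is written down once, so every team gets the same answer from the same inputs:

```python
# Hypothetical risk-appetite thresholds; scores and cut-offs are invented.
APPETITE = {
    "accept":   4,   # score <= 4: teams may accept the risk and proceed
    "escalate": 9,   # 5..9: proceed with documented risk-owner sign-off
}                    # > 9: intervene before launch

def decide(likelihood: int, impact: int) -> str:
    """Standardized decision rule: same inputs, same answer, every team."""
    score = likelihood * impact          # both rated 1 (low) to 5 (high)
    if score <= APPETITE["accept"]:
        return "accept"
    if score <= APPETITE["escalate"]:
        return "escalate to risk owner"
    return "intervene"

print(decide(likelihood=2, impact=2))    # -> accept
print(decide(likelihood=4, impact=4))    # -> intervene
```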
Ultimately, cybersecurity risk management is not about stopping innovation but about illuminating it. When you can clearly distinguish between real pain points and manageable exposures, you create the freedom to move fast and safely. As Fintech 2040 emphasizes, resilient systems are those that combine automation with intelligence – not just reacting to risks, but predicting and adapting to them.
What’s your take on the growing role of AI governance in FinTech GRC, especially with regulators now paying close attention to model risk and explainability?
AI is rapidly becoming foundational to the FinTech operating model. It is powering everything from credit scoring and personalization to fraud detection and collections. But with this growing reliance comes a parallel responsibility: ensuring that AI systems are not only effective, but explainable, secure, and governed responsibly.
We’ve taken a proactive stance on this challenge by finalizing a dedicated internal standard on AI Security and Governance. This framework introduces a new layer of controls specifically tailored to the risks unique to AI – particularly around data and model components. While infrastructure- and application-level controls remain largely consistent with traditional software security practices, AI introduces distinctive vectors of risk that require additional attention.
Two examples from our upcoming standard illustrate this evolution:
- Corrective feedback mechanisms: “Incorporate human feedback within a model for any corrective actions in case of unforeseen or erroneous model behaviors.” (Sketched after this list.)
- Model isolation: “Each model is deployed in a separate, secure environment to isolate it from other models, preventing cross-contamination and the transfer of malicious knowledge.”
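To make the first of these controls concrete, here is a minimal, hypothetical Python sketch of a human-feedback gate – the threshold, names, and reviewer callback are illustrative, not taken from the standard itself. Low-confidence outputs are held for human review, and any correction is recorded so it can feed back into the model:

```python
# Hypothetical human-in-the-loop gate for the corrective-feedback control.
REVIEW_THRESHOLD = 0.80      # predictions below this confidence need a human

feedback_log: list[dict] = []    # corrections collected as model feedback

def gate(prediction: str, confidence: float, reviewer) -> str:
    """Pass high-confidence outputs through; route the rest to a human
    and record any correction so it can feed back into the model."""
    if confidence >= REVIEW_THRESHOLD:
        return prediction
    corrected = reviewer(prediction)             # human makes the call
    if corrected != prediction:
        feedback_log.append({"model": prediction, "human": corrected})
    return corrected

# Usage: a low-confidence credit decision is overruled by the reviewer.
print(gate("approve_credit", 0.62, reviewer=lambda p: "manual_review"))
print(feedback_log)  # [{'model': 'approve_credit', 'human': 'manual_review'}]
```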
Importantly, we have adopted the EU AI Act as a strict internal baseline. By embedding its principles – such as transparency, risk classification, and oversight – into our own control landscape, we’re not only preparing for regulatory compliance but actively shaping trust-centric AI practices.
AI governance is about creating structured clarity at the intersection of innovation and accountability. Our role in GRC is to act as the voice that ensures risk is visible to leadership as they seize the opportunity. As outlined earlier, good governance doesn’t hinder innovation, it enables sustainable, long-term innovation by ensuring decisions are made with awareness of both upside and downside.
As explored in Fintech 2040, we’re entering an era of autonomous, highly personalized financial ecosystems driven by AI agents. The only way to preserve trust in such an environment is to embed explainability, human-centric oversight, and security into the design of AI systems from the very beginning.
How can GRC leaders in FinTechs build a culture of compliance and risk awareness without becoming the “Department of No”?
We’ve long embraced a mindset that’s now gaining momentum across the industry: GRC should be the department of “know”, not “no.” That phrase has been on our corporate GRC decks since 2019, and it’s more than a catchphrase: it reflects a fundamental belief that GRC functions should enable smart, informed risk-taking, not stifle it.
Building a risk-aware culture starts by aligning GRC with the business strategy, not sitting adjacent to it. That means understanding the product vision, the customer journey, and the market dynamics – and then engaging early and often. By anticipating what’s coming rather than reacting too late, GRC teams can embed trust and resilience into the process, rather than slowing things down at the last mile.
This proactive positioning lets GRC practitioners serve as translators, connecting regulatory or risk concerns with commercial priorities. We enable transparency by facilitating structured, informed decision-making. Instead of blanket denials, we present options, trade-offs, and clarity on what’s needed to move forward responsibly. In that sense, compliance becomes a competitive differentiator, not a drag.
Equally important is role clarity across the Lines of Defense (LoD). We invest in clearly defined RACIs that empower both the 1st and 2nd LoD to act with autonomy and accountability. GRC isn’t about taking decisions away from product or tech teams; it’s about helping them make the right decisions, at the right time, with full visibility of their implications.
This culture-building approach resonates deeply with one of the key trajectories outlined in Fintech 2040: as regulatory expectations become more complex and FinTech ecosystems more interconnected, the winners will be those who turn resilience and compliance into embedded capabilities – not external constraints.
Can automation and AI meaningfully reduce the GRC burden for lean teams, or do they introduce more complexity than they solve?
The current attention around AI is well-earned. Unlike earlier hype waves such as NFTs or generic blockchain solutions, AI is showing real, sustained potential to transform how organizations operate. That said, its value in GRC depends heavily on how it’s applied, and whether organizations are clear-eyed about both its current limitations and long-term promise.
We see AI as a powerful augmenting force – not a replacement for human intelligence, but a different kind of intelligence. It’s not about handing over decision-making to machines, but about enhancing the efficiency and reach of GRC teams. Used thoughtfully, AI can take on repetitive, resource-intensive tasks and free up human capacity for judgment, foresight, and engagement.
However, trust in AI starts with reliability and reproducibility. In the context of compliance, there is virtually zero tolerance for ambiguity or hallucination. That’s why AI must be rigorously tested and tightly scoped to clearly defined use cases before it becomes a reliable asset in risk and compliance environments.
We’re applying AI in targeted support areas within GRC, always with a human in the loop. Examples include:
- Automating updates to process documentation, improving consistency while reducing manual effort.
- Smart ticket routing, directing issues to the correct functions faster (see the sketch after this list).
- Grammar and clarity checks for internal documentation and policies.
- Early-stage brainstorming on new regulatory developments.
- Multi-dimensional data analysis, helping identify trends or correlations across complex data sets.
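To illustrate the routing item above with a deliberately simple Python sketch – keyword scoring stands in for whatever classifier is actually used, and the team names are invented – note the fallback: anything the model cannot place confidently goes to a person, keeping the human in the loop:

```python
# Deliberately simple stand-in for a smart ticket router: keyword scores
# instead of a trained classifier; team names are invented for illustration.
ROUTES = {
    "privacy":    {"gdpr", "consent", "data subject"},
    "resilience": {"outage", "backup", "continuity", "dora"},
    "security":   {"phishing", "vulnerability", "pentest"},
}

def route(ticket: str) -> str:
    """Propose a team; unmatched tickets go to a human triager."""
    text = ticket.lower()
    scores = {
        team: sum(kw in text for kw in kws) for team, kws in ROUTES.items()
    }
    team, hits = max(scores.items(), key=lambda kv: kv[1])
    return team if hits > 0 else "human triage"   # human in the loop

print(route("Customer consent form missing GDPR notice"))  # -> privacy
print(route("Question about office plants"))               # -> human triage
```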
These use cases represent exactly what AI should do for GRC: reduce burden, increase clarity, and unlock insight. But they also underscore the importance of human oversight, both to validate outcomes and to interpret them responsibly.
As outlined in Fintech 2040, the FinTech landscape is headed toward highly automated, data-driven ecosystems. But resilience in that environment depends not just on automation itself – it hinges on how intelligently that automation is governed.
In short, for lean as well as for well-staffed GRC teams, AI doesn’t have to be a complexity driver. When grounded in real needs and used transparently, it becomes a strategic enabler, one that helps teams move faster, not just more carefully.