Leading at the Edge - AI Governance and Risk Management for Financial Services
Practical Strategies for Board Leaders Navigating AI Innovation, Risk, and Governance in Finance
Introduction
As part of our broader educational series on sustainable and ethical use of LLMs, on April 17, we hosted a panel discussion titled “LLMs at the Edge: Challenges and Promises for Real-Time Intelligence.”
The session brought together experts to explore the evolving role of Edge AI and its implications across latency, privacy, ethics, and sustainability across sectors. While the technological promise is compelling, the conversation quickly turned to the tensions that Edge AI introduces - between innovation and risk, speed and control, decentralisation and oversight. These tensions are especially pressing in regulated sectors like finance and insurance, where leadership must navigate not only opportunity but also complexity and accountability.
To help organisations in these sectors translate Edge and GenAI advancements into responsible, scalable implementations, this paper examines four critical leadership questions that emerged from our panel and subsequent discussions:
First, what should boards be asking their technology teams to ensure that LLM adoption drives innovation without compromising on privacy, security, or regulatory compliance?
Second, as LLMs move beyond cloud infrastructure and onto edge devices, how can companies effectively manage risks related to local data processing, model integrity, and real-time decision-making?
Third, in an environment where models are evolving rapidly, how can leadership proactively address AI bias, misuse, and loss of oversight, especially when systems operate autonomously or at the edge?
Finally, we explore practical steps boards can take to align AI deployments with both strategic business goals and legal obligations, ensuring a governance-first approach that scales with the technology.
To explore these questions in depth, we’ve teamed up with one of our panellists, Jayeeta Putatunda, Director of the AI Centre of Excellence at Fitch Group. A recognised GenAI leader, Jayeeta brings deep expertise in scalable NLP, Edge AI deployment, and responsible innovation. She’s a recipient of the AI100 Award and was named one of the Top 25 Women in FinTech AI. As a champion for inclusive leadership in tech, she also serves as the NYC Chapter Lead for Women in AI and frequently presents at top-tier AI conferences, including ICML and ODSC.
Together, in this article, we will reflect on the growing need for board-level fluency in AI strategy, particularly as technologies like Edge LLMs move from experimentation to enterprise deployment. In the pages that follow, we offer insights and practical guidance for leaders navigating this rapidly evolving landscape, grounded in both technical expertise and real-world industry challenges.
Rethinking Board Oversight in the Age of GenAI
Gen AI has transformed our industry, similar to how the internet opened the world to new possibilities. Gen AI, including large language models, has been successful in raising the bar for human productivity and innovation potential. So, this time the tech hype is real and as expected, even the highly governed companies and their leaders are now trying to make sure they are adopting AI and improving their workflows and offerings and not trailing in the industry! But this hyper-scalable growth also introduces novel enterprise risks. For example, Deloitte notes that “Generative AI…presents new types of risk (e.g., ‘hallucinated’ outputs that are factually false) and magnifies existing risks due to scale”. Banks and insurers have begun applying AI in credit risk, underwriting, compliance, and customer service, yet face concerns about inaccuracy, bias, data privacy, and cyber threats, as noted in this EY report. Regulators worldwide are intensifying scrutiny: EU regulators cite bias in lending and chatbots’ errors, California has passed AI transparency laws, and insurance regulators emphasize fair, accountable AI use, as stated in this Pillsbury Law report. For boards, alongside getting up to speed on Generative AI and what it means for their risk management responsibilities, establishing a strong LLM governance will go hand in hand.
For boards, this means that alongside getting up to speed on generative AI and its implications for risk management, they must also help establish strong LLM governance frameworks. Based on feedback from tech teams, there are three major areas, described in Fig. 1, where boards should focus their oversight:
Fig. 1. Key Board Oversight Areas for LLM Governance
In addition to board members and senior leaders receiving AI education to support effective oversight, ensuring cross-functional AI governance structure that includes legal, compliance, risk, data, and ethics would also be key to a thorough insight and ensuring that the board is receiving regular updates on AI projects, incidents, and reviews risk in an ongoing basis.
Risk and Reward at the Edge
LLMs are now moving beyond cloud servers and into edge devices. It’s everywhere you can think of - from mobile phones and ATMs to IoT endpoints. To leverage these device-native, powerful new possibilities, financial and insurance companies are exploring and realising the power of on-device LLMs. They can enable faster responses, better personalisation, offline capability, and reduced cloud costs.
Google recently launched AI Edge Gallery, an experimental Android app that runs Hugging Face AI models directly on phones without internet connectivity. This demonstrates the mainstream push toward edge AI deployment beyond enterprise use cases.
Enables offline AI tasks like image generation, question answering, and code editing
Addresses privacy concerns by keeping sensitive data on-device rather than in cloud servers
Open source availability (Apache 2.0 license) supports broader adoption
Performance depends on device hardware and model size
But there's a catch: deploying LLMs on the edge introduces new, complex risks — especially around data privacy, security, model integrity, and compliance. These risks are magnified in sectors like finance and insurance, where data is sensitive, regulation is tight, and user trust is everything.
1. Data Protection First: Minimise, Encrypt, and Stay Local
Edge devices often handle highly sensitive customer data, from transaction histories and personal identifiers to biometric signals, making the job of data protection critical.
To reduce risks and maintain customer trust, organisations should minimise the amount of data collected, ensuring the LLM only receives the specific information it needs for a given task.
Encrypting all data, both at rest and in transit, is essential - using device-level security features such as Apple’s Secure Enclave or Android Keystore can provide additional hardware-level protection.
Wherever possible, processing data locally on the device rather than routing it through the cloud helps reduce exposure to breaches and limits the attack surface.
These measures not only lower the likelihood of malicious access but also help companies comply with privacy regulations such as GDPR and GLBA.
2. Model Lockdown
Edge deployments, while offering significant performance and efficiency gains, also introduce heightened risks of reverse engineering and model theft, putting valuable intellectual property and security at risk.
To safeguard proprietary models, organisations should prioritise using quantised or obfuscated versions that are intentionally designed to be harder to reverse engineer, rather than deploying plain, vanilla open-source implementations that can be more easily copied or tampered with.
Additionally, digital watermarking techniques can be employed to embed invisible identifiers into models, enabling the organisation to track, trace, and verify model transactions or detect unauthorised use.
It’s also critical to regularly run models in controlled test environments to monitor their behaviour over time, detect any unexpected changes, and ensure they continue to operate as intended under different conditions.
3. Design with Hybrid Intelligence
Not all tasks are suited to be handled directly on the edge, especially those involving high-risk, sensitive decisions, or complex reasoning.
For routine or lightweight tasks, organisations can deploy small, efficient models such as DistilBERT or Gemma locally on the device to provide fast, low-latency responses without overburdening resources.
However, for high-stakes decisions like fraud detection, credit underwriting, or regulatory compliance checks, it’s crucial to offload processing to the cloud or route the task for human review, ensuring greater oversight and reducing the risk of errors or unintended outcomes.
Additionally, systems should be designed with robust fallback mechanisms, so if the edge model fails, encounters uncertainty, or flags an anomaly, the task is automatically escalated to a secure endpoint, whether that’s a more powerful cloud system or an expert human in the loop, to maintain reliability, accuracy, and trust.
4. Monitor Everything
You can’t fix what you don’t see, which is why having visibility into edge AI behaviour is absolutely essential.
Organisations should enable on-device logging of LLM interactions to track how models are performing in the real world, while carefully balancing this with user privacy protections to avoid unnecessary data exposure. Alongside local logging, centralised telemetry systems can provide real-time monitoring across deployments, helping teams detect performance issues, identify anomalies, and flag potential misuse or unexpected behaviours.
To complete the safety net, it’s critical to build in remote kill switches that allow security teams to quickly shut down or disable models if they start behaving unpredictably or pose a risk, ensuring that organisations maintain control even at the edge.
5. Bake in Compliance and Ethics
Deploying AI at the edge doesn’t sidestep regulatory obligations — it raises the bar.
Conduct Data Protection Impact Assessments (DPIAs) before rollout.
Get explicit consent when LLMs process personal data on devices.
Be transparent: let users know when they’re interacting with AI.
In highly regulated industries, it’s critical to ensure every edge deployment passes legal and ethical muster.
Vet Vendors and Secure the Supply Chain
Many edge AI solutions depend on third-party models, tools, or specialised chips, which is why vendor and supply chain governance is critical to maintaining security and trust. Organisations should ensure they only use pre-trained models from reputable, trusted sources, carefully verifying licensing terms and understanding the origins and composition of the training data to avoid hidden risks.
On the hardware side, it’s essential that vendors support robust security measures, including secure boot processes, firmware validation, and regular patching to address emerging threats.
Additionally, teams must actively monitor software dependencies, especially open-source components, to stay ahead of known vulnerabilities that could be exploited in the field.
While deploying LLMs on the edge offers exciting opportunities, sectors like finance and insurance can’t afford a “move fast and break things” approach; instead, they must adopt a “move fast and protect everything” mindset. Edge LLMs should be viewed as co-pilots, not captains — powerful assistants that support human decision-making but still operate within tightly governed, secure frameworks.
Addressing AI Bias, Misuse, and Operational Fragility
It’s hard, hard to keep up with the hype, but also make sure companies are adopting the right technologies, and the effort to impact trade-off can at least translate to real business outcomes.
In a recent article, Salesforce called out LLMs as “jagged intelligence”, which means that AI is performing brilliantly in isolated cases but collapses in complex, real-world business environments.
This characterisation highlights the importance of prioritising reliability over novelty. Rather than chasing cutting-edge capabilities for their own sake, companies should focus on developing AI systems that operate consistently and robustly across diverse business scenarios. That requires rigorous testing, validation, and an understanding of how models behave under real operational pressures.
Equally important is establishing a continuous cycle of robust evaluation, not just periodic audits. Multi-stakeholder testing environments, where teams from risk, legal, engineering, and operations assess LLM behaviour across edge cases, can help surface hidden inconsistencies, especially when models are deployed in decentralised or customer-facing settings.
Ultimately, managing risks like bias and misuse is not just as much a technical problem as a function of organisational culture. Within this complex AI climate, it is imperative for leaders to foster a culture of accountability, where ethical considerations are embedded into every stage of AI development and deployment. This means training cross-functional teams in responsible AI practices, establishing clear accountability structures, and ensuring that oversight is integrated into both strategic planning and day-to-day operations.
Conclusion
In this article, we have discussed in detail how to get started on building a reliable framework for AI to solve business use cases, particularly for the financial sector, but also beyond. Our approach, however, should be problem first. AI is a tool to help us make better products, and not the magic solution to all problems.
To ensure that any new AI technologies adopted are aligned with business and legal goals, the three key areas to reiterate:
Robust AI Governance Framework: Develop clear policies and guidelines that define the ethical use of AI, assign responsibilities, and ensure compliance with relevant regulations. This framework should be regularly reviewed and updated to adapt to evolving technologies and legal landscapes.
Conduct Regular AI Risk Assessments: Implement periodic evaluations of AI systems to identify and mitigate potential risks, such as bias, data privacy concerns, and operational failures. These assessments should inform decision-making and policy adjustments.
Establish a Cross-Functional AI Oversight Committee: Form a dedicated committee comprising members from various departments, such as legal, compliance, IT, and ethics, to facilitate holistic oversight, coordinate AI initiatives, and monitor ethical implications.
Staying informed on regulatory developments and upskilling with the new AI best practices and proactively implementing these strategies would be key to maintaining compliance with emerging legal standards and minimising associated risks.