In the financial sector, the conversation about Artificial Intelligence (AI) is moving beyond “proofs of concept”: results are now measured in terms of operational efficiency, revenue, and risk management. This is the key takeaway from the sixth edition of NVIDIA’s annual “State of AI in Financial Services” report, based on a survey of more than 800 professionals in the industry worldwide.
The overarching message is clear: banks, insurers, asset managers, and fintechs are scaling up use cases they are already familiar with (fraud, risk, customer service, back-office) and simultaneously paving the way for a new wave: generative AI and AI agents capable of performing tasks more autonomously.
The numbers that explain the shift (and why it matters)
The survey reflects a predominantly positive outlook on economic returns:
- 89% of respondents say AI is helping to increase revenues and/or reduce costs.
- 65% confirm that their organization already actively uses AI, up from 45% the previous year.
- 61% report using or evaluating generative AI, showing a significant year-over-year increase.
- Nearly 100% indicate that AI budgets will increase or stay the same.
In an environment where capital costs remain high and regulatory scrutiny remains tight, these percentages send a clear signal: AI is shifting from an “innovation project” to a core investment area.
Open source: flexibility, control, and less dependency
A recurring conclusion in the report is the growing importance of open source (models and software) as a strategic component. In the survey, more than 8 out of 10 participants consider it relevant to their AI strategy.
The reasons are pragmatic rather than ideological:
- Avoiding “vendor lock-in” with technology that evolves quarterly.
- Fine-tuning models to align with internal data and processes.
- Cost optimization when moving from laboratory to production deployment.
- Better auditability of systems (crucial in financial environments).
Various voices in the report agree: competitive advantage is not just about using AI, but about training and customizing capabilities based on an organization’s internal data and knowledge.
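To make “customizing on internal data” concrete, here is a minimal sketch of parameter-efficient fine-tuning of an open-weights model with Hugging Face transformers and peft. It is an illustration, not anything prescribed by the report: the model name and the internal_docs.jsonl file are placeholders for whatever open model and internal corpus an institution actually uses.

```python
# Minimal sketch, not from the report: LoRA fine-tuning of an open-weights model
# on an institution's own corpus. Model name and data file are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE = "mistralai/Mistral-7B-v0.1"          # any open-weights model would do
tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token                # many causal LMs ship without a pad token

model = AutoModelForCausalLM.from_pretrained(BASE)
# Train small adapter matrices instead of all weights: cheaper to run,
# and the vetted base model stays untouched.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "internal_docs.jsonl" stands in for internal policies, procedures, product sheets, etc.
data = load_dataset("json", data_files="internal_docs.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-out", per_device_train_batch_size=1,
                           num_train_epochs=1, logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

Adapter-based tuning like this keeps the base model frozen, which is one reason the auditability argument above pairs naturally with open weights.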
The next phase: AI agents (and the challenge of governing them)
The report positions AI agents as a technology entering the adoption phase. In the survey, 42% of participants are using or evaluating agentic AI; within this group, 21% have already deployed agents.
In finance, the appeal is straightforward: automate tasks that currently require hours and human coordination, such as:
- Internal operations (reconciliations, incidents, reporting).
- Document processing (KYC, policies, claims, contracts).
- Customer support, with productivity gains for human teams.
- Research and analysis for investment teams (with ongoing oversight).
The challenge is equally clear: an agent does not just “respond”; it acts. That calls for stronger controls over security, traceability, permissions, and continuous evaluation (including testing for bias and errors).
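As a purely hypothetical sketch of what one such control can look like (not a pattern taken from the report), the snippet below wraps an agent’s tool calls so that every action is permission-checked and written to an append-only audit trail.

```python
# Illustrative sketch: permission checks plus an audit trail around agent tool calls.
import json
import time
from typing import Any, Callable, Dict

AUDIT_LOG = "agent_audit.jsonl"   # hypothetical sink; real systems feed a telemetry pipeline

def governed_tool(name: str, allowed_roles: set[str], fn: Callable[..., Any]):
    """Return a wrapped tool that checks permissions and records every call."""
    def wrapper(agent_role: str, **kwargs) -> Any:
        allowed = agent_role in allowed_roles
        record: Dict[str, Any] = {
            "ts": time.time(), "tool": name, "role": agent_role,
            "args": kwargs, "allowed": allowed,
        }
        with open(AUDIT_LOG, "a") as f:       # append-only trace of every attempted action
            f.write(json.dumps(record) + "\n")
        if not allowed:
            raise PermissionError(f"{agent_role} may not call {name}")
        return fn(**kwargs)
    return wrapper

# Example: only a 'payments-agent' role may trigger reconciliations.
reconcile = governed_tool("reconcile_account", {"payments-agent"},
                          lambda account_id: f"reconciled {account_id}")
print(reconcile("payments-agent", account_id="ACC-001"))
```

The point is the shape of the control rather than the specific code: every action an agent takes is authorized first and leaves a trace that risk and compliance teams can review afterwards.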
Where the game is played: hybrid infrastructure, cloud, and on-premises
The survey also captures investment in infrastructure to support AI workloads, with interest in on-premises and cloud options as well as hybrid schemes. In practice, the debate comes down to a pragmatic question: which workloads should stay close to the data (for latency, cost, or compliance reasons), and which can scale elastically in the cloud?
In banking and insurance, where legacy systems coexist with data sovereignty requirements and demand spikes, the winning strategy often resembles:
- AI near the core for critical processes and sensitive data.
- Cloud for prototyping, short-term training, or temporary scaling.
- Observability and governance as cross-cutting layers to ensure sustainable deployment.
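As a toy illustration of that split (purely hypothetical, not from the report), the sketch below routes workloads to on-premises or cloud infrastructure based on data sensitivity, latency budget, and burstiness.

```python
# Toy sketch: an explicit, auditable placement policy for AI workloads.
# Field names and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    handles_sensitive_data: bool   # e.g. customer PII, transaction records
    max_latency_ms: int            # latency budget of the consuming process
    bursty: bool                   # short-lived spikes (training runs, backtests)

def place(w: Workload) -> str:
    """Return 'on-prem' or 'cloud' according to a simple rule set."""
    if w.handles_sensitive_data or w.max_latency_ms < 50:
        return "on-prem"           # data sovereignty / core-adjacent processing
    if w.bursty:
        return "cloud"             # elastic capacity for temporary peaks
    return "cloud"                 # default: cheaper to scale elastically

for w in [
    Workload("real-time fraud scoring", True, 20, False),
    Workload("quarterly model retraining", False, 5_000, True),
    Workload("marketing content drafts", False, 2_000, False),
]:
    print(f"{w.name:30s} -> {place(w)}")
```

Real placement decisions weigh more dimensions (cost, residency rules, existing contracts), but writing the policy down explicitly is what makes the hybrid setup governable.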
What executives should read between the lines
Beyond the percentages, the report highlights three strategic business implications:
- AI is becoming a “driver” of measurable efficiency, not just innovation.
- Open source enters the landscape for control and cost reasons, but demands higher operational maturity (security, MLOps, compliance).
- Agents push for process redesign, as automation is no longer just “assistance”: it can be execution.
For the industry, the message is both uncomfortable and clarifying: the cost of not adopting AI is no longer just losing productivity, but falling behind in product speed, fraud control, and customer experience.
Frequently Asked Questions
What does “agentic AI” mean in the financial industry?
It refers to AI systems capable of planning and executing tasks with a certain level of autonomy (for example, completing an operational flow), rather than just answering questions or generating text.
Why is open source gaining ground in AI for banking and insurance?
Because it offers flexibility, reduces dependency on a single provider, and allows for model adaptation using internal data; however, it requires more discipline in security, MLOps, and data governance.
Which use cases provide the fastest ROI?
The report highlights areas such as fraud detection, risk management, document processing, and customer service, where improvements are often measurable in time savings, error reduction, and cost efficiency.
What are the most critical risks when scaling AI?
Data governance, compliance, biases, traceability, and security. Particularly with agents, key practices include auditing actions, permissions, and outcomes through telemetry and ongoing controls.
via: blogs.nvidia

