    Solving the challenges of Enterprise-Wide Adoption of Generative AI Models


    Published by Jessica Weisman-Pitts

    Posted on February 21, 2024


    By Neil Serebryany, Founder and CEO, CalypsoAI

    Deploying generative artificial intelligence and large language models across the enterprise presents opportunities for both increased productivity and innovation, but also creates challenges for managing organizational risk.

    Where We’ve Been

    Approximately 10 years ago, artificial intelligence (AI)-dependent tools became an established feature in the business landscape. Financial organizations were among the earliest and most enthusiastic adopters, leading the development and deployment of innovative, productivity- and revenue-driving AI solutions for issues that have long plagued the field, such as anticipating market trends and fluctuations, ensuring compliance in a dynamic regulatory environment, improving client support services, and deterring fraud. More recent challenges include supporting multi-channel customer options and collecting and analyzing data on consumer behaviors and preferences.

    Where We Are

    The latest additions to the AI toolbox are generative AI (GenAI) and large language models (LLMs), which are being deployed throughout the enterprise for tasks as diverse as assessing risk exposure, developing investment strategies, improving competitive advantage, and generating and executing narrowly targeted marketing campaigns. Across the banking, insurance, and financial services industries, these models are increasing productivity, driving innovation, ensuring compliance, and reducing fraudulent activity. However, these large productivity gains are balanced by operational and security challenges.

    Organizations adopting these models must acknowledge and prepare for the new risk layer that accompanies–and can disrupt–successful model adoption and deployment. The list below identifies top considerations that accompany LLM deployments and ideas for addressing them.

    Cost Controls

    The cost of model deployment can vary dramatically and includes variables that must be carefully weighed before a model is chosen. At its most basic, the spend is a cost-versus-performance issue. LLM providers' fees are typically a cost-per-thousand-token calculus, with a token being the equivalent of around three-quarters of a word. The amount of information the LLM can process in a query (the model's "context window") and the model's performance characteristics also affect pricing. Models with longer context windows can provide responses of greater depth and nuance, but increase the compute spend.
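    As a rough illustration of this cost-versus-performance trade-off, the sketch below estimates a monthly bill from per-thousand-token pricing. The rates, query volumes, and token counts are hypothetical placeholders, not any provider's actual pricing.

```python
# Rough monthly cost estimate for an LLM deployment, assuming simple
# per-1,000-token pricing. All figures below are hypothetical.

def monthly_llm_cost(queries_per_day: int,
                     avg_input_tokens: int,
                     avg_output_tokens: int,
                     price_per_1k_input: float,
                     price_per_1k_output: float,
                     days: int = 30) -> float:
    """Return the estimated monthly spend in dollars."""
    per_query = (avg_input_tokens / 1000) * price_per_1k_input \
              + (avg_output_tokens / 1000) * price_per_1k_output
    return per_query * queries_per_day * days

# Example: a long-context model (larger prompts, higher rates) vs. a cheaper,
# short-context one serving the same daily query volume.
print(monthly_llm_cost(5_000, 6_000, 800, 0.010, 0.030))  # long context window
print(monthly_llm_cost(5_000, 1_500, 800, 0.002, 0.006))  # short context window
```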

    Observability and Visibility

    The typical enterprise deploys multiple models operating in parallel, including multimodal GenAI models that use voice and images. This leaves security teams with a fragmented view of the overall system, when what they need is a single tool that provides full visibility into and across all models in use.

    The solution is to deploy an automated tool spanning all models in use, providing observability at both a per-model and a system-wide level. When able to audit and secure models, administrators can leverage insights about usage and user behavior, enhance and streamline decision-making processes, overcome inherent limitations, and provide stability, reliability, efficiency, and added security across the organization.
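    One way to approximate that single pane of glass is a thin gateway that every model call passes through, so usage is recorded per model and per user in one place. The sketch below is a minimal illustration under that assumption; the model backends and log destination are placeholders, not any specific product's API.

```python
# Minimal observability gateway: every model call is routed through one
# wrapper that records per-model, per-user usage. The backends and the
# in-memory audit log are hypothetical stand-ins for real infrastructure.
import time
from typing import Callable, Dict, List

class ModelGateway:
    def __init__(self, backends: Dict[str, Callable[[str], str]]):
        self.backends = backends          # model name -> callable client
        self.audit_log: List[dict] = []   # in production: a SIEM or database

    def query(self, model: str, user: str, prompt: str) -> str:
        """Send a prompt to the named model and record the call."""
        start = time.time()
        response = self.backends[model](prompt)
        self.audit_log.append({
            "model": model,
            "user": user,
            "prompt_chars": len(prompt),
            "latency_s": round(time.time() - start, 3),
            "ts": start,
        })
        return response

    def usage_by_model(self) -> Dict[str, int]:
        """System-wide view: call counts per model."""
        counts: Dict[str, int] = {}
        for entry in self.audit_log:
            counts[entry["model"]] = counts.get(entry["model"], 0) + 1
        return counts
```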

    Data Security

    Both fine-tuned LLMs and retrieval-augmented generation (RAG) models access large amounts of proprietary data. They must be protected from accidental, as well as deliberate, data leakage. The most common data leak is an unintended exposure via a user query to the model, such as an internal memo containing detailed information regarding a merger or acquisition under consideration being sent to the LLM to improve the structure and make the verbiage more professional. The security issues here are threefold:

    • The highly confidential information in the prompt is sent outside the organization, which is an unauthorized release.
    • The information becomes the property of a third-party—the model provider—that should not have access to the information and that may or may not have strong security protocols to prevent a data breach.
    • The information, now the property of the third party, could be incorporated into the dataset used to train the next iteration of the model, meaning that the data could be made available to anyone—such as a competitor—querying that model with a prompt crafted to find such data.

    The solution is also threefold:

    • Employees and other users must be educated as to the risks posed by model use and trained to use the models properly.
    • AI security policies must describe appropriate and inappropriate use of the models and align to organizational values and industry regulations.
    • Model usage by individual users must be traceable and auditable (a minimal sketch of this point follows below).
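    As a minimal sketch of the last two points, the snippet below screens outgoing prompts against a small set of policy patterns and writes an auditable record of every attempt. The patterns, policy terms, and log format are illustrative assumptions, not a complete data loss prevention tool.

```python
# Illustrative pre-send prompt screen with an audit trail. The patterns and
# policy terms below are assumptions for demonstration only.
import re
from datetime import datetime, timezone

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "m&a_keyword": re.compile(r"\b(merger|acquisition|term sheet)\b", re.I),
    "api_key":     re.compile(r"\b[A-Za-z0-9]{32,}\b"),
}

def screen_prompt(user: str, prompt: str, audit_log: list) -> bool:
    """Return True if the prompt may be sent to the external model."""
    hits = [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(prompt)]
    audit_log.append({
        "user": user,
        "time": datetime.now(timezone.utc).isoformat(),
        "blocked": bool(hits),
        "matched_policies": hits,
    })
    return not hits

audit_log: list = []
ok = screen_prompt("analyst-42",
                   "Please polish this memo on the planned acquisition...",
                   audit_log)
print(ok, audit_log[-1]["matched_policies"])  # False ['m&a_keyword']
```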

    AI Security

    “AI security” is used more and more frequently, but often without explanation. The term refers to the strategic implementation of robust measures and policies to protect an organization’s AI systems, data, and operations from unauthorized access, tampering, malicious attacks, and other digital threats. It goes well beyond traditional cybersecurity because every AI-driven or AI-dependent component linked to an organization’s digital infrastructure adds to the sprawl of pathways into the system.

    While many technical solutions exist to address technological vulnerabilities, an organization's most commonly exploited vulnerability is the user who, as mentioned above, inadvertently includes sensitive information in a prompt or acts on information received in a response, unaware that it is malware, a hallucination, a phishing campaign, or a social engineering attempt. Another unfortunately common threat vector is the insider who deliberately tries to outwit security features via prompt injection or a "jailbreak," without realizing they could put the organization at risk.

    The solution is to apply strong filters on outgoing and incoming channels to identify content that is suspicious, malicious, or otherwise misaligned with organizational policies and industry standards.
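    A hedged illustration of that two-way filtering, assuming simple keyword and URL checks stand in for whatever scanners an organization actually deploys:

```python
# Two-way content filter: outgoing prompts are checked for jailbreak
# phrasing, incoming responses for suspicious links. Rules are illustrative
# placeholders, not a production policy.
import re

JAILBREAK_PHRASES = ("ignore previous instructions", "pretend you have no rules")
SUSPICIOUS_URL = re.compile(r"https?://\S+\.(zip|exe|scr)\b", re.I)

def filter_outgoing(prompt: str) -> bool:
    """Allow the prompt only if no known jailbreak phrasing is present."""
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in JAILBREAK_PHRASES)

def filter_incoming(response: str) -> bool:
    """Allow the response only if it contains no suspicious download links."""
    return SUSPICIOUS_URL.search(response) is None
```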

    Where We’re Going

    The challenges of deploying AI systems, specifically GenAI and LLM systems, are growing in number and sophistication in step with the models themselves. The risks presented by poor or incomplete adoption and deployment plans are also expanding in scope, scale, and nuance. This is why identifying, evaluating, and managing every potential risk is vital for maintaining the models' integrity, security, and reliability, as well as the organization's reputation and competitive advantage.

    While a strong AI security strategy and deployment plan, including employee education and training about the role they play in mitigating risk, is very important, incorporating the best tools for the situation is also critical to ensuring a safe, secure adoption and rollout. A "weightless" trust layer built into the security infrastructure, which enables full observability into and across all models and detailed insights about their use, is the ideal. When system and security administrators can see who is doing what, how often, and with which models, they are afforded both wide and deep user and system insights that support a strong, stable, transparent deployment posture.

    About the Author:

    Neil Serebryany is the founder and Chief Executive Officer of CalypsoAI. Neil has led industry-defining innovations throughout his career. Before founding CalypsoAI, Neil was one of the world's youngest venture capital investors at Jump Investors. Neil has started and successfully managed several previous ventures and conducted reinforcement learning research at the University of Southern California. Neil has been awarded multiple patents in adversarial machine learning.
