
    Open-source AI models vulnerable to criminal misuse, researchers warn

    Published by Global Banking and Finance Review

    Posted on January 29, 2026

    Tags: security, Artificial Intelligence, cybersecurity, technology, research

    Quick Summary

    Researchers warn that open-source AI models are vulnerable to criminal misuse, highlighting risks such as hacking and disinformation campaigns.

    Table of Contents

    • Security Risks of Open-Source AI Models
    • Potential Criminal Activities
    • Geographic Distribution of AI Models
    • Responsibilities of AI Developers

    Researchers Warn of Criminal Risks from Open-Source AI Models

    By AJ Vicens

    Security Risks of Open-Source AI Models

    Jan 29 (Reuters) - Hackers and other criminals can easily commandeer computers operating open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.

    Hackers could target the computers running the LLMs and direct them to carry out spam operations, phishing content creation or disinformation campaigns, evading platform security protocols, the researchers said.

    Potential Criminal Activities

    The research, carried out jointly by cybersecurity companies SentinelOne and Censys over the course of 293 days and shared exclusively with Reuters, offers a new window into the scale of potentially illicit use cases for thousands of open-source LLM deployments. These include hacking, hate speech and harassment, violent or gore content, personal data theft, scams or fraud, and in some cases child sexual abuse material, the researchers said.

    While thousands of open-source LLM variants exist, a significant portion of the LLMs on internet-accessible hosts are variants of Meta's Llama, Google DeepMind's Gemma, and others, according to the researchers. Some of the open-source models include guardrails, but the researchers identified hundreds of instances where guardrails had been explicitly removed.

    AI industry conversations about security controls are "ignoring this kind of surplus capacity that is clearly being utilized for all kinds of different stuff, some of it legitimate, some obviously criminal," said Juan Andres Guerrero-Saade, executive director for intelligence and security research at SentinelOne. Guerrero-Saade likened the situation to an "iceberg" that is not being properly accounted for across the industry and open-source community. 

    The research analyzed publicly accessible deployments of open-source LLMs served through Ollama, a tool that allows people and organizations to run their own versions of various large language models.
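    Ollama serves an HTTP API, by default on port 11434, which is what makes unsecured deployments discoverable and enumerable from the internet. The following is a minimal sketch of how an exposed instance can be queried for its installed models; the host address is a hypothetical placeholder, not one from the research, and the endpoint names follow Ollama's public API documentation.

        # Minimal sketch: list the models an exposed Ollama host reports.
        # The address is a documentation placeholder (TEST-NET-2), not a
        # real host from the SentinelOne/Censys research.
        import requests

        HOST = "http://198.51.100.7:11434"  # hypothetical internet-facing host

        def list_models(host: str) -> list[str]:
            """Return model names reported by Ollama's /api/tags endpoint."""
            resp = requests.get(f"{host}/api/tags", timeout=5)
            resp.raise_for_status()
            return [m["name"] for m in resp.json().get("models", [])]

        if __name__ == "__main__":
            for name in list_models(HOST):
                print(name)  # e.g. "llama3:latest", "gemma:7b"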

    The researchers were able to see system prompts, which are the instructions that dictate how the model behaves, in roughly a quarter of the LLMs they observed. Of those, they determined that 7.5% could potentially enable harmful activity. 
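    The article does not say how the researchers collected the system prompts, but as a hedged illustration of why they can be visible at all: Ollama's /api/show endpoint returns a model's Modelfile, and any SYSTEM directive in it is the model's system prompt. The sketch below reuses the hypothetical host from the previous example; its line-based parse is an approximation, since Modelfiles can also declare multi-line SYSTEM blocks.

        # Sketch: recover a model's system prompt from an exposed Ollama host.
        # /api/show and the "modelfile" response field follow Ollama's public
        # API docs; the host and model name are hypothetical placeholders.
        import requests

        HOST = "http://198.51.100.7:11434"  # hypothetical internet-facing host

        def system_prompt(host: str, model: str) -> str | None:
            """Return the SYSTEM directive from a model's Modelfile, if any."""
            resp = requests.post(f"{host}/api/show", json={"model": model}, timeout=5)
            resp.raise_for_status()
            for line in resp.json().get("modelfile", "").splitlines():
                if line.upper().startswith("SYSTEM "):
                    return line[len("SYSTEM "):].strip().strip('"')
            return None  # no explicit system prompt in the Modelfile

        print(system_prompt(HOST, "llama3:latest"))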

    Geographic Distribution of AI Models

    Roughly 30% of the hosts observed by the researchers are operating out of China, and about 20% in the U.S.

    Responsibilities of AI Developers

    Rachel Adams, the CEO and founder of the Global Center on AI Governance, said in an email that once open models are released, responsibility for what happens next becomes shared across the ecosystem, including the originating labs.

    “Labs are not responsible for every downstream misuse (which are hard to anticipate), but they retain an important duty of care to anticipate foreseeable harms, document risks, and provide mitigation tooling and guidance, particularly given uneven global enforcement capacity,” Adams said. 

    A spokesperson for Meta declined to respond to questions about developers' responsibilities for addressing concerns around downstream abuse of open-source models and how such concerns might be reported, but pointed to the company's Llama Protection tools for Llama developers and its Meta Llama Responsible Use Guide.

    Microsoft AI Red Team Lead Ram Shankar Siva Kumar said in an email that Microsoft believes open-source models "play an important role" in a variety of areas, but, "at the same time, we are clear-eyed that open models, like all transformative technologies, can be misused by adversaries if released without appropriate safeguards."

    Microsoft performs pre-release evaluations, including processes to assess "risks for internet-exposed, self-hosted, and tool-calling scenarios, where misuse can be high," he said. The company also monitors for emerging threats and misuse patterns. "Ultimately, responsible open innovation requires shared commitment across creators, deployers, researchers, and security teams."

    Ollama did not respond to a request for comment. Alphabet's Google and Anthropic did not respond to questions.

    (Reporting by AJ Vicens in Detroit; Editing by Matthew Lewis)

    Key Takeaways

    • Open-source AI models are vulnerable to criminal misuse.
    • Hackers can exploit LLMs for spam, phishing, and disinformation.
    • Thousands of LLMs lack adequate security guardrails.
    • A significant portion of LLMs are variants of major AI models.
    • AI developers share responsibility for mitigating misuse.

    Frequently Asked Questions about Open-source AI models vulnerable to criminal misuse, researchers warn

    1. What is Artificial Intelligence?

    Artificial Intelligence (AI) refers to the simulation of human intelligence in machines that are programmed to think and learn like humans. It encompasses various technologies, including machine learning and natural language processing.

    2. What is cybersecurity?

    Cybersecurity involves protecting computer systems, networks, and data from theft, damage, or unauthorized access. It includes measures to safeguard against cyber threats and attacks.

    3. What are open-source models?

    Open-source models are software models whose source code is made available to the public. Users can modify, distribute, and use these models freely, often leading to collaborative improvements.

    4. What is phishing?

    Phishing is a cybercrime where attackers impersonate legitimate organizations to trick individuals into revealing sensitive information, such as passwords or credit card numbers, often through deceptive emails or websites.

