
Technology

IT’S (STILL) NOT FAIR

    Published by Gbaf News

    Posted on March 6, 2018


Colin Gray, principal consultant, SAS UK & Ireland

    As we progress into 2018, the General Data Protection Regulation (GDPR) is looming on most organisations’ radar. The requirements are becoming clearer, but there is still some ambiguity about precisely what needs doing. There is, however, no question that algorithms are an area of concern, and in particular, whether their effects are clearly understood — and more importantly, are fair.

In this article, I’m following up on a previous blog post in which I discussed the fairness of algorithms used by businesses to help them make decisions. I described an example of an algorithm using age, income and number of dependants to determine whether to offer a loan to an individual. Here, I want to discuss the nature of risk and how algorithms use an assessment of risk to determine the price individuals pay.

    The case of motor insurance

Let’s use motor insurance as an example: around 5%–10% of policyholders will claim each year, which means that 90%–95% of policyholders won’t. The claims are spread across all policyholders. We know that certain drivers pay more than others: young drivers, drivers with previous claims, those with expensive cars. Actuaries identify the factors most strongly correlated with accidents and set the premiums accordingly. Given that most people won’t have an accident in any given year, is it fair to charge different premiums?
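The risk-based pricing described above can be sketched in a few lines: a premium is the policyholder’s expected annual claim cost plus a loading. The claim probabilities, claim amount and loading below are invented for illustration, not taken from any actual pricing model.

```python
# A minimal sketch of risk-based premium pricing. The premium reflects
# each driver's expected claim cost plus a percentage loading for
# expenses and profit. All numbers are illustrative assumptions.

def expected_claim_cost(claim_probability: float, avg_claim: float) -> float:
    """Expected annual claim cost for a policyholder."""
    return claim_probability * avg_claim

def premium(claim_probability: float, avg_claim: float,
            loading: float = 0.2) -> float:
    """Risk-based premium: expected claim cost plus a loading."""
    return expected_claim_cost(claim_probability, avg_claim) * (1 + loading)

# A lower-risk driver (5% claim rate) vs a higher-risk driver (10%),
# both with an assumed average claim of 3,000:
low_risk = premium(0.05, 3000)    # 0.05 * 3000 * 1.2
high_risk = premium(0.10, 3000)   # 0.10 * 3000 * 1.2
```

Under this scheme each driver pays in proportion to the risk they represent, which is exactly the property a flat-premium regime would remove.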

You could argue that some of the factors used are inherent, such as age, while others are more choice-based: where you live, how fast you drive, the cost of your car. What would it mean if the regulations said that everyone had to be charged the same premium?

• Young drivers → lower premium
• More expensive cars → lower premium
• More reckless driving → lower premium

In practice, therefore, drivers wouldn’t be paying for the risk they represent. There would be subsidies across these groups, which, as a society, we may or may not be happy with.
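The cross-subsidy can be made concrete with a toy portfolio. Under a single flat premium that still covers total expected claims, lower-risk drivers pay more than their expected cost and higher-risk drivers pay less. The two groups and their claim probabilities below are invented purely for illustration.

```python
# Sketch of the cross-subsidy under a single flat premium.
# Groups, claim probabilities and claim amounts are illustrative only.

portfolio = [
    ("low-risk",  0.05, 3000),   # (group, claim probability, avg claim)
    ("high-risk", 0.10, 3000),
]

# Each group's expected annual claim cost:
expected = {name: p * claim for name, p, claim in portfolio}

# A flat premium that covers total expected claims across equal-sized
# groups is the portfolio average:
flat = sum(expected.values()) / len(expected)

# Subsidy per group: positive means the group pays more than its risk.
subsidy = {name: flat - cost for name, cost in expected.items()}
# Low-risk drivers overpay by the same amount high-risk drivers underpay.
```

The flat premium sits between the two expected costs, so the low-risk group funds the shortfall of the high-risk group, which is precisely the subsidy described above.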

    However, I think we should be more concerned about second-order effects, that is, drivers are not being charged — some might call it penalised — for their risky behaviour or risky characteristics. This could encourage some drivers to take greater risks, knowing that the cost would be shared with others, and they would not personally face any financial penalty. There is very little question that the number of accidents and ultimately deaths would go up. This is presumably not something that we would want.

    Algorithms as a force for good

    I’m not suggesting that the regime above is being proposed by GDPR, far from it. I am just noting that algorithms (in this case pricing models) are used to make sound financial decisions that benefit society. Removing the ability to distinguish levels of risk in the interests of ‘fairness’ may have unforeseen or unexpected consequences.

    We’ve only considered the insurance industry, but the same could be applied to other industries like credit lending. A single price regime would probably increase the number of business failures and individual defaults, because it could encourage high-risk individuals and businesses to apply for credit.

Algorithms aren’t exact — they can’t (yet) distinguish completely between those who will have an accident and those who won’t, and I’m not sure we’d want them to anyway, as this could create a class of uninsurable drivers. They do, however, help to spread risk across multiple groups in a relatively equitable manner. They can be used to penalise high-risk behaviours that are detrimental to society and therefore, hopefully, reduce those behaviours. Equally, like any tool, they can be misused to exclude sections of society and prevent people from fully participating.

    As analysts, data scientists and statisticians, we need to know the difference — and we need to ask the right questions to make sure that organisations are doing the right thing.

