Firm foundations for responsible AI
By Dr. Scott Zoldi, chief analytics officer at FICO
As the use of Artificial Intelligence grows across all markets and all industries, so too does scrutiny of the reasoning behind its decision-making algorithms. Scott Zoldi, AI expert at FICO, discusses the ways in which data scientists and organisations can ensure they use AI responsibly and ethically.
Increasing volumes of digitally generated data coupled with automated decisioning have led to faster applications, form-filling, insurance claims handling and so much more. Artificial Intelligence (AI) has sped up our lives, increased convenience and fed our expectation for instant access, instant decisions, instant gratification. However, it has also brought a new set of challenges for businesses and governments alike.
Advocacy groups have started questioning the increasing use of AI to make decisions about the lives of people and the pushback is not always unfounded. With algorithms taking the lead and human understanding and empathy removed from the process, decision-making can seem callous, perhaps even careless. As a result of these concerns, regulations have been introduced to protect consumer rights and keep a close watch on AI developments.
Milestones on the journey to responsible AI
Building responsible AI models takes time and painstaking work. Systems must be built upon strong foundations. Continued monitoring, tweaking and upgrading must be employed to ensure the use of AI remains responsible while in production. As dependence on AI is growing by the day, organisations must act now to enforce responsible AI.
To do this, standards need to be established in the three pillars of responsible AI: explainability, accountability and ethics. With these in place, organisations of all types can be confident they are making sound digital decisions.
EXPLAINABILITY: AI decision systems should be based on an algorithmic construct that reports the reasons associated with the decision made, allowing a business to explain why the model made the decision it did – for example, why flag a transaction as fraud. This explanation can then be used by human analysts to further investigate the implications and accuracy of the decision; it will also enable a clear explanation to be provided to customers. A detailed explanation of the risk indicators ensures the decision is understandable, plausible and palatable. In addition, if an error has been made by the customer providing data or the AI system itself, that can be rectified and reassessed, potentially resulting in a different outcome.
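As a minimal sketch of how a model can report the reasons behind a decision, consider a simple linear fraud score whose feature contributions are surfaced as reason codes. The weights and feature names here are hypothetical, chosen purely for illustration:

```python
# Illustrative sketch (hypothetical model and feature names): a linear
# fraud score that reports the risk indicators driving each decision,
# so an analyst can explain why a transaction was flagged.

WEIGHTS = {                  # learned weights (assumed, for illustration)
    "amount_vs_avg": 0.6,
    "new_merchant": 0.3,
    "foreign_country": 0.4,
    "night_time": 0.1,
}

def score_with_reasons(features, top_n=2):
    """Return a risk score plus the top contributing risk indicators."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    reasons = sorted(contributions, key=contributions.get, reverse=True)[:top_n]
    return score, reasons

score, reasons = score_with_reasons(
    {"amount_vs_avg": 3.0, "new_merchant": 1.0,
     "foreign_country": 0.0, "night_time": 1.0})
# The returned reasons are the inputs that pushed the score up the most,
# which can be mapped to customer-facing explanations.
```

Production systems use far richer explainability machinery, but the principle is the same: the decision and its reasons are produced together, not reverse-engineered afterwards.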
ACCOUNTABILITY: The importance of thoughtful development of models should not be under-estimated. Algorithms must be chosen carefully, with their limitations taken into account, to create reliable machine learning models. Technology must be transparent and compliant. Accountable development of models ensures the decisions make sense with changing inputs, for example, scores adapt appropriately with increasing risk.
Beyond explainable AI, there is the concept of humble AI — ensuring that the model is used only on the data examples and scenarios similar to data on which it was trained. Where that is not the case, the model may not be trustworthy and an organisation should downgrade to an alternate algorithm.
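The humble-AI idea above can be sketched as a simple envelope check: trust the model's score only when the input resembles the training data, and otherwise downgrade to a conservative fallback. The feature ranges and fallback score here are hypothetical:

```python
# Illustrative sketch of "humble AI" (hypothetical ranges and fallback):
# apply the learned model only when the input lies within the envelope
# of the data it was trained on; otherwise downgrade to an alternative.

TRAINING_RANGES = {"amount": (1.0, 5000.0), "tx_per_day": (0.0, 40.0)}

def in_training_envelope(features):
    """True if every feature lies in the range seen during training."""
    return all(lo <= features[name] <= hi
               for name, (lo, hi) in TRAINING_RANGES.items())

def humble_score(features, model_score, fallback_score=0.5):
    # Trust the model only inside its training envelope; otherwise
    # fall back to a simpler, more conservative score.
    if in_training_envelope(features):
        return model_score, "model"
    return fallback_score, "fallback"

# A typical transaction uses the model; an extreme one triggers fallback.
print(humble_score({"amount": 120.0, "tx_per_day": 3.0}, 0.91))
print(humble_score({"amount": 250000.0, "tx_per_day": 3.0}, 0.91))
```

Real deployments would use richer out-of-distribution measures than per-feature ranges, but the governance pattern — detect unfamiliar data, downgrade gracefully — is the point.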
ETHICS: Having been built by humans and trained using societal data, which is often implicitly full of bias, AI can be far more discriminatory than many would expect from a machine. Explainable machine learning architectures allow extraction of the specific machine learned relationships between features that can lead to biased decision-making. Ethical models ensure that bias and discrimination are explicitly tested and removed.
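One basic form of the explicit bias testing described above is a disparity check: compare the model's approval rate across two groups and flag the model for review if the gap exceeds a threshold. The data and threshold below are made up for illustration:

```python
# Illustrative bias check (hypothetical outcomes and threshold):
# compare the model's approval rate across two groups; a large gap
# flags the model for review before release.

def approval_rate(decisions):
    """Fraction of approvals in a list of 1 (approved) / 0 (declined)."""
    return sum(decisions) / len(decisions)

def disparity_check(group_a, group_b, max_gap=0.1):
    """Return (gap, passed) for approval-rate parity between groups."""
    gap = abs(approval_rate(group_a) - approval_rate(group_b))
    return gap, gap <= max_gap

# Made-up decision outcomes for two demographic groups.
gap, passed = disparity_check([1, 1, 0, 1], [1, 0, 0, 0])
```

Practical fairness testing goes well beyond approval-rate parity, but even this crude check makes bias something that is measured and gated rather than assumed away.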
Enforcing responsible AI
Having completed the careful, painstaking work of building a responsible AI model that is explainable, accountable and ethical, data scientists need to enlist external forces to ensure the model delivers, and continues to deliver, responsible AI. These forces include regulation, audit, and advocacy.
REGULATION: Without external regulation, there would be no restrictions on, or control over, how organisations could use data and AI. Regulations are vital for setting the standard of conduct and rule of law for use of algorithms, to ensure decisions are fair.
AUDIT: To demonstrate compliance with regulation, data scientists and organisations require a framework for creating auditable models and modelling processes. Audits must ensure essential steps such as explainability, bias detection, and accountability tests are performed ahead of model release, with explicit approvals recorded. This creates an audit trail for accountability, attribution, and forensic analysis. Furthermore, as data changes these same concepts of ethical AI must be retested and verified.
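A sketch of the audit trail described above: each pre-release check is recorded with its outcome, approver, and timestamp so decisions are attributable later. The record schema is hypothetical; a real framework would write to an immutable store:

```python
# Illustrative sketch (hypothetical schema): recording model-release
# checks so approvals are explicit, attributable, and auditable.

import json
import datetime

def record_release_check(model_id, check, passed, approver):
    """Build one audit entry as JSON; real systems would persist it
    to an append-only, tamper-evident store."""
    entry = {
        "model_id": model_id,
        "check": check,          # e.g. "explainability", "bias_test"
        "passed": passed,
        "approver": approver,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(entry)

print(record_release_check("credit_model_v3", "bias_test", True, "j.smith"))
```

Because the entries are structured, the same records support forensic analysis and the periodic re-testing the article calls for as data changes.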
ADVOCACY: As mentioned earlier, there are many groups concerned about harm done by AI, some willing to pursue legal proceedings. Growing public awareness of how algorithms are making very serious, life-changing decisions has led to organised advocacy efforts in many regions. Clearly there is a burgeoning need for collaboration between these advocates and machine learning experts for the greater good of both humans and AI.
‘Doing your best’ won’t be good enough
As the use of AI continues to grow across industries, borders and lives, ‘doing your best’ as a data scientist and as an organisation won’t be good enough. Responsible AI will soon be the expectation and standard. Organisations must enforce responsible AI now, strengthening and setting their standards of AI explainability, accountability and ethics to ensure they are making digital decisions responsibly.