Technology
Working Toward Responsible AI: How businesses should ensure that AI is fair, accountable, safe and governed
By Alexei Zhukov, VP, Technology Solutions at EPAM Systems, Inc.
AI is increasingly ingrained in everyday activities, from reading the news online and listening to music to banking and driving to work. Its impact on our everyday lives is growing as algorithms automate some or all decision-making. With worldwide business spending on AI set to reach $110 billion annually by 2024, it is undeniable that AI is an essential aspect of numerous industries. Across the banking, finance and insurance sectors, AI systems are now central to decisions regarding routing, purchasing, customer service and fraud detection. In fact, the financial services industry is regarded as the second-biggest adopter of AI behind retail.
With its ever-increasing impact on our lives, it is hardly surprising that governments and regulatory entities are becoming more concerned about AI’s widespread, evolving nature. In Europe, the proposed EU AI harmonisation legislation is widely expected to be enacted in early 2024. The UK Financial Conduct Authority is also consulting on whether additional clarification of existing regulations would be helpful and on how policy can best support safe and responsible AI adoption. In the United States, the Federal Trade Commission has emphasized that it already has enforcement powers applicable to AI under three existing laws, and the Algorithmic Accountability Act was introduced in both chambers of Congress in February 2022.
Humanity has already witnessed glimpses of what can go wrong with AI. In September 1983, the world averted nuclear escalation thanks to Stanislav Petrov’s life-saving decision to wait for corroborating evidence, rather than triggering the chain of command, after an automated early-warning system erroneously reported that the Soviet Union was under attack. We have also seen how quickly a seemingly innocent chatbot like Microsoft’s Tay, released on Twitter, can turn into a misogynistic and racist conversationalist without the proper safeguards. Furthermore, with the increased interconnectivity of AI services, realistic threat models must account for unpredictable systemic interference and sophisticated collusion that could take place completely undetected.
In parallel with the technological boom that has given AI far more power in recent years, the practice of doing AI responsibly has also evolved and matured. AI scientists, while gaining access to more powerful and complex algorithms, now also have access to a growing set of tools to decompose and dissect those algorithms, along with frameworks and assessment tools for ensuring that an AI solution is fair, accountable, appropriate, safe and governed.
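As one illustration of what such dissection tools can look like in practice, the minimal sketch below uses scikit-learn's permutation importance to measure how much each input feature drives a model's decisions. The synthetic dataset and random-forest model are illustrative stand-ins, not a reference to any specific vendor tooling.

```python
# Minimal sketch: decomposing a model's behavior with permutation
# importance (scikit-learn). Data and model are synthetic stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "decisioning" data: 1,000 cases, 6 features.
X, y = make_classification(n_samples=1000, n_features=6,
                           n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# large drops flag the features the model actually relies on.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature_{i}: importance {mean:.3f} +/- {std:.3f}")
```

Reading the importances alongside domain knowledge is what turns a complex model from a black box into something a reviewer can interrogate.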
Thus, in the absence of a strong negative reaction from consumers, and in a similar vein to the introduction of tighter data protection legislation a few years ago, it is very unlikely that AI will be stopped in its tracks. We could, however, witness the end of the Uncontrollable AI era and the birth of the Responsible AI era. The firms that embrace this new era will be the AI winners of tomorrow, with a quicker return on AI investment and better AI innovation for their customers.
Five Steps Companies Can Take to Foster Responsible AI
As AI legislation continues to evolve and uncertainty remains, there are five proactive and practical steps companies can implement to promote responsible AI.
- Communicate: The first step is to anticipate cross-organizational AI regulation challenges. Working with AI systems means navigating complicated technical, economic and legal questions in the course of doing business. Investing in efficient communication strategies across interdisciplinary teams is a prerequisite to effective AI development and utilization.
- Contextualize: This next step involves organizations familiarizing themselves with the relevant legislation in their jurisdiction(s), which is especially important for those operating in multiple states or countries. While designated personnel overseeing AI-related projects must be aware of the current regulatory landscape, businesses should also coach, mentor and train all relevant staff members and decision-makers on this topic to encourage any required behavioral changes. Furthermore, executives should determine how ongoing changes in legislation will be monitored; this is a good time to introduce regulatory horizon scanning. It is also helpful to develop firm evaluation criteria for determining whether a project oversteps established AI guidelines. Organizations must be ready to manage the characteristics of self-modifying AI, which means creating and managing well-specified baselines will be critical.
- Compare: Next, organizations should acquaint themselves with trustworthy, reputable AI safety companies and adopt their practices. Brands will benefit from reaching out to these third parties, leveraging their professional capabilities and engaging them for independent evaluations. Given the immense complexity and potentially stochastic behavior of AI systems, effective monitoring through a well-developed measurement strategy is key to mitigating risks and maximizing the utility of AI (a minimal drift-monitoring sketch follows this list).
- Control: The fourth step involves defining objective, programmatic criteria for self-regulating the risk of any AI and, if necessary, enlisting the assistance and impartiality of a third-party provider here, too. Whether businesses elect to monitor their use and management of datasets and algorithms themselves or through an outside source, they should create a risk assessment process for AI that prevents projects from crossing ethical boundaries (a second sketch after this list illustrates one such programmatic check). Depending on the jurisdiction, failure to do so may result in criminal liability.
- Community: The final step in working toward responsible AI is planning for ongoing systems monitoring, continual testing and validation, which includes introducing responsible design principles across all business units that interact with AI systems. AI is dynamic, and pursuing responsible AI therefore requires an ongoing commitment. Companies must remember that humans design AI, and even with the best intentions, unintentional biases can appear in algorithms. Ultimately, the organization should create and apply social policies to ensure that AI is transparent and ethical. This is far from being only a technology problem: policy and governance designers should seek and incorporate a balance of skills and experience from stakeholders across a representative set of roles.
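To make the monitoring-against-baselines idea from the Contextualize and Compare steps concrete, the sketch below shows one common way to watch a model against a well-specified baseline: the population stability index (PSI), which flags when live data has drifted away from the data the model was validated on. The 0.1 and 0.25 thresholds are conventional rules of thumb, and the data is synthetic; neither comes from the article.

```python
# Minimal sketch: monitoring drift against a fixed baseline using the
# population stability index (PSI). Thresholds are common rules of thumb.
import numpy as np

def psi(baseline, live, bins=10):
    """Population stability index between baseline and live samples."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Share of observations in each bin, floored to avoid log(0).
    base_pct = np.maximum(np.histogram(baseline, bins=edges)[0] / len(baseline), 1e-6)
    live_pct = np.maximum(np.histogram(live, bins=edges)[0] / len(live), 1e-6)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at validation time
live_scores = rng.normal(0.3, 1.1, 10_000)      # drifted production data

value = psi(baseline_scores, live_scores)
if value < 0.10:
    status = "stable"
elif value < 0.25:
    status = "moderate drift - investigate"
else:
    status = "major drift - revalidate the model"
print(f"PSI = {value:.3f} ({status})")
```

A check like this, run on a schedule against the frozen validation baseline, is one simple way to notice that a self-modifying or retrained system has quietly moved away from the behavior that was originally approved.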
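And as one concrete form a programmatic risk check from the Control step can take, the sketch below computes a demographic parity gap (the difference in approval rates between groups) and blocks a release when the gap exceeds a tolerance. The synthetic data and the 0.1 tolerance are illustrative assumptions, not a prescribed standard.

```python
# Minimal sketch: a programmatic fairness gate. Approval-rate gaps between
# groups above a chosen tolerance block the release. The data and the 0.1
# tolerance are illustrative assumptions.
import numpy as np

def demographic_parity_gap(decisions, groups):
    """Largest gap in positive-decision rate between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=5_000)  # protected attribute
# Synthetic decisions with a built-in approval-rate skew toward group A.
decisions = (rng.random(5_000) < np.where(groups == "A", 0.55, 0.45)).astype(int)

gap = demographic_parity_gap(decisions, groups)
TOLERANCE = 0.10  # illustrative threshold set by governance policy
print(f"demographic parity gap = {gap:.3f}")
if gap > TOLERANCE:
    raise SystemExit("Release blocked: fairness gap exceeds policy tolerance.")
print("Fairness gate passed.")
```

Wiring a gate like this into the deployment pipeline turns an ethical boundary into an objective, auditable control rather than a matter of individual judgment.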