By Akber Datoo, Managing Partner at D2 Legal Technology
Momentum appears to be building for the adoption of smart contracts. Smart contracts offer compelling benefits – streamlining and automating the contracting process – and have the potential, amongst other things, to improve efficiency and reduce client costs and legal risk. But just how should the legal industry respond?
As with any new technology, a pragmatic approach must be adopted and current limitations taken into account. As Matthew Williams, Consultant, and Akber Datoo, Managing Partner, at D2 Legal Technology insist, any lawyers looking to adopt smart contracts need to understand where they can be applied and where they would deliver most value to clients. This means rapidly adapting the skills mix – from recruiting computer and data scientists with legal understanding to upskilling lawyers with foundational technology and data skills – ensuring they are equipped with the knowledge and confidence to help clients unlock business value through legal change and, in this case, smart contracts.
Look beyond the hype
According to research organisation Gartner, smart contracts will be in use by more than 25% of global organisations by 2022. Yet while this might sound like a ringing endorsement of this nascent legal technology, look beyond the headline and such assertions are carefully tempered with multiple caveats. Notably, this level of adoption applies only to ratified, unbundled smart contracts – namely, those that are both closely defined and narrow in impact. It does not apply to the real contracting world of complex, nested contracts that can be affected by multiple outcomes, which start to reach the limits of what can be tested as a smart contract using formal methods.
Essentially, commentators believe that, right now, only a highly simplified interpretation of the smart contract concept may gain adoption, representing a tiny fraction of overall commercial contracts negotiated. The 25% figure quoted by Gartner is open to interpretation and could be considered somewhat misleading: mass adoption of smart contracts will take much longer and will require significant advances in technology – in both security and transaction volumes – and, more fundamentally, in understanding and culture before they enter the mainstream.
There is no doubt that the concept of smart contracts has significant appeal. For commentators such as Richard Susskind, who believe that automation could spell the end of lawyers, smart contracts fit that message very well. However, while clients may be excited about the opportunities to reduce complexity – and cost – and lessen the need for trusted third parties, are they willing to rely on two complementary, albeit distinct, technologies – blockchain and smart contracts – that are yet to be fully realised and whose legal validity has yet to be tested, let alone confirmed?
Right now, the hype is far ahead of the practical adoption of smart contracts, for several reasons. For a start, there is no agreed definition of the term – academics and experts still lack consensus, more than twenty years after the concept was first conceived by Nick Szabo as a ‘set of promises, specified in digital form, including protocols within which the parties perform on these promises.’
The technology to enable smart contracts has, of course, evolved significantly over the past two decades, with the arrival of Distributed Ledger Technologies (DLT) such as blockchain making the smart contract concept more commercially feasible. And while an agreed definition remains elusive, Dr Christopher Clack from the Centre for Blockchain Technologies at UCL has at least outlined the key characteristics: a smart contract is an ‘automatable and enforceable agreement, automatable by computer, although some parts may require human input and control. It is enforceable either by legal enforcement of rights and obligations or via tamper-proof execution of computer code’.
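To make these characteristics concrete, consider a minimal sketch of such an agreement, written in illustrative Python rather than a production DLT language; every name in it is hypothetical. The delivery confirmation stands for the automatable part, the dispute flag for human input and control, and the settlement logic for enforcement by code, with a legal fallback:

```python
from dataclasses import dataclass

@dataclass
class EscrowAgreement:
    """Illustrative agreement: payment releases automatically on a
    confirmed delivery, but a dispute requires human review."""
    buyer: str
    seller: str
    amount: float
    delivered: bool = False
    disputed: bool = False

    def confirm_delivery(self, signed_by_carrier: bool) -> None:
        # Automatable part: a trusted data feed confirms the event.
        if signed_by_carrier:
            self.delivered = True

    def raise_dispute(self) -> None:
        # Human input and control: either party may halt automation.
        self.disputed = True

    def settle(self) -> str:
        # Enforcement: a tamper-proof code path, with a legal fallback.
        if self.delivered and not self.disputed:
            return f"Release {self.amount} to {self.seller}"
        return "Escalate to human/legal review"

contract = EscrowAgreement(buyer="A", seller="B", amount=100.0)
contract.confirm_delivery(signed_by_carrier=True)
print(contract.settle())  # -> Release 100.0 to B
```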
It is this combination of automated process and human input, however, that underpins the ongoing challenges regarding the adoption and enforcement of smart contracts. A whitepaper jointly published by the trade association ISDA and magic circle law firm Linklaters explored the continuum between a smart contract written entirely in code and one combining natural language and code. It is still uncertain where on this spectrum successful smart contracts will lie. The answer will depend in part on whether lawyers develop the confidence in the technology, and the ability, to undertake – or at least be involved in – the review of the code representing the smart contract, possibly in some more amenable form. They would also need to understand where to optimally blend code with the natural language required for non-operational clauses.
Furthermore, there are concerns regarding DLT itself: from the inherent reliance upon cryptocurrencies, with their dark web connotations and global concerns regarding their future stability, to the open and public nature of DLT networks. The financial market is looking to address this with the creation of closed, market-specific DLTs – and it is possible the legal profession will follow suit. Indeed, some law firms may decide the only secure option is to develop their own bespoke DLTs. But DLTs are impractically slow right now: the Bitcoin blockchain can process only five to seven transactions per second, and the Ethereum blockchain just over twenty. Public blockchain networks are also incredibly expensive to run, requiring a significant amount of computational power – Iceland’s energy consumption is expected to double due to blockchain mining – which may limit practical commercial usage. The processes, skills and business practices needed for wholesale adoption of smart contracts are still very much under development.
Clearly, the automation and self-execution enabled by smart contracts are hugely compelling, offering the legal profession a chance to streamline processes and reduce overheads. A proportion of contracts could be automated, at least in part. Event-driven contracts, such as AXA’s Fizzy flight insurance product, have to date been good examples of smart contracts that can be executed via a DLT without difficulty. But the vast majority of contracts will require a combination of coded smart contract and natural language to support non-operational clauses. Firms need to develop strategies for storing these contracts – from determining how they are split across a DLT and the existing document management solution to ensuring ease of retrieval and review.
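The appeal of such event-driven clauses is easiest to see in code. The sketch below is illustrative Python only – Fizzy itself ran on Ethereum, and the threshold and amounts are invented for the example – but it shows why a clause whose outcome turns on a single observable event can self-execute:

```python
DELAY_THRESHOLD_HOURS = 2.0  # invented policy term, for illustration only

def flight_delay_payout(payout: float, actual_delay_hours: float) -> float:
    """Event-driven clause: the outcome depends only on one objectively
    observable event (the delay reported by a flight-data feed), so it
    can self-execute on a DLT with no human claims handler."""
    if actual_delay_hours >= DELAY_THRESHOLD_HOURS:
        return payout  # condition met: pay out automatically
    return 0.0         # condition not met: nothing is owed

# A flight-data feed reports a 3.5-hour delay and settlement follows:
print(flight_delay_payout(payout=150.0, actual_delay_hours=3.5))  # 150.0
```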
Furthermore, while a contract may be drawn up in one country, the smart contract code could be stored anywhere on the DLT globally; should a contract be questioned, under which governing law would a case be tried? The potential for a contract to face years of legal wrangling simply to agree jurisdiction cannot be overlooked – with many firms considering a pre-contract agreement binding the smart contract to a defined jurisdiction. With these considerations in mind, the judiciary is starting to take note of the legal implications of smart contracts, with Sir Geoffrey Vos, Chancellor of the High Court, recently calling for a test case to provide some clarity as to the legal status of smart contracts.
This is most definitely still a work in progress. Developments are exciting: smart contract companies such as Clause.io are looking to feed real-time Internet of Things (IoT) data into smart contracts – for example, data from a delivery van to support supply chain processes – and to create standardised pieces of code that provide further automation and minimise lawyers’ coding requirements. However, before any such innovation can be embraced, the fundamentals must be in place: lawyers will still require a basic understanding of systems, data and IT development before being let loose on any standardised code, if they are to avoid unintentionally introducing bugs or errors into the contract. (A secondary question, of course, is the professional expectations of a lawyer in respect of such activities, and where any liability might lie when things go wrong.) The concept of plug-and-play smart contracts is currently unrealistic.
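To illustrate the shape such standardised, data-fed clauses might take, the sketch below (generic Python, not Clause.io’s actual platform; the field names, ranges and remedy are invented) sanity-checks IoT readings before they are allowed to drive contract logic – exactly the kind of guard against bugs and bad data described above:

```python
from typing import Optional

def ingest_iot_reading(raw: dict) -> Optional[dict]:
    """Oracle-style gate between an IoT feed and the contract: readings
    that fail basic sanity checks never reach the contract logic, so a
    sensor glitch cannot trigger or block a payment."""
    required = {"device_id", "timestamp", "temperature_c"}
    if not required <= raw.keys():
        return None  # malformed message: reject
    if not -40.0 <= raw["temperature_c"] <= 85.0:
        return None  # outside the sensor's physical range: reject
    return raw

def cold_chain_clause(reading: dict, max_temp_c: float = 8.0) -> str:
    # Standardised, reusable clause: a breach of the temperature term
    # triggers the contractual remedy automatically.
    if reading["temperature_c"] > max_temp_c:
        return "Breach: goods rejectable, remedy clause triggered"
    return "In tolerance: no action"

reading = ingest_iot_reading({"device_id": "van-17",
                              "timestamp": "2021-03-01T10:00:00Z",
                              "temperature_c": 9.4})
if reading:
    print(cold_chain_clause(reading))  # -> Breach: goods rejectable, ...
```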
Practical Next Steps
Smart contracts clearly have the potential to be a key technology of the future once the technology has matured and the legal considerations are resolved. Until then, the legal profession must take practical next steps to fully unlock the business value of smart contracts. As well as addressing the education and skill-set gap, there needs to be greater appreciation of the business processes related to legal agreement data, with basic data governance principles put in place and implemented around them. Smart contracts are data-driven: without accurate, well-governed data they will fail to unlock any real business value – “rubbish in, rubbish out”.
With these practical steps in mind, it is essential to step back and understand the true implications of this technology. It is tantalising, logically compelling and without doubt will have a significant role to play in the years ahead. But there is a long way to go before businesses can confidently and wholeheartedly embrace smart contracts for anything other than the simplest of agreements. To gain real traction, DLTs need to mature, and smart contracts need to be legally tested and enforceable. The future is now; the future is legal and DLT.
Does your institution have operational resilience? Testing cyber resilience may be a good way to find out
By Callum Roxan, Head of Threat Intelligence, F-Secure
If ever 2020 had a lesson, it was that no organization can possibly prepare for every conceivable outcome. Yet building one particular skill will make any crisis easier to handle: operational resilience.
Many financial institutions have already devoted resources to building operational resilience. Unfortunately, it often takes what Miles Celic, Chief Executive Officer of TheCityUK, calls a “near death” experience for this conversion to occur. “Recent years have seen a number of cases of loss of reputation, reduced enterprise value and senior executive casualties from operational incidents that have been badly handled,” he wrote.
But it need not take a disaster to learn this vital lesson.
“Operational resilience means not only planning around specific, identified risks,” Charlotte Gerken, the executive director of the Bank of England, said in a 2017 speech on operational resilience. “We want firms to plan on the assumption that any part of their infrastructure could be impacted, whatever the reason.” Gerken noted that firms that had successfully achieved a level of resilience that survives a crisis had established the necessary mechanisms to bring the business together to respond where and when risks materialised, no matter why or how.
We’ll talk about the bit we know best here: by testing for cyber resilience, a company can do more than prepare for the worst sort of attacks it may face. This process can help any business get a clearer view of how it operates, and how well it is prepared for all kinds of surprises.
Assumptions and the mechanisms they should produce are the best way to prepare for the unknown. But, as the boxer Mike Tyson once said, “Everyone has a plan until they get punched in the mouth.” The aim of cyber resilience is to build an effective security posture that survives that first punch, and the several that are likely to follow. So how can an institution be confident that it has achieved genuine operational resilience?
This requires an organization to assess itself honestly, in the spirit of the motto inscribed at the Temple of Apollo at Delphi: “Know thyself.” And when it comes to cyber security, there is a way for an organization to test just how thoroughly it comprehends its own strengths and weaknesses.
The Bank of England was the first central bank to help develop a framework for institutions to test the integrity of their systems. CBEST consists of controlled, bespoke, intelligence-led cyber security tests that replicate the behaviours of genuine threat actors – and such tests often have unforeseen or secondary benefits. Gerken notes that the “firms that did best in the testing tended to be those that really understood their organisations. They understood their own needs, strengths and weaknesses, and reflected this in the way they built resilience.”
In short, testing cyber resilience can provide clear insight into an institution’s operational resilience in general.
Gaining that knowledge without a “near-death” experience is obviously a significant win for any establishment. And testing for operational resilience throughout the industry can remind every organization of the steps it should take so that testing yields unique insights about the institution, not just a checklist of cyber defence basics.
The IIF/McKinsey Cyber Resilience Survey of the financial services industry, released in March last year, provided six sets of immediate actions that institutions could take to improve their cyber security posture. The toplines of these recommendations were:
- Do the basics, patch your vulnerabilities.
- Review your cloud architecture and security capabilities.
- Reduce your supply chain risk.
- Practice your incident response and recovery capabilities.
- Set aside a specific cyber security budget and prioritise it.
- Build a skilled talent pool and optimize resources through automation.
But let’s be honest: if simply reading a solid list of recommendations created cyber resilience, cyber criminals would be out of business. Unfortunately, cyber crime as a business is booming, and threat actors targeting essential financial institutions through cyber attacks are likely earning billions in the trillion-dollar industry of financial crime. A list can’t reveal an institution’s unique weaknesses – the security failings and chokepoints that could shutter operations, not just during a successful cyber attack but during other crises that challenge its operations. And the failings that lead to flaws in an institution’s cyber defence likely reverberate throughout the organization as liabilities that other crises would expose.
The best way to get a sense of operational resilience will always be to simulate the worst that attackers can summon. That’s why the time to test yourself is now, before someone else does.
Thomson Reuters to stress AI, machine learning in a post-pandemic world
By Kenneth Li and Nick Zieminski
NEW YORK (Reuters) – Thomson Reuters Corp will streamline technology, close offices and rely more on machines to prepare for a post-pandemic world, the news and information group said on Tuesday, as it reported higher sales and operating profit.
The Toronto-headquartered company will spend $500 million to $600 million over two years to burnish its technology credentials, investing in AI and machine learning to get data faster to professional customers increasingly working from home during the coronavirus crisis.
It will transition from a content provider to a content-driven technology company, and from a holding company to an operational structure.
Thomson Reuters’ New York- and Toronto-listed shares each gained more than 8%.
It aims to cut annual operating expenses by $600 million through eliminating duplicate functions, modernizing and consolidating technology, as well as through attrition and shrinking its real estate footprint. Layoffs are not a focus of the cost cuts and there are no current plans to divest assets as part of this plan, the company said.
“We look at the changing behaviors as a result of COVID … on professionals working from home working remotely being much more reliant on 24-7, digital always-on, sort of real-time always available information, served through software and powered by AI and ML (machine learning),” Chief Executive Steve Hasker said in an interview.
Sales growth is forecast to accelerate in each of the next three years compared with 1.3% reported sales growth for 2020, the company said in its earnings release.
Thomson Reuters, which owns Reuters News, said revenues rose 2% to $1.62 billion, while its operating profit jumped more than 300% to $956 million, reflecting the sale of an investment and other items.
Its three main divisions, Legal Professionals, Tax & Accounting Professionals, and Corporates, all showed higher organic quarterly sales and adjusted profit. As part of the two-year change program, the corporate, legal and tax side will operate more as one customer-facing entity.
Adjusted earnings per share of 54 cents were ahead of the 46 cents expected, based on data from Refinitiv.
The company raised its annual dividend by 10 cents to $1.62 per share.
The Reuters News business showed lower revenue in the fourth quarter. In January, Stephen J. Adler, Reuters’ editor-in-chief for the past decade, said he would retire in April from the world’s largest international news provider.
Thomson Reuters also said its stake in The London Stock Exchange is now worth about $11.2 billion.
The LSE last month completed its $27-billion takeover of data and analytics business Refinitiv, 45%-owned by Thomson Reuters.
(Reporting by Ken Li, writing by Nick Zieminski in New York, editing by Louise Heavens and Jane Merriman)
Putting data protection back on the financial agenda
By Wim Stoop, CDP Customer and Product Director, Cloudera
Despite the wave of changes that Brexit has brought financial organisations, from the end of ‘passporting’ to uncertainty over the longer-term equivalence rules, one thing has remained constant: the responsibility under data privacy regulations to protect sensitive data and mitigate data breaches. From PSD2 to GDPR, financial institutions need to ensure they are still processing and transferring data in accordance with the industry’s stringent rules and regulations. If not, they risk fines of up to £17.5 million or 4% of their company’s annual global turnover.
As the stakes get higher, the amount of data which financial enterprises have to deal with is on the rise too. In fact, research by IDC estimated that businesses created and captured 6.4 zettabytes of new data last year alone. This increase in data production has been linked to the pandemic and the move to remote working. Replacing face-to-face interactions with online communications has meant that financial businesses suddenly had to cope with a larger amount of data flowing through their networks. In addition, employees working from home are increasingly doing so on potentially unsecured devices, outside of the corporate network, risking exposure and data breaches, according to numerous cybersecurity reports.
With an extensive stock of sensitive customer data and so many regulations to keep on top of, remaining compliant can feel overwhelming for financial organisations. However, this shouldn’t be the case. Today we often see businesses trying to retrofit data protection strategies, or take a reactive approach to external forces. Instead, they should be taking a proactive stance on data management. In doing so, security becomes a natural side-effect and financial companies can operate with the assurance that no matter what new regulations come into play, they are compliant. The question is, how to achieve this?
Taking a proactive approach to data privacy
To remain compliant, financial institutions need to get on top of their data. When data sits in silos on legacy systems, it is inaccessible and it becomes a challenge to identify what is sensitive and what isn’t. Poorly managed data can’t be protected, and the risk of data breaches increases. By contrast, when data is properly controlled and stored, it becomes easy to apply data security rules.
From customer names and contact details to transaction records and PINs, financial organisations hold a lot of personal and financial data on customers. The trick is understanding that data holds varying degrees of sensitivity and needs to be managed accordingly. For instance, a customer’s bank account details are more sensitive than basic personal data, such as name and address, which are often publicly accessible. By proactively identifying, prioritising and classifying data by its degree of sensitivity, financial companies can apply whatever data protection rules are necessary, such as restricting certain users from accessing highly confidential information.
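As a minimal sketch of what such tiered classification and access restriction might look like in practice (the tiers, field names and rules below are illustrative assumptions, not any particular product’s behaviour):

```python
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = 1        # e.g. name and address on a public register
    CONFIDENTIAL = 2  # e.g. contact details, transaction records
    RESTRICTED = 3    # e.g. account numbers, PINs

# Illustrative rules mapping customer fields to sensitivity tiers.
FIELD_RULES = {
    "name": Sensitivity.PUBLIC,
    "address": Sensitivity.PUBLIC,
    "transactions": Sensitivity.CONFIDENTIAL,
    "account_number": Sensitivity.RESTRICTED,
    "pin": Sensitivity.RESTRICTED,
}

def can_access(user_clearance: Sensitivity, field: str) -> bool:
    """Allow access only to fields at or below the user's clearance;
    unknown fields default to the most restrictive tier (default-deny)."""
    required = FIELD_RULES.get(field, Sensitivity.RESTRICTED)
    return user_clearance.value >= required.value

print(can_access(Sensitivity.CONFIDENTIAL, "transactions"))    # True
print(can_access(Sensitivity.CONFIDENTIAL, "account_number"))  # False
```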
Yet this identification process is often treated as a reactive measure by many financial businesses. The challenge in proactive data management lies in an organisation’s ability to eliminate the friction involved in tracking, identifying and classifying information as it arrives, as opposed to doing so retrospectively. After all, data classification plays a vital role in ensuring data protection is upheld.
A proactive approach is integral to effective data management and governance. The first step in achieving it is creating a data marketplace: a curated, secured and governed data repository. With a data marketplace in place, as soon as data enters an organisation, the enterprise can determine its degree of sensitivity, how it should be managed, and which analytics should be run to extract the most value from it.
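A sketch of that ‘classify on entry’ idea follows, again in illustrative Python, with a plain list standing in for a governed metadata catalogue:

```python
RESTRICTED_FIELDS = {"account_number", "pin"}  # illustrative assumption

def ingest(record: dict, catalog: list) -> None:
    """On entry, tag every field with a handling label and register the
    record in the marketplace catalog, so its sensitivity and handling
    rules are known before anyone consumes it."""
    tagged = {
        field: {"value": value,
                "label": "RESTRICTED" if field in RESTRICTED_FIELDS
                         else "GENERAL"}
        for field, value in record.items()
    }
    catalog.append(tagged)

catalog: list = []
ingest({"name": "Jo Bloggs", "account_number": "12345678"}, catalog)
print(catalog[0]["account_number"]["label"])  # -> RESTRICTED
```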
Once these steps are taken, compliance and data privacy happen almost naturally and become ingrained in the business. When companies are aware of every single piece of data in their possession, they can know exactly how it’s being protected. Such a robust strategy ensures that institutions meet the high standards of trust that their customers have bestowed upon them in protecting their personal data. And, with this level of control, enterprises can avoid data lockout, reduce friction for employees, and optimise the value they unlock from their data. At the same time, they can have the peace of mind that they are compliant and protected.
A business-ready solution for data protection
With so many rules and regulations to keep track of, data protection shouldn’t be another worry to add to the list. Financial companies can maximise the efficacy of their existing security and governance strategies by applying them to all datasets across the enterprise – whether on-premises, in the cloud, or a combination of the two. Organisations are increasingly turning to the cloud, as a scalable and low-cost option, for their data management needs: it is expected that over half (51%) of business data will be stored in the cloud by 2024.
This is where an enterprise data cloud (EDC) really shows its worth, allowing financial companies to keep their data protected, compliant, and successfully governed. Simply put, an EDC is a hybrid and multi-cloud platform that harnesses analytics at every stage of the data lifecycle. It enables organisations to extract the true value of their data while providing a consistent layer of security.
An EDC gives financial businesses a single source of truth, built on technology that operates on any cloud environment and right through to the edge. Armed with an EDC, companies have complete visibility over their data, no matter where it resides in the enterprise or the data lifecycle, easing the task of managing and protecting it. On top of this, an EDC supports a variety of data functions, including the data marketplace, and provides control, visibility and oversight of data. With all these aspects working together, financial institutions can ensure that all data passing through their infrastructure and into the data marketplace is efficiently governed and protected.
Bringing technology, people, and process together
Technological solutions like an EDC work at their maximum potential when they are in harmony with people and process. But that triad has been thrown off balance by the rise of remote working and the reduction in staff numbers. While all businesses recognise that sensitive data needs to be encrypted and access restricted, this has been a difficult feat as employees work from home and use devices outside the traditional network security perimeter. In fact, nearly half (48%) of employees are less likely to follow safe data practices when working from home, sharply increasing the risk of data breaches.
In addition, with almost a fifth (18%) of the UK workforce on furlough and team numbers shrinking, companies don’t have the same amount of manpower to validate both the systems being used, as well as the data being run in these systems, to ensure that they are compliant. Within the office environment, organisations were able to create ‘islands of perfect governance’, with all departments being aware of the applications used to manage data and therefore, guaranteeing higher levels of compliance. However, these safety nets have collapsed during home working and it’s become more difficult to ensure the security and privacy of data within an enterprise.
What’s needed here is an overarching framework that provides a standard for data governance. This is enabled by having in place the right technology solution, a proactive approach to data management, and people within the business supporting it from the bottom up – a triad working in harmony. Such a framework also enables enterprises to assess what they need to do to create internal data protection rules that ensure compliance, and allows employees to self-check their data security protocols, eliminating any uncertainty about protecting sensitive data.
It is important to remember that the right technology alone won’t make people compliant – whether they are working in an office or remotely. Rather, as pointed out above, it is technology, people, and process, working in sync, that will ensure that regulations are adhered to and data is managed and protected.
Long-lasting success with data protection
With data volumes growing and remote working creating security vulnerabilities, financial businesses need to get on top of their data from the get-go. By proactively identifying sensitive data, accurately securing it, and delivering trusted data to end-users, the right data can be put into the hands of the right people.
Creating a watertight data privacy strategy requires financial organisations to deliver a uniform approach to data management and protection across departments to ensure compliance. In addition, harnessing technology, such as an EDC, will provide visibility and control over sensitive data, enabling financial institutions to unlock real-time insights from their data while still providing a consistent layer of security. With technology, people and process in harmony, enterprises can operate with the confidence that their data is being managed successfully and they are compliant with both existing and new regulations.