
Technology

Managing Reference Data risk – A best practice approach


By Derek Kemp, EVP and Head of EMEA Region, iGATE
With an array of new regulations and a challenging economic environment, financial institutions are under immense pressure to improve data management processes and mitigate risk. Poor data quality, substantial redundancy and duplication of effort in reference data management continue to create major problems for financial institutions globally.

As a first step, firms need to understand the gap between their current reference data management processes and newer best-practice approaches.

The business impact of poor data quality

In the rush to improve efficiency with straight-through processing (STP), have firms failed to pay sufficient attention to the risks associated with poor data quality?

Based on current market trends, this certainly seems to be the case.

Duplication of reference data results in unnecessary costs: All large financial firms access data from a variety of sources, where disparate and siloed data systems and operations are the norm. To add to the complexity, organizations typically source reference data from a range of internal and external providers based on the data consumption requirements of different departments. This siloed style of functioning, however, makes it difficult for one department to access data that another department may already have purchased, so reference data purchases are frequently duplicated, incurring unnecessary costs.

Increased operational risk: When inconsistencies or inaccuracies in reference data arise, exceptions occur in the trade lifecycle, leading to increased operational risk, lost revenue opportunities, and financial liabilities. This becomes even more significant during periods of market volatility, when firms must resort to expensive and time-consuming manual trade duplication and reconciliation processes.

Consolidating market data management and counterparty systems

The reference data management problem is shared by both the buy-side and the sell-side. Each has to worry about securities reference data to settle trades reliably and to provide fair valuations. The sell-side firms are now beginning to consider market data and counterparty data as part of the larger reference data problem.

Trading firms have for a long time been concerned with sourcing accurate market data on a real-time basis with minimum latency in order to feed algorithmic trading engines. Only the largest Tier 1 firms can afford the luxury of storing their own tick histories, but for the majority, there are cost-effective industry utility services such as Tickdata and Reuters DataScope Tick History, which provide clean tick history on demand. These can be used for proof of best execution and algorithmic back testing.

Firms should closely examine whether the additional costs of the more complex platforms and the cost of the actual tick data storage provide sufficient benefits when compared with less expensive securities reference data management platforms and tick data history utilities.

Many firms have invested heavily in counterparty data management. However, through a spate of recent acquisitions, organizational, geographic and functional silos have developed, resulting in multiple databases in different formats. For a firm, there is undoubtedly a benefit in integrating counterparty data in one place. Now that industry utilities exist from which clean data can be reliably sourced, holding that data is no longer a proprietary advantage; the advantage stems from the uniform use of a single data source throughout the firm.

Market data and counterparty data should be acquired as a utility and mapped to reference data through cross-referencing. But a robust reference data management platform must underpin such efforts if they are to succeed.
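To make the cross-referencing idea concrete, here is a minimal sketch, using assumed identifier schemes and sample securities (not from the article), of how vendor identifiers from market or counterparty feeds might be mapped onto an internal golden master ID:

```python
# Illustrative sketch only: resolving vendor-supplied identifiers
# (ISIN, CUSIP, RIC) onto an internal "golden" security master ID.
# The internal IDs below are invented sample data.

SECURITY_MASTER = {
    "SEC-001": {"isin": "US0378331005", "cusip": "037833100", "ric": "AAPL.O"},
    "SEC-002": {"isin": "US5949181045", "cusip": "594918104", "ric": "MSFT.O"},
}

# Build a reverse lookup so any vendor identifier resolves to the internal ID.
XREF = {
    (scheme, value): internal_id
    for internal_id, ids in SECURITY_MASTER.items()
    for scheme, value in ids.items()
}

def resolve(scheme, value):
    """Map an identifier from a market or counterparty data feed onto the
    firm's internal master identifier, or None if it is unknown."""
    return XREF.get((scheme, value))

print(resolve("isin", "US0378331005"))  # -> SEC-001
```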

Managing multiple risk points for efficient reference data management

Best-practice reference data management may involve outsourcing, but only when certain risk points in the process have been considered:

Risk-point 1: Purchase: In the case of reference data, it is common to buy multiple sets from multiple vendors, according to the needs and preferences of individual employees and teams within the organization. This results in organizations spending money on duplicate, non-optimized data. Best-practice reference data management circumvents purchase point risks by using advanced tools to track and analyze what data sets are being purchased and where they are being used, tracking the data path from contributor to consumer, and then monitoring by user and data element. Smart companies also turn to trusted independent partners to help them determine what data sources are right for the organization and its employees.
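As a rough illustration of purchase-point tracking (the departments, vendors and data sets below are invented, and this is a sketch rather than any vendor's actual tooling), a simple ledger of entitlements is enough to flag duplicate spend:

```python
# Hypothetical sketch: detect duplicate reference data purchases across
# departments by recording which departments subscribe to which data sets.
from collections import defaultdict

purchases = [
    ("equities", "VendorA", "corporate_actions"),
    ("fixed_income", "VendorA", "corporate_actions"),  # duplicate of the above
    ("equities", "VendorB", "end_of_day_prices"),
]

subscribers = defaultdict(set)
for department, vendor, data_set in purchases:
    subscribers[(vendor, data_set)].add(department)

# Any data set bought by more than one department is a consolidation candidate.
for (vendor, data_set), departments in subscribers.items():
    if len(departments) > 1:
        print(f"Duplicate spend: {vendor}/{data_set} bought by {sorted(departments)}")
```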

Risk-point 2: Cleansing: Reference data is generally non-optimized at the time of purchase and hence needs to be cleansed in order to identify inconsistencies and faults. If reference data is held in multiple silos rather than centrally, it is likely to be cleansed multiple times. There is also the increasing impact of corporate actions on reference data to consider. Some corporate actions, such as splits and dividends, are relatively easy to handle. Others, like complex rights issues, require labor-intensive intervention from a specialist. As a best practice for reference data management, use automated tools that cleanse data by monitoring incoming data and checking for relationships and expected patterns. When exceptions occur, manual intervention may be required, but smart companies use skilled staff in low-cost offshore locations to do this.
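A minimal sketch of what such rule-based cleansing might look like, assuming hypothetical field names and validation rules; real platforms are far richer, but the pattern of checking incoming records against expected patterns and routing failures to an exception queue is the same:

```python
# Sketch of rule-based cleansing: validate incoming records and collect
# failed checks as exceptions for manual review. Fields and rules are
# illustrative assumptions, not a real platform's rule set.
import re

RULES = {
    "isin": lambda v: bool(re.fullmatch(r"[A-Z]{2}[A-Z0-9]{9}\d", v or "")),
    "currency": lambda v: v in {"USD", "EUR", "GBP", "JPY"},
    "coupon": lambda v: v is None or 0 <= v <= 25,  # percent, sanity bound
}

def cleanse(record):
    """Return the record plus a list of failed checks (the exceptions)."""
    exceptions = [field for field, ok in RULES.items() if not ok(record.get(field))]
    return record, exceptions

record, exceptions = cleanse({"isin": "US0378331005", "currency": "USD", "coupon": 3.5})
if exceptions:
    print("Route to manual review:", exceptions)
```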

Risk-point 3: Distribution: Once reference data has been bought and cleansed, it needs to be fed to the individual systems that consume it. That requires an in-depth understanding of what data sets and fields each consuming system requires, when the data is needed, and in what format it is expected. As a best practice for reference data management, use ETL (extraction, transformation, and loading) tools to subset and reformat the data contained in the Golden Copy and send it to each consuming system in a form it can understand and use. When a request for a new feed is submitted, skilled personnel are on hand to decide which fields from which data sets are required to build it.
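A hedged sketch of this distribution step, with hypothetical consuming systems, fields and formats; the point is simply that each consumer declares what it needs and an ETL routine subsets and reshapes the Golden Copy accordingly:

```python
# Illustrative sketch: subset and reformat the Golden Copy per consumer.
# System names, fields and formats are assumptions for demonstration.
import csv, io, json

GOLDEN_COPY = [
    {"internal_id": "SEC-001", "isin": "US0378331005", "name": "Apple Inc", "currency": "USD"},
]

FEEDS = {
    "settlement_system": {"fields": ["isin", "currency"], "format": "csv"},
    "risk_engine": {"fields": ["internal_id", "isin", "name"], "format": "json"},
}

def build_feed(consumer):
    """Extract the requested fields and serialise them in the requested format."""
    spec = FEEDS[consumer]
    rows = [{f: rec[f] for f in spec["fields"]} for rec in GOLDEN_COPY]
    if spec["format"] == "json":
        return json.dumps(rows)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=spec["fields"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(build_feed("risk_engine"))
```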

Leveraging third-party specialist organizations to manage commoditized reference data

Many firms have now realized that there is little competitive advantage to be gained from managing publicly available, highly commoditized reference data in-house. Increasingly, firms are turning to third-party specialist organisations, not only to manage reference data on their behalf, but also to re-architect data systems in such a way that they are outsourcing-ready.

Using a combination of best-of-breed tools and skilled resources, the third-party specialist will normalize, cleanse, validate, and aggregate multi-source content to achieve a single, standardized Golden Copy of the reference data, which is fed back to the client as a managed service. There has never been a better time for financial firms to leverage third-party specialist organizations to manage commoditized reference data. The risks associated with reference data are now an immediate concern and require immediate action. Best-practice reference data management is critical to current performance and a prerequisite for achieving further growth and efficiency.

Technology

Why technology is key to the future of auditing


By Piers Wilson, Head of Product Management at Huntsman Security

The Financial Reporting Council (FRC), which is responsible for corporate governance, reporting and auditing in the UK, has been consulting on the role of technology in audit processes. This highlights a growing recognition that technology can assist audits, providing the ability to automate data gathering or assessment to increase quality, remove subjectivity and make the process more trustworthy and consistent. Both the Brydon review and the latest AQR thematic suggest a link between enhanced audit quality and the increasing use of technology. This goes beyond efficiency gains from process automation and relates, in part, to the larger volume of data and evidence that can be extracted from an audited entity and the sophistication of the tools available to interrogate it.

As one example, the PCAOB in the US has long advocated that audit evidence and reports be provided in a timely manner (which implies computerisation and automation) to assure that risks are being managed, and that the extent of human interaction with evidence or source data be made visible, to ensure its influence is minimised (the more that can be achieved programmatically and objectively, the better).

However, technology may obscure the nature of analysis and decision making and create a barrier to fully transparent audits compared to more manual (yet labour intensive) processes. There is also a competition aspect between larger firms and smaller ones as regards access to technology:

Brydon raised concerns about the ability of challenger firms to keep pace with the Big Four firms in the deployment of innovative new technology.

The FRC consultation paper covers issues, and asks questions, in a number of areas. Examples include:

  • The use of AI and machine learning tools that collect or analyse evidence: because of their continual learning nature, their criteria for assessment may be difficult to establish or could change over time.
  • Data issues: greater access to networks and systems puts information at risk (e.g. under GDPR), and audited companies may be reluctant to allow audit firms to connect or install software and technologies in their live environments.
  • The nature of the technology may make it harder for auditors to understand or establish how data is collected and analysed and how decisions are made.
  • The ongoing need to train auditors on technologies that might be introduced, so they can utilise them in a way that generates trusted outputs.

Clearly these are real issues – for a process that aims to provide trustworthy, objective, transparent and repeatable outputs – any use of technology to speed up or improve the process must maintain these standards.

Audit technology solutions in cyber security

The cyber security realm has quickly grown to become a major area of risk and hence a focus for boards, technologists and auditors alike. The highly technical nature of threats and the adversarial nature of cyber attackers (who will actively try to find and exploit control failures) mean that technology solutions that identify weaknesses and report on specific or overall vulnerabilities are becoming more entrenched in the assurance process within this discipline.

While the audit consultations and reports mentioned above cover the wider audit spectrum, similar challenges relate to cyber security as an inherently technology-focussed area of operation.

Benefits of speed

The gains from using technology to conduct data gathering, analysis and reporting are obvious – removing the need for human questionnaires, interviews, inspections and manual number crunching. Increasing the speed of the process has a number of benefits:

  • You can cover larger scopes or bigger samples (or even avoid sampling altogether)
  • You can conduct audit/assurance activities more often (weekly instead of annually)
  • You can scale your approach beyond one part of the business to encompass multiple business units or even third parties
  • You get answers more quickly – which for things that change continually (like patching status) means same-day awareness rather than three weeks later

Benefits of flexibility

The ability to conduct audits across different sites or scopes, to specify different risk thresholds for different domains, and the ease of conducting audits at remote locations or on suppliers' networks (especially during periods of restricted travel) are all factors that can make technology a useful tool for the auditor.

Benefits of transparency

One part of the FRC's perceived problem space is transparency: you can ask a human how they derived a result, and they can probably tell you, or at least show you the audit trail of correspondence, meeting notes or spreadsheet calculations. But can you do this with software or technology?

Certainly, the use of AI and machine learning makes this hard: the learning nature and often black-box calculations are not easy to understand, to recalculate in a repeatable way, or to document. The system learns, so it is always changing, and hence the rationale behind a decision might not always be the same.

In technologies that are geared towards delivering audit outcomes, this is easier. First, if you collect and retain data and provide an easy interface from results back to the underlying cases in the source data, it is possible to take a score, rating or risk and reveal the specifics of what led to it. Secondly, it is vital that the calculations are transparent, i.e. that the methods of calculating risks and the way results are scored are decipherable.
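A toy example of what such traceability might look like; the evidence records and simple pass-rate scoring below are assumptions, but they show how a headline score can always be unwound back to the specific failing cases that produced it:

```python
# Illustrative sketch of "transparent" scoring: the rating is computed from
# retained evidence records, and each score can be traced back to the
# underlying cases. Data and scoring rule are invented for demonstration.
findings = [
    {"host": "srv-01", "check": "patch_level", "passed": False},
    {"host": "srv-02", "check": "patch_level", "passed": True},
    {"host": "srv-02", "check": "av_enabled", "passed": True},
]

def score_with_evidence(records):
    """Return a simple pass-rate score plus the failing cases as drill-down evidence."""
    passed = [r for r in records if r["passed"]]
    failed = [r for r in records if not r["passed"]]
    score = round(100 * len(passed) / len(records))
    return score, failed

score, evidence = score_with_evidence(findings)
print(f"Score: {score}/100; failing cases: {evidence}")
```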

Benefits of consistency

This is one obvious gain from technology: the logic is pre-programmed in. If you give two auditors the same data sets or evidence case files, they might draw different conclusions (possibly for valid reasons, or because they have different skill areas or experience), but the same algorithm operating on the same data will produce the same result every time.

Manual evidence gathering suffers from a number of drawbacks – it relies on written notes, records of verbal conversations, email trails, spreadsheets, or questionnaire responses in different formats. Retaining all this in a coherent way is difficult, and going back through it even harder.

Using a consistent toolset and consistent data format means that if you need to go back to a data source from a particular network domain three months ago, you will have information that is readily available and readable. And as stated above, if the source data and evidence is re-examined using a consistent solution, you will get the same calculations, decisions and results.

Benefits of systematically generated KPIs, cyber maturity measures and issues

The outputs of any audit process need to provide details of the issues found so that the specific or general cases of the failures can be investigated and resolved. But for managers, operational teams and businesses, having a view of the KPIs for the security operations process is extremely useful.

Of course, following the “lines of defence” model, an internal or external “formal” audit might simply want the results and a level of trust in how they were calculated; however for operational management and ongoing continuous visibility, the need to derive performance statistics comes into its own.

It is worth noting that there are two dimensions to KPIs: the assessment of the strength or configuration of a control or policy (how good is the control), and the extent or level of coverage (how widely is it enforced).

To give a view of the technical maturity of a defence, you really need to combine these two factors, as sketched below. A weak control that is widely implemented or a strong control that provides only partial coverage are both causes for concern.
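A minimal sketch of combining the two dimensions; the multiplicative weighting is an assumption, chosen because it makes a weak-but-wide control and a strong-but-narrow control both score poorly, as the text suggests they should:

```python
# Sketch of a two-dimensional control maturity measure. The multiplicative
# combination is an assumption, not a published standard.
def control_maturity(strength, coverage):
    """strength and coverage are each in [0, 1]; returns maturity in [0, 1]."""
    return strength * coverage

print(control_maturity(0.9, 0.4))  # strong control, partial coverage -> 0.36
print(control_maturity(0.4, 0.9))  # weak control, wide coverage      -> 0.36
```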

Benefits of separation of process stages

The final area where technology can help is in allowing the separation and distribution of the data gathering, analysis and reporting processes. It is hard to take the data, evidence and meeting notes from someone else and analyse them. For one thing, are they trustworthy and reliable (in the case of third-party assurance questionnaires, perhaps)? It is also hard to draw high-level conclusions about the analysis.

If technology allows the data gathering to be performed in a distributed way, say by local site administrators, third-party IT staff or non-expert users, but in a trustworthy way, then the overhead of the audit process is much reduced. Instead of a team having to conduct multiple visits, interviews or data collection activities, the toolset can be provided to the people nearest to the point of collection.

This allows the data analysis and interpretation to be performed centrally by the experts in a particular field or control area. So giving a non-expert user a way to collect and provide relevant and trustworthy audit evidence takes a large bite out of the resource overhead of conducting the audit, for both auditor and auditee.

It also means that a target organisation doesn’t have to manage the issue of allowing auditors to have access to networks, sites, data, accounts and systems to gather the audit evidence as this can be undertaken by existing administrators in the environment.

Making the right choice

Technology solutions in the audit process can clearly deliver benefits; however, if they are too simplistic or aim to be too clever, they can simply move the problem of providing high levels of audit quality elsewhere. A rapidly generated AI-based risk score is useful, but if it's not possible to understand the calculation, it is hard to either correct the control issues or troubleshoot the underlying process.

Where technology can assist the audit process, speed up data gathering and analysis, and streamline the generation of high- and low-level outputs it can be a boon.

Technology allows organisations to put trustworthy assurance into the hands of operations teams and managers, consultants and auditors alike to provide flexible, rapid and frequent views of control data and understanding of risk posture. If this can be done in a way that is cognisant of the risks and challenges as we have shown, then auditors and regulators such as the FRC can be satisfied.


Technology

The Future Growth of AI and ML


By Rachel Roumeliotis, VP of Data and AI at O’Reilly

We've all come to terms with the fact that artificial intelligence (AI) is transforming how businesses operate and how much it can help a business in the long term. Over the past few years, this understanding has driven a spike in companies experimenting with and evaluating AI technologies, many of which are now using AI in production deployments.

Of course, when organisations adopt new technologies such as AI and machine learning (ML), they gradually start to consider how new areas could be affected by technology. This can range across multiple sectors, including production and logistics, manufacturing, IT and customer service. Once the use of AI and ML techniques becomes ingrained in how businesses function and in the different ways in which they can be used, organisations will be able to gain new knowledge which will help them to adapt to evolving needs.

By delving into O'Reilly's learning platform, tech and business leaders can discover a variety of information about the trends and topics they need to know, allowing them to better understand their jobs and ensure that their businesses continue to thrive. Over the last few months, we have analysed the platform's usage data and discovered the most popular and most-searched topics in AI and ML. We'll explore some of the most important findings below, which give us a wider picture of where AI and ML stand and, ultimately, where they are headed.

AI outpacing growth in ML

First and foremost, our analysis shone a light on how interest in AI is continuing to grow. When comparing 2018 to 2019, engagement in AI increased by 58% – far outpacing growth in the much larger machine learning topic, which increased only 5% in 2019. When aggregating all AI and ML topics, this accounts for nearly 5% of all usage activity on the platform. While this is just slightly less than high-level, well-established topics like data engineering (8% of usage activity) and data science (5% of usage activity), interest in these topics grew 50% faster than data science. Data engineering actually decreased about 8% over the same time due to declines in engagement with data management topics.

We also discovered early signs that organisations are experimenting with advanced tools and methods. Of our findings, engagement with unsupervised learning content is probably one of the most interesting. In unsupervised learning, an AI algorithm is trained to look for previously undetected patterns in a data set with no pre-existing labels or classifications and minimal human supervision or guidance. Usage of unsupervised learning topics grew by 53% in 2018 and by 172% in 2019.
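For readers unfamiliar with the term, here is a minimal unsupervised learning example (our own illustration, using scikit-learn's k-means clustering on synthetic data): no class labels are supplied, and the algorithm discovers the two groups on its own.

```python
# Minimal unsupervised learning demo: k-means clustering on unlabelled,
# synthetic data. Requires numpy and scikit-learn.
from sklearn.cluster import KMeans
import numpy as np

rng = np.random.default_rng(0)
# Two unlabelled blobs of 2-D points; no classes are given to the algorithm.
data = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
# The two blobs end up in different clusters.
print(labels[:50].mean(), labels[50:].mean())
```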

But what's driving this growth? While the names of its methods (clustering and association) and its applications (neural networks) are familiar, unsupervised learning isn't as well understood as its supervised counterpart, which serves as the default ML strategy for most people and most use cases. This surge in unsupervised learning activity is likely driven by a lack of familiarity with the term itself, as well as with its uses, benefits, and requirements, among more sophisticated users who are faced with use cases not easily addressed by supervised methods. It is also likely that the visible success of unsupervised learning in neural networks and deep learning has helped drive interest, as has the diversity of open source tools, libraries and tutorials that support unsupervised learning.

A Deep Learning Resurrection

While deep learning cooled slightly in 2019, it still accounted for 22% of all AI and ML usage. We also suspect that its success has helped spur the resurrection of a number of other disused or neglected ideas. The biggest example of this is reinforcement learning. This topic experienced exponential growth, growing over 1,500% since 2017.

Even with engagement rates dropping by 10% in 2019, deep learning itself is one of the most popular ML methods among companies that are evaluating AI, with many companies choosing the technique to support production use cases. It might be that engagement with deep learning topics has plateaued because most people are already actively engaging with the technology, meaning growth could slow down.

Natural language processing is another topic that has shown consistent growth. While its growth rate isn't huge – it grew by 15% in 2018 and 9% in 2019 – natural language processing accounts for about 12% of all AI and ML usage on our platform. This is around 6x the share of unsupervised learning and 5x the share of reinforcement learning usage, despite the significant growth those two topics have experienced over the last two years.

Not all AI/ML methods are treated equally, however. For example, interest in chatbots seems to be waning, with engagement decreasing by 17% in 2018 and by 34% in 2019. This is likely because chatbots were one of the first applications of AI, and it is probably a reflection of the relative maturity of the application.

The growing engagement in unsupervised learning and reinforcement learning demonstrates that organisations are experimenting with advanced analytics tools and methods. These tools and techniques open up new use cases for businesses to experiment and benefit from, including decision support, interactive games, and real-time retail recommendation engines. We can only imagine that organisations will continue to use AI and ML to solve problems, increase productivity, accelerate processes, and deliver new products and services.

As organisations adopt analytic technologies, they're discovering more about themselves and their worlds. Adoption of ML, in particular, prompts people at all levels of an organisation to start asking questions that challenge what an organisation thinks it knows about itself. With ML and AI, we're training machines to surface new objects of knowledge that help us as we learn to ask new, different, and sometimes difficult questions about ourselves. By all indications, we seem to be having some success with this. Who knows what the future holds, but as technologies become smarter, there is no doubt that we will become more dependent on them.


Technology

Artificial Intelligence and Speech Analytics are crucial to Financial Organisations’ future


By Richard Stevenson, CEO, Red Box

At the beginning of 2020, when the world was still largely unaware of the looming pandemic that was set to alter so many aspects of our lives and business operations, enterprises across all sectors, from finance to retail, already felt the clock was quickly ticking for them to embark on a radical technological change.

With Industry 4.0 in full swing, Artificial Intelligence (AI) and Speech Analytics are two key technologies that promise to future-proof the financial sector. The benefits of adopting such technologies include the streamlining of entire business processes, but if unlocking the value of voice data was a key goal for banks and financial organisations in the past, 2020 and the coronavirus pandemic have only served to fast-track those plans.

The Data Speaks for Itself

We asked 500 CEOs, Directors and Middle Managers across enterprises of varying sizes to relay their thoughts on the importance of AI, Speech Analytics and voice data to their business operations. AI and Speech are already making waves in the financial sector, with banks using voice data to detect and combat fraud at a larger and faster rate than previously possible, and insurance companies fast-tracking their claims processing and underwriting through AI, so some of the research results come as no surprise. Notably, 91% of those surveyed in banking, insurance and finance already believe that voice data is, or will be, a strategic asset in the near future. This is a huge majority.

Living in such unprecedented times, businesses will be trying their best to leverage every competitive advantage they can, and the adoption of new technology is clearly high up on that list. With customer experience being key to retaining business during times of a crisis, having the right technology to support customers has proven to be a must.

To take the high street bank as an example, customers have, for decades, become accustomed to visiting their local branch. In March, many bank branches across the UK and the world closed for months on end or had their opening hours greatly reduced during the peak of lockdown. With cashiers and advisers unable to talk to customers or provide guidance on the operation of sometimes complex in-branch machines, a whole new way of banking emerged. For those already familiar with modern banking methods – online banking, chatbots and mobile apps – this wasn't so daunting. But contact centres found that they were dealing with a massive uptick in customer numbers as people were unable to access their traditional banking methods or were worried about their financial situation. Such a huge surge in calls, from customers worried about their mortgage payments or how they were going to deal with their next gas bill, put added stress on contact centre staff who were adjusting, in many cases, to having to work remotely.

Introducing the right AI and Speech Analytics tools and replacing many age-old, antiquated practices is enabling those in the finance industry to look ahead to a post-pandemic future. With voice data set to unlock major new insights into the customer journey, give organizations newfound agility, and improve both the customer and employee experience, all while cutting costs and enhancing productivity, the financial organisations of the future are looking to change how humans can be used more effectively.

Making Informed Decisions on AI & Speech Analytics

Adopting AI and Speech Analytics and maximising the use of the voice data they generate can create a plethora of benefits for an organisation; however, only 7% of the financial sector currently sees speech analytics as a strategic asset. To stay ahead of the competition, CTOs and CIOs will often be pressured into making a decision quickly when going to market, and when presented with endless choices, picking the right software is not only important, it can be game-changing.

Without the proper foundations in place, or the knowledge of how to maximise the value of corporate purchases, organisations shopping for new tools need to put the data they're currently generating under the microscope. Despite such promise, nearly two-thirds of businesses (62%) are still failing to use transcribed voice data to fuel their AI engines. Organizations that are interested in adopting this new technology must remember that AI and analytics tools are fuelled by high-quality data, i.e. the data must be extracted, processed, stored and analysed in the most optimal way – that is where the journey to extract value from AI and Speech Technology tools begins.
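As a purely conceptual sketch of that journey (the functions below are hypothetical placeholders, not real APIs), a voice recording passes through extraction, processing and storage before any analytics can be run on it:

```python
# Conceptual sketch only: transcribe() and detect_sentiment() are
# hypothetical stand-ins for whatever speech-to-text engine and analytics
# model an organisation actually uses.

def transcribe(audio_path):
    # Placeholder: a real deployment would call a speech-to-text service.
    return "customer asks about a mortgage payment holiday"

def detect_sentiment(transcript):
    # Placeholder: a real deployment would use a trained analytics model.
    return "concerned" if "mortgage" in transcript else "neutral"

def process_call(audio_path, store):
    transcript = transcribe(audio_path)        # extract
    record = {
        "source": audio_path,                  # process
        "transcript": transcript,
        "sentiment": detect_sentiment(transcript),
    }
    store.append(record)                       # store for later analysis

calls = []
process_call("call_0001.wav", calls)
print(calls[0]["sentiment"])  # -> concerned
```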

Unlocking the Full Potential of Voice Data

Captured voice data is the richest and most human source of insight, and most organisations in the financial services sector are already generating it at an incredible volume for compliance reasons. The pandemic has made C-level executives, directors and managers increasingly aware of the strategic importance this data source can have when fed through AI solutions.

Now that we are being pushed to digitization faster than ever before and entire processes once dealt with in person are being transferred to the call centre, organisations have never processed this much voice data. A single person, or team, can only go through such vast data sets with a helping hand from technology, making AI the next logical step to streamlining the customer and employee experience, and indeed the business as a whole. Correctly adopting AI and feeding it with high quality data sets will help steer organisations into a technology-enabled future. To remain relevant and competitive during and after this global pandemic, one thing is certain: companies must act now to better leverage what’s effectively one of their most valuable and strategic assets.

Continue Reading
