SPEAKING CLEARLY: THE CASE FOR VOICE BIOMETRICS


By Martin Hummel, Voice Biometrics Consultant, Soitron UK


In 1891, a quiet revolution began when a Croatian-born Argentine police official named Juan Vucetich started collecting a unique identifier from criminals in Argentina: their fingerprints – a part of the body that, it turned out, was completely unique to each and every person, and so could be used to identify them.

With Vucetich’s insight, the field of biometrics was born. And in the intervening decades we’ve seen the finger and iris become huge areas of focus for designers of secure access to systems of all kinds.

But for some reason our most common human communication feature – our voice – has only recently started to be used in the same way. The great news is that computers can now ‘hear’, store and recognise our speech – a breakthrough that has huge implications in all sorts of areas, with banking very much in the lead.

This is because a voice-driven system takes a sample of a customer’s voice (a ‘voiceprint’) and links it securely in code with their specific records. That means that every time the customer contacts the bank by phone, they can choose to skip the tedious ID process and be verified in seconds by their voice alone – simplifying their life, but also doing away with the need for banks to rely on PINs and two-factor authentication approaches to security.
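
To make the mechanics concrete, here is a minimal sketch of that enrol-and-verify flow, assuming a Python back end: a voiceprint is stored against the customer record, and each call is compared with it using a similarity score. The embedding function below is a crude stand-in – real deployments use dedicated speaker-recognition engines – and the function names and threshold are purely illustrative.

```python
import numpy as np

# Toy stand-in for a speaker-embedding model: real systems use dedicated
# voice-biometric engines; names and threshold here are illustrative only.
def extract_embedding(audio: np.ndarray, bins: int = 32) -> np.ndarray:
    """Crude 'voiceprint': binned magnitude spectrum of the audio signal."""
    spectrum = np.abs(np.fft.rfft(audio))
    chunks = np.array_split(spectrum, bins)
    embedding = np.array([chunk.mean() for chunk in chunks])
    return embedding / (np.linalg.norm(embedding) + 1e-9)

def enrol(store: dict, customer_id: str, audio: np.ndarray) -> None:
    """Link the customer's voiceprint securely to their record."""
    store[customer_id] = extract_embedding(audio)

def verify(store: dict, customer_id: str, audio: np.ndarray,
           threshold: float = 0.85) -> bool:
    """Verify a caller by comparing a fresh utterance with the enrolled voiceprint."""
    enrolled, candidate = store[customer_id], extract_embedding(audio)
    similarity = float(np.dot(enrolled, candidate))  # both vectors are unit length
    return similarity >= threshold
```

In practice the embedding comes from a trained speaker-recognition model, the threshold is tuned to balance false accepts against false rejects, and the voiceprint store sits behind the bank’s normal encryption and access controls.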

Who wouldn’t welcome the end of the PIN or memorable word?

Sounds fantastic… and it is, when it works. The reality is that voice biometrics has struggled to move beyond the initial hype and pilot stage to more widespread take-up in global financial services. But that’s changing now, with systems in place at Tatra Banka, Barclays and, more recently, HSBC.

What’s making these brands look to voice? As we said, password-based security is a pretty tedious business, often taking up a lot of time – and if the team needs to pass the customer to another desk and the process has to be re-initiated, it’s not a great experience for the customer. And given how loath customers seem to be to practise tight password discipline, even the ID check and PIN step is a far from foolproof gateway to banking in any case; a quick and accurate voice match would be far preferable, and a lot harder to ‘steal’.

Is ‘steal’ even the right word? Fooling a voiceprint system is much, much more difficult than ripping off passwords, PINs or physical documents. Wider use of biometrics would radically simplify the security end of system use – and as we become more and more of a digital economy, it’s hard to see how users can be expected to remember ever more non-biometric ways of accessing services.

From the customer contact centre or service desk point of view, meanwhile, stripping out the verification step – and knowing immediately and with confidence that this really is the customer – would free up a lot more time to help other users.

And for both parties, voice could be a huge boon in the fight against fraud and identity theft, saving millions all round.

How robust is it?

No matter how good you are at impressions, no one can replicate someone else’s voice closely enough to match their individual voiceprint. And a voiceprint lasts: after adolescence it doesn’t change over time and is yours for life – which means no need for endless re-recordings.

Given all this, and when you add industry-standard compliance such as PCI-DSS to voice – which is the case with the best systems on the market now – you have a compelling security option that the vast majority of customers will instantly welcome.

An example of voice biometrics in successful practice is Slovakia’s Tatra Banka (a member of the Raiffeisen group). First introduced in 2013, the system now holds more than 250,000 registered customer voiceprints – no less than a third of the bank’s whole customer roster. Have the promised time savings materialised? Absolutely: average client identification now takes 27 seconds per call, a reduction of 66 per cent on what it took pre-voice – implying the old check took roughly 80 seconds. In fact, 85 per cent of all calls to the bank’s contact centre now go through voiceprint verification. It’s also saving cost – fewer operators are required to provide the same level of service – which lets the team do more value-added work, like sales. And last but absolutely not least, voice boosted the bank’s all-important Net Promoter Score by 62 per cent inside just three months.

It’s important to note that voice recognition is not only used in retail banking like this, but in a growing number of financial services applications – from topping up prepaid mobile phone cards to validating web transactions and authenticating mobile applications containing sensitive personal or corporate data. The reality is that wherever a financial operation or transaction needs to be done quickly or unexpectedly, voice biometrics – which isn’t tied to one device like an ATM or phone banking – can be a great security standby.

So what do we ‘hear’ when we say ‘voice’ in biometric security? Convenience, usability, accessibility, functionality and, most of all, security. These are the reasons banks like yours could do a lot worse than join the biometrics revolution that Juan Vucetich started back in 1891.


Using AI to identify public sector fraud


When it comes to audits in the public sector, both accountability and transparency are essential. Not only is the public sector under increasing scrutiny to provide assurance that finances are being managed appropriately, but it is also vital to be able to give early warnings of financial pressures or failures. Right now, given the huge value of funds flowing from the public purse into the hands of individuals and companies due to COVID measures, renewed focus on audit is essential to ensure that these funds are used for the purposes intended by parliament.

As Rachel Kirkham, former Head of Data Analytics Research at the UK National Audit Office and now Director of AI Solutions at MindBridge, discusses, introducing AI to identify and rectify potential problems before they become an issue is a key way for public sector organisations and bodies to ensure public funds are being administered efficiently, effectively and economically.

Crime Wave

The National Crime Agency has warned repeatedly that criminals are seeking to capitalise on the Covid crisis and the latest warnings suggest that coronavirus-related fraud could end up costing the taxpayer £4bn. From the rise in company registrations associated with Bounce Back loan fraud, to job retention scheme (furlough) misuse, what plans are in place for government departments to identify the scale of fraud and error and then recoup lost funds?

There is no doubt that the speed with which these schemes were deployed, when the public sector was also dealing with a fundamental shift in service delivery, created both opportunities for fraud and risk of systematic error. But six months on, while the pandemic is still creating economic challenges, the peak of the financial crisis has passed. Ongoing financial support for businesses and individuals remains important and it is now essential to learn lessons in order to both target fraudulent activity and, critically, minimise the potential loss of public funds in the future.

Timing is everything. Government has an opportunity to review the last 6 months’ performance and strengthen internal controls to ensure that further use of public funds is appropriate. Technology should play a critical role in detecting and preventing future fraud and error.

Intelligence-Led Audit

If the public sector is to move beyond the current estimates of fraudulent activity and gain real insight into both the true level of fraud and the primary areas to address, an intelligent, data-led approach will be critical. Artificial Intelligence (AI) applied to public sector IT systems can detect errors, fraud or mismanagement of funds, and enable the process changes required to prevent further issues.

HMRC is leading the way, using its extensive experience in identifying and tackling tax fraud to address the misuse of furlough – an approach that has led to many companies making use of the amnesty to repay erroneous claims. Other public sector bodies, especially smaller local authorities, are less likely to have the skills or resources in place to undertake the required analysis. If public money is to be both recouped and safeguarded in the future, it is likely that a central government initiative will be required.

Data resources are key; the government holds a vast amount of data that could be used, although this will require cross-government collaboration and co-operation. It is possible that the delivery speed of COVID-19 responses will have led to data collection gaps – an issue that will need rapid exploration and resolution. It should be a priority to take stock of existing data holdings to identify any gaps and, at the same time, use Machine Learning to identify anomalies that could reveal either fraud or systematic error.
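
As an illustration of the kind of anomaly detection described above, the sketch below runs an unsupervised model over a handful of made-up claim records and flags those that look unlike the rest. The column names, figures and contamination setting are invented for the example; a real exercise would draw on the cross-government data holdings discussed here, and flagged records are candidates for audit, not proof of fraud.

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical claim records; real schemes hold far richer data.
claims = pd.DataFrame({
    "amount_claimed":     [12000, 9500, 48000, 11000, 10200],
    "employees_declared": [4, 3, 5, 4, 3],
    "company_age_days":   [2100, 3400, 14, 1800, 2600],  # very new firms are a known risk signal
})

# Unsupervised anomaly detection: flag records that look unlike the bulk of claims.
model = IsolationForest(contamination=0.2, random_state=0)
claims["flag"] = model.fit_predict(claims)   # -1 = anomalous, 1 = typical

print(claims[claims["flag"] == -1])          # candidates for manual audit
```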

Taking Control

In addition to identifying fraud, this insight can also feed back into claims processes, providing public sector bodies with a chance to move away from retrospective review towards the use of predictive analytics to improve control. With an understanding of the key indicators of fraud, the application process can automatically raise an alert when a claim looks unusual, minimising the risk of such claims being processed.
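
A minimal sketch of that at-submission check might look like the following, assuming the key indicators have already been identified from historical analysis. The indicator names and thresholds here are invented for illustration.

```python
# Illustrative indicators only: in practice these rules (or a trained model)
# would come from analysis of historical claims and known fraud cases.
KEY_INDICATORS = {
    "company_age_days": lambda v: v < 30,      # very newly registered company
    "amount_claimed":   lambda v: v > 50_000,  # unusually large claim
    "prior_claims":     lambda v: v > 3,       # repeated claims in a short window
}

def check_claim(claim: dict) -> list:
    """Return the indicators this claim trips; any hit holds it for review."""
    return [name for name, trips in KEY_INDICATORS.items() if trips(claim[name])]

claim = {"claim_id": "C-1042", "company_age_days": 12,
         "amount_claimed": 80_000, "prior_claims": 0}
hits = check_claim(claim)
if hits:
    print(f"Hold claim {claim['claim_id']} for review: {', '.join(hits)}")
```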

While many public sector bodies may still feel overwhelmed, it is essential to take these steps quickly. Even at a time of crisis, good processes are important – failing to learn from the mistakes of the past few months will simply compound the problem and lead to greater misuse of public funds. The public sector, businesses, and individuals need to learn how to operate in this environment, and that requires the right people to spend time looking at the data, identifying problems and putting in place new controls. With an AI-led approach, these individuals will learn lessons about what worked and what didn’t work in this unprecedented release of public funds. And they will gain invaluable insight into the identification of fraud – something that will provide on-going benefit for all public sector bodies.


Why dependency on SMS OTPs should not be the universal solution


By Chris Stephens, Head of Banking Solutions at Callsign

In our day-to-day lives, SMS one-time passwords, also known as OTPs, have unintentionally become the default authentication factor when carrying out high risk and confidential transactions online. Banks, telcos, and businesses are opting for this method as SMS OTPs are relatively quick and simple to put in place. In our digital age, this solution works for the majority of users, who more often than not possess a mobile phone and are familiar with the user experience. As a result, companies are using them to securely authenticate both their customers and employees.
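
For context, the server-side half of an SMS OTP flow is usually very simple, which is part of its appeal. The sketch below shows one plausible shape in Python – generate a short random code, store only a hash with an expiry, verify once and discard – with delivery left to whatever SMS gateway the business uses; the code length and time-to-live are illustrative.

```python
import hashlib
import hmac
import secrets
import time

OTP_TTL_SECONDS = 300  # codes typically expire after a few minutes

def issue_otp(store: dict, user_id: str) -> str:
    """Generate a 6-digit code and store only its hash plus an expiry time."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    store[user_id] = (hashlib.sha256(code.encode()).hexdigest(),
                      time.time() + OTP_TTL_SECONDS)
    return code  # handed to the SMS gateway for delivery, never logged

def verify_otp(store: dict, user_id: str, submitted: str) -> bool:
    """Check the submitted code against the stored hash, within the expiry window."""
    record = store.pop(user_id, None)  # single use: removed whether or not it matches
    if record is None:
        return False
    digest, expires_at = record
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(digest,
                               hashlib.sha256(submitted.encode()).hexdigest())
```

Note that none of this protects the weakest link: once the code leaves the server, its security depends entirely on the SMS channel and the device it lands on – which is exactly where the attacks described later in this article take place.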

When looking into SMS OTPs, businesses should consider the bigger picture and how time- and cost-efficient the solution is as a whole, taking into account key elements that may have been neglected in the past, such as hidden fees and security vulnerabilities. There are also other options better suited to different business needs – the European Banking Authority (EBA) has already recognised alternative factors, such as the secure binding of a device to achieve possession and the use of behavioural biometrics as an inherence factor. Earlier this year, for example, Google officially began moving away from SMS OTP-based authentication, while in the UK both the Financial Conduct Authority (FCA) and UK Finance have recommended that banks reduce their dependence on it over the longer term. In the past, financial institutions chose this solution largely because it saved them time in becoming compliant with the PSD2 Strong Customer Authentication (SCA) regulation.

It is common knowledge that SMS OTPs are not without their flaws, and with the extended deadline for SCA for e-commerce less than a year away (September 2021) – is now the best time for the industry to look elsewhere for more intelligent approaches to authentication?

SMS as the go-to solution

Fraudsters are sophisticated criminals who attack the weakest points in the system. They have observed that banks and businesses rely heavily on SMS OTPs for 2FA (two-factor authentication), which is why they continue to abuse and weaken existing systems and exploit these solutions for their own benefit. A common technique is the SIM swap, where the fraudster steals personal information about the victim and then contacts the target’s mobile operator pretending that their phone has been lost or stolen. With lockdown rules constantly changing, not all customers can easily visit stores right now, so, to serve their customers, operators are dependent on mobile authentication channels that are more susceptible to this type of manipulation.

SIM-swap fraud is easy to carry out. As soon as the fraudster has duped the mobile operator, a number transfer is authorised and activated on a new SIM card, granting the cybercriminal access to the victim’s number and, consequently, every one-time password and authentication code sent to it. In March 2020, Europol warned that SIM-swap scams are a growing problem across Europe, following an investigation that resulted in the arrest of 12 suspects associated with the theft of more than €3 million ($3.3 million).

However, consumers and businesses need to be aware that SIM-swap fraud is not the only method cybercriminals are deploying to intercept OTPs from their victims during the pandemic and beyond.

Spotting a scam

SIM-swap attacks are not the only method scammers are using; there is also a growing number of cases that take advantage of malware and remote access applications to steal SMS OTPs. Attackers socially engineer individuals into downloading remote access apps or hidden surveillance apps that grant access to the victim’s device without the attacker ever physically touching it. The cybercriminals can then read the victim’s messages directly, or secretly record all of their texts and phone calls to another device. As with a SIM swap, the victim’s personal messages – including OTPs – are tapped into by the fraudster, but this time the fraudster also has direct access to the target’s device.

Several different parties are involved in the delivery of OTPs, and at each stage of the process there is an opportunity for fraudsters to capture messages. There is also the potential for mass compromise as a result of hidden vulnerabilities in the SS7 network, and the wider attack surface to consider. With all of this in mind, banks need a good overview of all data sub-processors so they can adopt the most suitable security controls, such as multi-factor authentication (MFA), audit logs, and dashboards.

Watch out for hidden costs

It comes as no surprise that intercepted OTPs result in fraud losses, which quickly increase as hidden fees go unnoticed over time. Beyond the upfront costs of SMS OTPs, such as cost per text, there are also several hidden costs that are difficult to budget for and avoid. They are typically the result of the domino effect of the aforementioned issues – forcing businesses into a reactive mode that is tricky to handle.

As an example, where drop-offs take place in an authentication journey – including when SMS texts are simply not received – financial institutions need to be ready to manage an influx of calls to their customer service helplines, and the associated costs. Worse still for the bank, the customer may decide to use another card to make the payment: customers are likely to abandon a card when they are fed up with a journey that involves too much unnecessary friction. These abandonments reduce interchange fee income for banks and could even shrink the customer base for merchants.

Evaluating the user experience

Whilst most consumers possess a mobile phone, SMS is not a reliable solution for everybody. For instance, SMS OTPs are not accessible to those living in remote or poorly served locations, who may struggle to receive SMS alerts. The overall experience is also cumbersome: it takes roughly 30 seconds of transaction time for the text to be delivered, compared with the almost instantaneous verification offered by alternative authentication approaches, such as biometrics.

In this digital age, businesses are constantly adapting to accommodate different generations including Gen Z who are digital natives – so mobile use is only going to increase and, along with it, the volume of transactions taking place on these devices will also grow. This goes hand in hand with the ever-changing needs and expectations of customers as they look for hyper-personalised online experiences as the new norm. Yes, SMS OTPs are mobile-first, but they do still require the user to switch to another app to view the SMS so they can complete the transaction, which can be annoying for the customer as it interrupts the e-commerce user journey. After a friction-filled experience, it would be unsurprising if the user then decides to abandon the transaction. With this and other existing security implications in mind, the EBA recommends banks adopt other options.


Benefits of behavioural biometrics

Every person has their own unique behaviour and habits when swiping across a screen, which can be tracked by analysing the data signals captured from hardware sensors as the user engages with their device. These signals are used to derive features such as finger movement, hand orientation, and wrist strength. Artificial intelligence and machine learning then make it possible to analyse this information and build a personalised profile of that user’s swipe behaviour, against which it takes only milliseconds to confirm whether the customer is who they say they are. This immediately allows the bank to seamlessly carry out the appropriate security actions and stop fraudsters in their path before they can even begin using a target’s device.
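
The sketch below illustrates the general principle rather than any vendor’s product: a handful of made-up features are derived from raw touch coordinates and timestamps, then compared against the user’s stored profile with a simple statistical check. Production systems capture far more signals and score them with trained models.

```python
import numpy as np

# Illustrative features only; real systems derive many more signals and
# score them with trained machine-learning models, not a simple z-score check.
def swipe_features(touch_points: np.ndarray, timestamps: np.ndarray) -> np.ndarray:
    """Summarise one swipe as [mean speed, path length, duration]."""
    step_lengths = np.linalg.norm(np.diff(touch_points, axis=0), axis=1)
    path_length = step_lengths.sum()
    duration = timestamps[-1] - timestamps[0]
    return np.array([path_length / duration, path_length, duration])

def matches_profile(features: np.ndarray, profile_mean: np.ndarray,
                    profile_std: np.ndarray, tolerance: float = 3.0) -> bool:
    """Accept the swipe if every feature sits within `tolerance` standard deviations of the profile."""
    z_scores = np.abs(features - profile_mean) / profile_std
    return bool(np.all(z_scores < tolerance))
```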

Behavioural biometrics is ideal for positively identifying an individual and is also effective at identifying bad actors, including when cybercriminals use technologies such as bots or remote access Trojan (RAT) software to control transactional flows without the user being aware. This approach to biometrics works on both high- and low-end devices and helps to protect potential victims against both blind attacks (where the fraudster has never observed how the user swipes their phone) and over-the-shoulder attacks (where the fraudster has been able to observe the victim’s swipe movements). Both forms of attack can be detected by unique algorithms with an accuracy rate of 98%; by layering in device intelligence and locational habits, it is the most accurate and robust identification method currently available on the market. By preventing criminal access even when the attacker has observed the user’s behaviour, it offers an added level of security to businesses and banks that other traditional methods, such as a PIN or password, cannot.
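
The ‘layering’ mentioned above can be pictured as combining several signals into one risk decision. The following is a deliberately simplified sketch with made-up weights; in practice such weights are learned from data rather than set by hand.

```python
# Hypothetical weights for illustration; production systems learn them from data.
def fused_risk(behaviour_score: float, known_device: bool, usual_location: bool) -> float:
    """Combine behavioural, device and location signals into one risk score (0 = safe, 1 = risky)."""
    risk = 1.0 - behaviour_score             # weak behavioural match -> higher risk
    risk += 0.0 if known_device else 0.3     # unrecognised device adds risk
    risk += 0.0 if usual_location else 0.2   # unusual location adds risk
    return min(risk, 1.0)

def allow_transaction(behaviour_score: float, known_device: bool,
                      usual_location: bool, threshold: float = 0.5) -> bool:
    """Let the transaction proceed only if the fused risk stays below the threshold."""
    return fused_risk(behaviour_score, known_device, usual_location) < threshold
```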

In order for organisations to maintain a competitive edge and successfully navigate the pandemic, they will need to deliver hyper-personalised journeys that meet consumers’ expectations. Consumers are increasingly looking to bank with, or sign up to, services that offer a secure and bespoke experience meeting their daily needs during and beyond the pandemic.

Therefore, a holistic approach to security empowers businesses to take back control of their fraud and authentication management. Unfortunately, single point solutions, like SMS OTPs, do not allow businesses to scale or provide enough flexibility to meet these requirements. By adopting a strategic, and intelligence-based, approach financial institutions and organisations will be able to upgrade security measures and enhance the user experience – whilst keeping IT spend low.


The rise of AI in compliance management


By Martin Ellingham, director, product management compliance at Aptean, who looks at the increasing role of AI in compliance management and just what we can expect for the future

Artificial Intelligence (or AI, as it’s now more commonly known) has been around in some shape or form since the 1960s. Although now into its seventh decade, as a technology it’s still in its relative infancy, with the nirvana of general AI still just the stuff of Hollywood. That’s not to say that AI hasn’t developed over the decades – of course it has – and it now presents itself not as a standalone technology but as a distinct and effective set of tools that, although not a panacea for all business ills, certainly brings with it a whole host of benefits for the business world.

As with all new and emerging technologies, wider understanding takes time to take hold, and this is proving especially true of AI, where a lack of understanding has led to a cautious, hesitant approach. Nowhere is this more evident than when it comes to compliance, particularly within the financial services sector. Very much playing catch-up with the industry it regulates, until very recently the UK’s Financial Conduct Authority (FCA) had hunkered down behind a policy of demanding maximum transparency from banks in their use of AI and machine learning algorithms, mandating that banks justify all kinds of automated decision-making – almost, but not quite, shutting down the use of AI in any kind of front-line customer interaction.

But as regulators learn and understand more about the potential benefits of AI – seeing first-hand how businesses are implementing AI tools not only to increase business efficiency but to add a further layer of customer protection to their processes – they are gradually peeling back the tight regulations to make more room for it. The FCA’s recent announcement of the Financial Services AI Public Private Forum (AIPPF), in conjunction with the Bank of England, is testament to this increasing acceptance. The AIPPF is set to explore the safe adoption of AI technologies within financial services, and while it does not pull back from the demand that AI be applied intelligently, it signals a clear step forward in the regulator’s approach, recognising how financial services firms are already making good use of certain AI tools to tighten up compliance.

Complexity and bias

So what are the issues standing in the way of wider adoption of AI? To start with, there is the inherently complex nature of AI itself. If firms are to deploy AI, in any guise, they need to ensure they have a solid understanding not only of the technology but of the governance surrounding it. The main problem here is the worldwide shortage of programmers. With the list of businesses wanting to recruit programmers no longer limited to software companies – it now includes any organisation that recognises the competitive advantage to be gained from developing its own AI systems – the shortage is becoming more acute. And even if businesses can recruit AI programmers, if it takes an experienced programmer to understand AI, what hope does a compliance expert have?

For the moment, there is still a nervousness among regulators about how they can possibly implement robust regulation when there is still so much to learn about AI, particularly when there is currently no standard way of using AI in compliance. With time this will obviously change, as AI becomes more commonplace and general understanding increases, and instead of the digital natives that are spoken about today, businesses and regulators will be led by AI-natives, well-versed in all things AI and capable of implementing AI solutions and the accompanying regulatory frameworks.

As well as a lack of understanding, there is also the issue of bias. While businesses have checks and balances in place to prevent human bias coming into play – in lending decisions, for example – they might be mistaken in thinking that implementing AI technologies will eradicate any risk of bias emerging. AI technologies are programmed by humans and are therefore fallible, with unintended bias a well-documented outcome of many AI trials, leading certain academics to argue that bias-free machine learning doesn’t exist. This presents a double quandary for regulators. Should they encourage the use of a technology in which bias is seemingly inherent? And if they do pave the way for the wider use of AI, do they understand enough about the technology to pinpoint where any bias has occurred, should the need arise? With questions such as these, it’s not difficult to see why regulators are taking their time to understand how AI fits with compliance.

Complementary AI

So, bearing all this in mind, where are we seeing real benefits from AI with regard to compliance, if not right now then in the near future? AI is very good at dealing with tasks on a large scale and in super-quick time. It’s not that AI is more intelligent than the human brain; it’s just that it can work at much faster speeds and on a much bigger scale, making it the perfect fit for the data-heavy world in which we all live and work. For compliance purposes, this makes it an ideal solution for double-checking work and an accurate detector of systemic faults – one of the major challenges that regulators in the financial sector in particular have faced in recent years.

In this respect, rather than a replacement for humans in the compliance arena, AI is adding another layer of protection for businesses and consumers alike. When it comes to double-checking work, AI can pinpoint patterns or trends in employee activity and customer interactions much quicker than any human, enabling remedial action to be taken to ensure adherence to regulations. Similarly, by analysing the data from case management solutions across multiple users, departments and locations, AI can readily identify systemic issues before they take hold, enabling the business to take the necessary steps to rectify practices to guarantee compliance before they adversely affect customers and before the business itself contravenes regulatory compliance.
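
As a toy illustration of that cross-cutting analysis, the sketch below aggregates a made-up case-management extract by department and flags any unit whose breach rate sits well above the organisation-wide rate. The column names, figures and doubling threshold are all invented for the example.

```python
import pandas as pd

# Hypothetical case-management extract; the column names are illustrative.
cases = pd.DataFrame({
    "department":    ["Loans", "Loans", "Cards", "Cards", "Mortgages", "Mortgages"],
    "cases_handled": [420, 380, 510, 495, 300, 310],
    "breaches":      [3, 2, 4, 5, 19, 22],   # recorded compliance breaches
})

by_dept = cases.groupby("department")[["cases_handled", "breaches"]].sum()
by_dept["breach_rate"] = by_dept["breaches"] / by_dept["cases_handled"]

# Flag departments whose breach rate is well above the organisation-wide rate.
overall_rate = by_dept["breaches"].sum() / by_dept["cases_handled"].sum()
systemic = by_dept[by_dept["breach_rate"] > 2 * overall_rate]
print(systemic)   # candidates for early remedial action, not proof of failure
```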

Similarly, when it comes to complaint management for example, AI can play a vital role in determining the nature of an initial phone call, directing the call to the right team or department without the need for any human intervention and fast-tracking more urgent cases quickly and effectively. Again, it’s not a case of replacing humans but complementing existing processes and procedures to not only improve outcomes for customers, but to increase compliance, too.
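
A heavily simplified version of that call-routing step is sketched below, with keyword rules standing in for a trained intent classifier working on a call transcript. The team names and keywords are made up, and the more urgent category is checked first so it can be fast-tracked.

```python
# Keyword rules as a stand-in for a trained intent classifier; categories and
# keywords are invented for illustration. Urgent queues are listed first.
ROUTES = {
    "fraud":     ["fraud", "stolen", "unauthorised", "scam"],
    "complaint": ["complaint", "unhappy", "mis-sold", "compensation"],
    "payments":  ["payment", "transfer", "standing order", "direct debit"],
}

def route_call(transcript: str) -> str:
    """Return the team a transcribed enquiry should be directed to."""
    text = transcript.lower()
    for team, keywords in ROUTES.items():   # dict order doubles as priority order
        if any(keyword in text for keyword in keywords):
            return team
    return "general"  # default queue when nothing matches

print(route_call("I want to raise a complaint about being mis-sold insurance"))  # -> complaint
```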

At its most basic level, AI can minimise the time taken to complete tasks and reduce errors, which, in theory, makes it the ideal solution for businesses of all shapes, sizes and sectors. For highly regulated industries, where compliance is mandatory, it’s not so clear cut. While there are clearly benefits to be had from implementing AI solutions, for the moment they should be regarded as complementary technologies, protecting both consumers and businesses by adding an extra guarantee of compliant processes. While knowledge and understanding of the intricacies of AI are still growing, it would be a mistake to implement AI technologies across the board, particularly when a well-considered human response to the nuances of customer behaviours and reactions plays such an important role in staying compliant. That’s not to say we should be frightened of AI – and nor should the regulators. As the technology develops, so will our wider understanding. It’s up to businesses and regulators alike to do better: being totally transparent about their uses of AI and putting in place a robust, reliable framework to monitor the ongoing behaviour of their AI systems.
