- Tony Bethell, VP Strategic Alliances, ClusterSeven
With the GDPR commencement date of 25th May 2018 just around the corner, the ramifications of non-compliance are no longer in dispute. However, the approach of many organisations – as they race towards this compliance deadline – appears to be simply to build an inventory of IT-supported assets that hold GDPR-sensitive data. For many, this alone is proving a significant task. While a sensible first step, it barely scratches the surface of this far-reaching regulation. The real challenge for GDPR practitioners is to create a ‘sustainable’ compliance process for the foreseeable future.
Much effort in addressing GDPR has gone into identifying and managing the relevant data in core IT applications. The next challenge for the organisation is to apply the same level of control, monitoring and attestation over unstructured data – typically spreadsheets, and other applications managed by the business rather than IT.
The key challenge here – whether looking at Excel spreadsheets, Access databases, or business management analytics tools, for example – is that these are distributed right across the business, in an uncontrolled way, and may contain the type of personal data covered by GDPR. Additional files and applications may well be added over time, obliging the organisation to update its inventory. Given that many are using Excel spreadsheets to manage their inventory, there is a management headache: this updating will be done manually, with no audit trail in place to satisfy GDPR, or the auditors.
Organisations must demonstrate to their auditors that they have permission to hold the personal data in these files, as well as ensure easy portability and erasure of records on demand. It is an enormous task, and anecdotal evidence suggests that the majority of organisations haven’t even turned their attention to the problem these files present for GDPR compliance.
FAIM – A four-step process for sustainable GDPR compliance
Organisations can mitigate the non-compliance risk of spreadsheets by adopting FAIM – a technology-supported four-step process (Find, Analyse, Inventory and Monitor) – that enables them to make compliance with the GDPR more ‘business as usual’ (BAU) when it comes to the spreadsheet environment.
Identifying the files that contain the sensitive personal data is obvious, but given the often significant business-owned application environment, finding a GDPR-relevant file can be akin to finding a needle in a haystack. Today there are a number of tools available on the market that organisations can use to scan the files in the business environment.
Powerful search tools are essential here, so that huge volumes of files can be analysed for GDPR-relevant terms, such as name, address, email address, employee reference and similar. Some file types are easier to search than others; spreadsheets can be especially challenging, since the key GDPR information is held at cell level, which can be difficult to scan.
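The cell-level scanning described above can be sketched in a few lines. This is an illustrative example only, not any vendor's implementation: the patterns for an email address and an employee reference (the `EMP-` format is a hypothetical in-house convention) stand in for a real, much larger GDPR pattern library.

```python
import re

# Hypothetical patterns for a few GDPR-relevant data types; a real tool
# would carry a far larger, configurable library of these.
GDPR_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "employee_ref": re.compile(r"\bEMP-\d{4,}\b"),  # assumed in-house format
}

def scan_cells(rows):
    """Scan an iterable of rows (lists of cell values) and report which
    GDPR-relevant patterns appear, with their row/column coordinates."""
    hits = []
    for r, row in enumerate(rows):
        for c, cell in enumerate(row):
            text = str(cell)
            for label, pattern in GDPR_PATTERNS.items():
                if pattern.search(text):
                    hits.append((label, r, c))
    return hits

rows = [
    ["Name", "Contact"],
    ["A. Smith", "a.smith@example.com"],
    ["B. Jones", "EMP-00123"],
]
print(scan_cells(rows))  # -> [('email', 1, 1), ('employee_ref', 2, 1)]
```

Reporting the exact cell coordinates, not just the file name, is what makes later remediation and attestation tractable.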
The next step is to produce reports based on the organisation’s GDPR profile and assess them to show ‘hot spots’ – i.e. files in the spreadsheet landscape that potentially contain GDPR-relevant data. Additionally, the tools categorise the files on the basis of high, medium and low risk, which is very useful from a prioritisation perspective. For instance, files that include personal data such as ethnic information, passport numbers, credit card details, trade union membership and so on, would be categorised as high risk files and would need compliance processes to be applied to them urgently.
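The high/medium/low triage described above amounts to a simple mapping from detected data types to a risk tier. The tiers below are an assumption for illustration – the high-risk set mirrors the GDPR "special category" examples in the text – and any real categorisation policy would be set by the organisation's own GDPR profile.

```python
# Assumed risk tiers: high-risk types mirror the special-category examples
# mentioned in the text; the rest of the mapping is illustrative.
HIGH_RISK = {"ethnicity", "passport_number", "credit_card", "trade_union"}
MEDIUM_RISK = {"email", "phone", "address", "employee_ref"}

def categorise(detected_types):
    """Return 'high', 'medium' or 'low' for a set of detected data types."""
    types = set(detected_types)
    if types & HIGH_RISK:
        return "high"
    if types & MEDIUM_RISK:
        return "medium"
    return "low"

print(categorise({"email", "passport_number"}))  # -> high
print(categorise({"email"}))                     # -> medium
```

A single high-risk data type is enough to put a file in the high-risk bucket, which is what drives the prioritisation: those files get compliance processes applied first.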
Much of the effort put into complying with GDPR thus far has focussed on these two stages. The key to sustainable GDPR compliance lies in the next two.
Having identified and analysed the key GDPR files, the next step is to pull them into a management framework that allows a business to proactively monitor them. This can encompass both IT-managed and non-IT-managed GDPR files. Placing the key GDPR files in an inventory framework allows businesses to proactively monitor their most sensitive, highest-risk files, and provides a framework for attestation.

To make compliance with the GDPR ‘business as usual’, an automated attestation process, underpinned by full auditability, is fundamental. It will ensure that the organisation is capturing data in accordance with the corporate GDPR policy. This attestation capability provides a robust, flexible and powerful model that helps staff and line managers manage their GDPR compliance, by confirming the GDPR status of files and confirming they comply with the regulations.

It also provides an efficient framework for managing and resolving non-compliant files. For example, if an individual needs to have their records removed from a file, or set of files, the attestation framework allows staff to confirm that the individual has been removed, bringing accuracy and consistency to an often manual, error-prone process.
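The attestation-with-audit-trail idea above can be illustrated with a minimal sketch. All names here (`AttestationLog`, the file path, the user) are hypothetical; the point is that the log is append-only, so every attestation – including earlier non-compliant ones – survives as the audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Attestation:
    file_path: str
    attested_by: str
    compliant: bool
    note: str = ""
    # Timestamp is recorded automatically so the trail is self-dating.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class AttestationLog:
    """Append-only log: nothing is ever overwritten, which is what
    gives auditors a complete history for each file."""
    def __init__(self):
        self._entries = []

    def attest(self, file_path, user, compliant, note=""):
        entry = Attestation(file_path, user, compliant, note)
        self._entries.append(entry)
        return entry

    def history(self, file_path):
        return [e for e in self._entries if e.file_path == file_path]

log = AttestationLog()
log.attest("hr/payroll.xlsx", "line.manager", False, "contains leaver's record")
log.attest("hr/payroll.xlsx", "line.manager", True, "record erased on request")
print([e.compliant for e in log.history("hr/payroll.xlsx")])  # -> [False, True]
```

The erasure example from the text maps directly onto this: the first entry records the non-compliant state, the second attests that the individual's record was removed, and both remain visible to the auditor.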
GDPR compliance isn’t about being compliant on 25th May 2018; it’s about meeting the regulator’s requirements in the days, weeks and years to come. Organisations must ensure that they are able to monitor GDPR-relevant data for version control, changes and approvals, and new data, as well as maintaining the attestation process.
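One common way to implement the change monitoring described above – an assumption on our part, not a description of any particular product – is to fingerprint each inventoried file and compare snapshots over time, flagging additions, removals and modifications for review:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def detect_changes(baseline, current):
    """Compare two {path: fingerprint} inventory snapshots and report
    files added, removed, or modified since the baseline was taken."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    modified = sorted(p for p in baseline
                      if p in current and baseline[p] != current[p])
    return {"added": added, "removed": removed, "modified": modified}

# Illustrative snapshots (file names and contents are hypothetical).
baseline = {"a.xlsx": fingerprint(b"v1"), "b.xlsx": fingerprint(b"v1")}
current = {"a.xlsx": fingerprint(b"v2"), "c.xlsx": fingerprint(b"v1")}
print(detect_changes(baseline, current))
# -> {'added': ['c.xlsx'], 'removed': ['b.xlsx'], 'modified': ['a.xlsx']}
```

Each flagged change would then feed the approval and attestation workflow, so that new or altered GDPR-relevant data never silently escapes the inventory.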
A poll during a recent ClusterSeven webinar showed that 91 per cent of organisations are merely building GDPR-relevant file lists – i.e. only nine per cent are actually creating sustainable processes for compliance. These findings are potentially reflective of organisations in the wider industry too. Organisations would do well to change their approach to GDPR compliance. A fine of up to four per cent of global turnover, plus the reputational damage that may well come with it, warrants a sustainable approach to compliance with the GDPR.
However an organisation approaches GDPR, it needs to recognise that its GDPR compliance will evolve as the business evolves. It needs systems and processes in place that capture unstructured as well as structured data. It must be able to accommodate changes to this dataset, such as new data being added, or people requesting that their data be removed or moved to a new organisation. Finally, it needs to be able to deliver this capability, and demonstrate it, efficiently and cost-effectively.