By James E. Lee
Complying with the new rules is much more complicated than that
It seems there has always been a fundamental disconnect between how the United States and the European Union view privacy and data security. The newly enforceable EU General Data Protection Regulation (GDPR) and its opt-in email rule is forcing a reckoning that could fundamentally change how data is collected, used, and protected around the globe.
Fundamentally different approaches
In the US, privacy and security are largely treated as two separate issues. Privacy is a right the courts have inferred from the US Constitution, which means it is subject to change through judicial interpretation or further amendment. The last comprehensive data protection law the US Congress passed was 32 years ago – the Computer Fraud and Abuse Act of 1986. Since then, new legislation has been more indirect: improved data security has been implied rather than mandated, or left to the states to address.
In the European Union, privacy is considered a human right and data security an integral element of privacy.
Concepts like the “Right to be Forgotten” and “Privacy by Design” are bedrock beliefs in the EU that simply do not exist in the US (and many other parts of the world), creating a near constant conflict between the US government, the US private sector, and EU regulators.
It’s all about the data
US companies make billions of dollars collecting and selling consumer data, with few requirements to obtain permission or disclose what information is collected and stored. EU companies, on the other hand, are severely restricted in what information they can collect and how it can be used. In the EU, the data is still considered the consumer’s information; in the US, the information is owned by the company that collects it.
When the original EU Data Protection Directive, adopted in 1995, failed to be the catalyst for companies to protect privacy and data, the European Parliament adopted the General Data Protection Regulation (GDPR) – an EU law that is binding on all member states. (The United Kingdom has already passed legislation that enshrines the GDPR and will survive the so-called Brexit.)
Adopted in 2016 and enforceable as of 25 May 2018, the GDPR is a wide-ranging law. Most of the attention and compliance effort, though, has focused on only two areas: the requirement to give EU residents control over their information, and the potential for significant fines.
The former resulted in inboxes filled to capacity prior to the deadline with mandatory opt-in emails in the EU and updated privacy policies outside the EU. The latter has business leaders holding their collective breath, waiting to see which organization becomes the first to be fined up to 20 million EUR or 4% of global sales, whichever is greater.
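To make that "whichever is greater" clause concrete, here is a minimal sketch of the top-tier penalty ceiling. The function name is mine, and this is illustrative only – actual fines are set case by case by each supervisory authority:

```python
def gdpr_max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of a top-tier GDPR fine: the greater of
    EUR 20 million or 4% of global annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# For a company with EUR 10 billion in global sales, the 4% tier dominates:
print(gdpr_max_fine_eur(10_000_000_000))   # 400000000.0

# Below EUR 500 million in turnover, the EUR 20 million floor applies:
print(gdpr_max_fine_eur(100_000_000))      # 20000000.0
```

Note the crossover point: any company with more than 500 million EUR in global annual sales faces a ceiling above the 20 million EUR floor.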
(As an aside, we may find out sooner rather than later. Lawsuits were filed against major tech companies, including Google and Facebook, the very day the GDPR became enforceable, alleging that their compliance efforts fall short of the regulation’s requirements.)
Marketing databases have been cleansed and privacy policies have been updated. The heavy lifting to comply with the GDPR is over, right? Not by a long shot. Arguably, the real work is only starting.
More than just Opt-in
Chapter Four, Article 32 of the GDPR requires organizations subject to the law to take “appropriate technical and organisational measures to ensure a level of security appropriate to the risk.”
The terms being used to describe this requirement are “Privacy by Design” and “Security by Default.” In other words, privacy protection must be considered from the very beginning of the product development cycle, and data security must be embedded in every product, process, and service.
Let’s not forget about those big fines for violating the GDPR. Enforcement actions are based on a company’s failure to comply, not just when a breach occurs as is usually the case in the US. “Failing to ensure a level of security appropriate to the risk” can take many forms, but we already know of one threat that is particularly problematic for software dependent businesses: failure to patch known software flaws on a timely basis.
One vulnerability management vendor claims that 86% of high-severity flaws in web applications go unpatched for 30 days or more. Oracle executives, whose company offers the world’s most popular software development language, say their customers lag in patching by months, if not years.
That was the state of play in early 2018 when the United Kingdom’s Information Commissioner’s Office (ICO) issued a fine against Carphone Warehouse for a breach, citing a “seriously inadequate” patching program. The ICO also issued additional guidance:
“Under the General Data Protection Regulation taking effect from May 25 this year, there may be some circumstances where organizations could be held liable for a breach of security that relates to measures, such as patches, that should have been taken previously.”
That’s an enormous task. With an estimated 111 billion new lines of code written each year and the US National Vulnerability Database growing at an average rate of one new software flaw reported every 30 minutes, there simply is not enough time or fingers on keyboards to fix known software flaws before hackers can exploit them. New technologies like real-time patching – where software code is fixed on the fly, without downtime or expensive, time-consuming source code changes – promise better, faster, cheaper protection for organizations of all sizes.
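A quick back-of-envelope check of the disclosure rate cited above – one new flaw every 30 minutes – shows why manual patching cannot keep pace. The figures are the article’s; the arithmetic is mine:

```python
# One new software flaw reported every 30 minutes, around the clock
MINUTES_PER_FLAW = 30

flaws_per_day = 24 * 60 / MINUTES_PER_FLAW   # 48 new flaws per day
flaws_per_year = flaws_per_day * 365         # 17,520 new flaws per year

print(int(flaws_per_day), int(flaws_per_year))  # 48 17520
```

That is roughly 17,500 newly disclosed flaws a year that patching teams must triage, on top of the existing backlog.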
One other area where organizations are likely to struggle is the 72-hour reporting provision in the event of a breach. While it takes attackers less than a week on average to penetrate organizations and begin to extract consumer data, it takes the targeted company more than six months to learn they’ve been attacked. It can take another three months to stop the attack and fix the problem, according to research from the Ponemon Institute.
Even the most sophisticated organizations with cutting edge protections struggle with rapidly detecting and assessing the depth and breadth of an attack. The information initially gathered almost always needs to be updated as cybersecurity experts learn more about what happened. Think of all the cybersecurity attacks that turned out to be worse than first reported.
Yet the GDPR requires company officials to alert regulators within 72 hours of any data breach that could cause harm to a consumer. Rapid patching of known software flaws can be addressed with new technology; rapidly informing government regulators will require significant adjustments to company cultures and behaviors. Think of the companies that paid hackers rather than fess up, or only reported a breach months or years later. Doing so within hours requires a seismic shift in attitude and approach.
While companies subject to the GDPR work through the rest of the compliance requirements, the ripple effect is in full force. The New York Department of Financial Services has already adopted the 72-hour reporting rule and annual risk assessments, to name two GDPR concepts. California is considering the strictest privacy and data security law in the US, including potential payments of $1,000 to each consumer impacted by a breach.
As with the laws and regulations that came before, some organizations will embrace the new rules willingly and some will fight them vigorously. Ultimately, though, regulations can only create an environment for improvement. They do not improve privacy or security. People and technology do.
James E. Lee is the Executive Vice President of Waratek, a leading application security company. He is also the former Chair of the US-based Identity Theft Resource Center and an executive at ChoicePoint, the first US company to issue a nationwide data breach notice in 2005.