Why Law Firms Must Keep Data Security at the Core of Their Practice

In a world where more is done online than ever, law firms find themselves at a unique risk of data security attacks. Constantly handling sensitive matters such as Intellectual Property (IP), their systems are an ideal target for criminals in search of exploitable data and files. That’s why it’s essential for legal professionals to stay on top of the latest security measures. Here, we’ll provide an overview of the special vulnerabilities these companies face and review some best practices for mitigation.

Recognizing the Unprecedented Risks of Today’s Digital Business Landscape

Data security is top of mind across industries heading into 2023. We’re coming off an unprecedented year of attacks, many of them exploiting risks that simply didn’t exist before.

According to recent statistics, data breaches climbed by an annual average of 15.1% in 2021, costing U.S. businesses more than $6.9 billion. That’s an increase of more than 390 percent from only four years earlier, in 2017, when the same metric was estimated at around $1.4 billion.

Experts predict that this reality will only get worse; ongoing digital transformation across sectors has made businesses more reliant on technology than ever. Practically everyone – from your local tax professional to your healthcare provider – uses digital tools to get the most important parts of their job done and must now operate with the added vulnerabilities that come with these connected operations.

From ransomware, malware, and phishing to third-party attacks, insider threats, social engineering, formjacking, and more, the potential risks are endless. Quickly evolving attack strategies are increasing organizations’ susceptibility to loss – 82% report serious concerns about their vulnerability. With the cost of cybercrime anticipated to reach $10.5 trillion by 2025, businesses that want to stay afloat are under serious pressure to prioritize the solutions they have in place to mitigate it.

Law Firms’ Unique Vulnerabilities to This Environment

While cybersecurity is a relevant issue for every twenty-first-century business, it holds specific importance for legal professionals and the firms they run. Like healthcare, education, and finance, the legal industry works with sensitive personal information on a day-to-day basis. This includes – but is of course not limited to – names, records, contact information, addresses, health history, and financial documents. Most important of all, lawyers often handle cases involving Intellectual Property (IP), which must remain confidential to protect their clients. That concentration of sensitive information makes law firms a prime target for hackers, and smaller practices are especially vulnerable, as they seldom have the resources to devote to a robust security system.

According to a recent survey, one quarter (25%) of law firms report having experienced a data breach. That proportion is only expected to rise in parallel with criminals’ growing sophistication. In fact, many experts believe legal firms are particularly at risk of suffering security incidents because they’re not taking the necessary steps to secure their data. Whether through lack of training, failure to invest in technology, or insufficient policies, there are several ways law firms can leave themselves open to attack.

Why Law Firms Should Now Prioritize Data Security More Than Ever

Beyond the fact that they’re uniquely vulnerable, law firms have plenty of reasons and incentives to treat data security as the serious issue it is. Below are five of the most notable, along with why they should be important considerations for legal professionals assessing their strategies.

Changing Industry Standards

As the digital landscape continues to grow more complex, law firms and businesses of all sizes have begun paying extra attention to their data security standards. So much so, in fact, that data security has become a requirement in most vendor contracts. If a law firm fails to meet these standards, it can face serious consequences including termination of the contract and fines.

In addition to higher security standards, many businesses now require that their vendors demonstrate proof of compliance. This means that law firms are expected to have some form of evidence that proves their data is secure. Common compliance methods include ISO 27001/2 and SOC 2 Type II, both of which require frequent measurement and validation.

Ethical and Regulatory Obligations

Lawyers are governed by a number of legal and ethical principles in the course of their work. Every state in America has its own expectations based on the Model Rules of Professional Conduct (MRPC), which specifically cover the issues of safekeeping property and confidentiality of information. Violations of these rules can result in fines, disciplinary action, and other penalties.

Law firms must also be aware of the data privacy regulations that are unique to their individual regions and states. Aside from rules directly applicable to lawyers, many states also have their own general laws on data privacy, most notably the California Consumer Privacy Act (CCPA), Virginia Consumer Data Protection Act (VCDPA), and Colorado Privacy Act (CPA). These rules focus on protecting consumer data and require those that handle it to take appropriate steps in doing so. They also require law firms to notify impacted parties whenever a data breach occurs. Violations of these regulations can have incredibly severe consequences, ranging from hefty fines to lawsuits.

Client Acquisition and Retention

The online world’s current level of risk hasn’t gone unnoticed by consumers. They’ve become increasingly aware of and concerned about the issue of data security and privacy, and are keeping these top of mind when choosing what businesses they want to work with.

From a general standpoint, roughly 55% of people in the United States say that they would be less likely to work with a company with a history of data breaches. Add to that the sensitive and high-stakes nature of legal endeavors, and that number is likely a lot higher for law firms.

If lawyers want to win new clients and maintain the trust of current ones, they need to show that they’re taking cybersecurity seriously. This is especially important when onboarding new clients, as they will want to know exactly how their data will be used and what the firm is doing to keep it secure.

Growing Risks

As cybersecurity solutions advance with technology, so do the strategies hackers use to circumvent them. What passed as a viable defense system 10 years ago certainly wouldn’t hold up against today’s threats, which are craftier than ever – not to mention increasingly effective and efficient.

Research reports that in over nine out of ten cases, an external attacker can break through an organization’s network perimeter and gain access to local network resources. The average time it takes to breach internal assets? Only two days.

As these risks continue to evolve, it’s essential for law firms to stay ahead of the curve and continually review their cyber practices. They need to be proactive in assessing their security systems and implementing strategies to protect against any potential threats. Failing to do so can be the difference between a viable business and one that winds up as a data breach statistic.

Reputational Damage

Cyber-attacks can have serious reputational consequences for businesses. Not only do they attract negative press, but the public’s trust and confidence in the business can be quickly eroded.

This is particularly important to consider in the legal industry, where clients rely on their lawyers to act with discretion and integrity. If a law firm is hacked, the public can lose faith in its ability to handle sensitive information, and it may even begin to doubt the firm’s overall competency. That’s the last thing you want when your job is to make people feel safe and secure.

How Law Firms Can Maintain Strong Data Privacy and Cybersecurity Practices

It’s clear that the legal profession must take immediate action to protect both its own data and that of its clients. There are a number of ways to do this which, when used together, can greatly decrease a firm’s chances of falling victim to cybercrime. Below are some of the easiest, most straightforward initiatives lawyers can take to bolster their business’s security.

Leverage Secondary Channels or Two-Factor Authentication

When handling sensitive information, verifying requests for changes or access to data should always be done through secondary channels. This is an especially critical approach when it comes to important account information, such as passwords and contact details.

By using two-factor authentication, business owners can ensure that any requests for changes in account information are only made when verified by an independent source. This extra layer of security ensures that only legitimate users can access sensitive data. It also helps to prevent the potential for cyber-attacks, as any attempts to log in from an unverified source will be blocked.
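As an illustration, many firms pair passwords with a time-based one-time password (TOTP) app as the second factor. The sketch below is a minimal, illustrative implementation of the standard TOTP derivation (RFC 6238) – not production code – showing why a stolen password alone isn’t enough: the six-digit code changes every 30 seconds and depends on a secret shared only between the server and the user’s device.

```python
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Derive a time-based one-time password (minimal RFC 6238 sketch)."""
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test vector: shared secret "12345678901234567890" at t = 59 s
print(totp(b"12345678901234567890", at=59, digits=8))  # → 94287082
```

A login succeeds only when the code the user submits matches the one the server derives for the current time window, so a phished password by itself gets an attacker nowhere.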

Think Before You Click

Employees of any business must be trained to think critically before clicking on links or downloading content from unknown sources. The same applies to law firms – anyone working within the business must be aware that clicking a malicious link could open the door for an attacker to gain access to confidential data.

Hackers will intentionally create similar-looking URLs in an attempt to get unsuspecting users to click. They can also attach links to malicious files, which when downloaded, could cause serious harm to a company’s entire system. Employees ought to know how to recognize these traps, and should be trained to always double-check any emails, text messages, or other communications before taking action.
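To make the lookalike-URL trap concrete, here’s a minimal sketch (the domain names are hypothetical) of the kind of check an email filter – or a cautious reader – applies: does the link’s actual host exactly match, or legitimately belong to, a domain the firm trusts?

```python
from urllib.parse import urlparse

# Hypothetical allowlist of domains the firm actually does business with.
TRUSTED_DOMAINS = {"examplefirm.com", "courts.example.gov"}

def is_suspicious(url: str) -> bool:
    """Flag links whose host is not a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_suspicious("https://examplefirm.com/login"))          # False: exact match
print(is_suspicious("https://examplefirm.com.evil.io/login"))  # True: suffix trick
print(is_suspicious("https://exampleflrm.com/login"))          # True: swapped letter
```

Note how the second URL begins with the trusted name yet actually points at evil.io – exactly the kind of detail a hurried reader misses and a simple host check catches.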

Monitor Activity

Firms should implement monitoring and logging software, which tracks all user activity on any associated network. This allows businesses to identify any suspicious behaviors and take the necessary steps to stop an attack before it can become a major issue. Business owners must also ensure that all employees are aware of their logging and monitoring policies and that they understand the implications of any breach in protocol.

Invest In Employee Training

A company’s security stance is only ever as good as the knowledge of its employees. Without proper training, even the most secure networks can be breached. Business owners should ensure that all their employees are up to date with the latest security technologies and have the necessary understanding of how to prevent cyber-attacks.

Update Software Regularly

Software programs are regularly updated with fixes for discovered security vulnerabilities. Putting an update off can increase the risk that malicious actors could exploit any known issues. Businesses need to stay up to date on all their software programs, including their operating systems and security suites.

Refrain From Supplying Sensitive Information Over Email

Phishing is one of the easiest and most common ways businesses become victims of cybercrime. Everyone – from major corporations to government officials – has been duped by this strategy, which involves an attacker posing as a legitimate source to gain access to sensitive information.

Law firms must be especially vigilant when it comes to ensuring the safety of their emails. As a rule, sensitive information should never be shared over email; instead, it should be transmitted through other means, such as encrypted messaging or a secure file-sharing application.

Create and Enforce Policies

Creating and enforcing policies can be an effective way to prevent cyber-attacks, especially for law firms that handle large amounts of confidential data. Business owners should consider creating a policy that outlines the steps employees must take to protect data, along with clear repercussions for failing to do so. They should also update the policy regularly to keep it current with the latest security techniques.

By understanding the risks that come with working within the legal industry, and taking proactive steps to mitigate them, law firms can ensure that their businesses remain well-protected against any potential cyber threats. TeraDact’s Tokenizer+, Redactor+, and Secrets+ are powerful tools that can be utilized to ensure that law firms, and all other companies, have the best security measures to protect important data. With the stakes being higher than ever, doing so is essential to the success of any organization.

Incognito Mode & User Privacy

Fun fact: over half of internet users believe that Incognito Mode prevents Google from seeing their search history.

Another fun fact: 37% think it’s capable of preventing their employer from tracking them.

The truth? It actually does neither of those things.

In fact, Google collects so much data on its users that it’s become the subject of multiple lawsuits in recent years – the latest being a class-action lawsuit that could potentially cost the company billions of dollars. It alleges that Google illegally collected user data while they were browsing in Incognito Mode and used it to target them with ads.

In this article, we’ll take a look at the specifics of the lawsuit, the true breadth of Incognito Mode, and what it actually does (and doesn’t) protect.

More About the Lawsuit

Originally filed in June 2020 by law firm Boies Schiller Flexner LLP, this latest class-action lawsuit is officially seeking at least $5 billion in damages on behalf of its clients. It accuses Google’s parent company Alphabet of covertly collecting users’ information, including details about what they browse and view, under false pretenses of privacy with Incognito Mode.

The plaintiffs, all of whom are Google account holders, say that the search engine collected, distributed, and sold their personal data for targeted advertising purposes, even in Incognito Mode. They allege that despite being led to believe their activity was private, Chrome still tracked their online behavior via Google Analytics, Google ‘fingerprinting’ techniques, Google Ad Manager, and concurrent Google applications on their devices. These technologies are ubiquitous – reportedly, more than 70% of all websites use one or more of them. Google’s reported ability to use them to collect consumer data – even in Incognito Mode – means the search engine can bypass any privacy safeguards consumers might reasonably expect.

Lawyers say they have a large body of evidence supporting their argument that Google intentionally misled its users regarding the feature’s security. Among the most damning are several internal emails that show executives were directly aware of misconceptions surrounding Incognito and specifically chose not to act.

The emails, which were released as part of the court process, clearly illustrate multiple attempts by employees to raise concerns about the issue with their superiors. Some show that staff actively joked about the fact that Incognito didn’t provide privacy, while others highlighted criticism towards Google’s approach to protecting user data.

The most telling, though, are multiple emails between top company executives proving the issue was known at every level. A 2019 message from Google’s Chief Marketing Officer Lorraine Twohill to CEO Sundar Pichai explicitly states that Incognito is “not truly private” – as clear an admission as you could get.

In addition to emails, the released court documents reference multiple internal presentations that further acknowledge Google’s awareness of Incognito’s privacy problem. One states that users “overestimate the protections that Incognito provides”, and another proposes removing the word “private” from its start screen altogether.

Essentially, what the lawyers are arguing with this evidence is that top Google execs were not only aware of users’ misconceptions about their privacy on Incognito, but specifically chose not to act in favor of sustaining ad profits.

Of course, Google refutes all of the claims against it, stating that it has been upfront with users all along and that the lawsuit’s plaintiffs have “purposely mischaracterized” its statements. The tech giant’s lawyers moved to dismiss the case 82 times in 2021, each motion ruled against, allowing the suit to reach the certification process it’s at today. Google was also ordered to pay almost $1 million in legal penalties this past July for failing to disclose evidence in a timely manner.

A Growing Problem

This is by no means the first lawsuit Google has faced in recent years. As a matter of fact, its legal department is currently juggling dozens of active cases, with plaintiffs ranging from the States of Texas and Washington to the District of Columbia, the Republican National Committee, video game maker Epic, and dating app company Match Group. The search giant is also in the middle of issuing settlement payouts in several recently wrapped cases, including one of $85 million to the State of Arizona and another of $391.5 million to a 40-state privacy coalition.

But this new lawsuit in particular may be the biggest Google’s ever dealt with – the class-action initiative represents millions of individual users and is fighting for payouts of between $100 and $1,000 to every single one of them. You don’t need to be a genius at math to figure out this could easily rack up to billions of dollars in damages.
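The back-of-envelope math is simple. Assuming, purely for illustration, a class of 10 million users (the filings only describe the class as numbering in the millions):

```python
# Hypothetical class size; the suit only says the class numbers in the millions.
class_size = 10_000_000
low, high = 100, 1_000          # per-user payout range sought by the plaintiffs

print(f"${class_size * low:,} to ${class_size * high:,}")
# → $1,000,000,000 to $10,000,000,000
```

Even at the bottom of the range, a class that size puts potential damages at a billion dollars; at the top, ten billion.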

The plaintiffs’ lawyers are currently working on getting the case certified, which would move it one big step closer to an actual trial. And if Google does end up losing in court, it may not have any choice but to start writing very large checks.

Understanding ‘Incognito Mode’

Fully understanding this recent lawsuit and the implications it has for Google comes down to understanding Incognito Mode itself and the role it plays in user privacy.

Incognito Mode is Google Chrome’s name for private browsing, a feature available on all major browsers (Chrome, Edge, Safari, Firefox, etc.) that lets users browse the internet without saving any local data to their device. This includes things like cookies, browsing history, and form autofill information. Essentially, it’s a way to ensure that your internet activity can’t be seen by anyone else who uses your device after you close the window.

While this sounds like a fool-proof way to browse privately, the reality is that Incognito Mode only offers what’s called “local privacy”. That means other people who use your device can’t see what you’ve been doing – but your internet service provider (ISP), the websites you visit, and any software you’re using still can, Google included.

By definition, the word “incognito” means to disguise or conceal one’s identity.

The biggest problem Google’s Incognito Mode has faced over the years is the degree of purported concealment it really offers users. From a broad perspective, most people believe that the feature makes their online activity invisible, which as we’ll go on to establish in the next section, isn’t true. Its claims, nature, and name all lure users into a false assumption of security – leading to accusations of privacy violations and lawsuits when the true scope of Incognito’s visibility is revealed.

Does ‘Incognito Mode’ Really Protect Users’ Privacy?

So, does Incognito Mode really protect Google users’ privacy? The answer may depend on what you consider private information.

Industry experts explain that private browsing modes like Incognito are designed to safeguard user activity on only a very basic level. They’re mainly meant to keep your browsing history clean in cases where you share a computer with others – effective at keeping what you’ve bought your partner for Christmas a secret, but not at protecting data about your online activities, interests, and behaviors.

To offer more concrete answers, we’ve broken down exactly what Incognito officially does and doesn’t protect when you open a window:

What ‘Incognito Mode’ Does Protect

Incognito Mode isn’t completely pointless – it protects multiple facets of online user activity, including the following.

Browsing History

Browsing history refers to the list of web addresses conventional search engines automatically collect when you use your computer. The feature mainly exists for convenience, allowing you to quickly revisit websites without having to remember the URL. While this is useful in some cases, it can also be a major invasion of privacy, especially if you frequently visit sites other people might find controversial or sensitive. Browsing history is one aspect of online activity that Google explicitly promises not to save when you use Incognito Mode.


Cookies

A cookie is a small piece of data that’s stored on your computer or mobile device whenever you visit a website. Its main purpose is to remember information about you, such as your login details, language preferences, and items added to your shopping cart. Cookies can make the online experience more convenient, but they also allow companies to track your movements across the internet – even when you’re not using their specific services. Google will still place cookies on your device while you’re in Incognito Mode, but they’ll automatically be deleted as soon as you close the window. This means that any information these cookies collect about your online activity can’t be used to identify you at a later date.
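To make the session-versus-persistent distinction concrete, here’s a small sketch using Python’s standard library: a cookie set without an expiry attribute is a “session” cookie the browser discards on close, while one with Max-Age persists on disk. Incognito effectively treats every cookie like the first kind.

```python
from http.cookies import SimpleCookie

jar = SimpleCookie()
jar["session_id"] = "abc123"                  # no expiry: a session cookie,
                                              # discarded when the window closes
jar["prefs"] = "dark-mode"
jar["prefs"]["max-age"] = 60 * 60 * 24 * 365  # persistent: kept for a year

# Render the Set-Cookie header values a server would send.
for morsel in jar.values():
    print(morsel.OutputString())
```

The hypothetical cookie names here are just illustrations; the point is that only the Max-Age (or Expires) attribute makes a cookie outlive the browsing session.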

Download History

Download history is a record of every file you’ve downloaded while using Chrome. Like browsing history, this feature is designed for convenience, allowing you to quickly access files without having to search for them on your computer. However, it also represents a significant invasion of your privacy, as it can be used to track the types of files you download, where you download them from, and what you do with them after they’re on your device. Google doesn’t save your download history when you use Incognito Mode – though the downloaded files themselves remain on your device until you delete them.

Search History

Search history refers to the terms you’ve entered into Google’s search engine, as well as the results pages you’ve accessed through these searches. Like cookies, search history is used to tailor your future experience of the internet, serving you more relevant results and ads based on your previous behavior. Google doesn’t save your search history when you’re in Incognito Mode, meaning that your future searches won’t be influenced by the terms you enter while in this mode.

Site and Form Data

Site and form data is information like usernames, passwords, addresses, and preferences that you’ve entered on specific websites. This data is generally stored in cookies, but can also be saved in your browser’s cache – a temporary storage space for frequently accessed files. Google doesn’t save site data when you use Incognito Mode, meaning that any information you enter on websites while in this mode can’t be accessed or used at a later date.

As mentioned earlier, these capabilities are designed to conceal your local browser history and keep others from snooping on your browsing or download habits. However, they do little to actually protect your anonymity online – for that, you’ll need tools like VPNs.

What ‘Incognito Mode’ Doesn’t Protect You From

While it delivers some value, Google Incognito Mode doesn’t go as far in protecting users’ privacy as many think. The following are just some of the ways in which your activity can still be monitored and recorded while using this feature.


Google’s Own Tracking Tools

Based on the lawsuit discussed in this article, Google’s core web tools – including Analytics and Ad Manager – still track and collect data from users in Incognito Mode. While this information can’t be used to personally identify you, it can be used to build up a detailed profile of your web activity, interests, and habits.

Employer or School Networks

If you’re using a work or school computer, it’s likely that your employer or school has installed monitoring software that allows them to track your activity, even in Incognito Mode. This software can record the websites you visit, the files you download, and the searches you perform, meaning that your employer or school will still be able to see what you’re doing online, even if Google can’t.

Your ISP

Your internet service provider (ISP) can still see the websites you visit while you’re in Incognito Mode. They can also track the amount of data you’re using and the time you spend online. This information can be used to deliver targeted ads and content and can even be sold to third-party companies. The best way to hide online activity from an ISP is to use a Virtual Private Network (VPN) service, which will encrypt your traffic and prevent your ISP from being able to track it.


Malware

Malware is malicious software that can be installed on your computer without your knowledge. Once it’s in place, it can be used to track your activity and collect sensitive information, even when you’re in Incognito Mode.

Government Surveillance

While Incognito Mode can help to protect your privacy from snooping on family members or roommates, it won’t do much to shield you from government surveillance. If the government is monitoring your activity, they’ll still be able to see the websites you visit and the searches you perform, even when you’re in Incognito Mode.

While it has yet to be officially labeled a ‘dark pattern’, Google’s Incognito Mode is likely in store for further controversy in the years to come. For now, it’s important to be aware of the limitations of this privacy feature and to use other tools – like VPNs – to ensure that your activity is truly private and anonymous. Speaking of data privacy and protection, our solutions Tokenizer+, Redactor+, and Secrets+ can improve your security framework and protect your organization from potential cyber threats. Contact us today for more information.

Privacy Law Violations: Who investigates and what are the consequences?

In today’s digital age, the issue of privacy is more important than ever. With the advent of new technologies that allow for the collection and use of large amounts of personal data, the need for comprehensive privacy law has never been greater.

The United States has several federal laws that deal with various aspects of privacy, but there is no all-encompassing privacy law that covers everything. Instead, the various laws deal with specific issues and are often very siloed from one another.

In this article, we’ll take a look at some of the major federal privacy laws in the United States and what they cover.

Fair Credit Reporting Act of 1970

The Fair Credit Reporting Act of 1970 was one of the earliest federal privacy laws to be passed in the United States. It was implemented under Richard Nixon in an effort to guarantee the privacy and accuracy of consumer credit bureau files.

The FCRA protects United States citizens’ personal financial information upon collection by groups like credit agencies, medical information companies, and tenant screening services. The privacy law outlines what guidelines these organizations must follow when handling individuals’ sensitive data and also informs consumers of their rights in regard to the information on their credit reports.

The FCRA is enforced by the Federal Trade Commission, an independent government agency that focuses its work on protecting consumer privacy interests. Inaccurate debt reporting, failure to send poor credit notifications, failure to provide a satisfactory process to prevent identity theft, and dissemination of credit report information without consent are some of the most common forms of violations they encounter.

Upon violating the FCRA, companies can expect to incur a number of penalties and losses, namely damages awarded to victims, court costs, and attorney fees.

Statutory damages don’t require supportive evidence and range from $100 to $1,000 per violation. Actual damages that result from a proven failure to act have no limit and are determined on a case-by-case basis. The FCRA also permits class-action lawsuits against companies in violation, which can end up costing those companies millions.

Privacy Act of 1974

The Privacy Act of 1974 is a federal law that prevents federal agencies from disclosing personal information they collect without an individual’s consent. It was signed by President Gerald Ford near the end of 1974 in response to the Watergate scandal and public concern over the privacy of computerized databases. The Act requires that federal agencies publicly disclose their record systems in the Federal Register, which is a national and official record managed by the U.S. government.

Multiple groups share the responsibility of enforcing the Privacy Act of 1974, as the legislation contains a range of protections that apply to different areas of government. The director of the Office of Management and Budget maintains the interpretation of the act and can release guidelines to these groups as needed. The Federal Register is another important tool in the enforcement of the Privacy Act as it keeps track of all record systems subject to the act, as well as any changes that are made to these systems.

Violation of the Privacy Act of 1974 can be considered both civil and criminal, depending on the specific situation at hand. For instance, an individual may choose to sue an agency to prevent disclosure of their records or to compel an agency to correct inaccurate information. They could similarly sue to have records produced or to receive damages as the result of an intentional violation. 

Alternatively, if an agency willfully discloses personal information without an individual’s consent, it can be fined up to $5,000 and charged with a misdemeanor. It’s worth noting that the same misdemeanor charge can apply to anyone who requests an individual’s record from an agency under false pretenses.

Computer Fraud and Abuse Act of 1986

The Computer Fraud and Abuse Act of 1986 is a federal law that prohibits unauthorized access to protected computers, a category that covers essentially any device connected to the internet. In plain language, it makes it a crime to hack into someone else’s computer.

The law was first passed in 1986 and has been amended several times since then to better reflect the changing nature of digital technology. It has been the subject of scrutiny over the years, as some argue its language is often vague and allows for broad interpretation. This can result in the law being applied to everyday activities that people might not realize are technically illegal. This is something that has been addressed in recent years and continues to be a point of contention.

The CFAA’s provisions criminalize several activities, including:

● Unauthorized access of a computer

● Acquisition of protected information through unauthorized access

● Extortion involving computers

● Intentional unauthorized access to a computer that results in damage

Penalties for violation can apply to these offenses even if they are ultimately unsuccessful.

The Department of Justice is in charge of enforcing the Computer Fraud and Abuse Act. They investigate potential cases and, if they believe there is enough evidence, will file charges against the accused.

If someone is found guilty of violating the Computer Fraud and Abuse Act, they can face a number of penalties. These include fines, imprisonment, or both. The amount of the fine and length of imprisonment will depend on the severity of the offense and whether or not the accused has any prior convictions. Generally, first-time violators can expect up to a decade in prison, while second offenders can get up to 20 years.

Children’s Online Privacy Protection Act of 1998

The Children’s Online Privacy Protection Act of 1998 (COPPA) is a federal law enacted to protect the online privacy of children under the age of 13. The Federal Trade Commission (FTC) is responsible for enforcing this privacy law and has the authority to impose fines of up to $43,280 per violation on companies that violate COPPA.

In order to comply with COPPA, companies must provide clear and concise information about their privacy practices on their website or online service. They must also get parental consent before collecting, disclosing, or using any personal information from children under the age of 13.

There are a few exceptions to this rule. Companies don’t need parental consent in order to collect a child’s name, email address, or other online contact information if they only use this information to:

– Respond directly to a one-time request from the child (such as responding to a question or entering the child in a contest)

– Protect the safety of the child or others

– Comply with the Children’s Internet Protection Act

Additionally, companies are allowed to collect, use, and disclose a child’s personal information without parental consent if they do so to support the website or online service’s internal operations. These operations include things like site maintenance, content delivery, and security measures. The FTC has published a set of Frequently Asked Questions that provides more information about COPPA and how it applies to businesses.

Gramm-Leach-Bliley Act of 1999

The Gramm-Leach-Bliley Act (GLBA) is a federal law that was enacted in 1999. The GLBA’s primary purpose is to protect the privacy of consumer financial information. It applies to any company that has access to this type of information, including banks, credit unions, and other financial institutions.

Under the GLBA, financial institutions must take steps to safeguard the customer information they collect and maintain. They must also provide customers with a notice of their privacy policies and practices. This notice must explain how the institution collects, uses, and discloses customer information.

In addition, the GLBA gives customers the right to opt-out of having their information shared with third parties. Financial institutions must provide customers with a clear and conspicuous way to exercise this right.

The GLBA also requires financial institutions to take steps to protect the security of customer information. This includes implementing physical, technological, and procedural safeguards. Financial institutions must also train their employees on how to handle customer information in a secure manner.

Violations of the GLBA can result in a number of penalties, including fines, imprisonment, or both. A financial institution can be fined up to $100,000 for each violation, and its directors and officers can personally face fines of up to $10,000, five years in prison, or both.

The Federal Trade Commission is responsible for enforcing the GLBA and has the authority to pursue legal action against companies that violate the act.

Health Insurance Portability and Accountability Act of 1996

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a federal law that was enacted in order to protect the privacy of patients’ health information. HIPAA applies to any company or organization that handles protected health information (PHI). These entities are known as “covered entities” under HIPAA.

Covered entities must take steps to ensure that PHI is kept confidential and secure. They must also provide patients with a Notice of Privacy Practices that explains how their PHI will be used and disclosed.

Patients have the right to request that their PHI be released to them or to another party. They can also request that their PHI be corrected if they believe it is inaccurate. The ultimate goal of HIPAA is to ensure that patients’ health information is protected while also giving them control over how it is used.

If a covered entity violates HIPAA, it can be subject to civil and/or criminal penalties. These penalties can include fines of up to $50,000 per violation and up to 10 years in prison for individuals who knowingly violate HIPAA.

The Department of Health and Human Services is responsible for enforcing HIPAA, and its website provides more information about the law and how it applies to businesses.

Telephone Records and Privacy Protection Act of 2006

The Telephone Records and Privacy Protection Act of 2006 is a federal law that regulates how telephone companies can use and collect customer information. The law was passed in 2006 in response to a growing concern over the way that phone companies were handling customer data. At the time, a number of phone companies were selling customer information to third parties without customers’ knowledge or consent.

The Telephone Records and Privacy Protection Act of 2006 requires telephone companies to get customers’ consent before using or sharing their information for marketing purposes. Companies are also required to provide customers with clear and concise notice of their privacy rights and to allow them to opt out of having their information used or shared for marketing purposes.

Violating the Telephone Records and Privacy Protection Act can result in fines and a prison sentence of up to 10 years. Cases involving more than 50 victims can double the fine and add an additional five years in prison. If the illegally acquired phone records were used to commit a violent crime, a crime against federal officers, or domestic violence, the sentence can be extended by another five years.


America’s privacy laws have a long history, and as we continue to move into the future, they are sure to evolve even further. The laws discussed in this article are just a snapshot of the many that exist to protect Americans’ privacy rights. While some may argue that these laws are too restrictive or don’t go far enough, they nonetheless provide a foundation for how we as a society can safeguard our personal information. Products like Tokenizer+, Redactor+, and Secrets+ provide intelligent, automated AI/ML-based solutions to protect your company’s personal information. With the ever-growing importance of data security, it’s only a matter of time before even more laws are enacted to keep up with the changing landscape. Yet despite the continuous addition of privacy laws across the globe, cyber-attacks persist. Contact us for more information on how your company can improve its security posture and protect its data from cyber threats.

Cross-Border Sharing in the G-7 while Protecting Sensitive Data

Globalization has utterly redefined the state of the world we live in. People, businesses, and governments are now more interconnected than at any other moment in history, and the flow of information has become the lifeblood of this new era. For countries to maintain a leadership role in the global economy, it is essential that they embrace this new reality and adapt their policies to take advantage of the opportunities cross-border data flows present.

One area where this is particularly relevant is data sharing. In the past, businesses and governments were able to keep their data close to the chest, using it as a competitive advantage or simply keeping it out of the hands of others. However, in today’s interconnected world, this is no longer an option. Countries that want to remain competitive need to be able to share data with others, while still ensuring that sensitive information is protected.

This has been a top priority for members of the G-7, who recently concluded a two-day summit on cross-border data flow regulation. It was one of several recent meetings in the group’s ongoing effort to standardize law around the matter, and while not completely revolutionary, it marked an important step forward in developing a global framework for data sharing.

Defining Cross-border Data Flows

Before we get into the specifics of the G-7 summit, it’s important to first establish what is meant by cross-border data flows. In short, these are electronic transmissions of data across national borders. This can include everything from email and text messages to more complex data sets used by businesses and governments.

Cross-border data flows have become increasingly important in recent years as our world has become more connected. They provide a way for businesses to operate in multiple countries, for people to keep in touch with loved ones who live far away, and for governments to share information and resources.

However, cross-border data flows can also pose a risk to national security and public safety.

When data is transmitted across borders, it often goes through multiple jurisdictions and may be subject to different laws in each country. This can make it difficult to protect sensitive information, as there may be holes in the security net. In addition, cross-border data flows make it easier for criminals and terrorists to operate across borders. And finally, they can also be used to evade taxes or launder money.

This is why it’s so important for countries to strike a balance between encouraging data sharing and protecting sensitive information.

What Types of Data Are We Talking About?

It’s important to note that not all data is created equal. When we talk about cross-border data flows, we are usually referring to three different types of data: PII, PHI, ETC.

PII, or Personally Identifiable Information, is any data that can be used to identify an individual. This includes things like name, address, date of birth, Social Security number, and so on.

PHI, or Protected Health Information, is any data related to an individual’s health. This includes things like medical records, prescriptions, and insurance information.

ETC, or Encrypted Taxpayer Communications, is any data related to an individual’s taxes. This includes things like tax returns, W-2 forms, and 1099 forms.

All three of these types of data are considered sensitive and need to be protected.
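To make these categories concrete, here is a minimal, illustrative Python sketch that flags two common PII patterns (Social Security numbers and email addresses) with regular expressions. Real PII detection needs far more than regex matching (context, validation, ML-based scanning), so treat this purely as a toy example; the pattern names are our own.

```python
import re

# Illustrative patterns only; production detectors are far more thorough.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text: str) -> dict:
    """Return each PII category mapped to the matches found in text."""
    return {name: pattern.findall(text) for name, pattern in PII_PATTERNS.items()}

sample = "Contact John at john.doe@example.com, SSN 123-45-6789."
print(find_pii(sample))
```

A cross-border transfer pipeline would typically run a scan like this before data leaves a jurisdiction, then tokenize, redact, or block anything it flags.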

What Types of Risks Are We Talking About?

As we mentioned earlier, cross-border data flows can pose several risks to national security and public safety. Here are a few of the most common:

Data breaches: When sensitive data is transmitted across borders, it increases the chances of a data breach. This is because there are more opportunities for hackers to intercept the data. In addition, cross-border data flows make it difficult to track down the source of a breach, as the data may have gone through multiple jurisdictions.

Identity theft: Cross-border data flows make it easier for criminals to steal people’s identities. This is because sensitive data, like Social Security numbers and date of birth, can be used to open new accounts or get new credit cards.

Fraud: Cross-border data flows can also be used to commit fraud. For example, criminals may use stolen credit card numbers to make purchases online. Or they may use fake identities to take out loans or open new bank accounts.

Money laundering: Cross-border data flows can be used to launder money. This is when criminals use legitimate businesses to move money around, so it’s difficult to track. For example, they may use a cross-border money transfer service to send money from one country to another.

The G-7 Meeting

Top privacy regulators from member nations of the G-7 met in Bonn, Germany last month to discuss the issue of cross-border data flows in detail. The main priority of the meeting was to find a way to standardize data privacy laws across borders, in order to make it easier for businesses to operate in multiple countries and to protect sensitive information.

The Group of Seven already has several legal agreements in place addressing this exact issue, though none specifically between the United States and the European Union.

“The only piece of the puzzle that is missing is the trans-Atlantic agreement,” says Wojciech Wiewiórowski, the European Data Protection Supervisor who attended the two-day meeting.

While a final legal text of the new U.S.-European Union agreement hasn’t been published yet, negotiators said in March that they reached a preliminary deal. This is a big step forward, as it’s the first time that both sides have been able to agree on a framework for data sharing.

The new agreement will likely build on the EU-U.S. Privacy Shield, which was adopted in 2016 but ruled invalid by the EU’s top court in 2020.

In that case, challengers had successfully argued that American government surveillance posed a threat to Europeans’ privacy if their data was moved to the United States.

Dismantling the Privacy Shield “basically left data transfers in limbo” for all international companies, says Svetlana Stoilova, digital economy adviser at Business Europe.

Businesses have urged lawmakers from both member nations to speed up the process of finding a new replacement. Some companies have been using other legal mechanisms to transfer data, but they are seen as being more cumbersome.

The new agreement is still being negotiated and no final text has been published yet. However, both sides have said that they are committed to finding a solution that will protect people’s data while also allowing businesses to operate across borders.

Ultimately, the goal is to align the data privacy laws of the United States and the European Union, so that businesses can operate in multiple countries without having to worry about breaching data privacy laws. This would also make it easier for people to know their rights when it comes to their data being shared across borders.

Last month’s meeting saw several suggestions for making such a system work, including:

Applying data anonymization techniques: This would involve stripping transferred information of personally identifiable details. For example, a user’s name, address, and credit card number could be replaced with a unique identifier. This would make it more difficult for someone to identify an individual from the data. (Tokenizer+)

Pseudonymizing data: This would involve replacing personally identifiable information with a pseudonym. For example, a user’s name could be replaced with a randomly generated identifier. This would make it more difficult to identify an individual, but not impossible. (Tokenizer+)

Applying data redaction techniques: This would involve the redaction of data that can’t be shared in any safe fashion so that only appropriate users may access the data. (Redactor+)

Using encryption to protect information in transit: This would involve encrypting data so that it can’t be read by anyone who doesn’t have the key to decrypt it. This would make it more difficult for someone to intercept and read the data as it’s being transferred between countries.
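To make the pseudonymization idea above concrete, here is a minimal, illustrative Python sketch using a keyed hash (HMAC) to replace a name with a stable token. This is our own simplified example, not the implementation of Tokenizer+ or any other product; the key and field names are placeholders.

```python
import hmac
import hashlib

# Secret key held only by the data controller; illustrative value only.
SECRET_KEY = b"replace-with-a-real-secret"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a stable, keyed pseudonym.

    The same input always maps to the same token, so records remain
    linkable for analytics across borders, but the mapping cannot be
    reversed without the secret key.
    """
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

record = {"name": "Jane Smith", "country": "DE", "purchases": 3}
# Share the record with the name replaced by its pseudonym.
safe_record = {**record, "name": pseudonymize(record["name"])}
print(safe_record)
```

Because the token is deterministic, two data sets pseudonymized with the same key can still be joined on the token, which is exactly the property that distinguishes pseudonymization from full anonymization.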

Arguably the most important outcome of the meeting, however, was the renewed commitment from both sides to find a solution to this problem.

In summaries published after the meeting Thursday, regulators committed to collaborating on legal methods to move data and provide businesses with options for choosing cross-border transfer tools that fit their business needs. The document also stated that nations need legislation ensuring that personal data is only accessed as “essential” for national security purposes.

This is a big issue both for businesses operating in multiple countries and for people who may have their data shared across borders, and the renewed commitment brings a standardized framework one step closer.

It’s still early, but the meeting was a step in the right direction toward finding a solution that will work for everyone.


As the group of the world’s seven largest advanced economies, the G-7 has a responsibility to lead the way in developing policies that allow for cross-border data transfers while also protecting people’s privacy. Its actions will set a precedent for how other countries approach this issue. Time will tell whether it will be successful in balancing these two competing interests. The rise of big data and the global interconnectedness of trade and business have necessitated new ways to facilitate cross-border data transfers. At the same time, data privacy concerns have grown, as has public awareness of the ways that companies collect and use personal data. These trends have created both opportunities and challenges in implementing cross-border data transfers.

As the world becomes more digitized, it’s important that we find a way to protect people’s data while also allowing businesses to operate across borders. This is not an easy task, as data privacy laws vary from country to country. But it’s an important one, as cross-border data transfers are essential for business and trade. Regulated companies are increasingly turning to providers like TeraDact to protect their sensitive data. Our products Tokenizer+, Redactor+, and Secrets+ were developed to help protect people’s sensitive data while still allowing businesses to operate as efficiently as possible, and to protect your data from cyber threats across the world.

Data Protection Trends in Children’s Online Gaming

When we think about children’s data protection, the first issues our minds usually jump to are topics like social media. And that would make sense – online social networks account for a large portion of kids’ internet usage and therefore pose a proportionally high risk.

But what is often overlooked is the fact that many children are also spending their time playing online video games. A recent report found that 76% of kids younger than 18 in the United States play video games regularly.

This is a problem because, like social media, online gaming platforms collect a large amount of data from their users. This includes personal information like names, addresses, and birthdays, as well as more sensitive data like GPS location and biometric data. And, due to the nature of gaming, this data is often collected without the user’s knowledge or consent.

This raises a number of concerns about children’s privacy and data security, as well as the potential for misuse of this information. In this article, we’ll explore some of the key issues related to children’s online gaming and data protection, as well as what measures can be taken to mitigate these risks.

The Safety and Data Risks Faced by Children in Online Gaming

Children and youth are uniquely vulnerable to the dangers posed by the internet. They are still in the process of developing both physically and mentally, which can make them more susceptible to harm. This is especially true in the case of video games, where a slew of potential risks exist.


Addiction

Children’s data can be used to exploit their vulnerabilities and keep them playing video games for long periods of time. This can lead to addiction, which in turn can have a number of negative consequences, including social isolation, sleep deprivation, and poor academic performance. In severe cases, it can lead to mental health problems.


Manipulation

One of the biggest dangers children face when gaming online is manipulation. Game developers and companies have a vested interest in keeping players engaged, and they often use personal information to curate highly targeted in-game advertisements and content. This can be extremely persuasive, and children may be pushed into making social connections or purchases that they wouldn’t otherwise make.

Contact Risks

Another potential danger of online gaming is the possibility of contact risks. When players reveal their personal information, such as their email address or home address, they open themselves up to the possibility of being contacted by someone they don’t know. This can be especially dangerous for young children, who may not yet have the ability to distinguish between safe and unsafe people.

Gambling-Like Mechanisms

Many online games make use of gambling-like mechanisms, such as loot boxes, that can entice players to spend more money. These mechanisms are particularly risky for children, who may not have a full understanding of how they work or the potential financial consequences.

International Examples of Legislative Age Assurance Requirements

As experts have sounded the alarm over children’s data security in the scope of online play, governments have responded through the proposal and institution of several regulatory frameworks aimed at addressing the problem. A number of noteworthy pieces of legislation have come into force around the world over the past few years, and while each differs slightly in content, they all have one common goal: doubling down on companies’ responsibility to protect their youngest users.

Here are just a few examples of prominent regulatory frameworks to have been rolled out in major countries and regions:

U.K. Information Commissioner’s Office Age-Appropriate Design Code

The Age-Appropriate Design Code, informally known as the Children’s Code, was issued by the UK Information Commissioner’s Office and came into force in September 2020, codifying the rules and enforcement procedures surrounding online services that process children’s data. It applies to any company offering online services, such as social media platforms, apps, websites, or gaming services, that are likely to be accessed by children under the age of 18.

The AADC outlines standards on 15 different topics:

● Best interests of the child

● Data protection impact assessments (“DPIA”)

● Age-appropriate application

● Transparency

● Detrimental use of data

● Policies and community standards

● Default settings

● Data minimization

● Data sharing

● Geolocation

● Parental controls

● Profiling

● Nudge techniques

● Connected toys and devices

● Online tools

Each of these covers a unique facet of online service design, but all work together to create a robust sense of protection for minors. Companies are expected to take a risk-based approach to their compliance for each, meaning that the solutions they implement should be appropriate for the risks posed by their products.

While failure to comply with the Age-Appropriate Design Code does not itself make a person or business liable to legal proceedings, it does expose them to prosecution for violations of the UK GDPR and/or PECR.

OECD Recommendation on Children in the Digital Environment

Adopted in 2021, the OECD Recommendation on Children in the Digital Environment is a formal set of guidelines aimed at promoting children’s data safety online. It works in tandem with the OECD’s Digital Service Provider Guidelines to outline the organization’s position on data governance for digital economy actors.

The Recommendation is unique in that it is non-binding, meaning that countries are not held to its standards in a legal sense. However, it does provide a sort of international benchmark for how different nations might approach regulation in this area.

The main tenet of the OECD’s recommendation is to create online environments in which online providers take the “steps necessary to prevent children from accessing services and content that should not be accessible to them, and that could be detrimental to their health and well-being or undermine any of their rights.” 

EU Digital Services Act

The EU Digital Services Act is a newer piece of legislation that was just agreed to by EU members in April 2022. It’s set to be the Union’s main ‘rulebook’ when it comes to protecting citizens’ online privacy both now and in the future as big tech continues to redefine the way we interact with the internet.

Under the DSA, online service providers will be held to higher standards when it comes to the way they process the personal information of both child and adult EU citizens. The Act includes several provisions specifically aimed at protecting minors, including a ban on targeted advertising directed at children and on the algorithmic promotion of content that could cause them harm, such as content depicting violence or self-harm.

Once formally adopted by the EU co-legislators, the Digital Services Act will apply 15 months after its entry into force or from January 1, 2024, whichever is later. It’s being lauded as a major first step in the effort to protect children’s (and all users’) privacy online and has set the standard for future frameworks of its kind.

UK Online Safety Bill

While still before the UK’s House of Commons, the Online Safety Bill is another potential change to come in the data privacy landscape. It addresses the rights of both adults and children when it comes to their data online, with a special focus on the latter.

If passed, the bill would impose a safety duty upon organizations that process minors’ data to implement proportionate measures to mitigate risks to their online safety. While the legislation has had a few bumps in the road since its original proposal, new UK Prime Minister Liz Truss says she plans to adapt and move forward with it in the coming months.

California Age-Appropriate Design Code Act

California is no stranger to data privacy laws. Home to one of the most comprehensive sets of state regulations in North America, the California Consumer Privacy Act (CCPA), the state has clearly set its priorities on protecting citizens’ rights and personal information online. In our “California Consumer Privacy Act (CCPA) Fines” blog post we discuss which companies the act applies to, the basics of the CCPA, the penalties for violating the law, and the proposed changes that could affect it in the future.

The state’s government has just taken another step in that direction with the Age-Appropriate Design Code Act, which unanimously passed a Senate vote on August 29th of this year.

If signed into law by Governor Newsom, it will require businesses to take extra measures to ensure their online platforms are safe for young users. This entails regulating things like the use of algorithms and targeted ads, as well as considering how product design may pose risks to minors.

An August 2022 article on the legislation in The New York Times noted that, once signed, the CAADCA “could herald a shift in the way lawmakers regulate the tech industry” on a broad level in the United States. It pointed to the fact that both regional and national laws in the country have a proven ability to affect the way tech companies operate across the board, and a change in California could very well mean a change for the rest of the US.

Emergent Solutions

Recent regulatory frameworks in data privacy have marked a massive shift in the way companies are required to handle and protect the personal information of their users, with a specific focus on children. In response, many online platforms and service providers have made changes to their terms of service and product design in order to adhere to these new standards.

Some of the biggest emerging solutions include:

Privacy by Design

Privacy by design is an engineering methodology that refers to the incorporation of data privacy into the design of products, services, and systems. The goal is to ensure that privacy is considered from the very beginning of the development process, rather than being an afterthought.

There are seven principles of privacy by design:

1. Action that is proactive not reactive, preventive not remedial

2. Privacy as a default setting and assumption

3. Privacy embedded into design

4. Full functionality – positive-sum, not zero-sum

5. End-to-end security and full lifecycle protection

6. Visibility and transparency

7. Respect for user privacy

The privacy by design methodology was first introduced in the 1990s by Ontario Information and Privacy Commissioner Ann Cavoukian. It’s considered one of the most important data privacy frameworks in the world, and its principles are being promoted as a basis upon which online video games and other digital platforms can better protect children’s privacy.

Risk-Based Treatment

As has been seen in recent years, data protection legislation is moving away from a one-size-fits-all approach and towards a more risk-based treatment of personal information. This refers to the idea that data controllers should consider the risks posed by their processing activities when determining what measures to put in place to protect the rights and freedoms of data subjects. For children, this means taking into account the fact that they are a vulnerable population and tailoring data protection measures accordingly.

Responsible Governance

Responsible governance refers to the ethical and transparent management of data by organizations. It’s based on the principle that data should only be collected, used, and shared in a way that is transparent to the individual and serves their best interests.

There are four main pillars of responsible governance:

Transparency: individuals should be aware of how their data is being used and why

Choice: individuals should have the ability to choose whether or not to share their data

Responsibility: organizations should be held accountable for their use of data

Security: data should be protected against unauthorized access, use, or disclosure

The concept of responsible governance is gaining traction as a way to protect children’s privacy online. It’s being promoted as a means of ensuring that data collected from children is only used in ways that are beneficial to them, and not for commercial or other ulterior purposes.

Parental Controls

In the face of ever-growing concerns about children’s privacy online, many parents are taking matters into their own hands by implementing parental controls on their devices and home internet networks. There are several different ways to go about this, but some of the most popular methods include setting up child-friendly browsers and content filters, as well as using apps that track screen time and limit app usage. While parental controls are not a perfect solution, they can be a helpful way to give parents some peace of mind when it comes to their kids’ online activity.

Video games can help children develop their creativity, social skills, and knowledge. However, as digital technologies become more sophisticated and firmly entrenched in our daily lives, it is increasingly important that we begin to structure them in a way that considers and respects children’s privacy rights. By understanding the trends in data protection, and by implementing responsible governance practices, we can help create a safer and more secure online environment for children to play and learn in.

California Consumer Privacy Act (CCPA) Fines

Any company, organization, or marketer that does business online knows (or should know) about the California Consumer Privacy Act (CCPA). But with all the talk about the law, it can be hard to understand what it actually is and how it affects businesses. In this article, we’ll take a look at the basics of the CCPA, the penalties for violating the law, and the proposed changes that could affect it in the future.

What Is the California Consumer Privacy Act?

The California Consumer Privacy Act (CCPA) is a set of regulatory guidelines established by the California state government and imposed upon businesses that collect consumers’ personal data. It is among the strongest and most stringent privacy laws in the United States and has a far-reaching impact in terms of both the businesses to which it applies and the rights it affords consumers.

The CCPA was passed in response to the numerous high-profile data breaches that have occurred in recent years, as well as the growing concern over the use of personal data by businesses for marketing and other purposes. The law is designed to give consumers more control over their personal data, and to hold businesses accountable for the way they collect, use, and protect that data.

The Provisions of the California Consumer Privacy Act

The California Consumer Privacy Act covers four principal provisions: the right to know, the right to opt-out, the right to delete, and the right to equal service. We’ll briefly explain each below.

1. The Right to Know

Under the CCPA, consumers have the right to know what personal information businesses collect about them and how it is used. They’re entitled to disclosure of the categories of data collected and can request further, more specific details as needed. This includes inquiries about what personal information a business has sold, what types of third parties it has sold the information to, and where it got that data in the first place.

(Cal. Civ. Code § 1798.100, § 1798.110, § 1798.115)

2. The Right to Opt-Out

The California Consumer Privacy Act mandates that businesses provide individuals with an easy and direct way to opt out of the sale of their personal information. The most common way this is done is through a “Do Not Sell My Personal Information” link on a website homepage or a cookie preference banner with a similar toggle.

It’s also worth noting that businesses may not sell the personal information of consumers they know to be under 16 years old without opt-in consent. In these cases, consent must come from the consumer themselves if they are between 13 and 16, or from a parent or guardian if they are under 13.

(Cal. Civ. Code § 1798.120)

3. The Right to Delete

Individuals protected by the California Consumer Privacy Act have the right to request the deletion of their personal information from the entities that collect it. Businesses that receive these requests are obliged to fulfill them unless the information they have collected is necessary for things like the completion of a related transaction or contract.

(Cal. Civ. Code § 1798.105)

4. The Right to Receive Equal Service

The CCPA is very clear about its intolerance of discrimination against consumers who exercise their rights. The law directly prohibits businesses and entities from treating individuals unfairly because they’ve requested to know what personal information is being collected about them, or because they’ve opted out of the sale of their information. Prohibited conduct includes refusing service, providing a lower quality of service, and charging different prices or rates for services.

(Cal. Civ. Code § 1798.125)

Defining ‘Personal Information’

The CCPA’s definition of what qualifies as ‘personal information’ is important to fully understand the scope of the law and how it applies.

As directly written, it considers ‘personal information’ to be any “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” (Cal. Civ. Code § 1798.140(o)(1)).

Examples of what type of data this can cover include:

●         Social Security Numbers

●         Purchase histories

●         Driver’s license numbers

●         Internet Protocol addresses

The information listed above falls into the personally identifiable information (PII) category. To learn more about PII and how legislation is trying to protect it, view our previous posts: “PIPL: What You Need to Know About Changing Cybersecurity in China”, and “A Guide to the GDPR, Europe’s Stringent Data Protection Law”. Protecting PII is our focus here at TeraDact.

It’s worth noting that while technically meeting the definition, some types of information are not considered to meet the threshold of ‘personal’ and are not subject to CCPA rules. Publicly available information, for example – like someone’s name printed in a newspaper – is not included. Nor is de-identified or aggregate data, which are both defined and further explained in the CCPA itself.

Who Does the California Consumer Privacy Act Apply To?

So, who’s subject to all of these rules and provisions? The CCPA was specifically designed to target businesses but can still apply to any organization or person that operates in California and meets at least one of the following criteria.

Annual Revenues of $25 Million or Higher

This part is pretty self-explanatory. Businesses making more than $25 million in annual revenue are generally required to comply with the law.

Commercially Buying, Sharing, Receiving, or Selling the Data of Over 50,000 Consumers Annually

Another clear-cut rule. If your business handles the personal information of more than 50,000 Californian consumers, residents, or households on an annual basis, you’ll have to comply with the law.

It’s important to note that this rule applies even if you don’t share or sell the information you collect – simply having it in your possession puts you over the threshold.

Deriving Over 50 Percent of Annual Revenues from the Sale of Personal Information

This is another fairly straightforward rule, but one that’s worth unpacking a bit. The ‘sale’ of personal information under the CCPA can be broadly defined as anything that would enable access to the data – including exchanging, renting, releasing, disclosing, or otherwise making it available.

So, if more than 50 percent of your business’s annual revenue comes from activities like this, you’ll be required to comply with the law.
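Taken together, the three thresholds above amount to a simple “any-of” test. As a rough illustration only (the figures are those cited in this article for the original CCPA; later amendments adjust them, and this is not legal advice), the check might be sketched as:

```python
# Rough sketch of the CCPA applicability test described above.
# Thresholds are the figures cited in this article; treat them as
# illustrative assumptions, not a compliance determination.

REVENUE_THRESHOLD = 25_000_000        # annual gross revenue, USD
CONSUMER_THRESHOLD = 50_000           # consumers/households handled per year
SALE_REVENUE_SHARE_THRESHOLD = 0.50   # share of revenue from selling data

def ccpa_may_apply(annual_revenue: float,
                   consumers_handled: int,
                   revenue_share_from_data_sales: float) -> bool:
    """A business meeting ANY one of the three criteria falls under the law."""
    return (annual_revenue > REVENUE_THRESHOLD
            or consumers_handled > CONSUMER_THRESHOLD
            or revenue_share_from_data_sales > SALE_REVENUE_SHARE_THRESHOLD)

# A small firm that handles many consumer records still qualifies:
print(ccpa_may_apply(2_000_000, 60_000, 0.0))   # True
# A small firm below all three thresholds does not:
print(ccpa_may_apply(1_000_000, 100, 0.10))     # False
```

Note that the criteria are disjunctive: clearing any single threshold is enough, which is why the data-volume rule catches businesses far below the revenue cutoff.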

What Are the Penalties for Non-Compliance with the California Consumer Privacy Act?

Violations of the California Consumer Privacy Act don’t go unpunished; the law outlines several penalties for non-compliance with its regulations. And because it applies to businesses, service providers, and individuals, there’s a range of potential punishments that could be levied.

Civil Penalties

The most common penalties for violating the CCPA are civil penalties. Civil penalties are a type of financial remedy government entities impose for wrongdoing. In the case of the CCPA, civil penalties are assessed and enforced by the state attorney general’s office, which has the authority to investigate potential violations and file lawsuits on behalf of Californian consumers.

The California Attorney General can pursue penalties from organizations that violate any part of the California Consumer Privacy Act.

Just some examples of what these violations can look like include:

●         Failing to respond to consumers’ requests for the deletion of their personal information

●         Failing to have or uphold CCPA-compliant privacy policies

●         Selling consumers’ personal data without offering them a means to opt-out

●         Discriminating against individuals who exercise their rights under the CCPA

●         Failing to give adequate notice of the collection of personal information

Service providers who retain, use, or disclose personal data for purposes outside of their contracts with businesses may also be liable for penalty under the CCPA.

Individuals can expose themselves to penalty as well, by unlawfully breaching rules on the onward transfer of personal data.

The costs of violating the CCPA are severe, with maximum fines of up to $2,500 per violation or $7,500 per intentional violation. And because the law applies to each consumer whose data is mishandled, a single incident could result in multiple penalties.
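Because penalties accrue per consumer per violation, exposure scales quickly. A back-of-the-envelope sketch using the statutory maximums cited above (hypothetical numbers, purely for illustration):

```python
# Hypothetical exposure estimate using the CCPA maximums cited above:
# up to $2,500 per violation, or $7,500 per intentional violation.
PER_VIOLATION = 2_500
PER_INTENTIONAL = 7_500

def max_civil_penalty(consumers_affected: int, intentional: bool) -> int:
    """Upper bound on civil penalties when each affected consumer
    counts as a separate violation."""
    rate = PER_INTENTIONAL if intentional else PER_VIOLATION
    return consumers_affected * rate

# A single mishandled dataset covering 1,000 consumers:
print(max_civil_penalty(1_000, intentional=False))  # 2500000
print(max_civil_penalty(1_000, intentional=True))   # 7500000
```

Even an unintentional violation touching a modest number of records can carry a theoretical maximum in the millions of dollars.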

Waiting Period

It’s important to note that businesses that violate the California Consumer Privacy Act are entitled to a cure period before they can be fined. The law gives businesses 30 days from written notice to correct any violations before they can be subject to penalties.

If the business cures the noticed violation(s) and provides an express written statement that it has done so and that no further violations will occur, no action may be brought.

Enforcement by the California Attorney General

The CCPA gives the state attorney general’s office broad enforcement powers, including the authority to investigate potential violations and file lawsuits on behalf of Californian consumers.

In addition to seeking civil penalties, the attorney general can also seek injunctions or temporary restraining orders to stop businesses from violating the law.

Private Right of Action

In addition to the civil penalty route, the CCPA also gives consumers the right to take legal action on their own behalf in the case of a violation. Private action is a term that refers to the ability of an individual to bring a lawsuit against another party without the involvement of the government.

The CCPA gives Californian consumers the right to sue businesses, service providers, or any person acting on behalf of a business or service provider for data breaches that result from the unauthorized access, theft, or disclosure of their personal information.

Consumers can sue for damages even if they haven’t suffered any financial loss because of the breach, and they can also seek punitive damages if the court finds that the business or service provider acted recklessly or intentionally violated the law.

The financial repercussions of these cases are somewhat less severe, with a range of $100 to $750 that can be sought per consumer per incident. Actual damages may also be awarded, but only if the consumer can prove that they’ve suffered a financial loss because of the breach.

(Cal. Civ. Code § 1798.150)
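The per-consumer range makes aggregate exposure easy to bound. A hypothetical sketch using the $100–$750 statutory figures cited above (illustrative only, not legal advice):

```python
# Statutory damages range under the CCPA private right of action
# cited above: $100 to $750 per consumer per incident.
MIN_STATUTORY, MAX_STATUTORY = 100, 750

def statutory_damages_range(consumers_affected: int) -> tuple:
    """Lower and upper bound on statutory damages for one incident."""
    return (consumers_affected * MIN_STATUTORY,
            consumers_affected * MAX_STATUTORY)

# A breach affecting 10,000 consumers:
low, high = statutory_damages_range(10_000)
print(low, high)  # 1000000 7500000
```

Though each individual award is small next to the civil-penalty maximums, a breach affecting many consumers still produces a multi-million-dollar range.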

For private actions seeking statutory damages, consumers must first give the business 30 days’ written notice identifying the violations, and the business can avoid suit by curing them within that window; only claims for actual pecuniary damages can be brought without this notice.

Proposed Amendments to the CCPA

Like any major piece of legislation, the California Consumer Protection Act is poised to change with time. This is especially true given the law’s subject matter; because technology is always changing, the ways in which personal data is collected and used will likely continue to evolve.

Considering this, lawmakers have already proposed several amendments to the CCPA. These amendments range from technical corrections to substantive changes that would modify the scope or enforcement of the law.

Some potential prominent amendments to come include:

A Shift Away from Dark Patterns

Dark patterns are a type of user interface design meant to trick people into doing things they might not want to do, such as signing up for a service they don’t need or providing personal information they might not want to share.

One recently proposed amendment to the CCPA would make it illegal for businesses to use dark patterns when collecting personal information from consumers. This would help to ensure that consumers are only providing their personal data willingly and with full knowledge of how it will be used.

The Right to Correct Personal Information

Newly proposed amendments suggest adding a ‘right to correct’ inaccurate personal information to the CCPA. This new section would give consumers the right to correct any inaccurate personal data businesses collect, as well as outline documentation requirements, methods for correction, disclosure requirements for denial, and alternative solutions.

While relatively new to the CCPA, this concept has existed for some time at the international level and is already familiar to many businesses subject to the GDPR. For California businesses, though, this proposed amendment would simply be another obligation to add to their CCPA compliance checklist.

Privacy Policy Requirements

In addition to the information already required to be disclosed in a privacy policy under the CCPA, proposed amendments would add several new specific elements that businesses would need to include.

These are:

●         The date the privacy policy was last updated

●         The length of time the business plans to retain each category of personal information, or, if that’s not possible, the criteria it uses to determine how long it will be retained

●         Disclosure of whether the business allows third parties to control their collection of personal data, and if so, the names and business practices of these parties

●         A description of consumers’ new rights as described in the amendment

●         Clear directions for how consumers can exercise their newly amended rights

●         A description of how the business will process opt-out requests

Organizations that process the personal data of 10 million consumers or more will also be required to include a link to certain reporting requirements in their privacy policy under this new amendment.

The CCPA’s reach and impact on business are significant, there’s no doubt about that. The law gives Californian consumers a number of rights with respect to their personal data, and businesses that mishandle that data can be subject to some severe penalties. By educating yourself on the law and taking steps to ensure that your business complies, you can help avoid any potential problems down the road.