The Coming Revolution in Tech

Generative AI marks a massive leap forward in human innovation. In a matter of decades, we’ve gone from merely imagining this technology to using it in everyday life.

ChatGPT is the first thing that comes to mind for many in this respect; released in November 2022, the conversational chatbot quickly went viral for its ability to churn out college-level essays, cooking recipes, and original poems from nothing more than a short text prompt. It’s just one of many generative AI tools that pioneering tech firm OpenAI has developed over the past few years, from the initial release of the Generative Pretrained Transformer, or GPT-1, in 2018 to its most recent launch of GPT-4 this past March. The newest model boasts a number of added features, such as the ability to accept both image and text inputs, produce longer outputs, and maintain longer conversations with users.

The past few years in particular have been the most formative for generative AI, as investors and industry giants recognize the potential gains ahead. They’ve begun racing to see who can capitalize on these opportunities first and dominate an emerging but extremely promising market. Big names like Google, Microsoft, IBM, and Amazon all have their hats – or dollars – in the ring and major projects on the horizon.

Google, for instance, just released the beta version of its proprietary conversational bot Bard, which it hopes to one day integrate into users’ search experience. While Microsoft has its own initiatives in-house, it recently invested in industry leader OpenAI to the tune of $10 billion. Both IBM and Amazon have projects in the works as well; the former is building a supercomputer, named Vela, designed to help its scientists create and optimize new AI models. Amazon expanded its partnership with startup Hugging Face in February to bolster AWS and potentially provide customers with exclusive AI-powered tools.

All of this action has experts predicting a major revolution to come in the tech industry – and the world at large. According to data from PwC, AI contributed a whopping $2 trillion to global GDP in 2019, and the firm forecasts that it could add up to $15.7 trillion by 2030.

And unexpectedly rapid adoption is only adding fuel to this growth. IBM’s 2022 Global AI Adoption Index reports that over 35% of companies are now using AI to help run their business. Some countries, such as Brazil, exceed the average and have adoption rates upwards of 41%. 

Looking forward, an additional 42% of organizations around the world say they plan on exploring Artificial Intelligence in the coming years. We could very much be headed for a future reality in which computers that think for themselves are commonplace, forcing anyone who wants to stay competitive to adopt AI or risk obsolescence.

Tapping Into Tech’s AI Revolution in Three Steps

As with other technological marvels of the past, business leaders must choose how they want to respond to revolutionary developments in AI: with reluctance or with enthusiasm. The latter has historically proven to be the better option, as those who embrace new ideas and technologies first are typically the ones that reap the biggest benefits. Just ask Google, Microsoft, IBM, and Amazon – they all got their start by capitalizing upon emerging trends, and by the looks of it, are poised to do so again.

But just how can business leaders leverage the transformative power of AI? Here are three steps to get you started:

1. Identifying Potential

Part of the reason for AI’s significance to date is its accessibility. Unlike many other cutting-edge technologies, generative tools have been developed in front of the public eye and with an interest in open use. This has enabled them to become mainstream much earlier, providing opportunities for businesses of all sizes to assess their own potential applications.

In this sense, the first step every organization needs to take in adopting AI is to consider how it can best capitalize upon the technology’s capabilities. The answer will look different from business to business depending on industry and size, but could range from customer service improvements to increased organizational efficiency or enhanced product designs.

The key to unlocking full value lies in finding ‘golden’ use cases – those different from what competitors are exploring and that could yield the biggest operational advantage. This is critical because virtually everyone is jumping on the AI bandwagon. Seeing real returns on AI investments means not only implementing the technology but doing so in a way that no other company has thought of.

Once use cases have been established, it’s a matter of narrowing down options to decide which course of adoption will be the most effective. For organizations without a particularly unique use case, buying or using an existing open-source model is likely enough. There are plenty of options on the market that can be implemented at relatively low cost and in short order. The only tradeoff is reduced flexibility and control; those that don’t need either have nothing to lose.

Companies with advanced functionalities in mind, on the other hand, are best suited to training their proprietary model, either in-house or through a partnership with other organizations. This requires a lot more investment upfront but may be worth it for organizations that want to use generative AI in very specific ways.

2. Preparing Staff and Resources

The unbelievable efficiencies of AI are a double-edged sword. Implementing the technology will redefine standards in more ways than one, potentially changing the face of workplaces forever. As adoption ramps up, it is inevitable that some jobs will be replaced by AI-powered tools. This is probably the biggest public fear surrounding the technology to date: a narrative that it will simply eradicate the need for human labor.

The reality is more nuanced than this, however. Certain jobs will indeed be automated out of existence, but many others are likely to be augmented or transformed by AI. For leaders embracing this technology, the primary concern should be on upskilling current staff and putting systems in place to ensure workers are as prepared as possible for this shift.

Effective next steps include:

Reassuring Employees

The first and most important challenge of an effective AI adoption is getting employees on board. As already mentioned, many are reluctant. They see the technology as a threat to their livelihoods and dismiss the idea that they could ever become “AI experts”.

In order to alter this perception, business leaders need to make it clear that the technology is not replacing employees but augmenting their existing roles. This requires effective communication around how exactly AI will be used, what new capabilities it will bring, and how it is expected to improve existing processes.

Regular check-ins to track employee sentiment will also be important to ensure workers remain comfortable with the changes AI is bringing to the organization. Any noted resistance should be addressed directly, and employees should always be given the opportunity to ask questions or suggest ways AI could be used more effectively.

Adjusting Operating Models

The implementation of AI will also lead to changes in how work is organized, performed, and evaluated. Processes that were once checked for accuracy through manual review are now likely to be supervised by AI algorithms. In the same vein, hiring practices may become more automated and data-driven over time.

Organizations should look to establish cross-functional teams of IT, HR, and line-of-business professionals to address the new requirements associated with AI adoption. An emphasis on building scalability and creating feedback loops should also be a priority. This will ensure AI is not deployed in one-off projects but integrated into the way the organization functions from the ground up.

Upskilling

Developing an effective AI environment requires a new skillset – one not necessarily held by existing staff. Organizations should consider investing in external training programs to provide employees with the technical know-how they need to take full advantage of the technology.

The extent of this training will vary depending on the complexity of the AI solution but could range from developing code to understanding basic machine learning concepts. The key is that employees should be familiar enough with the technology to use it effectively and see the value of its implementation in their day-to-day roles.

3. Implementing Policies

The tricky thing about great power is that it often comes with downsides. In the case of Artificial Intelligence, this manifests in the form of risk.

While highly capable, today’s models are nowhere near the point where they could be trusted to carry out tasks completely unsupervised. They seem smart – some people even argue that AI has already reached sentience – but in reality, they are just very good at acting like it.

Generative models are trained on massive amounts of data to learn the context surrounding given inputs and respond with relevant outputs. The information they’re fed essentially defines what they know, while the way they’re trained determines how they interpret that information. Mistakes are very common – as are bias, inconsistency, and inaccuracy.

Not only that, but AI tools aren’t inherently secure. Third-party applications and cloud services often require access to confidential data, which increases the risk of data breaches. The code that these models generate is also vulnerable; one study conducted by New York University’s Tandon School of Engineering found that GitHub Copilot produces insecure outputs about 40% of the time. This opens the door to bugs, design flaws, malicious attacks, and data theft.

As far as the potential for error goes, organizations need to recognize that there’s currently no generative AI model they can simply leave on autopilot. The day may come when one exists, but we won’t get there without learning from what’s available now. Effectively implementing the technology in 2023 means doing so with human oversight. Policies should be put in place to ensure this is the case, and that any errors or malfunctions are quickly identified and addressed. It’s also important to build a culture of ethical AI to ensure human values remain the primary driver of decision-making.

Mitigating the privacy risks posed by AI starts with policy. As the rulebook organizations follow, it’s the most direct way of ensuring AI is used responsibly and securely. Policies should cover topics such as data usage, storage, and accessibility; how the technology will be monitored and maintained; and how it will be used to make decisions. Companies should also consider implementing a system of checks and balances in the form of technical reviews, ethical review boards, and an AI audit trail. This will help to identify issues before they become problems – allowing organizations to address them promptly and effectively.

A revolution in tech is coming. And while the possibilities offered by generative AI were once unimaginable, they’re just the latest in a very long line of discoveries that have reshaped the way our world works. As with previous cycles of evolution, survival in this new era depends on an ability to adapt. Businesses must step up and embrace the potential AI holds for them or risk losing out in an ever-evolving market. TeraDact is an AI/ML information security company, and our products allow for on-prem and cloud-based proactive protection of your sensitive data. Organizations with a clear, well-defined plan for leveraging AI technology such as ours stand to gain a competitive edge, while those that wait too long could be left behind. Reach out today before it’s too late.

Why Businesses Need to Focus on Data Privacy

Privacy is a difficult, yet increasingly relevant issue in the world of business. As technology continues to advance – and more companies begin to rely on it – the potential for personal business data to be accessed and misused grows. Criminals no longer need to rely on physical acts of theft and vandalism to make a profit; they’re now able to do so through the exploitation of digital information.

Unethical practices such as identity theft, phishing scams, and other cyber-attacks can wreak havoc both among businesses and consumers. We’ve entered a modern age in which everyone is at risk, and where the potential consequences are higher than ever. This article will discuss the evolving challenges of privacy and business, what they cost, and how organizations can insulate themselves from the growing threats of malicious data theft.

Why Is Data Privacy Important?

Let’s start by addressing why this is such a big issue in the first place. Digital data has become a valuable commodity in the twenty-first century; it has commercial value, and when left unprotected, can be easily acquired by malicious actors. In the wrong hands, confidential personal information can be used to commit a range of offenses – from blackmail and fraud to identity theft and more.

The University of Maryland estimates that hacking attacks on internet-connected computers occur an average of every 39 seconds. The underlying 2007 research analyzed the behavior of “brute force” attackers, who use simple software-aided techniques to guess usernames and passwords. The computers used in the study were attacked over 2,200 times a day.

This is concerning when you consider the volume of valuable data there is to steal out there; the average company has over half a million files containing sensitive information. Documents like customer records, invoices, financial statements, and private emails all carry a hefty price tag on the black market.

Statistics from IBM show that the average data breach costs $150 per record lost. While that may not seem like much, it adds up quickly given the massive swaths of information often taken in a cyber-attack. The typical data breach costs businesses a whopping $3.92 million – and that’s without factoring in long-term repercussions such as reputational damage, lost customers, and legal fees.

Pew Research says that 81% of consumers believe the potential risks of giving a company data outweigh the benefits. 92% want businesses to take a proactive approach to protect their information, while 64% of Americans say that they would blame the company, not the attacker, for the theft of their personal data.

Yet despite these glaring reasons for change, there hasn’t been nearly enough action to address privacy risks. Varonis estimates that over half of businesses (53%) leave more than 1,000 sensitive files open to all of their employees. All too many take a band-aid approach to data security – waiting until an incident occurs to implement any kind of protective measures.

This isn’t only irresponsible, but also a move that can worsen the effects of an already damaging cyber-attack. Reports show that the global average time to detect and contain a breach was 277 days in 2022 – a figure that shows just how inadequate most companies’ security measures are.

What Is a Privacy Program?

While there could never be a single solution to a problem as complex as business data security, one strategy has the power to make a big impact – privacy programs. These organized sets of policies, processes, and systems are a must for companies looking to limit their exposure to cyber risks and the associated losses.

At its core, a privacy program is an organization-wide initiative designed to protect the private information of consumers, employees, and other stakeholders. It works by securely collecting data, setting up safeguards to keep it safe, providing access only to authorized personnel, and minimizing the risk of a breach.
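To make the “access only to authorized personnel” principle concrete, here is a minimal role-based access check in Python. The role names and permission strings are hypothetical illustrations, not a prescribed scheme; real deployments typically lean on an identity provider rather than a hard-coded table.

```python
# Minimal role-based access control (RBAC) sketch.
# Roles and permission strings below are illustrative examples only.
ROLE_PERMISSIONS = {
    "hr_manager": {"employee_records:read", "employee_records:write"},
    "support_agent": {"customer_records:read"},
    "analyst": {"customer_records:read", "reports:read"},
}

def authorized(role: str, permission: str) -> bool:
    """Deny by default: allow only when the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default shape is the important part: an unknown role or an unlisted permission gets no access rather than falling through to an allow.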

The goal of a privacy program is three-fold:

1. To Protect the Data of Customers and Employees

The primary objective of any privacy program is to keep the data of customers and employees secure. There should be measures in place both to prevent theft, as well as to limit the damage of a successful attack.

2. To Give Individuals Control Over How Their Information Is Shared

Privacy programs ensure that customers and employees are fully aware of the information a company collects from them, how it’s stored, and how it’s used. This is important for reasons of both transparency and informed decision-making.

3. To Meet Regulatory Requirements

Privacy programs help companies stay compliant with relevant data protection laws, such as the EU’s General Data Protection Regulation (GDPR), by outlining processes for the collection, storage, and usage of data. While it isn’t necessarily required to implement a privacy program to adhere to regulations, doing so can save a lot of time and money in both the short term (by avoiding fines) and the long term (by preempting potential incidents).

What Goes into a Privacy Program?

There’s no such thing as a one-size-fits-all privacy program; in order to be effective, solutions must address an organization’s specific needs, considering things like its industry, size, number of employees, volume of data, risk profile, and existing security measures. Failure to do so can open holes in the system and leave a company vulnerable to attack.

That said, some principles should be included in any privacy program: company-wide policies, a data inventory, advanced security measures, compliance tracking and auditing, employee training, and incident response. Together, they form the basis of any effective security strategy – which can be built up in accordance with specific organizational needs.

1. Privacy Policies

The first step in any good privacy program is to create comprehensive and compliant policies. Policies are sets of rules that govern how staff members can use, store, and share data. They include specific guidance related to the types of data that are collected, where it is stored, and who has access to it.

Companies can also choose to create policies related to activities such as data transfers and third-party access, along with any rules about using personal data for marketing. These documents are essential for ensuring that all employees and affiliated parties remain in compliance with national, international, and industry privacy laws.

2. Data Inventory

Data inventory is the practice of mapping out all the different types of data an organization holds, where they are stored, and who has access to them. It’s a critical component of data security, as it enables organizations to identify potential deviations from policy and take corrective measures.

Data inventory can be conducted manually or through automated tools. It should cover the entirety of a company’s digital infrastructure and focus on all types of data – from customer information to confidential business documents.
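As an illustration of what an automated inventory pass might do, the Python sketch below scans a folder for two hypothetical markers of sensitive data (Social Security numbers and email addresses). It is a simplified example; a production tool would cover far more data types, storage systems, and access metadata.

```python
import re
from pathlib import Path

# Two illustrative patterns; real inventories would cover many more
# kinds of sensitive data (names, card numbers, health records, ...).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def inventory(root: str) -> dict:
    """Map each file under `root` to the kinds of sensitive data it appears to hold."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than crash the scan
        kinds = [name for name, rx in PATTERNS.items() if rx.search(text)]
        if kinds:
            findings[str(path)] = kinds
    return findings
```

The output maps file paths to the data categories found, which is exactly the “what is where” view a policy review needs.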

3. Security Measures

Once an organization has established its policies and inventoried its data, it needs to take steps to protect that data. This means investing in malware protection and firewalls, implementing two-factor authentication, and encrypting data as standard. Organizations must also make sure that all of their systems are kept up to date with the latest security patches. These updates can be a nuisance but are vital to staying ahead of emerging risks.
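To make the two-factor authentication piece concrete, here is a minimal Python sketch of the time-based one-time password (TOTP) algorithm from RFC 6238, which is what many authenticator apps implement. It is a teaching sketch, not a hardened library; production systems should use a vetted implementation.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

A verifying server stores the shared secret, computes `totp(secret, current_unix_time)`, and compares it to the code the user typed; because both sides derive the code from the clock, it expires on its own every 30 seconds.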

4. Compliance Tracking and Auditing

Organizations must continuously monitor how their systems are being used to ensure they remain compliant with any relevant privacy laws. They must also audit these systems regularly to identify any malicious or anomalous activity. An effective way to do this is to create an audit log, which documents the activities of users across different systems and applications for review later. This can be done manually or with automated tools such as SIEM (Security Information and Event Management).

5. Employee Training

No matter how effective an organization’s security measures are, they are only as strong as the people who use them. That’s why it is important to provide employees with adequate training on data privacy and security best practices, such as password hygiene, avoiding phishing emails, and recognizing suspicious activity. Organizations can also use simulated phishing campaigns to test their employees’ ability to recognize malicious links or attachments. This is a great way for companies to ensure their staff members are well-prepared and aware of the threats they may face daily.

6. Incident Response

Even with the best security measures in place, organizations can still fall victim to malicious attacks. They must have an incident response plan in place to ensure that any breaches are addressed quickly and efficiently.

Why Implement a Data Privacy Program?

Privacy programs can seem daunting at first glance – there are a lot of parts, people, and policies to consider when implementing them. But they are essential for any organization that collects and stores sensitive data.

Take a look at just a few of the reasons for and benefits of investing in a privacy program:

Compliance – Data privacy laws are complex and ever-evolving, which is why proper compliance is essential for any organization that deals with personal data. Having a robust privacy program and comprehensive policy in place can help organizations ensure they remain compliant with all relevant laws.

Security – Data breaches can be incredibly damaging, both to an organization’s reputation and its bottom line. Privacy programs help organizations prevent or minimize the likelihood of a breach by implementing strong security measures and protocols.

Trust – Privacy programs help to build trust with customers, partners, and other stakeholders by demonstrating that an organization takes data privacy seriously. This can help to attract and retain customers, as well as open new opportunities for collaboration.

Emerging Risks – Today’s cybercriminals are far ahead of where they were 20, or even 10, years ago. And as technology continues to evolve, the risks will only multiply. Privacy programs are essential for staying ahead of emerging threats and ensuring the security of an organization’s data over the long term.

Efficiency – Threats like ransomware can completely derail an organization’s operations. Well-implemented privacy programs enable organizations to detect and respond to these threats quickly, minimizing downtime and disruption.

Financial Savings – Companies that take data security seriously are less likely to face fines or other penalties for non-compliance. They’re also better equipped to deal with ransomware attacks, which can cost the average business tens or even hundreds of thousands of dollars.

How To Implement a Privacy Program

Here are some steps organizations can take to get started:

1. Establish a Privacy Team: Assemble a team of individuals from different departments to create and maintain the privacy program. This team should be chaired by a senior manager or data security officer who is responsible for overseeing everything.

2. Assess Privacy Risks: Conduct a privacy assessment to identify and analyze potential privacy risks and determine the most effective measures for mitigating them.

3. Develop a Privacy Policy: Develop a privacy policy that outlines how information is to be collected, used, and protected. It should be communicated to employees, customers, and other stakeholders.

4. Implement Procedures: Establish procedures for collecting, managing, and storing personal information in accordance with the privacy policy. Examples include employee training, customer opt-in forms, and secure data storage.

5. Monitor Compliance: Conduct audits and reviews, respond to inquiries and complaints, and regularly assess the program to ensure its ongoing effectiveness.

Implementing a privacy program takes time, effort, and resources. But the rewards for doing so are well worth it. Utilizing privacy programs and products like Tokenizer+, Redactor+, and Secrets+ will allow for proper on-prem and cloud protection of your company’s sensitive data. Contact us today to learn more.

Why Education Data Security Is More Important Than Ever

America’s schools face an increasing risk of cybercrime. Breaches have become unfortunately commonplace at institutions from K-12 to post-secondary, leaving students vulnerable at all stages of their educational journeys. This article will explore the issue of education data security, its impacts, the current challenges, and what can be done to mitigate the risks posed by the cybercriminals who target schools.

A Quick Look at the Current State of Affairs

Like virtually every other facet of society, education faces a growing risk of cybercrime. New technology has allowed classrooms to innovate in the instruction they provide students but has, at the same time, opened the door to new vulnerabilities.

The FBI recently sounded the alarm about schools’ increased victimization in cybercrime, reporting an uptick in the number of ransomware attacks targeting the education sector, often originating from stolen remote desktop protocol (RDP) credentials and phishing emails.

Ransomware is a type of malicious software that encrypts data until the victim pays a ransom to receive their decryption key.

Schools can be especially vulnerable to this type of attack due to the large amounts of financial information and personal data they store. CBS News reports that the average ransom payment for a ransomware attack on an educational institution is roughly $50,000, but some schools have paid as much as $1.4 million to get decryption keys and regain access to their data.

K12 SIX also released shocking numbers in its recent ‘State of Cybersecurity’ report, which highlighted four separate incidents resulting in financial damage ranging from $206,000 to an astounding $9.8 million. The latter case involved an attacker obtaining information from the district’s investment advisor and bank.

What are Hackers Looking For?

While it is impossible to definitively state the goals of any given cybercriminal, there are some trends when it comes to education data security breaches.

Financial information is often at the top of their list, as schools may store anything from tuition payments to student loan applications on their servers. Personal information such as Social Security numbers and email addresses can also be of interest, as can other pieces of personally identifiable information (PII) such as medical histories and academic transcripts.

In some cases, cybercriminals may also be after intellectual property, such as research data or proprietary software code. Therefore, schools need to take steps to properly protect their networks from unauthorized access and breaches.

What Impacts Can Ransomware Have on an Educational Institution?

Ransomware can have devastating effects on its victims in any circumstance, but in the case of schools, the impacts are particularly damaging. Educational institutions serve as the backbone of our communities and are essential for a functioning society. When they’re compromised, it can have serious implications beyond the financial losses we’re used to seeing.

Impact on Students

As most educational institutions are responsible for the health and safety of their students, ransomware can be especially damaging. It can cut teachers off from course materials and student records, keeping them from providing quality instruction. It can also delay tests, disrupt grading systems, and limit access to the internet – all of which are essential parts of the learning process.

Impact on Faculty

Ransomware attacks can also have a serious impact on teachers and other faculty members. An attack can prevent them from accessing their email, files, or any other resources they rely on to do their jobs properly. Additionally, it can force administrators to quarantine certain systems or the entire network, limiting access to essential tools.

Impact on Institutional Resources

Ransomware attacks can take a serious toll on an institution’s financial resources as well. They can prevent it from collecting tuition fees or other payments and leave it vulnerable to legal action if student data is compromised. This could result in hefty fines and damage to its reputation – the latter of which could cause enrollment numbers to drop.

Impact on Community

Schools play a fundamental role in their local communities and can affect the lives of many people beyond their students and faculty. Cyberattacks could lead to delays in school projects, lost educational resources, and even closures of certain schools. These can all have serious repercussions for the economic, educational, and cultural well-being of a community at large.

Why are Educational Institutions Targeted in Cyberattacks?

Amid the countless incidents, breaches, and attacks to hit education data security in recent years, one giant question prevails – why?

Schools are large organizations, sure, but they don’t have anything nearly as valuable as a bank or major corporation, right? Wrong.

Educational institutions are privy to a wealth of important information and records, much of which can be tied back to a person’s overall identity. Not only do they register information regarding course enrollments and grades, but also sensitive details like addresses, contact information, identification documents, transaction records, and payment data. Hackers can exploit all of the above in any kind of cyberattack.

Aside from the fact that they hold valuable assets, schools may also experience increased rates of cybercrime due to the lack of measures in place to protect them. The COVID-19 pandemic served as a stark example of just how unprepared America’s schools were to move instruction online, making them easy targets for malicious actors.

One high-profile incident involving the University of California San Francisco saw the school pay over $1 million in ransom when threat actors infected servers in its School of Medicine. NetWalker ransomware held their critical data and medical records hostage, leaving the school with no choice but to pay up.

Another university, this one in Massachusetts, was forced to shut down for a week in June 2021 after it experienced an unexpected ‘cybersecurity incident’.

K-12 schools haven’t been immune either, with The K12 Security Information Exchange (K12 SIX) reporting that more than 1,330 incidents have taken place since 2016. That works out to an average of one incident per school day over the same data period.

While some school administrators may have cybersecurity training, the unfortunate truth is that most teachers have little to none. They’re on the front lines of defense against cyberattacks, and not only are they unprepared, but the technology at their disposal is often outdated.

According to a recent survey conducted by IBM, 60 percent of teachers received no extra cybersecurity training from their employers during the COVID-19 pandemic. Even more shocking, half of the respondents indicated that they had never received cybersecurity training at all.

While federal laws like the Family Educational Rights and Privacy Act (FERPA), Children’s Internet Protection Act (CIPA), and Children’s Online Privacy Protection Rule (COPPA) aim to regulate students’ activity online, they fall short of fully protecting schools themselves. Education records remain a hot item for attackers, and schools are simply not doing enough to defend against them.

But according to experts, this might be an issue of resource scarcity rather than carelessness. Although many schools recognize the importance of robust data security, few have the funding necessary to put proper solutions in place. It’s hard enough to find a district with the budget for proper class sizes and resources; IT security simply isn’t at the top of the priority list.

As attacks continue to proliferate, though, the consensus among both lawmakers and educators is that cybersecurity can’t wait any longer. Many states have begun stepping up their data privacy enforcement efforts, with 45 having enacted some form of new student-oriented law since 2014. Among the most prominent is Illinois’ Student Online Personal Protection Act (SOPPA), which requires the State Board of Education to publish ongoing lists of the online services and applications districts use, what data is being collected, and why it’s being collected, and to notify parents of any compromised security within 30 days.

Other states have followed suit, with laws regulating education data being passed in various forms. Examples include the Student Online Personal Information Protection Act (SOPIPA) in California and the Massachusetts Data Security Law.

How are Schools Attacked by Cybercriminals?

Cyberattacks directed at schools can take many forms, but generally target unsuspecting teachers, students, or staff members. Among the most common are phishing and ransomware attacks, both of which can be devastating for educational institutions.

In the case of phishing, malicious actors try to get users to click on a malicious link or open an infected file, usually via an email masquerading as legitimate correspondence. It’s the same tactic hackers use to steal credentials and money from individual users, but in the case of schools, they may also be after personal data like Social Security numbers or financial details.

Ransomware attacks on the other hand are especially dangerous for educational institutions due to their sheer potential for destruction. This type of attack occurs when malicious actors infiltrate a school’s computer system, encrypt the data stored on it and then demand a ransom for its return. What makes this attack particularly dangerous is that schools don’t always have the resources necessary to restore their data, leaving them in a difficult situation.

Ultimately, cybercriminals are motivated by financial gain or political agendas – or sometimes both. Through their attacks, they can access and steal sensitive data or demand hefty ransoms for the return of important files. For schools already struggling with resource shortages, these attacks can be especially damaging. They have the potential to further exacerbate an already-dire crisis in education, with both students and staff forced to pay the cost.

How School Data Security Can Be Improved

It’s clear that there’s a problem with the data security of America’s educational institutions. Attacks and breaches have become all too common, and many decision-makers continue to push for legislative protections and guidelines.

But besides that, what else can be done? Government moves at a notoriously slow pace, and as we’ve already established, cybercrime does the opposite. In fact, regulations may never keep pace with the real threats that exist out there; should schools simply sit back and accept that risk?

The answer is a hard no. While the current landscape may be full of threats, and many districts underfunded, schools aren’t defenseless against cybercrime. There are several things any educational institution can do to strengthen its security posture, both in the short and long term.

A few examples are listed below.

Creating a Cybersecurity Plan

Having a cybersecurity plan in place is essential for any educational institution, as it outlines the steps the institution will take to protect its data and systems from malicious attacks. The plan should be regularly reviewed and updated to reflect the ever-changing threat landscape, while also making sure that everyone in the district understands exactly what’s expected of them.

Investing in Security Training

All staff members need to be familiar with basic security protocols, so investing in regular training sessions can go a long way. They don’t have to be expensive, either; many third-party providers offer free webinars or online courses that can give everyone the fundamentals they need to stay safe.

Using and Regularly Updating Antivirus and Anti-Malware Software

Antivirus and anti-malware software are essential tools for any school, as they can detect and neutralize potential threats before they cause damage. However, this is only effective if the software is regularly updated with the latest security patches. This should be done at least once a month, but more frequently if possible.

It’s clear that the age of data privacy is upon our educational institutions, and with it comes a slew of new challenges to be overcome. Schools must begin finding the resources to bolster their security, or risk becoming easy prey for cybercriminals.

By investing in the right tools and personnel, any educational institution can protect itself from a cybersecurity standpoint – and in the long run, this could make all the difference. Tools like TeraDact’s Tokenizer+, Redactor+, and Secrets+ can buffer security measures for any size institution. With the right attention and resources, our schools may soon be better able to protect the data entrusted to them.

Data Security Must Be Prioritized in the Healthcare Industry

We’ve long seen the digital landscape as separate from our own lives: what happens online stays online, and what goes on in the real world is its own affair. But with the growing adoption of technology around the globe, it’s becoming increasingly clear that these two worlds are merging more every day. Nowhere is this truer than in the healthcare industry, where organizations must ensure the safety of their patients’ information and data. Cybersecurity is now a critical element of any successful healthcare practice, and those who don’t take the necessary steps to protect themselves risk losing both their patients’ trust and a lot of money.

In this article, we’ll take an in-depth look at the current digital risks faced in the healthcare industry, their implications, as well as what steps can be taken to mitigate them.

The Current State of Affairs

Cyberattacks on the healthcare industry are unfortunately on the rise. While the sector’s ongoing digitization has certainly helped it evolve with modern times, new technologies are also opening the door to advanced risks providers simply aren’t ready for. The trend seen over the past few years is staggering. Data shows that 45 million people were affected by healthcare data breaches in 2021, 11 million more than the 34 million impacted the year before. This is even more worrying when you consider how fast things have climbed in less than five years; 2021’s total is more than three times the number of people affected by breaches in 2018.

Hospitals are facing an onslaught of attacks from cybercriminals looking to exploit patient data. Constant changes in technology have made circumventing security measures easy for hackers – costing healthcare providers billions in turn. Statistics indicate that in 2021, the sector’s data breaches grew at three times the global average rate and were nearly twice as costly year-over-year. Business disruption, revenue losses from system downtime, reputation losses, and diminished goodwill cost providers roughly $10 million per incident, with each taking an average of 212 days to identify and another 75 to contain. Compromised credentials have been the most common factor behind these crises to date, although phishing, cloud misconfiguration, and business email compromise have also had a hand in attacks.

Healthcare’s current risk outlook has only been made worse by emerging global conflicts and political tensions. The war in Ukraine has specifically spurred malicious activity as Russian actors look to target Kyiv and its Western allies.

Back in April of last year, not long after the United States imposed sanctions against Russia in response to its invasion of Ukraine, the U.S. Department of Homeland Security released a statement warning Americans of possible retaliations on domestic digital infrastructure. The security warning directly mentioned threats to healthcare and referenced several prominent hacking groups that had already levied such attacks. The American Hospital Association (AHA) also sounded the alarm, telling providers to prepare for the potential disruption of critical systems, supply chains, and electronic medical records.

Months of fighting, further sanctions, and escalations later, we’ve arrived where we are today – an uncertain, adverse landscape in which cybercrime runs rampant and the risks to the healthcare industry have never been greater.

The Consequences of Cyber Attacks on Healthcare

Cyberattacks are bad news whenever and wherever they occur, but are particularly damaging to the healthcare industry. As a foundational part of modern society, hospitals, clinics, and doctor’s offices have a direct relationship with public well-being. Their work literally saves lives and is relied upon to fight our most existential threats. COVID-19 is just one example; it’s hard to imagine where the world would be at this point if not for modern medicine or the breakthroughs it pioneers every day.

For all it does to keep our world running, healthcare is a sector society simply can’t live without. Impairing it in any way has the potential to create a ripple effect that impacts society at large – a consequence we’ve seen time and time again as cybercrime continues to soar internationally.

A 2021 study conducted by the Ponemon Institute found that 89% of the healthcare groups it surveyed had experienced a cyberattack within the previous year. Affected organizations dealt with an average of 43 incidents each, the most expensive costing roughly $4.4 million.

But the impacts extend far beyond financials – the providers highlighted in the study saw a direct decline in quality of care as a result of cyber attacks. From cloud compromises and ransomware to supply chain and business email compromises, 57% indicated that these incidents resulted in poor patient outcomes, nearly half in increased complications, and 23% in increased patient mortality rates.

2021 research from the U.S. Cybersecurity and Infrastructure Security Agency (CISA) also warns of an immediate connection between digital crime and the collapse of healthcare systems as a whole. It outlined how IT network failure can impact multiple facets of a hospital’s functioning, from access to electronic health records and diagnostic technology to ambulance diversion, ICU bed utilization, and strain management. In some cases, it may mean the difference between life and death in an emergency. This is a serious issue many experts have begun speaking out about; as healthcare is one of the most critical pieces of public infrastructure, attacks on it can easily take a human toll.

What Cyber Threats Do Healthcare Providers Face?

Saying that healthcare providers are affected by cybercrime doesn’t capture the true breadth of risks the industry currently faces. Hospitals, clinics, and all organizations holding valuable patient data must keep up with a growing sea of digital threats, each of which can upend their operations differently. Below is a breakdown of the most common, along with the strategies they involve.

Phishing

Phishing is a type of social engineering attack conducted over email, phone, and text. It involves sending out deceptive messages that appear to be from reputable entities like banks or service providers and contain malicious links or attachments. When clicked, these either download malware onto the user’s computer or redirect them to a fraudulent website where they are asked for personal information.

Analysts consistently rank phishing as the number one cause of healthcare data breaches, estimating that the strategy is behind as much as 60% of the sector’s attacks, mainly due to its ability to easily mislead victims and infiltrate systems undetected. Anyone – from a doctor or nurse to an administrative employee – can fall for a phishing scam, making it a popular option among cybercriminals looking for quick access.

Ransomware Attacks

Ransomware attacks are a form of cybercrime in which victims’ computers are targeted with malicious software (malware). Once installed, these programs lock users out of their files and deny access until a ransom is paid.

The healthcare sector is especially vulnerable to this type of attack, with more than one in three providers falling victim in 2020. It’s an especially damaging form of attack as it can not only compromise data but also disrupt operations and prevent access to important healthcare services.

Data Breaches

Data breaches are a type of digital attack that occurs when an unauthorized user gains access to sensitive data or systems. This can be done through a variety of methods, such as exploiting weak passwords or unknown vulnerabilities in IT infrastructure. Once the attacker is inside, they can steal, delete, or modify any information stored on the network.

Data breaches are one of the biggest risks healthcare providers face, as they can lead to the exposure of important patient data – including financial and medical records. This type of information is highly valuable to cybercriminals and can be used for a variety of purposes, such as insurance fraud or identity theft. Healthcare is disproportionately impacted by data breaches, suffering an average of 1.76 incidents per day in 2020.

What Can Be Done to Protect Healthcare?

It’s no secret that healthcare is one of our most important assets as a society. Yet, by all indications, it’s also among the most threatened. As technology evolves, it’s inevitable that new risks will continue to emerge. The current state of affairs proves this is an issue of ill-preparedness, one that can be mitigated with the right solutions in place.

While every healthcare provider has unique risks, there are some general steps any organization can take to lessen its vulnerability to today’s cyber threats. The following sections list the seven most effective measures and how practitioners can implement them.

1. Develop a Comprehensive Security Policy

Having a detailed security policy in place is vital to any organization’s defense against digital threats. It outlines the steps and procedures people must follow for the protection of data, systems, and resources. This document should be updated regularly to reflect any changes in technology or threats.

2. Perform Regular Risk Assessments

Risk assessments are a precautionary, proactive measure, yet just as important to cybersecurity as any other practice included on this list. They involve identifying and analyzing the potential threats a healthcare organization may face, as well as developing strategies to mitigate them. Risk assessments should be conducted regularly, as threats and vulnerabilities can change over time.

3. Train Staff on Cybersecurity Practices

One of the most important steps healthcare providers can take to improve their security is training staff on cybersecurity practices. Allowing employees to become familiar with the basics of cyber-hygiene can go a long way in reducing the risks of an attack. Training should include information on identifying phishing attempts, verifying sender identities, and following safe online practices.

4. Update IT Infrastructure

Hospitals and clinics have long relied on outdated systems and technologies to get their work done. It’s often due to circumstance – organizations are so busy combatting overlapping crises like COVID-19 and drug epidemics that they hardly have the time to take a break and breathe, let alone upgrade their IT.

But as secondary as it may seem, updating IT infrastructure is a critical step that every healthcare facility must take to protect itself. This includes making sure all systems are up to date and patched, installing the latest virus protection software, and performing regular backups of important data.

5. Adopt Encryption Technology

Encryption technology is an essential tool when it comes to protecting data. It scrambles information so that only the intended user can read it, making it difficult for outsiders to access. The healthcare industry should employ encryption technology wherever applicable, including emails, messages, and documents stored on the cloud. Doing so can add an extra layer of protection against cyber attacks and ultimately make it harder for criminals to exploit patient information.
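To make the scrambling idea concrete, here is a toy Python sketch of symmetric encryption with a random nonce. It is illustrative only: the homegrown keystream construction below is not production-grade cryptography, and real deployments should rely on a vetted library (for example, the `cryptography` package’s Fernet).

```python
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudorandom keystream, block by block, with HMAC-SHA256."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # A fresh random nonce ensures the same record never encrypts the same way twice.
    nonce = secrets.token_bytes(16)
    stream = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))

def decrypt(key: bytes, blob: bytes) -> bytes:
    # Only someone holding the same key can regenerate the keystream and unscramble.
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))
```

The point of the sketch is the asymmetry of knowledge: an attacker who steals the encrypted blob but not the key sees only noise, which is exactly why encrypted patient records are far less attractive to exfiltrate.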

6. Invest in Cybersecurity Insurance

Healthcare providers must also consider investing in cybersecurity insurance. This type of policy offers financial protection against any losses incurred as a result of a data breach. It’s not just a smart option for healthcare organizations – it has become increasingly necessary as cybercriminals become more sophisticated and the risks of attack grow.

7. Review Relationships With Third-Party Partners

It’s not enough to implement strong cybersecurity measures within healthcare organizations themselves – the third-party groups they work with can be an equally vulnerable place for criminals to exploit.

It’s in every healthcare group’s best interest to thoroughly review the companies it has relationships with and ensure they’re just as protected from cybercrime. Existing guidelines and assessments like SOC 2 + HITRUST are available to provide healthcare executives with confidence that partners will safeguard data.

While it is – and will likely always be – impossible to completely insulate organizations from cyberattacks, the points outlined in this article are a great start. In many ways, they’ll become essential to businesses that want to survive in an increasingly adverse digital landscape. By taking the time to invest in the right solutions like Tokenizer+, Redactor+, and Secrets+, healthcare providers can ensure that their systems and staff are properly equipped to weather modern threats. For more information on mitigating risk and preventing cyberattacks on your sensitive data, contact TeraDact today.

Why Law Firms Must Keep Data Security at the Core of Their Practice

In a world where more is done online than ever, law firms find themselves at a unique risk of data security attacks. Constantly handling sensitive matters such as Intellectual Property (IP), their systems are an ideal target for criminals in search of exploitable data and files. That’s why it’s essential for legal professionals to stay on top of the latest security measures. Here, we’ll provide an overview of the special vulnerabilities these companies face and review some best practices for mitigation.

Recognizing the Unprecedented Risks of Today’s Digital Business Landscape

Data security is top of mind for all industries heading into 2023. We’re coming off an unprecedented year of attacks, not to mention novel risks that haven’t been seen before.

According to recent statistics, data breaches climbed by an annual average of 15.1% in 2021, costing U.S. businesses more than $6.9 billion. That’s an increase of more than 392 percent from just four years earlier, in 2017, when the same metric was estimated at around $1.4 billion.

Experts predict this reality will only get worse; ongoing digital transformation across sectors has made businesses more reliant on technology than ever. Practically everyone – from your local tax professional to your healthcare provider – utilizes digital tools for the most important parts of their job and must now operate with the added vulnerabilities that come with connected operations.

From ransomware, malware, and phishing to third-party attacks, insider threats, social engineering, formjacking, and more, the potential risks are endless. Quickly evolving attack strategies are increasing organizations’ susceptibility to loss – 82% report serious concerns about their vulnerability. With the cost of cybercrime anticipated to reach $10.5 trillion by 2025, businesses that want to stay afloat are under serious pressure to prioritize the solutions they have in place to mitigate it.

Law Firms’ Unique Vulnerabilities to This Environment

While cybersecurity is a relevant issue for all businesses in the twenty-first century, it holds particular importance for legal professionals and the firms they operate. Like healthcare, education, and finance, the legal industry works with sensitive personal information on a day-to-day basis – including, but not limited to, names, records, contact information, addresses, health histories, and financial documents. Most importantly, lawyers often handle cases involving Intellectual Property (IP), which must remain confidential to protect their clients. Handling such sensitive information makes law firms a prime target for hackers, and smaller firms are especially vulnerable, as they seldom have the resources to devote to a robust security system.

According to a recent survey, one quarter (25%) of law firms report having experienced a data breach. That proportion is only expected to rise in parallel with the growing sophistication of cybercriminals. In fact, many experts believe law firms are at particular risk of security incidents because they’re not taking the necessary steps to secure their data. Whether through lack of training, failure to invest in technology, or insufficient policies, there are several ways law firms can leave themselves open to attack.

Why Law Firms Should Now Prioritize Data Security More Than Ever

Beyond the fact that they’re uniquely vulnerable, law firms have plenty of reasons and incentives to take data security for the serious issue that it is. Below are four of the most notable and why they should be important considerations for legal professionals assessing their strategies.

Changing Industry Standards

As the digital landscape continues to grow more complex, law firms and businesses of all sizes have begun paying extra attention to their data security standards. So much so, in fact, that data security has become a requirement in most vendor contracts. If a law firm fails to meet these standards, it can face serious consequences including termination of the contract and fines.

In addition to higher security standards, many businesses now require that their vendors demonstrate proof of compliance. This means that law firms are expected to have some form of evidence that proves their data is secure. Common compliance methods include ISO 27001/2 and SOC 2 Type II, both of which require frequent measurement and validation.

Ethical and Regulatory Obligations

Lawyers are governed by a number of legal and ethical principles in the course of their work. Every U.S. state has its own expectations based on the Model Rules of Professional Conduct (MRPC), which specifically cover safekeeping property and confidentiality of information. Violations of these rules can result in fines, disciplinary action, and other penalties.

Law firms must also be aware of the data privacy regulations that are unique to their individual regions and states. Aside from rules directly applicable to lawyers, many states also have their own general laws on data privacy, most notably the California Consumer Privacy Act (CCPA), Virginia Consumer Data Protection Act (VCDPA), and Colorado Privacy Act (CPA). These rules focus on protecting consumer data and require those that handle it to take appropriate steps in doing so. They also require law firms to notify impacted parties whenever a data breach occurs. Violations of these regulations can have incredibly severe consequences, ranging from hefty fines to lawsuits.

Client Acquisition and Retention

The online world’s current level of risk hasn’t gone unnoticed by consumers. They’ve become increasingly aware of and concerned about the issue of data security and privacy, and are keeping these top of mind when choosing what businesses they want to work with.

From a general standpoint, roughly 55% of people in the United States say that they would be less likely to work with a company with a history of data breaches. Add to that the sensitive and high-stakes nature of legal endeavors, and that number is likely a lot higher for law firms.

If lawyers want to win new clients and maintain the trust of current ones, they need to show that they’re taking cybersecurity seriously. This is especially important when onboarding new clients, as they will want to know exactly how their data will be used and what the firm is doing to keep it secure.

Growing Risks

As cybersecurity solutions advance with technology, so do the strategies hackers use to circumvent them. What passed as a viable defense 10 years ago certainly wouldn’t hold up against today’s risks. These threats are craftier than ever, not to mention increasingly effective and efficient.

Research reports that in over nine out of ten cases, an external attacker can break through an organization’s network perimeter and obtain access to local network resources. The average time it takes them to breach its internal assets? Only two days.

As these risks continue to evolve, it’s essential for law firms to stay ahead of the curve and continually review their cyber practices. They need to be proactive in assessing their security systems and implementing strategies to protect against any potential threats. Failing to do so can be the difference between a viable business and one that winds up as a data breach statistic.

Reputational Damage

Cyber-attacks can have serious reputational consequences for businesses. Not only do they attract negative press, but the public’s trust and confidence in the business can be quickly eroded.

This is particularly important to consider in the legal industry, where clients rely on their lawyers to act with discretion and integrity. If a law firm is hacked, the public can lose faith in its ability to handle sensitive information, and it may even begin to doubt the firm’s overall competency. That’s the last thing you want when your job is to make people feel safe and secure.

How Law Firms Can Maintain Strong Data Privacy and Cybersecurity Practices

It’s clear that the legal profession must take immediate action to protect both its own data and that of its clients. There are a number of ways to do this, which when used together, can greatly decrease a firm’s chances of falling victim to cybercrime. Below are some of the easiest, most straightforward initiatives lawyers can take to bolster their business’ security.

Leverage Secondary Channels or Two-Factor Authentication

When handling sensitive information, verifying requests for changes or access to data should always be done through secondary channels. This is an especially critical approach when it comes to important account information, such as passwords and contact details.

By using two-factor authentication, business owners can ensure that any requests for changes in account information are only made when verified by an independent source. This extra layer of security ensures that only legitimate users can access sensitive data. It also helps to prevent the potential for cyber-attacks, as any attempts to log in from an unverified source will be blocked.

Think Before You Click

Employees of any business must be trained to think critically before clicking on links or downloading content from unknown sources. The same applies to law firms – anyone working within the business must be aware that clicking a malicious link could open the door for an attacker to gain access to confidential data.

Hackers will intentionally create similar-looking URLs in an attempt to get unsuspecting users to click. They can also attach links to malicious files, which when downloaded, could cause serious harm to a company’s entire system. Employees ought to know how to recognize these traps, and should be trained to always double-check any emails, text messages, or other communications before taking action.
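As a simplified illustration of the check that email filters (and well-trained employees) apply, the sketch below flags URLs whose hostname merely embeds a trusted domain rather than actually matching it. The domain names are hypothetical placeholders, and real phishing detection is far more sophisticated.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of a firm's own domains.
TRUSTED = {"lawfirm.example", "portal.lawfirm.example"}

def looks_suspicious(url: str) -> bool:
    """Heuristic check for look-alike URLs impersonating a trusted domain."""
    host = (urlparse(url).hostname or "").lower()
    if host in TRUSTED:
        return False
    # Flag hosts that embed a trusted name without being it,
    # e.g. "lawfirm.example.attacker.net", plus punycode look-alikes.
    return any(trusted in host for trusted in TRUSTED) or host.startswith("xn--")
```

The key insight the sketch encodes: what matters is the full registered hostname, not whether a familiar brand name appears somewhere in the link.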

Monitor Activity

Firms should implement monitoring and logging software, which tracks all user activity on any associated network. This allows businesses to identify any suspicious behaviors and take the necessary steps to stop an attack before it can become a major issue. Business owners must also ensure that all employees are aware of their logging and monitoring policies and that they understand the implications of any breach in protocol.
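A minimal sketch of what such monitoring might do, assuming a hypothetical log format, is to flag accounts with repeated failed logins before a brute-force attempt succeeds:

```python
from collections import Counter

def flag_brute_force(log_lines: list[str], threshold: int = 5) -> set[str]:
    """Return users with at least `threshold` failed logins.

    Each line is assumed to look like: 'FAILED_LOGIN <user> <ip>'.
    """
    failures = Counter()
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "FAILED_LOGIN":
            failures[parts[1]] += 1
    return {user for user, count in failures.items() if count >= threshold}
```

Commercial monitoring suites do far more (correlating IPs, geolocation, and time of day), but the principle is the same: aggregate events per account and alert when a pattern crosses a threshold.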

Invest In Employee Training

A company’s security stance is only ever as good as the knowledge of its employees. Without proper training, even the most secure networks can be breached. Business owners should ensure that all their employees are up to date with the latest security technologies and have the necessary understanding of how to prevent cyber-attacks.

Update Software Regularly

Software programs are regularly updated with fixes for discovered security vulnerabilities. Putting an update off can increase the risk that malicious actors could exploit any known issues. Businesses need to stay up to date on all their software programs, including their operating systems and security suites.

Refrain From Supplying Sensitive Information Over Email

Phishing is one of the easiest and most common ways businesses become victims of cybercrime. Everyone – from major corporations to government officials – has been duped by this strategy, which involves an attacker posing as a legitimate source to gain access to sensitive information.

Law firms must be especially vigilant about email safety. Sensitive information should never be shared over email; instead, it should be transmitted through other methods such as encrypted messaging or a secure file-sharing application.

Create and Enforce Policies

Creating and enforcing policies can be an effective way to prevent cyber-attacks, especially when it comes to law firms that handle large amounts of confidential data. Business owners should consider creating a policy that outlines the steps employees must take to protect data and enforce any repercussions for failing to do so. Employers should also consider updating their policy regularly to ensure that it is up to date with the latest security techniques.

By understanding the risks that come with working within the legal industry, and taking proactive steps to mitigate them, law firms can ensure that their businesses remain well-protected against any potential cyber threats. TeraDact’s Tokenizer+, Redactor+, and Secrets+ are powerful tools that can be utilized to ensure that law firms, and all other companies, have the best security measures to protect important data. With the stakes being higher than ever, doing so is essential to the success of any organization.

Incognito Mode & User Privacy

Fun fact: over half of internet users believe that Incognito Mode prevents Google from seeing their search history.

Another fun fact: 37% think it’s capable of preventing their employer from tracking them.

The truth? It actually does neither of those things.

In fact, Google collects so much data on its users that it’s become the subject of multiple lawsuits in recent years – the latest being a class-action lawsuit that could potentially cost the company billions of dollars. It alleges that Google illegally collected user data while they were browsing in Incognito Mode and used it to target them with ads.

In this article, we’ll take a look at the specifics of the lawsuit, the true breadth of Incognito Mode, and what it actually does (and doesn’t) protect.

More About the Lawsuit

Originally filed in June 2020 by law firm Boies Schiller Flexner LLP, this latest class-action lawsuit is officially seeking at least $5 billion in damages on behalf of its clients. It accuses Google’s parent company Alphabet of covertly collecting users’ information, including details about what they browse and view, under false pretenses of privacy with Incognito Mode.

The plaintiffs, all of whom are Google account holders, say that the search engine collected, distributed, and sold their personal data for targeted advertising purposes, even in Incognito Mode. They allege that, despite being led to believe their activity was private, Chrome still tracked their online behavior via Google Analytics, Google ‘fingerprinting’ techniques, Google Ad Manager, and concurrent Google applications on their devices. These technologies are very common throughout the internet – apparently, more than 70% of all online websites use one or more. Google’s reported ability to use them to collect consumer data – even in Incognito Mode – means the company can bypass any privacy safeguards consumers might reasonably expect.

Lawyers say they have a large body of evidence supporting their argument that Google intentionally misled its users regarding the feature’s security. Among the most damning are several internal emails that show executives were directly aware of misconceptions surrounding Incognito and specifically chose not to act.

The emails, which were released as part of the court process, clearly illustrate multiple attempts by employees to raise concerns about the issue with their superiors. Some show that staff actively joked about the fact that Incognito didn’t provide privacy, while others highlighted criticism towards Google’s approach to protecting user data.

The most telling, though, are multiple emails between top company executives proving the issue was known at every level. A 2019 message from Google’s Chief Marketing Officer Lorraine Twohill to CEO Sundar Pichai explicitly states that Incognito is “not truly private” – as clear an admission as you could get.

In addition to emails, the released court documents reference multiple internal presentations that further acknowledge Google’s awareness of Incognito’s privacy problem. One states that users “overestimate the protections that Incognito provides”, and another proposes removing the word “private” from its start screen altogether.

Essentially, what the lawyers are arguing with this evidence is that top Google executives were not only aware of users’ misconceptions about their privacy in Incognito, but specifically chose not to act in order to sustain ad profits.

Of course, Google refutes all of the claims against it, stating that it has been upfront with users all along and that the plaintiffs have “purposely mischaracterized” its statements. The tech giant’s lawyers moved to dismiss the case repeatedly in 2021, and each motion was denied, allowing the case to reach the certification process it’s at today. Google was also ordered to pay almost $1 million in legal penalties this past July for failing to disclose evidence in a timely manner.

A Growing Problem

This is by no means the first lawsuit Google has faced in recent years. As a matter of fact, its legal department is currently juggling dozens of active cases, with plaintiffs ranging from the states of Texas and Washington to the District of Columbia, the Republican National Committee, video game maker Epic, and dating app company Match Group. The search giant is also in the middle of issuing settlement payouts for several recently wrapped cases, including $85 million to the State of Arizona and $391.5 million to a 40-state privacy coalition.

But this new lawsuit in particular may be the biggest Google’s ever dealt with – the class-action initiative represents millions of individual users and is fighting for payouts of between $100 and $1,000 to every single one of them. You don’t need to be a genius at math to figure out this could easily rack up to billions of dollars in damages.
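For a rough sense of scale, here’s the arithmetic as a quick sketch – the class size below is a hypothetical figure, since the suit says only “millions” of users:

```python
# Back-of-the-envelope estimate of potential damages.
# class_size is hypothetical; the suit says only "millions" of users.
class_size = 5_000_000
low, high = 100, 1_000  # sought payout per class member, in dollars

print(f"${class_size * low:,} to ${class_size * high:,}")
# $500,000,000 to $5,000,000,000
```

Even at the conservative end of a hypothetical five-million-member class, the damages reach into the hundreds of millions.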

The plaintiffs’ lawyers are currently working on getting the case certified, which would move it one big step closer to an actual trial. And if Google does end up losing in court, it may not have any choice but to start writing very large checks.

Understanding ‘Incognito Mode’

Fully understanding this recent lawsuit and the implications it has for Google comes down to understanding Incognito Mode itself and the role it plays in user privacy.

Incognito Mode is a feature on all major browsers (Chrome, Edge, Safari, Firefox, etc.) that allows users to browse the internet without saving any local data to their device. This includes things like cookies, browsing history, and form autofill information. Essentially, it’s a way to ensure that your activity leaves no trace on the device itself once you close the window.

While this sounds like a foolproof way to browse privately, the reality is that Incognito Mode only offers what’s called “local privacy”. Your device doesn’t keep a record of your activity, but your internet service provider (ISP), the websites you visit, and the software you’re using – including Google – can still track what you’re doing.

By definition, the word “incognito” means to disguise or conceal one’s identity.

The biggest problem Google’s Incognito Mode has faced over the years is the degree of purported concealment it really offers users. From a broad perspective, most people believe that the feature makes their online activity invisible, which as we’ll go on to establish in the next section, isn’t true. Its claims, nature, and name all lure users into a false assumption of security – leading to accusations of privacy violations and lawsuits when the true scope of Incognito’s visibility is revealed.

Does ‘Incognito Mode’ Really Protect Users’ Privacy?

So, does Incognito Mode really protect Google users’ privacy? The answer may depend on what you consider private information.

Industry experts explain that private browsing modes like Incognito are designed to safeguard customer activity on a very basic level. They’re mainly meant to keep your browsing history clean in cases where you share a computer with others – effective at keeping your partner’s Christmas present a secret, but not at protecting data about your online activities, interests, and behaviors.

To offer more concrete answers, we’ve broken down exactly what Incognito officially does and doesn’t protect when you open a window:

What ‘Incognito Mode’ Does Protect

Incognito Mode isn’t completely pointless – it protects multiple facets of online user activity, including the following.

Browsing History

Browsing history refers to the list of web addresses your browser automatically records as you use your computer. The feature mainly exists for convenience, allowing you to quickly revisit websites without having to remember the URL. While this is useful in some cases, it can also be a major invasion of privacy, especially if you frequently visit sites other people might find controversial or sensitive. Browsing history is one aspect of online activity that Google explicitly promises not to save when you use Incognito Mode.

Cookies

A cookie is a small piece of data that’s stored on your computer or mobile device whenever you visit a website. Its main purpose is to remember information about you, such as your login details, language preferences, and items added to your shopping cart. Cookies can make the online experience more convenient, but they also allow companies to track your movements across the internet – even when you’re not using their specific services. Google will still place cookies on your device while you’re in Incognito Mode, but they’ll automatically be deleted as soon as you close the window. This means that any information these cookies collect about your online activity can’t be used to identify you at a later date.
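For a concrete sense of what a cookie actually looks like, Python’s standard library can parse one – the `Set-Cookie` header below is a made-up example:

```python
from http.cookies import SimpleCookie

# Parse a (made-up) Set-Cookie header the way a browser would store it.
cookie = SimpleCookie()
cookie.load('session_id=abc123; Path=/; Max-Age=3600')

print(cookie['session_id'].value)       # abc123
print(cookie['session_id']['max-age'])  # 3600
```

A cookie is just a named value plus a few attributes like its path and lifetime; in Incognito Mode, Chrome simply discards all of these when the window closes.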

Download History

Download history is a record of every file you’ve downloaded while using Chrome. Like browsing history, this feature is designed for convenience, allowing you to quickly access files without having to search for them on your computer. However, it also represents a significant invasion of your privacy, as it can be used to track the types of files you download, where you download them from, and what you do with them afterward. Google doesn’t save your download history when you use Incognito Mode, meaning there’s no browser record of the files you download in this mode – though the files themselves still remain on your device.

Search History

Search history refers to the terms you’ve entered into Google’s search engine, as well as the results pages you’ve accessed through these searches. Like cookies, search history is used to tailor your future experience of the internet, serving you more relevant results and ads based on your previous behavior. Google doesn’t save your search history when you’re in Incognito Mode, meaning that your future searches won’t be influenced by the terms you enter while in this mode.

Site and Form Data

Site and form data is information like usernames, passwords, addresses, and preferences that you’ve entered on specific websites. This data is generally stored in cookies, but can also be saved in your browser’s cache – a temporary storage space for frequently accessed files. Google doesn’t save site data when you use Incognito Mode, meaning that any information you enter on websites while in this mode can’t be accessed or used at a later date.

As mentioned earlier, these capabilities are designed to conceal your local browser history, chiefly to keep others from snooping on your browsing or download habits. However, they do little to actually protect your anonymity online – for that, you’ll need tools like VPNs.

What ‘Incognito Mode’ Doesn’t Protect You From

While it delivers some value, Google Incognito Mode doesn’t go as far in protecting users’ privacy as many think. The following are just some of the ways in which your activity can still be monitored and recorded while using this feature.

Google

Based on the lawsuit discussed in this article, Google’s core web tools – including Analytics and Ad Manager – still track and collect data from users in Incognito Mode. While this information can’t be used to personally identify you, it can be used to build up a detailed profile of your web activity, interests, and habits.

Employer or School Networks

If you’re using a work or school computer, it’s likely that your employer or school has installed monitoring software that allows them to track your activity, even in Incognito Mode. This software can record the websites you visit, the files you download, and the searches you perform, meaning that your employer or school will still be able to see what you’re doing online, even if Google can’t.

Your ISP

Your internet service provider (ISP) can still see the websites you visit while you’re in Incognito Mode. They can also track the amount of data you’re using and the time you spend online. This information can be used to deliver targeted ads and content and can even be sold to third-party companies. The best way to hide online activity from an ISP is to use a Virtual Private Network (VPN) service, which will encrypt your traffic and prevent your ISP from being able to track it.

Malware

Malware is malicious software that can be installed on your computer without your knowledge. Once it’s in place, it can be used to track your activity and collect sensitive information, even when you’re in Incognito Mode.

Government Surveillance

While Incognito Mode can help to protect your privacy from snooping on family members or roommates, it won’t do much to shield you from government surveillance. If the government is monitoring your activity, they’ll still be able to see the websites you visit and the searches you perform, even when you’re in Incognito Mode.

While it has yet to be officially labeled a ‘dark pattern’, Google’s Incognito Mode is likely in store for further controversy in the years to come. For now, it’s important to be aware of the limitations of this privacy feature and to use other tools – like VPNs – to ensure that your activity is truly private and anonymous. Speaking of data privacy and protection, our solutions Tokenizer+, Redactor+, and Secrets+ can improve your security framework and protect your organization from potential cyber threats. Contact us today for more information.

Privacy Law Violations: Who investigates and what are the consequences?

In today’s digital age, the issue of privacy is more important than ever. With the advent of new technologies that allow for the collection and use of large amounts of personal data, the need for comprehensive privacy law has never been greater.

The United States has several federal laws that deal with various aspects of privacy, but there is no all-encompassing privacy law that covers everything. Instead, the various laws deal with specific issues and are often very siloed from one another.

In this article, we’ll take a look at some of the major federal privacy laws in the United States and what they cover.

Fair Credit Reporting Act of 1970

The Fair Credit Reporting Act of 1970 was one of the earliest federal privacy laws passed in the United States. It was signed into law under President Richard Nixon in an effort to guarantee the privacy and accuracy of consumer credit bureau files.

The FCRA protects United States citizens’ personal financial information upon collection by groups like credit agencies, medical information companies, and tenant screening services. The privacy law outlines what guidelines these organizations must follow when handling individuals’ sensitive data and also informs consumers of their rights in regard to the information on their credit reports.

The FCRA is enforced by the Federal Trade Commission, an independent government agency that focuses its work on protecting consumer privacy interests. Inaccurate debt reporting, failure to send poor credit notifications, failure to provide a satisfactory process to prevent identity theft, and dissemination of credit report information without consent are some of the most common forms of violations they encounter.

Upon violating the FCRA, companies can expect to incur a number of penalties and losses, namely damages awarded to victims, court costs, and attorney fees.

Statutory damages don’t require proof of actual harm and range from $100 to $1,000 per violation. Actual damages resulting from a proven failure to act have no limit and are determined on a case-by-case basis. The FCRA also permits class-action lawsuits against companies in violation, which can end up costing them millions.

Privacy Act of 1974

The Privacy Act of 1974 is a federal law that prevents federal agencies from disclosing personal information they collect without an individual’s consent. It was signed by President Gerald Ford near the end of 1974 in response to the Watergate scandal and public concern over the privacy of computerized databases. The Act requires that federal agencies publicly disclose their record systems in the Federal Register, which is a national and official record managed by the U.S. government.

Multiple groups share the responsibility of enforcing the Privacy Act of 1974, as the legislation contains a range of protections that apply to different areas of government. The director of the Office of Management and Budget maintains the interpretation of the act and can release guidelines to these groups as needed. The Federal Register is another important tool in the enforcement of the Privacy Act as it keeps track of all record systems subject to the act, as well as any changes that are made to these systems.

Violation of the Privacy Act of 1974 can be considered both civil and criminal, depending on the specific situation at hand. For instance, an individual may choose to sue an agency to prevent disclosure of their records or to compel an agency to correct inaccurate information. They could similarly sue to have records produced or to receive damages as the result of an intentional violation. 

Alternatively, if an agency willfully discloses personal information without an individual’s consent, they can be fined up to $5,000 and cited for a misdemeanor. It’s important to also mention that this misdemeanor charge can apply to anyone if they request an individual’s record from an agency under false pretenses.

Computer Fraud and Abuse Act of 1986

The Computer Fraud and Abuse Act of 1986 is a federal law that prohibits the unauthorized use of protected devices connected to the internet. In plain language, it essentially makes it a crime to hack into someone else’s computer.

The law was first passed in 1986 and has been amended several times since to better reflect the changing nature of digital technology. It has drawn scrutiny over the years, as some argue its language is vague and allows for overly broad interpretation, meaning the law can be applied to everyday activities people might not realize are technically illegal. The Supreme Court narrowed the statute’s reach in Van Buren v. United States (2021), but its scope remains a point of contention.

The CFAA’s provisions criminalize several activities, including:

● Unauthorized access of a computer

● Acquisition of protected information through unauthorized access

● Extortion involving computers

● Intentional unauthorized access to a computer that results in damage

Penalties can apply to attempts at these offenses even when they are ultimately unsuccessful.

The Department of Justice is in charge of enforcing the Computer Fraud and Abuse Act. They investigate potential cases and, if they believe there is enough evidence, will file charges against the accused.

If someone is found guilty of violating the Computer Fraud and Abuse Act, they can face a number of penalties. These include fines, imprisonment, or both. The amount of the fine and length of imprisonment will depend on the severity of the offense and whether or not the accused has any prior convictions. Generally, first-time violators can expect up to a decade in prison, while second offenders can get up to 20 years.

Children’s Online Privacy Protection Act of 1998

The Children’s Online Privacy Protection Act of 1998 (COPPA) is a federal law enacted to protect the online privacy of children under the age of 13. The FTC is responsible for enforcing this privacy law and has the authority to impose fines on companies that violate COPPA – up to $43,280 per violation.

In order to comply with COPPA, companies must provide clear and concise information about their privacy practices on their website or online service. They must also get parental consent before collecting, disclosing, or using any personal information from children under the age of 13.

There are a few exceptions to this rule. Companies don’t need parental consent in order to collect a child’s name, email address, or other online contact information if they only use this information to:

– Respond directly to a one-time request from the child (such as responding to a question or entering the child in a contest)

– Protect the safety of the child or others

– Comply with the Children’s Internet Protection Act

Additionally, companies are allowed to collect, use, and disclose a child’s personal information without parental consent if they do so to support the website or online service’s internal operations. These operations include things like site maintenance, content delivery, and security measures. The FTC has published a set of Frequently Asked Questions that provides more information about COPPA and how it applies to businesses.

Gramm-Leach-Bliley Act of 1999

The Gramm-Leach-Bliley Act (GLBA) is a federal law that was enacted in 1999. The GLBA’s primary purpose is to protect the privacy of consumer financial information. It applies to any company that has access to this type of information, including banks, credit unions, and other financial institutions.

Under the GLBA, financial institutions must take steps to safeguard the customer information they collect and maintain. They must also provide customers with a notice of their privacy policies and practices. This notice must explain how the institution collects, uses, and discloses customer information.

In addition, the GLBA gives customers the right to opt-out of having their information shared with third parties. Financial institutions must provide customers with a clear and conspicuous way to exercise this right.

The GLBA also requires financial institutions to take steps to protect the security of customer information. This includes implementing physical, technological, and procedural safeguards. Financial institutions must also train their employees on how to handle customer information in a secure manner.

Violations of the GLBA can result in a number of penalties. For each violation, a financial institution can be fined up to $100,000, while its directors and officers can face fines of up to $10,000, up to five years in prison, or both.

The Federal Trade Commission is responsible for enforcing the GLBA and has the authority to pursue legal action against companies that violate the act.

Health Insurance Portability and Accountability Act of 1996

The Health Insurance Portability and Accountability Act of 1996 (HIPAA) is a national law enacted to protect the privacy of patients’ health information. HIPAA applies to any company or organization that handles protected health information (PHI). These entities are known as “covered entities” under HIPAA.

Covered entities must take steps to ensure that PHI is kept confidential and secure. They must also provide patients with a Notice of Privacy Practices that explains how their PHI will be used and disclosed.

Patients have the right to request that their PHI be released to them or to another party. They can also request that their PHI be corrected if they believe it is inaccurate. The ultimate goal of HIPAA is to ensure that patients’ health information is protected while also giving them control over how it is used.

If a covered entity violates HIPAA, it can be subject to civil and/or criminal penalties. These penalties can include fines of up to $50,000 per violation and up to 10 years in prison for individuals who knowingly violate HIPAA.

The Department of Health and Human Services is responsible for enforcing HIPAA. They have a website that provides more information about HIPAA and how it applies to businesses.

Telephone Records and Privacy Protection Act of 2006

The Telephone Records and Privacy Protection Act of 2006 is a federal law that regulates how telephone companies can use and collect customer information. The law was passed in 2006 in response to a growing concern over the way that phone companies were handling customer data. At the time, a number of phone companies were selling customer information to third parties without customers’ knowledge or consent.

The Telephone Records and Privacy Protection Act requires telephone companies to get customers’ consent before using or sharing their information for marketing purposes. Companies must also provide customers with clear and concise notice of their privacy rights and allow them to opt out of having their information used or shared for marketing purposes.

Violating the Telephone Records and Privacy Protection Act can result in a jail sentence of up to 10 years along with financial penalties. Cases involving more than 50 victims can double the fines and add an additional five-year sentence. And if illegally acquired phone records were used to commit a violent crime, a crime against federal officers, or domestic violence, the sentence can be extended by another five years.

Conclusion

America’s privacy laws have a long history, and they are sure to evolve further as we move into the future. The laws discussed in this article are just a snapshot of the many that exist to protect Americans’ privacy rights. While some may argue these laws go too far and others that they don’t go far enough, they nonetheless provide a foundation for how we as a society can safeguard our personal information. Products like Tokenizer+, Redactor+, and Secrets+ provide intelligent, automated AI/ML-based solutions to protect your company’s sensitive information. With the ever-growing importance of data security, it’s only a matter of time before more laws are enacted to keep up with the changing landscape. Yet despite the continuous addition of privacy laws across the globe, cyber-attacks persist. Contact us for more information on how your company can improve its security posture and protect its data from cyber threats.

Cross-Border Sharing in the G-7 while Protecting Sensitive Data

Globalization has utterly redefined the state of the world we live in. People, businesses, and governments are now more interconnected than at any other moment in history, and the flow of information has become the lifeblood of this new era. For countries to maintain a leadership role in the global economy, it is essential that they embrace this new reality and adapt their policies to take advantage of the opportunities cross-border data flows present.

One area where this is particularly relevant is data sharing. In the past, businesses and governments were able to keep their data close to the chest, using it as a competitive advantage or simply keeping it out of the hands of others. However, in today’s interconnected world, this is no longer an option. Countries that want to remain competitive need to be able to share data with others, while still ensuring that sensitive information is protected.

This has been a top priority for members of the G-7, who recently concluded a two-day summit on the topic of cross-border data flow regulation. It was one of several recent meetings in the group’s ongoing effort to standardize law around the matter, which, while not revolutionary, marked an important step forward in developing a global framework for data sharing.

Defining Cross-border Data Flows

Before we get into the specifics of the G-7 summit, it’s important to establish what is meant by cross-border data flows. In short, the term refers to the electronic transmission of data across national borders. This can include everything from email and text messages to the complex data sets used by businesses and governments.

Cross-border data flows have become increasingly important in recent years as our world has become more connected. They provide a way for businesses to operate in multiple countries, for people to keep in touch with loved ones who live far away, and for governments to share information and resources.

However, cross-border data flows can also pose a risk to national security and public safety.

When data is transmitted across borders, it often goes through multiple jurisdictions and may be subject to different laws in each country. This can make it difficult to protect sensitive information, as there may be holes in the security net. In addition, cross-border data flows make it easier for criminals and terrorists to operate across borders. And finally, they can also be used to evade taxes or launder money.

This is why it’s so important for countries to strike a balance between encouraging data sharing and protecting sensitive information.

What Types of Data Are We Talking About?

It’s important to note that not all data is created equal. When we talk about cross-border data flows, we are usually referring to three different types of data: PII, PHI, ETC.

PII, or Personally Identifiable Information, is any data that can be used to identify an individual. This includes things like name, address, date of birth, Social Security number, and so on.

PHI, or Protected Health Information, is any data related to an individual’s health. This includes things like medical records, prescriptions, and insurance information.

ETC, or Encrypted Taxpayer Communications, is any data related to an individual’s taxes. This includes things like tax returns, W-2 forms, and 1099 forms.

All three of these types of data are considered sensitive and need to be protected.

What Types of Risks Are We Talking About?

As we mentioned earlier, cross-border data flows can pose several risks to national security and public safety. Here are a few of the most common:

Data breaches: When sensitive data is transmitted across borders, it increases the chances of a data breach. This is because there are more opportunities for hackers to intercept the data. In addition, cross-border data flows make it difficult to track down the source of a breach, as the data may have gone through multiple jurisdictions.

Identity theft: Cross-border data flows make it easier for criminals to steal people’s identities. This is because sensitive data, like Social Security numbers and date of birth, can be used to open new accounts or get new credit cards.

Fraud: Cross-border data flows can also be used to commit fraud. For example, criminals may use stolen credit card numbers to make purchases online. Or they may use fake identities to take out loans or open new bank accounts.

Money laundering: Cross-border data flows can be used to launder money. This is when criminals use legitimate businesses to move money around, so it’s difficult to track. For example, they may use a cross-border money transfer service to send money from one country to another.

The G-7 Meeting

Top privacy regulators from member nations of the G-7 met in Bonn, Germany last month to discuss the issue of cross-border data flows in detail. The main priority of the meeting was to find a way to standardize data privacy laws across borders, in order to make it easier for businesses to operate in multiple countries and to protect sensitive information.

The Group of Seven already has several legal deals in place addressing this exact issue; however, none specifically covers transfers between the United States and the European Union.

“The only piece of the puzzle that is missing is the trans-Atlantic agreement,” says Wojciech Wiewiórowski, the European Data Protection Supervisor who attended the two-day meeting.

While a final legal text of the new U.S.-European Union agreement hasn’t been published yet, negotiators said in March that they reached a preliminary deal. This is a big step forward, as it’s the first time that both sides have been able to agree on a framework for data sharing.

The new agreement will likely build on the EU-U.S. Privacy Shield, which was adopted in 2016 but invalidated by the EU’s top court in 2020.

In that case, challengers had successfully argued that American government surveillance posed a threat to Europeans’ privacy if their data was moved to the United States.

Dismantling the Privacy Shield “basically left data transfers in limbo” for all international companies, says Svetlana Stoilova, digital economy adviser at Business Europe.

Businesses have urged lawmakers from both member nations to speed up the process of finding a new replacement. Some companies have been using other legal mechanisms to transfer data, but they are seen as being more cumbersome.

However, both sides have said they are committed to finding a solution that protects people’s data while still allowing businesses to operate across borders.

Ultimately, the goal is to align the data privacy laws of the United States and the European Union, so that businesses can operate in multiple countries without having to worry about breaching data privacy laws. This would also make it easier for people to know their rights when it comes to their data being shared across borders.

Last month’s meeting saw several suggestions for making such a system work, including the following:

Applying data anonymization techniques: This would involve stripping transferred information of personally identifiable details. For example, a user’s name, address, and credit card number could be replaced with a unique identifier. This would make it more difficult for someone to identify an individual from the data. (Tokenizer+)
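To make this concrete, here’s a minimal Python sketch of the idea, using a salted one-way hash as the “unique identifier” – the salt and record are made up, this is not how Tokenizer+ works internally, and truly irreversible anonymization generally requires more than hashing (e.g. aggregation):

```python
import hashlib

SALT = b"example-salt"  # hypothetical; a real system keeps the salt secret

def anonymize(value: str) -> str:
    """Replace an identifying value with a one-way, salted identifier."""
    digest = hashlib.sha256(SALT + value.encode()).hexdigest()
    return "anon_" + digest[:12]

record = {"name": "Jane Doe", "country": "DE"}
record["name"] = anonymize(record["name"])
print(record)
```

The same input always maps to the same identifier, so analysts can still count and join records, but nothing in the transferred data directly names the individual.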

Pseudonymizing data: This would involve replacing personally identifiable information with a pseudonym. For example, a user’s name could be replaced with a randomly generated identifier. This would make it more difficult to identify an individual, but not impossible. (Tokenizer+)

Applying data redaction techniques: This would involve the redaction of data that can’t be shared in any safe fashion so that only appropriate users may access the data. (Redactor+)

Using encryption to protect information in transit: This would involve encrypting data so that it can’t be read by anyone who doesn’t have the key to decrypt it. This would make it more difficult for someone to intercept and read the data as it’s being transferred between countries.
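As a rough illustration of how the pseudonymization approach above might look in practice, here’s a minimal Python sketch. The field names, secret key, and token length are hypothetical; a production system would rely on a managed key store and a vetted tokenization service (such as a product like Tokenizer+) rather than this hand-rolled example:

```python
import hmac
import hashlib

# Hypothetical secret held by the data exporter; in practice this would
# live in a key-management system, never in source code.
SECRET_KEY = b"example-key-do-not-use"

def pseudonymize(value: str) -> str:
    """Replace a personally identifiable value with a keyed, repeatable token.

    The same input always yields the same token, so records stay linkable
    across systems, but the original value can't be recovered without the key.
    """
    digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

record = {"name": "Jane Doe", "email": "jane@example.com", "country": "DE"}

# Pseudonymize the directly identifying fields before a cross-border
# transfer; non-identifying fields (like country) can travel as-is.
safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "country": record["country"],
}
```

Because the tokens are keyed, only a party holding the secret can re-link them to real identities – which is exactly the property that distinguishes pseudonymization from full anonymization.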

Arguably the most important outcome of the meeting, however, was the renewed commitment from both sides to find a solution to this problem.

In summaries published after the meeting Thursday, regulators committed to collaborating on legal methods to move data and provide businesses with options for choosing cross-border transfer tools that fit their business needs. The document also stated that nations need legislation ensuring that personal data is only accessed as “essential” for national security purposes.

Cross-border transfers are a pressing issue both for businesses operating in multiple countries and for the people whose data moves between them, so standardized privacy rules would give everyone involved much-needed certainty.

It’s still early, but the meeting was a step in the right direction toward finding a solution that will work for everyone.

Conclusion

As the forum of seven of the world’s largest advanced economies, the G-7 has a responsibility to lead the way in developing policies that allow for cross-border data transfers while also protecting people’s privacy. Its actions will set a precedent for how other countries approach this issue, and time will tell whether it can strike a balance between these two competing interests. The rise of big data and the global interconnectedness of trade and business have necessitated new ways to facilitate cross-border data transfers. At the same time, data privacy concerns have grown, as has public awareness of the ways that companies collect and use personal data. These trends have created both opportunities and challenges in implementing cross-border data transfers.

As the world becomes more digitized, it’s important that we find a way to protect people’s data while also allowing businesses to operate across borders. This is no easy task, since data privacy laws vary from country to country, but it’s an essential one: cross-border data transfers are the lifeblood of modern business and trade. Regulated companies will increasingly turn to providers like TeraDact to protect their sensitive data. Our products Tokenizer+, Redactor+, and Secrets+ were developed to help protect people’s sensitive data while still letting businesses operate as efficiently as possible, wherever in the world that data needs to travel.

Data Protection Trends in Children’s Online Gaming

When we think about children’s data protection, our minds usually jump first to topics like social media. That makes sense – online social networks account for a large share of kids’ internet usage and therefore pose a proportionally high risk.

But what is often overlooked is the fact that many children are also spending their time playing online video games. A recent report found that 76% of kids younger than 18 in the United States play video games regularly.

This is a problem because, like social media, online gaming platforms collect a large amount of data from their users. This includes personal information like names, addresses, and birthdays, as well as more sensitive data like GPS location and biometric data. And, due to the nature of gaming, this data is often collected without the user’s knowledge or consent.

This raises a number of concerns about children’s privacy and data security, as well as the potential for misuse of this information. In this article, we’ll explore some of the key issues related to children’s online gaming and data protection, as well as what measures can be taken to mitigate these risks.

The Safety and Data Risks Faced by Children in Online Gaming

Children and youth are uniquely vulnerable to the dangers posed by the internet. They are still in the process of developing both physically and mentally, which can make them more susceptible to harm. This is especially true in the case of video games, where a slew of potential risks exist.

Addiction

Children’s data can be used to exploit their vulnerabilities and hook them into playing video games for long periods of time. This can lead to addiction, which in turn can have several negative consequences. These include social isolation, sleep deprivation, and even poor academic performance. In severe cases, it can lead to mental health problems.

Manipulation

One of the biggest dangers children face when gaming online is manipulation. Game developers and companies have a vested interest in keeping players engaged, and they often do this by using personal information to curate highly targeted in-game advertisements and content. This can be extremely persuasive, and children may be coerced into making social connections or purchases that they wouldn’t otherwise make.

Contact Risks

Another potential danger of online gaming is the possibility of contact risks. When players reveal their personal information, such as their email address or home address, they open themselves up to the possibility of being contacted by someone they don’t know. This can be especially dangerous for young children, who may not yet have the ability to distinguish between safe and unsafe people.

Gambling-Like Mechanisms

Many online games make use of gambling-like mechanisms, such as loot boxes, that can entice players to spend more money. These mechanisms are particularly risky for children, who may not have a full understanding of how they work or the potential financial consequences.

International Examples of Legislative Age Assurance Requirements

As experts have sounded the alarm over children’s data security in the scope of online play, governments have responded through the proposal and institution of several regulatory frameworks aimed at addressing the problem. A number of noteworthy pieces of legislation have come into force around the world over the past few years, and while each differs slightly in content, they all have one common goal: doubling down on companies’ responsibility to protect their youngest users.

Here are just a few examples of prominent regulatory frameworks to have been rolled out in major countries and regions:

U.K. Information Commissioner’s Office Age-Appropriate Design Code

The age-appropriate design code, informally known as the Children’s Code, came into force in the UK in September 2020 in an effort to codify the rules and enforcement procedures surrounding online services that process children’s data. It applies to any company that offers online services – such as social media platforms, apps, websites, or gaming services – that are likely to be accessed by children under the age of 18.

The AADC outlines standards on 15 different topics:

●          Best interests of the child

●          Data protection impact assessments (“DPIA”)

●          Age-appropriate application

●          Transparency

●          Detrimental use of data

●          Policies and community standards

●          Default settings

●          Data minimization

●          Data sharing

●          Geolocation

●          Parental controls

●          Profiling

●          Nudge techniques

●          Connected toys and devices

●          Online tools

Each of these covers a unique facet of online service design, but together they create a robust layer of protection for minors. Companies are expected to take a risk-based approach to compliance, meaning the measures they implement should be proportionate to the risks their products pose.

While failure to comply with the Age-Appropriate Design Code does not itself make a person or business liable to legal proceedings, it does expose them to the risk of being prosecuted for violations of the UK GDPR and/or PECR.

OECD Recommendation on Children in the Digital Environment

Adopted in 2021, the OECD Recommendation on Children in the Digital Environment is a formal set of guidelines aimed at promoting children’s data safety online. It sits in tandem with the OECD’s Digital Service Provider Guidelines to outline the organization’s position on data governance for digital economy actors.

The Recommendation is unique in that it is non-binding, meaning that countries are not held to its standards in a legal sense. However, it does provide a sort of international benchmark for how different nations might approach regulation in this area.

The main tenet of the OECD’s recommendation is to create online environments in which online providers take the “steps necessary to prevent children from accessing services and content that should not be accessible to them, and that could be detrimental to their health and well-being or undermine any of their rights.” 

EU Digital Services Act

The EU Digital Services Act is a newer piece of legislation that was just agreed to by EU members in April 2022. It’s set to be the Union’s main ‘rulebook’ when it comes to protecting citizens’ online privacy both now and in the future as big tech continues to redefine the way we interact with the internet.

Under the DSA, online service providers will be held to higher standards when it comes to the way they process the personal information of both child and adult EU citizens. The Act includes several provisions specifically aimed at protecting minors, including a ban on advertising aimed at children and the algorithmic promotion of content that could potentially cause them harm such as violence or self-harm.

Once formally adopted by the EU co-legislators, the Digital Services Act will apply fifteen months after its entry into force or from January 1, 2024, whichever is later. It’s being lauded as a major first step in the effort to protect children’s (and all users’) privacy online and has set the standard for future frameworks of its kind.

UK Online Safety Bill

While still before the UK’s House of Commons, the Online Safety Bill is another potential change to come in the data privacy landscape. It addresses the rights of both adults and children when it comes to their data online, with a special focus on the latter.

If passed, the bill would impose a safety duty upon organizations that process minors’ data to implement proportionate measures to mitigate risks to their online safety. While the legislation has had a few bumps in the road since its original proposal, new UK Prime Minister Liz Truss says she plans to adapt and move forward with it in the coming months.

California Age-Appropriate Design Code Act

California is no stranger to data privacy laws. Home to one of the most comprehensive sets of state regulations in North America, the CCPA, the state has clearly set its priorities on protecting citizens’ rights and personal information online. In our “California Consumer Privacy Act (CCPA) Fines” blog post we discuss which companies the act applies to, the basics of the CCPA, the penalties for violating the law, and the proposed changes that could affect it in the future.

The state’s government has just taken another step in that direction with the Age-Appropriate Design Code Act, which unanimously passed a Senate vote on August 29, 2022.

If enacted by Governor Newsom, it will require businesses to take extra measures to ensure their online platforms are safe for young users. This entails regulating things like the use of algorithms and targeted ads, as well as considering how product design may pose risks to minors.

An August 2022 article on the legislation in The New York Times stipulated that when signed, the CAADCA “could herald a shift in the way lawmakers regulate the tech industry” on a broad level in the United States. It pointed to the fact that both regional and national laws in the country have a proven ability to affect the way tech companies operate across the board, and a change in California could very well mean a change for the rest of the US.

Emergent Solutions

Recent regulatory frameworks in data privacy have marked a massive shift in the way companies are required to handle and protect the personal information of their users, with a specific focus on children. In response, many online platforms and service providers have made changes to their terms of service and product design in order to adhere to these new standards.

Some of the biggest emerging solutions include:

Privacy by Design

Privacy by design is an engineering methodology that refers to the incorporation of data privacy into the design of products, services, and systems. The goal is to ensure that privacy is considered from the very beginning of the development process, rather than being an afterthought.

There are seven principles of privacy by design:

1.         Action that is proactive not reactive, preventive not remedial

2.         Privacy as a default setting and assumption

3.         Privacy embedded into design

4.         Full functionality – positive-sum, not zero-sum

5.         End-to-end security and full lifecycle protection

6.         Visibility and transparency

7.         Respect for user privacy

The privacy by design methodology was first introduced in the 1990s by Ontario Information and Privacy Commissioner Ann Cavoukian. It’s considered one of the most important data privacy frameworks in the world, and its principles are being promoted as a basis upon which online video games and other digital platforms can better protect children’s privacy.

Risk-Based Treatment

As has been seen in recent years, data protection legislation is moving away from a one-size-fits-all approach and towards a more risk-based treatment of personal information. This refers to the idea that data controllers should consider the risks posed by their processing activities when determining what measures to put in place to protect the rights and freedoms of data subjects. For children, this means taking into account the fact that they are a vulnerable population and tailoring data protection measures accordingly.

Responsible Governance

Responsible governance refers to the ethical and transparent management of data by organizations.  It’s based on the principle that data should only be collected, used, and shared in a way that is transparent to the individual and serves their best interests.

There are four main pillars of responsible governance:

Transparency: individuals should be aware of how their data is being used and why

Choice: individuals should have the ability to choose whether or not to share their data

Responsibility: organizations should be held accountable for their use of data

Security: data should be protected against unauthorized access, use, or disclosure

The concept of responsible governance is gaining traction as a way to protect children’s privacy online. It’s being promoted as a means of ensuring that data collected from children is only used in ways that are beneficial to them, and not for commercial or other ulterior purposes.

Parental Controls

In the face of ever-growing concerns about children’s privacy online, many parents are taking matters into their own hands by implementing parental controls on their devices and home internet networks. There are several different ways to go about this, but some of the most popular methods include setting up child-friendly browsers and content filters, as well as using apps that track screen time and limit app usage. While parental controls are not a perfect solution, they can be a helpful way to give parents some peace of mind when it comes to their kids’ online activity.

Video games can help children develop their creativity, social skills, and knowledge. However, as digital technologies become more sophisticated and firmly entrenched in our daily lives, it is increasingly important that we begin to structure them in a way that considers and respects children’s privacy rights. By understanding the trends in data protection, and by implementing responsible governance practices, we can help create a safer and more secure online environment for children to play and learn in.

California Consumer Privacy Act (CCPA) Fines

Any company, organization, or marketer that does business online knows (or should know) about the California Consumer Privacy Act (CCPA). But with all the talk about the law, it can be hard to understand what it actually is and how it affects businesses. In this article, we’ll take a look at the basics of the CCPA, the penalties for violating the law, and the proposed changes that could affect the law in the future.

What Is the California Consumer Privacy Act?

The California Consumer Privacy Act (CCPA) is a set of regulatory requirements, established by the California state government, imposed upon businesses that collect consumers’ personal data. It is among the strongest and most stringent privacy laws in the United States and has a far-reaching impact in terms of both the businesses to which it applies and the rights it affords consumers.

The CCPA was passed in response to the numerous high-profile data breaches that have occurred in recent years, as well as the growing concern over the use of personal data by businesses for marketing and other purposes. The law is designed to give consumers more control over their personal data, and to hold businesses accountable for the way they collect, use, and protect that data.

The Provisions of the California Consumer Privacy Act

The California Consumer Privacy Act covers four principal provisions: the right to know, the right to opt-out, the right to delete, and the right to equal service. We’ll briefly explain each below.

1. The Right to Know

Under the CCPA, consumers have the right to know the personal information businesses collect and how they use it. They’re entitled to the direct disclosure of what categories of data this information falls under and are also given the ability to request further, more specific details about its use as needed. This includes inquiries about what personal information a business has sold, what types of third parties it has sold the information to, and where it got that data in the first place.

(Cal. Civ. Code § 1798.100, § 1798.110, § 1798.115)

2. The Right to Opt-Out

The California Consumer Privacy Act mandates that businesses provide individuals with an easy and direct way to opt out of the sale of their personal information. The most common way this is done is through a “Do Not Sell My Personal Information” link on a website homepage or a cookie preference banner with a similar toggle.

It’s also worth noting that businesses may not sell the personal information of consumers they know to be under 16 years old without opt-in consent – from the consumer themselves if they are between 13 and 16, or from a parent or guardian if they are younger than 13.

(Cal. Civ. Code § 1798.120)

3. The Right to Delete

Individuals protected by the California Consumer Privacy Act have the right to request the deletion of their personal information from the entities that collect it. Businesses that receive these requests are obliged to fulfill them upon receipt unless the information they have collected is necessary for things like the completion of a related transaction or contract.

(Cal. Civ. Code § 1798.105)

4. The Right to Receive Equal Service

The CCPA is very clear about its intolerance for discrimination against consumers who exercise their rights. The law directly prohibits businesses and entities from treating individuals unfairly because they’ve requested to know what personal information is being collected about them, or because they’ve opted out of the sale of their information. That includes refusing service, providing a lower quality of service, or charging different prices or rates for services.

(Cal. Civ. Code § 1798.125)

Defining ‘Personal Information’

The CCPA’s definition of what qualifies as ‘personal information’ is important to fully understand the scope of the law and how it applies.

As directly written, it considers ‘personal information’ to be any “information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household.” (Cal. Civ. Code § 1798.140(o)(1)).

Examples of what type of data this can cover include:

●         Social Security Numbers

●         Purchase histories

●         Drivers’ license numbers

●         Internet Protocol addresses

The information listed above falls into the personally identifiable information (PII) category. To learn more about PII and how legislation is trying to protect it, view our previous posts: “PIPL: What You Need to Know About Changing Cybersecurity in China”, and “A Guide to the GDPR, Europe’s Stringent Data Protection Law”. Protecting PII is our focus here at TeraDact.

It’s worth noting that while technically meeting the definition, some types of information are not considered to meet the threshold of ‘personal’ and are not subject to CCPA rules. Publicly available information, for example – like someone’s name printed in a newspaper – is not included. Nor is de-identified or aggregate data, which are both defined and further explained in the CCPA itself.

Who Does the California Consumer Privacy Act Apply To?

So, who’s subject to all of these rules and provisions? The CCPA was specifically designed to target businesses but can still apply to any organization or person that operates in California and meets at least one of the following criteria.

Annual Revenues Of $25 Million Or Higher

This part is pretty self-explanatory. Businesses making more than $25 million in annual revenue are generally required to comply with the law.

Commercially Buying, Sharing, Receiving, Or Selling the Data of Over 50,000 Consumers Annually

Another clear-cut rule. If your business handles the personal information of more than 50,000 Californian consumers, residents, or households on an annual basis, you’ll have to comply with the law.

It’s important to note that this rule applies even if you never share or sell the information – buying or receiving the data for commercial purposes is enough to put you over the threshold.

Deriving Over 50 Percent of Annual Revenues from The Sale of Personal Information

This is another fairly straightforward rule, but one that’s worth unpacking a bit. The ‘sale’ of personal information under the CCPA can be broadly defined as anything that would enable access to the data – including exchanging, renting, releasing, disclosing, or otherwise making it available.

So, if more than 50 percent of your business’s annual revenue comes from activities like this, you’ll be required to comply with the law.
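Taken together, the three criteria amount to a simple decision rule. Here’s an illustrative Python sketch of that rule – the thresholds are those described above from the original statute, and actual legal applicability of course depends on far more nuance than three numbers:

```python
def ccpa_applies(annual_revenue_usd: float,
                 consumers_handled_per_year: int,
                 revenue_share_from_data_sales: float) -> bool:
    """Return True if a business meets at least one of the CCPA's
    applicability thresholds: $25M+ annual revenue, data on 50,000+
    consumers per year, or 50%+ of revenue from selling personal data."""
    return (
        annual_revenue_usd >= 25_000_000
        or consumers_handled_per_year >= 50_000
        or revenue_share_from_data_sales > 0.50
    )

# A hypothetical small business that sells little data but handles
# many consumer records still falls under the law:
applies = ccpa_applies(annual_revenue_usd=5_000_000,
                       consumers_handled_per_year=60_000,
                       revenue_share_from_data_sales=0.10)
# → True: the consumer-count threshold alone triggers coverage
```

Note how the criteria are disjunctive: meeting any single one is enough, which is why even modest businesses with large data footprints are covered.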

What Are the Penalties for Non-Compliance with the California Consumer Privacy Act?

Violations of the California Consumer Privacy Act don’t go unpunished; the law outlines several penalties for non-compliance with its regulations. And because it applies to businesses, service providers, and individuals alike, there’s a range of potential punishments that can be levied.

Civil Penalties

The most common penalties for violating the CCPA are civil penalties. Civil penalties are a type of financial remedy government entities impose for wrongdoing. In the case of the CCPA, civil penalties are assessed and enforced by the state attorney general’s office, which has the authority to investigate potential violations and file lawsuits on behalf of Californian consumers.

The California Attorney General can pursue penalties from organizations that violate any part of the California Consumer Privacy Act.

Just some examples of what these violations can look like include:

●         Failing to respond to consumers’ requests for the deletion of their personal information

●         Failing to have or uphold CCPA-compliant privacy policies

●         Selling consumers’ personal data without offering them a means to opt-out

●         Discriminating against individuals who exercise their rights under the CCPA

●         Failing to give adequate notice of the collection of personal information

Service providers who retain, use, or disclose personal data for purposes outside of their contracts with businesses may also be liable for penalty under the CCPA.

Individuals can expose themselves to penalties as well, by unlawfully breaching rules on the onward transfer of personal data.

The costs of violating the CCPA are severe, with maximum fines of up to $2,500 per violation or $7,500 per intentional violation. And because the law applies to each consumer whose data is mishandled, a single incident could result in multiple penalties.
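To see how quickly these per-violation figures compound, here’s a back-of-the-envelope calculation for a hypothetical incident affecting 1,000 consumers (the incident size is invented purely for illustration):

```python
MAX_FINE_PER_VIOLATION = 2_500    # maximum fine per unintentional violation, USD
MAX_FINE_PER_INTENTIONAL = 7_500  # maximum fine per intentional violation, USD

affected_consumers = 1_000  # hypothetical incident size

# Because each affected consumer can count as a separate violation,
# worst-case exposure scales linearly with the number of people involved.
worst_case_unintentional = affected_consumers * MAX_FINE_PER_VIOLATION   # $2,500,000
worst_case_intentional = affected_consumers * MAX_FINE_PER_INTENTIONAL   # $7,500,000
```

Even at the unintentional rate, a single mishandled dataset can translate into millions of dollars of potential liability.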

Waiting Period

It’s important to note that businesses that violate the California Consumer Privacy Act get a grace period before they can be fined. The law stipulates that businesses must be given 30 days’ notice to cure any alleged violations before penalties can be imposed.

If the business can cure the noticed violation(s) and provide an express written statement indicating so and that no further violations shall occur, then no action may be brought.

Enforcement by the California Attorney General

The CCPA gives the state attorney general’s office broad enforcement powers, including the authority to investigate potential violations and file lawsuits on behalf of Californian consumers.

In addition to seeking civil penalties, the attorney general can also seek injunctions or temporary restraining orders to stop businesses from violating the law.

Private Right of Action

In addition to the civil penalty route, the CCPA also gives consumers the right to take legal action on their own behalf in the case of a violation. Private action is a term that refers to the ability of an individual to bring a lawsuit against another party without the involvement of the government.

The CCPA gives Californian consumers the right to sue businesses, service providers, or any person acting on behalf of a business or service provider for data breaches that result from the unauthorized access, theft, or disclosure of their personal information.

Consumers can sue for damages even if they haven’t suffered any financial loss because of the breach, and they can also seek punitive damages if the court finds that the business or service provider acted recklessly or intentionally violated the law.

The financial repercussions of these cases are somewhat less severe, with a range of $100 to $750 that can be sought per consumer per incident. Actual damages may also be awarded, but only if the consumer can prove that they’ve suffered a financial loss because of the breach.

(Cal. Civ. Code § 1798.150)

Unlike civil penalties, private action lawsuits do not require consumers to provide notice to businesses of their intention to sue.

Proposed Amendments to the CCPA

Like any major piece of legislation, the California Consumer Privacy Act is poised to change with time. This is especially true given the law’s subject matter; because technology is always changing, the ways in which personal data is collected and used will likely continue to evolve.

Considering this, lawmakers have already proposed several amendments to the CCPA. These amendments range from technical corrections to substantive changes that would modify the scope or enforcement of the law.

Some potential prominent amendments to come include:

A Shift Away from Dark Patterns

Dark Patterns are a type of user interface design meant to trick people into doing things they might not want to do, such as signing up for a service they don’t need or providing personal information they might not want to share.

One recently proposed amendment to the CCPA would make it illegal for businesses to use dark patterns when collecting personal information from consumers. This would help to ensure that consumers are only providing their personal data willingly and with full knowledge of how it will be used.

The Right to Correct Personal Information

Newly proposed amendments suggest adding a ‘right to correct’ inaccurate personal information to the CCPA. This new section would give consumers the right to correct any inaccurate personal data businesses collect, as well as outline documentation requirements, methods for correction, disclosure requirements for denial, and alternative solutions.

While relatively new to the CCPA, this concept has been around for some time on an international level and is already familiar to many businesses that are subject to the GDPR. For local, California businesses though, this proposed amendment would simply be another obligation to add to their CCPA compliance checklist.

Privacy Policy Requirements

In addition to the information already required to be disclosed in a privacy policy under the CCPA, proposed amendments would add several new specific elements that businesses would need to include.

These are:

●         The date the privacy policy was last updated

●         The length of time the business plans to retain each category of personal information or, if that’s not possible, the criteria it uses to determine the retention period

●         Disclosure of whether the business allows third parties to control their collection of personal data, and if so, the names and business practices of these parties

●         A description of consumers’ new rights as described in the amendment

●         Clear directions for how consumers can exercise their newly amended rights

●         A description of how the business will process opt-out requests

Organizations that process the personal data of 10 million consumers or more will also be required to include a link to certain reporting requirements in their privacy policy under this new amendment.

The CCPA’s reach and impact on business is significant, there’s no doubt about that. The law gives Californian consumers a number of rights with respect to their personal data, and businesses that mishandle that data can face severe penalties. By educating yourself on the law and taking steps to ensure that your business complies, you can help avoid potential problems down the road.