This quarter’s compliance article roundup underscores the expanding impact of AI, data privacy, and cybersecurity regulations across federal, state, and international landscapes. With the DOJ enforcing new data rules, the FTC handing down penalties, and states continuing to shape privacy and AI laws, organizations must remain vigilant. Whether addressing deepfake threats, managing sensitive data, or navigating global privacy frameworks, employers and HR professionals must align hiring practices and data use with evolving legal expectations.
Key Takeaways
- The DOJ’s Data Security Program and Bulk Sensitive Data Rule are now being enforced, with reporting requirements taking effect in 2026.
- Malicious actors are using multi-modal and agentic AI, including deepfakes, to bypass authentication, identity verification, and other cyber defenses.
- The FTC and state attorneys general are penalizing data security failures, most notably in connection with the Illuminate Education student data breach.
- States continue to shape AI and privacy law, from Illinois’s AI employment statutes to Indiana’s ICDPA and California’s amended CCPA, while a new Executive Order targets state AI regulations that conflict with federal policy.
- The GDPR continues to set global privacy expectations for U.S. companies that market to or monitor individuals in the EU.
The digital transformation has led to significant advancements in authentication and identity verification technologies and other cyber defenses.
From biometrics to multi-factor authentication (MFA) to Artificial Intelligence (AI)-enhanced detection and response tools, these systems are the first critical line of defense against unauthorized access in sectors such as finance, healthcare, manufacturing, and government. However, the rapid development of multi-modal AI and agentic AI has created a new challenge, one that may compromise the very systems designed to protect us. By exploiting multi-modal AI, which integrates multiple forms of data (e.g., voice, video, text), and agentic AI, which automates decision-making with little or no human intervention, malicious actors are increasingly capable of bypassing authentication, identity verification, and other defenses, posing a new level of cybersecurity threat. The rapid deployment of AI across a wide variety of commercial products, platforms, and workflows has dramatically expanded the potential attack surface.
Indeed, on November 13, 2025, Anthropic reported that its AI-powered Claude Code tool was leveraged in a sophisticated, highly automated attack targeting large technology companies, financial institutions, manufacturers, and government agencies: “We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention.” Similarly, researchers recently reported discovering a strain of ransomware that used large language models to carry out attacks autonomously by generating malicious code in real time. We have previously highlighted in our blogs the escalating threats to employees from deepfake technologies and AI-augmented phishing attacks.
Multi-modal AI refers to systems that can process and combine information from diverse sources to understand and respond to inputs in ways that are more holistic and human-like. For example, rather than relying on just one modality, such as voice recognition or facial recognition, multi-modal systems can integrate text, video, and other sensory data for improved accuracy and flexibility. While these advancements offer immense potential in fields like healthcare and customer service, they also raise serious concerns when leveraged maliciously.
As more organizations implement biometric authentication, such as facial recognition and voice biometrics, multi-modal AI offers attackers a new arsenal for bypassing these security measures. By synthesizing data from multiple sources—such as voice recordings, photos, and even social media interactions—an attacker can create a comprehensive digital identity profile that closely mirrors the real thing. This new breed of attack can go beyond traditional hacking methods, using AI to trick systems that once seemed impenetrable.
Agentic AI generally refers to artificial intelligence systems that are capable of operating and adapting autonomously, with little or no human oversight. Agentic AI may be integrated into systems through Application Programming Interfaces (APIs). Gartner reports that “[b]y 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously.”
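To illustrate why agentic AI expands the attack surface, the following is a minimal, hypothetical Python sketch of an agentic loop in which a model’s output selects and executes a tool with no human review. The function and tool names (plan_next_action, send_email, query_database) are illustrative only and do not reflect any vendor’s actual API.

```python
# Hypothetical sketch of an agentic loop: a model's output selects and runs a
# "tool" with no human sign-off. All names are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class Action:
    tool: str
    argument: str


def plan_next_action(goal: str) -> Action:
    # Stand-in for a call to a hosted language model; in a real agent this
    # response is generated by the model rather than hard-coded.
    return Action(tool="send_email", argument=f"Status update on: {goal}")


TOOLS = {
    "send_email": lambda arg: print(f"[tool] email sent: {arg}"),
    "query_database": lambda arg: print(f"[tool] query run: {arg}"),
}


def run_agent(goal: str) -> None:
    action = plan_next_action(goal)
    # Every tool the agent can reach autonomously is potential attack surface:
    # a manipulated or poisoned model response executes without human review.
    TOOLS[action.tool](action.argument)


if __name__ == "__main__":
    run_agent("quarterly compliance summary")
```

The point of the sketch is that the decision and the execution happen in the same automated loop; any control a defender wants (approval steps, allow-lists, logging) has to be designed in deliberately.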
One immediate concern is the rise of AI-driven deepfakes. Deepfakes—hyper-realistic media created through AI that can mimic someone's appearance, voice, and behavior—have already made waves in the world of media and politics. However, these technologies are increasingly being adapted for malicious purposes, particularly in the realm of identity fraud.
An attacker could use multi-modal AI to create a convincing deepfake that mimics not just one, but several facets of an individual’s identity. For instance, by combining a victim’s facial data and voice samples with text-based information (like emails or social media posts), an AI could generate an extremely accurate imitation of the individual. This synthetic identity could then be used to bypass security systems, such as voice-activated banking systems, facial recognition used for mobile authentication, or even online verification processes employed by financial institutions.
As noted by the Center for Cybersecurity Policy and Law, deepfakes and other AI-powered impersonation techniques are particularly dangerous in financial services. Systems that rely on voice recognition or facial biometrics are becoming increasingly vulnerable to attacks that manipulate the very data they rely on for authentication. As acknowledged by the U.S. Treasury, AI has the capability to mimic biometrics (such as photos or video of a customer, or the customer’s voice). As discussed further below, this capability is a growing concern, especially in the context of digital identities in the financial sector, where the consequences of breaches could be severe.
Click Here for the Original Article
Over the past fifteen years, artificial intelligence (AI) has become increasingly embedded in the legal field, often in ways that were not widely anticipated. While it was initially expected that legal professionals would need to familiarize themselves with AI tools to enhance efficiency in practice, few foresaw the necessity for attorneys to possess a deep understanding of AI’s technical functions in order to competently advise clients on the legal ramifications of its use. Although the application of traditional AI to tasks such as resolving analytical problems or sorting large datasets has become routine, the evolution of the technology now enables generative AI not merely to process information, but to autonomously make decisions based on the information presented. Generative AI is capable of creating new content, such as images or text, by learning from the data supplied to it. The most widely used examples of generative AI integrated into professionals’ everyday work are Google’s Gemini and Microsoft’s Copilot. Every time a user drafts an email or a Word document, or simply completes a search on Google, generative AI may be using that information to provide suggestions and generate responses. Other examples of generative AI include ChatGPT, Meta AI, Claude, and many more that are constantly under development.
In recognition of these risks, the Illinois Supreme Court has explicitly cautioned against the uncritical adoption of generative AI in legal proceedings and emphasized the necessity of protecting due process, equal protection and access to justice.[ii] The Illinois Supreme Court warned that AI-generated content, lacking evidentiary foundation or accuracy, may entrench bias, prejudice litigants and obscure truth-finding and decision-making.[iii] While the Illinois Supreme Court’s warning of the unintended consequences of AI was limited to those in the legal profession, the warning is one that should be considered by all users of generative AI.
In particular, there has been an increase in the use of AI in the employment sector, which has resulted in faster and more efficient decision-making for many employers, particularly in the areas of recruitment and staff management. Employers are using AI to analyze candidate qualifications and employee performance. These practices are frequently justified on the basis that algorithmic decision-making can reduce human bias and produce objective outcomes. At first glance, this practice seems appropriate and can result in quick and efficient decisions, improving the flow of business.
Closer scrutiny of these practices by legislatures and legal professionals, however, has revealed complex legal and ethical concerns. Critics of generative AI have recognized that the manner in which information is deciphered and sorted may border on the improper, or cross the line into the illegal, by replicating or amplifying existing biases. These same critics began to ask what guidelines and information AI was using to make its decisions. How was the technology making its critical decisions? Was it possible the technology was biased and producing biased results?
At the legislative level, Illinois has recognized the potential for misuse of generative AI and its possible consequences, and has responded by regulating its use. In the employment context, key statutes that regulate the use of AI include the Artificial Intelligence Video Interview Act (AVIA) and the Illinois Human Rights Act (IHRA). These legislative enactments focus on transparency of use and on prohibiting discrimination through AI. Although the Illinois General Assembly has introduced several bills aimed at establishing broader regulatory oversight of AI, none has yet been enacted into law.
Click Here for the Original Article
With the U.S. Department of Justice’s Data Security Program (DSP) now in full effect, companies that handle sensitive personal data, operate across borders, or rely on global vendor ecosystems face an increasingly complex compliance environment. The DSP restricts certain data transactions involving individuals and countries of concern, imposes new contractual compliance obligations, and signals a clear national-security approach to data governance.
The DSP marks a new era in the federal government’s regulation of data transactions, applying concepts traditionally used in U.S. export control law to bulk or sensitive data exchanges. The DSP is designed to address what the DOJ has described as “the extraordinary national security threat” posed by U.S. adversaries acquiring Americans’ most sensitive data through commercial means to “commit espionage and economic espionage, conduct surveillance and counterintelligence activities, develop AI and military capabilities, and otherwise undermine our national security.”
Click Here for the Original Article
On December 11, 2025, President Trump signed an Executive Order on “Ensuring A National Policy Framework for Artificial Intelligence” (the “Order”). This follows Executive Order 14179 issued on January 23, 2025, under the Trump Administration on the topic of AI leadership (“Removing Barriers to American Leadership in Artificial Intelligence”). Key provisions in the Order are highlighted below.
The Order directs the Attorney General to establish an AI Litigation Task Force (the “Task Force”) within 30 days. The Task Force will be dedicated exclusively to challenging state-level AI regulations that conflict with the federal policy set forth in Section 2 of the Order, which outlines a commitment to maintaining and strengthening global leadership in AI through a “minimally burdensome national policy framework for AI.”
The Order further instructs the Secretary of Commerce to publish, within 90 days, an evaluation of current state AI laws. This evaluation will include any state laws that the Secretary finds conflict with the policy goals outlined in Section 2 of the Order and identify laws that will be referred to the Task Force. The evaluation may also highlight state laws that align with the Order’s policy objectives.
Click Here for the Original Article
On December 10, 2025, the U.S. Department of Justice (DOJ) issued a final rule removing liability for disparate impact discrimination under Title VI of the Civil Rights Act of 1964. This rule applies to recipients of federal funding, including state and local government agencies, nonprofits, schools, and government contractors.
Title VI of the Civil Rights Act of 1964 prohibits discrimination based on race, color, or national origin in any program or activity that receives federal funding. With respect to employment, Title VI applies only if employment is a primary objective of the federal investment and the alleged discriminatory employment practices negatively affect the delivery of services to the program’s ultimate beneficiaries, such as students, patients, or those served by government agencies.
Disparate impact generally refers to when a neutral policy or practice disproportionately and negatively affects a legally protected group. Disparate impact does not require plaintiffs to prove an intent to discriminate existed. The original rule and regulations allowed agencies to consider federal-funding recipients’ policies and practices that had an alleged discriminatory effect.
Click Here for the Original Article
As Epstein Becker Green previously reported, the National Security Division of the U.S. Department of Justice (“DOJ”) issued a final rule, effective on April 8, 2025, called the Bulk Sensitive Data Rule (“BSD Rule”) (codified at 28 C.F.R. Part 202).
The BSD Rule prohibits or restricts U.S. persons and companies from engaging in certain transactions involving certain categories of government-related data and sensitive personal data with covered persons or with six countries of concern: China (including Hong Kong and Macau), Russia, Iran, North Korea, Cuba, and Venezuela.
This final rule implemented the Biden administration’s Executive Order 14117, dated February 28, 2024 – entitled “Preventing Access to Americans’ Bulk Sensitive Personal Data and United States Government-Related Data by Countries of Concern.” In addition to safeguarding sensitive data, the BSD Rule allows for the DOJ to investigate non-compliance with its requirements and enforce civil and criminal penalties when non-compliance is discovered. Implementation of the BSD Rule is a result of a heightened interest in ensuring the security of data, especially in cross-border data sharing arrangements.
Even though the BSD Rule took effect in April, the DOJ implemented a 90-day safe harbor period during which time companies were encouraged to become compliant with the rule before the July 2025 enforcement date. Now, almost six months since the DOJ began enforcement, and as the BSD Rule’s reporting requirements take effect in 2026, it is vital that companies assess their business relationships and data to ensure compliance with this complex rule that imposes new requirements on both U.S. organizations and persons.
For instance, impacted relationships that companies should be aware of may include those held with:
Even business relationships and transactions with foreign countries not among the six countries of concern should be evaluated to assess whether the non-country of concern recipients still receive bulk U.S. sensitive personal data. If so, companies must ensure that the appropriate downstream data protection language is included in all relevant contracts and appropriate diligence of those transactions is routinely performed.
Click Here for the Original Article
On December 1, 2025, the Federal Trade Commission (FTC) approved a proposed complaint and order against Illuminate Education, Inc., an education technology provider, requiring it “to implement a data security program and delete unnecessary data to settle allegations that the company’s data security failures led to a major data breach, which allowed hackers to access the personal data of more than 10 million students.”
The FTC alleges that Illuminate “failed to deploy reasonable security measures to protect student data stored in cloud-based databases. These failures led to a major data breach.” According to the complaint, in late December 2021, a hacker used the credentials of a former employee to access Illuminate’s databases stored in the cloud. The threat actor accessed information including students’ email addresses, mailing addresses, dates of birth, student records, and health information.
The FTC further alleges that Illuminate failed to notify school districts in a timely manner, as “it waited nearly two years to notify some school districts, comprising more than 380,000 students, about the data breach.”
Click Here for the Original Article

Indiana’s comprehensive consumer privacy law, the Indiana Consumer Data Protection Act (“ICDPA”), is set to take effect on January 1, 2026. Although the ICDPA shares many features with other comprehensive state privacy laws, several aspects set it apart as a more business-friendly law, including:
Click Here for the Original Article
The California Consumer Privacy Act (CCPA), as amended and effective January 1, 2026, brings the most detailed and sweeping changes since the law’s introduction. If you do business in California or handle Californians’ personal information, here’s what your company must know, and do, to avoid compliance risks.
The Delete Act requires data brokers to register with the California Privacy Protection Agency (CPPA) annually in January and pay a fee that funds the Data Broker Registry and the Delete Request and Opt-Out Platform (“DROP”). DROP, launching in 2026, will allow consumers to ask all registered data brokers to delete their personal data with a single request.
The updated regulations demand detailed transparency:
Click Here for the Original Article
On November 6, 2025, Connecticut Attorney General William Tong, along with California Attorney General Rob Bonta and New York Attorney General Letitia James, announced a significant settlement stemming from the enforcement of Connecticut’s Student Data Privacy Law. This case marked the first enforcement action since the law's enactment and involved Illuminate Education, Inc. ("Illuminate"), an educational technology provider whose 2022 data breach exposed sensitive information belonging to millions of students.
In December 2021, hackers gained access to Illuminate’s systems using credentials from a former employee. The hackers downloaded unencrypted database files containing sensitive information such as student names, birth dates, IDs, and demographic details. The number of students affected in each state was as follows:
Illuminate will pay a total of $5.1 million in penalties, distributed as follows:
In addition to the monetary penalties above, the settlement requires Illuminate to implement comprehensive security measures, including:
Click Here for the Original Article
Earlier this week, the District Court for the Southern District of New York approved a massive $900,000 class settlement in Lee v. Springer Nature America, Inc. No. 24-cv-4493, 2025 WL 3523134.
This case concerns allegations that the Defendant’s website https://scientificamerican.com (the “Website”), disclosed subscribers’ Facebook ID, the titles of prerecorded audiovisual materials they accessed on the Website, and the URLs for those materials to Meta without consent, in violation of the Video Privacy Protection Act of 1988 (“VPPA”), 18 U.S.C. § 2710. In March 2025, the Court denied the defendant’s motion to dismiss, finding that Plaintiff Lee had adequately alleged standing and a viable VPPA claim.
Following that ruling, the parties attended an in-person, full-day mediation before retired Judge Diane Welsh, after exchanging informal discovery relevant to class certification and the merits. The mediation resulted in a Settlement Agreement under which the Defendant agreed to pay $900,000 into a settlement fund to cover class claims, administration costs, attorneys’ fees and costs, and a service award to Plaintiff. The Defendant also agreed to suspend its use of the Meta Pixel on portions of the Website relevant to VPPA compliance, i.e., webpages that both include video content and have a URL that identifies the video content viewed. However, the Defendant preserved its ability to obtain VPPA-compliant consent or to use the Meta Pixel where the disclosure of information does not identify specific video materials that the user has requested or obtained.
Lee moved for preliminary approval of the Settlement Agreement on June 27, 2025. The Court initially requested clarification regarding a provision that would have required all individuals objecting to the settlement to state whether they had ever asked for or received any payment in exchange for dismissal of an objection or any related appeal without modification to the settlement. After the parties removed the provision, the Court granted preliminary approval on July 10, 2025.
Click Here for the Original Article

In today’s business environment, data moves across borders faster than most organizations can track it. Whether a company operates in Seattle, Chicago, or Berlin, its privacy obligations rarely stop at the water’s edge. That reality is largely due to the European Union’s General Data Protection Regulation (GDPR), which, even seven years after going into effect, still shapes global privacy standards in profound ways.
At its core, the GDPR regulates how organizations collect, use, store, and share personal data about individuals in the European Union. Personal data under the GDPR includes any information that can identify a person: names, emails, IP addresses, device IDs, location coordinates, biometric information, and more.
Julian Schneider of the University of California, San Francisco Law School highlights the philosophical roots of the law, explaining that “the GDPR is built on the idea that privacy is a fundamental human right, not just a business obligation.” Schneider’s point helps explain the regulation’s broad reach.
The law’s territorial scope means companies can be subject to GDPR even with no physical presence in Europe. If a US company markets products to EU residents or monitors the online behavior of EU users, GDPR obligations apply.
Article 5 of the GDPR sets out seven foundational principles: lawfulness, fairness, and transparency; purpose limitation; data minimization; accuracy; storage limitation; integrity and confidentiality; and accountability.
“Data minimization should be in neon lights above every privacy program. The less you collect, the less you can mishandle,” advises Alex Sharpe of Sharpe Management Consulting LLC.
All seven foundational principles, including ‘data minimization,’ frame the day-to-day decisions companies must make regarding notices, record keeping, retention, and security controls.
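As a concrete illustration of data minimization in practice, the short Python sketch below filters an incoming record down to an allow-list of fields tied to a stated purpose before anything is stored or shared. The purpose label and field names are hypothetical examples, not requirements drawn from the regulation’s text.

```python
# Illustrative data-minimization filter: retain only the fields needed for a
# stated processing purpose. Purpose and field names are examples only.
ALLOWED_FIELDS = {
    "background_check": {"full_name", "date_of_birth", "consent_reference"},
}


def minimize(record: dict, purpose: str) -> dict:
    allowed = ALLOWED_FIELDS.get(purpose, set())
    # Anything not required for the purpose is dropped rather than retained.
    return {k: v for k, v in record.items() if k in allowed}


applicant = {
    "full_name": "Jane Doe",
    "date_of_birth": "1990-01-01",
    "consent_reference": "C-123",
    "social_media_handle": "@janedoe",  # not needed for this purpose; dropped
}

print(minimize(applicant, "background_check"))
```

The design point is that collection limits are enforced at the point of intake, so downstream retention, security, and breach-notification obligations attach to less data.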
The GDPR distinguishes between three major roles: data subjects (the individuals whose data is processed), controllers (who determine the purposes and means of processing), and processors (who process personal data on a controller’s behalf).
Before processing any personal data, controllers must identify at least one lawful basis: the data subject’s consent, performance of a contract, compliance with a legal obligation, protection of vital interests, performance of a task carried out in the public interest, or the controller’s legitimate interests.
Gabriel Buehler of Buehler Law, PLLC, stresses that poor consent practices are a major risk area. “Most companies underestimate the documentation burden, i.e., policies, audits, transfer assessments, until they’re in the middle of it,” he notes, adding that consent obtained through bundled disclosures or so-called “dark patterns” is unlikely to meet GDPR standards.
GDPR gives individuals strong rights over their personal data, including the rights of access, rectification, erasure, restriction of processing, data portability, and objection, as well as rights related to automated decision-making.
For US companies, the operational challenge is creating workflows to validate identities, collect data from internal systems, and deliver responses within the required timeframes.
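One small piece of that workflow that lends itself to illustration is deadline tracking. The sketch below computes a response due date for a data subject request; the 30-day figure approximates the GDPR’s one-month response window under Article 12(3) (extendable by two further months for complex or numerous requests) and is illustrative rather than legal advice.

```python
# Minimal sketch of tracking a data subject request deadline.
# GDPR Article 12(3): respond within one month of receipt, extendable by two
# further months for complex requests. The 30-day figure below is a rough
# approximation of "one month" for illustration only.
from datetime import date, timedelta


def response_deadline(received: date, extended: bool = False) -> date:
    base = received + timedelta(days=30)
    return base + timedelta(days=60) if extended else base


request_received = date(2026, 1, 15)
print("Respond by:", response_deadline(request_received))
print("If extended:", response_deadline(request_received, extended=True))
```

In practice this calculation would sit inside a ticketing or case-management system alongside identity verification and data-collection steps, so that each request is tracked against its deadline from the moment it is received.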
Article 25 requires organizations to incorporate privacy into the design of their systems, products, and internal processes. This includes implementing technical and organizational measures such as pseudonymization and data minimization, and ensuring that, by default, only the personal data necessary for each specific purpose is collected, processed, and made accessible.
Here, Alex Sharpe warns that organizations overcomplicate compliance when the fundamentals matter most, remarking that many companies “try to get clever instead of keeping systems simple, transparent, and user focused.”
Click Here for the Original Article
From AI regulation to data breach enforcement, the compliance landscape is more complex than ever. Cisive helps organizations stay ahead by proactively tracking regulatory changes and building screening programs that align with the latest legal and cybersecurity standards. Our experts are here to help you hire with confidence, eliminate blind spots, and protect what matters most. Talk to a background screening pro today.
Author: Michael Kendrick
Bio: Director of Corporate Legal/Compliance at Cisive.
Let's Connect on LinkedIn