

This edition of Cisive’s Quarterly Compliance Update includes articles about a range of legal and regulatory developments shaping the compliance landscape across federal, state, and international jurisdictions.
From AI and privacy laws to shifts in civil rights enforcement and background screening requirements, recent updates reflect a rapidly evolving environment that employers (especially those in highly regulated industries) must navigate carefully. This article roundup offers essential insights into how these changes may impact your policies, hiring practices, and compliance strategy.
Key Takeaways
Effective May 12, 2025, the Consumer Financial Protection Bureau (CFPB) formally revoked 67 different guidance documents by publishing a notice in the Federal Register. The CFPB’s action covers various guidance documents, interpretive rules, policy statements and advisory opinions across a range of laws and topics. The stated purpose of this move is threefold:
The CFPB’s new policy is to only issue guidance “where that guidance is necessary and would reduce compliance burdens.”
There is “no pressing need for interpretive guidance to remain in effect” because the CFPB is “reducing its enforcement activities” in light of the current administration’s “directives to deregulate and streamline bureaucracy.”
The CFPB’s “guidance is generally non-binding and generally does not create substantive rights,” and sometimes goes beyond the relevant statute or regulation it seeks to interpret.
The CFPB’s action has raised significant questions for financial institutions. For example, does the withdrawal of a particular guidance document mean that the current CFPB disagrees with the interpretation or position articulated in that document? Should an entity change course and take a different approach to complying with whatever the guidance addressed? And does the CFPB’s choice to keep a certain guidance document in force and not withdraw it mean that the current CFPB agrees with the interpretation or policy articulated in that document?
Unfortunately, there aren’t any answers to these questions at present. For starters, the CFPB does not explain why any particular guidance document is being withdrawn. Instead, the agency offers a number of potential explanations, such as inconsistency with the statutory text, violations of notice-and-comment rulemaking requirements, inconsistency with the agency’s current positions, or even just the agency’s “current policy to avoid issuing guidance except where necessary and where compliance burdens would be reduced rather than increased.”
The problem for industry is that the explanation for withdrawing each document matters. For example, if the CFPB withdrew one guidance document because it now deems the interpretation to be inconsistent with the statutory text, then regulated entities would have to consider changing how they interpret and comply with various parts of the law. On the other hand, if the CFPB withdrew a document as “unnecessary” (notwithstanding the fact that it advances a permissible interpretation), a regulated entity would not necessarily need to change its internal policies or procedures, as its existing approach would remain compliant with federal law.
Click Here for the Original Article
On Wednesday, April 23, 2025, President Trump signed EO 14281, titled Restoring Equality of Opportunity and Meritocracy (EO), stating a new Trump Administration policy “to eliminate the use of disparate-impact liability in all contexts to the maximum degree possible . . . .”
We, along with several of our colleagues, already explained this EO, but this shift in federal policy – barely noticed by most people amidst myriad controversies, memes, and crypto schemes, as well as a number of other executive orders – is important enough to warrant further consideration by anyone who manages workplaces and those of us who advise employers about civil rights laws.
In 1971, the Supreme Court of the United States (SCOTUS) recognized in Griggs v. Duke Power Co. that Title VII of the Civil Rights Act of 1964 “proscribes not only overt discrimination but also practices that are fair in form, but discriminatory in operation.” Thus, in the first case in which SCOTUS addressed such a Title VII claim on the merits, the Court approved disparate impact as a theory of liability under Title VII; i.e., that a plaintiff can establish a prima facie case of discrimination by showing that a facially neutral employment policy disproportionately excluded members of a protected class at a statistically relevant level.
Two years after the Supreme Court held in Wards Cove Packing Co. v. Atonio that employers defending a disparate impact claim need only “produce evidence of a legitimate business justification” for the policy in question, Congress amended Title VII with the Civil Rights Act of 1991 (CRA). The CRA requires defendants to prove that a neutral employment policy with a statistically significant adverse impact on a protected class was job related and consistent with business necessity, a more difficult standard to meet than the standard set in Wards Cove. See 42 U.S.C. § 2000e-2(k), the statutory provision on “Burden of proof in disparate impact cases” that Congress created and President George H.W. Bush approved.
In 2009, in Ricci v. DeStefano, a divided SCOTUS addressed whether an employer that engages in “disparate treatment” can justify doing so to avoid “disparate impact” liability. The majority held that an employer may do so only if it can prove its reasoning under a “strong-basis-in-evidence” standard.
Concurring with the majority, Justice Scalia amplified the argument that the disparate impact provisions in Title VII are at odds with the Constitution’s equal protection clause. This viewpoint has won favor in certain corners of legal scholarship. See, for example, a Harvard Journal of Law & Public Policy discussion of disparate impact by Pacific Legal Foundation Fellow Alison Slomin, and an article posted by the Federalist Society asking whether the disparate impact doctrine is unconstitutionally vague.
Click Here for the Original Article
The concept of the “supergroup” may have originated with rock and roll, but on April 16, 2025, privacy practitioners in the United States learned that a whole new type of supergroup has been formed. Far from being a reboot of Cream or the Traveling Wilburys, however, this latest supergroup is comprised of eight state privacy regulators from seven states (each of which has enacted a comprehensive state privacy law), who announced they have formed a bipartisan coalition to “safeguard the privacy rights of consumers” by coordinating their enforcement efforts relating to state consumer privacy laws.
The Consortium of Privacy Regulators, comprised of state attorneys general from California, Colorado, Connecticut, Delaware, Indiana, New Jersey, and Oregon, as well as the California Privacy Protection Agency, seeks to facilitate discussions of privacy law developments and shared regulatory priorities, and to share expertise and resources to focus on multijurisdictional consumer protection issues. The constituent attorneys general come from states that have been particularly active in the privacy regulation space, and this coalition will ostensibly allow them to pursue more coordinated, large-scale efforts to investigate and hold companies accountable for noncompliance with common legal requirements applicable to personal information. Of particular importance to this new regulatory body is the facilitation of consumer privacy rights, such as the “rights to access, delete, and stop the sale of personal information, and similar obligations on businesses.”
While this announcement is certainly big news, it is not entirely surprising. Over the course of the past several years, there has been an apparent uptick in coordinated regulation in other areas of data privacy law, especially with respect to data breach investigation and enforcement at the state regulatory level. Just as state attorneys general have been following up with companies that have reported data breaches with an increased diligence and depth (and, in some cases, imposing more substantial civil penalties and seeking to enter into consent orders with these companies), companies can likely expect similarly heightened scrutiny with respect to their consumer privacy practices. And, given the Consortium’s announced intent to hold regular meetings and coordinate enforcement based on members’ common interests, businesses can likely expect that this additional scrutiny will begin very quickly.
Click Here for the Original Article
Since January, the federal government has moved away from comprehensive legislation on artificial intelligence (AI) and adopted a more muted approach to federal privacy legislation (as compared to 2024’s tabled federal legislation). Meanwhile, state legislatures forge ahead – albeit more cautiously than in preceding years.
As we previously reported, the Colorado AI Act (COAIA) will go into effect on February 1, 2026. In signing the COAIA into law last year, Colorado Governor Jared Polis (D) issued a letter urging Congress to develop a “cohesive” national approach to AI regulation that would preempt the growing patchwork of state laws. Absent a federal AI law, Governor Polis encouraged the Colorado General Assembly to amend the COAIA to address his concerns that the COAIA’s complex regulatory regime may drive technology innovators away from Colorado. Eight months later, the Trump Administration announced its deregulatory approach to AI regulation, making federal AI legislation unlikely. At that time, the Trump Administration seemed to consider existing laws – such as Title VI and Title VII of the Civil Rights Act and the Americans with Disabilities Act, which prohibit unlawful discrimination – as sufficient to protect against AI harms. Three months later, a March 28 Memorandum issued by the federal Office of Management and Budget directed federal agencies to implement risk management programs designed for “managing risks from the use of AI, especially for safety-impacting and rights impacting AI.”
On April 28, two of the COAIA’s original sponsors, Senator Robert Rodriguez (D) and Representative Brianna Titone (D), introduced a set of amendments in the form of SB 25-318 (AIA Amendment). While the AIA Amendment seems targeted to address the concerns of Governor Polis, the legislative session ended May 7 before the bill could be considered, though it may resurface in the January 2026 session.
Click Here for the Original Article
Effective 1 January 2026, Illinois House Bill 3773 (HB 3773) amends the Illinois Human Rights Act (IHRA) to expressly prohibit employers from using artificial intelligence (AI) that “has the effect of subjecting employees to discrimination on the basis of protected classes.” Specifically, Illinois employers cannot use AI that has a discriminatory effect on employees, “[w]ith respect to recruitment, hiring, promotion, renewal of employment, selection for training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.”
Employers are increasingly using AI during the employment life cycle, including resume scanners, chatbots, and AI-powered performance management software. While AI tools can streamline processes and increase data-based decision making, they also carry risks, such as perpetuating bias and discrimination. In light of HB 3773, Illinois employers should be mindful of these risks and carefully select and regularly audit their AI applications to ensure that the applications do not have a discriminatory effect on applicants and employees.
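By way of illustration only, one simplified check sometimes used in bias audits is the “four-fifths” (80%) rule of thumb from the EEOC’s Uniform Guidelines, which compares each group’s selection rate with the highest group’s rate. The minimal Python sketch below uses hypothetical counts and group labels; HB 3773 itself does not prescribe any particular statistical test.

```python
# Illustrative four-fifths (80%) rule check on an AI screening tool's outcomes.
# All counts, group labels, and the 0.8 threshold are hypothetical assumptions
# for demonstration purposes, not a legal compliance standard.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the tool advanced."""
    return selected / applicants if applicants else 0.0

def adverse_impact_ratios(outcomes: dict) -> dict:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {group: selection_rate(sel, total) for group, (sel, total) in outcomes.items()}
    top = max(rates.values())
    return {group: (rate / top if top else 0.0) for group, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical data: (applicants advanced by the AI tool, total applicants)
    outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        status = "flag for review" if ratio < 0.8 else "within rule of thumb"
        print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

A ratio below 0.8 is commonly treated as a signal for closer review rather than proof of discrimination; a real audit would also consider statistical significance, job-relatedness, and the specific requirements of applicable law.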
Click Here for the Original Article
Recently, the California Civil Rights Council, which is the arm of the California Civil Rights Department that is responsible for promulgating regulations, voted to approve final “Employment Regulations Regarding Automated-Decision Systems” (“Regulations”). The Regulations attempt to curb discriminatory practices that can arise when using AI tools in the workplace. If they are approved by the Office of Administrative Law, the Regulations will become effective on July 1, 2025. The Regulations have undergone several revisions since they were initially proposed in May 2024, and their adoption would make California one of the first states to implement anti-discrimination regulations pertaining to automated-decision technology.
The updated Regulations define “Automated-Decision Systems” (ADS) as “[a] computational process that makes decisions or facilitates human decision making regarding an employment benefit,” that “may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.” Examples of functions that ADS can perform include resume screening, computer-based assessments, and analysis of applicant or employee data from third parties.
Both employers and “agents” are covered under the Regulations. Agents are defined as “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity . . . .” Such functions could include applicant recruiting and screening, hiring, or decisions pertaining to leaves of absence or benefits.
Click Here for the Original Article
On April 28, 2025, the Cleveland City Council unanimously passed Ordinance No. 104-2025 (the “salary ordinance”), which will ban any employer that employs fifteen or more employees in the City of Cleveland, as well as any employment agency operating on the employer’s behalf, from asking about or considering a job applicant’s salary history. The salary ordinance also requires job postings to provide the salary range or scale of the position. The ordinance will take effect on October 27, 2025.
The city council noted in the ordinance that Cincinnati, Columbus, and Toledo have similar pay equity laws.
The ordinance makes it an unlawful discriminatory practice to: (1) inquire about a job applicant’s salary history; (2) screen applicants based on their current or prior salary history; (3) rely solely on an applicant’s salary history in deciding whether to offer the applicant employment; or (4) refuse to hire or otherwise retaliate against an applicant who refuses to disclose his or her salary history. The ordinance also requires Cleveland employers to include the salary range or scale of the position in the notification, advertisement, or other formal posting that offers the opportunity to apply. The ordinance does not, however, prohibit an employer from inquiring about a job applicant’s salary expectations.
The ordinance only applies to positions that will be performed within Cleveland’s geographic boundaries, and “whose application, in whole or in part, will be solicited, received, processed, or considered in the City of Cleveland, regardless of whether the person is interviewed.”
Click Here for the Original Article
The Wisconsin Supreme Court has clarified that non-criminal, municipal citations are covered by the prohibition on arrest record discrimination under the Wisconsin Fair Employment Act (WFEA). Oconomowoc Area School District v. Cota, et al., 2025 WI 11 (Apr. 10, 2025). The decision reversed a 2024 court of appeals opinion.
The court also narrowed the scope of an exception to the law that allows employers to make employment decisions based on independent investigations.
This decision is the latest in the ever-changing jurisprudence on the WFEA’s prohibition against discrimination based on employees’ arrest and conviction records.
The Oconomowoc Area School District previously employed the plaintiffs as members of its grounds crew. Another employee accused the plaintiffs of stealing from the District, and the District investigated the allegations internally. Its investigation led the District to believe the plaintiffs had indeed stolen. Rather than immediately firing the plaintiffs, however, the District turned the matter over to the Oconomowoc Police Department to continue the investigation.
Law enforcement continued investigating and issued the plaintiffs citations for municipal theft, a non-criminal offense. In communications with the District, the assistant city attorney said he believed the plaintiffs were guilty and he could obtain convictions.
The District terminated the plaintiffs’ employment only after the assistant city attorney’s statements, basing its decision on its independent belief that the plaintiffs had stolen as well as on the plaintiffs’ municipal citations.
The plaintiffs filed a complaint against the District with the Wisconsin Department of Workforce Development, Equal Rights Division alleging their terminations constituted unlawful arrest record discrimination under the WFEA. An agency administrative law judge, the Labor and Industry Review Commission, and the county circuit court all agreed, finding the District violated the WFEA because the plaintiffs’ municipal citations fell within the WFEA’s definition of “arrest record.” The Wisconsin Court of Appeals disagreed, reversing the prior decisions and finding the non-criminal, municipal citations were not an “arrest record” under the WFEA and employers were free to utilize such citations in making employment decisions. An appeal to the Wisconsin Supreme Court followed.
The Wisconsin Supreme Court reversed the court of appeals, finding that even non-criminal, municipal citations were an “arrest record” under the WFEA. The court found that the phrase “any … other offense” in Wis. Stat. § 111.32(1) includes violations of both criminal and non-criminal laws.
The court then turned to whether the plaintiffs’ terminations were based on their arrest records. The District argued the terminations were lawful under the “Onalaska defense” because the District’s decision was based on its internal investigation in addition to the plaintiffs’ arrest record. The District argued its belief that the plaintiffs were guilty after the internal investigation demonstrated its decision was not based on the plaintiffs’ arrest records. The court’s majority disagreed.
The court said that Onalaska holds “simply that an employer who does not rely on arrest-record information when making a discharge decision does not discriminate against an employee because of their arrest record.” Because the court agreed with the Labor and Industry Review Commission’s finding that the District did not act until after the law enforcement investigation and citations, the court found the District relied on the plaintiffs’ arrest records and concluded the District violated the WFEA.
The court’s holding means that arrest record discrimination can occur even when the arrest record played only a small part in an employer’s motivation for its decision.
Employers should be mindful of the Wisconsin Supreme Court decision before making an employment decision based on an employee’s potentially unlawful activity. Relying on a complete and thorough internal investigation to the extent possible in making an adverse employment decision will help minimize the risk of running afoul of the WFEA.
Based on the court’s decision, employers should be cautious when considering an employee’s arrest record or other potentially unlawful conduct. Employers should take action only after determining whether the offense substantially relates to the employee’s employment and consulting with legal counsel.
Click Here for the Original Article
In 2018, Washington enacted a Fair Chance Act, requiring covered employers to wait until after determining that an applicant is “otherwise qualified” for the position at issue before inquiring about or considering criminal history when making hiring or other employment decisions. There are exceptions for, among others, financial institutions and certain regulated employers.
Starting on July 1, 2026, however, unless an exception applies, Washington employers with 15 or more employees must comply with the amended law’s new screening requirements. Employers with fewer than 15 employees must comply starting January 1, 2027.
The amendment adds a requirement not seen in other fair chance laws—if an employer advises job applicants that the position will be subject to a post-offer criminal history background check, the employer must immediately make a written disclosure to the applicant that summarizes certain aspects of the law and includes a copy of the Attorney General’s Fair Chance Act guide, which can be found here. An employer must provide these same disclosures if an applicant voluntarily discloses information about their criminal history during a job interview.
Failure to comply can result in civil penalties, and aggrieved individuals may be entitled to compensatory damages.
All employers should consider a privileged review of their background screening practices by experienced counsel. Beyond Washington, several states and localities have their own so-called fair chance laws with “job relatedness” requirements for an employer’s use of criminal history information, including California, Illinois, New York, and Wisconsin, among others. And several have enhanced notice requirements that go beyond the FCRA’s requirements.
Click Here for the Original Article
There have been a number of developments in the anti-bribery/anti-corruption sector following the start of President Trump’s second term. First, on February 5th, Attorney General Pamela Bondi issued a memo titled, “Total Elimination of Cartels and Transnational Criminal Organizations” in which she instructed the Foreign Corrupt Practices Act (FCPA) Unit at DOJ to focus on “investigations related to foreign bribery that facilitate the criminal operations of Cartels” and Transnational Criminal Organizations (TCOs). She also suspended the requirement that the Fraud Section lead investigations involving the FCPA and Foreign Extortion Prevention Act if the investigation is into “foreign bribery associated with Cartels and TCOs.”
That memo was followed by President Trump signing Executive Order 14209 titled, “Pausing Foreign Corrupt Practices Act Enforcement to Further American Economic and National Security.” In that order, President Trump noted that FCPA enforcement has been “stretched beyond proper bounds and abused in a manner that harms” the U.S. He further explained that FCPA enforcement compromises the U.S.’s foreign policy goals, “the President’s Article II authority over foreign affairs,” national security, and the ability of the U.S. “and its companies gaining strategic business advantages.” President Trump ordered Attorney General Bondi to revise DOJ guidelines and policies related to the FCPA, require new FCPA investigations and enforcement actions to have Attorney General Bondi’s approval, review pending FCPA investigations and enforcement actions to ensure their compliance with the order, and identify whether any remedial actions are necessary. This order was accompanied by a fact sheet that echoed the points described above.
In response to the federal government’s shift with FCPA enforcement, states have indicated they may step up their bribery enforcement actions. For example, California’s Attorney General Rob Bonta issued a legal advisory “reminding businesses operating in California that it is illegal to make payments to foreign-government officials to obtain or retain business.” Attorney General Bonta explained that FCPA violations can be the basis for actions under California’s Unfair Competition Law. Similarly, the District Attorney for Manhattan shared that his office was exploring methods for taking on various enforcement priorities that the DOJ has indicated it will not be pursuing as heavily as it once did, including domestic bribery and corruption.
Internationally, the UK, France, and Switzerland announced the launch of the International Anti-Corruption Prosecutorial Taskforce. The taskforce includes (1) the UK’s Serious Fraud Office, (2) France’s National Financial Prosecutor’s Office, and (3) the Office of the Attorney General of Switzerland. Among other things, the taskforce announced its commitment to tackling “the significant threat of bribery and corruption and the severe harm that it causes.” The taskforce also noted that it would “invite other like-minded agencies” to join the taskforce’s efforts. To accomplish its goal, the taskforce identified four action items: (1) regularly exchanging “insight and strategy,” (2) “devising proposals for co-operation on cases,” (3) sharing best practices “to make full use of” the taskforce’s “combined expertise,” and (4) “seizing opportunities for operational collaboration.”
Last week, Jean-Francois Bohnert, who heads the French agency tasked with bringing enforcement actions against corruption, among other offenses, mentioned that other countries have contacted the taskforce to discuss their interest in potentially cooperating with it. Those countries included Germany, Italy, the Netherlands, Spain, and multiple unnamed countries in Latin America.
These developments have led some to question the trajectory of the FCPA enforcement landscape for the duration of the Trump administration. That said, speculation that FCPA enforcement would be all but non-existent appears overstated at this point: although the federal government has refrained from proceeding with some pending FCPA trials, it has proceeded with others. As for state enforcement actions in this area, it remains to be seen just how robust those actions can be, given restrictions such as jurisdictional limitations and already-stretched state resources.
Click Here for the Original Article
With the entry into force of the AI Act (Regulation 2024/1689) in August 2024, the EU established a pioneering regulatory framework for AI.
On February 2, 2025, the first provisions of the AI Act became applicable, including the AI system definition, AI literacy and a limited number of prohibited AI practices. In line with article 96 of the AI Act, the European Commission released detailed guidelines on the application of the definition of an AI system on February 6, 2025.
These non-binding guidelines are of high practical relevance, as they seek to bring legal clarity to one of the most fundamental aspects of the act – what qualifies as an “AI system” under EU law. Their publication offers critical guidance for developers, providers, deployers and regulatory authorities aiming to understand the scope of the AI Act and assess whether specific systems fall within it.
Article 3(1) of the AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate output, such as predictions, content, recommendations or decisions that can influence physical or virtual environments.
The European Commission emphasizes that this definition is based on a lifecycle perspective, covering both the building phase (pre-deployment) and the usage phase (post-deployment). Importantly, not all definitional elements must always be present—some may only appear at one stage, making the definition adaptable to a wide range of technologies, in line with the AI Act’s future-proof approach.
The guidelines reaffirm that all AI systems must operate through machines – comprised of both hardware (e.g., processors, memory and interfaces) and software (e.g., code, algorithms and models) components. This includes not only traditional digital systems, but also advanced platforms such as quantum computing and biological computing, provided they possess computational capacity.
Another essential requirement is autonomy, described as a system’s capacity to function with some degree of independence from human control. This does not necessarily imply full automation, but may include systems capable of operating based on indirect human input or supervision. Systems that are designed to operate solely with full manual human involvement and intervention therefore fall outside the definition.
An AI system may, but is not required to, exhibit adaptiveness – meaning it can modify its behavior post-deployment based on new data or experiences. Importantly, adaptiveness is optional and systems without learning capabilities can still qualify as AI if other criteria are met. However, this characteristic is crucial in differentiating dynamic AI systems from static software.
AI systems are designed to achieve specific objectives, which can be either explicit (clearly programmed) or implicit (derived from training data or system behavior). These internal objectives are distinct from the system’s intended purpose, which is externally defined by the provider and the context of use.
It is the capacity to infer how to generate output based on input data that defines an AI system. This distinguishes AI systems from traditional rule-based or deterministic software. According to the guidelines, “inferencing” encompasses both the use phase, where outputs such as predictions, decisions or recommendations are generated, and the building phase, where models or algorithms are derived using AI techniques.
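To make the distinction concrete, the short Python sketch below (using purely hypothetical features and labels, with scikit-learn as an illustrative library) contrasts hand-coded, deterministic decision logic with a model that derives its decision logic from training data; under the guidelines, it is the latter kind of inference that is characteristic of an AI system.

```python
# Illustrative contrast between rule-based software and a system that "infers"
# its decision logic from data. Features, labels, and threshold are hypothetical.

from sklearn.linear_model import LogisticRegression

# Deterministic, rule-based software: the decision logic is fixed by the developer.
def rule_based_decision(years_experience: float) -> int:
    return 1 if years_experience >= 3.0 else 0

# Machine-learning system: the decision logic is inferred from training examples.
X = [[0.5], [1.0], [2.0], [3.5], [5.0], [7.0]]  # hypothetical feature values
y = [0, 0, 0, 1, 1, 1]                           # hypothetical outcomes
model = LogisticRegression().fit(X, y)

print(rule_based_decision(4.0))        # follows an explicit, hand-written rule
print(int(model.predict([[4.0]])[0]))  # inferred from parameters learned from data
```

Only the second system exhibits the inference element described in the guidelines; whether any given tool actually falls within the AI Act’s definition depends on the full set of elements discussed in this section.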
The output of an AI system (predictions, content, recommendations or decisions) must be capable of influencing physical or virtual environments. This captures the wide functionality of modern AI, from autonomous vehicles and language models to recommendation engines. Systems that only process or visualize data without influencing any outcome fall outside the definition.
Finally, AI systems must be able to interact with their environment, either physical (e.g., robotic systems) or virtual (e.g., digital assistants). This element underscores the practical impact of AI systems and further distinguishes them from purely passive or isolated software.
Click Here for the Original Article
As governments respond to emerging technologies and shifting political priorities, compliance obligations are becoming more complex and fragmented. Employers must stay alert to both the risks and the opportunities these changes present.
Cisive helps clients in highly regulated industries eliminate blind spots, proactively adapt to legal shifts, and hire with confidence. If you need guidance on how these developments could affect your background screening program, our compliance experts are here to support you every step of the way.
Author: Michael Kendrick
Bio: Director of Corporate Legal/Compliance at Cisive.
Let's Connect on LinkedIn