Navigating the complexities of compliance within highly regulated industries is increasingly critical as regulatory landscapes evolve. From recent federal updates affecting financial services to significant privacy law changes and the integration of AI in workplace practices, HR and talent acquisition professionals must stay informed. This blog provides a deep dive into the latest legal shifts, offering insights to guide your strategies in compliance and talent management.
FEDERAL UPDATES
CFPB Poised To Up the Ante after Supreme Court Victory
In what should not come as a surprise to anyone who observed the October oral argument in the case of Consumer Financial Protection Bureau v. Community Financial Services Association of America, the U.S. Supreme Court emphatically ruled on May 16 that the bureau’s funding structure did not violate the appropriations clause of the U.S. Constitution.
The bureau’s comments — and actions — after the decision make clear that entities subject to the CFPB’s regulatory, enforcement and supervisory authority can expect it to ramp up its enforcement activity significantly, especially since many lawsuits were stayed pending a decision in Community Financial Services.
What Was at Issue in Community Financial Services?
Briefly, this case involved a challenge to the CFPB’s payday lending regulation brought by a group of trade associations that represent payday lenders. The trade associations argued, among other things, that the regulation was unconstitutional because the bureau’s funding mechanism violated the appropriations clause of the U.S. Constitution. The U.S. District Court for the Western District of Texas rejected that argument, finding that “[t]he Appropriations Clause ‘means simply that no money can be paid out of the Treasury unless it has been appropriated by an act of Congress.'”
On appeal, the U.S. Court of Appeals for the Fifth Circuit reversed and found that the bureau’s “self-actualizing, perpetual funding mechanism” violated the appropriations clause. The bureau quickly appealed to the Supreme Court, which accepted certiorari. In the meantime, many actions brought by the bureau throughout the U.S. were promptly stayed pending the outcome of this matter. The number of lawsuits brought by the bureau also significantly slowed while its appeal was pending.
Click Here for the Original Article
The American Privacy Rights Act's Definition of Covered Data
In combing through a proposed or draft bill, privacy professionals naturally orient themselves by seeking out defined terms, scanning for the foundational and consequential definition of "personal data." Within the discussion draft of the latest effort to enact a national comprehensive privacy law, the American Privacy Rights Act, such a search comes up empty. APRA drafters have eschewed attaching the modifier "personal" to the elemental definition of the data it covers. Despite nominally departing from the terminology found in existing privacy legislation, the APRA's definition of "covered data" draws heavily from privacy and data protection regimes in the states and abroad.
The definition
The APRA discussion draft defines "covered data" as "information that identifies or is linked or reasonably linkable, alone or in combination with other information, to an individual or a device that identifies or is linked or reasonably linkable to 1 or more individuals." At a basic level, this definition only slightly deviates from the terms "personal information" and "personal data" used in other U.S. state comprehensive privacy legislation, but further digging reveals important differences in scope.
Information linked or reasonably linkable to an individual
The requirement for covered data to be "linked or reasonably linkable" to an individual expands the scope from just data collected from or about one or more individuals to include data created that relates to one or more individuals. Likewise, any data that has a reasonable possibility of being linked in the future to an individual will also fit under the definition of covered data.
Turning to EU data protection law, personal data under the EU General Data Protection Regulation is defined as information "relating to an identified or identifiable natural person." Despite the differences in wording, both definitions mark the outer bounds of what constitutes personal or covered data through a test of reasonableness.
Where the APRA's covered data must at the very least be "reasonably linkable" to an individual, whether data is personal under the GDPR is determined after accounting for "all means likely reasonably to be used by the controller … to identify the said person," according to Recital 26. Data that does not immediately appear personal or covered may become so if processing identifies or links it to an individual, but only so far as the means of processing likely to be used to identify or link to that individual are reasonable.
The covered data definition also leads to the inclusion of derived or inferred data, topics that have grown contentious given the extent to which many entities rely on new data created based on data previously collected, especially across the advertising technology and AI industries. Provided that such data is linked or reasonably linkable to an individual or device, alone or in combination with other data, it will likely be considered covered data under the APRA.
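For teams mapping a data inventory against the draft, the basic test can be expressed as a simple screen. The Python sketch below is purely illustrative and assumes a hypothetical inventory structure; the `DataElement` fields and the `is_covered_data` helper are invented names, not terms from the APRA text.

```python
from dataclasses import dataclass

@dataclass
class DataElement:
    """One field in a hypothetical data inventory."""
    name: str
    linked_to_individual: bool            # identifies or is linked to a person today
    reasonably_linkable: bool             # could reasonably be linked to a person later
    linked_to_identifiable_device: bool   # tied to a device linkable to one or more people
    derived_or_inferred: bool             # created from other data rather than collected

def is_covered_data(element: DataElement) -> bool:
    """Screen a field against the draft's basic test: information that identifies,
    or is linked or reasonably linkable to, an individual or an identifiable device.
    Derived or inferred data gets no special carve-out; it is covered whenever it
    remains linkable."""
    return (
        element.linked_to_individual
        or element.reasonably_linkable
        or element.linked_to_identifiable_device
    )

# Example: an inferred interest segment keyed to an advertising device ID
segment = DataElement(
    name="inferred_interest_segment",
    linked_to_individual=False,
    reasonably_linkable=True,
    linked_to_identifiable_device=True,
    derived_or_inferred=True,
)
print(is_covered_data(segment))  # True -- likely covered data under the draft
```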
The extent to which technology enables entities to derive or infer sensitive personal data from personal and publicly available data has grown in recent years, a concern raised by a small number of lawmakers at the state level. California Attorney General Rob Bonta opined that the California Consumer Privacy Act confers protections on such inferences drawn from personal information. Thus far, the Oregon Consumer Privacy Act is the only other enacted state bill to follow suit.
Click Here for the Original Article
50 States, 1 Goal: Examining State-Level Recidivism Trends in the Second Chance Act Era
Since its passage in 2008, the Second Chance Act has invested $1.2 billion, infusing state and local efforts to improve outcomes for people leaving prison and jail with unprecedented resources and energy. Over the past 15 years, the Bureau of Justice Assistance and the Office of Juvenile Justice and Delinquency Prevention have awarded funding to 1,123 Second Chance Act grantees to improve reentry outcomes for individuals, families, and communities. And critically, the Second Chance Act-funded National Reentry Resource Center has built up a connective tissue across local, state, Tribal, and federal reentry initiatives, convening the many disparate actors who contribute to reentry success.
The result? A reentry landscape that would have been unrecognizable before the Second Chance Act’s passage. State and local correctional agencies across the country now enthusiastically agree that ensuring reentry success is core to their missions. And they are not alone: state agencies that work on everything from housing and mental health to education and transportation now agree that they too have a role to play in determining outcomes for people leaving prison or jail.
Community-based organizations, many led or staffed by people who were once justice-involved themselves, are contributing passion and creativity, standing up innovative programs to connect people with housing, jobs, education, treatment, and more. Researchers have built a rich body of evidence about what works to reduce criminal justice involvement and improve reentry outcomes, allowing the National Reentry Resource Center to create and disseminate toolkits and frameworks to support jurisdictions to scale up effective approaches. And private corporations that once saw criminal justice involvement as fatal to a candidate’s job application are now using their platforms to champion second chance employment as both a moral and business imperative. The efforts of these key stakeholders are bigger, bolder, and better coordinated than ever, and they are producing results. Recidivism has declined significantly in states across the country, saving governments money, keeping neighborhoods safer, and allowing people to leave their justice involvement behind in favor of rich and meaningful lives in their communities.
Click Here for the Original Article
Proposed Marijuana Reclassification and Impact on Employers
On May 16, 2024, the U.S. Department of Justice announced that the Attorney General has submitted a notice of proposed rulemaking initiating a formal rulemaking process to consider moving marijuana from a Schedule I to a Schedule III controlled substance.
Regulatory Next Steps
The rescheduling of a controlled substance follows a formal rulemaking procedure that requires notice to the public, an opportunity for comment, and an administrative hearing. The Attorney General’s proposal starts the process, in which the Drug Enforcement Administration (DEA) will gather and consider information and views submitted by the public in order to make a determination about the appropriate schedule. During that process, and until a final rule is published, marijuana remains a Schedule I controlled substance. The Controlled Substances Act, passed in 1970, created five schedules for classifying various substances, placing cannabis on Schedule I, along with heroin, LSD, and other drugs with “no currently accepted medical use or treatment” value. While reclassifying cannabis would not legalize recreational cannabis nationwide, it would place cannabis with other Schedule III drugs, including ketamine, anabolic steroids, and some acetaminophen-codeine combinations.
Implications for Employers
The DEA proposal has no immediate impact on state or federal laws regulating marijuana. Prospectively, the biggest impact may concern employers in industries, such as transportation, that perform drug testing in accordance with federal requirements. The Federal Omnibus Transportation Employee Testing Act requires all U.S. Department of Transportation (DOT) agencies to implement drug and alcohol testing requirements for “safety-sensitive” employment positions regulated by those agencies. Accordingly, employees with safety-sensitive responsibilities regulated by agencies, including the Federal Aviation Administration, Federal Motor Carrier Safety Administration, and the Federal Railroad Administration, are subject to extensive mandatory drug testing.
Click Here for the Original Article
An Oxymoron or a Road Map? US Department of Labor’s Artificial Intelligence and Worker Well-Being
The Department of Labor's (DOL) May 16, 2024 guidance, Artificial Intelligence and Worker Well-Being: Principles for Developers and Employers, was published in response to the mandates of Executive Order 14110 (EO 14110), the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The guidance weighs the benefits and risks of an AI-augmented workplace and establishes Principles that endeavor to ensure the responsible and transparent use of AI.
The DOL’s publication of these Principles follows in the footsteps of the EEOC and the OFCCP’s recent guidance on AI in the workplace and mirrors, in significant respects, the letter and spirit of their pronouncements.
While not “exhaustive,” the Principles “should be considered during the whole lifecycle of AI,” from “design to development, testing, training, deployment and use, oversight, and auditing.” Although the DOL intends the Principles to apply to all business sectors, the guidance notes that not all Principles will apply to the same extent in every industry or workplace, and thus should be reviewed and customized based on organizational context and input from workers.
While not defined in the Principles, EO 14110 defines artificial intelligence as set forth in 15 U.S.C. 9401(3): “A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.”
Good News and Bad News of AI
The DOL notes that implementation of AI in the workplace will create demand for workers to gain new skills and spur training to learn how to use AI in their day-to-day work. In addition, the guidance notes that AI will also continue creating new jobs, including those focused on the development, deployment, and human oversight of AI.
As the guidance warns, however, AI-augmented work also poses risks to workers, including the loss of autonomy and direction over their work and a potential decline in job quality: “The risks of AI for workers are greater if it undermines workers' rights, embeds bias and discrimination in decision-making processes, or makes consequential workplace decisions without transparency, human oversight and review.” Moreover, in its final dire prediction, the guidance notes that the use of workplace AI also risks displacing workers from their jobs.
Click Here for the Original Article
STATE, CITY, COUNTY AND MUNICIPAL UPDATES
Minnesota Passes New Job Posting Transparency Law
Minnesota is the latest jurisdiction to enact a pay transparency law, joining California, Colorado, Illinois, Maryland, New York, Washington State, and Washington, D.C., among other jurisdictions taking steps to address pay disparity concerns. On May 17, 2024, Minnesota Gov. Walz signed the Omnibus Labor and Industry policy bill, which includes new job posting requirements. This law is in addition to existing Minnesota wage disclosure protections for employees.
The new law takes effect Jan. 1, 2025.
Pay Transparency Disclosures
The law applies to employers that employ 30 or more employees at one or more sites in Minnesota. The law does not address whether it applies to positions that could be filled by an employee working remotely.
Under the law, in each posting for a job opening, a covered employer must disclose the starting salary range or fixed pay rate and a general description of all benefits and other compensation to be offered to a selected job applicant. The description of benefits and other compensation must include, at a minimum, any health benefits, retirement benefits, and other financial perks associated with the position.
Employers must set the salary range in good faith. Salary range is defined as the “minimum and maximum annual salary or hourly range of compensation” anticipated “at the time of the posting of an advertisement for such opportunity.” The range cannot be open-ended. Alternatively, if the employer does not want to provide a wage range, the employer must list a “fixed pay rate,” meaning the exact salary or hourly rate the employer intends to pay a successful applicant.
Under the law, a job posting encompasses (1) any solicitation intended to recruit job applicants for a specific available position, (2) recruitment performed by an employer directly or by a third party, such as job sites, and (3) electronic or hard-copy job postings that list qualifications for desired applicants.
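For HR teams that manage postings programmatically, the disclosure elements above can be translated into a simple pre-publication check. The Python sketch below is illustrative only and assumes a hypothetical posting record; the `JobPosting` fields and `posting_issues` helper are invented names rather than anything prescribed by the Minnesota law.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class JobPosting:
    title: str
    salary_min: Optional[float] = None      # minimum annual salary or hourly rate
    salary_max: Optional[float] = None      # maximum annual salary or hourly rate
    fixed_pay_rate: Optional[float] = None  # exact salary or hourly rate, if no range
    benefits_description: str = ""

def posting_issues(posting: JobPosting) -> List[str]:
    """Flag gaps against the disclosure elements summarized above: a good-faith,
    non-open-ended salary range or a fixed pay rate, plus a general description
    of benefits that at minimum mentions health and retirement benefits."""
    issues: List[str] = []
    has_min = posting.salary_min is not None
    has_max = posting.salary_max is not None

    if not has_min and not has_max and posting.fixed_pay_rate is None:
        issues.append("No salary range or fixed pay rate disclosed.")
    elif has_min != has_max:
        issues.append("Salary range appears open-ended; both ends must be stated.")
    elif has_min and has_max and posting.salary_max < posting.salary_min:
        issues.append("Salary range maximum is below the minimum.")

    text = posting.benefits_description.lower()
    for topic in ("health", "retirement"):
        if topic not in text:
            issues.append(f"Benefits description does not mention {topic} benefits.")
    return issues

# Example: an open-ended range and no benefits description would be flagged
print(posting_issues(JobPosting(title="Recruiter", salary_min=60000)))
```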
Click Here for the Original Article
What You Need to Know about Colorado’s New Comprehensive AI Law
On May 8, 2024, Colorado’s legislature enacted “An Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (SB205), a state law that comprehensively regulates the use of certain “Artificial Intelligence (AI)” systems. The law is aimed at addressing AI bias, establishing a requirement of human oversight throughout the life cycle of AI systems, and requiring significant documentation around the use of AI. This blog post covers to whom the law applies, effective dates and penalties, important definitions, and initial steps companies should consider taking to prepare for complying with the law.
To whom does SB205 apply?
SB205 applies to any person doing business in Colorado who develops an “AI system” or deploys a “high-risk AI system” (each is discussed further below). The law defines “deploy” as “use,” meaning that SB205 applies to any company using a high-risk AI system, whether or not that system is consumer-facing. Developing an AI system, as defined in the law, also includes actions that “intentionally and substantially modify” an existing AI system.
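A rough way to frame the threshold question is to ask which role, or roles, your organization plays. The Python sketch below is a hypothetical decision helper; the function and parameter names are our own shorthand for the statutory concepts, not language from SB205.

```python
from enum import Enum, auto

class Sb205Role(Enum):
    DEVELOPER = auto()
    DEPLOYER = auto()
    BOTH = auto()
    LIKELY_OUT_OF_SCOPE = auto()

def classify_sb205_role(
    does_business_in_colorado: bool,
    develops_ai_system: bool,
    intentionally_substantially_modifies_ai_system: bool,
    uses_high_risk_ai_system: bool,
) -> Sb205Role:
    """Rough mapping of the applicability rules summarized above: developing an AI
    system (including intentionally and substantially modifying one) makes you a
    developer, while 'deploy' is defined as 'use', so merely using a high-risk AI
    system, consumer-facing or not, makes you a deployer."""
    if not does_business_in_colorado:
        return Sb205Role.LIKELY_OUT_OF_SCOPE
    developer = develops_ai_system or intentionally_substantially_modifies_ai_system
    deployer = uses_high_risk_ai_system
    if developer and deployer:
        return Sb205Role.BOTH
    if developer:
        return Sb205Role.DEVELOPER
    if deployer:
        return Sb205Role.DEPLOYER
    return Sb205Role.LIKELY_OUT_OF_SCOPE

# Example: an HR team that only uses a vendor's high-risk screening tool
print(classify_sb205_role(True, False, False, True))  # Sb205Role.DEPLOYER
```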
How is the law enforced?
SB205 explicitly excludes a private right of action, leaving enforcement solely with the Colorado Attorney General. Additionally, SB205 provides that if the Attorney General brings an enforcement action relating to high-risk AI systems, there is a rebuttable presumption that a company used “reasonable care” under the law if the company complied with the provisions of the applicable section setting forth the respective obligations (§1702 for a developer, §1703 for a deployer), along with any additional requirements that the Attorney General may promulgate. For example, if a developer faced an enforcement action related to the development of a high-risk AI system, and could demonstrate it had the requisite processes and documentation in place as required by Section 6-1-1702, it may benefit from a rebuttable presumption that the developer exercised reasonable care to protect consumers from risks of algorithmic discrimination. The law also provides companies with an affirmative defense against actions by the Attorney General if the company discovers the violation and takes corrective actions, in addition to maintaining a compliance program that meets certain criteria.
Click Here for the Original Article
California Civil Rights Department Unveils New Proposed Regulations on Employers’ Use of AI and Automated Systems
The California Civil Rights Department (CRD) has released new proposed regulations regarding employers’ use of artificial intelligence (AI) and automated decision-making systems that would affirm that the use of such technology in a way that discriminates against employees and job applicants based on protected characteristics is a substantive violation of California law.
Quick Hits
- The California Civil Rights Department released new proposed regulations for employers’ use of AI and automated decision-making systems.
- The proposed regulations would affirm that employers’ use of such hiring technologies may violate the state’s antidiscrimination laws and clarify limits on the use of such technology in criminal background checks and medical/psychological inquiries.
- The proposed regulations would also clarify the scope of third-party liability arising from the use of AI tools.
- Written comments on the proposed regulations must be submitted by July 18, 2024.
On May 17, 2024, the Civil Rights Council, a branch of the CRD, issued a notice of proposed rulemaking and new proposed modifications to California’s employment discrimination regulations. The notice kicks off a forty-five-day period for public comment on the proposed regulations.
The regulations, titled “Proposed Modifications to Employment Regulations Regarding Automated-Decision Systems,” address the growing use of AI and other automated systems by employers to make decisions regarding job applicants and employees, such as hiring or promotion decisions. While this technology has the potential to improve efficiency, in a statement, the CRD highlighted concerns that AI and automated systems can “exacerbate” existing biases and discrimination, such as reinforcing gender or racial stereotypes.
In general, the California Civil Rights Council’s proposed regulations would affirm that California’s antidiscrimination laws and regulations apply to potential discrimination caused by the use of automated systems, whether or not an employer uses a third party to design or implement the systems. The proposed regulations also address the use of automated systems for background checks and medical or psychological inquiries.
The proposed regulations come after years of consideration by the CRD, amid growing concern, and follow up on proposed regulations released in March 2022. Several states and the federal government are considering restrictions on the use of such emerging technologies, with most states focused on procedural regulations, such as requiring that certain notices be provided to employees and that employers take certain steps to discover and root out bias in such systems. If adopted, the proposed regulations would make California the first state to adopt substantive restrictions.
Click Here for the Original Article
INTERNATIONAL UPDATES
Do I Really Have To? A Two-Part Framework for Determining if the EU AI Act Applies to You - Part 1: What’s Your Role?
This is the second installment in our series of client advisory articles in which we unpack the European Union’s (EU’s) Artificial Intelligence Act (AI Act) to help you evaluate if, when, and how you need to comply with the AI Act’s requirements.
Our first installment provided an overview of the AI Act, its risk-based regulatory approach and anticipated compliance timeline. See “The European Union Has Assigned Your AI Some Homework.” With that basic understanding, the immediate next question becomes: “Do I really have to?”
To answer this question requires an understanding of both your place in the AI lifecycle and the risk category your AI use case falls within. This article will focus on this lifecycle analysis, while our next article will discuss risk categories in more detail.
Let’s Start with the Basics: Is the AI Act Relevant to Your Business or Organization?
As discussed in our overview article, while the AI Act broadly defines AI and has a strong extraterritorial effect, it does not sweep every AI use case into its regulatory orbit. Two preliminary questions can help you decide:
1. Do we develop, market, deploy, operate, or otherwise use technology that fits within the EU’s definition of Artificial Intelligence (AI)?
If you are reading this, then the answer is most likely yes. The AI Act broadly defines AI as a “machine-based system” that “infers from the input it receives how to generate output as predictions, content, recommendations, or decisions that can influence physical or virtual environments” and “may exhibit adaptiveness after deployment.” Article 3(1).
As a result, the AI Act’s definition covers a wide range of existing “smart” products, from the thermostat in your home to the algorithmic suggestions on your favorite shopping platform and, of course, the general-purpose AI you use instead of that search engine to find dad-jokes because the search engine kept serving you links to suspiciously free electronics.
Breaking the definition down to its basic parts, consider the technology and ask:
- Is it machine-based? Remember, the AI Act contemplates human decision-making involved in AI system operation; indeed, in some instances, the AI Act requires human interventions. A human in the mix does not negate its possible classification as a regulated AI.
- Does it respond to inputs (i.e., data)? Odds are that it does.
- Does it generate “predictions, content, and recommendations” or make “decisions”? Really, is there anything software-related that does not fall into this category?
- Can those outputs influence physical or “virtual” environments? Emphasis on the word “can,” as this does not mean it must.
- Finally, does it exhibit adaptiveness after deployment? It doesn’t have to! The use of the word “may” in the definition indicates it is not required, although it is a strong indicator that something may qualify as an AI system.
- Bonus Question: Is our AI capable of performing a wide range of tasks?
If you conclude it is indeed AI, then you now have a bonus question: Is it general-purpose AI? Focus on its capabilities, not how you intend to use or market it. It can still qualify as a general-purpose AI even if you intend to only use or market it for a specific purpose, as it does not matter how “the model is placed on the market.” Article 3(63). We will dig deeper into this when we cover the second part of our framework, which asks what risk classification your specific use-case and AI system fall into, in our next article in this series.
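Taken together, the definitional elements above read like a checklist, and a first-pass screen can make that explicit. The Python sketch below is illustrative; the parameter names are our shorthand for the Article 3(1) and 3(63) elements, not official criteria, and a real assessment would look at the underlying facts rather than yes/no flags.

```python
def fits_ai_act_definition(
    machine_based: bool,
    responds_to_inputs: bool,
    generates_outputs: bool,                   # predictions, content, recommendations, or decisions
    outputs_can_influence_environments: bool,  # physical or virtual; "can", not "must"
    adaptive_after_deployment: bool = False,   # "may" exhibit adaptiveness, so not required
) -> bool:
    """First-pass screen against the Article 3(1) elements walked through above.
    Human involvement in operating the system does not take it out of scope, and
    adaptiveness is an indicator rather than a requirement."""
    return (
        machine_based
        and responds_to_inputs
        and generates_outputs
        and outputs_can_influence_environments
    )

def maybe_general_purpose_ai(performs_wide_range_of_tasks: bool) -> bool:
    """Bonus question: capability, not how the model is marketed or placed on the
    market, drives the general-purpose AI question (Article 3(63))."""
    return performs_wide_range_of_tasks

# Example: a resume-screening feature embedded in an HR platform
print(fits_ai_act_definition(True, True, True, True))  # True
```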
2. Do our AI-related operations intersect with the EU in any manner?
This is an intentionally vague question, as there are many ways in which an AI system or general-purpose AI model can reach the EU and thereby come within the scope of the AI Act.
For example, the AI Act automatically applies if you are based in the EU. It also automatically applies if you put the AI system on the EU market or “in service” there. But the AI Act as currently drafted also applies whenever mere outputs of the AI system are used in the EU or intended to be used there. This is even more expansive than the extra-territorial reach of the General Data Protection Regulation (GDPR) since it could incorporate AI systems that never formally entered the EU market (e.g., were not marketed, put into service, or otherwise put on the EU market).
There are open questions around how this extra-territorial component will ultimately be interpreted by EU regulators and the degree to which intent will be required (i.e., if one must intend for the AI output to be used in the EU for the Act to apply). Final language will be released any day now and will hopefully provide some clarity on this point.
In any event, this extraterritoriality raises critical questions that organizations should consider when evaluating if and how the AI Act applies to them. Do you intend to use any of the outputs within the EU? Do you merely forward the outputs to EU customers for potential downstream use? If so, then the AI Act arguably applies to you – even if you are a US-centric business that is not formally marketing your AI system or putting it on the EU market.
If you are marketing or offering AI-related services in some way in the EU (by offering the AI system directly to customers or offering AI-enhanced services or products, for example), then the AI Act more clearly and directly applies.
The AI Act also covers any activity in which you are using AI to support your operations whenever there is a nexus with the EU market, such as AI-assisted monitoring and management of EU employees. The covered AI can even be a feature embedded within a broader tool rather than a standalone system. To effectively gauge relevancy, remember to inventory all AI features, tools, and systems within your business operations.
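The territorial triggers can be screened in the same rough-and-ready way. Again, this is a hypothetical sketch with invented parameter names that summarizes the nexus points discussed above, not the AI Act's full applicability rules.

```python
def has_eu_nexus(
    established_in_eu: bool,
    placed_on_eu_market_or_put_into_service_in_eu: bool,
    outputs_used_or_intended_for_use_in_eu: bool,
    ai_used_to_monitor_or_manage_eu_workers: bool,
) -> bool:
    """Screen for the territorial triggers discussed above. Any one of these
    connections can bring an AI system or general-purpose model within the AI
    Act's reach, even for an organization based entirely outside the EU."""
    return any([
        established_in_eu,
        placed_on_eu_market_or_put_into_service_in_eu,
        outputs_used_or_intended_for_use_in_eu,
        ai_used_to_monitor_or_manage_eu_workers,
    ])

# Example: a US-based employer forwarding AI-generated reports to EU customers
print(has_eu_nexus(False, False, True, False))  # True
```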
Click Here for the Original Article
Conclusion
As HR and talent management professionals in highly regulated sectors, staying ahead of these developments is not just about compliance, but also about seizing opportunities to enhance organizational practices. Whether it's adapting to new privacy laws, understanding the implications of AI in the workplace, or leveraging reentry programs for a more diverse workforce, informed strategies are key.
To ensure your organization remains at the forefront of compliance and innovation, speak with a Cisive expert today. Let's navigate these complex regulations together, ensuring your talent management strategies are both compliant and competitive.