Navigating the complexities of compliance within highly regulated industries is increasingly challenging.
In 2025, organizations across industries must prepare for a wave of new compliance challenges driven by advancements in artificial intelligence (AI), privacy regulations, and anti-discrimination laws. From evolving employment laws regulating AI in decision-making processes to strengthened reproductive health privacy protections under HIPAA, the following compliance articles emphasize the importance of transparency, accountability, and consumer protection. This summary highlights key developments that will shape the regulatory landscape in the coming year and underscores the need for proactive compliance strategies.
FEDERAL UPDATES
Regulating Artificial Intelligence in Employment Decision-Making: What’s on the Horizon for 2025
Employment law in 2024 could aptly be summarized as the “Year of Artificial Intelligence Legislation.” Indeed, all but five states introduced new artificial intelligence (AI) legislation in 2024, with four of the five outliers simply not having 2024 legislative sessions. Texas was one such outlier state but is poised to join the majority when its legislature reconvenes in January 2025 and considers recently proposed legislation known as the Texas Responsible AI Governance Act (TRAIGA). As we transition into the new year, employers — particularly those operating in multiple states — need to be aware of current and proposed AI-related legislation in their jurisdictions to ensure they remain in compliance with the ever-evolving AI regulatory landscape.
AI regulation is a frequent topic of this blog because, even though there is no comprehensive federal legislation on the topic, the Equal Employment Opportunity Commission (EEOC) and the Department of Labor (DOL) have issued guidance documents and are strongly focused on ensuring human oversight of responsible AI utilization. Valid concerns abound, particularly in the use of AI tools for human resources decision-making, ranging from data privacy and algorithmic discrimination to job security and transparency.
Click Here for the Original Article
HIPAA Reproductive Privacy Rule Takes Effect Amid Legal and Political Uncertainties
As of December 23, health care providers, health plans, and health care clearinghouses (covered entities) and their business associates (collectively, regulated entities) must comply with new reproductive health care privacy protections under the Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. Enforcement will begin against the backdrop of a pending lawsuit challenging the validity of the new protections and an incoming Trump Administration with an uncertain enforcement posture.
The 2024 Final Rule
The US Department of Health and Human Services (HHS) Office for Civil Rights (OCR) adopted the privacy protections in a 2024 final rule earlier this year, which we discussed in a prior alert. The rule prohibits regulated entities from using or disclosing protected health information (PHI) to identify, investigate, or hold someone liable for seeking, obtaining, providing, or facilitating reproductive health care that was lawfully provided in the relevant circumstances (for example, an abortion in a state where abortion is legal). “Reproductive health care” is any health care that “affects the health of an individual in all matters relating to the reproductive system and to its functions and processes.” This may include, for example, abortion, contraception, fertility medicine, and certain gender-affirming care procedures.
Starting December 23, regulated entities must obtain a signed, written attestation that PHI will not be used or disclosed for prohibited purposes when both of the following criteria are present (a schematic sketch of this trigger follows the list):
- The regulated entity receives a request for PHI potentially related to reproductive health care, and
- The request relates to health oversight activities, judicial and administrative proceedings, law enforcement purposes, or disclosures to coroners and medical examiners.
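For teams building PHI request-intake workflows, this trigger can be pictured as a simple two-part gate. The Python sketch below is purely illustrative; the purpose categories and function names are shorthand invented here, not part of any HHS tooling, and no substitute for a legal determination.

```python
# Hypothetical sketch of the attestation trigger under the 2024 final rule.
# Category labels are invented shorthand for the rule's four listed purposes.
COVERED_PURPOSES = {
    "health_oversight",
    "judicial_or_administrative_proceeding",
    "law_enforcement",
    "coroner_or_medical_examiner",
}

def attestation_required(potentially_reproductive: bool, purpose: str) -> bool:
    """An attestation is needed only when both criteria are met: the request
    potentially relates to reproductive health care AND it is for one of
    the four covered purposes."""
    return potentially_reproductive and purpose in COVERED_PURPOSES

# Example from the text: a subpoena from a state licensing agency
# investigating a physician's performance of an abortion.
assert attestation_required(True, "health_oversight")
```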
For instance, if a regulated entity receives a subpoena for medical records from a state licensing agency investigating a physician’s performance of an abortion, the attestation requirement would apply. Failure to secure this attestation could result in a privacy breach, potentially leading to administrative penalties or even criminal sanctions.
HHS Releases Model Attestation to Aid Compliance
To assist regulated entities in their compliance efforts, HHS released a model attestation after publishing the 2024 final rule. The document includes fields where a party must provide information about a request for PHI potentially related to reproductive health care for one of the above-noted purposes, including information about the intended recipient of the PHI, the requester, and the individual whose PHI is requested. The model attestation also includes a statement that the PHI will not be used or disclosed for a prohibited purpose. While not mandatory, use of the model attestation may simplify compliance and reduce unnecessary variation.
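The fields HHS describes map naturally onto a simple record. The dataclass below is an illustrative model of those fields, assuming invented names; it is not the actual schema of the HHS model attestation.

```python
from dataclasses import dataclass

@dataclass
class ReproductiveHealthAttestation:
    # Field names are illustrative stand-ins for the model attestation's fields.
    intended_recipient: str        # who will receive the PHI
    requester: str                 # the party making the request
    subject_individual: str        # the individual whose PHI is requested
    purpose: str                   # one of the four covered purposes
    non_prohibited_use_affirmed: bool = False  # statement that the PHI will not
                                               # be used for a prohibited purpose
    signed: bool = False

    def is_facially_complete(self) -> bool:
        """A disclosure should not proceed unless every field is populated,
        the non-prohibited-use statement is affirmed, and it is signed."""
        return all([self.intended_recipient, self.requester,
                    self.subject_individual, self.purpose,
                    self.non_prohibited_use_affirmed, self.signed])
```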
Even if a regulated entity adopts the model attestation for use, it may encounter challenges complying with the attestation requirement, which is unprecedented under the HIPAA Privacy Rule. At the outset, the entity must determine whether a request for PHI is “potentially related to reproductive health care.” Given how broadly the rule defines “reproductive health care,” some regulated entities may struggle to accurately assess whether a request potentially relates to reproductive health care, leading to inconsistent application of the rule and potential inadvertent disclosures. These challenges could result in administrative burdens and delays in processing requests, even when the individual whose PHI is involved has not sought or obtained reproductive health care.
Additionally, a regulated entity must consider whether a request for PHI potentially related to reproductive health care is for a prohibited purpose. If the entity has “actual knowledge that material information in the attestation is false” or if a “reasonable” entity “in the same position would not believe that the attestation is true,” the attestation is invalid. This may require analyzing whether reproductive health care was lawfully rendered.
In making these determinations, some regulated entities may face resistance from requesting parties reluctant to provide the required attestation, which could complicate compliance efforts. Particularly with requests from law enforcement agencies, a regulated entity may be placed in the difficult position of choosing between adhering to a court order and risking an impermissible disclosure of PHI, potentially affecting investigations into serious issues such as rape, incest, and domestic violence.
Click Here for the Original Article
The Intersection of Agentic AI and Emerging Legal Frameworks
The evolution of artificial intelligence (AI) has introduced systems capable of making autonomous decisions, known as agentic AI. While generative AI essentially “creates” – providing content such as text, images, etc. – agentic AI “does” – performing tasks such as searching for and ordering products online. These systems are beginning to emerge in public-facing applications, including Salesforce’s Agentforce and Google’s Gemini 2.0.
As agentic AI continues to proliferate, legal systems must adapt to address the risks and harness the benefits of AI systems that are able to think more logically and take action rather than merely guide or create. Initiatives such as the California Consumer Privacy Act (CCPA) and its proposed modifications for automated decision-making technologies (ADMT) highlight the ways in which regulators are working to ensure privacy and accountability in the AI-driven era.
Practical Uses of Agentic AI
Agentic systems pursue complex goals using sophisticated reasoning with limited human supervision. Unlike traditional generative AI systems that respond to prompts, agentic AI can execute tasks using third parties as a user’s “agent.” For example, when prompted to book a flight, an agentic AI will access flight databases, search for available flights based on the user’s preferences and budget, evaluate trade-offs in price and travel time, and finally book the flight by interacting with the airline’s booking system, inputting all necessary information for the passenger.
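Conceptually, that flight-booking flow is a plan, evaluate, act loop. Below is a minimal Python schematic of the loop with every external integration stubbed out; the function names and data shapes are invented for illustration and do not reflect any vendor's API.

```python
# Schematic agentic loop for the flight-booking example.
# Every function below is a stub standing in for a real integration.

def search_flights(origin, destination):
    # Stand-in for querying a flight database.
    return [{"price": 420, "hours": 6}, {"price": 310, "hours": 11}]

def score(flight, prefs):
    # Evaluate trade-offs between price and travel time per user preferences.
    return -(flight["price"] * prefs["price_weight"]
             + flight["hours"] * prefs["time_weight"])

def book(flight, passenger):
    # Stand-in for interacting with the airline's booking system.
    return {"confirmed": True, "flight": flight, "passenger": passenger}

def book_best_flight(origin, destination, budget, prefs, passenger):
    options = [f for f in search_flights(origin, destination)
               if f["price"] <= budget]                  # respect the budget
    best = max(options, key=lambda f: score(f, prefs))   # weigh trade-offs
    return book(best, passenger)                         # act on the decision

print(book_best_flight("JFK", "SFO", 500,
                       {"price_weight": 1.0, "time_weight": 20.0}, "A. Traveler"))
```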
Further applications in various industries include health care, where it can assist in disease diagnosis by analyzing patient data and medical imaging, and finance, by enabling fraud detection and credit risk assessment through advanced data analytics. Retail companies can also utilize agentic AI to personalize shopping experiences, recommending products based on user behavior.
On a consumer level, portable agentic AI gadgets such as Rabbit R1 have introduced consumers to the early stages of autonomous decision-making. The device demonstrated how agentic AI can navigate third-party apps and perform tasks such as ordering food or booking rides via voice commands. On a small scale such as this, misunderstood prompts have minor consequences to users, perhaps leading to a wrong delivery order or sending a rideshare driver to the wrong location. However, when applied to a more complex use-case scenario, the ramifications of a misunderstood prompt are magnified.
Understanding Agentic AI and its Governance
While businesses will likely be able to streamline workflows and save resources, the legal and ethical implications of deploying such systems in consumer-facing roles demand careful consideration. For example, the integration of agentic AI into legal contract review, modification, and (someday soon) negotiation raises important questions and exposes businesses and consumers to the heightened risks of automating legally binding documents without human supervision and nuanced judgment.
ADMT and Emerging Regulatory Frameworks
The California Privacy Protection Agency (CPPA) has proposed a set of national standards for regulating Automated-Decision Making Technology (ADMT) to address growing concerns in agentic AI. Defined under the California Consumer Privacy Act (CCPA), ADMT includes “any technology that processes personal information and uses a computation to execute a decision, replace human decision-making or substantially facilitate human decision-making.”[1] Importantly, the definition of “substantially facilitate human decision-making” specifies that ADMT includes instances where its output serves as a key factor in a human’s decision-making process.
However, the CCPA excludes from this definition certain technologies that do not independently execute decisions or significantly influence human decision-making. Examples include basic tools like spellcheckers or calculators, which organize or compute data without making autonomous decisions.[2] These distinctions focus the regulation on systems (which the rules term ADMT) capable of independently executing decisions or heavily influencing human decision-making.
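Read together, the definition and its exclusions amount to a two-part test: does the technology process personal information, and does it execute, replace, or substantially facilitate a human decision? A minimal sketch of that reading follows, with the caveat that the real regulatory analysis is qualitative, not Boolean.

```python
def is_admt(processes_personal_info: bool,
            executes_decision: bool,
            replaces_human_decision: bool,
            output_is_key_factor: bool) -> bool:
    """Illustrative reading of the proposed ADMT definition. 'Substantially
    facilitate' is modeled here as the output serving as a key factor in a
    human's decision. Not legal advice or official CPPA logic."""
    if not processes_personal_info:
        return False
    return executes_decision or replaces_human_decision or output_is_key_factor

# A spellchecker computes over text but makes no autonomous decision: excluded.
assert not is_admt(True, False, False, False)
# A screening tool whose score is a key factor in a hiring decision: covered.
assert is_admt(True, False, False, True)
```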
Click Here for the Original Article
STATE, CITY, COUNTY AND MUNICIPAL UPDATES
Are You Ready for 2025? New State Privacy Laws to Take Effect Beginning on January 1
2025 is set to be another important year for US state privacy laws, with five new laws effective in January and three more coming into effect through October. New laws in Delaware, Iowa, Nebraska, and New Hampshire will go into effect on January 1, 2025, while New Jersey’s law will follow on January 15, 2025. Below, we detail these laws effective in January. Make sure to check back here in 2025 for more information on the laws effective in July and October.
Below are the effective dates, thresholds, and key aspects of these new laws.
Delaware Personal Data Privacy Act – Effective January 1, 2025
The Delaware law applies to entities that conduct business in Delaware or produce products or services targeted to Delaware residents and, during the previous calendar year, did at least one of the following:
- Controlled or processed the personal data of 35,000 or more Delaware residents (excluding personal data controlled or processed solely for the purpose of completing a payment transaction), or
- Controlled or processed the personal data of 10,000 or more Delaware residents and derived more than 20% of their gross revenue from the sale of personal data.
Delaware’s law differs from most of the comprehensive state privacy laws in effect now or through January 2025 in that it applies to nonprofit organizations and educational institutions. The law also has a unique definition of “sensitive data,” which includes pregnancy status and nonbinary identity.
Iowa Consumer Data Protection Act – Effective January 1, 2025
The Iowa law applies to organizations that conduct business in Iowa or produce products or services that are targeted to Iowa residents and, during a calendar year, do at least one of the following:
- Control or process the personal data of 100,000 or more Iowa residents, or
- Control or process the personal data of 25,000 or more Iowa residents and derive more than 50% of their gross revenue from the sale of personal data.
Iowa’s law differs from most comprehensive state privacy laws in that it does not provide consumers with a right to correct their personal data. Businesses subject to the Iowa law are not required to conduct risk assessments for activities that pose a significant risk of harm to consumers, which is a requirement in other states.
Nebraska Data Privacy Act – Effective January 1, 2025
The Nebraska law applies to organizations that:
- Conduct business in Nebraska or produce products or services consumed by Nebraska residents,
- Process or engage in the sale of personal data, and
- Are not small businesses (as defined by the US Small Business Administration).
Nebraska follows Texas by excluding small businesses from the scope of its privacy law. On the other hand, Nebraska’s law includes a broad definition of “sale,” similar to definitions in the laws of California, Connecticut, Delaware, and New Jersey: the exchange of personal data for “monetary or other valuable consideration.”
New Hampshire Data Privacy Act – Effective January 1, 2025
The New Hampshire law applies to organizations that conduct business in New Hampshire or produce products or services that are targeted to New Hampshire residents and, during a one-year period, do at least one of the following:
- Control or process the personal data of 35,000 or more unique New Hampshire residents (excluding personal data controlled or processed solely for the purpose of completing a payment transaction), or
- Control or process the personal data of 10,000 or more unique New Hampshire residents and derive more than 25% of their gross revenue from the sale of personal data.
The New Hampshire law requires businesses to allow consumers to opt out of the processing of their personal data through universal opt-out mechanisms. The law also requires businesses to obtain a consumer’s consent before processing their sensitive data. Unlike other state privacy laws, the law requires the New Hampshire attorney general to provide businesses a 60-day cure period for violations through December 31, 2025; after that date, the attorney general will have continuing discretion to offer the 60-day cure period.
New Jersey Data Privacy Act – Effective January 15, 2025
The New Jersey law applies to organizations that conduct business in New Jersey or produce products or services that are targeted to New Jersey residents and, during a calendar year, do at least one of the following:
- Control or process the personal data of 100,000 or more New Jersey residents (excluding personal data processed solely for the purpose of completing a payment transaction), or
- Control or process the personal data of 25,000 or more New Jersey residents and derive revenue, or receive a discount on the price of any goods or services, from the sale of personal data.
New Jersey’s law defines “sensitive data” more broadly than other comprehensive state privacy laws, including within the definition financial information such as account numbers, login details, and PINs. Unlike under most comprehensive state privacy laws, nonprofits are largely not exempt from the New Jersey law and must comply with it if they meet the other threshold requirements. Starting July 15, 2025, the law will require businesses to allow consumers to opt out of the processing of their personal data through a user-selected universal opt-out mechanism, to be clarified in rules issued by the New Jersey Division of Consumer Affairs in the Department of Law and Public Safety.
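Because the January thresholds differ mainly in resident counts and revenue-from-sale tests, multistate compliance teams often restate them as data. The mapping below is an illustrative summary of the thresholds described above; a real applicability analysis turns on exemptions, entity types, and carve-outs that a simple lookup cannot capture.

```python
# Illustrative restatement of the January 2025 applicability thresholds.
# "residents": personal data of at least N state residents controlled/processed;
# "residents_with_sale": the lower count paired with a revenue-from-sale test.
THRESHOLDS = {
    "Delaware":      {"residents": 35_000,  "residents_with_sale": 10_000, "sale_revenue_share": 0.20},
    "Iowa":          {"residents": 100_000, "residents_with_sale": 25_000, "sale_revenue_share": 0.50},
    "New Hampshire": {"residents": 35_000,  "residents_with_sale": 10_000, "sale_revenue_share": 0.25},
    # New Jersey's second prong is satisfied by any revenue (or a discount)
    # from the sale of personal data; the discount prong is not modeled here.
    "New Jersey":    {"residents": 100_000, "residents_with_sale": 25_000, "sale_revenue_share": 0.0},
    # Nebraska is omitted: it has no volume threshold and instead covers
    # non-small businesses that process or sell personal data.
}

def meets_threshold(state: str, residents: int, sale_revenue_share: float) -> bool:
    t = THRESHOLDS[state]
    return (residents >= t["residents"]
            or (residents >= t["residents_with_sale"]
                and sale_revenue_share > t["sale_revenue_share"]))

print(meets_threshold("Delaware", 12_000, 0.25))  # True via the second prong
```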
Future Forecast
As new state laws continue to come into effect, it is crucial for businesses to be aware of the quickly changing legal landscape and to invest in robust compliance programs.
Click Here for the Original Article
California 2025: The Next Wave of New Employer Compliance Obligations Is About to Hit
The California Legislature and Governor Gavin Newsom have again enacted a number of laws that will affect California employers.
This Insight summarizes the significant changes to California employment laws taking effect in 2025. Unless otherwise indicated, the laws discussed below will take effect on January 1, 2025.
Intersectionality Protections and Definition of “Race” in Anti-Discrimination Laws Clarified
In a nationwide first, as we previously explained, California now explicitly recognizes the concept of “intersectionality” within the state’s anti-discrimination laws. The legislation (SB 1137) clarifies that the Unruh Civil Rights Act, the Education Code, and the Fair Employment and Housing Act (FEHA) all prohibit discrimination based not only on individual protected characteristics but also on the basis of the intersection or any combination of those characteristics.
Another bill, AB 1815, further amends FEHA to clarify the definition of “race” by removing the word “historically” from the provision that race is inclusive of traits associated with race, including hair texture and protective hairstyles. By way of background, California passed the Create a Respectful and Open Workplace for Natural Hair Act, known as the CROWN Act, in 2019, codifying that race discrimination includes discrimination based on hairstyles, such as braids, locs, and twists. AB 1815 removes the word “historically” from the phrase “traits historically associated” because it was vague and confusing in application.
Note that these FEHA amendments will apply both retroactively and prospectively because they are declaratory of existing law.
Local Enforcement of Anti-Discrimination Laws
Existing state law expressly authorizes only the California Civil Rights Department (CRD) to enforce complaints of employment discrimination. SB 1340 will change that by authorizing cities, counties, and other political subdivisions to enforce their own employment discrimination laws as long as those laws are at least as protective as the state’s laws.
Under SB 1340, any city, county, or political subdivision can enforce its own laws prohibiting discrimination in employment if the local enforcement:
- concerns an employment complaint filed with the CRD,
- occurs after the CRD has issued a right-to-sue notice,
- commences before the time to file a civil action expires, and
- is pursuant to a local law that is at least as protective as FEHA.
Changes to California Paid Sick Time and Leave Laws
Expanded Paid “Safe” Leave
AB 2499 amends the Healthy Workplaces Healthy Families Act, the state’s paid sick leave law, to expand what is often referred to as “safe time” or “safe leave” by:
- permitting employees to take time off in connection with a “qualifying act of violence,” which includes domestic violence, sexual assault, stalking, or acts involving bodily injury or death, a dangerous weapon, or a threat of physical injury or death (previously, paid sick leave for safe time was limited to victims of domestic violence, sexual assault, and stalking);
- making paid safe leave available not only to employees who are victims of qualifying acts of violence but also to employees whose family members are victims of qualifying acts of violence; and
- providing that the definition of “family member” for purposes of this law is the definition stated by the California Family Rights Act (CFRA), which includes a child, parent, grandparent, grandchild, sibling, spouse, domestic partner, or “designated person” (any individual related by blood or whose association with the employee is the equivalent of a family relationship and identified by the employee at the time the employee requests leave).
These enhanced safe leave purposes for using paid sick time do not increase the total amount of paid sick time that employees have available. We wrote about the 2024 updates to these requirements here.
Expanded Unpaid Safe Leave Protections
In addition to enhancing the “safe” purposes for which paid sick leave may be used, AB 2499 also expands the rights of employees to take unpaid leave for reasons related to qualifying acts of violence. Existing law allows employees to take unpaid time off to appear as a witness pursuant to a subpoena or court order or to obtain relief, such as a restraining order, to protect themselves or their children.
Employers with at least 25 employees must now allow unpaid leave for the purposes defined in the statute not only for employees who are victims of qualifying acts of violence but also for employees whose covered family members are victims.
The purposes for which the employee may take leave are:
- To obtain or attempt to obtain any relief, including, but not limited to, a temporary restraining order, restraining order, or other injunctive relief, to help ensure the health, safety, or welfare of the employee or family member.
- To seek, obtain, or assist a family member to seek or obtain, medical attention for or to recover from injuries caused by a qualifying act of violence.
- To seek, obtain, or assist a family member to seek or obtain services from a domestic violence shelter, program, rape crisis center, or victim services organization or agency as a result of a qualifying act of violence.
- To seek, obtain, or assist a family member to seek or obtain psychological counseling or mental health services related to an experience of a qualifying act of violence.
- To participate in safety planning or take other actions to increase safety from future qualifying acts of violence.
- To relocate or engage in the process of securing a new residence due to the qualifying act of violence, including, but not limited to, securing temporary or permanent housing or enrolling children in a new school or childcare.
- To provide care to a family member who is recovering from injuries caused by a qualifying act of violence.
- To seek, obtain, or assist a family member to seek or obtain civil or criminal legal services in relation to the qualifying act of violence.
- To prepare for, participate in, or attend any civil, administrative, or criminal legal proceeding related to the qualifying act of violence.
- To seek, obtain, or provide childcare or care to a care-dependent adult if the childcare or care is necessary to ensure the safety of the child or dependent adult as a result of the qualifying act of violence.
The length of the permissible unpaid leave may vary. Employees taking leave as a victim of a qualifying act of violence for any of the applicable qualifying reasons above may be eligible for up to 12 weeks, to run concurrently with leave under the federal Family and Medical Leave Act and/or CFRA, if applicable. Employees taking leave in connection with a covered family member who is a victim may be limited to 10 days, up to five days of which can be for purposes of relocation.
Additionally, while this leave is generally unpaid, as set forth above, employees may use any available paid sick time during the leave.
Employees may also request reasonable accommodations for the safety of the employee while at work. Reasonable accommodations may include transfer, reassignment, modified schedule, changed work telephone, permission to carry a telephone at work, changed workstation, installed lock, assistance in documenting a qualifying act of violence that occurs in the workplace, an implemented safety procedure, or another adjustment to a job structure, workplace facility, or work requirement.
AB 2499 also moves these provisions, along with jury and witness leave protections, from the Labor Code to the Government Code (FEHA) to allow for enforcement by the CRD rather than the Division of Labor Standards Enforcement.
The CRD will post a form entitled “Survivors of Violence and Family Members of Victims Right to Leave and Accommodations” on its website that employers will need to provide: (i) to new employees upon hire, (ii) to all employees annually, (iii) at any time upon request, and (iv) any time an employee informs an employer that the employee or the employee’s family member is a victim.
For more, click the link to the original article below.
Click Here for the Original Article
INTERNATIONAL UPDATES
The European Data Protection Board Releases Opinion on Artificial Intelligence
On December 18, 2024, the European Data Protection Board (EDPB) issued an opinion on personal data use in artificial intelligence (AI) in response to the Irish Data Protection Commission's request for more clarity regarding how the EU General Data Protection Regulation (GDPR) applies to AI.
The EDPB's opinion offers a robust framework for the ethical use and development of AI. The EDPB outlined that AI developers can use legitimate interest as a legal basis for model training; however, Data Protection Authorities (DPAs) should apply a three-step test (sketched in code after the list):
- identify if there is a legitimate interest by the controller;
- determine if the processing is necessary; and
- balance the interests or fundamental rights and freedoms of data subjects with the legitimate interest of AI use.
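Schematically, the three steps chain as a gate that fails closed. The Boolean encoding below is an assumption made for illustration; in practice each step is a qualitative legal assessment, not a true/false flag.

```python
def legitimate_interest_basis_ok(has_legitimate_interest: bool,
                                 processing_is_necessary: bool,
                                 balance_favors_controller: bool) -> bool:
    """Illustrative encoding of the EDPB's three-step test for relying on
    legitimate interest to train AI models: each step must pass, and the
    final step balances data subjects' rights and freedoms against the
    controller's interest."""
    return (has_legitimate_interest
            and processing_is_necessary
            and balance_favors_controller)

# If the processing is not necessary, the basis fails regardless of interest.
assert not legitimate_interest_basis_ok(True, False, True)
```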
The EDPB stressed that AI models developed with unlawfully processed personal data face significant legal scrutiny and:
- Controllers must address and rectify any non-compliance during development; and
- DPAs retain the discretion to enforce corrective measures, such as retraining the model or deleting unlawfully processed data.
The opinion supports the use of AI in threat detection and cybersecurity under legitimate interest. The EDPB emphasizes the need for careful risk assessments and strict adherence to GDPR principles. In addition, the EDPB reiterates the importance of data minimization and transparency in AI model lifecycle management. It highlights governance practices such as regular audits, training and documentation to ensure compliance. The EDPB also stresses the need for robust anonymization techniques.
The EDPB is preparing additional guidelines to address anonymization, web scraping and automated decision-making. The EDPB reinforces that AI models must adhere to GDPR principles not only for compliance but also to foster trust and transparency in their AI-driven initiatives.
Click Here for the Original Article
How Do Your Internal Dispute Resolution Processes Stack Up?
Financial firms are required to maintain clear internal dispute resolution (IDR) processes to allow customers to seek redress where they are dissatisfied with the firm’s products or services. Access to fair, timely and effective IDR is an important tenet of consumer protection. Financial firms are required to acknowledge the receipt of a customer’s complaint within 24 hours and resolve standard complaints within 30 days.
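Those two timeframes lend themselves to mechanical deadline tracking. The sketch below applies plain calendar arithmetic and is illustrative only; the actual timing rules in ASIC's guidance (RG271) include qualifications this ignores.

```python
from datetime import datetime, timedelta

def idr_deadlines(received: datetime) -> dict:
    """Illustrative deadlines for the two IDR timeframes noted above:
    acknowledge within 24 hours, resolve standard complaints within
    30 days. Simple calendar arithmetic; real rules carve out exceptions."""
    return {
        "acknowledge_by": received + timedelta(hours=24),
        "resolve_by": received + timedelta(days=30),
    }

print(idr_deadlines(datetime(2025, 7, 1, 9, 0)))
```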
Compliance Concerns
Financial firms are required to lodge bi-annual IDR reports with the Australian Securities and Investments Commission (ASIC) detailing information about complaints received during the relevant period. ASIC’s recently published Report 801 analyses the data collected under this requirement. Data presented in the Report will provide a baseline for trend reporting in future IDR publications.
In the Report, ASIC expresses its concerns about the quality and accuracy of financial firms’ IDR data and observes multiple issues with IDR reporting including:
- large variations in complaint numbers between comparable firms;
- overuse of “other” or “unknown” categories;
- reporting gaps related to products or issues; and
- a high number of firms having no complaints to report.
ASIC expects all financial firms to comply with their IDR obligations, including the enforceable provisions contained in ASIC’s Regulatory Guide relating to IDR, RG271.
IDR Insights
Of the 7,051 financial firms which submitted IDR reports for the period 1 July 2023 to 30 June 2024, 5,035 firms disclosed having received no complaints. The number of such ‘nil submissions’ was much higher than ASIC had expected.
Complaint Categories
An analysis of the IDR data received revealed that over 4.7 million complaints were lodged during the period, with 2,420,611 of those relating to banking and finance products. General insurance complaints were the second highest category at 1,561,824 while the lowest complaints category was life insurance products with only 54,896 complaints.
The three products that attracted the most complaints were general insurance products, credit products, and deposit-taking products, while the most commonly reported issues related to service, charges, and transactions.
Turnaround
Over three quarters of complaints received were resolved within one day. Same-day resolution was highest for general insurance (76%), closely followed by banking and finance (71%). In contrast, life insurance complaints had the lowest same-day turnaround at 51%.
Outcomes
The most common complaint outcome involved an explanation or apology only, or no remedy. Of the 4.7 million complaints received by financial firms, only 623,555 (13%) resulted in the complainant receiving a monetary remedy. The form of such monetary remedies varied and included compensation, waiver of fees or charges, or a reduction in ongoing fees or charges.
What’s next?
Financial firms are on notice that ASIC will continue to closely monitor firms’ compliance with IDR obligations and take action where necessary for compliance failures. In 2025, ASIC will be publishing firm-level data in order to promote transparency in the financial services industry. In doing so, ASIC seeks to encourage financial firms to foster ‘a positive complaints management culture that delivers quality outcomes for consumers’.
Click Here for the Original Article
Conclusion
The regulatory landscape for 2025 demands that businesses adapt to a complex compliance environment shaped by advancements in AI, privacy expectations, and employment law changes. Staying ahead of these changes requires a robust compliance strategy that ensures adherence to both federal and state regulations. To safeguard your organization and build trust with stakeholders, consult with a Cisive expert today to strengthen your compliance background screening processes and mitigate potential risks.