Why H.R. 3831, the AI Disclosure Act of 2023, Is a Perfect Example of Bad AI Legislation
H.R. 3831, the AI Disclosure Act of 2023, introduced by Representative Torres on June 5, 2023, aims to mandate that generative AI disclose that its output has been generated by AI. While the bill’s intent is clear—requiring AI-generated content to carry a disclaimer—it falls short in several critical areas, making it a perfect example of bad AI legislation among the many flawed AI bills introduced by lawmakers. (See the end of this blog post for our examples of other bad AI legislation.)
1. Constitutional Alignment
The AI Disclosure Act raises significant concerns about constitutional alignment, particularly regarding free speech and privacy rights. The bill mandates a broad and compulsory disclaimer on AI-generated content: “Disclaimer: this output has been generated by artificial intelligence” (H.R. 3831, Sec. 2(a)). This blanket requirement could infringe on First Amendment rights by compelling speech without sufficient justification. Additionally, the lack of clear guidelines on how this disclaimer interacts with existing privacy protections leaves room for legal challenges.
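To make the blanket nature of this mandate concrete, here is a minimal sketch, in Python, of what compliance with Sec. 2(a) might look like. The generate_text function is a hypothetical stand-in for any generative model call, not anything defined in the bill or in a real API; the point the sketch illustrates is that the statute attaches the same disclaimer to every output, regardless of context:

```python
# A minimal, hypothetical sketch of what compliance with Sec. 2(a) might
# look like. generate_text is our own placeholder for a generative model
# call; it is an assumption for illustration, not part of the bill or of
# any real API.

DISCLAIMER = "Disclaimer: this output has been generated by artificial intelligence."

def generate_text(prompt: str) -> str:
    # Placeholder model backend; a canned string keeps the sketch
    # self-contained and runnable.
    return f"(model output for: {prompt!r})"

def generate_with_disclaimer(prompt: str) -> str:
    # The bill draws no distinction between contexts: the same disclaimer
    # is prepended whether the output is a haiku, a news summary, or a
    # private draft email.
    return f"{DISCLAIMER}\n\n{generate_text(prompt)}"

if __name__ == "__main__":
    print(generate_with_disclaimer("Write a haiku about autumn."))
```

Nothing in the bill’s text distinguishes the haiku above from a fabricated news article; both would carry the identical disclaimer, which is precisely the lack of nuance discussed under “Clear Purpose” below.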
2. Clear Purpose
While the bill’s purpose is to inform the public when content is AI-generated, it lacks clarity in defining the specific problem it seeks to address. The broad application of the disclaimer does not differentiate between various contexts where AI is used, such as artistic creation versus factual reporting. This lack of nuance undermines the effectiveness of the legislation, making it more of a blanket regulation than a targeted solution.
3. Interoperability and Collaboration
The AI Disclosure Act is a federal mandate enforced by the Federal Trade Commission (FTC), yet it does not promote collaboration with state and local governments or provide a framework for interoperability of AI systems across different jurisdictions (H.R. 3831, Sec. 2(b)). This could lead to a fragmented approach to AI regulation, where inconsistent enforcement across regions creates confusion and reduces the overall effectiveness of the law.
4. Transparency and Accountability
Although the bill mandates transparency by requiring AI-generated content to carry a disclaimer, it does not establish comprehensive guidelines for transparency in AI development and deployment. The enforcement powers granted to the FTC focus solely on ensuring compliance with the disclaimer requirement, without addressing broader issues of accountability for AI-related actions and decisions (H.R. 3831, Sec. 2(b)(2)).
5. Ethical Considerations
The AI Disclosure Act fails to incorporate ethical standards that address fairness, nondiscrimination, and privacy. By focusing narrowly on disclosure, the bill overlooks the need to address biases in AI systems and ensure equitable outcomes. This oversight could result in AI technologies that perpetuate existing societal inequalities, particularly if the disclaimer requirement is applied unevenly across different industries and communities.
6. Public Engagement and Input
The process of drafting the AI Disclosure Act does not appear to have included mechanisms for public consultation or stakeholder input. This lack of engagement is a missed opportunity to incorporate diverse perspectives and ensure that the legislation reflects the concerns and needs of the community. Without public input, the bill risks being out of touch with the realities faced by those most affected by AI technologies.
7. Data Protection and Privacy
Data protection is a critical aspect of AI legislation, yet the AI Disclosure Act does not address this issue adequately. The bill’s focus on content disclaimers does not include provisions for data protection measures related to AI-generated content or the data used to train AI systems. This omission leaves significant gaps in the regulatory framework, potentially exposing individuals to privacy violations.
8. Compliance and Enforcement
The enforcement mechanism for the AI Disclosure Act is centered on the FTC, which is tasked with treating violations of the disclaimer requirement as unfair or deceptive acts (H.R. 3831, Sec. 2(b)(1)). However, the bill does not outline clear compliance requirements beyond the disclaimer, nor does it establish robust enforcement measures for noncompliance. This lack of detail weakens the legislation’s ability to ensure meaningful oversight and accountability.
9. Adaptability and Future-Proofing
AI technologies are evolving rapidly, and legislation must be adaptable to keep pace with these advancements. Unfortunately, the AI Disclosure Act lacks provisions for regular reviews and updates, making it vulnerable to becoming obsolete as AI continues to develop. Without adaptability, the legislation may fail to address new challenges and opportunities that arise in the AI landscape.
10. Risk Assessment and Management
The AI Disclosure Act does not include a framework for assessing and managing the risks associated with AI technologies. By focusing solely on disclosure, the bill overlooks the broader risks that AI poses to society, such as the potential for misuse or unintended consequences. A more comprehensive approach would include strategies for identifying and mitigating these risks.
11. Education and Training
Effective AI legislation should promote education and training for policymakers, businesses, and the public to ensure a thorough understanding of AI technologies. The AI Disclosure Act, however, does not address this need. Without initiatives to educate stakeholders, the legislation may be difficult to implement effectively and could lead to misunderstandings and misuse.
12. International Standards and Cooperation
AI is a global issue, and aligning U.S. legislation with international standards is crucial for maintaining competitiveness and ensuring ethical practices. The AI Disclosure Act does not encourage international cooperation on AI governance, nor does it align with international AI standards. This isolationist approach could hinder the U.S. from participating in and shaping global AI policies.
13. Economic Impact
The economic implications of the AI Disclosure Act are not thoroughly considered. The bill’s broad disclosure requirements could place an undue burden on businesses, particularly startups and small enterprises, without providing clear benefits. This could stifle innovation and reduce the competitiveness of U.S. companies in the global AI market.
14. Whistleblower Protections
Whistleblower protections are essential for encouraging the reporting of unethical or illegal AI practices. However, the AI Disclosure Act does not establish clear and enforceable whistleblower protection measures. Without these safeguards, individuals who expose AI-related wrongdoing may face retaliation, which could deter others from coming forward and allow harmful practices to continue unchecked.
15. Oversight and Review
Finally, the AI Disclosure Act lacks provisions for independent oversight and regular review. The bill does not establish an oversight body to monitor its implementation and impact, nor does it mandate regular audits to assess its effectiveness. This absence of oversight could lead to unchecked abuses of power and a lack of accountability in the AI space.
Summary
The H.R.3831 – AI Disclosure Act of 2023, despite its well-intentioned goal of promoting transparency in AI-generated content, is a deeply flawed piece of legislation. It fails to align with constitutional principles, lacks a clear and targeted purpose, and does not promote collaboration or adaptability. The bill’s narrow focus on disclaimers overlooks critical issues such as ethical considerations, data protection, and public engagement. To ensure that AI legislation is effective, comprehensive, and aligned with societal values, lawmakers must move beyond the simplistic approach of the AI Disclosure Act and craft laws that address the full spectrum of challenges and opportunities presented by AI technologies.
Here are a series of scenarios where the AI Disclosure Act of 2023 (H.R. 3831) could potentially fail to address critical issues related to AI transparency and disclosure:
Scenario 1: AI in Healthcare Decision-Making
Situation: A hospital uses an AI system to assist doctors in diagnosing medical conditions and recommending treatment plans. Patients receive diagnoses and treatment suggestions without being informed that AI was involved in the decision-making process.
Failure Point: The AI Disclosure Act of 2023 focuses primarily on generative AI and content creation, leaving a gap in industries like healthcare. As a result, patients may not know that an AI system influenced their medical treatment, leading to concerns about transparency, accountability, and trust in healthcare.
Scenario 2: AI in Financial Services
Situation: A bank uses AI algorithms to evaluate loan applications and determine interest rates. The bank does not disclose to customers that their loan approval and terms were determined by an AI system.
Failure Point: Since the AI Disclosure Act of 2023 does not explicitly cover AI systems in financial services, it fails to require banks to inform customers about the AI-driven decisions affecting their financial lives. This lack of disclosure could lead to biases, unfair lending practices, and a lack of recourse for customers who feel they were unfairly treated by the AI system.
Scenario 3: AI in Law Enforcement
Situation: Law enforcement agencies use AI for predictive policing, identifying potential crime hotspots and individuals likely to commit crimes. Community members are not informed about the AI’s role in policing strategies and decisions.
Failure Point: The AI Disclosure Act of 2023 is not designed to address AI use in law enforcement, leading to a lack of transparency in how AI-driven predictions influence policing practices. This could result in civil liberties being compromised, particularly in communities disproportionately affected by biased AI algorithms.
Scenario 4: AI in Employment Decisions
Situation: A company uses AI to screen job applications, filter candidates, and make hiring decisions. Job applicants are unaware that an AI system was responsible for evaluating their applications and determining their suitability for the position.
Failure Point: The AI Disclosure Act of 2023 does not extend to AI systems used in human resources, meaning job applicants are left in the dark about the AI’s role in their employment prospects. This lack of disclosure could perpetuate biases in hiring processes and reduce trust in AI-driven HR tools.
Scenario 5: AI in Social Media and Content Moderation
Situation: A social media platform uses AI to moderate content, automatically flagging and removing posts that violate community guidelines. Users are not informed that AI is responsible for these actions, nor do they have a clear way to appeal decisions made by the AI.
Failure Point: While the AI Disclosure Act of 2023 addresses generative AI, it does not adequately cover AI systems used in content moderation. This could lead to users being unfairly censored without understanding the AI’s role, creating a lack of accountability and potential harm to free speech.
Scenario 6: AI in Government Services
Situation: A government agency uses AI to process applications for public benefits, such as social security or unemployment benefits. Applicants are not informed that an AI system was involved in the decision to approve or deny their benefits.
Failure Point: The AI Disclosure Act of 2023 does not require disclosure in government services, which can lead to a lack of transparency in how citizens’ applications are processed. This could result in people being unfairly denied benefits or not understanding why their applications were rejected.
Scenario 7: AI in Advertising and Consumer Targeting
Situation: An online retailer uses AI to analyze consumer data and personalize advertisements, leading to targeted marketing campaigns. Consumers are unaware that AI-driven data analysis influenced the ads they see and the products recommended to them.
Failure Point: While the AI Disclosure Act of 2023 addresses generative AI in content creation, it does not mandate transparency in AI-driven consumer targeting. This could lead to ethical concerns about privacy, manipulation, and consumer rights, as individuals may not realize the extent to which AI influences their purchasing decisions.
Scenario 8: AI in Education
Situation: An educational institution uses AI to grade assignments and provide personalized learning experiences. Students and parents are not informed that an AI system is responsible for these educational decisions.
Failure Point: The AI Disclosure Act of 2023 does not cover AI applications in education, resulting in a lack of transparency for students and parents. This could lead to questions about the fairness and accuracy of AI-driven grading and learning assessments, undermining trust in educational institutions.
Scenario 9: AI in Real Estate
Situation: Real estate companies use AI to assess property values and recommend prices to buyers and sellers. Clients are unaware that AI algorithms were used to determine these values.
Failure Point: The AI Disclosure Act of 2023 does not require disclosure in the real estate industry, meaning clients may be unaware that AI influenced the pricing of their property. This lack of transparency could lead to distrust in real estate transactions and concerns about the accuracy of AI assessments.
Scenario 10: AI in Customer Service
Situation: A telecommunications company uses AI-powered chatbots to handle customer inquiries and complaints. Customers do not realize they are interacting with an AI rather than a human agent.
Failure Point: Although the AI Disclosure Act of 2023 addresses generative AI in communication, it may not fully cover AI in customer service scenarios. This could lead to customer dissatisfaction and confusion if they believe they are communicating with a human agent, especially in cases where the AI fails to resolve their issue.
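By way of contrast, here is a hedged sketch, again in Python, of the kind of up-front, per-session disclosure the Act does not clearly require of conversational systems. The answer function is our own hypothetical placeholder for a chatbot backend, and the disclosure policy shown is illustrative, not mandated by H.R. 3831:

```python
# A hypothetical sketch of session-level AI disclosure for a customer
# service chatbot. answer is a placeholder backend, not any real vendor
# API; the disclosure policy shown here is illustrative only.

AI_NOTICE = "Notice: you are chatting with an automated assistant, not a human agent."

def answer(question: str) -> str:
    # Placeholder chatbot backend; a canned reply keeps the sketch runnable.
    return f"(automated reply to: {question!r})"

def run_session(questions: list[str]) -> None:
    print(AI_NOTICE)  # disclosed once, at the start of the session
    for q in questions:
        print(f"[AI] {answer(q)}")  # each reply labeled as machine-generated

if __name__ == "__main__":
    run_session(["Why is my bill higher this month?"])
```

A requirement along these lines would address the scenario above, but it falls outside the disclaimer-on-output model the bill actually adopts.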
Summary of Failures
The AI Disclosure Act of 2023 (H.R. 3831) primarily focuses on generative AI in content creation and communication. However, it fails to address AI applications in critical areas like healthcare, finance, law enforcement, employment, social media moderation, government services, consumer targeting, education, real estate, and customer service. These gaps in coverage could lead to significant transparency issues, ethical concerns, and public distrust in AI systems across various industries.
Below are notable examples of poorly crafted AI legislation, along with explanations of why each is considered flawed:
1. H.R. 3831 – AI Disclosure Act of 2023 (USA)
- Why it’s flawed: To reiterate, this legislation requires that generative AI output carry a disclaimer stating it was produced by AI. However, the bill’s language is vague, leading to confusion about what constitutes “AI” and when disclosure is necessary. This could result in excessive compliance burdens for companies and stifle innovation. Moreover, the bill does not address the specific risks or benefits associated with AI, making it more of a blanket requirement than a targeted regulatory measure.
2. AI Act (European Union)
- Why it’s flawed: The EU’s AI Act attempts to classify AI systems into categories of risk (unacceptable, high, limited, and minimal risk). While well-intentioned, the act’s broad and rigid classification system fails to account for the nuanced and context-specific nature of AI applications. For instance, a “high-risk” AI system in one context may not pose the same risks in another. This one-size-fits-all approach could lead to overregulation of harmless technologies or underregulation of more dangerous ones. Additionally, the compliance costs for companies could be prohibitive, particularly for smaller firms, potentially stifling innovation within the EU.
3. SB 1047 (California, USA)
- Why it’s flawed: This bill is critiqued for being overly complex and difficult to interpret, leading to potential legal ambiguities. Its heavy-handed regulatory approach imposes significant compliance burdens without providing clear guidelines or support for companies. The law’s focus on AI systems’ potential harms fails to balance these concerns with the need to foster innovation and technological advancement. It also lacks a robust framework for enforcement and monitoring, leaving gaps in its practical implementation.
4. Facial Recognition and Biometric Technology Moratorium Act (USA)
- Why it’s flawed: This legislation proposed a blanket moratorium on the use of facial recognition technology by federal agencies. While it aimed to address privacy and civil liberties concerns, the act was criticized for its overly broad scope, which could hinder the development of beneficial AI applications. By not distinguishing between different contexts or uses of facial recognition (e.g., public safety vs. commercial applications), the bill risked stifling innovation and preventing the government from utilizing AI in ways that could enhance security and efficiency.
5. Algorithmic Accountability Act of 2019 (USA)
- Why it’s flawed: This proposed act would have required companies to conduct impact assessments of their AI systems for potential biases and risks. While the goal of promoting transparency and accountability in AI is commendable, the legislation was criticized for being overly prescriptive without providing clear guidance on how companies should conduct these assessments. The act’s requirements could be especially burdensome for smaller companies, potentially stifling innovation. Moreover, it failed to consider the varying levels of risk associated with different AI applications, treating all AI systems as equally problematic.
6. AI Regulation (South Korea)
- Why it’s flawed: South Korea’s early attempts at AI regulation focused heavily on protecting consumers from AI-related risks. However, the regulations were criticized for being overly stringent and not sufficiently aligned with the needs of the AI industry. The strict rules, combined with heavy penalties for non-compliance, discouraged companies from developing AI technologies within South Korea, leading to a potential loss of competitive advantage in the global AI market.
7. Brazil’s AI Law (Draft Bill 21/20)
- Why it’s flawed: This draft bill aimed to regulate AI by establishing a comprehensive legal framework. However, it was criticized for being too ambitious and lacking focus. The bill attempted to address all aspects of AI, from ethical considerations to technical standards, resulting in a complex and unwieldy piece of legislation. The lack of clear definitions and practical guidelines made it difficult for companies to comply, potentially hindering AI innovation in Brazil. Additionally, the bill did not provide a phased or gradual approach to implementation, which could overwhelm businesses and regulators alike.
Key Issues Across These Examples:
- Vague Definitions and Requirements: Many of these laws suffer from a lack of clear definitions, leading to confusion and inconsistent application. This vagueness can result in excessive compliance burdens and legal challenges, and can hinder innovation.
- Overregulation: Several of these laws impose strict or blanket regulations without considering the context or varying levels of risk associated with different AI applications. Overregulation can stifle innovation, especially for smaller companies that may struggle with the compliance costs.
- Lack of Practical Guidelines: Even when the intent behind the legislation is sound, a lack of clear guidelines for implementation can lead to confusion and difficulties in compliance. This can result in companies either over-complying to avoid penalties or under-complying due to a lack of understanding.
- Failure to Balance Innovation and Regulation: A common flaw is the failure to balance the need for regulation with the importance of fostering innovation. Overly stringent regulations can discourage companies from developing or deploying AI technologies, potentially putting countries at a disadvantage in the global AI race.
- Inflexibility: Some legislation takes a rigid approach to AI regulation, not allowing for flexibility as AI technologies evolve. This can lead to outdated or ineffective regulations that do not address the actual risks or benefits of AI.
These examples illustrate the challenges of crafting effective AI legislation and highlight the importance of creating laws that are clear, balanced, and adaptable to the rapid pace of technological advancement.