Key takeaways
- This report is intended as a tool for Congress to craft and consider AI policy that encourages American leadership in the AI landscape while setting up proper guardrails for any current or emerging threats.
- This report is organized into 15 chapters and provides guiding principles, 66 key findings and 89 recommendations.
- As Congress considers AI legislation, further legal developments should be closely monitored.
In depth
In February 2024, the Bipartisan House Task Force on AI was established to explore Congress's role in encouraging American leadership in AI innovation while providing guardrails against current and emerging threats. On 17 December 2024, the task force released the Bipartisan House Task Force Report on AI, a tool intended to guide legislators in crafting AI policy that strikes this balance.
The report defines AI as "software systems capable of performing tasks typically expected to require human intelligence, e.g., voice recognition, image analysis, and language translation." The report also clarifies that "the field of AI encompasses various subfields, including machine learning, natural language processing, and computer vision."
With this backdrop, the report addresses 15 key areas, including intellectual property, data privacy, healthcare and federal preemption of state law. While the report does not fully explore every AI-related area, it encourages future exploration of these topics. Additionally, the report adopts several high-level principles to frame the policy analysis, such as identifying the novelty of AI issues to avoid duplicative mandates and keeping humans at the center of AI policy.
The task force outlined key findings and recommendations for each of the 15 key areas. For instance, for intellectual property, the report states that it was "unclear whether legislative action is necessary in some cases, but that generative AI poses a unique challenge to the creative community." To address this, the report recommends "clarify[ing] IP laws, regulations, and agency activity while appropriately countering the growing harm of AI-created deepfakes."
Likewise, for data privacy, the report finds that "AI has the potential to exacerbate privacy harms, that Americans have limited recourse for many privacy harms, and that federal privacy laws could potentially augment state laws." To address this, the report recommends "explor[ing] mechanisms to promote access to data in privacy-enhanced ways and ensur[ing] privacy laws are generally applicable and technology-neutral."
Similarly, in healthcare, the report finds that AI can potentially reduce administrative burdens and speed up drug development and clinical diagnosis. However, the "lack of ubiquitous, uniform standards for medical data and algorithms impedes system interoperability and data sharing." Among other recommendations, the report recommends "encourag[ing] practices needed to ensure AI in healthcare is safe, transparent, and effective."
As for federal preemption of state law, the report identifies the preemption of state AI laws by federal legislation as a possible guardrail for AI use. Preemption is a doctrine designed to address conflicts between two authorities, such as federal and state law. In the United States, federal law is the highest authority and preempts state law when both address the same area and conflict. With this background, the report found that "federal preemption of state law on AI issues is complex and has benefits and drawbacks." Nevertheless, the report notes that preemption can be multifaceted, can allow state action subject to federal floors or ceilings, and requires precise federal statutory definitions to capture the intended scope of preemption. To address this, the report recommends conducting a study on applicable AI regulations across sectors.
Overall, the report anticipates new opportunities to use AI across these 15 key areas. It therefore encourages Congress to adopt an agile approach, enabling an appropriate, targeted and regularly updated response.