Amid a tense and destabilizing week in international news, technical decision-makers should note that some lawmakers in the U.S. Congress are still moving forward with proposed AI regulations that could reshape the industry in powerful ways and help steady it going forward.
Case in point: yesterday, U.S. Republican Senator Cynthia Lummis of Wyoming introduced the Responsible Innovation and Safe Expertise Act of 2025 (RISE), the first stand-alone bill that pairs a conditional liability shield for AI developers with a transparency mandate covering model training and specifications.
As with any proposed legislation, the bill would need to pass both the U.S. Senate and House by majority vote and be signed by U.S. President Donald J. Trump before becoming law, a process that would likely take months at the soonest.
“Bottom line: If we want America to lead and prosper in AI, we can’t let labs write the rules in the shadows,” Lummis wrote on X when announcing the new bill. “We need public, enforceable standards that balance innovation with trust. That’s what the RISE Act delivers. Let’s get it done.”
The bill also upholds traditional malpractice standards for doctors, lawyers, engineers, and other “learned professionals.”
If enacted as written, the measure would take effect December 1, 2025, and apply only to conduct that occurs after that date.
Why Lummis says new AI legislation is necessary
The bill’s findings section paints a landscape of rapid AI adoption colliding with a patchwork of liability rules that chills investment and leaves professionals unsure where responsibility lies.
Lummis frames her answer as simple reciprocity: developers must be transparent, professionals must exercise judgment, and neither side should be punished for honest mistakes once both duties are met.
In a statement on her website, Lummis says the measure provides “predictable standards that encourage safer AI development while preserving professional autonomy.”
With bipartisan concern mounting over opaque AI systems, RISE gives Congress a concrete template: transparency as the price of limited liability. Industry lobbyists may press for broader redaction rights, while public-interest groups could push for shorter disclosure windows or stricter opt-out limits. Professional associations, meanwhile, will scrutinize how the mandated disclosure documents fit into existing standards of care.
Whatever shape the final legislation takes, one principle is now firmly on the table: in high-stakes professions, AI cannot remain a black box. And if the Lummis bill becomes law, developers who want legal peace will have to open that box—at least far enough for the people using their tools to see what is inside.
How the new ‘safe harbor’ provision shielding AI developers from lawsuits works
RISE offers immunity from civil suits only when a developer meets clear disclosure rules:
- Model card – A public technical brief that lays out training data, evaluation methods, performance metrics, intended uses, and limitations.
- Model specification – The full system prompt and other instructions that shape model behavior, with any trade-secret redactions justified in writing.
The developer must also publish known failure modes, keep all documentation current, and push updates within 30 days of a version change or newly discovered flaw. Miss the deadline—or act recklessly—and the shield disappears.
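To make the first requirement concrete, here is a minimal, hypothetical sketch of the kind of structured disclosure a model card could capture. The field names and example values below are illustrative assumptions based on the categories described above (training data, evaluation methods, performance metrics, intended uses, and limitations), not language drawn from the bill itself.

```python
from dataclasses import dataclass, field

# Hypothetical model-card structure; the fields mirror the disclosure
# categories described in this article and are assumptions, not
# statutory language from the RISE Act.
@dataclass
class ModelCard:
    model_name: str
    training_data: str                      # description of data sources
    evaluation_methods: list[str]           # how the model was tested
    performance_metrics: dict[str, float]   # headline benchmark results
    intended_uses: list[str]                # what the model is meant for
    limitations: list[str]                  # known weak spots
    known_failure_modes: list[str] = field(default_factory=list)

# Illustrative instance for a fictional clinical-assistant model.
card = ModelCard(
    model_name="example-clinical-assistant-v1",
    training_data="De-identified clinical notes plus public medical literature",
    evaluation_methods=["held-out benchmark suite", "expert clinician review"],
    performance_metrics={"diagnosis_suggestion_accuracy": 0.87},
    intended_uses=["decision support for licensed clinicians"],
    limitations=["not validated for pediatric cases"],
    known_failure_modes=["overconfident answers on rare conditions"],
)
print(card.model_name, card.performance_metrics)
```

Under the bill’s logic, keeping a document like this public and current, including within the 30-day window after a version change, is what buys a developer its liability shield.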
Professionals like doctors and lawyers remain ultimately liable for using AI in their practices
The bill does not alter existing duties of care.
A physician who misreads an AI-generated treatment plan, or a lawyer who files an AI-written brief without vetting it, remains liable to clients.
The safe harbor is unavailable for non-professional use, fraud, or knowing misrepresentation, and it expressly preserves any other immunities already on the books.
Reaction from AI 2027 project co-author
Daniel Kokotajlo, policy lead at the nonprofit AI Futures Project and a co-author of the widely circulated scenario planning document AI 2027, took to his X account to state that his team advised Lummis’s office during drafting and “tentatively endorse[s]” the result. He applauds the bill for nudging transparency yet flags three reservations:
- Opt-out loophole. A company can simply accept liability and keep its specifications secret, limiting transparency gains in the riskiest scenarios.
- Delay window. Thirty days between a release and required disclosure could be too long during a crisis.
- Redaction risk. Firms might over-redact under the guise of protecting intellectual property; Kokotajlo suggests requiring companies to explain why each redaction truly serves the public interest.
The AI Futures Project views RISE as a step forward but not the final word on AI openness.
