Last updated Nov 29, 2025
aigovernment
At some future point (no specific year given), AI models will be formally allowed to obtain professional certifications such as bar admission (law) and medical licensing, such that a prediction market like Polymarket could list and resolve a market on the event.
It sounds crazy today, but I guarantee if you put it on polymarket, there will be a date when this happens.
Explanation

As of November 30, 2025, there is no evidence that any jurisdiction has formally granted a professional license (such as bar admission or a medical license) directly to an AI system or model.

Medicine:

  • Some AI systems have passed or achieved high scores on medical licensing exams (e.g., the Xiaoyi robot in China passing the national medical licensing exam; OpenEvidence’s model scoring up to 100% on the USMLE), but these systems are not recognized as licensed physicians. They function as decision-support tools for human doctors, not as independently licensed professionals. (en.wikipedia.org)
  • U.S. regulators are explicitly reinforcing that only humans can be licensed clinicians. California’s AB 489, signed into law in October 2025, prohibits AI systems from presenting themselves as licensed health professionals and extends existing title-protection rules to AI developers and deployers. The California Medical Board has emphasized that only a “natural person” may be a licensed physician in the state. (medscape.com)

Law:

  • In the U.S., unauthorized practice of law (UPL) statutes in all states restrict legal practice to individuals who are licensed attorneys, and existing case law and guidance frame the practice of law as limited to natural persons who have been admitted to the bar. (americanbar.org)
  • While LLMs have passed bar-exam-style benchmarks and legal AI tools such as Harvey are widely used, these systems operate under the supervision of human lawyers rather than holding any bar license themselves. Regulatory reform efforts in states such as Utah, Arizona, and Washington broaden permissible business models and technology use but still premise the actual practice of law on human licensees. (reuters.com)

Emerging proposals, not law:

  • The U.S. Healthy Technology Act of 2025 (H.R. 238) would allow AI/ML systems to qualify as practitioners eligible to prescribe drugs under specified conditions, but as of late November 2025 it has only been introduced and referred to committee, with no passage in either chamber. (congress.gov)

Given this:

  • The specific event Friedberg predicts (AI models being formally allowed to obtain professional certifications such as bar admission or medical licensing, clearly enough that a Polymarket contract could cleanly resolve on that fact) has not occurred as of November 30, 2025.
  • However, his prediction is explicitly open-ended (“there will be a date when this happens”) with no time horizon, so the fact that it has not yet occurred does not make it wrong; it is simply untested so far.

Because the outcome concerns an unspecified future date and there is still ample time for such regulatory changes to occur (or not), the correct classification today is “inconclusive (too early)”, not right or wrong.