In Progress

Legal Training

Help junior lawyers keep AI honest

How might we train junior lawyers to verify, critique, and improve AI-generated legal work product? In a world where the first draft is increasingly written by a machine, the skill set for new lawyers is shifting: from drafting from scratch to reviewing, stress-testing, and refining what AI produces.

That's harder than it sounds. Spotting a hallucinated case citation is one thing; catching a subtly wrong application of a legal test, a missed jurisdictional nuance, or an argument that is technically correct but strategically weak requires real judgment. These are exactly the skills junior lawyers need to develop, and traditional training methods weren't designed for this.

Your challenge is to build a tool or experience that helps junior lawyers sharpen their ability to critically evaluate AI-generated legal output. That might be a simulated exercise with planted errors, a structured review workflow, or an interactive environment where trainees learn by correcting and improving AI drafts.

Submit Your Project

Closes March 31, 2026