LLM-driven document review validated the accuracy and efficiency of our methodology.
In a corporate governance matter, the Data Science Center leveraged large language model (LLM)-driven document review to validate the accuracy and efficiency of our methodology against traditional review methods.
The Challenge
Our team sought to identify, across more than 1,000 documents, any instance in which a particular relationship among affiliated companies was alleged. After pre-processing, a targeted keyword search narrowed the corpus to candidate sections, which the LLM then analyzed, flagging just 15 excerpts for manual review. Consistent with the traditional review process, the LLM-assisted review surfaced no true positives.
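In outline, this is a filter-then-classify pipeline: split the documents into sections, keep the sections that match targeted keywords, and have the LLM judge each candidate. The sketch below is a minimal illustration under assumptions of our own rather than the team's actual implementation; the keyword list, prompt, model name, and SDK (here the OpenAI Python client) are all placeholders.

```python
# Illustrative sketch only. The actual prompt, keyword list, model, and tooling
# used in the matter are not described in the source; the OpenAI Python SDK and
# model name below are assumptions standing in for whichever provider was used.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

KEYWORDS = ["affiliate", "subsidiary", "related party"]  # hypothetical search terms

PROMPT = (
    "You are assisting a corporate governance review. Does the passage below "
    "allege a particular relationship among affiliated companies? Answer YES or NO.\n\n"
    "Passage:\n{section}"
)

def keyword_hits(sections: list[str]) -> list[str]:
    """Narrow the corpus to sections containing at least one target keyword."""
    return [s for s in sections if any(k in s.lower() for k in KEYWORDS)]

def classify_section(section: str, model: str = "gpt-4o-mini") -> bool:
    """Ask the LLM whether a single section alleges the relationship of interest."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(section=section)}],
    )
    return resp.choices[0].message.content.strip().upper().startswith("YES")

# sections = preprocess(documents)     # split 1,000+ documents into sections
# candidates = keyword_hits(sections)  # ~16,000 candidate sections in this matter
# for_manual_review = [s for s in candidates if classify_section(s)]  # 15 items flagged
```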
Key Takeaways
- Enhanced Review Coverage: The LLM-driven approach enables more comprehensive review, analyzing a much larger set of data while relying on fewer assumptions (such as specific keyword lists) than traditional methods, thus reducing the risk of missing critical information.
- Accelerated Analysis: For matters with tight deadlines, the LLM-assisted review can make previously infeasible analysis possible. In this instance, the LLM processed nearly 16,000 sections of text in approximately one hour, unlocking valuable insights.
- Flexible Integration: The LLM-driven method integrates seamlessly with traditional review. The LLM can serve either as a first-pass “doer” that prioritizes keyword search results for human review or as a “checker” that verifies the comprehensiveness and accuracy of a completed review; both modes are sketched below.
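As a rough illustration of the two modes, assuming the per-section LLM call is wrapped in a classify(section) -> bool helper such as the classify_section function in the earlier sketch (the names and signatures here are hypothetical, not from the source):

```python
from typing import Callable, Iterable

def doer_pass(keyword_hits: list[str],
              classify: Callable[[str], bool]) -> list[str]:
    """'Doer' mode: the LLM triages keyword-search hits so human reviewers
    start with the sections most likely to be responsive."""
    return [s for s in keyword_hits if classify(s)]

def checker_pass(all_sections: Iterable[str],
                 human_flagged: set[str],
                 classify: Callable[[str], bool]) -> list[str]:
    """'Checker' mode: the LLM re-reads the full corpus and surfaces sections
    it flags that the human review did not, as a completeness check."""
    return [s for s in all_sections if classify(s) and s not in human_flagged]
```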