“Can Weagree’s AI help us out?” asked a head of legal who had been reviewing a contract on a Sunday evening. Her legal department is flooded with contract review requests: deadlines are short, it is impracticable to find legal counsel with the right seniority (the labour market is simply too tight), and time is scarce anyhow.
With Weagree’s AI capabilities, she can solve these challenges quickly. This AI contract review scenario is an easy one for us: it includes a full risk assessment, and errors (AI hallucinations) are easily avoided. We recorded a video about AI contract review, but here is how it works:
Step 1 – Identify red-flag stipulations
For an AI, identifying the key obligations and red-flag provisions is no longer difficult. AI models such as OpenAI’s (‘ChatGPT’) produce impressive contract analysis results. Just ask (‘prompt’) whether a specific contract provision is included in some form, and the AI model will probably identify it.
If it does not identify a provision you prompted for, getting your ‘AI prompt’ for that provision right is a matter of trial and error. But that is usually a question of user skill (prompting the right way) rather than the AI being incapable of identifying the provision. And let’s not exaggerate: much like searching Google effectively, AI prompting is no rocket science either.
How does AI-prompting work?
Think of your AI contract review (based on such prompts) as a questionnaire that you would have answered if you had been the party drafting the first-draft version. Those questionnaire-questions (taken together, alongside the reviewed contract) constitute your prompt; in other words, your prompt will consist of the dozens or hundreds of questionnaire-questions (with anticipated answer options) plus the contract itself.
Now obviously, in such an AI review you are not creating the contract (so you are not answering a questionnaire either). But that questionnaire is a sort of review checklist: if you were to review a contract drafted by a third party, you would still check it for (say) 80% of the same parameters. Where a contract creation questionnaire asks ‘Do you want to include a change-of-control clause?’, your checklist-prompt would ask ‘Is there a change-of-control clause?’, and the related answer options will likely be the same.
The remaining 20% that was not in your contract creation questionnaire results from the counterparty drafting from their opposite perspective – and quite a few of those ‘20% contract provisions’ are nonetheless predictable.
In other words, to AI-review a supply agreement drafted by your prospective supplier, you would use your contract creation questionnaire for purchase agreements (as your checklist-prompt) and, before prompting the AI, extend it with anticipated contract provisions you would not dare to ask for had you been the purchaser.
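To make the checklist-prompt idea concrete, here is a minimal Python sketch. The question texts, data structure and prompt layout are our own illustration of the approach described above, not Weagree’s actual implementation:

```python
# Illustrative sketch only: the questions and prompt layout are hypothetical,
# not Weagree's actual implementation.

CHECKLIST = [
    {"question": "Is there a change-of-control clause?",
     "options": ["yes", "no"]},
    {"question": "Is liability capped, and if so, how?",
     "options": ["uncapped", "capped at contract value", "capped at a fixed amount"]},
]

def build_review_prompt(contract_text: str, checklist: list[dict]) -> str:
    """Combine the checklist questions (with anticipated answer options)
    and the contract itself into one review prompt for the AI model."""
    lines = [
        "Review the contract below and answer each question.",
        "For every answer, quote the clause it is based on and explain why.",
        "",
    ]
    for i, item in enumerate(checklist, start=1):
        opts = " / ".join(item["options"])
        lines.append(f"Q{i}. {item['question']} (options: {opts})")
    lines += ["", "--- CONTRACT ---", contract_text]
    return "\n".join(lines)

prompt = build_review_prompt("Article 12. Either party may terminate ...", CHECKLIST)
print(prompt.splitlines()[3])
# → Q1. Is there a change-of-control clause? (options: yes / no)
```

In practice the checklist would hold dozens or hundreds of questions, but the principle stays the same: the questionnaire plus the contract is the prompt.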
Step 2 – The AI findings
As you are using a questionnaire, it is easy to structure the AI-returned findings into a contract sheet. In Weagree, you can structure your (CLM) data the way you want to, for example:
- Ordering-and-delivery aspects,
- Pricing-related terms,
- Term and termination,
- IP-related parameters,
- Legal stuff (like change-of-control clauses),
- etc.
For the AI model, it is irrelevant in which order you prompt it: each questionnaire question (so, each part of the checklist-prompt) will be checked.
For example, if your prompt consists of 125 questionnaire questions, the AI model will ‘read’ your contract roughly 125 times. It returns the AI findings in a structured manner, of course: 125 ‘answers’, 125 contract clauses (on which each AI answer is based) and 125 explanations of why the AI came to its answer.
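Assuming the model is asked to return structured output, each of those findings could be represented by a small record like the following. This is a hypothetical data shape for illustration, not Weagree’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One structured AI finding: answer + supporting clause + explanation.
    Hypothetical shape, not Weagree's actual schema."""
    question: str      # the checklist question this finding answers
    answer: str        # one of the anticipated answer options
    clause: str        # verbatim contract clause the answer is based on
    explanation: str   # why the AI selected this answer

# One of the (here, 125) findings a review would return:
finding = Finding(
    question="Is there a change-of-control clause?",
    answer="yes",
    clause="Article 17.2: Either party may terminate upon a change of control ...",
    explanation="Article 17.2 grants a termination right triggered by a change of control.",
)
```

Keeping the answer, the quoted clause and the explanation together is what makes the later validation step (Step 3) possible.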
How does risk rating work?
Weagree puts these 125 answers in the contract sheet and applies your preconfigured risk rating to each of them. The risk rating is visualised as a green, yellow or red blob. If, for example, one of the answers under ‘Pricing-related terms’ is a deal breaker (red blob), the contract sheet chapter for ‘Pricing-related terms’ will turn red in the overview.
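The logic of a preconfigured risk rating can be sketched as a simple lookup plus a ‘worst colour wins’ rule per chapter. Again, the ratings and questions below are hypothetical examples, not Weagree’s configuration:

```python
# Hypothetical illustration of the risk-rating logic described above.
SEVERITY = {"green": 0, "yellow": 1, "red": 2}

# Preconfigured rating per anticipated answer (example values).
RISK_RATING = {
    ("Is price indexation allowed?", "yes, uncapped"): "red",
    ("Is price indexation allowed?", "yes, capped at CPI"): "yellow",
    ("Is price indexation allowed?", "no"): "green",
}

def chapter_colour(answers: list[tuple[str, str]]) -> str:
    """A contract-sheet chapter takes the colour of its worst answer:
    one red answer turns the whole chapter red in the overview."""
    colours = [RISK_RATING.get(a, "green") for a in answers]
    return max(colours, key=lambda c: SEVERITY[c])

print(chapter_colour([("Is price indexation allowed?", "yes, uncapped")]))
# → red
```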
Step 3 – Validate findings (prevent hallucinations)
Tracing hallucinations is also easy for the user: Weagree requires the AI not only to provide the key finding (identifying a contract provision and the form it takes), but also to return the specific contractual stipulation, as well as an explanation of why the AI identified it as such.
Weagree makes validating the AI review easy for the user by highlighting that specific stipulation both in the AI-reviewed contract and on the CLM contract sheet. Indeed, for ease of contracting work, there are two places to validate all findings:
- Manually read the contract text (navigating article by article, scroll-reading, or walking through the CLM data fields).
- Click through the CLM contract sheet to check the AI-reviewed checklist findings (a pop-up overlay highlights the clause).
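Because the AI must quote the exact stipulation, a hallucinated finding is also easy to catch mechanically: if the quoted clause does not actually occur in the contract text, the finding cannot be trusted. A simple, hypothetical check (not Weagree’s actual implementation) could look like this:

```python
def clause_is_verbatim(contract_text: str, quoted_clause: str) -> bool:
    """Return True if the clause the AI quoted actually occurs in the
    contract (ignoring differences in whitespace). A False result flags
    a likely hallucination for the user to review."""
    def normalise(s: str) -> str:
        return " ".join(s.split())
    return normalise(quoted_clause) in normalise(contract_text)

contract = "Article 9.1  The Supplier shall deliver DAP (Incoterms 2020)."
print(clause_is_verbatim(contract, "Supplier shall deliver DAP"))   # → True
print(clause_is_verbatim(contract, "Supplier shall deliver EXW"))   # → False
```

A check like this does not replace human validation, but it narrows the user’s attention to the findings that deserve scrutiny.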
Because the AI review itself takes only a few minutes (for document handling), your time is spent on the fun puzzle of optimising your AI prompts (a one-time effort) and on clicking through the AI findings to turn them into mature contract lifecycle management. Now, if you are not yet convinced, watch the two videos here.
Step 4 – Into the future
As one of the world’s leading providers of contract generation, we will shape our roadmap together with one or more of our customers, and develop, for example:
- Use the AI-findings as answers to a questionnaire for creating a contract summary.
- Use the AI-findings with red flags to write a memo for the business.
- Use the AI-findings to retrieve the right clauses from the clause library (and insert them as mark-up in the AI-reviewed first-draft contract). What is already possible in Weagree: enabling users to fine-tune such mark-up (and discuss it with their colleagues alongside the questionnaire).
- Etc. (what are your AI wishes after watching our video?)