This article is part of a series of articles on AI-powered contract reviewing and redlining. In this article, we take a deep dive into the design choices different vendors make for their purpose-built AI reviewing tools. For a general introduction to AI contract reviewing, read our primer first.
In our introduction to AI-powered contract reviewing, we highlighted that many of the pitfalls of generic AI tools like Claude and ChatGPT can be avoided thanks to purpose-built AI reviewing tools. Below, we provide a breakdown of what they look like and how they operate.
AI contract reviewing: where do these tools live?
A first distinction between different AI-reviewing tools is where you use them. There are two distinct ‘flavours’:
- Web-based: these tools require you to upload a document to a web platform where the review can be performed. The benefit of this approach is that the developers of these web applications have more flexibility in how they build their tool and what options they provide to its users. The downside is that it requires lawyers to leave their trusted MS Word environment for a task that they have always done in MS Word. This not only hurts the rate of adoption but also means that some of your favourite functionalities (Format Painter, track changes, etc.) are unavailable to you unless the vendor has chosen to implement them in their web app.
- Word-based: these tools function as Word add-ins. The benefit is that they integrate directly into the environment where lawyers typically conduct contract reviewing and redlining, allowing for seamless interaction with documents. The downside, however, is that these tools are restricted to Word (though, in our experience, few lawyers use alternatives like Google Docs) and limited to what Word allows add-ins to do.
In our opinion, Word-based wins out over web-based. This should come as no surprise considering our own AI contract reviewing tool – ClauseBuddy – is an add-in for Word. What few people know, however, is that we originally built a web-based platform for legal drafting. With time, we learned that the benefit of being able to develop added functionality that Microsoft couldn’t or wouldn’t let us build in Word simply doesn’t outweigh the sheer ease of use that a Word add-in offers.
Open-ended vs closed reviewing
Imagine your task is to review a supplier agreement with the help of AI. There are two approaches to prompting the AI to do this for you:
Open-ended review
In the open-ended review approach, you hand the reins to the AI. The AI doesn’t have to check for explicit issues; a general “sense check” of the document is sufficient. This is quick and easy, and particularly useful if you have already done most of the work and just want a sparring partner to check if you missed anything.

Some vendors have built on this idea by offering entire data sets of similar contracts, giving the AI more inspiration to perform the review.
This can lead to useful suggestions you may not even have considered, but the major downside is that an open-ended review is like a box of chocolates: you never quite know what you’re going to get. In fact, asking the same question twice will lead to different results.
The unpredictability that comes with open-ended reviews is particularly problematic for in-house lawyers who tend to have a clear view of the risk appetite of their organisation, and who know which issues need to be removed from a contract. Unless you make these issues explicit, you have no way of making sure that the AI is going to include them in its review for you.
Which brings us to the second approach: closed review.
Closed review
In a closed review, you guide the LLM through the review process. You start by defining a “playbook” or “checklist” of requirements you typically check for. The AI then reads through the contract and cross-references it against those requirements to tell you which are met and which aren’t.

In our experience, only a small minority of organisations have gone through the effort of documenting these issues, but most lawyers already have them in their heads. That’s why, for example, experienced lawyers won’t start reviewing a contract at the top and then work their way down. Usually, they’ll scan the document first for key issues like liability, applicable law, intellectual property, confidentiality, etc.
The benefit of a closed review is that you have much more control over what the AI will check. The downside is that you must invest the effort to make these issues explicit.
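Conceptually, a closed review is just a loop over the playbook. The sketch below illustrates that idea; note that `check_requirement` is a deliberately naive stand-in (simple phrase matching) for the LLM call a real reviewing tool would make, and none of these names reflect any vendor's actual API:

```python
# Illustrative sketch of a closed review: a playbook of explicit requirements
# is checked one by one against the contract text. A real tool would send each
# requirement to an LLM; a naive keyword check stands in for that call here.

def check_requirement(contract_text: str, requirement: dict) -> bool:
    """Stand-in for an LLM call: 'met' if all required phrases appear."""
    text = contract_text.lower()
    return all(phrase.lower() in text for phrase in requirement["required_phrases"])

def closed_review(contract_text: str, playbook: list[dict]) -> dict:
    """Return which playbook requirements are met and which are not."""
    return {req["name"]: check_requirement(contract_text, req) for req in playbook}

playbook = [
    {"name": "Liability cap present", "required_phrases": ["limitation of liability"]},
    {"name": "German law applies", "required_phrases": ["german law"]},
]

contract = "This Agreement is governed by German law. Limitation of Liability: ..."
print(closed_review(contract, playbook))
# Both requirements are met for this snippet.
```

The point of the structure is that every run checks exactly the same explicit list, which is what makes the closed approach predictable compared to an open-ended prompt.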
This is generally less of a challenge for in-house lawyers. They represent the interests of a single client and know that client’s preferences and risk appetite inside out. Law firms, on the other hand, can represent both sides of a given contract type and generally don’t have as detailed a picture of each client’s requirements as in-house lawyers do.
As a result, developing checklists or playbooks is much more time-consuming for law firms. Furthermore, different attorneys will often disagree on what’s important and what isn’t – even when they are representing the same client.
We at ClauseBase see this divide reflected in our own client base. When we serve law firms, we see them gravitate more towards open-ended reviewing, whereas in-house legal teams prefer closed reviewing.
In the end, both approaches have their merits, which is why our own AI reviewing tool ClauseBuddy accommodates both. However, we generally find that the best results are achieved with closed review systems. Put differently: adoption is typically quicker with the open-ended approach, but that initial enthusiasm quickly gives way to frustration as the AI often behaves unexpectedly or misses key issues.
The Competitive Edge: What Sets AI Reviewing Tools Apart?
While the development choices we’ve discussed leave vendors with a few options, the reality is that there is a lot of overlap out there. This raises the question: what is these companies’ moat? How do they differentiate themselves from each other? What value do they add that ChatGPT or their competitors don’t?
In truth, building an AI reviewing tool has never been easier. Virtually anyone can set up a basic system that has an LLM look at a contract in the browser. The differentiation lies in all the small ways a vendor can add value on top of this basic infrastructure of calling an LLM to perform a contract review.
This applies to our own ClauseBuddy too, by the way. Like other vendors, we quickly started searching for additional ways to add value on top of the core task of AI-powered contract reviewing and redlining. Below are a few examples of how we, as well as other vendors, are tackling this challenge.
AI-Powered Playbook Generation
As stated above, our preferred approach to AI-powered reviewing is the closed approach. Our data shows that it delivers the best results, but it is also the most labour-intensive. Vendors typically offer two ways to make this process less labour-intensive:
- Prefabricated playbooks – these are playbooks offered by the vendors themselves, containing several default issues for the AI to check based on specific contract types. Useful at first, but inevitably you will want to tweak the playbook to fit your preferences. Not a problem if the vendor allows such tweaking, but not all of them do.
Some vendors also extend this concept into a marketplace of playbooks. In this set-up, lawyers can sell their own playbooks on a marketplace and users of the AI reviewing software can purchase them – a win-win scenario. There are limitations, though: the lawyers in question cannot possibly know every issue that matters to the purchaser, so some manual tweaking is still required. And of course, there’s the question of whether the playbooks are guaranteed to comply with the relevant jurisdictions and will be kept up to date with the latest statutory changes and evolutions in case law.
- AI-powered playbook generation – in this scenario, you upload your standard template to the AI and ask it to generate a playbook on that basis. While it is an enticing idea, it is only a stepping stone toward the final product. For example: we work with the technology transfer team of a large university which, in its NDAs, specifically wants to ensure that students qualify as Authorised Representatives under the definition of that term, so that they may receive confidential information in a research project. The AI may not appreciate such minor additions as being important, while they are absolutely crucial for the university.
Conversely, if you configure the AI in such a way that it considers every minute detail of your own template as a crucial requirement, it will yield far too many (irrelevant) checks to be performed. Ultimately, therefore, there is still some manual tweaking required.
Having tested both approaches extensively, we found that different legal teams are in different states of readiness for the adoption of AI-powered reviewing. As a result, we acknowledge that the different approaches may be more enticing to different teams depending on how far along the curve towards full adoption they are.
One thing that is clear, though, is that lawyers can start using the technology with virtually no homework, but additional homework will inevitably be needed to keep using AI effectively and derive true value from it.
Clause management
Clause management capabilities are a second way in which vendors are differentiating themselves – especially for organisations that prioritise standardisation and consistency in negotiations (which, in all fairness, is most in-house legal teams but relatively few law firm departments).
For example: imagine that you are an in-house lawyer whose team frequently reviews Master Services Agreements. It is in the company’s best interest that all team members are aligned on the company’s risk appetite. The provisions that get accepted and rejected shouldn’t depend on the personal preferences of the individual lawyers doing the negotiating.
To that end, many in-house teams already compile company standard clauses, fallback clauses, and justifications for introducing each change to a contract under negotiation (nobody likes a change without a word of explanation as to why the change was made).
Having AI scan a counterparty’s draft and pull out the key issues that don’t align with the company’s risk appetite is great, but the lawyer still needs to decide how to follow up on the flagged issues. A solid quality-of-life improvement for AI reviewing tools is to then offer clauses from the organisation’s library as an alternative to the counterparty’s proposed clause, providing the lawyer with ammunition to explain why the change was introduced.
One key weakness of this system is that a lawyer will rarely throw out a counterparty’s clause in its entirety in favour of their organisation’s own standard clause. That’s just bad negotiation etiquette. Several of our customers highlighted this weakness, which led us to develop the Smart Merge functionality for ClauseBuddy.
With Smart Merge, you can boil down both a counterparty’s clause and your organisation’s corresponding standard/fallback clause into their individual ‘components’. You can then surgically introduce components of your clause into the counterparty’s clause. Instead of simply wiping the counterparty’s clause from the document, the AI makes targeted amendments that incorporate the nuance of your company’s clause.
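The general idea of component-level merging can be illustrated with a deliberately naive sketch. Here a “component” is simply one sentence and `smart_merge` is a hypothetical helper, standing in for the AI-driven decomposition the real feature performs – not ClauseBuddy’s actual API:

```python
# Illustrative sketch of component-level clause merging: keep the
# counterparty's clause, but swap in selected components of your own
# standard clause instead of replacing the clause wholesale.
import re

def split_components(clause: str) -> list[str]:
    """Naively treat each sentence as one 'component' of the clause."""
    return [s.strip() for s in re.split(r"(?<=\.)\s+", clause.strip()) if s.strip()]

def smart_merge(counterparty: str, own: str, take_from_own: list[int]) -> str:
    """Keep the counterparty's clause, swapping in selected components of ours."""
    cp = split_components(counterparty)
    mine = split_components(own)
    for i in take_from_own:
        cp[i] = mine[i]  # surgical replacement of a single component
    return " ".join(cp)

counterparty = "Liability is unlimited. Notice must be given within 30 days."
own_clause = "Liability is capped at fees paid. Notice must be given within 60 days."
print(smart_merge(counterparty, own_clause, [0]))
# Only the liability component is replaced; the notice period stays untouched.
```

The result preserves most of the counterparty’s drafting, which is precisely why this approach is better negotiation etiquette than a wholesale swap.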

AI-powered redlining
AI-powered redlining directly within the document is another way in which some vendors address the question: “so the AI has found a problematic issue – what now?”
The idea behind it is simple: you have already prompted the AI to find a specific issue. Why not use that same prompt to instruct the AI to redraft the clause?
Suppose the AI reviewed a contract using the following prompt in the playbook: “Applicable law should be German law and the competent courts should be those of Berlin”. Your AI tool of choice scans the document on the basis of this prompt and finds that it declares Dutch law applicable and the courts of Amsterdam competent. An easy “rewrite” button can then have the AI rewrite the offending clause by changing “Dutch” to “German” and “Amsterdam” to “Berlin”.
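The rewrite step boils down to applying the playbook requirement back onto the offending clause. In the sketch below, a literal substitution stands in for the LLM redraft a real tool would produce; `rewrite_clause` is an illustrative helper, not any vendor’s actual function:

```python
# Simplified sketch of the "rewrite" step. The playbook requirement found a
# mismatch (Dutch law / Amsterdam instead of German law / Berlin); a literal
# substitution stands in for the LLM call a real tool would make.

def rewrite_clause(clause: str, substitutions: dict[str, str]) -> str:
    """Apply the playbook's required terms to the offending clause."""
    for old, new in substitutions.items():
        clause = clause.replace(old, new)
    return clause

clause = ("This Agreement shall be governed by Dutch law and the courts of "
          "Amsterdam shall have exclusive jurisdiction.")
fixed = rewrite_clause(clause, {"Dutch": "German", "Amsterdam": "Berlin"})
print(fixed)
# ...governed by German law and the courts of Berlin...
```

The hard part, as discussed below, is not producing the corrected text but inserting it back into the document without destroying its formatting.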

In theory, this works great. In practice, there is one problem: AI tools do not play nice with the formatting and styling of a document. This is particularly problematic for contracts, which rarely appear as flat text but are usually made up of individual blocks with different numbering, spacing, font sizes, indentation, etc. applied to them.
Let’s take Microsoft’s own AI-powered Word add-in – Copilot – as an example.
Copilot lets you select individual clauses and ask the AI to redraft them.

However, even Copilot struggles to handle the styling elements of the selected clause. It cannot immediately apply the proposed adjustments in the correct style, nor can it propose changes directly within the given clause to give the user a better view of the AI’s proposed amendments.

This is a notoriously hard problem to crack, and one that most vendors sidestep completely: instead of offering a way to insert redrafted clauses into the document with their styling intact, they simply paste them as flat text. The reason is that Microsoft doesn’t offer developers of Word add-ins a complete way to engage with the styling features of a document. In technical terms: the API that Microsoft exposes for this is very limited.
That leaves vendors with two choices: offer a simple copy/paste functionality to insert the (re)drafted text or do the hard work of teaching the tool to recognize different styling rules based on the document content. Just by virtue of us sharing this information, you can imagine which approach ClauseBuddy adopted...
Of course, that problem only applies to vendors that opt to build their AI-reviewing tool as a Word add-in. In a web platform, vendors have more flexibility to ensure that styling is respected in redrafted clauses.
With the information contained in this article and the introduction to AI-powered contract reviewing, you are now armed to make an informed decision on how to build out your organisation’s strategy for AI-powered contract reviewing. There is only one roadblock left: compliance.
Lawyers rightfully fear the potential data protection, confidentiality, deontological, and regulatory problems that come with letting LLMs loose on the confidential information they work with. The good news: there is a range of measures that organisations (and their vendors) can take to eliminate most or even all of the concerns that may stop them from adopting the technology.
So what are these measures?
That’s a topic for next time!
Stay tuned if you are interested in a complete overview of all the boxes you need to check to responsibly use AI-powered contract reviewing tools and subscribe to our newsletter to make sure you don’t miss any updates!