Using AI responsibly with Juro

October 13, 2023
Recently, huge advances in artificial intelligence (AI) have revolutionized the way we do business.

Contract collaboration is an area ripe for AI-powered disruption. And that’s why Juro has released its AI Assistant - a powerful tool to help you review, summarize and draft contracts more efficiently than ever before.

But when you’re using AI to work with your business’ most sensitive information, you have to be sure that you’re doing so safely, carefully, and responsibly.

What does responsible AI mean?

Responsible AI means designing and using AI-powered systems in ways that align with human values and prevent harm.

At Juro, we have designed our AI Assistant thoughtfully to help our customers agree contracts 10x faster. Our mission at Juro is to help the world agree more - a fundamentally human value.

But generative AI isn’t perfect, and so we’re acutely focused on mitigating the harms that might arise from using AI. The three areas where we see the highest potential for harm in a contracts context are:

  • Privacy - where is personal data going and what is it being used for?
  • Confidentiality - how do you get comfortable using AI to work with sensitive business information?
  • Accuracy - how do you achieve rigorous accuracy while using AI in a contracts context?

There are other challenges (bias, intellectual property ownership, and many others), but we think these are the most pertinent to contract collaboration. Here's how we tackle these challenges so customers can use AI responsibly.

1. Privacy

Contracts nearly always contain personal data of some description. Sometimes that personal data is relatively innocuous. Other times it’s highly sensitive. Either way, privacy and data protection are crucial considerations when selecting and using any AI-powered tool.

In short: Juro does not use any customer data to train foundational models. Keep reading to learn more about lawfulness, fairness and accountability around how personal data is handled by AI-enabled tools.

Lawfulness and fairness

Data controllers must have a lawful and fair basis for processing personal data, wherever that processing takes place (whether in a software tool or not).

Generative AI has a keen appetite for data, and some AI-powered tools use input data to help train their underlying foundational models. If that input data contains personal data, then using that personal data to train a foundational model is a completely new purpose for the personal data. This is called repurposing.

Repurposing is permitted only in limited circumstances where the new purpose is consistent with the original purpose behind collecting that personal data in the first place. Certain other conditions also need to be met to allow repurposing to take place.

Unless your business is involved in the AI space, it’s unlikely that the training of a software vendor’s AI model will be consistent with the original purpose for which you collected the personal data.

Even if the new use is consistent, it might be unexpected or difficult to explain to the data subjects involved. This could mean that the repurposing is unfair and, ultimately, unlawful.

At Juro, we do not use any customer data to train foundational models. So you don’t have to worry about Juro or its technology partners repurposing your personal data.


Accountability

Customers need to be able to demonstrate their compliance with data protection laws. When it comes to AI, this means fully understanding personal data flows and making informed decisions.

Information about AI tools can be confusing or opaque. We aim to provide clear, concise information about how our AI Assistant works so you can make informed choices.

You might also choose to conduct a data protection impact assessment before embedding AI into your contract processes. If you choose Juro’s AI Assistant, we can provide a template for you to use, pre-populated with information about Juro’s AI Assistant.

Find out more about privacy and data protection considerations when using AI Assistant in our AI Assistant Data Protection Guide.

2. Confidentiality

The confidentiality of your contracts is paramount. They contain some of your business’s most sensitive information, and so you need to be sure that AI features in your contract software won’t expose that information in responses to other software users’ prompts.

In short: at Juro, we don't use your contract data to train any model.

When your contract data is used to train an AI model, it becomes part of that model’s learning dataset, and can be called upon by the model when it is predicting the best response to a given prompt.

Depending on the scope and depth of the training dataset, it is theoretically possible that your contract content (or a close approximation of it) could be exposed as an AI output.

That presents some difficult challenges for teams that work with contracts, whether that’s legal, HR or sales.

  • Contractual obligations. Most contract counterparties insist that the terms of the contract are confidential. It’s difficult to keep that promise if the contract’s contents are being used to train AI models that benefit other businesses
  • Professional obligations. Regulated professionals like lawyers need to be confident that their use of AI won’t compromise their professional obligations of confidentiality. Similarly, HR professionals need to protect the confidence of employees. You can’t cut corners when it comes to professional conduct rules
  • IP and trade secrets. Using document information to train an AI model could constitute a non-confidential disclosure. This could put the protection of proprietary information and technology at risk. For example, inadvertently disclosing details of patentable technology through AI tools could anticipate a future patent application, meaning your invention becomes ineligible for patent protection

At Juro, we don’t use your contract data to train any model. We have in place strict confidentiality obligations with both you (as our customer) and our technology providers. Your confidential information remains confidential.

3. Accuracy

When it comes to contracts, accuracy is critical. But generative AI is predictive by nature, and so can never be 100 per cent accurate.

In short: at Juro, we’ve built features in our AI Assistant to help you achieve highly accurate results - but we also have some tips so you don’t get caught out.

Market-leading model

Juro’s AI Assistant is built using OpenAI’s GPT model, provided by Microsoft on Azure servers.

Not only does this setup enable Juro to protect the confidentiality of your sensitive business information, it also puts one of the world’s most powerful generative AI models at your fingertips.

We acknowledge that training a foundational model is a big data exercise. GPT-4, for example, is reported to have been trained on around a petabyte of data. To put that in context, that’s the equivalent of the model learning from hundreds of billions of pages of text.

And after completing that training, GPT-4 reportedly passed a simulated bar exam with a score around the top 10 per cent of test takers. This shows that market-leading models are already usable in a contracts context.

AI playbook

Juro’s AI Assistant features an AI playbook. In the playbook, you can set the context and the constraints for AI Assistant at a template or document level.

This means you can easily replicate the general prompts that work well for you and your team without needing to document them separately, or paste them in each time. Setting context and constraints up front in this way also helps AI Assistant produce more accurate results.


Prompt shortcuts

In Juro, we provide helpful shortcuts for individual task prompts, helping you to structure those prompts like an experienced user of generative AI. Not only do the shortcuts help with prompt formatting, but they might also introduce you to tasks you didn’t know could benefit from the help of our AI Assistant.

Reviewing your work

Generative AI is transformational for legal work. But it’s not infallible. As a skilled lawyer or contract professional, your job is to engineer prompts and review the output so that the results are ones you’re prepared to stand behind.

AI Assistant is safest when you use it to complete tasks that you are capable of completing yourself, and that you are capable of verifying as accurate yourself.

We think of it like working with a highly capable trainee lawyer: a huge productivity boost, but ultimately you should check the output and correct any errors you identify before relying on it.

Want to see AI Assistant in action? Contact your sales representative, or book a personalized demo with us.

