Generative AI is predictive by nature, and concerns about hallucinations and accuracy pose serious questions for legal teams. How can legal teams bridge the trust gap and align everyone around the level of confidence they should have in AI solutions? Reflecting on the themes of Scaleup GC 2024, we spoke to an expert panel to understand this fast-developing area.
Our panel included Lucy Ashenhurst, an ex-M&A lawyer turned startup consultant, and Nik Theodorakis, a partner at Wilson Sonsini. Lucy highlighted the benefits of using AI for tasks such as summarizing documents and writing training materials. Nik discussed the typical use cases for AI in legal practice, including summarizing case law and automating contract drafting. The main risks identified related to privacy, confidentiality, data accuracy, and data transfers. Both speakers emphasized the importance of conducting due diligence on AI vendors, reading and understanding their terms of service, and implementing internal AI policies and training sessions.
Book a demo to find out how Juro is helping 6000+ companies to agree and manage contracts up to 10x faster than traditional tools.
Richard Mabey: Welcome to everyone dialing into this Juro webinar, Reflections from Scaleup GC 2024: Bridging the Legal AI Trust Gap. I'm Richard Mabey, co-founder and CEO at Juro. And I'm delighted to be joined today by Lucy and Nik, who will introduce themselves in just a moment.
But the context behind this webinar is we had our annual conference Scaleup GC just a couple of weeks ago. Amazing to see, I think, around 450 of you from the Juro community there. And there was a ton of insight that came out of it, which we want to summarize today, but also go a little bit deeper into this topic. So if we go ahead to the next slide, as I said, joining me, Lucy Ashenhurst and Nik Theodorakis, we're going to hand over first to Lucy to give a bit of background on her.
Lucy Ashenhurst: Sure, thank you so much, Richard. I'm an ex-White & Case M&A lawyer who redeemed herself about a decade ago by moving into startups, and I've been working either as a consultant or in-house as general counsel for startups and scale-ups in Asia and Europe for the last 10 years. Having worked with lots of SaaS businesses, I've also been a huge fan of technology in all its forms and an early adopter of most. So that's what brings me to this chat.
Richard Mabey: We're really grateful to have you here, Lucy. And Nik, from Wilson Sonsini: tell us a little bit about you.
Nik Theodorakis: Of course, thanks Richard and thanks for having me. I'm Nik. I'm a partner in the London and Brussels offices of Wilson Sonsini, so splitting my time between Brussels and London, thankfully with the Eurostar. It's quite easy to do that. Focusing 100% on data-related matters, so anything that has to do with the GDPR, data transfer, cybersecurity, and of course AI, the DSA, the Online Safety Act in the UK, and many other things that have to do with data. So I've been doing that for my entire legal career and I'm happy to talk about AI today.
Richard Mabey: Awesome. Well, thanks, Nik, and thank you also for the support of Scaleup GC, which Wilson kindly gave to us. As always in these webinars, feel free to ask questions. You could do that just directly in the chat at any point. So any comments or observations you have, any questions, just throw them straight in. There's a huge amount of legal talent on the line. And so anything you'd like to share, please do. We will also share resources after this webinar. So stand by for that as well.
Okay, so to set the context as to why we're having this webinar: we are about to launch a report at Juro about the use of generative AI, and there were a couple of things which really struck us about it. The first is that the number of people who are using generative AI every day and every week, the dark green and slightly less dark green segments here, has dramatically increased since we last surveyed around six months ago. About 43.8% of you in the community are using generative AI regularly, so it's amazing to see this surge in usage. At the same time, if we go to the next slide, there is a concern, which I think goes beyond just legal tech, about the security of these products, whether they are private from a data privacy and protection perspective, and about the risks in using generative AI.
So on this webinar we wanted both to address some of the use cases, but to spend most of the time on the risks: what are the real risks, and what should we be doing about them as lawyers? We also asked the community about the risks they see, and there was a pretty clear pattern. You can see in this word cloud that it's the usual suspects: confidentiality and privacy right at the top, and then accuracy and hallucinations, really part of the same thing. Those came out on top, and after that you can see cost, time, budget, reliability and so on. But we're going to focus most of the discussion on what was front of mind for the community: confidentiality and privacy, and then accuracy.
Okay, so here are the four things we're going to cover and to kick us off, Lucy, you are operating in a very small legal team, I think a legal team of one. So I'm imagining that AI is a great co-pilot and friend to you, but just tell us a little bit day to day what you're using AI for.
Lucy Ashenhurst: Yeah, absolutely. For those of you who were at Scaleup GC, I can give a summary of it: I use it for summarizing documents. I work a lot with climate-focused businesses, an area which, if not heavily regulated, comes with lots of different industry guidelines: very long documents about how verification and validation of forestry works, and all that kind of thing. Quite complicated, and it takes a long time to read through. So it's brilliant for summarizing things like that, pulling out, say, 10 bullet points on X, or summarizing each clause or each paragraph briefly.
It's also really good for writing training documents. I'm not only saying, can you write me speaking notes for a training program on anti-bribery and corruption, and it comes up with slide one, slide two; you can also then say, right, thank you for that, please customize this for a sales team doing outbound calls, or please customize this with a particular view to X department or X business priority. So it's really helpful in finessing those, and you can pull that out into multiple different work products more or less instantly. The key thing for me is that it's about doing things that you know how to do, just much faster than you could do them, because that allows you to validate the answers and make sure it's not coming back with total nonsense.
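As a concrete illustration of the iterative prompting Lucy describes, here is a minimal sketch assuming the OpenAI Python SDK; the model name and prompt wording are illustrative, not her actual prompts.

```python
# A minimal sketch of the two-step prompting pattern described above,
# using the OpenAI Python SDK. Model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [{"role": "user", "content": (
    "Write speaking notes for a training programme on anti-bribery and "
    "corruption, structured slide by slide.")}]
draft = client.chat.completions.create(model="gpt-4o", messages=messages)

# Keep the conversation going so the follow-up refines the same draft
# rather than starting from scratch.
messages.append({"role": "assistant",
                 "content": draft.choices[0].message.content})
messages.append({"role": "user", "content": (
    "Thanks. Now customise this for a sales team doing outbound calls.")})
tailored = client.chat.completions.create(model="gpt-4o", messages=messages)
print(tailored.choices[0].message.content)
```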
So that's the headline. If I'm allowed an extra 30 seconds, though, I actually used it for something this morning and I thought it was a really good use case, so if you'll bear with me, I'd like to talk you through it. I have a client at the moment who is transferring a couple of employees. They have a couple of different group companies, and they're moving them from one to the other because the needs of the business have shifted. And I had this little bell in the back of my head. I'm not an employment law specialist, but I thought, wait a minute, that sounds a bit TUPE-esque (the Transfer of Undertakings (Protection of Employment) Regulations): they've been providing services for the company and now something's happened. This little alarm went off, but I haven't done a TUPE transfer in a while and I didn't know it off the top of my head. So I put the scenario in without using any confidential information.
I said, imagine company A had been doing X, company B had been doing Y, employee number one is in this scenario and we need to transfer them. Would this trigger TUPE? How does that work? Please give me a step-by-step list of exactly how I would do that transfer compliantly. And it said, you'd need to do X, Y, and Z, including writing such-and-such documents. Okay, can you give me some suggested wording for those documents you've mentioned? Yes, and out it came. Now, I'm not going to copy and paste any of that anywhere. But that has done for me in five minutes what would otherwise mean going back to the regulations, pulling things out and cross-referencing them. I can do all of that, but this is so much faster. I now have a really neat summary that I can communicate to the business, a good idea of the kind of timescales involved, and a baseline for what needs to be included in the documentation, which I can then finesse and turn into a really good quality document. That saved me probably four hours of thinking through and research, which I can give back to the team to achieve more in that time. So that's a nice worked example from this morning.
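For readers who want to see the shape of this, here is a hypothetical reconstruction of the anonymized-scenario prompt; the wording and placeholders are illustrative, not Lucy's actual prompts, and as she stresses, the output is a starting point to validate against the regulations, never something to copy and paste.

```python
# Hypothetical reconstruction of an anonymised TUPE scenario prompt:
# placeholder parties stand in for real names, so no confidential detail
# leaves the building. Wording is illustrative only.
SCENARIO_PROMPT = (
    "Imagine Company A has been doing X and Company B has been doing Y, "
    "both part of the same group. Employee 1 has been providing services "
    "for Company A and now needs to move to Company B. Would this trigger "
    "TUPE? If so, give me a step-by-step list of how to carry out the "
    "transfer compliantly, including which documents need to be prepared."
)
FOLLOW_UP = ("Can you give me some suggested wording for each of the "
             "documents you mentioned?")
```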
Richard Mabey: An amazing example. And it also shows the quality of prompting is so important, right, in that query that you're writing. Stephanie has a question: this is just ChatGPT you're using, right?
Lucy Ashenhurst: Yes, at the moment I just use ChatGPT, because I run my own independent consultancy, so I don't have a big budget for heavily customized products. I have to say, having had all these conversations, I'm getting more and more tempted. I have real FOMO now, particularly after the conversations with TravelPerk about customized systems. I'm wondering if I could build a customized system of my own that puts my brain into AI and then sell that back to all of my clients, so that they can actually get answers from me while I'm sleeping or drinking a glass of wine. It feels like a great plan to me, but I'll get back to you on that when I've built it.
Richard Mabey: It's brilliant. I mean, even in ChatGPT you can now see the GPT marketplace, and there are a few legal bots there, so maybe that could be a possibility. Nik, you obviously work with a whole bunch of clients and are seeing this day to day. What are the typical use cases your clients have? And what are you using generative AI for?
Nik Theodorakis: Right. Thanks, Richard. It largely depends on what our clients want, because we can break them down into two main camps. Some of them are early adopters of the technology, and they come back to us and ask, how are you guys using AI? How can we be even more efficient in terms of using time and resources? For instance, can you use AI to summarize submissions or case law, things like that? Other clients are actually quite concerned, as we'll discuss later on, about the use of AI, privacy, confidentiality, things like that. If we take a step back, I think law firms altogether, being inherently very risk-averse, slow-moving environments, have more of a play-it-by-ear approach.
They try to see first what's going to happen in the industry, what the smaller companies are going to do and how the whole market is going to shift, and then they follow suit. But otherwise, when we have clients who are actively asking us how we can use AI to be even more productive in our work, that typically has to do, as I mentioned before, with summarizing case law. And now we see a slowly building trend towards automation of certain contracts or documents like privacy policies and terms of use, the general legal documents that every website has, where, depending on the size of the company, it may not make sense to spend a whole lot of money.
Richard Mabey: Very interesting. I love the privacy policy one. There are some great prompts you can use which just make things shorter and in plain English, which can be huge. Also, thanks for giving an overview of what's happening on the ground. Let's get into the risks, because this gets a lot of airtime at the moment, and of course we as lawyers are rightly concerned about risk; it's part of our job to control it. Thinking about privacy, Nik, you spend a lot of time in this area. What are the things you're advising your clients on, and what are they really worried about happening in practice around privacy?
Nik Theodorakis: That's a great question, Richard. And in a way, the slide that you previously demonstrated is very succinct and really to the point. So I think the key issues that our clients identify and are worried about have to do with privacy, confidentiality, data accuracy, and data transfers. And if we break that down a bit, with respect to data privacy, they're actually concerned about what will happen to their personal data, whether any vendor will actually be able to reuse the data for their own purposes, to train the algorithm in particular, and otherwise for product development purposes. So that's a key one. Then confidentiality is also extremely important in terms of potentially disclosing or using trade secrets, intellectual property, or other confidential information. And otherwise, how can they make sure that their confidential data is watertight and is protected by an AI vendor that they're using?
The same goes for accuracy of the data. We're all very familiar with a case from last year where a lawyer used AI and it actually generated fictitious case law to support a claim. So it's extremely important to make sure that we have meaningful oversight and control over the output of the algorithm, that we can actually double-check it, and that we don't treat that output as manna from heaven that we can just use no matter what, as Lucy mentioned previously. And then data transfers are also an issue that we see many of our clients being concerned about, particularly with respect to transfers outside the UK and the EU to third countries like the US and China. How can we adequately safeguard the data?
How can we make sure that there's not going to be any interception of the data? And even having controls in place, like data transfer impact assessments, does not always diminish the risk associated with that. So I'd say, in a nutshell, that when a client is concerned about the use of AI, they're pretty much concerned about the full spectrum of privacy and privacy-related issues. And I guess it goes back to the very basics of the UK GDPR and privacy: the key principles of accountability, transparency, accuracy, data minimization, all those principles that are very high-level but at the same time truly encompass everything that has to do with privacy.
Richard Mabey: Out of interest, if you take OpenAI as an example, for us it was extremely difficult to get EU hosting. Hopefully it's now become a little bit easier, but are you seeing clients in Europe happy to actually use US-based sub-processors, or are you seeing pushback and workarounds in the field?
Nik Theodorakis: We see quite a bit of pushback and a general tendency towards trying to store data locally in the EU and/or the UK. And even though the GDPR itself does not have any data localization requirement, there is quite a strong pushback, particularly because certain countries in Europe are more sensitive to that aspect than others. Now, in almost every case we can successfully push back, because to the extent that you have adequate controls in place, SCCs (standard contractual clauses) and what have you, it should not be an issue. Sometimes it boils down to a business decision as to whether a company is happy to make that concession and potentially pay extra money to store the data in the EU, because it's also a pricing issue. But we do see that trend increase over time. If we compare today to 2018, we see quite a strong push towards data localization.
Richard Mabey: Interesting. And Lucy, when we last spoke, you used this great phrase, there are some pretty obvious risks around AI. What are the really obvious pitfalls here?
Lucy Ashenhurst: Well, I think the risks are obvious, but it's also the mitigation that's obvious at this stage. It's really important when we talk about AI to delineate between ChatGPT-style, completely open, internet-based AI and customizable, walled-garden AI that's specific to, and can only read, your internal company or personal data. Those are very, very different. I tend, as I said, to use the open systems. As a result, there is a risk that data could be taken from prompts you've given, for example, and repeated as answers to other people.
I've asked ChatGPT what it thinks about this, by the way, because I was intrigued. It said, no, no, everything's totally closed off. The data is stored specific to you, and as soon as you shut this window down, it gets deleted from our systems; there's no risk at all of confidential information that you share being transferred to any other users. But it would say that, wouldn't it? So who knows? Obviously, as Nik so brilliantly pointed out, it's a large concern for everybody, including the people making these products, because the reputational risk is huge for them. If there was an obvious data leak, people would just stop using it.
So I am cautious, but not frightened, if that makes sense. And there are some obvious fixes. Like I mentioned in my earlier example, I would say company A and company B; and even if I was copying and pasting (I would never copy and paste a whole contract, but even if I was extracting a clause or particular wording that had identifiable data), I would anonymize it before I put it into the system. Little things like that are easy. I wouldn't ever, for example, ask it to generate me a contract and then copy and paste that contract and be like, ta-da, it's finished. There are little things like, at the bottom of each answer, it says generated by ChatGPT.
If you copy and paste and don't realize, that will also appear in the document you send to your client. So be sensible, I suppose. But certainly with confidentiality concerns, I just wouldn't bother putting the information in. Even though they say it's fine, personally I don't want to take that gamble at the moment. But I think this is something that is and will move very quickly. When internet banking and mobile banking came out, people said, I wouldn't do that online, everyone will steal all my stuff. My mother still doesn't use online banking for that reason. But our perceptions of those risks change, and they change really quite quickly when we realize how significant the benefits are. So, yeah, caution is good. But equally, I can imagine there are a lot of very smart people working to ensure that doesn't happen with these products, because I think it would be a death knell for them.
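Here is a minimal sketch of the anonymization step Lucy describes, assuming a simple regex pass over a clause before it is pasted into a public AI tool; the patterns below are illustrative and nowhere near an exhaustive scrubber for personal data.

```python
# A minimal sketch of pre-prompt anonymisation: swap identifiable details
# for neutral placeholders before any text leaves your machine. The
# patterns are illustrative, not a complete PII scrubber.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+"), "[EMAIL]"),   # email addresses
    (re.compile(r"\b(?:Acme Ltd|Globex GmbH)\b"), "[PARTY]"),   # known party names
    (re.compile(r"£\s?[\d,]+(?:\.\d{2})?"), "[AMOUNT]"),        # contract values
]

def anonymise(text: str) -> str:
    """Replace identifiable details with placeholders before prompting."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(anonymise("Acme Ltd shall pay £12,500.00 to jane.doe@globex.com."))
# -> "[PARTY] shall pay [AMOUNT] to [EMAIL]."
```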
Richard Mabey: It's interesting, isn't it, that as lawyers we would naturally take out the personally identifiable information. But if you weren't a lawyer in the business and you were using AI tools, you might not default to that. And Zeno's got a good question in the chat: with the proliferation of AI, things like Drive plugins mean everyone is using these tools, with few controls, and you can't enforce policies at company level. How do you think about the internal clients you have, Lucy, and how to ensure that what they're doing is legally compliant?
Lucy Ashenhurst: Yeah, I think it's such an interesting question, and one that's really only come up in the last six months or so. But exactly as Zeno suggests, a lot of software now comes with built-in AI. Most of my clients are scale-ups. They very often use Notion, for example, for their intranet, and Notion comes with AI: would you like to generate a page about X? Let us write it for you. Or employees can ask, what's the policy on maternity leave? And it will scan the intranet and pull that information out. I've also used Plum, which is a wellness and HR platform. It has a similar system, a chatbot you can query: you upload all of your policies and all of your company information, and it will then read it for you.
So it's a very easily accessible form of internal AI, but there's a risk to it. I actually ran a Notion query while we were talking, just to see whether it would be accurate or not. I asked, what are the employee policies for this company? And it gave me a summary which was broadly accurate. But then it gave me links to a load of totally irrelevant pages within the Notion system that didn't apply, but obviously contained some keyword that it flagged. So this stuff is very easily accessible nowadays for almost everyone within their businesses, especially tech-focused startups; we tend to be faster at implementing it. But like you said, there's a risk. And I think this comes onto this idea of AI policies, because as lawyers we're naturally very risk-averse.
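The pattern Lucy observed, a broadly accurate answer followed by links to irrelevant pages, is what naive keyword matching produces; a toy sketch, with invented page titles and contents, shows why.

```python
# Toy illustration of why an intranet assistant can cite irrelevant pages:
# keyword matching flags any page containing a query term, regardless of
# context. Page titles and contents are invented for the example.
PAGES = {
    "Maternity leave policy": "Employees are entitled to 52 weeks of maternity leave...",
    "Office move FAQ": "Before we leave the old office, please pack your desk...",
    "Sales playbook": "If a prospect asks you to leave a voicemail, follow up by email...",
}

def keyword_search(query: str) -> list[str]:
    terms = query.lower().split()
    return [title for title, body in PAGES.items()
            if any(term in body.lower() for term in terms)]

print(keyword_search("maternity leave"))
# All three pages contain "leave", so all three come back as "relevant".
```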
So we're much more likely to be your gold-star, sensible users. And the question you posed is exactly right: what about all the other people using it within the business, whether that's a customized AI tool or just one that's fed to them automatically by software you've chosen for reasons other than its AI? Having worked for startups for so long, I'm anti-bureaucracy in general, but I'm coming around more and more to the idea that some sensible AI guidelines for the business are really valuable. And Nik, I think you said that you've started being asked about this as well. Is that right?
Nik Theodorakis: Yes, yes, Lucy, that's exactly right. We regularly advise on how to mitigate the risks associated with AI, because that's something that keeps almost everyone awake at night. At the same time, it's important to acknowledge that it's an evolving technology, so the mitigating steps will also evolve accordingly. But some of the points you raised, Lucy, are extremely important and amount to exercising common sense, which will typically keep us out of trouble most of the time. I would add some specific points to that.
For example, when we want to use AI in work for a client, it's important to first check the contract we have with that client to make sure there's no prohibition on us using AI with that client's data, to the extent that we input any of that data into the AI system, because some companies, as I mentioned before, are extremely risk-averse at the moment. They don't want any AI tool to be used when you process data on their behalf, for example as a processor or service provider. So before engaging an AI vendor as effectively our sub-processor, it's important to check our contracts and make sure there's no prohibition that would prevent us from doing that.
It's also important, to the extent that we can, to tailor the AI solution we're using to our business case. For example, many AI products offer the option to turn off algorithm training on our data, or to delete all the data once we close a session. To the extent that we can control how the AI is using our data, that is extremely helpful and a risk-mitigating measure as well. Along the same lines, it's important to conduct due diligence on our vendors and essentially do our homework. That's very important for GDPR purposes, but also for AI altogether. So before we engage an AI vendor, it's important to check whether they have a privacy policy that makes sense, a DPA (data processing agreement) that makes sense, and whether there is any funny language in the DPA that would allow them to do more with our data than we would like. Also, and we can maybe talk more about it later on, there's a recent trend for companies to draft AI policies altogether, which are more overarching, constitutional documents around their use of AI. These are the kind of green flags that would allow us to say, okay, that vendor is a sensible vendor.
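Nik's due-diligence points translate naturally into a simple checklist you could keep alongside a vendor register; a hedged sketch follows, where the fields and the "minimum bar" are illustrative rather than any legal standard.

```python
# A simple record of the vendor due-diligence points discussed above.
# Fields and the "minimum bar" are illustrative, not a legal standard.
from dataclasses import dataclass

@dataclass
class AIVendorCheck:
    vendor: str
    training_on_our_data_disabled: bool  # can model training on our data be turned off?
    session_data_deleted: bool           # are inputs deleted once a session closes?
    dpa_reviewed: bool                   # DPA read, no "funny language" found?
    has_public_ai_policy: bool           # overarching AI policy published?
    client_contracts_permit_use: bool    # no prohibition in our own client contracts?

    def minimum_bar_met(self) -> bool:
        return all([self.training_on_our_data_disabled,
                    self.dpa_reviewed,
                    self.client_contracts_permit_use])

check = AIVendorCheck("ExampleAI Ltd", True, True, True, False, True)
print(check.minimum_bar_met())  # True; a public AI policy remains a green flag
```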
So it makes sense for us to use them as well. Then I think internally it makes sense to try to create an AI task force. That of course depends on the size of the company; the task force can be a single person, like Lucy mentioned before, but essentially it should make sure it keeps abreast of all the developments that have to do with AI, and that it talks to the other departments of the company. Because I find with many clients that, interestingly, the different departments don't really talk to each other meaningfully. You have legal, you have sales, you have IT, and they each have a different perception of how AI can be used and what we can meaningfully do with the technology. So it's important that everyone's on the same page, because that can also help with AI adoption altogether. And then finally, try to organize internal training sessions so that every staff member knows how to use the technology and how to be sensible about it: to scrub data before they put any input into the algorithm, to double-check any output, stuff like that. A sanity list that would make everyone's life much easier.
Richard Mabey: It's interesting. I'm just seeing, Lucy, your question in the chat as well to Nik. I mean, in terms of these policies, how prescriptive do you go? Do these need to be watertight at this stage or is there room for maneuver to have more of a sandbox approach and allow people to get familiar with the tech?
Nik Theodorakis: It's the latter for the time being. So it's more of a sandbox, high-level, constitutional approach: the key principles we rely on when it comes to using AI. It's currently not a legal requirement as such, right? For some companies it may become one with the EU AI Act, depending on the risk level of their AI system. But many other companies see it as an opportunity, both internal and external. Internal, because it makes them work out exactly what they want to do with the technology, how they want to use AI, how they can be transparent about that, and the principles they follow to mitigate the risks we mentioned before and to ease any concerns their clients have. And external, because it's also great for marketing purposes: if I were a client and I visited a website and saw that a company has an AI policy, I would definitely think that company has put thought and effort into it. They're not just using AI because it's the buzzword of this year, and hopefully of many years to come; they've actually put thought and energy into it, they have a plan, and they know what they want to do with AI and how they want to protect us. So a sandbox, high-level approach makes lots of sense right now, and I think as time goes by, and as we become more familiar with the technology and how we use it, those policies will become more specific down the line.
Richard Mabey: Lucy, one thing you mentioned about working with these early-stage businesses taking on new tools like Notion got me thinking about the Slack example. I think it was about last week or the week before that this huge firestorm kicked off, where Slack basically updated their privacy notice and said, we're going to use your data to train our algorithms. It made me think that with these tools, it's one thing to have a policy; it's another to ensure that users aren't just clicking, yes, I want the AI add-on, and starting to use it. I'm curious what you see there in practice, and what the mitigations are to ensure the gap between the policy and reality is bridged.
Lucy Ashenhurst: Yeah, absolutely. That was one of the notes I took from listening to Nik, actually; I'm learning as much as I'm speaking here, definitely. The idea of having that AI task force, or at least someone holding ownership over it within the business: I was thinking through the ramifications of that, and I think you're exactly right. I'm guilty of hearing AI and thinking quite specifically about ChatGPT or analogous tools, but actually it's embedded in so many things. Slack is an example. You don't necessarily see it as AI, and it might not be presented to you in the same way, but it's still making automated decisions. And we know, for example, in employment law and privacy, there are prohibitions on making employment decisions using automated selection. So I think my takeaway is that one of the things you might want to be slightly more prescriptive about early on in an AI policy is the types of AI tools, the specific websites, tools and platforms, that the company approves, and actually have someone read those terms and conditions that everyone clicks agree to. Where is that data being stored? How is it able to be used? Is it used in training? Those kinds of things. I think we can all be guilty of just clicking yes when it's a SaaS product.
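Lucy's suggestion could be as simple as an explicit allowlist in the AI policy, recording the review status of each platform; here is a sketch with fictional vendor names and placeholder attributes.

```python
# Sketch of an "approved AI tools" register for an internal AI policy.
# Vendor names and attributes are fictional placeholders.
APPROVED_AI_TOOLS = {
    "VendorA Chat":        {"terms_reviewed": True,  "trains_on_our_data": False},
    "VendorB Intranet AI": {"terms_reviewed": True,  "trains_on_our_data": False},
    "VendorC Messaging":   {"terms_reviewed": False, "trains_on_our_data": None},  # review pending
}

def is_approved(tool: str) -> bool:
    entry = APPROVED_AI_TOOLS.get(tool)
    return bool(entry
                and entry["terms_reviewed"]
                and entry["trains_on_our_data"] is False)

print(is_approved("VendorA Chat"))       # True
print(is_approved("VendorC Messaging"))  # False until the terms are read
```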
This is an excerpt from the full transcript. To watch the webinar in full, click the preview at the top of this page.
Join our private community of 1000+ in-house lawyers at scaling companies for exclusive events, perks and content.