How GCs are adapting to AI
AI is transforming businesses across all sectors, and the in-house legal sector is no exception. We spoke with two General Counsels—Guy Smith from Holmes Group and Jason Hawthorne from Serko—about their experiences and how AI is changing the legal landscape.
How is your team currently using Generative AI, and what use cases are delivering the most value?
Jason: We believe it will offer the most efficiencies for legal research and contract review, but we are using it more broadly than that. We've also been using Copilot and other tools to draft odd clauses or disclaimers that we don't have a precedent for, provide a first cut of new policies or procedures, and even to draft ESG reporting content. The outputs will often give us a decent starting point that we can tweak and take from 80% to 100%, accelerating the process, and a lot of the time, no confidential information is required as an input.
Guy: We are exploring AI opportunities across our business. AI has a lot to offer engineers, and the organisation as a whole is exploring how to use new tools to do our work better. The legal team has piggybacked on that enthusiasm by using those tools and exploring some of our own.
With the help of our internal Advancements team, we've developed our own bespoke AI-based contract analysis tool called Matlock. It can review a contract, compare it to a standard form and highlight irregular clauses, and as we train it, it can suggest replacements. It's still new and needs careful supervision, but it's really helped us see the potential of these tools.
There are other types of tools we're exploring too. We're also trialling a matter tracking and filing system called Zoo Library – it has AI underpinning it, so it plugs into Microsoft 365, sorts emails and documents into matters for me, and helps me allocate and track those. It's doing the sort of work a team administrator might do for me, and I'm enjoying it.
Jason: One of my favourite tools so far is Google NotebookLM, a free service that you can upload a document to, and it'll create an audio file where two people (computer-generated voices) talk about the document in a podcast format. We used this to create an AI podcast on our Code of Ethics for internal training, and supplemented the audio with a synchronised slide deck, so that when the AI hosts discussed our Code, a slide with a section summary directly from Legal would show alongside it. The result was a 13-minute podcast-format training that we rolled out to all staff and incorporated into our onboarding materials. It has turned out to be the most engaging Code of Ethics training I've come across – the speakers are so excited about the topic when they talk that it really draws you in.
Are you finding AI is creating more efficiencies? If so, what are you using your extra time for?
Jason: It's allowing us to shift focus to higher-value work. The Code of Ethics is a good example: I'm a strong believer in the value of in-person training internally, but we don't need in-person training from Legal for onboarding all staff. I'd rather devote more time to targeted legal training for more impactful sessions, like Privacy by Design with our senior architects and product managers – and it's easier to do that if we can mass-produce some other trainings.
Guy: Right now, I am finding you have to put quite a lot of time into learning to use the tools properly and thinking about how you are going to use them. We are still in the learning-to-ride-a-bike stage: we know the bike is going to make us faster, but we are still figuring out how to make it go straight. What AI is doing reliably is improving the output and doing things you'd usually ask another colleague to do, like peer review – looking for obvious errors. LLMs are useful here; a lot of my advice goes out as written advice by email, so an LLM does a good peer review – it throws up content issues or ideas that are sometimes useful – so it's definitely improving the quality of the work that's going out to the business. But for now, is it making us faster? Probably not.
One other thing we're noticing is a growing number of requests for contractual clauses relating to the use of AI, so that's creating a drag. At the same time, we are giving some thought to whether we need our outbound documents to speak to AI specifically.
Beyond legal compliance, what role should GCs play in shaping their company's broader AI policy/strategy?
Jason: We have a big role to play. At Serko, Legal drafted the initial GenAI policy and rolled it out. It was very much focused on ensuring that appropriate safeguards were in place for sensitive inputs, like the source code of our software when using AI tools for engineering. It was a collaborative effort between tech and Legal to create the policy, and that policy is still owned by the GC and CTO.
Within that policy we now have a pilot programme – an internal process where people can put forward different AI solutions, and we will work through them, assess the risk, approve the pilot and set parameters to ring-fence it. If successful, we can move to an approved enterprise solution for wider use. With our list of approved tools, it is safe to input confidential information thanks to our contract terms and the configuration of private libraries.
Legal is also part of an AI 'Community of Practice' at Serko, a group comprised of different stakeholders within the business, which allows us to be plugged into, and to guide, the planning and operational use of AI.
Guy: We have AI use guidance, and I had input into that, so I've been involved at that level. We control it the same as any other tool we're using: I ask questions about privacy and security and query how data flows through the different tools. Our leadership team takes an interest too, and Advancements – our team that looks after this stuff – is a standing agenda item. But the business is open to some risk, because we think you have to embrace these new ways of working to keep ahead of the game. We do have guidance about what data you should absolutely NOT put into the system, but ultimately we want people to access the tools, and we have a lot of trust in our people.
How can GCs lead differently in the AI era?
Guy: I think, as a general rule, part of the role of a good GC is to enable your business to be brave and take risks. So the best thing I can do is to understand the risks as best I can, but then stay out of the way as much as possible, expressing concerns only when I think people don't have a handle on risk. It's tempting to be alarmist – I remember when everything moved from on-premises to the cloud, there was alarm. Some alarm is part of the job, but too much can do a disservice to an organisation, because risk is part of any enterprise.
Jason: We're fortunate to have access to forums like the monthly SHIFT GC Network sessions, where there are a lot of talented and forward-looking leaders willing to share learnings and insights, and I think having the ability to keep pace with changes and flow learnings into our teams is really powerful. It's also an area where GCs can encourage tech-savvy junior members of the team to take the stage and share their knowledge and expertise with the seniors – we had a team workshop recently, and I learned a huge amount about how to better tailor and sequence my prompts.
Will AI Replace Lawyers?
Guy: Some lawyers, yes. In terms of the profession more broadly, the generation of young lawyers coming through now in this era of AI is going to have to wrestle with how to use it while still developing critical skills. Junior lawyers today could conceivably never do any research and just become AI prompt wizards. The profession is going to have to think about that – I certainly don't have the answer. At the stage where AI replaces lawyers altogether, it's probably also replaced the Courts, and disputes are being resolved between AIs talking in some sort of electronic language we can't understand – and when that happens, it won't just be lawyers who have something to worry about.
Jason: For our team right now, it's a tool to enhance existing legal work, but I do think it will fundamentally change in-house legal roles, in a similar way to how the internet transformed those roles. Efficiency gains from AI will naturally mean that businesses or firms need less headcount or legal capacity to accomplish the same amount of legal work – which could be partly offset by increased regulation and operating complexity. Could I be replaced by an AI agent? Selfishly, I hope not!