AI Couldn’t Summarize Depositions – Until Now, with John Campbell
Episode Summary
Imagine you’ve got a big case and you want a summary of every deposition. “The first temptation might be, ‘Well, I'll just throw my depo into an AI, and I'll have it summarize it.’ But if you've ever tried that, it would choke on the depo.” That’s how John Campbell describes the problem. His expertise comes from pioneering the use of big data in litigation and writing JuryBall, the groundbreaking book about big data, with his wife, “The Fred Files” co-host Alicia Campbell. His solution? A deposition summarization tool called Res Ipsa AI, which he developed with software engineer Kevin Doran, another member of “The Fred Files” team. John and Kevin join Alicia and empirical legal scholar Nick Schweitzer to give a behind-the-scenes look at this tool that transforms 100-page depositions into 10-page interactive summaries with hyperlinks to original testimony. “What we started thinking about was, what if we could build a tool that’s not a Swiss army knife; it doesn't do a thousand things?” John explains. Their tool does one thing: summarize depositions, quickly and accurately.
Learn More and Connect
☑️ Kevin Doran
☑️ Subscribe: Apple Podcasts | Spotify | YouTube
Transcript
Voice Over (00:03):
Every trial lawyer knows that moment when you've built what feels like an airtight case, but you're still lying awake wondering: what will the jury actually think? Jury research was once a luxury reserved for cases that could support a big data study bill. Not anymore. Join trial lawyer and trial scientist Alicia Campbell, empirical legal scholar Nick Schweitzer, and data guru Kevin Doran as they break down the barriers between you and the minds of your jury. This is The Fred Files, produced and powered by LawPods.
Alicia Campbell (00:38):
Well, hi, thanks for joining us at The Fred Files. We're happy to have a guest with us this time. I think the first three episodes were just Kevin, Nick and I talking about Fred, but we're expanding a little bit because we want to cover a lot of different topics during The Fred Files. We want to bring all kinds of lawyer tech to the forefront to the extent we can. So we're going to do some lawyer tech talk today, but we've got a lot of things coming up that we're going to be discussing in future podcasts that won't just be Fred related. We want to make sure that this is an expansive podcast that covers all things dealing with data and law and tech, kind of the future of law, really. So of course Kevin and Nick are here with me. Our guest is John Campbell, a dude I barely know. So we're going to talk a little bit today about his product that he came up with. Kevin, of course, Kevin's our resident, I don't know what you want to call it. Resident
Kevin Doran (01:36):
Software developer.
John Campbell (01:38):
I was going to say tech guru.
Kevin Doran (01:40):
I hesitate with the data guru thing. I feel like Nick is probably a bit more, I can't even remember my stats class or anything like that, but yeah, I could help you out with that. Yeah, exactly.
John Campbell (01:51):
How about Tech Guru?
Kevin Doran (01:53):
Yeah, tech guru.
John Campbell (01:55):
Good title for you.
Kevin Doran (01:56):
Yeah, guy who writes too much code. Yeah, that's how I think of myself.
Alicia Campbell (02:00):
Data code stuff, it kind of works, right?
Kevin Doran (02:04):
Yeah,
Alicia Campbell (02:05):
But you guys are going to talk about what you built, and it's called Res Ipsa AI. So why don't you tell us a little bit about Res Ipsa. John, do you want to do an introduction? I don't know if you really need to, but go ahead.
John Campbell (02:17):
I'm Alicia's husband and I work with these folks. First of all, thanks for the invite. I got so excited when I saw the invite to The Fred Files. It's a big day for me, so thank you. This is the up-and-coming podcast. I've heard a lot of people talking about it, so I'm excited to chat with you. Second, I think Kevin is going to be able to tell you a lot more about the nuts and bolts of Res Ipsa AI, but I thought it might be useful to tell you from a lawyer standpoint where this came from. And so maybe it's simpler, or even better, to start back at what litigation looks like and what trials look like. Litigation, as we all know, is kind of a depose-anything-that-moves climate, with the exception of a few states, right? In a few states you don't depose experts.
(02:57):
In a few states the deposition culture is lighter, but in almost all states the idea is if there's a witness, an expert, anybody that might know something about the case, depose them, because you don't want surprises. You want to develop the record, and because often you need that evidence for summary judgment, or to hold off a summary judgment, or for settlement negotiations. And so the result is, in big cases it is not uncommon to have dozens of depositions. Certainly I think most lawyers listening will say, well, I had some case where I had more than 40. Maybe somebody will chat or write us and say they had a hundred. Depositions have become a core part and a core expense in cases. But my experience working on complex litigation is that it's very rare that the same lawyer takes all those depos. And so what really happens is a variety of lawyers from a firm take depositions over a period of months, years, or sometimes approaching a decade of depositions.
(03:51):
And then there come times, either before summary judgment, before putting together a mediation statement, and particularly before trial, when you really need to know what's in all those depositions. But since no one person took 'em all, or even if one person did take 'em all, they were also working 30 other files and taking other depositions. As it happens, I was just reviewing a presentation today from lawyers, because I do big data stuff with Alicia and so I see a lot of case presentations, and all the comments were, hey, we need to check this for accuracy. This is just my memory of what the witness said, but somebody needs to pull the depo and see if he actually said this. And so often when people actually check to see if the depo says what they remember it saying, it doesn't. So we've all been faced with this reality that, hey, I'm getting ready for trial.
(04:39):
Probably the best practice is I need to sit down and read 30 depos that average 150 pages each. So I just need to read 4,500 pages or so, since I don't have anything else to do, and that's how I'll get ready. There are very few lawyers who do that. So where do we end up? What we end up with is either relying on others to tell us what the depos say, reading the depos ourselves, which is a massive time investment, or somehow getting these depos summarized to at least then start to look for the nuggets. The option that should be off the table, right, is go to trial or summary judgment or mediation and not have looked at the depos. But I can tell you it happens, and anybody listening knows it happens. So Res Ipsa AI came from this idea: ChatGPT was kind of growing, AI was getting better, and these large language models were starting to work.
(05:34):
The idea was, all right, can we leverage technology so that we could engage in this best practice of knowing what's in every depo, but in a time-efficient way? And the first temptation might be, well, I'll just throw my depo into an AI and I'll have it summarize it. But if you've ever tried that early on, it would choke on the depo. The depo was simply too long and it would just error out. Even if you found a depo that would fit, or now with more tokens, it would give you a couple-paragraph summary of a hundred-page depo, which is not useful. And so what we started thinking about was, what if we could build a tool that's not a Swiss army knife; it doesn't do a thousand things, right? It's not one of these things that manages your clients and also gives you legal research and also helps you with your billing.
(06:18):
What if we just built a really good tool that lets you upload a depo and get back a meaningful summary that's broken down by pages, so that you can find the segments that matter to you? It gives you a good, accurate summary of, say, pages one to five, six to 10, 11 to 15, and then lets you, when you find the things that sound interesting and important, go to the actual depo, verify and check and see if that's right, and then use those real quotes. That was the idea. Now, if you've ever had a depo summarized, let's assume you need your depos summarized because you can't always read 'em all and you need to know what's in them. For a long time, your options were: all right, well, I can pay my associate attorney or a paralegal to do it, but that has an hourly cost, and of course while you're paying them, they're also not doing something else.
(07:08):
You could pay an outside company to do it, in which case the going rate for a long time was something like two bucks a page. Or, as we talked about, if you didn't do any of those things, you could try to do it yourself, which for most busy lawyers is not realistic. Particularly, I'll mention, in the world we're living in now, where you have lots of trial lawyers who kind of parachute in because they're very good trial lawyers but they haven't been involved in that file all those years, it's even more important that they can get a handle on the file quickly. But that means getting up to speed not only on exhibits and animations or whatever else, but on the most important thing: all the testimony, which is contained in depositions. So that's Res Ipsa AI's marching orders and mindset: can we take depositions and make meaningful, usable, easy-to-read summaries that will allow attorneys to understand what happened in that deposition and then zero in on the parts that matter for their case?
(08:03):
Because in any given deposition, it might only be 10% of it that really matters, but it's spread out across those pages. And then go to the real depo, verify, understand, read the real words, and use 'em. And in doing that, can we save 'em time? Now, if we can do that, and we can do it far cheaper than the alternative options, then we have something that's good for lawyers and is a good business model. And so that was the goal of Res Ipsa AI, and as you listen to us talk about it, and especially hear from Kevin what he's done, I think you'll hear that we have crushed the cost, that it is radically cheap now to have good summaries of your depositions and the interactive ability to go check the real transcripts quickly, and the product itself is good, accurate and quick. And so to cut to the chase, if you're listening to this and you're like me, you're impatient on podcasts: just imagine that you could now upload 20 depositions, quickly receive 20 summaries, and that would cost you about a hundred bucks, instead of those same 20 depositions, even if they were a hundred pages each, being 2,000 pages and costing you $4,000.
(09:09):
And imagine that those summaries you get are good, clear and accurate and can immediately let you engage, so that you could upload 'em, have 'em back the next day, and spend one day working through summaries and going live to the depo and really feel like you know the file, instead of spending weeks. That's Res Ipsa in a nutshell, and why we're pretty excited about it and why we are bringing it to market.
Alicia Campbell (09:30):
Awesome. Tell me, Kevin, a little bit about formatting: what can Res Ipsa AI handle, and how does it work?
Kevin Doran (09:37):
Yeah, that's funny. That's not where I was going to start, but it's probably very interesting for people listening. The whole idea was that if you're getting deposition transcripts, you shouldn't really have to think very much about getting a summary. You can just go into Res Ipsa, which is a webpage, you dump the transcripts in, you click go with one button, and a little bit later, hopefully minutes in most cases, you get all of them back. And this was John explaining how this is supposed to work to me over and over again, because when you're building stuff on the software engineering side, you have a million ways you can build something, and there are so many options for how you could do what I just described. And the reason it works like that is because that's what John knew would work as someone who has to deal with these depositions: you're about to go to trial, you have maybe 15 depositions for this particular case, and you want to just dump them in and get summaries.
(10:38):
We did iterate a little bit on how wonderful it would be to email those to Res Ipsa and just have an email address. But as anyone working in security knows, there are still, unfortunately, some email providers who aren't great about encryption, and in general you just don't want to put sensitive stuff into emails. So that's the only reason you have to go to a webpage and upload them. But after that, it's hopefully as easy as just hitting go. As for that question of what formats: we try and handle formats that are common in transcripts, including a big PDF that has those four pages per page, with all of the line numbers already there. There's sort of a strange disconnect between looking at a PDF file that says, in your PDF reader, page three, when the actual text that came from the transcription service says page eight. All of that stuff we've tried to get very specific about solving, so that it's kind of just this engine for you.
(11:36):
You just go and you dump them and you get them back. And I think the reason that, to me, that's one of the best ways to use AI in these tools right now is because it's just solving an immediate problem for you with new changes in computers, and that's it. And that to me is so much more exciting than talking to, say, a sales rep at a very large software company. Usually in this space it would be for a legal operating system that can handle everything for a solo practitioner or a small trial attorney team. A person selling that sort of software is going to tell you about the strategy of using this software, of how we're going to handle everything for you and you're going to live inside of our operating system. And all of us who have been in those worlds know that those tools have tons of pros and cons, and they can get rather complicated pretty quickly.
(12:28):
So what I really liked about this was that it was literally right after ChatGPT got popular, right after OpenAI started releasing these APIs, that John kind of said, hey, what do you think about this idea? And I thought, oh my god, that's brilliant, because it's going to use this thing immediately for you to get help in the way that AI is best, which is something very specific. So that's kind of why we try and take whatever the formats are, if it's a text file, if it's a Word doc, if it's a PDF, however it's coming over from the transcription service, and we use AI for what it can be really good at, which is a very specific workflow thing. And I kind of want us to talk about AI in general as it relates to Fred, because we're all here on the call, and so I don't want to get too far into that, but there are kind of two main categories, and the other one, as we know, is big strategy conversational stuff.
(13:19):
It can be amazing at helping you think about a topic in a conversational way, or dumping lots of documents in, or having it kind of iterate on things. But when you get really specific with AI, that's when you have to be really careful, and that's when you need to do something like: here is specifically pages one through eight of this 100-page deposition; I need you to give me a very clear summary of these eight pages, and I need you to do that iteratively over this entire document. Then I need you to do this for 10 other documents. And carefully walking through that process and doing it well is one of the advantages of using a software tool for this. And it's also one of the nice things about using a specific deposition summary tool over one of these operating systems that's trying to do a million things for you.
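The page-range workflow Kevin describes can be sketched in a few lines. This is a minimal illustration, not the Res Ipsa implementation, and `call_llm` is a hypothetical stub standing in for a real model API call:

```python
# Sketch of the chunked-summarization loop: split a long transcript into
# fixed page ranges, summarize each range, and keep the page numbers
# attached so every summary can point back to the original testimony.

def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call an actual LLM API here.
    return f"[summary of: {prompt[:40]}...]"

def summarize_deposition(pages: list[str], pages_per_chunk: int = 5) -> list[dict]:
    """Summarize a deposition chunk by chunk, tagging each summary
    with the 1-indexed page range it covers."""
    summaries = []
    for start in range(0, len(pages), pages_per_chunk):
        chunk = pages[start:start + pages_per_chunk]
        prompt = (
            f"Summarize pages {start + 1}-{start + len(chunk)} "
            "of this deposition:\n\n" + "\n".join(chunk)
        )
        summaries.append({
            "first_page": start + 1,
            "last_page": start + len(chunk),
            "summary": call_llm(prompt),
        })
    return summaries

# A 12-page depo with 5 pages per chunk yields ranges 1-5, 6-10, 11-12.
result = summarize_deposition([f"page {i} text" for i in range(1, 13)])
```

The point of carrying `first_page` and `last_page` through is exactly what John describes later: every summary segment stays traceable back to the pages it came from.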
Nick Schweitzer (14:05):
Kevin, can you talk a little bit, you mentioned a little bit about how this is working. I mean, for somebody who might just be like, well, I'm just going to upload this file into ChatGPT and I'll just tell it to give me a summary, there's obviously a lot more that goes into it. Can you talk about how you engineer all of this so that what you get back is not garbage with a lot of made-up stuff in it?
Kevin Doran (14:26):
Yeah, definitely. So this is a great question about using AI for everybody, and I think as we all learn more about how to use these LLMs in our workflows, there are some heuristics and things that we've all kind of figured out. It's that when you give it too much information, it can get kind of confused, and a lot of that has to do with this thing we call the context window, which is basically: any one of the models, OpenAI's, or Claude, or Google's Gemini, all of these tools, they can't handle very much information at once. So there's a whole industry popping up of trying to help people deal with that limitation, and the way that they're doing it is kind of weird, and there's actually a lot of bad advice floating around, I feel. There are a lot of people who say, we have solved the context window problem.
(15:17):
If you dump 50 documents into our tool, we're going to magically know everything about all those documents and we can do anything you want with them. And as users, you've probably already figured out that's usually not true. And so Res Ipsa is a great example of this. We already know our context here, which is: imagine you're a busy attorney with 15 documents and you want them summarized. If you went and dumped those into ChatGPT or Claude or one of these chatbot tools that these companies put on the front of their models, those tools would definitely, without question, give you an output and proudly tell you that they have done what you asked them to do. And I think we all now are very familiar with this frustrating thing where it just says, with so much pride, I've done what you've asked me, and it's not at all what you asked.
(16:06):
That happens a lot in this case because of this kind of under-the-hood thing of the context window. Even Gemini, which now boasts a 1 million token context window, maybe some 750,000 words, is supposed to be able to handle those 15 documents, if they're all a hundred pages each or 50 pages each or whatever it is, but it's just going to still get a little confused, and it's going to just ignore that confusion. All of these generative models, all they want to do is generate. They're just really excited to generate things for you, so they're going to tell you that they generated something, and they're not amazing at telling you what their limitations are. So yeah, you're going to upload these docs to ChatGPT, it's going to come back to you and say, I've done it, but it's going to have done a really bad job of it.
(16:54):
It's going to have ignored lots of details, not looked carefully at each section, not done an amazing job of considering the context of what may or may not be important kind of page by page, and it's going to give you sort of misleading results. And this is true beyond depositions or the legal space. In general, what we humans can really do well with AI is give it the right context in the right way. So if you wanted to do this with ChatGPT yourself, you would need to do it section by section, document by document. You'd need to take the time to sit there, think of good prompts, carefully go through each section and say, of these 25 pages, give me the really important things, and then kind of just keep doing it, copy and paste it over and over again, until you handle lots of edge cases and lots of different formatting, deal with AI getting confused by specific things, deal with it being confused by different personas or getting lost in the weeds. And that's why you might use a specific tool. For lots of different AI stuff, these general-purpose tools are powerful and they're great, but there's so much more out there that you can do now by having tools that people built specific to certain tasks. As a developer, I hear people all the time like, oh, I can just do that with ChatGPT, and I'm like, I pay for different tools all the time in my workflow that are, yes, just ChatGPT wrappers, and I love it.
(18:16):
It saves me hours. It's the same thing as anything; it's really helping me not have to spend a ton of time carefully going through the context. So anyways, Nick, I just gave you a 20-minute answer to your question. No, no, but...
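As a rough illustration of why the context window forces this section-by-section work, here is a back-of-the-envelope sizing sketch using the common heuristic of roughly four characters per token. The window size, reserve, and page-length numbers are illustrative assumptions, not any provider's actual limits:

```python
import math

def estimate_tokens(text: str) -> int:
    # Crude ~4 characters-per-token heuristic; good enough for planning.
    return math.ceil(len(text) / 4)

def chunks_needed(text: str, context_tokens: int, reserve_tokens: int = 2000) -> int:
    """How many chunks a document needs if each request must leave
    reserve_tokens free for the prompt and the model's reply."""
    usable = context_tokens - reserve_tokens
    return math.ceil(estimate_tokens(text) / usable)

# A 100-page depo at ~2,000 characters per page, against an assumed
# 16k-token window, needs several passes rather than one.
doc = "x" * (100 * 2000)
print(chunks_needed(doc, 16_000))
```

Even models with much larger windows benefit from the same arithmetic: smaller, focused chunks keep each request well inside the budget instead of leaning on the limit.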
John Campbell (18:31):
Can I pitch in on that answer? I would just say this too, and we mention this on our site, we're thinking through it because of course we wanted to make a good business case for the product, we wanted it to work, but we also wanted people to be excited and like, hey, this is great, win-win. The thing that stands out to me as a lawyer is, if I can get 15 depo summaries for about $75, it's not worth me even logging into ChatGPT, because my time has value, my staff's time has value, the frustration of doing this has value, the risk of error has value. And so we really wanted to make something where anybody who heard about this and tried it, and then said, yeah, that works like they said, would say: it is illogical for me to try to do this myself.
(19:16):
And I think that's what this is at this point. To be able to drop some docs, hit send, and then quickly receive all of them ready, and know that all this engineering has gone on in the backend to make it work, it would be sort of insane to say, well, I just don't want to spend that money, I'll have my secretary figure it out. Because the hours that he or she spends figuring it out, even if that's your lowest-cost worker, don't justify them doing this. Even if they built prompts and could copy and paste them and then pull out the segments and then somehow paste them into a document where they look nice and all that, it still wouldn't make any sense. In any given case, a video deposition, depending on the state, costs somewhere between $1,500 and $5,000. For 20 depos, you'd spend a hundred bucks to have 'em all summarized, right? This is an infinitesimal cost. This is lower than the copy costs. This is lower than any kind of expense you have in a case.
Alicia Campbell (20:13):
Not only that, I think the other thing we probably should talk about a little bit, John, you mentioned earlier, is people parachuting into cases or getting caught up on the file. What makes this very different? Because we kind of led with that, and then Kevin came in and said, you can give AI context and then it does much better. The output of these summaries is really phenomenal. So I think you guys should talk about how it's not just a summary that AI produces, because then you get this: well, yeah, but what about this one piece of testimony? How am I going to know how to locate it? So maybe either one of you could talk a little bit about the output that you get from Res Ipsa and why it is so powerful with what you've done with AI.
Kevin Doran (20:56):
Okay, I'm glad you brought that up, because I realized that maybe we started talking about it when you don't really know what to visualize. So yeah, what I also really love about this tool: I just saw an influencer that I follow in the software engineering space talking about the importance of getting a file out of a tool and being able to hang onto it, versus a document living in a web tool that you go back to, and the web tool has updated, and your login's broken, and the format's different, and suddenly it's like you don't have ownership anymore. So that was another exciting thing that John kind of hammered on from the beginning. It's like, no, I just want a PDF and I want it on my hard drive. As a software engineer, I was imagining how beautiful the web interface could be and how many different things we could build into the reading experience, and maybe there's a speed reader and you can put in words per minute, and it's like, no, give me a PDF so I can have it on my computer and I can leave your tool as quickly as possible.
(21:52):
That's thing one. Then, Alicia, I think the second thing you were getting at is that it's actually not just a summary in that PDF; it's an extended deposition transcript, meaning the summary is there and then the transcript is also in this PDF. So you can replace the original deposition file with this one, because you're looking at a summary at the top of the file, and then it's hyperlinked within the PDF to jump down to what the summary was telling you about. Like John was saying, more importantly, you can read the original words. So you're looking at the summary, and AI's very good at summarizing things, and you can see that it said the person was talking about this specific thing that was very important, that 10% of the depo that's important to you, and then you just click "see original" and you jump down and you see the original deposition, and you can read the real lines and know these are not AI-generated lines. We literally are putting the real transcripts, just reformatted, at the bottom, so you don't have any risk of looking at the AI's screwed-up words of what it accidentally remembered the transcript to be. So you get the best of both, I think, which is really good: the ability to see that the AI more or less took a really good pass at summarizing what happened, and then you get to just jump into the actual details.
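The summary-on-top, original-underneath layout Kevin describes can be sketched with plain HTML anchors standing in for the PDF's internal hyperlinks. This is an illustrative mock-up, not the product's actual PDF generation; the section fields are assumed names:

```python
import html

def build_linked_document(sections: list[dict]) -> str:
    """sections: [{"pages": "1-5", "summary": ..., "original": ...}, ...]
    Returns one document: summaries with "see original" links on top,
    verbatim transcript sections with matching anchors underneath."""
    toc, body = [], []
    for i, sec in enumerate(sections):
        anchor = f"original-{i}"
        toc.append(
            f'<p>Pages {sec["pages"]}: {html.escape(sec["summary"])} '
            f'<a href="#{anchor}">see original</a></p>'
        )
        body.append(
            f'<h3 id="{anchor}">Pages {sec["pages"]} (verbatim)</h3>'
            f'<pre>{html.escape(sec["original"])}</pre>'
        )
    return "\n".join(toc + body)

linked = build_linked_document([
    {"pages": "1-5", "summary": "Witness describes the hiring process.",
     "original": "Q. Who vetted the sub-carrier?\nA. No one did."},
])
```

The key design point is that the verbatim text is carried through untouched: the AI only ever writes the summary layer, never the quoted testimony.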
John Campbell (23:11):
Let me give a concrete example of what Kevin just said, and then also some praise. So originally our thought, one of the things we were worried about, was: all right, we create a summary, the AI does a really good job, but maybe it generalizes a statement, like "the witness said this." Well, of course, we never want a lawyer citing a summary; that's malpractice. So we always knew the lawyer was going to need to go to the actual text and see it with their own eyes and say, is it as good as it sounds in the summary? What did they say exactly? We all know one little word could matter, or how the question's phrased. And so originally my thought was, yeah, it'll say pages one through five, witness said this, and the attorney will go to pages one through five and look for that. Kevin brought this to a whole new level by making it all interactive and offline, so that you don't need to be in our system to use it; it's all in a PDF.
(24:03):
So now imagine you're handling a trucking case, and you read in the summary that the corporate representative of the trucking company says something like: he admits the company did no vetting before hiring the sub-carrier, right, the carrier that delivered the load. You think, whoa, if that's true, if this person said that, that's great. The ability to just click and go right to that part and read with your own eyes, and if it's gold, copy and paste it right then into your filing, motion, hot docs summary, whatever you're using it for, and know that you have captured the exact text, page and line, is amazing. Because, yeah, I was thinking about use cases, whether it's your own file or somebody parachuting into a file to kind of take over and try a case. The ability now to scan what was maybe a hundred-page depo, what do you think it takes it down to, Kevin? Like a hundred-page depo to maybe a 10-page summary?
(24:59):
Yeah, so it's a tenth the reading, and then you get to that part where you're like, oh, this is the gold. This isn't where they were being evasive and talking about their background and I don't care. This is where the questions got good. And you go right there and see exactly what you need, pull it out and have it, and then do that to the next depo and the next and the next, and in an afternoon have your hit list of admissions and key things. Or, when you're prepping to put on your own witness, check carefully to make sure they didn't say anything that worries you, that they're not going to testify in any way that's inconsistent with their deposition testimony, that you're not going to expose them to impeachment or cross. This is game-changing. Alicia and I work a lot with attorneys who jump into cases because they're really good trial lawyers and they're going to be brought in to try the case.
(25:46):
We just did this recently on a big case. It is a real challenge for those lawyers to know the file and sense all the good and bad that's in it. But with summaries like this, it is realistic, even for the busiest, most successful trial lawyers in the country, to meaningfully read these summaries, go spot-check the things that worry or interest them, and feel much more confident that they have a genuine read on the case's strengths and weaknesses, and not what someone is reciting to them as best they can recall, which is by definition subject to all the human errors we all make.
Alicia Campbell (26:25):
Well, because part of that is that you batch it, right? So when we talk about these summaries, let's explain what we're getting, because everybody's like, well, wait, okay, so you're going to have these places where I have hyperlinks and I can get to the testimony, but if my depo is 625 pages, how am I going to get that in terms of output, and how are you guys actually doing it? So maybe talk a little bit about how you do the batching part, so that listeners, or the viewers on YouTube, who knows, are at least understanding how this works. Because one of the things I think is really remarkable about the tool, and it's why I use it and I love it, is that it is very specific. So I'm not reading just AI's full-blown summary of 625 pages. You guys have it down to where it can be as specific as I need, so I can skip over the background information of the witness with some confidence, as opposed to just reading a full summary with maybe hyperlinks here or there. Does that make sense?
Kevin Doran (27:26):
Yeah. The batching is wonderful to think and talk about, because it's becoming some of the main work everyone has to do in AI, and you hear this term being thrown around a lot right now: agentic AI. We're working on an agent, and I went to this wonderful talk by the CTO of one of these AI coding companies, where he and his team have to write a lot of code that takes other people's code bases and gets AI to do useful things with them. And he put up a slide, kind of one of those Scooby-Doo unmasking-the-villain things, and it's that agentic AI is just doing everything in what we call a while loop. It's just doing something over and over and over again, or batching something up, and thinking about: how many times do I need to iterate over this thing until I feel like I'm done?
(28:18):
And you can get super creative with this. Like that context window I mentioned: maybe it can only look at a certain number of pages at a time, but if you tell it to keep looking, it can do more interesting things. So if you get a 600-page deposition, we don't just have this one shot of, oh, we need to get it down to 60 pages and hope we did it right. What we do is we go through it in chunks and make sure each section has a really good summary, and then we can go through that again and make sure that we're getting good summaries of the summaries. And then you can do all these kind of crazy things, and there are whole experts and companies spinning up around this. But basically, you just have one AI check the results of another AI. You say, hey, how is this?
(29:01):
Did this get it right? And nine out of 10 times it goes, yeah, that looks pretty good. And then one in 10 it goes, whoa, that was a little weird, maybe do it again. So you get to do all these little tricks with batching, and that's really what all of these applied AI engineering companies or service providers are doing. They're just figuring out how many times we should ask the chatbot, basically, what to do here, and then, when we encounter this situation, go left, and when you hit this one, go right. And that's actually how we get through some of the formatting challenges and some of the input challenges. You can picture yourself as a person going through that 600-page file and being tasked with: make some summaries. That's what we're having the AI do: just go through and make good choices, in batches, about what a summary of these pages might be.
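The check-and-retry loop Kevin describes can be sketched like this. Both model calls are hypothetical stubs; a real version would hit an actual LLM API, and the hard-coded "first attempt is flawed" check just simulates a verifier rejecting a bad summary:

```python
# "One AI checks the results of another AI": generate a summary, have a
# second call grade it, and regenerate when the check fails.

def generate_summary(text: str, attempt: int) -> str:
    # Placeholder for the summarizing model call.
    return f"attempt {attempt}: summary of {text[:20]}"

def looks_good(summary: str) -> bool:
    # Placeholder for a second model call asking "is this summary faithful?"
    # Here we pretend the first attempt is always flawed, to exercise the loop.
    return "attempt 1" not in summary

def summarize_with_check(text: str, max_attempts: int = 3) -> str:
    for attempt in range(1, max_attempts + 1):
        summary = generate_summary(text, attempt)
        if looks_good(summary):
            return summary
    return summary  # give up after max_attempts; flag for human review

result = summarize_with_check("Q. State your name for the record...")
```

Capping the retries matters: since the verifier is itself a model, an unbounded while loop could spin forever on a chunk neither model handles well, so the fallback is to surface that chunk to a human.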
(29:52):
And again, I like telling what the limitations are. It is AI; it's never going to be perfect, just like all computer software. It's trying to do its best, and that's what's really nice about doing these things in batches: you can try and catch some of those problems. And then also you're giving the human a strategy here. Even if you don't use Res Ipsa, if you're using some other summary tool, the gist of it is that the summary is there for you to reference the original. You want to use a summary in a way where it's guiding you to the important sections in the original, and you're never throwing the original away. You're always kind of doing the combo of looking at what the high level might be and then finding the important details at the same time. I have a question for all of us who have been working on Focus with Fred: when you hear us talking about summarizing depositions, how do you guys think about our summary work with juror comments in Focus with Fred?
Nick Schweitzer (30:47):
It's similar and it's not, because in Fred you have these kind of utterances and opinions and things written in all manner of proper and improper English from this array of 75 or however many people, and it's trying to pick themes out of that. And so I'm curious, actually. With Fred, we have very specific kinds of prompts on very specific topics where we can give it a little bit more context: the jurors are going to be talking about X, so tell us what they're saying about X. Here, you just have anything. Any depo can be about any topic or many topics. Are you doing anything with that? How does that work?
Kevin Doran (31:37):
Right. Yeah, the topic of the deposition has nothing to do with the approach we take in building the summary, except insofar as the LLM thinks it's important. Whereas when we batch things in Fred, we're very specifically looking at the questions that the Fred team came up with, and then the questions that the study creator, the attorney who ran the Fred study, came up with. When we're summarizing, in both cases we're just summarizing what humans were saying. But yeah, in the Fred case, we take the topic and make that part of it, and we build the summaries around the questions the study had built in. But it is interesting, the same thing: we're sort of trusting an AI to tell us things. Like I said at the beginning, you want to use AI for something extremely specific, or to kind of help you strategize and think broadly.
(32:30):
And I think with Focus with Fred, it feels a little bit more like that strategy thing, because you don't want to have to sit there and read a whole bunch of people's comments about a bunch of different things and figure out, reading through all that, what it means. You kind of want the themes. You want to know, in general, when people look at this case, what are the main reactions? So yeah, that's interesting. I never thought about it with the deposition summary tool, whether we could take topics that are provided and put them into it. I think it's probably better that you want it to just be very focused on: what is the summary I'm producing?
John Campbell (33:02):
Interestingly, related to that, AI can be creative and it can help you think about new possibilities and all that. It's interesting that one of the tasks of making this work was basically making sure it knew that was wrong and not letting it do that. This is not a strategy session. This is not, we want your spin on what you found. This is not interesting insight. This is: summarize it accurately and honestly with no embellishment. And I know that Kevin did some real work making sure that happens, so that AI doesn't fill in gaps, because as we've been talking about, its primary drive seems to be generate. Never say you don't know, never say there's not an answer, never say, shucks, I'm not sure. And the problem with that is, if the answer either doesn't exist or there's a little gap, AI, if allowed to run wild, will just fill it in with a logical thing that seems to make sense, that that person would've said. And so maybe it's worth just mentioning that. I mean, I think we've put some work into that, both in the way you can tune these and the way you prompt them, to make sure this happens. This is maybe the most boring work for AI; if it had feelings, it would be like, oh, just summarize what's there with nothing else. Got it.
Alicia Campbell (34:15):
And multiple times, right? Because AI gets better the more times that you say, are you sure? Are you sure? Are you sure? Where you make it be really self-reflective, where it's like, oh no, yeah, right. I mean, we even do that in Fred. Fred doesn't get one pass at, okay, what does the AI come up with the first time? We make multiple passes, because it helps refine and get out the embellishments, get out the, oh, I would like to add something that makes sense to me in my AI context, right?
Kevin Doran (34:45):
I love asking it, are you sure? After everything I do
Alicia Campbell (34:48):
All day. I don't think Claude likes it very much. He's never like, oh, thank you. I mean, I guess he does that. He's like, yes, it's good for you to check, but I don't ever really think he means it. I always think he would much rather be a little wild in his summary. But the beautiful thing is, once you do that and you make them go through multiple passes, you can really get at the main issues that are coming out, either from jurors or in a depo, and rely on those summaries a lot more. Because it's even a hard question for us to ask ourselves. When I go through juror comments, even on a big data study, there's a reason why I pick out certain ones and not others. But the AI part of it has really helped, at least me, in a lot of ways, because once you have it go through them multiple times, it's really just, here is what's out there; you just have to read what's out there, rather than me picking because I'd like the case to do well.
(35:43):
It's really a great tool that way. And that's why I think it's so good in Res Ipsa, because lawyers remember specific spots. For us, when you take a depo, it's full of this emotion: and then I asked this question, and then I got this answer. And then a lot of times you go back and you read it in the written word and you're like, but it seemed like a bigger deal, it seemed more impactful, because we've also layered in all the emotion and the body language and the objections and how much you hate the defense attorney. All that shit comes right in it. And what I love is that this really gets down to: here are the things that were actually said that you'll be able to put in front of a judge, right? Because there are many times I write responses to summary judgments where I was like, if I could only write here, "and the witness was deflated with having to give this answer," but I can't. I mean, maybe I can show it on a video later. So AI is really good at doing that as well. It's just getting you to the crux of the most important issues in your depo.
John Campbell (36:38):
Yeah, I was just thinking about what you were saying, Le. I bet it's 90% of the time that somebody tells me, I've got this really good admission in the depo, and then I say, alright, well, send it to me, because a lot of times maybe they've told me on the phone or they've typed it in the summary, but they just say, the witness admitted that. And I say, all right, well, let's put in the exact quote, let jurors see the real quote in a big data study. I bet it's 90% of the time the quote's not as good as what the attorney thought it was. And I'm guilty of that too. I think as we internalize these things, they sort of turn into this nugget of admission, and then often we see that the witness was more caveated, careful or nuanced than we remember, or the question we asked was dog shit and really unclear.
(37:20):
And so the admission is far less certain. And so yeah, this lets you check all that. It might be fun to think about, too, because AI is moving so fast. One of the things I was thinking about is that even just in the time that we've been working on Res Ipsa, part of the challenge has been that what we were originally going to build, the AI now does more of, and better, and so that changes. And so the product evolves as the AI evolves. And I think we all know that in the next few months, inevitably there will be some big leaps. There have already been some big leaps. Altman has now said that GPT-5 is going to be smarter than him. I mean, I think there's more coming for sure, and probably better. So another fun thing to think about is: what does a deposition summary tool, and maybe slightly more broadly a legal document summary tool, look like as it continues to evolve?
(38:11):
And I ask this because it would be fun for people who listen to this to email us and say what they wish it would do. Because, as Kevin and I have talked about, I think what's coming out right now is our most basic vision in many ways: a simple, do-it-for-you tool for the most important documents in your case, depos. But in the future, you could imagine that this tool could do some new things that other people have started to play with, ranging from suggesting stuff that should have been asked of that witness, which might be useful if you're going to trial, to identifying inconsistencies for you. Right? Okay, tell me anything you see now that seems to be, they said A, then B. That might help you look for that stuff. Or pulling out key topics that you flag and highlighting those differently than just the summary.
(38:59):
Like, oh, you asked about this topic; here are the pages where I see that. And beyond that, I've started to use AI in a number of settings. When I get an opponent's brief, I have AI give me an overview of it before I read it, because as I read briefs, I tend to get buried in the first details, and then I get really locked in on those and I don't make it to the end, so I miss the sort of gestalt of it. I also get angry at misrepresentations of cases by the opponent, and so then I go look up that case, and before you know it, I'm down a rabbit hole on page three of the brief, but I didn't read pages four through 12. So I've started using AI to do that. When I was preparing for an appeal, I used AI to read all the opinions by the court that I thought were relevant.
(39:44):
And then I asked the AI to act as one of the conservative judges and to pepper me with questions to get me ready for oral argument. So maybe if we open this up just a little, there's this interesting world in the legal sphere where we can use these types of tools to summarize things, to reflect on those things, to get us out of our own bubble. And so, in case it's worthwhile, because I think we talked a lot about Res Ipsa and kind of its core features, it might be fun to think about how AI tech generally is going to continue to impact the legal field, and maybe to narrow that slightly to what's next for a summary tool like Res Ipsa AI and future features.
Kevin Doran (40:23):
Yeah, I love thinking about what might happen next with these tools, especially our own tools. Focus with Fred I can brainstorm about all day long, Res Ipsa too. I think there are so many things that are happening with AI, and it's super exciting. The thing that's interesting about tools, especially in software development, is that we're sort of on this bleeding edge of what the tools are trying to do for us, and sometimes they will try and incorporate something they maybe should not have. There's this one tool I'm using right now for writing code, and I have this whole system for how I get it to brainstorm, and the editor I'm using just released a feature trying to have the brainstorming process built in, which totally screws up my brainstorming process, and the tool kind of refuses to go back to letting me do it the way I was doing it.
(41:07):
So there are always those interesting trade-off questions. I have a question for you guys about, specifically, Res Ipsa and depositions: should the deposition summary include key admissions that the LLM thought it found? Because some tools do this, and a summary is one thing, but saying that you thought the witness just admitted something is a judgment, so it's a slightly different thing. And I think if it falls into the category of... I guess I'm giving my answer before I even let you guys give yours. I'll wait. I'll wait. What do you guys think about that? The summary having something like that?
John Campbell (41:46):
No. I don't mean no in the sense of, oh, that's terrible that it has it. I mean no in the sense that I think it has limited usefulness.
(41:53):
My experience with AI is that that's what it's not good at: recognizing that that admission is key in this case. So my guess is the list would be both over- and under-inclusive. Here's this great admission, and you're like, yeah, they admitted they went to Yale. Great. We're actually not going to win the case on that. And then it's likely missing an admission because it doesn't sound like one. That's also a challenge: the AI would need to have read all the depos, have the context of the case, know the legal instructions in that case and the goals, to then say, oh, when they admitted that, it fits that negligence instruction and gets us agency here. So my thought is, right now that's more of a feature somebody adds because it sounds cool, but it doesn't strike me as a feature that's actually functional or useful. Because if I get a list of 15 admissions, but I know that probably only three matter once I actually go through 'em, and I don't know what admissions it missed, that didn't speed me up at all. I should have just read the summary and gone to the parts that sounded interesting.
Kevin Doran (42:59):
I love that answer. That's better than the one I had. I was going to say something somewhat similar, just not as on point. I think when it makes a judgment call, you have to be a lot more careful. You have to give it way more context than just, tell me what happened here. And then the second problem is that people then kind of rely on the tools a bit more. So what you were saying about exciting things that it could do got me excited about this idea: one thing that it's great at is, what's missing? That's a fun one. What's missing here? And so you kind of mentioned a little bit, what maybe would've been good to ask? What's sort of a missing question here? That happens a lot with tech planning and architecture plans. You come up with a plan, and then a really wonderful feature with AI is just being like, what do you think I'm missing here? And then it gives you five things, and you're like, oh, I didn't even think about three of those. So that's really exciting. But when you try and make judgment calls on stuff in there, you have to be pretty specific. You're kind of leaving the world of brainstorming and entering the world of carefully knowing what's going on.
John Campbell (43:59):
Yeah, so for me, yeah, I agree with you. It's harmless to say, what else could have been asked about here that probably should have been? Because you recognize intuitively that that list is a brainstorming list, and maybe only a couple matter, and to the extent it left something out, it's still stuff you hadn't thought of. It's okay. It's really different than find the admissions, which I think would be dangerous to rely on. The other feature I think maybe will be useful is simply to say, hey, there are a couple topics I'm really interested in, and not to have AI make any judgment, but just, if they talked about the driver's training, or if they talked about any prior problems with this doctor, or if they mentioned whether there was any engineering study, I'd like to know that, and to maybe trigger the AI to look very specifically for those instances and pull them out.
(44:51):
Because right now, what an attorney would do is they'd either have to read the whole summary when they use Res Ipsa, or before that, read the whole depo, or there's a keyword index in the back of depositions and you look for engineering. But of course someone could talk about engineering without using the word engineering. I think AI is actually pretty good at this: you give it the sense that you want to find any reference to something like engineering of the structure, and it says, here are the places they seem to discuss that. It could still miss something, but it would give a starting point to look. So it'd be like guidance within the summary. Stuff like that I could imagine being useful. The other thing, outside of just depositions, is that in litigation you get experts, and often what you do when the opponent in a case designates an expert is you get their CV, you try to find past depositions they've given, you try to find any reports they've written.
(45:45):
And what you're looking for is any articles. I could imagine a world in which you could dump that stuff in, get it all summarized, but also then start to say, tell me if they've ever discussed this topic before. Because if I could find out that in a past deposition the expert discussed whether or not a UTV roll cage should be able to withstand a certain speed crash, finding that they've discussed that before, and that what they said in a previous case is inconsistent with this one, can be a game changer in a case when you're hurting the other side's expert. And I think AI could be really useful in speeding up that sort of stuff. So what happens is, there's an expert on the other side, you reach out to your networks, and you find that, yeah, this person's given a deposition in these other six cases. You write the attorneys, you get the depos, but now you have six depositions that you've got to work through.
(46:41):
Well, now we could summarize those and maybe, in a future iteration, say, go through these and look for this. And I think that would be powerful, if it could chew up CVs and the articles they've written, because experts also tend to be co-authors on tons of articles, and sometimes they've published an article eight years ago on talcum powder and its potential to cause cancer, and now they're testifying for $800 an hour that talcum powder can't cause cancer. Alright, well, boy, I'd sure like to know that. I think that's the sort of thing where AI could speed me up and help me be more likely to find it and identify it and use it.
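The topic-flagging idea John raised a moment ago, finding references to an engineering study even when the word "engineering" never appears, might look something like this in miniature. The `TOPIC_TERMS` table and function names here are hypothetical; a real tool would likely use an LLM or embedding search rather than a hand-built synonym list.

```python
# Toy sketch of topic flagging: expand a flagged topic into related terms
# and report which pages mention any of them. TOPIC_TERMS is a hand-built
# stand-in for what an LLM or embedding search would do.

TOPIC_TERMS = {
    "engineering study": ["engineering", "structural analysis", "load test"],
}

def find_topic_pages(pages, topic):
    """Return 1-based page numbers that mention the topic or a related term."""
    terms = TOPIC_TERMS.get(topic, [topic])
    hits = []
    for num, text in enumerate(pages, start=1):
        lowered = text.lower()
        if any(term in lowered for term in terms):
            hits.append(num)
    return hits

pages = [
    "The witness described the load test performed in 2019.",
    "Counsel objected to the question about scheduling.",
    "No structural analysis was ever commissioned, he said.",
]
matches = find_topic_pages(pages, "engineering study")
```

Unlike the keyword index in the back of a deposition, the expanded term list catches pages that discuss the topic without using its name, which is exactly the gap John describes.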
Alicia Campbell (47:16):
Awesome.
John Campbell (47:17):
A couple of other things that I think people ought to use Res Ipsa AI for that will improve their practice. One, imagine you had a standard practice that every deposition transcript that comes in goes immediately into Res Ipsa AI and gets summarized, and you get that summary. That goes straight in the file. And that's wonderful, because people often have the practice internally that they make some notes after the depo. For example, I was raised in a firm where you were supposed to make notes after the depo and save 'em to the file while they were fresh. You could do the same thing now and have a good summary in the file, living with the depo, right away with no effort at all. That should be a standard practice. Two, imagine if you were to send those out to your client periodically. A lot of attorneys are, I think, working harder and harder to keep their clients in the loop.
(48:06):
Well, this costs you nothing. It's five bucks or something like that, depending on your plan, to send your client something that shows you are taking depos and what's happening in those. And man, that will head off a lot of doubts, questions and phone calls. It helps you with client management, which of course helps you in all sorts of ways, from the ethical rules to having trust with your client when you're talking about settlement. And so this is something that to me is an easy add that adds value but doesn't cost much. The other thing I'd say is, if you're a big firm and you're thinking, man, we have hundreds or thousands of files, that means we have thousands or tens of thousands of depositions, then reach out to me and to Kevin. Go to the site, contact us, and we will work with you to create a bulk plan, right?
(48:52):
Because it absolutely makes sense for us and for you to figure out a good pricing model if you're going to do hundreds or thousands of these. And we'd love to talk to you about that because our view is unabashedly that it will become malpractice not to do this in the near future. And that it would be silly, right? It would be a waste of your client's money to pay humans to do it, and it would be malpractice not to do it when there's a tool that would let you. And so contact us and we'll get you on the straight and narrow as quickly as possible.
Alicia Campbell (49:20):
Do you want to give your website
John Campbell (49:23):
When you head to ipsa ai.com, be sure to scroll and look around, and also look for our offers on how you can save some money using Res Ipsa. We rotate those through, and you might find them interesting. So please go to the site, take a look, and see what's available for you to get started with your team.
Alicia Campbell (49:39):
Well, thanks for joining us today. It was nice to have some Res Ipsa talk. Always good to see you guys. Please join us again on the next Fred Files. Thanks for tuning in today.
Voice Over (49:51):
Thank you for listening to the Fred Files. If you found value in today's discussion, please subscribe and share this episode with your colleagues. To explore how Fred can transform your case preparation, visit us at focuswithfred.com. Produced and powered by LawPods.