Published on
June 15, 2025

Inside Fred's Virtual Jury Room: How Your Case Gets Evaluated

Speakers
Alicia Campbell
Kevin Doran
Nick Schweitzer

Episode Summary

Fred isn't just software – he's your strategic partner in case evaluation. Trial scientist Alicia Campbell, CEO Nick Schweitzer, and CTO Kevin Doran take you inside Fred's virtual jury room to show exactly how 75 real jurors evaluate your case. From building multi-plaintiff, multi-defendant studies to using national samples instead of venue-specific ones, Fred is designed to be scientifically rigorous and cost-effective. Learn how custom questions and evidence ratings can expose weaknesses in your strongest evidence. Come back for the third installment in this series, when Alicia says the panel will discuss how Fred “can help you keep your doors open, keep your lights on, and have a healthy Christmas bonus.”

Learn More and Connect

☑️ Alicia Campbell

☑️ Nick Schweitzer

☑️ Kevin Doran

☑️ Focus with Fred

☑️ Subscribe: Apple Podcasts | Spotify | YouTube

Transcript

Voice Over (00:03):

Every trial lawyer knows that moment when you've built what feels like an airtight case, but you're still lying awake, wondering: what will the jury actually think? Jury research was once a luxury reserved for cases that could support a big data study bill. Not anymore. Join trial lawyer and trial scientist Alicia Campbell, empirical legal scholar Nick Schweitzer, and data guru Kevin Doran as they break down the barriers between you and the minds of your jury. This is the Fred Files, produced and powered by LawPods.

Alicia Campbell (00:38):

Well, hi, it's time for episode two of the Fred Files. Again, I'm Alicia Campbell. I'm here from Campbell Law, where I do big data, and what we're talking about today more in depth is small data: Fred, of course, big data's little brother. And of course I have two wonderful people on the podcast with me, and I'm going to let them introduce themselves.

Kevin Doran (01:00):

Kevin Doran, I'm a software engineer and the CTO of Focus with Fred.

Nick Schweitzer (01:04):

I'm Nick Schweitzer. I am a professor, I'm a jury researcher, and I'm CEO of Focus with Fred.

Alicia Campbell (01:10):

Awesome, awesome. So today we're going to talk a little bit. We have three episodes planned out, and these first three are going to be the who, the what, the when, and the where of Fred, just diving in and giving more details, partially because we want to make sure that everybody understands who Fred is and what he does, but also because we're excited: we just had a rebuild of the platform to make it even easier, based on comments from people who are using Fred and taking advantage of all that he has to offer. So we're going to talk a little bit about the what and the where of Fred; it's inside Fred's virtual jury room, how your case gets evaluated. So today's going to be a little bit more nitty-gritty, talking about the details and what it is that you can do.

(01:58):

Fred does have some limitations. He is not meant to be big data; he's not meant to be anything like it, really. There are some things that are similar, but a lot of what we can do in big data, Fred doesn't do, and that's part of how we keep the cost down and make it accessible to the people who need it on cases. I said this in the first episode, but I'm going to go ahead and reiterate it today: Fred is useful on big cases that are still in discovery. It's a good way to run your case in front of 75 people and make sure you're closing all the holes, make sure you're dealing with all the issues or criticisms that jurors see in your case, so that by the time you get that big case into big data, there's more that you know about it, and the odds are you're going to frame it up the way that jurors have been seeing it the whole time.

(02:46):

And big data's going to run really well. There'll be no surprises, and a lot of times in big data there are surprises. The second way that Fred's useful is if you just have a case that's smaller, either because the injuries aren't as severe or there's just not a lot of coverage, and so it doesn't justify doing a big data study. Fred's very helpful in those scenarios, so that you at least know: hey, am I winning or losing this case? I have mediation in three weeks; what's the value? What do I need to counsel my client on? Fred's really good at that as well. So yes, with that said, we're going to talk first about building your case study with Fred. Anybody want to jump on that bandwagon and talk a little bit about the process, how many defendants and plaintiffs you can have, things like that?

Nick Schweitzer (03:30):

Kevin, our coder extraordinaire. And really, by the way, if you try Fred and you hate it, Kevin is the person responsible for that.

Kevin Doran (03:40):

Email me, kevin@focuswithfred.com.

Nick Schweitzer (03:43):

So just send all hate mail to him. Kevin, why don't you tell us about how this is all set up?

Kevin Doran (03:48):

Sure, yeah, that's great. I actually would really love feedback. Anytime you're working on software, it's really exciting to get real feedback about where you get confused in the process, because the idea is that we can always keep making it better and better, and that's pretty exciting. So, Focus with Fred: as Nick says, I'm intimately familiar with the data model and what you can and cannot do in Fred. And what's fun about that is that this team of trial scientists and researchers and trial consultants and trial attorneys kind of had to explain to me very basic, introductory US law 101 stuff to make sure I knew what we were coding. And then I went much further, and now I think I know things that maybe people who aren't as familiar with civil law might not know about how cases run. So what Focus with Fred can help you do is capture not just the high level but some actual details in a case, such as having not just the plaintiff side but having one or two plaintiffs.

(04:50):

And that's important because you configure up to two defendants, and then you configure claims for each of those defendants, so you could see, for example, up to four win rates. With plaintiffs, having one or two plaintiffs is also another way to see different award amounts: you can have different asks for different plaintiffs, different types of ecos and non-ecos, and you can have other types of damages. So there's definitely a lot that you can do in Fred to see a bit more about what is going to happen with your case. Pretty much everything we do in the builder now, we try to show you exactly what's going to happen to the report when you make those changes. For example, there's a feature that is actually used. Sometimes when you're building things, you get told about a feature, and it sounds kind of vague and weird, and then later it's never used.

(05:39):

It's an empty table in the database that no one ever touches, and you have to explain to new engineers, oh yeah, they said this was going to be really important, and then it wasn't. I'll admit, one of these I thought might go that way was admitted liability, as opposed to contested, and right away we had admitted liability cases in Fred. So what the builder will do now is show you that the section asking you to configure your claims will disappear if you say it's admitted liability. There are options that will change things, that will show you different things on the report, based on the setup of your case.

Nick Schweitzer (06:11):

This is the Fred builder. So for those who haven't actually gone in and looked at this yet, essentially what we've done is created a system that helps you do a little scientific study on your case. When we started working together, and Alicia and John have been doing this for a long time, the big data process was essentially: how can we take what I might do, which is a jury research study meant for scientific purposes and hypotheses and all of that, and turn it into something that is that rigorous for attorneys? And so Fred is a way to say, all right, let's hold your hand through making your own little study of your case, using the same kind of rigor that we would with big data, and with general scientific research.

(07:08):

And so the builder is designed around: okay, what are the things we need to know? What are the things that we can nudge you to do that will give the jurors who will eventually see this the best, most accurate impression of your case, so that the feedback that you get is the best and most accurate feedback? A lot of our attention, and a lot of the work that Kevin's doing on this, has been to make sure that it's intuitive, so that when you open this up and look at Fred for the first time, even if you don't really know what's going on, you can just look at what's on the page and think, oh, okay, it's asking me these very basic questions, and I just answer these questions about my case. And these are questions that shouldn't be difficult.

(07:53):

They should be things that you probably know right off the top of your head, and they walk you through this whole process, and you end up with a roughly scientific-quality study of your case. So that's the overall philosophy of the builder. Now, any research project starts to get complicated. Normally, when I do research, you try to limit the number of variables that you're talking about and do a focused sort of thing. But for attorneys who are using this, you may have one plaintiff, you may have two plaintiffs, you may have one defendant, you may have two defendants, you may have one claim, you may have multiple claims, you may have fault allocation. There are all these different things that go into it. So we built Fred to cover a lot of possible scenarios, though not all.

(08:40):

Some more complex situations will still require a more advanced, big data-style study, but we've made Fred, we say it's the little brother, and it's a little brother partly because the amount of data that we're going to get is a little less, but also because it's not quite as capable in terms of the really weird case structures that might come up. So it's meant for fairly straightforward sorts of things. That being said, there is still a pretty wide variety of things you can build into this study of your trial. So as Alicia said, we relaunched a new version of this builder not too long ago, and it is a lot better. It is, I think, a lot easier to deal with.

Alicia Campbell (09:21):

Yeah, I think the new builder's really slick, because one of the things that we struggled with, and I think we mentioned this on the first podcast, was that the first thing you have to do to use Fred is have your presentations done. So you need to have the plaintiff presentation and the defense presentation, and those need to be accurate with the evidence that you have and the arguments you're going to make, and the same for the defense case. Because even at a base level, Fred and big data both force lawyers to do something that we're not very good at, which is to look at the problems that are actually in our case, the ones the defense points out. I know I don't like listening to it, but it forces you to actually sit down and be honest: hey, what is it that I need to make sure is in the defense case, because it's something I know this dude or this chick is going to say?

(10:05):

And so even at a base level, Fred just forces you to see your case, I don't want to say honestly, but you have to see it with both eyes open, and you actually have to take the time to stare at that defense, the one the defense attorney keeps telling you about: hey, when I say this in front of a jury, they're going to get what I'm talking about. And if you're like me, you're like, yeah, yeah, all right, shut up. But Fred forces you to do that, and I think, as just a starting point, it's really great. So we've made the builder where that presentation has to be done first, and then you're answering questions about your case. And so yeah, you can have admitted liability, and we do have people just running it that way, which means a damages-only model, so you need to know what the case is worth.

(10:45):

The flip side is if you've contested liability. Next, you can have one or two plaintiffs and one or two defendants. And then, since this is smaller data and it's earlier on in bigger cases, you may be able to group people. For example, I ran an excessive force case where I actually sued eight police officers and a municipality. Well, at the early stages I just needed to know how jurors were seeing it, so I grouped the police officers as defendant one, and the municipality was defendant two. And it's the same if you have a hospital system and doctors that made errors. It's not meant to get into the nitty-gritty details like big data, where we start to separate out everybody; it's meant more to give you an understanding of where your case is, where it stands, and how it's doing.

Kevin Doran (11:33):

I'll jump in, then, because there's a thing I think is important about the builder, and I like what you guys were saying about the idea that you're configuring a scientific study to get a general idea about your case, and there's still quite a bit you can do without getting into the nitty-gritty. If you have used Fred before and built a case, you've seen that configuring your study came first, and then uploading your plaintiff presentation came at the end. The reason we flipped that is that really the most important place for you to spend time is on the case presentations. And if you've used Fred before, there was all this really good help material that was kind of hidden from you; maybe you got an email early on. Now it's there in the builder when you get to that step.

(12:15):

So you can immediately see it. One thing I hope people use a lot is that in the builder now we have templates, and you'll be able to download a template and write your case presentation on your computer offline, however you want to do it, in Microsoft Word. We didn't want to try to force you to live in a little text box on our website for these. You can upload images, things like that. And if you use these templates, you'll know immediately how to structure it, and there's help documentation right there with examples of case presentations. So then when you get to the builder part, it's our hope that it's very fast, that configuring your study is, as Nick said, something you kind of already get: you see these options, and this is not an admitted liability case, or I have these types of things.

(12:59):

There are definitely a few areas where you might have a question, where you might want to read the help text and figure out: what am I supposed to do here? I can think of common things. If you're allocating fault, for example, and you have somebody who's a third party, not listed as a plaintiff or defendant, you want to make sure to click add third party and have them be in there. Sometimes you want to uncheck a plaintiff because you don't want them rated for fault; that was somebody who wasn't involved in the actual incident. So there are some things like that in the builder that you want to double-check and think about. And especially if you're new to Fred and you just don't want to make any mistakes, you want it to be a good study, we have this Trial Plus option, and you'll see that on the right; it tells you about that as well. But I just wanted to add that if you're going to spend time anywhere when you're building a Fred study, it should actually be outside of Fred. It should be in a Word document or your favorite text editor, writing down how you would really present this case from the other side and from your own side, and then get that into Fred.

Alicia Campbell (13:54):

Although that should tell you how easy Fred is, because the software engineer just explained third-party fault and the fact that you can uncheck a plaintiff because they have nothing to do with the injury, the cause, whatever; they're just the sole victim. So it is pretty easy; we've gotten it down. And so yeah, Fred can do allocation of fault between all the parties if that's needed, all plaintiffs, all defendants. Or say you have an estate, so they're not going to have any fault in the decedent passing away, but you have some third-party person who you need to have on the line. Fred can handle all of that. It's exciting that way. He has limitations, but he can do ecos, he can do non-ecos, he can do punitive damages, and we bifurcate. I mean, he's a slick dude. Fred's a slick dude. He can do a lot. Nick, why don't you talk about how he can do custom questions and evidence ratings? Those are really important too.

Nick Schweitzer (14:54):

Yeah, so we have a standard set of things that we include for every Fred case, and those are things that we think you'll probably want to know about your case if you're running it through there. But sometimes there's some particular element of the case that you want feedback on, and so we've added a couple of places where you can add in your own questions. There are kind of two categories of those. First, we ask some open-ended questions where the jurors just write out their feedback about what they heard about the case, what they thought about it, how they decided their verdicts, whatever it may be. And so we have the option for you to add some of your own open-ended questions; jurors would see your question and write a response, and later we'll talk about how we give you a big, detailed overview of what those responses are.

(15:43):

And then the other part: for every case presentation you write, you could think of it as having some facts that you present, or pieces of evidence, or certain experts, and maybe you want to know, how are the jurors seeing these particular parts of my case? So we have this section called evidence ratings, where you can list every thing that you might add. Let's say it's a motor vehicle accident: you have dash cam video, you have statements from witnesses, you have a medical report, you have this expert, you have that expert. You can list those things and ask: okay, so you read about or heard about this particular piece of information, or saw this fact; did that fact or evidence help the plaintiff, help the defense, or neither, and to what extent? The idea there is that we've seen a lot of surprising things happen, where attorneys are sure that they have this video, and they absolutely want this video to be in the case.

(16:43):

They're positive it's going to prove everything that their plaintiff is saying, and they include it as part of the Fred case. And then, when it comes time for jurors to look at it and say what they thought about this video, they'll either say, oh, actually, I think this kind of helped the defense, or sometimes they'll say, I don't think this was even relevant; this didn't sway me one way or the other. So you can put in all of these pieces of evidence and see whether jurors think each one is actually helping your case, hurting your case, or doing nothing. Maybe you don't want to spend a ton of time trying to get something in if it's not really going to be all that useful. Even if you thought it was useful, if jurors are saying it's not, you might want to reconsider.

Alicia Campbell (17:26):

I think that's totally right, and it brings up another thing that's kind of shocking for people. Fred also asks about injury severity. Obviously, if you're working on a case like wrongful death in discovery and you think it's going to be a bigger one, injury severity is obvious and you don't select it. But aside from that, if you've got some kind of injury, even a monetary injury, it's not just limited to personal injury or things like that; he can do business cases. It's interesting, because that rating often informs the damage awards that you're getting. If people are like, eh, that injury is not very severe, well, if you're only getting half a million dollars on the case, that might be why. Or if you're only getting 50K, that might be why.

(18:14):

And so that's another thing, right? With the evidence questions, there are things that lawyers really care about in their cases that a lot of jurors don't care about. They just don't. And the open-ended questions will give you a lot of very revealing information about how people are seeing your case, your plaintiff, the injury, or the defendant, how they see the defendant. The other thing that we should probably talk about is something we don't ask in the builder, but we do ask jurors about: case credibility. That's another important measure you'll get in the report from Fred. We do all of our colors red, yellow, green, like a stoplight, so it's intuitive: green is good, green means go, then yellow, and then red, you've got some problems. And we ask jurors: whose case did you find to be more credible?

(19:06):

And we will give you those metrics, because if you've got a 50-50 kind of win rate, where you're in that 50 to 60% range, you should definitely look at your case credibility, because a lot of times we see those things move together. Not always, but we try to give enough information through Fred to help inform the results, so that you can see: okay, well, people think I'm more credible, but just a little more credible, and I only have a 60% win rate, which is not optimal. I mean, it's not terrible, and people don't think my guy's very injured, and they're not giving me much money. Well, already you've learned a lot with just those data points, and you haven't even begun to read the open-ended responses or the evidence ratings. I guess the only other thing we'd say is that when you're writing the presentation, and you'll see this on all the templates, you need to total your economic damages into one number.

(20:02):

You can go through and say, hey, I've got past medical expenses, I have lost wages, I have future medical care, and you can go through all of those and explain them and describe them, whatever. But when you get down to the end of the economic section, you need to add all those numbers together and put one number, so the jurors reading it go, oh, so they're asking for half a million dollars in economics. And then you can do your whole non-economic spiel, but you need a total non-economic request, and you can talk about that number through that spiel. At the end, we try to make it very easy for jurors: so in this case, the plaintiff is asking for seven and a half million dollars. With punitives, we don't want any punitive request in the presentation; you will build that only in Fred. And the reason why is that we know, if what you have is a case where you're going to ask for, I dunno, 50 million to punish this defendant, that bleeds into the non-economic numbers that jurors are giving. So we bifurcate it, which means you say in the presentation, hey, we think this case warrants punitive damages, and these are the reasons why, and then the builder is where you delineate what you're asking for. That way the number doesn't show up in the presentation, because it would make that non-economic damage figure very, very noisy. And so we keep those separate. That's an important thing for people to hear and know: they bifurcate punitives, and my punitive numbers should not be in my presentation.

Nick Schweitzer (21:41):

Yes. And the reason for that is that, as Alicia said, we try to make this easy for jurors. People are not great with large numbers: thinking about them, processing them, working with them in their heads. So we try to make it so that if your case, for example, just has eco and non-eco, we will ask the jurors, okay, what is your award for economic damages, and then what is your award for non-economic damages? One number for each. That means they're not thinking, okay, I've got to add this and this and this, and how much was this, and how much was that? One number. And we found that gets the closest to the actual value that a group of people who hear a case would actually give. We make the jurors do that work: when they're considering all the different parts of your request, they add it up and then give that one number to us, which we then pass along in the report. So that's the reason we do that. Now, we do have it set up where there are awards per defendant, is that correct? Or per plaintiff? Which one is it, Kevin? I'm not remembering.

Kevin Doran (22:49):

Per defendant. And we also show punitives after that, per defendant.

Nick Schweitzer (22:52):

Okay, per

Kevin Doran (22:53):

Defendant, okay. Yes, you can say per each defendant. Okay.

Alicia Campbell (22:58):

Yeah, and that's really important too, because it doesn't stop you, in the non-economic section, from doing a per diem if you want; maybe that's the vehicle you're going to use to ask for damages. You can still do that, you just need to total it, because it's asking a lot of jurors to have all of these economic numbers with no total, and then the way that you're calculating non-ecos with no total. They want to be told what you're ultimately asking for, because then they determine whether they think, hey, that's a little too high, or that's a little too low, or that is actually what I was thinking. So the totals are really important, and then no punitive number in the presentation. But I think that covers pretty much all the facets of Fred in terms of the information that you input and what he's capable of doing. Does that seem right?

Nick Schweitzer (23:50):

Yeah. Yeah.

Alicia Campbell (23:52):

Okay. So then, what is the recruitment process? We're going to let Nick talk about this a little, and about why it's good to get outside your venue.

Nick Schweitzer (24:03):

So this is where doing a lot of regular jury research over the years comes in handy. Basically, like we said, this is an attempt to do a little scientific study. In any piece of research, we have a sample of people, the people in our research, and we want to use the information those people give to generalize to some broader group of people. In a scientific research study, maybe you have a couple hundred people in your study, but you're trying to describe human decision-making or something like that, and you want it to apply to everyone. So you have to think carefully about, well, how am I getting this sample of people? Because you need it to generalize. For Fred, we thought quite a bit about this: we're going to try to get, for a normal Fred study, 75 people, like we said. How can we make sure that that is a good group of 75 people, one that is going to be the best at predicting what another group of people, namely your jury, would decide later on if they heard the full case?

(25:09):

So the way we do this: there are a lot of different ways, actually. The first thing is we want to make sure we get quality participants, so we have sources of jurors that we can access, and there are a lot of people in the pool that we draw from: tens of thousands of people all over the country, every state, and we can access those people relatively quickly. That's one of the things that's nice about Fred, and why we can do this a little more quickly than other methods. When we do that, we have a number of different filters and quality checks and processes that we go through to make sure that the jurors who eventually end up seeing your case are being thoughtful, they are representative, they are thinking carefully and critically about what it is that you're presenting, and they're giving feedback.

(26:05):

That's good. And so, overall, the sources we use already do their own kind of pre-screening. And, as Kevin and the rest of us were discussing earlier, there are a lot of companies putting a lot of effort into figuring out how to screen responses for quality. Even in other domains, people want to know, well, how do you identify spam email? You can use some of that same logic: how do you identify terrible participants who are giving you nonsense and information that you don't want? So there are a lot of checks that we employ along those lines, including manual ones. We still look at the data, we still look at the jurors who actually look at your case, and ask: does this all look good and solid? And even when we find an issue, say some new juror who joined this group and took this study and is giving not very good answers, we just get rid of them, we replace them with a new one, and we block them from doing our studies in the future.
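Fred's actual screening pipeline isn't described in detail here, but the spam-filter analogy Nick draws can be sketched generically. The Python below is a hypothetical illustration only: every field name, threshold, and check is an assumption for the sake of the example, not Fred's real data model.

```python
# Illustrative sketch of "spam-filter" participant screening, as described
# in the episode: score each juror's responses against a few simple quality
# signals, keep clean responses, and flag the rest for manual review and
# possible replacement. All field names and thresholds are hypothetical.

def quality_flags(response: dict) -> list[str]:
    """Return a list of quality concerns for one juror's responses."""
    flags = []
    # Very short open-ended answers suggest low effort.
    text = response.get("open_ended", "")
    if len(text.split()) < 5:
        flags.append("too_short")
    # Identical answers to every rating item suggest straight-lining.
    ratings = response.get("ratings", [])
    if len(ratings) > 1 and len(set(ratings)) == 1:
        flags.append("straight_lining")
    # A failed attention check is an immediate flag.
    if not response.get("passed_attention_check", True):
        flags.append("failed_attention_check")
    # Finishing far faster than plausible reading time is suspicious.
    if response.get("seconds_taken", 0) < 120:
        flags.append("too_fast")
    return flags

def screen(panel: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split a panel into kept jurors and flagged ones for review."""
    kept, flagged = [], []
    for juror in panel:
        (flagged if quality_flags(juror) else kept).append(juror)
    return kept, flagged
```

In a real pipeline, a flagged juror would go to the manual review Nick mentions, and a replacement would be drawn from the pool so the study still reaches its full 75.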

(27:04):

So there's a lot of checking that goes into this. But also, what Alicia mentioned is something that we are asked about a lot, which is: well, I'm trying my case in venue X, can you sample venue X? For Fred, the answer is no. We don't do that, partially for technical reasons and to keep costs down, but also because it's not really as necessary as you might think, especially if this is an early stage of your case and you're trying to just get an impression of it. Last time, on our first episode, we talked briefly about this, and I'll reiterate part of it: a lot of people's impression of your case is going to be common to people all over the country. They're going to have a human response to your case, and that's going to explain a lot of their decision-making.

(28:00):

And yes, there are small bits that could in theory vary from venue to venue. But what's interesting is that sampling within a venue is actually hard, and even surprisingly less effective than you might think, because you're trying to pin down who's going to end up on your jury and what those people are going to think, and that's hard to get at even within the venue: there's a lot of variability in there that you have to try to account for. And so what we found is that, especially early on in a case, and especially with the sample sizes we're working with here, it actually makes the most sense to get the most representative sample from the US that we can. Those broadly representative samples tend to give us the answers, which we'll talk about in terms of predicted win rate and awards and all that sort of stuff, that most closely align with what you might expect to see, to give you: here's what you're looking at right now, here's your probability of winning, and we give you a bit of a range on that. And this national sample is, we think, the best at getting that to you at this stage, and in a way that we can do quickly and cost-effectively.

Alicia Campbell (29:19):

Yeah, I agree with all of that. And the other thing is you kind of want a broad sample, because there's so much to learn. In voir dire, when you have a panel come in, yes, you could have a panel come in that is of one particular religion or political demographic or whatever it is, but you can also have a few people show up that don't match, because not everybody's the same. You can have someone show up in a largely conservative venue and be a Democrat, or be socially and fiscally liberal. So a lot of it is just making sure you're asking the right questions, so that you can get at more of the interesting ways that people are different, which is not always just what we've been trained to understand, which is like, well, you know what?

(30:09):

Race matters and income matters and politics matters. And it's like, well, yeah, it does, but it's not the only thing that matters. A lot of times, specific beliefs about your particular case are more indicative of the way people will see the facts and view your plaintiff than whether or not they're a Democrat. And then, what kind of Democrat are they? Are they a Clinton Democrat? I mean, now that we have the capability of knowing more about people and having specific information about some of the beliefs they hold, we can get a little more fine-grained and better in our voir dire and the questions that we ask. Probably the only thing that holds across most studies is tort reformers, and tort reformers are the same all across the country. There's no Texas tort reformer. If you're a tort reformer, there are four questions you ask to get at it, but they're the same no matter where someone lives. You know what I mean?

Nick Schweitzer (31:07):

Yeah.

Alicia Campbell (31:07):

Okay, cool. Yeah, I mean, some people will take that as a limitation of Fred. I don't think it's a limitation of Fred, because when I run big data on my cases, I don't specifically do venue drill-downs. The national sample is good and gives me a very diverse view on my case, and that way, whoever walks in for the venire, I at least know what that particular person may think based on this trait or that trait. So yeah, the Fred Report. Kev, can you talk a little bit about the Fred Report? What does it basically tell plaintiff lawyers? What's included? What will they see, since you're building them?

Kevin Doran (31:47):

Yeah. Sometimes I think about flipping the builder on its head and having you start with the report. I almost wonder if you want to start with the report, and I think that's why it's kind of cool that in the new builder you see samples of the report on the side as you're going. You see things like the win rate probability analysis, which shows you your likelihood of success on a case. For an outsider, that is the most fun graph to read because it's extremely clear, and it does not work the way you might expect, or the way I thought it might work when I first got started. I thought it would be win or lose, 50-50. It's actually not. It's a bar that goes from zero to 100 and explains, based on what all these jurors said, the likelihood that they're going to rule your way on a particular claim.

(32:36):

And there's a helpful center area that shows you very clearly what the experience of this group, this methodology, and this research says: 55% is not a win. The bar has a bit of an offset showing you that between about 65 and 75% is going to be better, and higher than that is more of a definite win. There's a yellow area telling you, hey, careful: even though it's more than 50% or more than 60%, there's still a decent chance that you're not going to walk away with a win. The other really exciting graph for people is the damages awards. That's a very easy graph for an outsider to understand, because it's just showing you money on a scale and how much people think should be awarded for a particular claim. You usually have one of these, since that's the main thing you're looking for, but you can have several, and it's going to show you what all the jurors were going to award based on what they saw in the presentations, along with some averages.
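For readers who want to see the idea concretely, the color-coded zones Kevin describes can be sketched as a simple mapping. The exact cutoffs and labels below are assumptions paraphrased from the conversation (55% is not a win, roughly 65 to 75% is better, higher is a more definite win), not Fred's actual thresholds or implementation.

```python
def win_rate_zone(probability: float) -> str:
    """Map a predicted win probability (0-100) to a rough color zone.

    Cutoffs are illustrative, loosely based on the thresholds
    mentioned in the episode, not the product's real values.
    """
    if probability < 55:
        return "red"          # below the "not a win" line
    elif probability < 65:
        return "yellow"       # above 50/50, but still a real chance of losing
    elif probability < 75:
        return "light green"  # better footing
    else:
        return "green"        # more of a definite win

# e.g. win_rate_zone(58) falls in the cautionary yellow band
```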

(33:37):

And there's also a nice chart showing the damages distribution: how many jurors awarded specific amounts, how many said 5 million, how many said 4.5, how many said zero. Under that, we show you the averages and some things that are a little more on the statistics side, the upper and lower quartiles and the median. Basically, you can think of those as our best guess, what a really great, lucky day would look like, and what we'd expect on a bad day. Those are the three numbers under the graphs. Alicia mentioned this earlier, but there are also the credibility ratings, a higher-level graph of the two sides, asking on the plaintiff side, how credible did the jurors find this plaintiff?
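The three numbers Kevin describes under the damages graph, lower quartile, median, and upper quartile, can be computed from the jurors' awards with the standard library. This is a sketch of the general statistical idea only; the juror awards below are hypothetical, and this is not Fred's actual code.

```python
import statistics

def award_summary(awards):
    """Return (lower quartile, median, upper quartile) of juror awards.

    statistics.quantiles with n=4 yields the three quartile cut points,
    roughly the "bad day", "best guess", and "lucky day" numbers.
    """
    q1, q2, q3 = statistics.quantiles(awards, n=4)
    return q1, q2, q3

# Ten hypothetical juror awards (in dollars) for one claim
awards = [0, 0, 1_000_000, 2_500_000, 4_000_000,
          4_500_000, 5_000_000, 5_000_000, 6_000_000, 10_000_000]
lo, mid, hi = award_summary(awards)
```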

(34:27):

And then on the defense side, how credible did the jurors find the defense's argument? That's also color coded, which helps you think: green is obviously very good if you're representing the plaintiff and the plaintiff's side was very credible. And if you have an early-stage case where you're not really sure yourself, you give this to a jury and you may see on that graph that they don't find the plaintiff's arguments very credible, which helps you understand, big picture, why everything else happened the way it did. And then AI Insights is where we use what are called LLMs, large language models, to take a whole bunch of juror comments. There are 75 jurors and between three and five open-ended questions, so there can be quite a lot of data.

(35:19):

And what I think is interesting about AI and LLMs, something that now comes up in almost every conversation you have socially with friends and family, is what's happening professionally. Especially if you're a software engineer interfacing with people, it's literally discussed hour by hour every day, because we now use AI tools to write code. So what's exciting for me about Focus with Fred is that we aren't using AI as the main factor driving the insights. We're using humans, and on the report we use AI to help you understand what those humans are saying, which I think is so much better than the other way around. I also worked at a large consumer AI company that had $1.3 billion in funding. It was a huge consumer project where they were going after making an amazing chatbot, and I got to see a little bit of how the researchers there worked and how AI worked.

(36:15):

And it was a really cool experience for me, coming from a traditional application development background. There are a lot of things that AI is really bad at, and that bad stuff is what consumers see. A lot of times you jump on a chatbot with AI when you want to talk to a human, and it tells you how to do something completely wrong. It's really frustrating. What's cool about Fred is that we got to use AI for exactly what I think it's really good at, which is summarizing information: taking a large body of information, not a library's worth, but a decent enough amount that you don't want to read through it in a short sitting, making it shorter, and giving you a fairly unbiased version of the result. If you were to scan some of these comments yourself, you might grab a juror comment that is not representative of the overall feelings and read something that sticks with you.

(37:07):

AI is not going to do that. AI is going to go through all of them and do its best job of summarizing the information. So at the end of the Fred Report you see, theme by theme, what the overall juror reaction to these questions was. And that's the same for video studies, which we haven't talked about much today. It's actually a super helpful part of the video studies: seeing the summaries of how everybody reacted to a video and getting the overall themes, like "I'm wondering if there's some fraud involved here" and seeing that lots of people said something like that, or that a lot of people are saying, "This looks very straightforward to me. I think the plaintiffs are in the right. I'm sticking behind my award numbers," et cetera.

Alicia Campbell (37:52):

Well, and I think that's all very important. I know we've had lawyers say, "I just want the comments." And our response is always the same: no, you cannot have the comments. The reason why is that what we're trying to do with Fred is steer away from that heavy focus on individual comments. Even I do it, right? When I go through comments on big data, I'm looking and thinking, okay, what are people saying? And I still gravitate toward certain points of view. So what we're trying to do with Fred and democratizing data is say, look, here are responses to your case that are quantitative. You need to look and stare at your credibility. You need to look and stare at the injury severity. But what we don't want you to do is go through comments to justify not following the data, right?

(38:44):

Because I found that one comment from that one plaintiff attorney. And what we don't want you to do is overweight the negative comments. AI doesn't care. It doesn't give a shit. It's just going to go through and say, hey, a lot of people are saying X, a lot of people are saying Y, and here are representative quotes of what they're saying, because AI is really good at that. It's very good at going through comments and saying, hey, a lot of people are saying this about your plaintiff, we think some reasons why, based on the comments, are this, and here are some representative comments that further show that maybe you've got a problem with your plaintiff, because this is how people are talking about it. So just to make that clear: if you write me and say, "Can I get the comments, Alicia?" my answer is no. You need to read the summary of the comments, because that's a much more balanced view of what people are writing. And it keeps you from focusing in on that one person who says, "This is the dumbest case I've ever seen," right? Because if it's one person, AI's not going to include it. It's not enough. We want you to have a more balanced view of your case, good and bad.

Nick Schweitzer (39:57):

I mentioned earlier that we're trying to steer you into doing a little scientific study. And what we don't do in scientific studies is start listing off one person's response: well, this one person said this, or this one person said that. No, you aggregate. If you want to focus on individual things, you have to go through and qualitatively code the responses and do all these different sorts of things. And that's what the AI is helping us with. It's helping us say, okay, maybe one person said your case was stupid, maybe another one said it was dumb, maybe another one said you shouldn't be trying it. And if enough people are saying that, then it's going to say, ah, I found a theme here, which is that there's a subset of jurors who really don't like this case. When it meets that kind of critical mass, we do give you individual quotes from the jurors so you see what those jurors are actually saying, but we don't show you every comment about everything.

Kevin Doran (41:01):

There's one other thing about a Fred Report, coming from doing what I guess you can call information architecture, basically laying out data in a way that humans can understand it. In other domains, most software web applications that show you some sort of graph are figuring this out as they go; they don't really know what to do yet. A lot of times I've been in meetings with product and design people who are, for the first time, figuring out: okay, hey, we have all this data, we need to show it to the users, what are we going to do here? Working on the Fred Report was amazing because it felt like we skipped all of that. In the sense of what is important and how it should be laid out, this team already knew. They had already done hundreds of these. They knew what was important and what graphs were going to go where.

(41:48):

At no point was there going to be a user feedback form telling us on the engineering side, why do you have this graph? Each graph is very important, and each one is thought about in a way that is usually not how dashboards and charts are presented to people. So there's the academic side of that, and then there's the fact that when you do something hundreds and hundreds of times, it comes out differently. I don't know if I ever told you two this, but working on the graphs, it seemed like a lot of back and forth, and it seemed like you guys had a lot to figure out: how do we take all of this research we've done and make a standard version of what's important? But to me it was like, oh man, they know exactly what's going on here. And I think if you're seeing a report on a case that you're considering, you get to know that a lot of thought went into it. So the AI side is nice too, as sort of the final part of the report, where you're digesting why things went the way they went and getting a highlight into some of that. Yeah.

Alicia Campbell (42:48):

Nice. Kevin, thank you. So let's talk about video testing just a little bit, and then we'll be able to wrap up. Because right, there's the whole video section, and while we're talking about comments and AI, that's what you get on the video section. The video section can have some quantitative data, to the extent that we can identify something we can quantify for you, but really it's meant to capture a lot of input about what jurors are seeing on your incident video. Like, hey, I think this woman faked falling, or I don't think she was paying attention, or I don't know if the driver meant to swerve over and hit that motorcyclist on purpose. You'll get a lot of that kind of feedback, and you'll be able to create three custom questions about an incident video.

(43:35):

And so for the video study, there are two types. There's the opening/closing study that you can do, and basically all you do is log into Fred, upload the video (as long as it's within the time parameter so it doesn't get rejected), then pay, and it goes, because we already have the questions. We have 13 questions that we ask about that, and John and I have done it enough that we know good questions to ask about openings and closings. But for the incident video, you'll have about nine standard questions that we ask, all open-ended, and you get to write three of your own. So if there is a particular concern: did you see this in this particular part of the video? What do you see going on? Is there anything about the plaintiff specifically in this video that bothers you or upsets you?

(44:22):

You can ask those questions as well, to get even more information about what it is they're seeing in the dash cam or the ring cam or the trucking camera video, whatever you have. And I think it's up to 30 minutes. Is that the same for incident videos too? You can have up to 20 minutes. This is standard across the board. So it's a lot of time, and with the opening and closing, you can have the PowerPoint behind you, as long as the camera footage catches both you and the PowerPoint. You can do all of that if you want, just to get a good baseline. And I just tried it in Colorado. We ran Sean's opening right after he gave it, and we ran the defense opening right after it was given, so that we could get a: hey, how are we doing?

(45:06):

How are we starting off right now? Now, obviously on Fred you'd like to do it earlier and practice, but it does give you 50 people's feedback pretty well, with AI giving you a general view. And that's another good reason to have AI doing it: with those types of openings and closings, it'd be really easy to focus on the criticism. If you're reading every single comment and you're like, oh my God, this one guy hated me, really, it's not good to just focus on the one guy. So this is another reason why the AI works really well. The AI just doesn't care. So I think we've covered it. Nick, can you answer one question that I get a lot, and I'm sure people are wondering: is dealing with a juror virtually somehow different from, or less good than, having an in-person interaction? What can you say to that?

Nick Schweitzer (45:59):

Oh, yeah. So there is quite a bit of work on this. Practically speaking, if you're going to try a case, you're going to try it physically with a jury, although there were experiments during COVID with doing that virtually. But this whole world now has a lot of people working on this exact question: what are the pros and cons of working with a research participant, or a juror, virtually versus in person? And there's limited actual evidence at this point, which is what's interesting. On the research side of things, about 10 years ago, people started to have access to participants who would take part in research projects online. The method we used before that was to literally bring people into a lab building at my university, even if it was just for a survey. You've got to go have them fill out a survey.

(46:58):

We used to send people to shopping malls, when those were a thing, to go around and harass people: will you fill out my survey? And so there's been a lot of work on, okay, how predictive of real-world outcomes are these online virtual research projects versus ones that are done in person? And I'll say there's mixed evidence on that. Maybe we can get into all of this in future episodes. But the consensus is basically, in the same way I was talking about with venue, where getting a broadly representative sample gets you so much of what you actually want to know that it's almost not worth doing the targeting, in that same way, if you put in the effort like we do to get quality online research participants, you're going to get data that is basically as good as, and depending on where you're getting the in-person people from, perhaps even better than, what you would get in person.

(48:05):

So we have teams and colleagues and collaborators doing a lot of this sort of work. What about deliberation: online deliberation versus in-person deliberation? What about watching a video on your little small screen versus a big giant presentation of it? What about the fact that you're reading a concise, tightly written presentation summary, when of course, in a real trial, everybody's stopping and starting and objecting and sidebarring, and then sending the jury out and bringing them back in? How does all of that affect things? And I think what we're finding is, like I said with the venue question, that the way we're able to do this gets you very, very good information, to the point where doing this in person, which would, by the way, be exponentially more expensive, is not necessarily going to get you better results.

(48:59):

We understand. We get it. It makes sense: well, I'm talking to an in-person jury, so why wouldn't I want in-person feedback and in-person research before that? But that's not necessarily the case, and certainly, in my opinion, for these early-stage things, it wouldn't be worth it. For the cost of one focus group, you can probably do a couple of Fred studies, and that data is going to be so much more valuable than what you could get from a little in-person study for the same amount of money.

Alicia Campbell (49:29):

I would think the in-person approach comes with its own limitations, right? Because people behind a screen are very, very capable of telling you exactly what they think, whereas in a live focus group, you still have to worry about skew, and every lawyer knows that, right? Have they figured out this is my case, or that it's a plaintiff case or a defense case? Because the other option is for them to sit there and go, well, someone here is paying me, and that's not the person I want to give bad news to. Whereas online, where we keep the gatekeeping function pretty similar and the length pretty similar, people are making robust arguments. The jurors don't know, and they're not confronted with a human being standing there while they try to assess who's paying all the bills.

(50:16):

There's a lot that's interesting just in doing these. I think John and I have done almost a thousand of them now. It's really a great way, and obviously, since I've done it a long time, I can say jurors are so good at honing in on issues, cutting through the crap, and then writing very candidly. I mean, I have read rabid, ridiculous comments from people. I don't know if they could actually say to a lawyer, "This case is dog shit," right? I don't know if you can actually say that to someone. So yeah, it's all interesting. You can always do it in tandem with a live focus group too, because there is some benefit to having a mock jury and practicing. But I do like the numbers better with Fred.

Kevin Doran (51:02):

Talking about methodology, I don't think I've ever told you two this, but when I was in college, I worked at a marketing research company as an intern. It was a giant building with 50 people in there on the phones calling people, and then four or five research consultants, and then us interns, who would put data into Excel charts and try to make it work. Clients would call up for a study, and it was some six-figure number, and then this room full of humans would get on the phones and ask questions like, we always used Coca-Cola as an example, even though that probably wasn't even a client, what do you think of this new Coca-Cola can? And then you'd go through all that stuff. It's just amazing to me that now you can run a study as good as what we were doing then, or better, on something like this, at one-tenth the cost of what people were paying for those marketing research studies. So anyways.

Alicia Campbell (52:00):

Well, it makes me wonder. I worked at Gallup Poll in college, and that's when you'd hit the phones, man. You hit the phones, and it was always a bit of a letdown. You got hired on with Gallup Poll, but then you realize, oh, all the people who've been with Gallup for a while do the cool political polls. I'm doing Pizza Hut's new crazy breadsticks, right? Which is not as exciting. But it is crazy, because you do get a lot of people who are like, "I don't want to answer any of these questions," or you can tell they're answering but half watching television. And I've got to think Gallup has to love this online approach, like Coca-Cola does, right? Where you get access to people, they're paying attention, you can check whether or not they are, and you can get real-life feedback on what it is you're asking them.

(52:46):

Seems good to me. I guess the only other thing is that reports come out from Fred three to five business days after you've paid and it gets running. We do checks on those as well. It's not fully automated, but we don't mind hanging out with Fred, so we do little things here or there just to make sure we've got the data right and you get results that are accurate. But I think that's really it as to what Fred is and where he is. We've got jurors online, and he's kind of everywhere: a useful little software program that we've tried to make easy and accessible for people with cases they need to know more about. So the next one is going to be more strategic, right? How you take Fred and maximize the results, maximize the information you have, how you can use him to maximize the case's value or the case frame, or maximize your law firm's ability to take in cases, because you're like, hey, I'm going to run this through Fred first and find out if it's even worth spending money on. So the third podcast is going to be more about the strategy and how Fred can help you keep your doors open, keep your lights on, and have a healthy Christmas bonus. Well, thank you guys, thanks to everyone who's listening, and we look forward to the next one.

Voice Over (54:07):

Thanks, all. Thank you for listening to the Fred Files. If you found value in today's discussion, please subscribe and share this episode with your colleagues. To explore how Fred can transform your case preparation, visit us at focuswithfred.com. Produced and powered by LawPods.

Produced and Powered by LawPods

The team behind your favorite law podcasts