I have a very interesting topic to talk to you about today, and it's artificial intelligence in the courtroom, in the criminal courtroom and civil courtrooms as well. And, boy, oh boy, there have been such developments with artificial intelligence. We've got ChatGPT. We've got voice synthesizers. They're working on deepfakes, fake video likenesses of real people. Where does this all end? And what does this portend for the criminal courtroom?

Imagine a defendant in a criminal case, say a rape case, a sexual assault case, or a domestic assault, where the complainant is his wife or a former girlfriend. He could hire one of these voice synthesizer companies and fabricate a fake phone call in which the complainant supposedly says, look, I was just doing this to get the kids back, this is all crazy, I'm sorry. And he could try to introduce that in court. Well, first of all, that's obstructing justice, not the wisest thing to do. But it wouldn't shock me, as this technology gets very good, to see someone try it. And of course the other side is going to say this is fake. The complainant goes to the Crown Attorney and says, I didn't say this, this is crazy.

Now, a fundamental rule of evidence: when you're trying to introduce a document, whether it's a voice memo, a telephone message, an electronic message, a video, anything like that, you have to prove it's an authentic document. So the defendant would have to establish that, and with the current state of artificial intelligence the fakery would probably be pretty obvious. The Crown could hire an expert to show the recording is bullshit, and the accused would end up charged with obstructing justice. And conversely, you might see a complainant try this. It wouldn't shock me at all.
In fact, I predict that in the next zero to five years you're going to see someone try this. They're probably going to get caught, given the current state of artificial intelligence and our current laws of evidence. I don't think people are going to be able to get away with this, thankfully. Our current rules of evidence, which require proof of the authenticity, reliability, and accuracy of a document, electronic messages, memos, videos, you name it, are going to be able to deal with this situation.

But where does it lead in the future, as artificial intelligence improves and we get fantastic videos, voice memos, or telephone messages where it's almost impossible to tell the difference? I don't know what the answer is 5, 10, 15, or 20 years from now. Certainly I know the answer right now, and probably for the next 10 to 15 years of my legal career it won't be a problem, but it could be in the future. This is a problem the courts may have to grapple with. And don't be surprised if I end up representing the odd person charged with obstructing justice for creating these fakes, because I know what's going on.

And just a funny thing: our IT guy, while preparing for this video, sent me a synthesized version of my voice talking about blogging and different things, and I went, wow, that's pretty similar. I mean, it was a bit off, but it was similar, and he said if he had worked with it more, some people might have found the difference hard to tell. So it's an interesting topic. And I want to tell you this, I want to assure you that I am not ChatGPT; it is Mike Kruse speaking to you today from Toronto, Ontario, Canada. Thank you for joining me.