Unknown Speaker 0:00 All right, good afternoon and welcome to today's webinar, AI Patents: Maximizing Protection and Licensing Value, presented by AUTM. My name is Sandy Spiegel, AUTM's professional development manager, and I will be your staff host for today. All lines have been muted to ensure high quality audio, and today's session is being recorded. If you have questions for our speakers, we encourage you to use the Q&A feature on your Zoom toolbar. And if you have a technical question or a comment, please feel free to use the chat. You can also use the raise hand feature if you'd like to ask your question aloud. Throughout the presentation, our speakers have a couple of built-in pauses for questions, so we encourage you to get your questions submitted as they come to you rather than waiting until the end. Should you need closed captioning during today's session, the Zoom live transcript feature is turned on and available on your toolbar. Before we begin, I would like to take a moment to acknowledge and thank AUTM's online professional development sponsor, Marshall Gerstein IP; we appreciate your ongoing support. I now have the pleasure of introducing you to today's presenters. Robert Plotkin is a patent attorney who has been specializing in software patents for over 25 years. Leading universities, technology companies, and law firms worldwide turn to him to obtain patent protection for cutting-edge software in fields such as artificial intelligence, quantum computing, autonomous vehicles, and speech recognition. Robert is an MIT-educated computer scientist who focuses on maximizing the value of his clients' patent portfolios. His clients have profitably sold and licensed the software patents he has obtained for them to major corporations worldwide. He's also an inventor himself: he's a named inventor and owner of over 25 patents and patent applications, many of which have been successfully licensed. Long before AI patents became a hot topic,
Robert wrote a book on AI-generated invention titled The Genie in the Machine: How Computer-Automated Inventing Is Revolutionizing Law and Business, which was published by Stanford University Press in 2009. He uses the strategies that he describes in that book to obtain and enforce AI patents for his clients. He has been named an IP Trailblazer by the National Law Journal, and an intellectual property Super Lawyer. John Brenner has worked in various capacities advising, creating, and managing new ventures for over 20 years. As a result, he has developed expertise in major aspects of business development, including financing, intellectual property, joint ventures, product development, collaborative research, product distribution, material supply, and transaction diligence. John has worked in a wide array of technology areas such as biotech, high tech, ag tech, food tech, and 3D metal printing, among others. Currently, John is the vice president for IP management and licensing at Rheonix, Incorporated, which spun out of Kionix, Incorporated in '09, where he was the vice president of microfluidics. Along with his role at Rheonix, Mr. Brenner provides advisory services to select clients on a project basis. Before then, Mr. Brenner was a senior technology manager at Cornell University's technology licensing office (CTL), where he managed commercialization of drug delivery devices, diagnostic devices, and other analytical inventions developed by Cornell's research faculty. John was the founder and president of Oris, Incorporated, specializing in financial and organizational solutions for technology-based companies. He's a member of LES and was among the first class of CLPs. He holds a BS in plant science from the University of New Hampshire, an MS in science education from Syracuse, and an MBA from Cornell University. Welcome, Robert and John. We're so excited to have you both here and to learn from you.
So with that, I will turn it over to, I believe, John, to get us started. And I'll kick it off to you. Unknown Speaker 3:58 I will bring up the, here we go. So we've dispensed with the first slide already. Second, third. And here we are. So Robert, you were recommended to me by one of my AUTM colleagues. I received this disclosure from the geology department, and my colleague suggested you could help me with it. The inventor developed some really interesting software, which I think may be AI. How would I know, and what should I look for? Unknown Speaker 4:32 Yeah, it's a really good question, because it's not always obvious from a disclosure whether that disclosure describes an invention that relates to AI in some way. With lots of other technologies, it is pretty obvious. But with AI, the AI can be hidden beneath the surface of the disclosure. And we know that AI is notoriously hard to define, but fortunately, although that's an interesting academic and philosophical question, we don't need to answer the question of exactly what AI means. Instead, I'll just give you some guidelines or heuristics for how to spot whether the disclosure you're looking at might describe something that is related to AI. So there are some buzzwords or terms you might look for in the disclosure: things like machine learning. Deep learning is used really commonly these days in AI. Terms like neural networks; for example, convolutional neural networks, which are abbreviated CNNs.
If you see anything that describes training a model, like a language model or some other kind of machine learning model; if you see terms like evolutionary algorithm or genetic algorithm, or, these days, generative adversarial network: these are just examples of terms which, just by skimming a disclosure, you could look for, that might give you a clue that what you're looking at relates to AI, even if the disclosure doesn't otherwise talk about AI. Because the disclosure might be describing something related to chemistry, like a new molecule, and from the inventor's point of view, that might be what they've invented. But it might be the case that they invented that molecule with the use of AI, and that's important to find out. And as we go through our conversation, we'll explore how that can be relevant for patent and licensing purposes. Does that help you in looking through this disclosure that you have in front of you? It does, Unknown Speaker 6:46 and it reminds me of something that I worked with long, long ago. As you listed off the evolutionary algorithm: I had experience in ag tech with an evolutionary algorithm that would make use of weather data to improve crop production in the greenhouse. And frankly, if I go back in time, I wish that when we had filed those patent applications, we hadn't been focused just on the ag tech elements of it. Because when all was said and done, we had a new venture started and had a production facility, we were making lettuce, but we all looked around and said, oh my, we really should see this be useful beyond ag tech, in energy efficiency in buildings, because it was making use of weather data that was both historical, developed from records, but also the stuff that was happening right in real time. And so it was able to make adjustments on heat and light input into our production facility. But that could have applied to many other things.
And so we look back and think: within this entire process, should we have been working on developing a better disclosure from our inventor, or looking at better claiming strategies to move that forward? And the other thing that I had questions about then, and I still have questions about, is the data that was used to train these models, and trying to understand how that plays into patenting and how it plays into licensing. Yeah, Unknown Speaker 8:38 you know, I know we're gonna walk through a kind of framework for thinking about these issues that's helpful for breaking down the possibilities into discrete components. It kind of helps us evaluate different aspects of the invention that could be patented, and that could be licensed. And I find this framework that you've shown here on the screen to be really useful, because it captures a wide variety of AI-related innovations that I see often these days. It helps us to walk through a disclosure and ask: does the disclosure include any of these elements we see here? And the practical value of that is that it helps to avoid overlooking elements of the disclosure that you could patent and then license. Because, as you said, it's very easy to see the invention for, let's say, what the end result is that's valuable, but then that might result in obtaining a patent that's only licensable for that particular result or field of use or application, when perhaps you could have gotten a patent that had broader applicability and therefore a lot greater licensing value. Does that relate to that past experience you had? Unknown Speaker 10:02 It entirely relates to my experience. This came from ag engineering, and it was a way of controlling the inputs into a production greenhouse facility. And, you know, we were just focused on that.
And we never really thought more broadly, though we should have. In the end, we should have, because the fascinating part of the technology was its ability to be adaptive: taking advantage of historical information on weather, sunlight, angle of incidence of the sun, cloudiness, seasonality of all of these same parameters, and balancing out the inputs in a greenhouse so that you could have very uniform production of an agricultural product, and have the highest performance with the least input. And so it was this very complex and interesting problem to grow lettuce. And, you know, frankly, had we gotten it now we would have been much happier, because the two things we said were: well, we probably shouldn't have focused this on lettuce; it should probably have been marijuana. But that wasn't legal at the time to grow; it'd be ideal for it now. Shame the patents are expired. And the other side was there was more applicability, because it was balancing how what's happening in the outside environment impacts this closed environment of a building, whether that building was a greenhouse or not. And the interesting piece of all of it, if I look at this model that you're showing me, comes back again to when we licensed this. And that's, you know, the whole purpose of what we're trying to do here: to generate licensing revenue out of all of these disclosures we have and the patents that generally are generated from them. So I don't even really see in here what the basis of our royalty stream was, because the royalty stream was on what went out the door and was delivered to grocery stores. And so, yeah, Unknown Speaker 12:15 yeah, let me just step you through the model briefly, because I know you've got this new disclosure now that you want to talk about. That's right, this model, of course, is not applicable to all AI.
But I find it's pretty broadly applicable to a lot of the AI-related inventions that I see coming in these days; you know, the 80/20 rule probably applies: it fits at least 80% of them. So it's useful for that purpose. I'll use neural networks as an example, because they're really in vogue these days. And, you know, many people who are listening to us talk about this, I don't know how familiar they are with the use of neural networks, but a classic example is to do something like learn how to distinguish different types of images from each other. People often talk about: what if you want to distinguish images of dogs from images of cats? The classic way you would do that with a neural network is to start out with a bunch of existing images of dogs that you know to be images of dogs, and existing images of cats that you know to be images of cats. And you feed those images through a training process. So in this example, if we follow it through the diagram, the training data would be those images; that's on the top. You know, that's what you're starting with; some people call that the ground truth. You kind of know this to be valid data, though it doesn't always have to be completely valid. Parameters can take a lot of forms, but, you know, maybe you parameterize the images in terms of their size and other features that the images might have, plus parameters of the training process, like, in a neural network, what the depth of the neural network is going to be. So you can tweak those parameters a lot. You feed the data and the parameters into the training module and run it. And so often this is called learning a model, just like a human might learn. Unknown Speaker 14:20 May I interrupt you just to ask a simple question? You mentioned the depth of a neural network as being an important element to decide on in the training parameters. What does that mean?
Unknown Speaker 14:31 Yeah, so I don't want to get too far into the details. But, you know, when I mentioned deep learning before, the word deep in deep learning refers to the depth of the network. A neural network is essentially designed to simulate, at a high level, how the human brain works, with simulated neurons in the neural network that process information and then pass on their outputs to other neurons. And so you have what are called layers: in an initial layer, the image comes in; there's a bunch of neurons that receive the pixels in that image, perform some computations on them, and pass on the outputs to another layer, okay? And to a rough extent, the more neurons and the more connections you have, the more sophisticated processing you can do. And so, you know, early generations of neural networks, both because of the algorithms that were known and just because of the raw power of the hardware that we had, were limited in how sophisticated a processing they could do. But of course, as computer hardware has gotten exponentially more powerful, and as there have been improvements in algorithms, networks have become deeper, and they've become capable of performing much more sophisticated processing, both in the training and then when we get down to what's called the model execution here. Does that roughly make sense? Unknown Speaker 16:06 It does. That's very helpful as I am getting ready to introduce this disclosure to you. Yeah, that certainly helps a great deal. Thank you. Unknown Speaker 16:16 You know, so generally, very often there's a model; this model that comes out might be the trained neural network, for example, which you can then think of as sort of the embedded knowledge that the system has gained about the training data and the patterns in it, like what distinguishes images of dogs from images of cats, if I'm following through that example.
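[The layered structure Robert describes can be sketched in a few lines of Python. This toy network is purely illustrative, not from the webinar slides: random, untrained weights, a common ReLU activation, and "depth" as simply the number of layers the data passes through.]

```python
import random

random.seed(0)

def relu(x):
    # A common activation function: pass positive values, zero out negatives.
    return max(0.0, x)

def layer(inputs, weights):
    # Each neuron takes all inputs, applies a weighted sum, then an activation,
    # and passes its output on to the next layer.
    return [relu(sum(w * x for w, x in zip(neuron_weights, inputs)))
            for neuron_weights in weights]

def forward(inputs, network):
    # "Depth" is the number of layers the data flows through.
    activations = inputs
    for weights in network:
        activations = layer(activations, weights)
    return activations

# A toy 3-layer network: 4 inputs -> 3 neurons -> 2 neurons -> 1 output.
network = [
    [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)],
    [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)],
    [[random.uniform(-1, 1) for _ in range(2)] for _ in range(1)],
]

output = forward([0.5, -0.2, 0.8, 0.1], network)
print(len(network), "layers deep; output:", output)
```

[Making this network "deeper" just means appending more layers to the `network` list, which is roughly the training parameter being discussed.]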
And then, once you've got the model, the goal is to apply it not just to the training data, which you knew about already, but to live data: you've got a bunch of new images, and you don't know what they're images of. That's the live data here, which you then feed into the model through what's called this model execution module; it applies the model to the live data and produces the output. Again, through this simple example, the model output might say, for each image: is this an image of a dog or an image of a cat? That's the output. And then I've added on this output processing module, because very often the automated output that comes out of a trained model is good, but maybe not great. It may not be ready for prime time or ready for commercial use. And so often there's some filtering that needs to be done on it, where, say there's 1,000 outputs, maybe there's really only five or ten of them that are good enough. Let's say this was actually a model that you use for drug discovery or molecular discovery; you know, maybe it produces 1,000 candidate molecules. You see what I mean, as our AIs do now. Yeah. Now, maybe they're not all really good enough for whatever the end goal is. But it's better to start out with 1,000 than with 100 billion of them, and so the model has done you some good there. But then you need to prioritize those, sort them, filter them, through what I'm calling output processing, to produce the final output that's actually usable in a medical or other commercial context. And the last thing I'll say about this, because I know we need to talk about your disclosure; that's why you came to me.
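[The output processing step just described, winnowing 1,000 raw candidates down to the few that are good enough, could look something like this sketch. The candidate records, scores, and threshold are all illustrative assumptions, not from the talk.]

```python
import random

random.seed(42)

# Hypothetical raw model output: 1,000 candidates, each with a
# model-assigned confidence score between 0 and 1.
candidates = [{"id": i, "score": random.random()} for i in range(1000)]

def postprocess(outputs, threshold=0.95, top_k=10):
    # Drop low-confidence candidates, then keep only the best few --
    # the "output processing" that turns raw model output into
    # something usable downstream.
    good = [c for c in outputs if c["score"] >= threshold]
    good.sort(key=lambda c: c["score"], reverse=True)
    return good[:top_k]

final = postprocess(candidates)
print(f"{len(candidates)} raw candidates -> {len(final)} after post-processing")
```

[In practice this step might equally be a human review, or a hybrid of software filtering and manual triage, which, as discussed next, can matter for patentability.]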
Sometimes that post-processing is done by a human; it might be done manually, you know, if the number of outputs is small enough. Or maybe it's done automatically, or maybe it's done in combination between software and a human. And that's going to influence how patentable this is, the output processing and the overall system, and potentially how licensable it'll be or how the licensing would work, right, if the processing is manual versus automatic. But we'll get there. Maybe we should just dive into your disclosure. Unknown Speaker 18:56 Yeah, let's do that. Because I think we may be revisiting this chart after a while. Yeah. Unknown Speaker 19:05 So tell me what you got. Unknown Speaker 19:07 My disclosure is from the geology department. Let me move on so that people are able to follow that a little bit better. So the geology department is looking for ways to more accurately find locations of deposits of interest, and they could be anything of interest, which is different from my other experience, which was: let's grow lettuce more efficiently. This seems more amorphous to me. So they trained this neural network, and thank you for helping describe a little bit more about neural networks; that's why I asked that question. They trained it using mining data that had been included in proprietary data and a study. They applied this network to new data, and they thereby obtained another block of data. Well, then they took that data and put it through another process to output more information, or different information. And that's what I'm still trying to understand: with that different information, is that the part that's claimable, or is it the first part that's claimable, or are they both claimable?
In the end, they applied another algorithm to the output of the second stage of their analysis to produce a final result indicating where they might go search, in particular, with higher probability of finding a particular mineral. And then they can update the model over time based on new data. So I got excited about it, because I thought: oh, that's a more efficient way of finding things that may be of interest. But I think I need to work with them to improve the disclosure, so they're more specifically describing the steps that they take and what it is they're doing. And frankly, your model will be very helpful, even for me to put in front of them and say: within this model, you know, where are you doing the things that gave you this aha moment of how you could find these deposits? And then I have to understand a little bit more about the data that they used to train it. Some of it was proprietary, and some of it was openly available. Some of it they had already, and they owned it. And so I was hoping to learn a little bit more, and I think I need to have that in the disclosure. So that, from your standpoint, to turn this into a patent application: does this bring out the information it needs? Or, yeah, how does it impact the licensing and the limitations that I may have on that? Unknown Speaker 21:54 You know, I'll step out of role for a second, just to say you had some inventors here who did a great job of providing a disclosure that's in the form of the framework we provided, right? It kind of followed along that diagram very cleanly. And usually you're not going to get a disclosure like that. Usually a disclosure is not going to step through: we took this input data, and we trained a model, and we produced the model that way, and then we applied it to data, and then we post-processed it. It's not going to map onto that model so cleanly.
And so, you know, you had a slide about disclosure diligence. It's going to be your job, in most cases, to try to look beneath the surface of the earth, no pun intended, because we're talking about geology: look beneath the surface of the disclosure to see what might be AI-related here. Often the inventors, in their disclosure, are focused on whatever problem they were interested in, and so maybe the disclosure is going to either skip over some of those steps, or just emphasize whatever was most important to them at the time, which might leave out some information that's actually relevant to this model, and therefore relevant to what could be patented and what could have significant licensing value. Right? They might frame the disclosure in terms of the end result and the resulting higher quality of the outputs here, which is great, but they may not go into all the detail about the method for getting there, for example. And if you want to just claim the outputs, that's great. But if you want to claim and license the process, you might need to get more details from them about that. Unknown Speaker 23:52 Well, absolutely, given my experience in the past, and looking back and having been burned that we didn't apply for as broad a set of claims as we probably should have with the basic system that we had. And again, this was some time ago, so things have evolved even more beyond there. And I just wanted the best for the inventor and the best for the institution, to be able to do that. So that's very helpful: to go back to the inventor and to tidy up some of the components of the disclosure, and get them to be a little bit more forthcoming, and maybe encourage them to think even more broadly. So if we go back, you know, where would this fall? And maybe I shouldn't say so much where.
What's the most productive information for you to be able to gather when you're writing your patent applications? And I'm just going to go back to the model for the moment. Is it more productive, do you think, to get into the first half of this model, where we're dealing with the training module and the model itself, because the live data keeps changing? Or is it easier to move down the process into a more finished product, I think? Unknown Speaker 25:19 It's a great question. It's a great question, because I'll give you an example where you could get tripped up. If we go back to the diagram of the model: let's say, as a result of our discussion with each other, you determined that the primary value for licensing purposes is in the final output, you know, the locations of the geological deposits, or maybe in executing the model once it's been trained to produce that output, and the focus is not on the left and top part of this diagram, right? And that's what we're going to claim: the model execution and the outputs, perhaps. It's still going to be important to investigate very carefully what the training data was and what the training module process was, because you might run into a problem with enablement if you don't describe the training data and how the training module works in sufficient detail, even if what you're trying to claim comes further down, let's say in the model execution module. And, I know you asked some of this earlier, it all really depends on what really is new here. I mean, I think you asked about, like, where does this training data come from? For example, sometimes you have training data that's from a publicly available source. I mean, there's an increasing amount of training data out there.
You know, there are websites and there are other sources of training data now for various machine learning applications in different fields that are just publicly available; whether they're free or they're paid, they're out there in the public. And if that's true, and if that's what the inventors used, we might be able to just point to that in the patent application. But if the inventors actually had to generate their own training data, or massage the training data, then we've got to look really carefully into: what did they do? What is that data? And how much of it do we need to describe in the patent application in order to enable the model, even if what we're claiming is execution of the model? Why? Because if someone's going to execute the model, they have to have the model; and to, in effect, have the model, they need the data; so we have to describe the data. Unknown Speaker 27:49 So is the data a part of the model? Or is it used kind of transiently to create the model, and once that first data is used, it's no longer really a part of the package that you might license? Once you've got a model, the model could be just constantly updated with newer live data, not that historical data. Or does that even matter? Well, Unknown Speaker 28:19 you know, this is where you're going to have to work with the inventors and find out, because that's the kind of thing that may vary on a case-by-case basis. And I'm going to say that in response to a lot of the questions here, which is: we're putting up this framework that looks very clean and neat, but in reality, a lot of these questions you just really have to look at on a case-by-case basis. I mean, I'd say, in general, in the vast majority of cases, the training data doesn't literally exist within the model. Right? It's used to generate the model. But still, ask that question. Right.
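[The point that training data shapes the model but does not usually live inside it can be illustrated with a deliberately tiny stand-in "training module": a least-squares line fit. The model that comes out is just two numbers, and it can score live data with no further access to the training set. All names here are illustrative.]

```python
def train(pairs):
    # Fit y = a*x + b by least squares -- a stand-in "training module".
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return {"a": a, "b": b}   # the "model" is just these two parameters

training_data = [(1, 3), (2, 5), (3, 7), (4, 9)]   # follows y = 2x + 1
model = train(training_data)

def execute(model, x):
    # "Model execution": applies the learned parameters to live data,
    # without needing the training data at all.
    return model["a"] * x + model["b"]

print(model)               # {'a': 2.0, 'b': 1.0}
print(execute(model, 10))  # 21.0
```

[Whether a real model behaves this way, or retains something closer to the data itself, is exactly the case-by-case question to put to the inventors.]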
So Unknown Speaker 28:57 it's not like a lookup library that it uses. Right, once it's used, it's used. And, maybe a silly question: if I used the training data today, and then that training data continued to get more information, and I subjected that model, or excuse me, the training module, to the training data again, would I get the same model out again? Unknown Speaker 29:26 You may or may not. Again, these are all situations for real careful conversations with the inventors, because it's going to affect what you need to disclose. And then I'm curious to know from you, and I know you can't answer this in general: what you would need to license later. That's Unknown Speaker 29:50 right. Well, that's what I'm also trying to get my hands around: where would the model exist in the licensing continuum? Would I take this model, which we know by the disclosure exists, but it's in the computers in the geology department, and now I'm going to license this to a third party, a private third party: do I hand the training module to them, and they then make their model? Or do I move down to the model execution, and then periodically have to keep updating the licensee with any improvements or changes that have occurred, you know, with the professor's new use of that original training module? Or is it perfectly reasonable to say: here it is at this stage, off you go with it, and just cut it at that point, and they come back and ask for more? I'm just kind of wondering where that falls as far as my obligation as licensor. Unknown Speaker 31:02 It sounds like you're asking how long the model will sort of be valid or be useful. Yeah. I mean, again, I'm saying 80/20 rule here, in the cases that I see most of the time these days.
For most applications, the model really does need to be kept updated with new data to truly keep it useful over time; otherwise its value degrades. You know, I'm thinking of an example: I've worked on a bunch of applications relating to communications network security, you know, detecting viruses and malware attacks and so forth. I mean, computer networks are changing all the time, and those models often are being updated continually, essentially continuously. You know, if they're a week old, they're no good anymore. But there might be other use cases in which that's not true. Yeah, depending on what the data is and what the application is, maybe you've got a model that is largely valid for a long time, and maybe, sure, it would be a little bit better if you could keep updating it, but it's not as essential. Unknown Speaker 32:23 So that's also then related to how you may interact with the licensee, and try to encourage them. But I think, from a disclosure standpoint, I'm going to have to go back and make sure they're as forthcoming on the training module aspects as they can be, so that we can build that into the specification and claiming. Right? It may not be claimed right away, but have it there in case, as the licensing process continues, we can go back into the application and pull things out that may be more applicable to, you know, the new case of a licensee down the line. Because, I mean, that's the hardest part about this: it's such a dynamic space, and it's so long before I can get this in front of an examiner and then finally get anything claimed. I want to be able to get out there and license it right away, if I can, right? So having that in the application would be very helpful. So let's just, for our own purposes here, go through and see that we have discussed the main points that we need to have discussed, so I can go back and get this disclosure in a little better format for you.
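[The degradation point above, that a model fit to old data drifts out of sync as the live data shifts, can be made concrete with a toy sketch. The "model" here is just the mean of its training data; everything is an illustrative stand-in, not an actual security model.]

```python
def train(data):
    # Toy "model": just the mean of the training data.
    return sum(data) / len(data)

def error(model, live_data):
    # How far the model's prediction sits from the current live data.
    return abs(model - sum(live_data) / len(live_data))

old_data = [1.0, 1.2, 0.9, 1.1]   # last period's observations
new_data = [2.0, 2.1, 1.9, 2.2]   # the environment has since shifted

stale_model = train(old_data)
fresh_model = train(old_data + new_data)   # periodically retrained

print("stale error:", error(stale_model, new_data))
print("fresh error:", error(fresh_model, new_data))
```

[The retrained model tracks the shifted data noticeably better than the stale one, which is the licensing question in miniature: who is obligated to keep feeding the model new data, and how often.]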
Yeah, Unknown Speaker 33:51 I mean, looking at this, I think we discussed most of it so far. The last point is an interesting one, and I've certainly come across this increasingly in a very recent period of time. I'm a software person, and in the last year and a half, two years, I've worked on an increasing number of patent applications involving chemistry and biology, where, you know, we've had to work as a cross-disciplinary team: AI for generating a new molecule, right, AI for detecting cancer in images, where the claims are primarily software claims, but where the specification has to describe a lot of the underlying chemical or biological science, and where the claims even may, depending on how we decide to claim them looking at that model, have to refer to the relevant biology and chemistry. So I'm seeing a lot more cross-disciplinary work involved. And again, in terms of setting yourself up for licensing success later, the last thing you'd want to do is inadvertently have a software claim that characterizes the chemistry too narrowly, let's say, and therefore limits what you can license later to a certain class of molecules when that was completely unnecessary, because the software process was just as applicable to other molecules. And these are the conversations I've been having with clients. Because, again, talking about the disclosure you get: it very often is the case that the inventors were working on some particular class of molecule, let's say, and it's not until you talk to them and ask them the question: does it have to be limited to that? Or did that just happen to be what you were working on, and that's why the whole disclosure is limited in that way? And you often find out: no, it doesn't have to be limited.
And then you can get into a really interesting conversation about what's the real potential scope of the patent, and therefore of the licenses down the road. Unknown Speaker 36:15 Outstanding. That's why I came to you: I got burned in the past on too narrow an application, and boy, we were sad at the end of that. Unknown Speaker 36:26 Too broad is a possibility, too. You need to make sure that your disclosure truly enables the breadth of the claims, so you have to be careful in both directions. You don't want to claim the world when it turns out that your AI would take a million years on the fastest supercomputers to come up with that solution, because that's probably not going to be sufficient. So you have to look at both edges of the sword. Unknown Speaker 36:53 All right. Well, let's take a break and see if there's anybody out in webinar land who has their own questions. We've been peppering each other with questions; let's give somebody else a chance. Unknown Speaker 37:08 Looks like we haven't had any come through yet. So attendees, don't forget, you can use the raise hand function, the Q&A, or the chat. We'll give it a minute to see if any questions come through. Unknown Speaker 37:28 Well, okay, if no one has any questions... Yeah. You've been asking me a lot of questions, John; I have a question for you. Now that we've gone through some of these ways of thinking about it through this framework, in the current disclosure that you have, is there anything that jumps out at you as being the best candidate for licensing, or what might have the most traction or most value for you as a licensor? Unknown Speaker 37:55 I think, for me, it comes down to what's easier. It's human nature, what's easier. It's just so much easier to explain making use of this for a particular product case.
I think that's easier, but easier may not be the best. I'm taken by trying to move back into the model a little bit further, and I like the idea that maybe there's more traction in the development of the model than there is in taking the output of a model and processing that output further through different software. And I think I'm a little biased that way, because I want to be able to have options down the line. If I go back into the diagram, I think I may have a better opportunity to fulfill strategic options from the licensing process. I think it opens up a broader category, a broader audience, that might be willing to license this, and then to work with that to try to help focus on what the potential product areas are that this model may be helpful in. So take this disclosure, which right now is very focused on mineral deposits. Maybe there's an application in agriculture that takes advantage of much of the same thing and may be helpful in mapping nutrient availability in farmland or something like that, still looking for geological or elemental composition, compounds and things that are useful in that process. And then thinking about how I might try to get that data in: how is the data coming in for the geology? Is it coming in from particular samples, or is it coming in from remote sensing? In the agricultural space, I think it may be the same. Are you taking soil samples in some sort of a matrix in the field? Are you combining that with something from remote sensing to capture it?
How does that impact over the course of the season? If you're looking to incentivize a company, it could be better if the company could apply this product to, let's say, the farm field more than once during the season. If there's value to the farmer, and this AI can help, maybe there's different information that needs to be provided at the early growing stages as opposed to closer to harvest. I'd like to have the opportunity to build that into patent applications, so I can put more eggs in my basket. Yeah. Unknown Speaker 41:05 I mean, in general, what you're talking about is: how can you, for incremental extra effort in a single patent application, build in a lot of extra return? Get broader claims, more claims, a larger number of continuations, that would apply to a wider range of more valuable licenses. And I really like your way of saying it. If we're starting on the right end of that diagram, which might be how the disclosure is written, the disclosure may be focused on the app. And I've had some of these, where the disclosure says what we've invented is a new molecule or a new class of molecules; that's the final output in that model. And then our job, primarily your job as the front line, is to push on the inventors: is the invention here really limited to that final output? Can we push back a little bit earlier in the process and say it's actually, as you said, the execution of the model, the process of doing that, that is potentially patentable and therefore licensable, even if it produces other outputs? We don't have to limit ourselves just to the final molecule or class of molecules.
And another interesting thing, as I said earlier, is the increasing amount of publicly available training data. Over time, I think we might see it get easier to swap in different training data than it is now. And then that raises the question: could you fairly easily use different training data, use the same process to tackle different problem areas, and write the patent to cover them? Unknown Speaker 42:53 Really. So that would really be like moving well back into your model, to at least support claims later on, because that's where that first training data came into the whole process. Yeah, Unknown Speaker 43:06 I mean, let me just mention something, which is the other direction. When you're talking about disclosures coming in from the computer science department, those might be on the left side of the diagram; those might be for a new training algorithm. And those inventors might be thinking just about how they've come up with a training algorithm that's more efficient, right? It can process a larger amount of data in the same amount of time. And then the job is to push them to think about the right side of the diagram. Beyond just a general increase in efficiency, does it have any particular application to a specific field where it would be really useful? Have you already used it to produce some new, better outputs (I'll say molecules again), or do you suspect that it would? Could we claim those now? Do you think we could spend a little time seeing if we can, and then claim those on the right end? And that's again where maybe the cross-disciplinary approach comes in. Unknown Speaker 44:17 Right. Yes. Fascinating. There's so much to do with just getting the disclosure right, so that it's a basis for what you need to do and what I need to do: searching for that licensee and understanding the value proposition.
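[Editor's aside: the "same process, different training data" idea above, one training procedure reused across problem areas, can be sketched with a deliberately trivial "training process." This is an invented toy example; the classifier, the datasets, and all numbers are hypothetical.]

```python
def train_threshold_classifier(examples):
    """Toy 'training process': learn a threshold separating two labeled classes.

    The steps are domain-agnostic; only the training data changes,
    which is the drafting point: claim the process, not one dataset.
    """
    positives = [x for x, label in examples if label == 1]
    negatives = [x for x, label in examples if label == 0]
    threshold = (min(positives) + max(negatives)) / 2  # midpoint between classes
    return lambda x: 1 if x > threshold else 0

# Same training process, two different fields of use (all data invented):
mineral_model = train_threshold_classifier([(0.2, 0), (0.3, 0), (0.8, 1), (0.9, 1)])
soil_model = train_threshold_classifier([(10, 0), (12, 0), (40, 1), (55, 1)])
```

Swapping the mineral-deposit data for farmland-nutrient data changes nothing in the process itself, which is why a claim on the process can reach both fields.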
And then making sure that what we build together as claims really supports that, so that they're very happily willing to license this patent application. Which takes you to another question that's related a little bit to the licensing, though I don't think we have time to dig into it too far: how does one spend time looking over the potential infringement? If you are going to license it, there are some questions about that. And that's really what your expertise brings into that process: a claim structure so that you can both litigate the claims and observe that the claims are being practiced by unlicensed third parties. Yeah, Unknown Speaker 45:24 I would say, in my experience handling software patents generally, whether they're AI-related or not, detection of infringement is always an important question to consider right from the beginning. Because with a lot of software that is being performed internally, it can be hard to detect infringement. And so there again, right from the beginning, I always talk to the inventors. It requires some bit of speculation, but some knowledge: if someone else were to be practicing this invention, would you be able to tell? How would you be able to tell? Would there be some clue? Going back to the framework we have, for example: if you saw the output of what they were doing, does that output contain any clue about the process that was used to generate it? Sometimes the answer is yes, sometimes the answer is no, and sometimes the answer is somewhere in between. Which is, again, why these conversations need to be had every time.
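[Editor's aside: the "does the output contain a clue about the process" question above can be made concrete with a toy fingerprinting sketch. The checksum scheme here is entirely invented for illustration; real telltale signs would be far subtler, statistical artifacts of a model's outputs, for example.]

```python
def generate_with_fingerprint(value):
    """Toy 'claimed process' whose outputs carry a telltale trace.

    Here the trace is simply a checksum digit appended to the output.
    """
    check = sum(int(d) for d in str(value)) % 10
    return f"{value}-{check}"

def looks_like_our_process(output):
    """Detector: does a third-party output bear the process's fingerprint?"""
    value, sep, check = output.rpartition("-")
    if not (sep and value.isdigit() and check.isdigit()):
        return False  # no fingerprint present at all
    return int(check) == sum(int(d) for d in value) % 10

sample = generate_with_fingerprint(1234)  # "1234-0"
```

If outputs observed in the marketplace pass the detector, that is the kind of clue that supports claiming a process performed in secret.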
If the answer is that, if you saw the output of someone practicing your invention, you could tell, because there'd be some telltale sign of how it was generated, then we might have a good case for patenting what we call the model execution process, or even the training, even if that was being performed in secret, because we could use the output to give us some clue that someone was infringing. Does that make sense? Unknown Speaker 47:00 You know, that's always a helpful part of the discussion with licensees. Nobody ever really wants to get into that discussion, because it's expensive and it's convoluted, and it's so difficult sometimes to think about. From a licensing standpoint, when I'm building a tech brief, I'm most often trying to think through what the good and wonderful things to do here are, not the okay, let's go to battle. But yeah, it's got to be there, right? And so that's part of it. Unknown Speaker 47:38 You know, you're talking about the tech brief. I'm sure this whole situation we're talking about might make your life a bit more complicated when it comes to generating a tech brief. If we had been talking about a molecule designed the old-fashioned way, it might be a little simpler; the situation is more well defined. But in this situation, I don't know what your general thoughts are about what has to go into it. Unknown Speaker 48:08 I don't know, but it's getting close to the end of our hour here. So let's flip over a slide and see if anybody else out there has questions they may have come up with. They can be about any part of this discussion; we don't need to go backwards or forward if people want to go back and discuss something in particular.
Yeah, Unknown Speaker 48:35 they can relate to cases you've worked on, situations you've had to deal with, or things you're wondering about. Unknown Speaker 48:47 So far, it looks like we haven't had any submitted just yet. So attendees, a reminder: you can submit in chat or Q&A, or raise your hand. But I also know John and Robert have a bunch more content, so if you'd like to keep moving through, we can. Unknown Speaker 49:02 Just to make sure, I'm watching the time, and it's ten of one. When do we close this? Unknown Speaker 49:09 I think we do on the hour, but we usually have a buffer built in. So if you have a little bit more, don't worry. You can drop off if you need to. Okay, well, Unknown Speaker 49:19 we'll use some of that. All right. So without any other questions, I'll go back to the question that Robert was trying to describe as we turn the page here: tech briefs. That's kind of the bread and butter of introducing a licensee to what we have to offer. There's very little other way for us to communicate: a lot of times they're on some type of curated website, and a lot of times you'll try to search for a particular audience and send them direct solicitations. I use the term tech brief; I think that's pretty common within AUTM. But it's the short, one-to-two-page summary introduction of the features and benefits of what we have available to offer as a license. And this is fascinatingly complex and interesting. So I think it provides an opportunity to be that much more creative about how to approach potential licensees, and how to get into some very interesting licensing discussions. What are the royalties? How do we calculate the royalties? How do you cut up the fields of use? Can you cut up the fields of use?
How involved is the professor? I know from some past experience that professors have a lot of interest in having some continued involvement in the commercial development of what's been generated in their lab. If I go back to the geology department, I've got a lot more questions, not only those that pertain to the things that you need, but also to feed back in. It almost feels like we're AI engineers ourselves, trying to feed more information back to you as to where we're going to cut that licensing definition: going back all the way to the training model itself, or at the end, where the university is going to be updating it, providing updates going downstream. So what would be in the tech brief? Am I going to try to describe the training model? The end-product model? Applications of using the AI? All of that is going to inform, I think, how I would move forward. Just give me a second here, because I see a question that came up. Joshua asked whether using publicly available data to optimize a set of parameters, a combination of parameters for classification or diagnosis for a specific indication, would be patentable. I guess that's more a question for you, Robert. Unknown Speaker 52:23 Yeah, it's a great question. So I'm going to first say it depends, and then I'm going to explain why, because otherwise that's going to seem like a cop-out.
In general, if I fill in an unstated premise of your question, that this is being done with a known training model, then I think it's not patentable. Really, it should be considered obvious to use known parameters and publicly available training data, with a known training process, to produce a classifier. The only argument you'd have to the contrary is that for some reason it wasn't obvious to try to put all those things together; maybe you'd have a case there. But I think that's a hard sell at the patent office, and certainly in court later if you had to make it. So if you want something patentable, and what you're trying to claim is the training, I usually look for something new in the training process, the training algorithm. There are a lot of off-the-shelf AI and machine learning tools now that you can use for training, where you just use them as is; I'd be hesitant to try to patent that. Now, you could say that the parameters are new, maybe the data is new. But in method claims, if we're looking at this as a method claim, you for the most part need to show something novel in the steps or combination of steps, and not just in the inputs. So I would say most likely not patentable, unless you've got something new in the process for training. Hope that answers the question. Unknown Speaker 54:19 There you go. Thank you, Joshua. So beyond that, within the realm of licensing, I have other alternatives as far as licensing is concerned and encouraging licensees. One of the things that may be valuable to my licensees is to pursue international claims. Are there jurisdictions that, in your view, I should try to avoid because it's really hard to get a patent through those jurisdictions when it's dealing with AI, or are there ones that are more friendly?
From my standpoint, it would be nice to have some options, especially with this particular disclosure in geology; there's worldwide opportunity there. And so it would be interesting for me, when I'm writing the tech brief, to maybe incentivize somebody with the idea that this could be a protected opportunity in certain jurisdictions that they might find very interesting. Unknown Speaker 55:29 You know, so far this is largely untested, in the sense that, I would say, in most cases the differences among jurisdictions for AI are the same as those that apply to software patents generally. There hasn't been a lot in terms of patent office or court decisions so far that tells us anything different about patentability of AI versus patentability of software generally. Now, that being said, because we've talked about a bunch of examples in which AI is being used to process data gathered from physical-world sensors, or to produce outputs that represent something new, like a molecule: those are generally easier to patent than what I call pure algorithmic software that doesn't have that kind of tie to the real world. And to go even further, certainly any software-based process, AI or not, that controls something in the real world (something for autonomous driving, or for controlling a robot or any other machinery) is generally much, much easier to patent. So that gets into how you write the claim. It might be easier to patent if you've got a method which says: executing this model to generate a control signal. Now, do you stop there? Or do you say: receiving the control signal, and using it to control an actuator, to drive a motor? That might help you from a patentable-subject-matter point of view.
But it might decrease the scope of who you can license to, right? Because only someone who's actually using the signal to drive a motor would have to take a license to that. So that's the double-edged sword. Unknown Speaker 57:37 Sandy just updated us: we can go to 1:15. Okay, Unknown Speaker 57:41 great. I mean, we could go to five o'clock, but we don't want Unknown Speaker 57:48 that. Okay, that is true. So, improvements. One of the things that gets written into a lot of licenses coming out of universities is where and how improvements get delivered to the licensee, based on continuing work at the university. Some of that is that they have a claim on new patents that may come out, and others are improvements on the patent that you've just written. But in this case, with AI, it's so evolutionary, meaning it's by definition this changing thing, whether it's in the hands of the university or in the hands of the licensee. I'm struggling a little bit as far as how I would try to describe that in a license. This particular phenomenon seems a little bit like what is done with cell lines and other things that may be changed with new sequences transformed into them for different purposes, but on a whole different level. What's your experience with how to deal with the improvement side? Unknown Speaker 59:07 First, let me just agree and acknowledge that the ability of AI to update itself, to update models, to update data, automatically, over time and repeatedly, certainly does raise some interesting questions and blur some of the lines between what exists now and what exists later, and what is covered by a patent now versus later. It's harder to carve that line off in time because of this ability to have automatic feedback loops. Right.
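[Editor's aside: the automatic feedback loop just mentioned, a deployed model folding observed outcomes back into its own parameters, can be sketched in a few lines. The update rule and numbers are invented for illustration; no real product's method is implied.]

```python
def update_model(weight, example, outcome, lr=0.5):
    """One step of a toy automatic feedback loop: fold an observed
    outcome back into the model's parameter (made-up update rule)."""
    prediction = weight * example
    error = outcome - prediction
    return weight + lr * error * example  # nudge toward the observed outcome

weight = 0.0
# Deployed loop: each observation (true relationship: outcome = 2 * example)
# automatically updates the model. The model a year from now is not the
# model that shipped; it is the product of the updating process.
for example, outcome in [(1.0, 2.0)] * 4:
    weight = update_model(weight, example, outcome)
```

This is why licensing the updating process, rather than a frozen model artifact, keeps the license aligned with what the licensee is actually running over time.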
And that's one part of why I foreshadowed this earlier, when I talked about what I call the refinement module, the thing that modifies the output to produce the final output. Sometimes that can be done by a person, sometimes it can be done by software, and sometimes by both. The same is true with the feedback loop. Here's something I'm pretty confident about saying: if you've patented and are licensing the automated updating process, I don't see what problem there is with that. As long as someone is using that process to update their model for the next ten years, then that's what they've committed to, and I don't see any problem with it. Unknown Speaker 1:00:42 So that goes back to what I need to do. My next step is going back to the professor with that kind of summary outline of his disclosure to me, and really delving into the earlier stages where the training model is created. What is it? How is it digesting data? And how does it evolve? So that you can have that in the disclosure going downstream. Unknown Speaker 1:01:14 So let me just mention another option. I'll give an extreme version of it, which I'm not necessarily suggesting anyone try at home yet, but it's kind of where this points. You could imagine that you've got claims on the training process, you've got claims on the model execution, and then you've got a claim which says: any product produced using any of those other processes.
Product-by-process claims. Unknown Speaker 1:01:50 I would have liked that back with my lettuce algorithm, so that I could have had anything that was controlling the building temperatures and lighting and all of the things that were necessary to get my good and efficient lettuce growing. There are so many other things I wish I had had that for back then. Unknown Speaker 1:02:10 Right. So I know I'm being a little bit tongue-in-cheek, but are there more subtle ways to write claims like that, that would be valid and enforceable, and therefore give you some ability to reach further into the future from a licensing perspective? The legal question, which is the first part, valid and enforceable over time: I don't know. It's going to depend on how enablement is applied in these contexts to future products that don't exist yet, probably a lot on a case-by-case basis, and on exactly how the claim is written. But I certainly will be attuned to the possibility of doing that, and have been, and to trying it out, in a reasonable manner, with clients who are willing, and seeing where these things go. Unknown Speaker 1:03:09 So it's almost like marching orders: we should take a look at every disclosure that comes through and say, hmm, no matter what it is, is there some way to add a layer of AI into it, even if it's not immediately obvious as it is with this particular disclosure? Maybe it's worthwhile pointing out that if you're looking at optimizing something, or making something more broadly applicable, we can add some of that in other disclosures. Unknown Speaker 1:03:48 Right.
It's essentially asking: does this AI create the ability to broaden the disclosure, and potentially the claims, in a way that could capture things you would normally have to use a continuation-in-part or a new patent application to cover later? Unknown Speaker 1:04:07 Right. So, just briefly, I'd like to interrupt you; Joshua asked another question: technology related to AI and ML software moves so fast, so would it be worth pursuing patent prosecution in a university setting, which takes quite a long time to allowance? Unknown Speaker 1:04:30 Okay, I'm trying to make sure I understand the question, about the premise that prosecution necessarily takes a longer time in universities. When I hear prosecution, I'm thinking about the time from when you file the patent application until it's granted, which in my experience isn't necessarily any longer for a university than elsewhere. If you're suggesting that maybe a university takes a longer time to get the patent application filed, that's also not necessarily my experience either. What I do see, and people often ask me this about software patents generally, is that the rate of innovation can be much faster than the speed at which the patent office examines patent applications. So you always have to consider that. It's worth considering accelerating examination in some way if you really want to get an issued patent that you can license sooner: through Track One, or through the Patent Prosecution Highway, or a petition to make special if it applies. The patent office has a number of bases for accelerating examination, and I've definitely made use of a number of those. There's green technology, and there has always been age of the inventor. So this, I don't think, is limited to universities. But yeah, if you need to speed things up, make sure you're aware of all the options for doing that.
Unknown Speaker 1:06:06 That's really helpful, because dealing with the timelines can be a challenge from any licensing standpoint. It's always good to be able to license when you've got a claim in hand and still something pending. But in these cases, I think Joshua was right: you've got to be entering the licensing arena at almost the provisional patent application stage. And frankly, my preference is to work early on these licenses. Sometimes you've got to give a little bit on the early-stage value, but in my experience it has been very helpful to have a commercial partner who can help you mine claims out of a disclosure. And I think, Robert, you and I talked about that before: put elements into the disclosure that you can go back and start to use as the basis of claiming, especially those things that are further upstream in your model, and not so much product-by-process at the very end of the model. Because it's a different sell process to say: look, you can have a seat at the table, and you can help convert this into something that is more detailed for your particular needs as a licensee, a little bit more of a partnership. In some cases where I've done that, it's been very helpful for the licensee to sponsor some research, if it's necessary to still do some research in the professor's lab. Sometimes it sweetens the pot if the professor has a little bit of time in their schedule to work as a consultant and help to effect that technology transfer. But whenever I get into those situations, I want to have the ability, back in the patent application, so that as that process unfolds and the research inventor and the licensee start to do some things, I can participate in the outcome as the licensor. And so I have to be very, very involved with you.
Robert, to ask: what if this happens? How can we write that into the specification? And is it potentially claimable? And to build that out. Yeah, Unknown Speaker 1:08:45 I mean, what all of this is saying to me is that, just as the framework we showed has a feedback loop, so does the relationship between the two of us. My goal is to get you something that has maximum value and gives you the maximum range of options, and to do that, we need to be in close communication with each other from the beginning, so that I can get from you what I need in order to create that application. We all know that once you submit at least the non-provisional utility application, that's your filing date; you can't add more to it after that without getting a later filing date. And so doing all this feedback work upfront, about what the end goal for licensing is, and then working that back to what we need to get into the application from the beginning, is really critical. It does take some more work upfront, but as I said earlier, I think it's not all that much more work for a potentially very large payback, and in fact it can yield some cost savings down the road, because the incremental cost of churning out some more continuation applications based on the same specification is less than having to file new patent applications, new applications that don't have that early filing date. So I think it's important to look at all of those costs and benefits together. Unknown Speaker 1:10:23 Now, we have, at least by my desk clock here, about two minutes. So I think we've asked each other an exhausting number of questions. Are there any final questions? Joshua, thank you for all of your participation; that was very, very helpful.
Any further questions? Otherwise, we'll turn it over to Sandy for closure. Unknown Speaker 1:10:50 Looks like we have a thank-you coming through. So thank you, participants, for joining us today and listening. John and Robert, any final big takeaway, like a one-liner? If you remember one thing from today, what should it be? I'll put you on the spot for that one. Unknown Speaker 1:11:10 In my case, I want options. As a licensee or licensor, I want to have as many options as I can to go out to industry and find as many potential licensees as possible who will be interested in what I'm doing. That's my goal. Unknown Speaker 1:11:26 I would add to that: to enable me to help you get as many options as you can, get me informed and involved early, so that I can help you with that. Unknown Speaker 1:11:38 That's why I came to you with my disclosure. I'll go back to the department, I'll work harder to get that disclosure firmed up, and then we'll have a meeting between you and the inventor and get that ball rolling. Unknown Speaker 1:11:52 Thanks so much for thinking of me, for sure. Perfect. Unknown Speaker 1:11:57 Well, thank you both so much, on behalf of AUTM, for joining us today and sharing your insight. This was a really fun, creative way to get, I think, a lot of really important points across for our attendees. So thank you for presenting this hypothetical and sharing all of this great information with our group and attendees. Thank you so much for joining today. As a reminder, a recording of the webinar will be available for viewing on the AUTM Learning Center within a few days of this session and is included in your registration. We will also include a copy of the slide handout and a certificate of attendance there as well. And don't forget to complete the webinar evaluation, which will open when you close out of this session, as well as in a follow-up reminder email tomorrow.
This helps us to serve your needs in the future. So once again, thank you so much, and I hope that everyone has a great rest of their day. Thank you again, John and Robert. Unknown Speaker 1:12:50 Thanks for having us. Unknown Speaker 1:12:51 Thank you. It was a lot of fun. Have a good one. You too. Transcribed by https://otter.ai