Speaker 1 1:03
...of the executive order, the request that the USPTO develop guidance on subject matter eligibility under 101: that team is working on that, and we're also diving into some other things right now, such as AI-generated prior art. How does AI impact the person of ordinary skill in the art standard? How does it impact...

Hello, and welcome to today's AUTM webinar, USPTO AI Patent Initiatives. My name is Don Vo Young, and I'm a member of AUTM's Educate team and today's staff host. All lines have been muted to ensure high quality audio, and today's session is being recorded. If you have a question for the presenter, we encourage you to use the Q&A feature on your Zoom toolbar. Should you need closed captioning during today's session, the Zoom live transcript feature is turned on and available on your toolbar. Before we begin, I would like to thank and acknowledge AUTM's online professional development sponsor, Marshall Gerstein. We appreciate your ongoing support. And now I would like to welcome today's presenter, Matthew Sked. We are excited to learn from you today.

Thank you, Don, and thank you, everybody. Good afternoon and good morning, wherever we happen to be joining from. As you pointed out, my name is Matthew Sked. I work in the Office of Patent Legal Administration at the USPTO. For those who aren't familiar with it, our department is the patent policy shop at the USPTO. We are the department in charge of developing all the patent policy here at the USPTO, whether it's drafting rules, drafting the MPEP, advising the Under Secretary on patent matters, whatever it happens to be. That's what our shop does. That's kind of my day job, but my other, awesome role here at the USPTO is that I lead our artificial intelligence and emerging technology (AI/ET) policy working group. So with that, let me start my slides and we can get into that. Okay, all right, hopefully you can see my screen now.

As I said before, I lead our AI and ET policy working group. Before I get into what that working group is, let me give you a little overview of where we're looking to go today. I'm going to talk about the AI/ET working group and essentially all the work streams and initiatives that we have going on, and fill you in on all things AI going on at the patent office. Then I'll get into some specific patent policy initiatives that we recently came out with. The first one is the inventorship guidance, which came out in February, and the second is the guidance on the use of AI tools by practitioners and others before the USPTO. This is the guidance that came out last week; I'm kind of excited to talk about it, and this is our first public discussion of it. So I guess you got lucky that the timing worked out great for that. Then I'll talk about next steps, what our working group is doing moving on from here, and then I'll happily answer your questions at the end. I'm hoping to go about 50 minutes or so and then leave time to answer your questions.

Okay, so let's talk about the AI and ET policy working group. As you can see here, technically we have five different work streams, but we have four here; I'll talk about that in a second. Let's start from the left and talk about the things that we're doing here at the PTO. On the left, we have our patent policy sub-team as part of our working group.
This is a team that's working on just our patent policy initiatives. They worked on the inventorship guidance, which I'll talk more about in just a few minutes. We worked on the practitioner notice. They are working on Section 112 analyses; all these sorts of things are things we're thinking about and taking a look at right now. We just got those two big policy initiatives off our desk, and the big one that group is working on right now is the 101 initiative. I'll talk more about that later, but we have a couple of other things brewing within the team as well.

The next work stream is the broader IP and tech policy team. This is essentially working on all the policy that isn't specific to patents. They are working on the executive order with regard to copyright: part of that executive order is a request that the USPTO come up with recommendations for executive actions with regard to copyright and AI, so that team is working on that. They're working on other things as well. They're looking at trademarks right now; I think one of the things you'll see coming out of that group soon is name, image and likeness and the Lanham Act, diving into those issues, which are part copyright but probably a bit more concentrated on the trademark side of things. We're also looking at tech policy: what can the USPTO do with AI here at the agency, keeping in mind the executive order from the President, which lays out some restrictions on how we use AI, and also the memo from OMB, which came out a couple of weeks ago and does the same thing. So how can we leverage AI here at the agency in a way that works for stakeholders, works for the USPTO, and still works within the restrictions placed on us by the executive order and the OMB memo? Those are the kinds of things we're thinking about on that team.

The third one is our workforce development team. That's the team that's essentially trying to obtain and retain the most knowledgeable people in AI that we can. That's all our hiring efforts and all our training efforts: for the staff that we have now, how do we train them up on AI, and how do we keep them up to date on such a fast-changing and fast-moving technology? We have lots of initiatives working there, working with HR, working with our Patent Training Academy, and all those sorts of different people.

I think I mentioned this before: our AI policy working group is made up of members from all different departments here at the PTO. It's not just a patents thing, and it's not just a board thing. We have members from different sections of patents, different sections of trademarks, the PTAB, the TTAB, the CIO, the CCO, the General Counsel, the Solicitor's Office, HR; lots of different people from lots of departments coming together trying to tackle all these AI issues that are coming up. It's very difficult because AI is moving things so fast, but we're doing our best to stay on top of things as much as we can, and that's why we have a dedicated team doing this.

There's one work stream that's not shown on the screen in front of you, and that's our international and interagency work stream.
Those are the people who are essentially moving these same policy issues along, but more at the interagency and international level. It's discussing with other agencies how they're handling things and how we're handling things, so we can coordinate and work together. It's working with fellow patent offices, members of the IP5, etc., on what they're doing with regard to AI policy and what we're doing, and to what extent we can collaborate and work together. I don't show a separate work stream here because it essentially overlaps with what we're covering in the other ones, but we do have a team working specifically on that, looking at the same things at this interagency, international level.

And then last but not least is our AI and ET Partnership. Let me break down what the AI/ET Partnership is. This is a collaboration between us, the USPTO, and you, the stakeholders, the public that operates before the USPTO. We take great value in the feedback that we receive from stakeholders, whether it's through requests for comments, engagement sessions, roundtable discussions, listening sessions, whatever it happens to be. We really do value that input, and with something like this, which is extremely important and extremely fast moving, we want to make sure we hear from the stakeholders, right? Because we don't know what we don't know until we're told, right? So we want to make sure we're constantly engaging with our stakeholders on all these issues, so we hear the problems that you're seeing, issues you may be seeing down the road that we may not have even thought of yet, and how we can address those things. So part of our work stream is specifically this engagement between us and the public.

We've had several different engagements on this so far. We started back in June of 2022 with a kickoff event. That kickoff event was more patent policy focused: we talked about inventorship, we talked about subject matter eligibility, we talked about 112. And then we've had several different engagements since then: one on biotech and AI; one on AI-driven innovation, where we talked about enforcement and talked more about inventorship. We had a session directed to the AI tools that we use here at the PTO, essentially what tools we're using to help examiners examine patent applications, to help us classify patent applications, or how we engage with our stakeholders through some of the automated internet tools that we have, and then hearing back from the public on what they would like to see from us, or thoughts they have on the current tools. And then, most recently, just a couple of weeks ago, we had a public symposium out in Los Angeles, and this covered more copyright issues. As I talked about before, we have that requirement to recommend executive actions on copyright, and so we're in the process of data gathering to do that. The United States Copyright Office (USCO) is putting together a report on that, and we're working with them because they did a huge data gathering effort for that report, which they still haven't issued, but we're working with them to get that data and learn more about the issues.
And we're doing some things on our own, right? This public symposium was one of those things, to try to gather more information from the public. So we had discussions about inventorship and authorship: what's the difference between inventorship and authorship? Should those standards be the same? Should they not be the same? Why or why not? We had a panel discussion on name, image and likeness. So we're looking at these issues and trying to think about them a bit more critically as we move forward, and trying to figure out, okay, what executive actions should we actually be recommending?

So with that, let me get into our first patent policy initiative, the big patent policy initiative I'm going to talk about today, and that's inventorship. This is inventorship of AI-assisted inventions, and it starts out with the Thaler v. Vidal case. I think most of you are probably pretty familiar with it at this point, but I'll go through it again for posterity's sake. Dr. Thaler filed a couple of patent applications here at the USPTO. One was to a fractal container, and one was to a light that would blink at a certain frequency. In filing these applications, Dr. Thaler named his computer, which he calls DABUS, an acronym for Device for the Autonomous Bootstrapping of Unified Sentience, as the inventor on his patent applications.

So this came to the USPTO, and it was flagged in our pre-exam process, so the applications never actually got to an examiner. They were flagged in pre-exam because the pre-exam specialists have a whole checklist of things they look for to make sure an application is complete, and one of those things is the inventors named. They looked on there and saw DABUS, which the applications pretty much outlined was a computer; it was not a person or a person's name. And so we sent out a notice saying, you have to give us an inventor. We had a whole back and forth with them. They eventually petitioned us to please let them proceed with their applications naming this computer as the inventor. We had a final agency action where we said no, we cannot permit you to do that: from our reading of the laws, to be an inventor you have to be a natural person. Dr. Thaler then sued us in the Eastern District of Virginia. We won that suit, it was appealed up to the Federal Circuit, and the Federal Circuit agreed with us as well.

So essentially, the decision on whether an inventor can be a computer or some entity that is not a natural person really hinged on 35 U.S.C. 100(f). 35 U.S.C. 115 says you have to name the inventors and provide declarations from those inventors, and then 100(f) defines what an inventor is: an individual, or individuals collectively, who invented or discovered the subject matter of the invention. So it really came down to the term "individual," and the court looked back to Supreme Court case law and found that "individual" means a natural person unless Congress intended a different meaning behind it. Since there was no different meaning, the court concluded that an individual has to be a natural person, so the inventor has to be a natural person. This is an important decision, but just as important as the question it answered was the question that it did not answer, right?
The question they answered is whether a computer, or some non-natural entity, can be an inventor, and the answer to that is no, right? You have to be a natural person. The question they did not answer, and they were very explicit in stating that they were not answering it, is whether inventions made by human beings with the assistance of AI are eligible for patent protection. And that's the more common scenario we probably see today, right? People in combination with AI creating things. That is not the question the court answered.

So because the court did not answer it, along came the executive order from President Biden back in October. It was a very broad-reaching executive order that covered lots of different agencies, with lots of requests of the Department of Commerce specifically, and three requests of us in particular; I've already touched on some of them. The first one being copyright: recommending executive actions on copyright.

Speaker 1 14:53
The second was guidance on 101 subject matter eligibility. And the third one was answering this specific question that the Federal Circuit did not answer: how should we be looking at inventions that we term AI-assisted inventions? That is, inventions that are created with the use of AI in some capacity. It could be a very minor capacity, for example, you're just using the AI to search something, or it could be a very large capacity, where essentially you're typing some prompts into the AI and hoping the AI solves a problem for you. To what extent can these things be patented? The big question is, are they even patentable to begin with? The second question is, if they are patentable, then who should be the inventors on these things? Based on the executive order, the USPTO was given 120 days to provide this guidance to our stakeholders.

The guidance did come out in February, but before that, even before the executive order, we were looking into this issue and knew that at some point we were going to have to come to some kind of conclusion on it, or provide some kind of guidance to our examiners or to the public. So we went out to engage with the public. As I said before, we really do value public input. When we go out with a request for comments, or we have listening sessions, I always ask people to please respond to these things. Please give your input, because we do look at it all, and we do consider it all. It's good to know where the public stands and the different views that are out there, to help inform us as we try to figure out where the best place to land is. And so in doing that, we had several engagement sessions regarding this. We had a request for comments that was out between February and May of last year, and we had 69 comments in response; a lot of them were very well done, very thoughtful, well thought out comments. We also had a couple of stakeholder events, one on the East Coast, in Alexandria, and one on the West Coast. Combined, between those listening sessions, we had over 800 people. So we heard from a lot of different people and got a lot of different perspectives.
We even had the Journal of the Patent and Trademark Office Society, which you see in that last bullet, do a special issue for us, where they gathered a lot of scholarly articles on this issue. They released that a few months ago. I don't know if anyone here is a member, but they published all that scholarly research a few months ago.

Based upon this public information gathering, we then developed the inventorship guidance that we released in February; it came out on February 13, 2024. The big takeaway from that guidance is that AI-assisted inventions are not categorically unpatentable. We don't believe that the Thaler decision extends directly to these types of inventions; the grounds for finding the applications unpatentable in the Thaler decision, I don't think, apply in this specific situation. We did get some comments to that effect: there was some public feedback that thought these things are categorically unpatentable anytime you use an AI to any extent. We didn't agree with that; as I said before, I don't think the Thaler decision reads that far. I think the statutes are clear as to what inventorship is and what's required of it. Inventorship, essentially, is that you must name the inventors, and we know from Thaler that inventors are natural persons, so the statute, under 115, essentially says you have to tell us who the natural persons are that are the inventors, and provide declarations from those persons. So we felt the statutes were pretty clear as to what inventorship is and what's required of it, and they don't say certain tools can't be used. They don't say that if AI contributes in some way, these things are not patentable. There's nothing in the statutes that infers that. So the place we thought to land was that these things are potentially patentable.

So again, with that big question out of the way (these things are not categorically unpatentable), the question becomes, okay, when are these things patentable, and if they are patentable, who should be named as the inventors in these applications? To try to figure that out, we leveraged joint inventorship law. We have a couple hundred years of joint inventorship law, interpreting 35 U.S.C. 116 and what it means to be a joint inventor. It's not the same exact thing as what we're thinking about here with AI-assisted inventions, but it's similar, right? Joint inventorship, essentially, is where we don't have a single person who individually conceived of an invention, and the law gives us a way to think about that and determine who the inventors are and who should be named as inventors in applications where you don't have a single person who is the inventor. We have the same issue here with AI-assisted inventions: we don't have a single person who conceived the entire invention. You may have gotten help from other inventors, they may have gotten help from some kind of computational tools or AI systems, whatever it happens to be, and we're trying to answer that same kind of question, who should be named as an inventor? So we're leveraging that case law and using the
same test from joint inventorship case law, which is a significant contribution test. Essentially, to be named as an inventor on an application, you must make a significant contribution to that invention. So these AI-assisted inventions are patentable when you have at least one person who has made a significant contribution to the invention. Then you have someone who can be named as an inventor, and you can file your patent application. Where we have a situation where no person has made a significant contribution, we fall back into Thaler territory: we don't have any inventors, and the application is unpatentable. So that's where we landed.

Then the question is, okay, what does it mean to make a significant contribution to an invention? Well, we looked to a case called Pannu v. Iolab. It's a case from maybe about 20 years ago, and what it did nicely, we thought, is break down the joint inventorship case law over the years into what exactly the courts look at to determine whether someone is a proper joint inventor, whether they've made that significant contribution, and it breaks that down into a three-factor test. Like most courts, we like three-factor tests; factor tests simplify things and make them a little more straightforward. So we've used these factors to determine whether you've made a significant contribution in the AI-assisted invention context. Before you are the three factors that Pannu lays out for whether you're a joint inventor or not. The first one asks, have you contributed in some significant manner to the conception or reduction to practice of the invention? The second is, did you make a contribution to the claimed invention that is not insignificant in quality, when that contribution is measured against the dimension of the full invention? And the third factor is, did you do more than merely explain to the real inventors well-known concepts and the current state of the art?

So let's go back to that first factor again: contribute in some significant manner to the conception or reduction to practice of the invention. Now, if you go back to Pannu, when they're talking about reduction to practice, they're talking about simultaneous conception and reduction to practice, which is a part of inventorship law that was mostly discussed in the interference context, when we were trying to battle out who the first to invent was, and it really arises in what they call unpredictable technologies. Those are technologies where, even though you may have conceived of something, you don't know if it actually works the way you intend it to work until you actually build it. Maybe more the pharma areas, biotech areas, those sorts of things were deemed unpredictable technologies. In those areas, the courts essentially conclude you haven't had conception until you've built it and seen that it works the way it's supposed to work. That's what that reference to reduction to practice is. Because generally, when we're talking about reduction to practice, we're not saying reduction to practice is sufficient to make you an inventor, right? Invention is all about conception. It's not about reduction to practice.
I think the usual illustration they give is that you have someone who runs a lab at a university, and they come up with an invention, and then they get their lab assistant to program it for them or build it for them. It's the person who conceives of the invention, the one who's running the lab, that's the inventor. The lab assistant who just builds it for them is not generally found to be an inventor unless they individually, on their own, developed something or added something to it that's significant. Otherwise, it's all about conception, not reduction to practice; but reduction to practice is still in there, because that's the quote from Pannu, and we'll talk more about reduction to practice in just a second, when we get into some of the guiding principles.

That second factor is, make a contribution that is not insignificant in quality. So the first one is, have you made a contribution to conception? The second one is more quantitative: okay, let's look at the scope of what you're actually claiming. Each claim is individually looked at on a case-by-case basis. For the claim that you have in front of you, the scope of that claim: how significant is the contribution, given the scope of what you're claiming in that specific claim? This could come out different ways depending on what the scope of your invention is, what the person actually contributed, and what the AI may have contributed. It's all about what you're claiming, the scope of what you're claiming.

And the third one is doing more than merely explaining to the real inventors well-known concepts and the current state of the art. Just explaining where the current state of the art is, generally, is not going to be enough. One case that comes to mind, I think a person was consulted as to what products were currently on the market, what features those products have, and maybe some features that would be nice to add to some of those products. And the court found that that doesn't make that person an inventor. Just telling the other persons, the ones who eventually do invent, what currently exists was not found to be a significant contribution providing inventorship.

Speaker 1 24:44
So note that all three of these requirements have to be met to be an inventor. If you miss any one of them, you're not an inventor; all three have to be met. A first focus there at the bottom is that this analysis is focused on a natural person's contributions, right? Patent laws in general: what's the focus, what's the whole point behind our patent laws? It's to incentivize innovation, particularly to incentivize human innovation. And we think this test, this approach, falls right in line with that: we're focused on human contributions the same way patent law is focused on human innovation. So just keep that in mind. We're not so worried about what the AI did or to what extent you may have used AI in the invention creation process. It's focused on what the persons have done. We only care about what the AI did to the extent it may show the person hasn't done something that they think they have done. It's really focused on what the person has done, and whether that contribution is significant or not under the Pannu test. And keep in mind that 116 talks about how each inventor does not need to make the same type or amount of contribution.
The same comes into play in joint inventorship, right? We have some inventors who make larger contributions than others, but as long as a person has made a significant contribution, they can still be named as an inventor, even though their contribution may not be as large as someone else's. The same thing comes into play with AI-assisted inventions: the AI may have made a very minor contribution and the person a large contribution, or vice versa; the AI may have done the majority of the work. But as long as the person has made a significant contribution, it doesn't matter one way or the other. We're focused on whether that person has made a significant contribution, not the exact ratio of AI contributions versus human contributions, or anything like that.

Speaker 2 26:35
Okay, so applying this test

Speaker 1 26:40
can be difficult, right? Joint inventorship law is not the most straightforward law we have out there. It's very case by case, very fact intensive and fact specific. Understanding that, and trying to provide a little bit of assistance to stakeholders and examiners in applying this, we put together some guiding principles to hopefully help everyone out: how we see it shaking out in some scenarios, whether something is or is not a significant contribution under this test. Now, as I said, these are guiding principles; these aren't absolute findings in every single instance. As I said before, this is all very case by case and very fact intensive. So just because we have a guiding principle doesn't mean it couldn't shake out the other way every once in a while. These are talking more in generalities: hey, for creating things with AI, where the AI has done this and the person has done this, where would it shake out? Who would be the inventors? That's what these principles are trying to show.

The first guiding principle, I think, is probably one of our most important ones, and it gets back to the fact that we care so much about what the humans have done and what the human's contribution is. It is that a natural person's use of an AI system in creating an AI-assisted invention does not negate the person's contributions as an inventor. So just because you happen to use AI in some capacity, whether it's a minor capacity or a major capacity, that doesn't negate what the person has done. We're focusing on what that person did and whether that person made a significant contribution, regardless of how much they've used AI. As I said before, it's not a ratio thing; it's not "oh, there's been so much AI used, this person can't be an inventor." It's focused on what that person has done. So just because they use AI doesn't negate their contributions as an inventor. That's the first guiding principle.

Now, the second guiding principle is based on some joint inventorship case law that we've looked at over the years, looking at people who recognize a problem, present the problem to someone else, and the other person invents. The traditional scenario is that you have a manager at an engineering firm, and the manager goes to his engineers and says, hey, can you build me a widget that does A, B, and C?
And then the engineers go off, they conceive an invention, they build it, and then they file a patent application. The manager who simply told the engineers, hey, please build me a widget that has these features or does these things: that doesn't make the manager an inventor, right? It's all about conception. So thinking of it in that way, and then applying it in the AI-assisted invention world, we would say that a person who simply prompts an AI to solve a problem, if that's all you've done, and you just gave a simple prompt, say you go to a generative AI system and just type in "design a widget that does A, B, and C," that's probably not enough to make you an inventor, if that's all you're doing. So, as I said before, just recognizing a problem or presenting a problem to an AI, if that's all you're doing, is probably not enough to be a significant contribution.

But we do hedge a little bit here, because we do recognize that in prompt engineering and other areas, we have very bright people working with AI systems, figuring out how best to interact with those systems and get them to create something on the back end that's actually valuable and useful, and there could be some significant contributions in that. So I don't want to say categorically that anytime you're just putting inputs into an AI you're automatically never an inventor. It's all very fact specific. As I said, if you really have an innovative way of interacting with that AI system, that could be a significant contribution and potentially make you an inventor. That's what guiding principle two is talking about.

So principle two is focused more on the front end of the AI; guiding principle three is more focused on the back end of the AI, working with the AI outputs. I talked a little bit about this before: reducing an invention to practice alone is not a significant contribution that rises to the level of inventorship. As I talked about, just being the person who builds the invention according to someone else's conceived invention, building it exactly as they're told, doesn't make you an inventor. The same thing happens here with an AI system: if you have a person just sitting on the back end of an AI system, taking the output and building it exactly according to how the AI system says to build it, that doesn't make you an inventor, if that's all you're doing.

And similarly, as I noted in that second-to-last sub-bullet there, we have the natural person who recognizes and appreciates the invention. We got some comments on this when we were developing the guidance, so I thought I would mention it here today. Recognition and appreciation goes back to that simultaneous conception and reduction to practice material I talked about earlier, when we're talking about unpredictable technologies in the interference context. There is this idea of recognition and appreciation: the courts have said conception is not complete until there's recognition and appreciation, that is, a person who understands what the invention is, understands the utility of it and what it can do. Only then, when someone appreciates it and understands what they have, is conception complete. In that context, that's what recognition and appreciation is.
And I think we got some comments in saying, well, that's all you need to be an inventor: the person who recognizes and appreciates the output of an AI, that makes them the inventor. We don't believe that falls within the case law as a significant contribution. So if all you're doing is sitting on the back end of the AI system, looking at the output and going, "yeah, that's useful, I could do something with that," we don't feel that's a significant enough contribution. But as I said before, these are all case by case, and just because you're working on the back end of the AI doesn't mean you can never be an inventor. If you do something with that output, you start building it, you start designing it, and you realize, oh wait, this isn't going to work, but if I change these components out, or if I modify it this way, or I do this or that, now we're starting to talk about you potentially making a significant contribution. So if you're just plainly taking the output of the AI and building it exactly according to what it says, you're likely not going to be an inventor, but the more you contribute to that output and do with that output, the more likely you are to be an inventor.

Okay, so let's move on to guiding principle four. This has to do with building blocks and those who develop the building blocks on which the final invention is developed. There are a couple of cases out there, the Dana-Farber case being one that comes to mind, where persons contribute to something along the way, some kind of intermediary step in the process toward the final invention. Maybe they didn't create the final thing that's being patented, but they created something along the way that was essential or necessary in order to get to the final invention. That's from the Dana-Farber case. We can see this playing out sometimes in the AI context when we're talking about the programmers and developers of the AI, and when they could possibly be named as inventors. In some situations, you could see that developing an AI system, whether it's training it, programming it, or what have you, could be an essential building block to the output of it. For example, if the contributions in the programming or development were made to solve some particular problem: I have this particular problem I want to solve, so I'm going to choose some very specific models for a particular reason, I'm going to train on a very specific curated set of data, or I'll program it to operate in a particular way. Those contributions to solve some particular problem could potentially make those people inventors; they could potentially be a significant contribution to the final result, the output. This isn't always the case, and as I said before, it is case by case, but we could see it as a possibility that if you're trying to solve some particular problem, and you're building that AI in a particular way to solve that problem, that could be a significant contribution to the end result.

And the last one is maintaining intellectual domination over the AI system.
This comes from some of the cases back in the day holding that the owners of a firm don't necessarily become inventors of the inventions developed by the persons who work for that firm. As I said before, just because a person owns the Acme Company, and Acme engineers design something, that does not make the owner an inventor. It's all about those who conceive the invention. Building off of that, the persons who happen to own the AI system, or oversee the AI system in some way, are not necessarily inventors. They have to make a significant contribution to the invention itself. Just the fact that they happen to own the AI system that happens to come up with an idea doesn't make them an inventor. So those are the guiding principles by which,

Speaker 1 35:58
you know, we see this working out in certain situations. The takeaway from this is that AI-assisted inventions aren't automatically unpatentable, aren't categorically unpatentable. They are patentable as long as one person has made a significant contribution to the invention. The significant contribution is decided based on those Pannu factors that we talked about a little bit, and these guiding principles are our thoughts here at the PTO as to how those Pannu factors may apply in certain AI-assisted invention situations.

So let's play this out in patent practice: how this applies here at the PTO. Not necessarily changes, but just seeing how this guidance plays out here in examination. This inventorship guidance applies across the board to utility, design, and plant applications; we're not making any distinction there. Naming the inventors remains the same as you've always done it: they're named on the application data sheet or the oath or declaration, wherever it happens to be. An important point here is that we still presume that the people you name are the inventors of the application. So we're not changing anything here at the PTO. When an application goes to an examiner and the examiner sees who the listed inventors are, there's a presumption that those inventors are the actual inventors. The examiners generally don't investigate that; we generally don't have the tools to investigate that to begin with. Generally, they're going to assume that those who are listed as the inventors are the actual inventors of the application.

Now, in some small set of scenarios, there may be something in the record or the prior art search that casts doubt on who the named inventors are, in which case the examiner could make a proper rejection, if they have enough evidence to back that up and support a prima facie case for the rejection. They could also make a Rule 105 requirement, requiring more information as to who the inventors are. But we see that happening in a very small set of applications, because we have this presumption here at the PTO. So even though we provide this guidance, most examiners are not going to be investigating this or sending out 105 requirements to applicants.
We only see that happening in a very, very small set of applications. Inventor's oaths and declarations: for those inventors that you do name, those who made a significant contribution and are listed on the application data sheet, you have to provide oaths and declarations from those inventors, the same as you always have before. We don't want oaths and declarations from AI systems or any other non-person. It's pretty self-explanatory, but I have to say it anyway.

Correction of inventorship is still handled the same way it's always been done; it's pretty straightforward. If you're dealing with an application, it's handled under Rule 48, 37 CFR 1.48: you just need a new application data sheet with all your cross-outs and underlines, and there's a fee associated with it. But if we're dealing with a patent, it's a little more difficult. Under Rule 324, you essentially need everyone's agreement, the assignees and the inventors, to weigh in together and agree on who the inventors are. So correction of inventorship is still handled the way it was before. Though do note: if you notice a problem with the inventorship, namely that there is no person who has made a significant contribution to the claimed invention, then you're under an obligation to cancel or amend that claim. You can't keep moving forward with a claim that you know does not have a person who has made a significant contribution. So if you ever come across that scenario, where you're going back through, or you made an amendment because of an art rejection, or whatever it happens to be, and you realize, oh wait, there isn't a person anymore who actually made a significant contribution, you've got to amend or cancel that claim. And as always, there's a continuing duty, as you amend claims and claims change throughout prosecution, to keep the inventorship up to date.

Benefit and priority claims: we note this because there potentially could be foreign applications that permit the naming of entities that are not persons as inventors. You potentially can still get your foreign priority claim to them; just note that you need at least one inventor in common between the two. We've always read 35 U.S.C. 119 as requiring at least one inventor in common between the foreign priority application and the US application. So if you have a foreign application that names both a person and an AI system, you can just name the person here in the US, assuming they made a significant contribution, and then you can get your foreign priority claim. If you have a foreign application that only names an AI system, you're not going to be able to get that priority claim here in the US, because you're not going to be able to name the AI system as an inventor, and you're not going to have an inventor in common between the two.

Lastly, assignments: we don't want you recording assignments from AI systems. Setting aside the obvious problems of who can be a party to a contract, AI systems aren't inventors and therefore have no rights to assign anyway.

Okay, and we do have some examples playing out this inventorship guidance. We have one for a transaxle for a remote control (RC) car, which is a much simpler, more straightforward example, and then we have a second one, which is developing a therapeutic compound for treating cancer.
That one is a lot more of a difficult read, I'd say. We did it because we wanted to hit one in the pharma industry, given how much AI is being used there, but it's a little bit more difficult of a read, as we had certain teaching points we wanted to make, and that made the facts a little more complicated. If you're in the life sciences or pharma area, it's a great one for you; for the rest of you, I would just stick to the RC car, which is a lot more straightforward.

Okay, I want to quickly go through the practitioner use of AI guidance that we put out the other day. I'll talk more about this later, but we're going to have a webinar with a panel from the USPTO answering questions from the public on this, probably in the May timeframe, so keep your eyes out for that. You'll have a chance there to hear more about this in detail. I'd say the biggest takeaway from this notice is that you are free to use AI any way you like; just be careful with how you use it. That's essentially what the guidance is really getting at. We're not saying any kind of AI tools are prohibited, or that specific AI tools are prohibited. You can use any AI tools you like in developing your applications, whether it's drafting patent applications, whether it's searching, whatever you need to do with AI. That's fine; just be careful with how you're doing it, and make sure you're not violating any rules of professional conduct.

This guidance doesn't introduce any new rules or duties. The whole point of this guidance was not to provide any new rules or put new duties on applicants. Essentially, these are what our existing rules are, and this is how we see them applying in this new era that we're dealing with, where AI is permeating everything. We go through different scenarios where we see AI could be used, and hash out where we see potential pitfalls and where you should be extra careful with how you do things.

One of those things is AI document drafting. This is using some kind of generative AI system to draft a patent application, draft claims, draft responses to Office actions, draft briefs, whatever it happens to be. We lean heavily on the signature requirement, as we do throughout the notice: essentially, you can use these tools to draft these things, but a person needs to sign the document, and the person who is signing it is making certifications under 37 CFR 11.18. Those certifications essentially say: I have reviewed this document; all the statements in here I believe are true; all the facts in there I believe are true; and all my legal arguments, all my cases, whatever it happens to be, are supported by and warranted by law. I think we've all seen a lot of press about hallucinations in court filings, creating cases that don't exist, etc. It's the duty of the person who is signing it to have made a reasonable inquiry, reasonable under the circumstances, to make sure everything in there is correct: all the facts are true, the cases in there actually exist, and they actually support what you're trying to say. Just like you would with a first-year associate at your firm, right? You wouldn't just blindly submit whatever they created; you would review it, make sure it makes sense, and make sure it's actually helpful for the client to submit it. So it's the same thing here.
If you're using the AI to do that, you need to check it and make sure it's correct.

Speaker 1 44:42
Duty of disclosure. We do talk about the duty of disclosure a bit in the notice, and this is because we don't have a specific duty on applicants or practitioners to disclose to the office the use of AI. Whether you're using AI in the creation of your invention, using AI in drafting the patent application, or using AI in any other capacity, there's no requirement that you have to tell the USPTO that. But there potentially could be a scenario where you may have to give us that information, and that's if the information is material. Under the duty of candor and good faith, we have this duty of disclosure: any information that's material to patentability, that would essentially impact the decision on patentability, must be submitted to the office. I think we heard in some of our listening sessions that when people have used AI to draft patent claims or patent applications, they'll put all the information into the AI, the AI will spit out an output, and part of that output will include embodiments that they hadn't even thought of. In some instances, people may want to file a patent application or claims to those embodiments that no one had thought of before. But if you do that and submit it to the office, at that point you may have to tell the PTO that it was drafted by an AI, because at that point it may be material: if it can be shown that that claim did not have a significant contribution from a person, and you have that information because the AI actually came up with it, then you need to submit that to the PTO. So that's why we talk about the duty of disclosure there.

And note that the duty of disclosure can't be transferred to someone else. Rule 56(c) outlines the parties who have a duty of disclosure, and you can't give that duty to someone else. You can't give your paralegal your duty of disclosure and have them take care of everything; the responsibility lies with those who are outlined in Rule 56(c). Same thing with an AI system: you can't give the AI system your duty of disclosure. There are tools today that can track all your applications across your foreign filing offices and the USPTO, pull them all together, and create an IDS for you. But if that system misses something that is material, that duty of disclosure ultimately remains with the persons under Rule 56(c). You can't just say, well, the AI missed it, it's the AI's fault. It's ultimately the person's responsibility to make sure that duty is complied with.

All right, filing documents. There potentially could be systems or computational tools or AI systems, whatever have you, that can help you file things with the office and actually interact with our systems here at the PTO. But remember, a person needs to sign, and most documents at the USPTO need to be signed, right? And the person signing them has to actually put their signature into these papers. There was a notice a couple of weeks ago that expanded the signature requirements, the types of electronic signatures that we'll accept now, and there's a discussion of this in there as well: a person actually has to insert their signature in these documents. You can't rely on systems to automatically do it for you.
So you can't program these systems to automatically sign everything for you; a person has to actually put their signature in. USPTO.gov accounts are in an individual capacity: each person has to have their own uspto.gov account. You can sponsor other persons to get uspto.gov accounts, but every person has to have their own. Same thing here: you can't give your uspto.gov account to an AI system and have it just go at it and file whatever it wants at the USPTO. Essentially, if anything happens under your account, you're the person who's ultimately responsible. So if you're using some system to automatically file things here at the PTO and it does anything improper, for example, it accesses information you don't have a right to access, files improper information, files things in the wrong applications, whatever it is, you're ultimately responsible, because it's your account. You can't just say, well, the AI made a mistake. That's fine and dandy, but the responsibility is ultimately on you. We also talk in there about terms of use, such as using AI systems to scrape data off our websites and things like that. Ultimately, if these AI systems or computational tools, whatever they happen to be, violate any of our terms of use or our legal frameworks, the persons who oversee these AIs are ultimately responsible for whatever they do.

The last one in the practitioner notice, confidentiality, is more of a cautionary tale: if you're going to use these tools, make sure you know what the tools are doing, who's in charge of them, and what they're doing with the information. There are a lot of generative AI tools out there that are open to the public, and if you plug your client's information in there, that could be a problem for you; you could be violating your client's confidentiality. You don't know what that AI system is doing with the information. It could be taking that information and retraining its models on it. It could be spitting that information back out to other persons. It could be taking that information and selling it to third parties. We don't know what they're doing with it, so you have to be very careful that you understand whatever tools you're using and are confident that they're not doing things with the data that would breach your confidentiality requirements. And the same thing, as I say in the last bullet, with national security issues, export control, and foreign filing license issues: these all come into play too, because you don't know where the servers sit for some of these things. A lot of these servers could be out of the country; they could be run by foreign nationals, whatever it happens to be. You could be in violation of some of these other requirements as well. So be very careful with the AI systems you're choosing, and hopefully you're not doing anything that might implicate some of these problems.

Okay, real quick before, I guess, some of the Q&A, let me talk about the next steps we have going on here at the PTO. I just talked about two of our big deliverables that we just came out with. We're going to continue with stakeholder engagement; as I said before, we find it extremely valuable, and we really appreciate your input on these things. And as I also said before, we're going to have some sort of webinar, probably in the May timeframe, on that practitioner notice.
We'll have representatives, probably from different business units, weighing in on questions from the public with regard to that, whether it's questions specific to AI or broader questions about the professional responsibility rules, and how those business units handle things here at the PTO. We're also looking at some potential engagement events on name, image and likeness, diving into that a little bit further, as I talked about before. We're also going to think, from the patent policy perspective, about AI-generated information and the extent to which it can be used as prior art. Is there any implication under 102 that the evidence used to show something is not novel needs to be made by a person versus an AI system? And then all the issues that surround that, and all the issues surrounding 103, right? 103 just seems like a bear to tackle, because there are so many issues in there, and a lot of them relate to the person of ordinary skill in the art standard. How is that standard impacted by having a very powerful tool like AI at their disposal? And how does that affect secondary considerations and all the different things that revolve around showing something is or is not obvious? Those are the kinds of things we'll probably be engaging on next with the public, so please stay tuned. Probably within the next couple of months, we'll be talking about those types of things in future engagement events.

We have two more upcoming executive order deliverables. I talked before about the copyright one; we're still in the information gathering stage of that. The timing for that is tied to the US Copyright Office report; I think we have 180 days from that report. So the timing of that is probably some ways off. I'm not sure how close they are to issuing that report, but whenever that is, we'll have 180 days on top of it. The second EO deliverable is subject matter eligibility. This is guidance on the intersection of AI and IP, specifically 101, and we were given 270 days on that, which takes us to the end of July; I believe it's July 27. So we'll have something out this summer on 101 and AI. We've got a pretty good feel for what that's going to be, and I don't expect it to be anything groundbreaking. It's not going to be a big change to the eligibility guidance that we came out with in 2019. It will likely be more of a supplement, just more clarifications to some issues involving AI and how our guidance would apply to those kinds of situations.

All right, and last but not least, our AI webpage. All the stuff I've talked about today, all the materials that we have, all our previous engagement events and videos of those engagement events are up on our artificial intelligence webpage, uspto.gov/ai. If you don't want to type that all in, you can just go to uspto.gov and type "AI" in the search bar, and it'll be the first result. We have all our information up there.

Okay, and with that, I am going to stop sharing, and I am going to see what kind of questions we've got and hopefully answer them. Okay, the chat. If you have any questions, please feel free to send them my way. I'm happy to answer any and all questions that you have.
The first one: could you share insights on the role of data provision in AI inventions and its implications for determining inventorship? Specifically, do contributors of data automatically qualify as inventors?

Speaker 1 54:32 I don't see people who contribute data as automatically qualifying as inventors. It all comes down to a significant contribution to whatever is created. I could see an argument being made about those who contribute data: when an AI is trained, it pulls in a whole bunch of data, and based on that training data it produces an output. I can see the argument that everyone who contributed to training that AI made a contribution, but I would tie this back to the essential building block question: was that data an essential building block for the output of the AI? I think that would be a very, very case-specific inquiry, and I wouldn't think it would be super likely. But I could see a scenario where the data taken in was so important to the final output, and the final output is so clearly close to that input, that the argument could be made. That might also affect the novelty or non-obviousness of the invention, but from a pure inventorship standpoint, I could see the argument that the data was necessary to the final product of the AI and that the person who provided it could potentially be an inventor. I see that scenario as not very likely, but it's a good, interesting question. I'm curious whether that will ever actually happen: the output is good enough that someone files a patent application on it, yet it's so close to that specific input that the data provider could potentially be named as an inventor. That would be an interesting question; I don't know that we've seen or heard of that before.

All right, are there any other questions? Okay: is the USPTO able to host a data resource analogous to the sequence listing resource, to the degree that data is an important part of the spec, possibly supporting enablement? Does the USPTO have a policy on accepting synthetic data sets? This is something we have chatted about. I wouldn't say we've had a deep discussion on it, but it is something we've talked about, particularly from the 112 perspective. Especially when you're talking about deep learning models, having more information on what the models are, how they behave, and the data used to train them could, in some instances, be important to enabling an invention or providing a written description of it. We do accept that information today; you can provide it to the office. We don't have a specific resource for it, but we have talked about potentially creating one. Yes, Daniel, I can see the question; I'll respond to it in a second. So we don't have a specific resource for it, but that data could be important, and we would accept it if you submitted it. I think 112 is one of those issues we're looking into more, including the extent to which that type of repository would be necessary, because a repository like that would be pretty pricey to build and maintain. Whether it's needed, and whether it would be a good investment on the USPTO's part, is still to be determined.
That's what we're still trying to figure out: to what extent applicants need it, and to what extent it would benefit stakeholders who are trying to determine how an invention was created or how it works. It's one of those things we're still weighing here at the PTO, but it is something we've chatted about, and it's an interesting idea, so thank you for raising it. We've got three more minutes; are there any more questions?

Speaker 3 58:37 Matthew, it looks like we've got about seven questions coming in in the question and answer box. Do you see those, or should I read them?

Speaker 1 58:48 Yeah, I was looking at the chat and not the Q&A; sorry about that. Okay, all right. Are there any quantitative metrics to measure an inventor's contribution? So the question is whether there are any quantitative metrics to measure an inventor's contribution. Not specifically. We're talking about the law here, and this is joint inventorship law; there are no specific metrics. It's not as if you have to make 51% of the contribution, or hit some ratio of your contribution relative to the AI system's. It's not quantitative like that; it's very case by case. As we get more cases, whether here at the PTO or down the road in litigation, I think we'll get a better feel for where that line might be. But right now it's very much case by case.

Okay, under the new factors, do people who contribute data toward the training of an AI count as inventors, or do they have to be involved in the method of training itself? I talked about that a little bit already, so I'll move on to the next one. Based on GPT-4, is it possible that the company responsible for making the AI could have an inventorship claim to an invention supported by AI? I touched on that before as well. The company itself would not be an inventor; an inventor has to be a person. There are cases, even before this, where companies tried to be named as inventors, and the Federal Circuit shot that down. So you have to be a person. But the people who developed the AI could potentially be inventors, not the company itself.

Regarding principle three, what is an example of determining whether a person moved the invention beyond reduction to practice after the AI has generated something? I would point you to the examples that we have. Both of them, the RC car example as well as the pharma example, show taking the output of an AI system, making changes to it, and creating something new, so that the person can then be an inventor; they took it beyond the reduction to practice stage to the actual invention stage. So I refer you to those examples on our website.

Sounds like the EO deliverable for copyright could be quite a way off; in the meantime, the USPTO is issuing practical guidance on copyright disposition and registration eligibility to filers, which seems surprising. What is the likelihood those determinations will be open for reexamination when new guidance is issued? I don't think a decision has been made yet as to whether we're going to ask for comments on the copyright deliverable. I think that's still a way off, and there are questions about a new administration potentially coming in and what it will want that deliverable to look like. So I would say I don't know at this time; we don't have an answer to that one. Next one.
Is there a concern that generalized access to AI will compromise the bar for the expert in the art, or somehow shift the bar for 103? Yes, I would say that is definitely a concern we've heard before, and it's one of the issues I mentioned that our patent policy team is starting to dig into: 103 and the person of ordinary skill in the art standard. How does that standard change when that person has a very powerful tool to use? Does that make more things obvious now? That's something we've heard a lot of concern about from stakeholders, something we're concerned about too, and something we're going to be looking at. There's one last question; I'll take that, and then we can wrap up. Is there engagement with China, and how is coordinating approaches to AI working, given Chinese regulations? There is engagement with China, I'd say, through the IP5. The IP5 as a whole is having discussions on AI in lots of different capacities. Our Under Secretary is in China right now; I don't know whether she's specifically talking about AI work with them or not, but I would say we're definitely engaging in the IP5 capacity and potentially more one on one as we move forward. I don't have anything specific to report on what's come out of those discussions so far. So that's as much as we have. All right, I think that's it.

Speaker 3 1:03:03 Thank you so much, Matthew. On behalf of Autumn, I would like to thank you for the informative presentation today, and thank you again to our sponsor, Marshall Gerstein. A recording of this webinar will be available for viewing in the Autumn Learning Center within a week of this event and is included in your registration. As a reminder, please complete the webinar evaluation following this session; it will open when you sign off and will also be sent in an email tomorrow afternoon. Please join us for our May webinars, and thank you for being a part of today's presentation. Thank you again, Matthew. Have a great day, everyone.

Unknown Speaker 1:03:39 Thanks, everybody. Have a good one. Bye, bye.

Transcribed by https://otter.ai