Unknown Speaker 0:06
Declan got dropped.

Unknown Speaker 0:08
I think we're gonna move.

Speaker 1 0:27
Hello and welcome to today's AUTM webinar, The AI-Enabled TTO. My name is Davina Young, I'm a member of AUTM's education team and today's staff host. All lines have been muted to ensure high quality audio, and today's session is being recorded. If you have a question for the presenters, we encourage you to use the Q&A feature on your Zoom toolbar. Should you need closed captioning during today's session, the Zoom live transcript feature is turned on and available on your toolbar. Before we begin, I would like to acknowledge and thank AUTM's professional development sponsor, Marshall Gerstein; we appreciate your ongoing support. I would now like to welcome today's presenters. As Vice President of Technology Opportunities and Ventures, Marc Sedam is responsible for commercializing the intellectual assets across the three campuses of New York University, including its number two ranked medical school, for one of the top performing tech transfer offices in the United States. John Kerry is the contracts compliance manager at NYU Langone Health, where he leads the development and refinement of AI prompt engineering, enhancing user interaction and operational efficiency within TOV and NYU LH. Declan Weldon joined the University of Glasgow in November 2021, following four years at Trinity College Dublin, where he led the combined functions of industry engagement, technology transfer, campus companies, and academic consultancy. Thank you, Marc, John and Declan, we're excited to learn from you today.

Speaker 2 2:30
All right. Welcome, everybody. So we're back; we were asked by AUTM to give what is effectively a reprise of the presentation we gave at the annual meeting on the use of AI in the operations of tech transfer. Sadhana may be on the call — she was our moderator for that piece. So we're going to go through this in three parts. I'll give a little bit of framing of the why and the what. John will talk — John did all the work — so John will talk about what we've actually done, how we've saved a bunch of money and created some really good opportunities for us. And then Declan will follow with how we transferred some of these ideas to the University of Glasgow; John was effectively the transfer vehicle to the University of Glasgow. John, there's a comment that you're in Presenter View and not actually showing the slides, so you're going to have to flip how the presentation is displaying. So we worked with Declan at the University of Glasgow to bring some of these ideas into an institution from the jump. Partly what we wanted to show everybody is that it's one thing to have a place like NYU say, well, we put a bunch of resources into this and we were able to create some really interesting opportunities. But that's not really helpful unless you can transfer that to every university — there you go, that's good — to every university or tech transfer office in the world. And I think we can do that. So next slide. This is a quote from Sam Altman, because the first question everybody wants to know is, am I going to lose my job? And the answer is no. There's a good saying that's been used in a couple of different disciplines: AI is not going to replace people, but people who use AI are going to replace people.
So you know, it really is something where maybe over time jobs will shift and morph, and you might not have a person called a financial analyst doing the same job a financial analyst does today, but they'll be using these types of tools to make their job better. And in fact, that was one of the starting points for us at NYU: looking at places where there was a lot of repetitive work, a lot of detail work, and seeing how we could automate that. That was kind of the jumping-off point for where we were.

Excellent. I think it's helpful — I know you can't sneeze and not hear the word AI show up somewhere — but it's important to level set and have an understanding of what this work is. So generative AI — ChatGPT, Bard, Gemini, all these kinds of programs — are roughly described as generative AI. How AI effectively works is you have a pile of data. This training data can be text, images, or multimodal, meaning a combination of text and images. And the data is built into a foundation model. The model itself effectively says: I've looked at tons of data, and now I think I can predict, or accurately try to guess, what happens next with a set of inputs. Those foundation models are looking at patterns and ranges, and usually you'll hear them referred to as being built for a specific task. NYU Langone Health actually has a foundation model specifically for medicine and health care that we're looking to commercialize. GPT is one version of a foundation model. But that's what the term means. For us in tech transfer, I think this is a moment that we'll remember. I've been doing this kind of work for thirty-something years, and the last time I saw a technology that could so fundamentally transform what we do was, you know, the internet and email. I mean, it is that transformative. So you should look at this not as, well, I'm scared, and how am I going to avoid this? I think everybody should really be looking at it and saying, how do I adopt some version of these tools as quickly as possible? Because the savings to the office and the outputs are worth so much more than the cost and the worry that it might do something you don't like. I think you just have to be in a position to start going. Next slide.

You can zip through all of these, John. So basically, if you don't know how these things work, effectively all a model does is give you the prediction of the next most likely thing. If it's a text-based model — and most of what we're going to talk about are text-based models — it's really just saying: given all the words that came before, what is the next most likely word? So in the sentence at the bottom, "the best thing about AI is its ability to…", a model might kick out any of these five words as the next likely word, but "learn" is the one GPT scores as most likely to come next. And that's all it's doing. So if we think about our contracts, our finances, our reports, you can see very quickly how, if you train the model specifically for tech transfer, for the things that we do, you could probably get a little more accurate about what that next predicted word or thing is going to be. So skip to the next one.
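To make that next-word idea concrete, here is a toy sketch in Python. It is purely illustrative — the probability table is made up rather than coming from a real model — but it shows what "pick the most likely next word" amounts to:

```python
# Toy illustration only: a hand-written probability table standing in for a real model.
# Generative text models repeatedly do one thing: score candidate next words given the
# words so far, and emit the most likely one.
candidate_next_words = {
    # hypothetical scores after "The best thing about AI is its ability to ..."
    "learn": 0.42,
    "predict": 0.21,
    "create": 0.14,
    "understand": 0.13,
    "automate": 0.10,
}

def most_likely_next_word(scores: dict[str, float]) -> str:
    """Pick the highest-scoring candidate -- which is all 'generation' really is, one word at a time."""
    return max(scores, key=scores.get)

print(most_likely_next_word(candidate_next_words))  # -> "learn"
```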
And this is what everybody is worried about: hallucinations. In general, hallucinations are not a good thing; in AI, hallucinations are possibly even worse. The biggest question we all get is, well, how do you know the outputs are accurate? How do we know this isn't going to make something up that we're going to have to deal with? The most important thing to remember is that with the models we are going to talk about, you can restrict the inputs of the model, and you can restrict the outputs of the model. What I mean by that is, if you said, "here's a license agreement, tell me what the outputs of this license agreement are," but you didn't give it a format in which you wanted to see that, and you were using, say, a public version of one of these foundation models, it would go and effectively look at the entirety of the internet and the billions of pages of information used to train the model, and give you an answer. That's not actually terribly helpful for us, right? So what we've done — and what we're going to show you is how we do this — is we effectively restrict the inputs to a document or a series of documents, and we restrict the output that we want to receive through the prompt. And by having a limited input that it can look at and a limited output to come out of it, you can actually create a process where the hallucinations are — I don't want to say zero, because the models aren't always perfect — but the outputs are very, very accurate and things that you can rely on. Next slide.

And then, I touched on this briefly, but the most important thing you have to take away from this presentation is: do not do this work in a public forum. Do not go to GPT-4, do not go to Gemini, and put in your confidential license agreements. Because what you're doing is effectively giving that information to OpenAI, or Google, or whomever, to further train the model, and you don't want to do that. That's how people at Samsung got fired: they were using the open models, and that information — which in that instance was proprietary Samsung information, and it wound up in the New York Times — was used to train the model, and then people could actually look at it. So it's really critical that if you're going to use this in your operations, you have a private instance of whatever it is, that it's passed your internal security tests, and that you're comfortable you can share proprietary information to get the outputs that you want. That said, you can use the public LLMs to do things like get familiar with AI, or take your public license template and play around with it and see what outputs you can get. Anything you have that's public, feel free to use a public LLM to play around with. But if you want to get something actionable out of it, with all the contracts that we have in our agreements, or IP, or whatever, you don't want any of that to go into the public. The disadvantage of the private route is that it's usually going to be an enterprise approach: your institution is going to have to get involved, and you're going to have to convince them that they should get involved. Part of the reason we're doing this presentation is to give you some of the ammunition to go to your institution's leadership and say, please let the tech transfer office do this — we can give you an ROI that's going to be pretty quick. So next slide.
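As a rough illustration of the restrict-the-inputs, restrict-the-outputs approach described above, here is a minimal Python sketch against a private Azure OpenAI deployment. The endpoint, key, deployment name, file name, and field list are placeholders, not NYU's actual setup or prompts:

```python
# Minimal sketch of "restrict the inputs, restrict the outputs" against a PRIVATE
# Azure OpenAI deployment. Endpoint, key, deployment name, file name, and the field
# list are placeholders -- substitute your institution's own, and never send
# confidential agreements to a public model.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-private-instance.openai.azure.com",
    api_key="YOUR-KEY",
    api_version="2024-02-01",
)

agreement_text = open("license_agreement.txt", encoding="utf-8").read()  # the ONLY source allowed

prompt = (
    "Using ONLY the agreement text below, fill in this exact format:\n"
    "Licensee: ...\nEffective date: ...\nRoyalty rate: ...\nMinimum annual royalty: ...\n"
    "If a field is not stated in the agreement, write NOT FOUND. Do not guess.\n\n"
    f"AGREEMENT TEXT:\n{agreement_text}"
)

response = client.chat.completions.create(
    model="your-gpt4-deployment",   # the deployment name in your Azure tenant
    temperature=0,                  # as deterministic as possible, for repeatable outputs
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```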
And then the last thing before I hand it over to John: people ask how and why did we start, and what was the motivation for us?

Speaker 2 11:56
The real flicker of insight was that NYU Langone Health was working with Microsoft on using these types of tools for the provision of healthcare — to look at things like radiology images, to look at some of our data on how we admit or readmit people to the hospital, or the likelihood of what the outcome for a patient is going to look like in 30 days. So we had gotten comfortable with using these LLMs, and we were doing it in a private instance, which was secure from our IT perspective for PII and PHI. When I heard that we were playing around with this stuff, I went and talked to the CIO and said — this was, you know, 14 months ago — explain how we're able to do this without leaking patient information. And he said, well, it's clean, it's a private instance, nothing is shared, we can look at patient data. And I said, well, a patient data breach is much more significant than a breach of financial data or tech transfer data. So if it's good enough for patient data, it's good enough for us. That was basically it, and then they let us play around. The idea I started with was — having worked at a small tech transfer office at the University of New Hampshire before NYU, and now being in a big office at NYU — the issues are kind of the same. In the small office, you don't have a lot of resources, it's really hard to scale, you have tons of administration, and you can never really get to doing the outreach that you want to do. In a large office, you have similar problems, just at a different scale. We're working at a really high level of activity, but the administrative work is still overwhelming; there's a lot of back office work, a lot of finance work, that is maybe a bigger part of the operations of a large office than a small office. And in the end, we still don't have time to work with our faculty to find where the opportunities are. So that was where I started, and I said, let's look at places where we can reduce the amount of effort and energy and FTE time we're spending on some of this administrative work — collections, back office operations — and see how much of it we can automate. So we started looking at things that were repetitive, that were under-resourced, that took a lot of time, and thought, let's start there and see if we can come up with a way to make this work for us. And John, I think that's the last of mine, and you're next, right? Yep. All right.

Speaker 3 14:19
So as we look at this, I'm going to try to develop a little bit of a roadmap for everybody. One thing I like to say before I start is a disclaimer: I don't have a technical background, I don't have an engineering degree. You don't need that to be able to work with AI. What you really need is people who understand their jobs very well, who know how to get from point A to point B, and who are able to communicate that in writing. That's largely what prompt development is built around. So to begin with, I like to show this slide because it really does capture exactly what happened for NYU and for our office in the early stages of development. When we first started, it was a very, very slow process.
So we were actually one of the first teams at NYU outside of the medical side to get access to AI. That looked like this: largely, we got exploratory access granted for about 40 people, and that was the beginning phase. Mark at the time told the team, look, I want everybody to get access to this new AI — it was GPT through Azure — and I want everybody to start to think about how they might automate some tasks in their position, or just play around with it, to the extent that you're understanding what AI is, or understanding how this might benefit your role in the future. And we all got it, we all kind of rushed into it, and we were immediately hit with some roadblocks. That was largely just a learning curve. In the first week, nothing really happened: we had access, but people were still hesitant to play with it. Week two came and went and no use cases were really presented. And then by week four — so about a month in — we had two use cases. Those are actually use cases we'll demonstrate later on; they had to do with contracting, and you'll see that in a little bit. But fast forward from week four to today, which I think has been about six months, and NYU across the board has well north of 200 use cases at this point. I want to highlight this because, if your concern is that you won't have enough to do with AI: once you start using it, you will find work to do with it. The use cases start to uncover and present themselves as you work more and more with it.

So what did our beginnings look like? The beginning was very restricted. Largely, we had just a new chat playground; we didn't know exactly what we could do with it. We had a meeting with MCIT and tried to figure out: okay, we have this model, we're able to add language into it, we're able to maybe add some agreements into it — what will these outputs look like? Will they actually save us time? Will it take more time to go through the entire AI process than to continue doing the work the way we've always done it? These were a lot of the floating questions. This is the beginning, and it's difficult, when you have processes already in place that have been in place for years, to start to think about new ways you might actually work. That was probably the biggest learning curve, or the biggest hurdle to get over, in the beginning. So the very, very first use of the Azure chat playground was just about building up prompts — very plain language prompts. These were short prompts: "define this agreement," "show me what's in this agreement." And the prompts led to summaries of contracts that largely weren't incorrect, but they weren't really helpful for what we were doing at the time. We needed much more definitive responses from the model than we were getting. So fast forward.

Speaker 3 18:04
I'm going to break this out into three stages. Stage one is understanding the basic concept of what AI is and what you will be able to do with it. Because when you hear AI, a lot of the time you'll jump the gun a little bit; you'll say, oh, it can automate this entire process, or it can automate this entire job.
It really can't, not up to this point. It's very good at doing tasks, like Mark described before, but it's not great at doing full jobs. So you have to understand what type of output you want to get and what that's going to look like. The beginning is about actually understanding the output, looking for nuanced language, setting the stage for the type of AI applications you might want to build later, and just getting that first use case in. Once you have that first use case, you'll start to expand. After that, stage two, in my opinion, is where a lot of the learning comes to fruition and you're actually able to start to put pen to paper. Stage two looks more like: you have prompts, but maybe they're not giving you outputs that are exactly what you want. You're getting decent outputs, but you need to specify more; you need to really hone in on exactly what type of task you're doing and what the output should look like. So you go back and you do prompt engineering, and this is the really big stage of everything. As you define prompts and redefine them and redefine them, you'll notice that the outputs change: you're getting a bulleted list now, maybe you're getting more financial information than you were before. There are a couple of different ways to do this, and one of the suggestions I usually have here is: take one of the outputs that you would usually produce manually and compare it to the output you get from the AI model. That's what I did. I would go through agreements and review them manually first, then I would look at my output and compare it to the model's, and I rewrote my prompt basically all the way until I was able to get outputs very similar to what I was doing manually. At this point — and we're way down the line now — I'm usually able to get better outputs than something I would have drafted, in much less time. So you'll start to see the efficiencies in stage two, because you have tasks that are completely routine and you're able to do them over and over again: write me a letter, write me a notice, things like that. And you have prompts that are built, saved, built with examples. This is where you really start to see time savings come back into play. Stage three is moving beyond. What I believe stage three is, is expanding your AI use into other fields. At this point, you should have a pretty decent understanding of prompt engineering, you should have a couple of use cases that you're very comfortable with, and you're excited about the outputs and able to reproduce them over and over again. That's another thing you should focus on: when you get an output, you want to make sure it's the type of output you can replicate, because, as Mark described before, depending on the prompt you have, your answers can change. So when you run a prompt and get a great output, you're going to want to run it again right away — you're going to want to run it two, three, four times and make sure the outputs are very, very similar each time. That's how you know you have a prompt that you can actually really use, and that's what you start to use for a big use case, potentially.
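A minimal sketch of that "run it two, three, four times" check follows. Here `run_prompt` is just a stand-in for whatever call you make to your own private deployment, and the similarity threshold is an arbitrary example:

```python
# Sketch of the "run it two, three, four times" check: re-run one prompt and confirm the
# outputs stay consistent with each other and with an answer you prepared manually.
# run_prompt is a stand-in for a call to your own private deployment.
from difflib import SequenceMatcher

def run_prompt(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your private Azure/OpenAI deployment")

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a, b).ratio()

def prompt_is_stable(prompt: str, manual_answer: str, runs: int = 4, threshold: float = 0.9) -> bool:
    """True when repeated runs agree with each other and with the manually drafted answer."""
    outputs = [run_prompt(prompt) for _ in range(runs)]
    run_to_run = min(similarity(outputs[0], other) for other in outputs[1:])
    vs_manual = min(similarity(manual_answer, output) for output in outputs)
    return run_to_run >= threshold and vs_manual >= threshold
```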
So you're going to expand outward: you're going to look at drafting documents instead of just reviewing them; you're going to look at possibly including breach notices, termination letters, memos. These are all stage three tasks, where the AI is actually generating full and complete texts — a full breach notice, a full termination letter. And this is really what's going to start, again, to save tremendous amounts of time. Because what we've seen here at NYU is that you don't have to be an attorney or have a JD to be able to produce a termination letter anymore. You need to be someone who's familiar with AI and familiar with the agreements we have; if you have those two, you can actually produce most of the things that a lot of the contract managers can do — memos, terminations — and it'll really continue to compound from there.

So what does all this look like in practice, and can we give you some numbers on what we've seen? Absolutely. This is the AI implementation: an analytical review of just the compliance task. Again, when I was brought on, I wasn't brought on to use AI or anything along those lines; I was brought on to review agreements and to see whether those agreements were in compliance, or whether we would have to start sending notices, things like that. So the job I was tasked with was reviewing license agreements — that was the first project I was going to have to do. And this was initially going to take about two years, is what Mark said. He said, you're going to have to go through the last ten years of licenses — probably more like six years of licenses — I want to make sure that we've received all the payments we're supposed to receive, I want to make sure we've received all of the patent payments we're supposed to receive. When I first started, that would have taken about two years; that's what I extrapolated. It would take me a couple of days to go through ten license agreements, maybe two weeks to go through ten of them — it depended on how long the agreements were, the complexity, who I had to reach out to. Once we got AI involved, we saw these numbers shift. Up to this point, we've reviewed north of 500 license agreements, and the amount of money we've collected — that $270,000 is actually closer to about $300,000 at this point — plus about $1.3 million in payment plans. These are real payment plans that we have with individual companies that we've been able to reach out to because of this audit project. So when Mark says the money will find itself, it really will. This is one of the other big takeaways: if you're concerned about the cost of all this, the money comes back, whether it's through an audit or through time savings; you'll find ways to make the price of whatever AI system you choose to go with absolutely worth it. And that $1.16 million under negotiation has actually updated again — it's about $1.14 million now; we moved another roughly $200,000 over to payment plans as of this morning. And it goes beyond this. You see here salary and fringe savings of $212,000. This is, again, because the initial project I was signed up for was supposed to take two years, and because of AI we were able to do it in four months — and that was with the learning curve of AI, that was with trying to build prompts around it.
And once we had that, we were able to start to expedite the process incrementally. This is what the automation of agreement review looks like. I've been telling you a lot about how the AI works; this is really what it looks like when it comes to the work. It was about a two-hour process before this — and this is based on my reviews, and those of a couple of other attorneys on the team who are contract managers — to go through these agreements, review them soup to nuts, and make sure that everything is saved in our database. It would take about an hour to go through the agreement, maybe a little longer depending on what it was; we'd have to build summaries, and then we'd have to check our database to make sure that everything from our summaries was actually in the database and saved correctly. Another thing to note, which I should have said on the previous slide: not only have we actually brought money into the institution, another huge benefit of this has been taking dead data off the books. What I mean by that is we've found north of $5 million of dead receivables on our books that should never have existed in the first place, so we've been able to scrub all of that to make sure our data is clean and correct. And again, if you look at this, you'll see a couple of the processes. It's gone from a 120-minute process to a five-minute process, and you can see this in real time. The SaaS agreement review has gone from a 30-minute process down to a two-minute process. We've built prompts around looking for NYU terms that are favorable versus unfavorable; we've built prompts around our templates, so when we run something against our template, it automatically pulls the information that we do and don't have. And that's really where we're seeing a lot of these savings. We've moved a little bit beyond just contracting now, into the finance space. I will tell you that the finance components of all this are much newer. We've done data reconciliation and analysis, and it's been phenomenal — it's done a very good job. We did a 600-line recon and it was roughly a tenfold reduction in time; I would argue it was probably a bit more than that. The reason the finance uses are newer is because of new capabilities that are coming out with AI. As AI continues to develop, you'll see that OpenAI's version is able to read Excel documents now; the Azure version is just getting that capability. That's really where a lot of this is going to start to come in, and you'll see more time savings in things like creating PowerPoints, drafting PowerPoints directly from Excel documents, turning PowerPoints into Word docs, turning Word documents into PowerPoints. All of this is going to be done just by running it through either Copilot or whatever AI system you choose to implement at your universities. And you'll see it right away — you'll be able to see, this is a PowerPoint that should have taken three hours to do, and now it's taking about 10 or 15 minutes.
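As a hedged illustration of the template-comparison prompts described above, here is roughly what such a prompt might look like. The clause instructions and file names are invented for the example and are not NYU's actual prompt (those are available via the download discussed later in the webinar):

```python
# Illustrative only: roughly what a template-comparison prompt could look like.
# The clause instructions and file names are invented for the example; NYU's actual
# prompts are available via the TOV download discussed at the end of the webinar.
template_text = open("university_saas_template.txt", encoding="utf-8").read()
incoming_text = open("vendor_saas_agreement.txt", encoding="utf-8").read()

review_prompt = f"""
You are reviewing an incoming SaaS agreement against our standard university template.
Using ONLY the two documents below:
1. List each clause where the incoming agreement deviates from the template.
2. Mark each deviation FAVORABLE or UNFAVORABLE to the university and quote the language.
3. List template clauses missing entirely from the incoming agreement.
Answer as a table with columns: Clause | Status | Incoming language | Template language.

TEMPLATE:
{template_text}

INCOMING AGREEMENT:
{incoming_text}
"""
# send review_prompt to your private deployment exactly as in the earlier sketch
```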
Speaker 3 27:41
So this is a summary of the existing prompts that we use on a daily basis right now. There are a lot more prompts throughout NYU being used, including some that are eight, nine, ten pages long at this point, because what we've started to do is take the prompts that we have and tag them with metadata. So instead of training a model, we're building better and better and smarter prompts, so that when we run them against different types of agreements, or whatever you want to use them for, the model has more of an understanding of what we're actually looking for, based on the metadata we've collected. So this is, again, a summary of the existing prompts. There's letter drafting — notices to cure, license termination letters. And I'd like to point out, I worked at a firm before this, and a lot of these draft notices, the terminations, the cures — these are potentially two-hour letters, two billable hours. That could range between $1,000 and $2,000, depending on the type of firm you're using. So this is all, again, money saved, because these letters now take a couple of minutes: you have a template that's approved by legal, you're able to run it through and get everything out immediately. Then there are the broad agreement reviews — like that SaaS review we were talking about before, which compares our template to the agreements we get. And then of course there are the brief agreement summaries, the database review, and the sponsored research review. These are all prompts that we're using right now, and it expands much further beyond this across NYU as a whole.

So I also want to talk a little bit about: okay, you have Azure, or you might have GPT Enterprise — what does the future look like? This is such a dynamic space that it's continuing to grow, and it's almost difficult to keep up, because if you had asked me four months ago what the most important thing is, I would have said writing great prompts. I still largely believe that, but you'll see from this slide that the space we're moving into now is bots. We're going to be using those prompts in a very different way: instead of running a prompt that's, you know, one-dimensional, you'll actually be able to throw it into a bot and create something like an assistant, or a GPT, that's specialized in certain tasks. And that's what you see here. You'll see Copilot on the right side — this is a really quick bot I created just for presentation purposes. What I had essentially done was connect Copilot to the NYU and NYU Langone policies for travel expenses. As you go through the bot, it's actually very cool to see the dynamic change occur, where it'll say, okay, are you an NYU employee? If you say yes, it'll show you the NYU employee information; if you say you're at NYU Langone, it'll bring up the NYU Langone expense policies. And then you can have full AI conversations about the policies you find: you can ask it questions, you can submit reports to it and ask whether they adhere to the policies. These are just a couple of the features that you could quickly create — this is something that takes 15 or 20 minutes once you have some familiarity with Copilot. On the left-hand side, you'll see GPTs, and maybe a couple of you have heard already about what GPTs are, but again, they're specialized bots that you're able to plug your own information into, so they're really good at certain tasks. And what I mean by that is — this is one example — a license creator.
So at the bottom of this, you'll see a couple of things: you'll see functions, code interpreter. If you look at the assistant setup — and an assistant is just what Microsoft calls these bots; they're called assistants — behind all this, you can actually upload up to 20 documents and half a gigabyte of data of your own information. So if I want something that's going to create licenses for me, behind all of this I'll set it up with license templates, and licenses from the past that we've used and that we really like. Then when we go ahead and input the 18 or 19 different data points that we've collected, it'll draft a full and complete license for us. And you're able to take these types of bots and expand them in any direction you want — it's of course not just for licenses. They're able to output into Excel documents now, so say you have a bunch of information that you want put into an Excel document: you can create a bot just for that. Maybe you have examples of documents that you've used in the past — again, it's the same idea. You can put all that information in, teach the bot a little bit so it actually starts to learn, and move from there.

So this is one of the big projects that Mark has been really working on lately, and one of the things that we've worked on together. You'll see that this is the big aha moment: it's actually killing the invention disclosure. When I came to NYU, I had no idea — to be transparent — what an invention disclosure was. But Mark brought me up to speed very quickly. And it seems like, at least from my understanding, a lot of the issue with actually getting disclosures is the amount of time it takes for innovators to come forward, fill out the disclosure, make sure all the information is correct, and then hear back. So how does AI help with that? We've created a disclosure assistant bot that actually has our disclosure behind it. When an inventor — or innovator — comes, all they have to do is drop in whatever they have: hopefully a pre-publication manuscript, or maybe the post-publication manuscript, whatever it is, they drop it into the bot. And the bot, because of what's behind it, understands that it's supposed to start to fill out and build an invention disclosure for the innovator. So it'll go through all the information it just got, collect all of it, and start to fill out the disclosure. One of the great parts of this is that if the disclosure is incomplete, the bot will actually tell the innovator, or ask: do you have more details on this part of the disclosure? Have you considered this? And maybe the innovator hasn't done some of the market research that's necessary — that's fine, because the bot will actually be able to help with that. It won't be as good as sitting down for a few hours and really digging into the markets, but it will be really good at giving you a baseline of what type of market this should go into, maybe a market you haven't considered, giving you some statistics on that, and giving you some ideas on how to start researching that topic a little bit more. And basically, at the end of all this, we'll have a complete disclosure.
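Here is a hedged sketch of setting up that kind of document-backed assistant programmatically, assuming the OpenAI Assistants API roughly as it existed around the time of this webinar (newer SDK versions use file_search and vector stores instead, and Azure's assistant setup screen does the same thing through a UI). All file names, the model name, and the instructions are illustrative:

```python
# Sketch of a document-backed "license creator" assistant, assuming the OpenAI Assistants
# API roughly as it existed around the time of this webinar (tools=[{"type": "retrieval"}]
# plus file_ids; newer SDK versions use file_search and vector stores instead).
# File names, model name, and instructions are illustrative.
from openai import OpenAI

client = OpenAI(api_key="YOUR-KEY")  # an enterprise/private account, never a personal one for confidential documents

file_ids = []
for path in ["exclusive_license_template.docx", "past_license_example.pdf"]:
    uploaded = client.files.create(file=open(path, "rb"), purpose="assistants")
    file_ids.append(uploaded.id)

assistant = client.beta.assistants.create(
    name="License Creator",
    model="gpt-4-1106-preview",
    instructions=(
        "Draft complete license agreements using ONLY the attached templates and examples. "
        "Before drafting, ask for any of the deal data points (licensee, field of use, "
        "royalty rate, milestones, etc.) the user has not yet supplied."
    ),
    tools=[{"type": "retrieval"}],  # lets the assistant read the uploaded documents
    file_ids=file_ids,
)
print(assistant.id)
```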
And it will actually go back through Copilot — this is where it all comes full circle: Copilot will be able to email two copies, one to a business development manager at NYU and a second back to the innovator. At that point, the people at NYU will review the invention disclosure, and then they'll set up a meeting with the innovator, barring a couple of research checks they typically run first on whether or not they're going to pursue it. And that will be the first time there's human interaction — after the full disclosure is complete. We're hoping this process takes about 15 minutes total to complete, to actually get the full disclosure, based on the documents the innovator drops off.

Speaker 3 34:40
So Mark also asked me to come up with a couple of challenges and considerations, which was very difficult for me, because I'm heavily biased in favor of AI at this point — just because I've seen it in real time, I've seen what it's capable of doing. But these are the ones I've come up with. First, there is a learning curve. It will take a little while for you to get accustomed to going through the GPTs, going through the AI, and figuring out exactly what you should and shouldn't be feeding it and what type of prompt you should build out. This is the hardest part: actually building the skill of prompt engineering. But once you've established a decent prompt, you'll be able to build on it over and over again. And you can use AI to help you build these prompts — you can ask AI for prompts based on what you're trying to accomplish. That's something we didn't do in the beginning, but it's something we do often now. If we need a prompt for some type of review, we tell it exactly what we're looking for, and the AI will help spit out a prompt that we can use. Another thing is identifying strong QC datasets. This is very important. As a lot of you know, GPT, AI — it all updates, all the time. For example, there was an update in August that made GPT more socially acceptable and socially aware. Although that update had nothing to do with its ability to review agreements, the update alone changed the outputs we would get from our prompts: the prompts we used for contract review were now spitting out different answers to the same queries we ran before. So we had to go back to our QC dataset and re-engineer our prompts to make sure we were getting the same answers as before. Having a strong QC dataset is going to be vital to making sure that everything stays in line with exactly what you're trying to do. Technical know-how largely runs along with the learning curve, I think, so I won't spend too much time on that. Over-reliance is another big part of this. I fall into this trap all the time myself, where I believe that AI will be capable of doing most of the things I need it to do, and I often don't consider the fact that it might not have the capability of running a certain query or producing a certain document. I often run into issues where it can't fully produce the task, but that's something you'll get used to once you actually understand what the AI is capable of.
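A minimal sketch of the QC dataset idea just described follows: a handful of hand-verified agreements and expected facts that you can re-run after every model or prompt update. Every file name, prompt, and expected value here is illustrative, and `run_prompt` is again a stand-in for your own deployment:

```python
# Sketch of a QC dataset: agreements whose correct answers were verified by hand, re-run
# after every model or prompt update to catch silent changes in output. File names,
# prompts, and expected facts are illustrative; run_prompt is a stand-in for your deployment.
qc_dataset = [
    {"document": "license_acme_2016.txt",
     "prompt": "State the royalty rate and the minimum annual royalty, using only the agreement text.",
     "must_contain": ["3%", "$25,000"]},
    {"document": "sponsored_research_2019.txt",
     "prompt": "List the payment milestones, using only the agreement text.",
     "must_contain": ["upon execution", "final report"]},
]

def run_prompt(document_path: str, prompt: str) -> str:
    raise NotImplementedError("replace with a call to your private deployment")

def run_qc(dataset: list[dict]) -> list[str]:
    """Return the documents whose outputs no longer contain the hand-verified facts."""
    failures = []
    for case in dataset:
        output = run_prompt(case["document"], case["prompt"]).lower()
        if not all(fact.lower() in output for fact in case["must_contain"]):
            failures.append(case["document"])
    return failures
```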
Resistance to change comes more from the individuals who have procedures in place. And again, it's a difficult thing to come to people who have been doing a great job for years on a certain task and say, hey, we have a new way of doing this, and it might be a little bit quicker. But what we've seen at NYU is that once we demonstrate how much time you can save and how the machine actually works, we've done a really, really solid job of getting everybody on board. I would say that maybe 75% of our office uses AI on a daily basis now and the other 25% at least dabbles in it, and those are numbers we're very happy with. And then startup costs. Startup costs are going to vary based on whether you're a Microsoft user or not. If you're a Microsoft user and you're thinking about going into gen AI, I would suggest reaching out to whatever Microsoft contact you have and finding out whether it's a license you're going to have to purchase or a license you already own. And token costs — token costs are one of the big costs. But I'll say this: from the time we started using AI to today, token costs have dropped by a factor of ten. So it is significantly cheaper to run tokens today than it was at the beginning of our use of AI. So that's a little bit about me. I'm going to hand it off to Declan now, who's going to tell you a little bit about what they've been able to do over at the University of Glasgow.

Speaker 4 38:34
That's great. Thanks, John. If you can go to the first slide.

Speaker 4 38:43
Good morning, good afternoon, good evening, everybody. My name is Declan Weldon; I'm responsible for innovation, entrepreneurship and engagement at the University of Glasgow, in Scotland. Together with my colleague Jamie in operations, we have been working with our colleagues at NYU to start the introduction of gen AI into our office, and I'll go through some context around that, the reasons why we're looking at it, and some of the early results that we've had. So next slide, please. Just a little bit about the university itself: the University of Glasgow is a very old university — it was set up in 1451; that's a long time ago — and it is in the top 100. It is a member of the Russell Group in the UK, which means it's a research-intensive university, very well known for research and teaching, and our ambition is to become a leading university for innovation and entrepreneurship as well. We have a large community: about 10,000 staff and about 43,000 students, undergraduate and postgraduate. But the challenge, and why I joined the university just two and a half, three years ago, is that in the innovation stakes there's a large gap between our innovation performance and our reputation and expertise in research and teaching. When I got here, there was a very small core innovation team of four people — and by innovation, at that point, it was purely the IP and commercialisation team. The growth story we've been on for the last two years is that we've developed a new strategy and built out the team. We're growing the team, and we will top out at about 24 full-time equivalents; by June we'll reach about 22 people, and by the end of the year we will have brought that team up to a significant size for the university and built out its capability. Next slide, please.
I think many of you who have been in this industry for a while will recognize these ten challenges and could talk about them at length. The challenge we're facing at the university is that it's all of them at the same time, because we're growing as a team from a very small base. I'll point to a couple of them, because they really dominate the scene. Cultural change — and John alluded to that in his last slide — is a massive challenge for us; obviously it takes time, and we're a university with very ambitious goals in this space. We've been expanding the team, and recruitment is an absolute challenge — I think we all feel that. There's a growing demand for our services: as we've upped our game, there's more demand coming in, and we're trying to cope with that growth. There's competition for funding. We're concerned about the quality of our data inputs, as everybody is — we have lots of data and we're trying to make sense of it all. And we're also very conscious of the extra workload that doing extra tasks around innovation and entrepreneurship can put on an individual academic, whose primary focus is research and teaching, when we turn up and say, hey, have you thought about this route? So keep your eye on that, please, because getting adoption rates up will primarily come down to convincing academics to work with us. I'll just mention one thing in that photograph: there's a blue plaque. That's the door outside the office I'm sitting in at the moment, and the plaque commemorates the fact that this was the home of Lord Kelvin. For those of you who studied physical sciences as undergraduates, you will remember the second law of thermodynamics with some horror — well, this gentleman was responsible for that, and I'm sitting in his living room right now, which is my office. That's just to make the point of how historic the university is — but that also comes with challenges in terms of resistance to change. So next slide, please.

So why did we look at gen AI? Why did we consider it? Our challenge is about scaling up, not cutting back. We're trying to grow our service, grow what we're capable of doing, and when the team reaches that size we'll keep it there for quite some time — so how do we cope with growth? We want to grow a pipeline of opportunities, but definitely not grow the paperwork that comes along with that. And as we tell each other when we meet at conferences and give talks, the work we do is very much a contact sport; to do it, we need to make more time for people and spend less time inside processes. The last point is a comment I borrowed from Tony Raven, ex-Cambridge Enterprise: what would the neighbours say? What that means is everyone's looking at AI — what's the cost of being left behind? We should be looking at it, we should be exploring what its capabilities are in our world, and that's what we set about doing. So next slide, please. At a very early stage, I had seen a slide on the AUTM website: a picture of John giving a presentation, with a comment from Mark beneath it. I read the slide behind John in the picture, and it talked about AI — this is going back to October last year. So I made a phone call and asked what was happening in this space.
And I got into a conversation with Mark, who I've known for quite some time, and he explained the work John was doing. So I made an offer, which was whether John could come over to meet my team and start to introduce us to this — which they accepted, and we're very grateful for that. In particular, we wanted to look at the three offices that we're building up in terms of people and capability: academic consultancy, IP and commercialisation, and partnership. The partnership piece is what we used to call industry engagement; it's now innovation partnership, because our partners are much broader than just industry — we have a large partnership with the National Health Service in the UK, we have partnerships with government and public bodies, and so on. So we're setting up teams, processes and people in that space, and we were looking at specific challenges within those teams: contract review, which takes a lot of time in academic consultancy; the invention disclosure piece, which John had talked about in the work at NYU, and which is sometimes a barrier to engagement with our academic colleagues; and the lots and lots of data that we have — how do we start distilling some useful information for our bizdev people from it? So we have challenges in all of these areas. We asked John to come over, and he dutifully did. I heard a word I'd never heard before, which is a "prompt-a-thon" — I think John made that word up on the spot. What that meant was that I got my team in with John over the best part of a week. John saw what we were doing and took it all in, and then spent time with the team, training them in how to write a prompt. And when I say training, it was the very basics of what a prompt is, starting off with the really simple plain language prompts that he described earlier. So these are the areas we were specifically interested in, looking to develop some capability. Next slide, please.

Here are some of the examples that we developed from that initial prompt-a-thon. In academic consultancy, we spend a lot of time looking at many different contracts, and we need summaries of the key contracts involved to make sure we know what the inputs and outputs are. That just takes a lot of time — hours — to read through and create the summaries. Feeding those in with very basic prompts — the plain language prompts of the stage one work that John described — there's real time saving around that: creating very basic summaries of contract information, redrafting some basic terms. It's stuff you can do in ten minutes that usually would take an hour, and if you add up all those hours, there are real savings. Next slide, please. In IP and commercialisation, we're very much interested in the invention disclosure piece, because it's a barrier — one of the things we have to get over in order to grow. We're very interested in growing those typical engagement rates: in the very good universities, rates of engagement with academic staff are maybe 15%, and we're very interested in ways and means of breaking out of that paradigm. How do we get more engagement? This is one way.
And those natural inputs — we've all had the joy of everything from the 80-slide PowerPoint that we have to disentangle and turn into an invention disclosure, down to the back of an envelope or a tissue with an idea on it. All of those natural inputs we're talking about: how do we turn them into disclosures? We've seen the examples from John, and we have an example there on the screen; it's very basic as well. Next slide, please.

Speaker 4 48:00
On the partnership side, this was really about how we look across the whole university, because universities are typically federated to some degree, so we don't have a CRM that covers the whole of the university. Glasgow is very much like that: we have colleges of arts, humanities and social sciences; medicine, veterinary and life sciences; science and engineering — and we don't have one way of looking across all of the colleges and schools. AI is good at looking at different data sets and seeing where the data comes from. So when we go looking for who our partners are across those different colleges, it can spot them, it can report back on what they do, we can check the internet for what their current strategies are, and blend that information into what I call a primer for a bizdev person. It's not a business report per se, but it gives them something to start a conversation with the client: hey, did you know that you've got six contracts with the university — which they probably don't know — and they're related to this area, and we know that's important to you, and so on. It's to build a conversation and give a prompt to the bizdev folks, and again, the time saving in doing that is quite considerable. Next slide, please. Again, on the relationship intelligence piece I mentioned — distilling large data sets — there's an example on the screen that came from a very large contract with multiple partners: there were more than 12 industry partners on it, and each of them had their own financing piece, in both cash and non-cash, towards the project. Reading through those 120 pages takes an awful lot of time. What we wanted to do was distill all the financial information and get a breakdown of the cash and non-cash contribution by partner. Just getting that simple task done, which would have taken a couple of hours with a document like that, turned into a 20-minute exercise — and that's with the checking and rechecking to make sure we're getting the same answer. There's also the potential to spot hidden partnerships in there, if you extend it a little bit, and again that's just intelligence for our bizdev people to use. Next slide. And just before I finish up: going into this, I knew very little. I'd played with ChatGPT, like everybody does — we've all seen, if you're hiring a lot of people, the CVs coming in, and you can see the quality of the cover letters and CVs getting better and better as more people use it, through to presentations and so on. But actually moving it into the line of work — I was as nervous about that as anybody would be, in terms of really understanding what it can do at our university. Recently we opened a new Centre for Data Science and AI, led by the brilliant Professor Ana Basiri.
So we've got that, and we've said: as we develop this work and get more into the bot thinking — that stage two and stage three work that John mentioned — can you help us in terms of governance and oversight, to make sure that, in terms of regulation and what we're doing, we know we've got some oversight from real professionals in this space? And Ana has agreed to that, which is brilliant. So we'll have that connection there, just to make sure we're on the right track. I'll also finish off by saying we're very interested in talking to people about how to develop this specifically for a full-spectrum type of university. We can get very good very quickly on engineering, or computer science, or biochemistry, but we're also heavily interested in social sciences, arts and humanities, and in what we have to do and what language we have to use to get good at this in those spaces and in those disciplines. So that's of particular interest, and for anybody who wants to have that conversation, I'm more than happy to do so. And I think that's my last slide, John.

Unknown Speaker 51:59
I think that means it goes back to Mark.

Speaker 2 52:01
All right. So I'm going to wrap us up here a little bit. There have been about 20 questions in the Q&A, which I have been furiously trying to answer as we go, because there are a lot of good questions. So if you press the little Q&A button on the bottom, you'll see the answers — I figured it would be easier for people that way. So, where is this all going? We've just explained how we've effectively started to create a future where most of the operations of a tech transfer office are AI-enabled — hence the title of the presentation. But it's not going to stop here, and even in the three weeks since the AUTM annual meeting, some of the things on this slide are already coming to pass. There are startups who are using AI to draft patents. We know law firms are using it to draft your patents — whether they tell you they're using it to draft your patents or not, I don't know — but you should assume that, for the same reasons we're using AI to improve efficiencies, the legal profession is probably in equal parts frightened of it and actively using it to improve efficiency in the things they do. I said 12 to 18 months in the annual meeting presentation; I would say it's six to 12 months right now before you're going to be able to draft a reasonable version of a patent application using a prompt, and this is going to drive down the cost of IP. The good news is, if you're a small office and you don't have a huge budget, you might be able to do more. If you're a large office, you're probably going to do more of everything, so your budget might not go down, but you might actually be able to get more throughput or file on more things. Everybody talks about the GPTs, the large language models, but where the value is going to be extracted — and in fact what we're doing in some places — is small language models, which are bespoke things that are fit for purpose. We do believe, through doing these webinars and the presentations —
— and we're going to talk about an e-group for AUTM in a second — that eventually there will be some kind of bespoke model for tech transfer that does something particularly well for the way we do things. Effectively, the killing of the disclosure that John described before is kind of a small language model: a model built specifically to take technical information and publications and convert them into the form and format that we all need to do our jobs. So you're going to see more of that. My personal belief is that the world of the large language model — the GPTs, the Geminis — is almost already done, because it is so expensive to create them. What you're going to see is people riffing on, using, or creating subsets of those large models that are fit for purpose, and that's for all of us. So think of AI as your executive assistant — the good news is everybody gets an executive assistant — and it's about learning how to use these tools to make your job a lot more efficient. Going back to where we started: are we going to lose our jobs? The answer is, I don't think so. I do think the nature of the work will change. I do think there are maybe positions that I currently have staffed that may not be replaced when people leave — that after two or three years we'll have squeezed out enough efficiency that we might not need to replace a position. But nobody's going to lose a job. For us, it's just about learning how to use AI; it's going to let us do more, better, faster, because we still have to scale, like Declan talked about. John is working on a project now called the TOV assistant, where we can actually verbally speak to the data, which is kind of incredible. You could query your database with voice commands, and not by asking someone to go pull up bits of information. So imagine a world of "hey Siri" — or "hey TOV assistant, tell me what the royalty rate was in the Pfizer agreement that was signed with Dr. Smith in 2007" — and getting an answer. Just think about how much time that saves. You can ask it to pull and generate reports based on just a request — again, imagine how much time is being saved. And then you can also interface with it to have it write emails for you, or say, "give me a license agreement, here are the five terms that I want included, please give me a draft of a license agreement," where you're kind of waving your hand at the screen and having these things be created. So it's pretty remarkable. Next slide. Okay — many of you know this already, but — sorry, Siri was calling me — here's the thing, and why we're giving the prompts away. Feel free to take a picture of the QR code on the screen: the TOV AI prompts. It'll send you to a link to our website where you can download all of the prompts that we've talked about. The disclosure prompt is not there yet, but when we figure it out, we'll let you know how to access it. The ask I have for everybody is that we're giving this away with two conditions. One is that it's for non-commercial use — your internal operations I'm considering as non-commercial use. And we are asking that you not give the prompts to third parties, that you not give them to somebody who's, say, consulting for you to help.
Next slide. Okay, many of you know this already, but, sorry, Siri was calling me. So here's the thing, and here's why we're giving the prompts away. Feel free to take a picture of the QR code on the screen for the TOV AI prompts; it will send you to a link to our website, where you can download all of the prompts that we've talked about. The disclosure prompt is not there yet, but when we figure it out, we'll let you know how to access it. The ask I have for everybody comes in two parts. One is that this is for non-commercial use, and I'm considering your internal operations non-commercial use. The other is that we're asking you not to give the prompts to third parties, and not to give them to somebody who's consulting for you. Because the goal of giving all this away, rather than trying to monetize it, is to help teach the community how to use these tools themselves. If you go to that website, we're asking that the director of the tech transfer office be the person who signs, so that we know there's accountability from the director of the office to uphold the things we're talking about. That's the trade-off for us. We thought a lot about whether we should or shouldn't try to monetize this, and we decided not to. In exchange, we're saying this is for the tech transfer community, for academic TTOs; that's the use case. There may come a point, maybe in a year, when the information is so widely out there that we no longer care. But what I'm asking of you is, please don't flip this and hand it to someone who's helping you, where we can't control what happens to the prompts, because what I'm trying to do is make sure everybody can find a way to use this that's affordable, without having to hire outside people to do the work. I think that's where we landed. So why don't we leave that up for 30 more seconds so people can take a picture of it.

Unknown Speaker 58:47
There were a lot of questions.

Speaker 2 58:48
Yeah, why don't we hit the Q&A? I think, Davina, we have what, 15 minutes for Q&A? So any one of you, shoot.

Speaker 3 58:56
Just to clarify, tokens are the units of data that go into and come out of the AI models. If you look at GPT or Azure, you'll see different token limits. For example, GPT-4 32k is a model with 32,000 tokens, and that is the maximum number of tokens you have for input plus output combined. It's not per input or per output; it's for one full chat session. So if I upload an agreement that's 12,000 tokens and the output is 2,000 tokens, I've used 14,000 tokens of that session and I still have another 18,000 or so to play with; I can go back and forth asking questions. That's how you keep track of the amount of data you're using and what the cost will be. Thirty-two thousand tokens is about 50 pages of data. Declan's team has a model with 120,000 tokens, which is roughly 300 pages of data, input plus output. So that's the difference. And again, if you use ChatGPT 3.5, per 1,000 tokens, so roughly a page or two of data, it's going to cost about $0.0015. If you use GPT-4, that cost goes up to about $0.06, and it scales up from there. But these prices have already dropped precipitously: GPT-3.5 is a tenth of what it cost, and GPT-4, I believe, is a fifth. So we assume these token costs will continue to decrease over time. Are there any other questions that I missed?
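To make that token arithmetic concrete, here is a minimal sketch of session-level bookkeeping. The context window size and the per-1,000-token price are illustrative assumptions taken from the figures just mentioned, not quoted rates, and a real tokenizer (for example, tiktoken for OpenAI models) would give exact counts rather than round numbers.

```python
CONTEXT_WINDOW = 32_000        # a "32k" model: input and output share this budget
PRICE_PER_1K_TOKENS = 0.0015   # illustrative dollar rate per 1,000 tokens

# Example from the discussion: a ~12,000-token agreement plus a 2,000-token answer.
input_tokens = 12_000
output_tokens = 2_000

used = input_tokens + output_tokens
remaining = CONTEXT_WINDOW - used
cost = used / 1_000 * PRICE_PER_1K_TOKENS

print(f"Used {used:,} tokens, {remaining:,} left for follow-up questions, "
      f"about ${cost:.4f} at the assumed rate.")
# Used 14,000 tokens, 18,000 left for follow-up questions, about $0.0210 at the assumed rate.
```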
Speaker 2 1:00:49
Yeah, there was a comment that the QR code was not working, but we've had lots of people download the prompts, so we know it works; if you're seeing a lot of problems, just contact me offline. Now there's an ask. You can probably take that slide down if you want to, so you can get the panelists back up. We talked about what we're giving; now we're going to ask for something from everybody in the audience. Davina has provided her email address in the chat. What we'd like to do, and what AUTM is willing to do, though there's a policy we need to live by, is create an e-group for AI in tech transfer offices. That's a place where people can share things; if people want to share their own versions of prompts, or things they've improved, that's great. Again, with the TOV AI prompts, we just ask that you download them yourselves; it's on you, it's a single-use license, all that. So if you are interested in participating in an AI e-group in AUTM, please email Davina as soon as you can. Basically, we need 20 people; think of it as a petition. So maybe by 1:10 we can have 20 of you just flip an email that says "I am interested in an AI e-group"; I think that's all we need. As soon as we get that number of people involved, AUTM will open up an e-group, and that will be a place where we look to share. I know John has folks who are starting to become power users of this, and there's a separate thread, it's not Slack, but they're chatting about things as they get up to speed. So while I encourage you to reach out to John, know that if 5,000 people reach out to John, well, he still has a day job. We're trying to find ways to help and to share what we know in a more granular fashion without overwhelming the office; unfortunately, we can't be tech support for everything. But I can tell you that if you're using Azure and you're using the prompts we generated, you're going to save a couple of hundred, if not a thousand, hours of work. We're getting you 95% of the way there. In the things you download, if you know anything about the back end of how this works, there are a few places where we actually give you the JSON files, which you just upload and run, and that saves a bunch of time too. So that's the ask: if you're interested in having this conversation on a semi-regular basis, drop an email to Davina so we can get the group up and running. We also have a little time to ask and answer questions, so if anybody has other questions, you can put them in the chat and we're happy to answer them live, but I think we got most of them in the Q&A as we were going along. There's a question about marketing abstracts. Strangely enough, I think most people who've been playing around with this have used it to write marketing abstracts; we went in and tried to do the hard finance work first. So marketing abstracts are on the list of things we want to do; we just haven't actually done them. But as we update our prompts, get them to work, and get comfortable with the outputs, we'll update the file that you can download. So if you download it today and in three weeks we put another prompt up, we'll find a way to let you know there's more information you can pull down. Someone asked in the chat, what is the name you just mentioned? And I don't know what that is, because I don't remember what I said.

Unknown Speaker 1:04:42
Was it JSON files?

Speaker 2 1:04:44
Oh, yeah, that's not a name. It's .json, the file format; that's all that is.
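For anyone wondering what "just upload the JSON and run it" might look like in practice, here is a minimal sketch of loading a prompt definition from a JSON file and sending it to an Azure OpenAI deployment. The file name, the JSON layout, and the deployment name are assumptions made for illustration; the actual TOV prompt files may be organized differently.

```python
import json
import os

from openai import AzureOpenAI  # pip install openai

# Assumed layout of the prompt file (illustrative only):
# {"system": "...instructions...", "user_template": "... {document_text} ..."}
with open("tov_prompt.json") as f:
    prompt = json.load(f)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

document_text = open("agreement.txt").read()  # the document the prompt should act on

response = client.chat.completions.create(
    model="gpt-4o",  # the name of your Azure deployment, which you choose yourself
    temperature=0,
    messages=[
        {"role": "system", "content": prompt["system"]},
        {"role": "user", "content": prompt["user_template"].format(document_text=document_text)},
    ],
)
print(response.choices[0].message.content)
```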
Speaker 2 1:04:58
There you go. Somebody put a link up in the chat; that's just the file you upload to run everything. We figured, since we have it in this format, if you have Azure you can just upload it and use it, so why make you build your own? Again, we're just trying to share what we know and help everybody. Someone is asking, who is Davina? The person on the screen in front of you. Her email is in the chat, dyoung@autm.net, so that's a pretty easy one to remember.

Speaker 3 1:05:31
That's for the Wellspring question. The Wellspring interface has absolutely nothing to do with the process. Honestly, we were hoping to connect to Wellspring through an API so we could automate a lot of our processes with AI, but we were unsuccessful because Wellspring just isn't ready to make that leap yet. So the Wellspring interface has no bearing on the outputs or on the process for creating these prompts.

Speaker 2 1:06:07
There's another question. No, don't send it to me; send it to Davina. There you go, sorry, I'm answering that in the chat. So yes, send it to the person on the screen, not me. All right, we'll give it another 30 seconds. Anybody else have anything? It looks like we're dropping people now, which is perfectly fine. So, all right. Well, thank you, everybody. This was great; I hope you got something out of it. Again, feel free to download those prompts. John, Declan, anything else you want to add? Well, thank you.

Unknown Speaker 1:06:50
Thank you, guys.

Speaker 2 1:06:51
I was going to try a Zoom effect. Oh no, it's not letting me do it. I'm supposed to get explosions behind me, and it's not happening.

Speaker 1 1:07:01
Well, on behalf of AUTM, I would like to thank Mark, John, and Declan for their informative presentations today. And thank you again to our sponsor, Marshall Gerstein. A recording of this webinar will be available for viewing in the AUTM Learning Center within a week of this event and is included in your registration. As a reminder, please complete the webinar evaluation following this session; it will open when you sign off and will also be sent in the follow-up email tomorrow. This will help us serve your needs in the future. Thanks again for joining us, and have a good rest of the day, everyone.

Unknown Speaker 1:07:38
Thank you, guys. Bye bye. Thank you. Transcribed by https://otter.ai