and they will then present a few slides. Each presentation will run about 12 minutes, which leaves roughly 20 to 22 minutes for questions. You can ask questions during the presentations, but please put them in the Q&A feature rather than interrupting, because we would prefer that the panelists have their time to present; do enter your questions as soon as they come to mind, so you don't forget them. As for the order: Shanti and Surin will talk about AI and its applications, and Matt will talk about quantum and AI and their applications in research, drug discovery, and related areas. The emails of all the panelists are on the first slide; I'll leave it up for a few minutes so you can write them down, and we can bring it back at the end of the presentation so all of you have the emails you need. So, without much further ado, I'll have Shanti start off the presentation. Please go ahead, Shanti.

Thank you very much, Amy. Just a quick intro. As Amy said, I run a company called DataESis, and as the name suggests, we do all things data. Personally, I have worked with data my entire career, and what I've realized through that journey is that the technologies keep changing, but the requirement for bringing data in, integrating it, storing it, analyzing it, and, more importantly, getting insights from it remains the same. If anything, there are more and more new applications for it, with AI, which we're going to talk about now, at the forefront. I'm really glad I chose this path in my career; it's been an awesome journey.

A little bit about our company. I guess Matt and I are the panelists from the commercial world rather than the research world. Our primary focus is pharma, where we have significant domain experience. Everything we do is built around data, so we know the data sets that pharma uses. We help them primarily on the commercial side with product launch, implementing various types of solutions, and, increasingly, AI-driven applications to optimize their business and improve revenue. Toward this end, we have a full suite of easily customizable applications, so we can get a company ready to launch a product within a few months. Launch planning typically starts about a year to a year and a half before the actual launch; we can cut that time down to a few months.

Okay, some level-setting. Since I'm the first to go, I want all of us to be on the same page, so let's start with some very basic definitions around AI. Most of us, me included, first got to know AI through science fiction. For me, thinking back while preparing these slides, it was really HAL 9000; if you're my age and have a few gray hairs, you'll remember it from Arthur C. Clarke's book, which was made into a movie.
But mostly, I guess, it's all these humanoid robots. I've always felt, and I think it's true, that science fiction really leads the way for research; researchers in universities may well get their ideas from science fiction too. AI has been around a very long time, going back to the mid-1900s, the 1940s or so, but for most of that time we never really had the computing power to do much with it.

Moving on to generative AI, which is the latest hot thing: it came onto the scene with ChatGPT about three years ago. As the name suggests, generative AI is about generating content, creating things like documents, images, audio, and video. Today these tools do all sorts of amazing things, and if you've been following the news or playing with tools like ChatGPT and Gemini, it has been wonderful to watch the technology evolve over the last couple of years.

More recently, agentic AI has been taking off. Where generative AI creates things, agents actually accomplish tasks. Even the simplest tasks that we human beings do almost magically really involve the ability to understand, the ability to make decisions, and then the ability to take actions; that's what agents are all about. Some examples: if anyone has tried Microsoft Copilot, it can read your email, automate your meetings and appointments, and answer all sorts of questions based not on generic internet information, the way ChatGPT does, but on your own content, your mail, your documents, and so on. The power of these agents increases even more when you string them together, like a work chain or an assembly line, and then you can accomplish really complex tasks. I'm sure many of you have read about entire Chinese factories running without a single human being: the whole factory floor is driven by robots, automation, and agents handing tasks off from one to the other. I'd love to visit one of those factories one day. We don't have them here in the US yet, but I expect we'll get there.

I've already mentioned factory floors, but there are applications of agentic AI in almost any vertical. Take Waymo: if you live in San Francisco or elsewhere in California, you've probably heard of it. They run completely autonomous, driverless cars; it's like an Uber, you call one up, hop in, there's no driver, and you get where you're going. It's pretty amazing technology, and it works really well. In healthcare, AI is doing an amazing job reading scans like x-rays, helping radiologists catch things the human eye sometimes misses and letting them focus on the highest-risk cases. I'm not going to go through all of them; I just wanted to give you a taste of these types of advanced AI and agents in practice.
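To make the "understand, decide, act" description above concrete, here is a minimal, hypothetical sketch of an agent loop in Python. Every name in it (read_inbox, draft_reply, and so on) is an invented stand-in rather than any vendor's API, and a production agent would call a language model where the placeholder decide() logic appears.

    # Minimal sketch of an agentic loop: perceive -> decide -> act.
    # All tool names are hypothetical stand-ins, not a real product API.
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        tools: dict = field(default_factory=dict)  # tool name -> callable

        def decide(self, observation):
            # A real agent would call an LLM here to choose a tool and its
            # arguments; this placeholder just routes unread mail to a reply tool.
            if observation.get("unread_mail"):
                return ("draft_reply", observation["unread_mail"][0])
            return ("done", None)

        def act(self, task):
            tool, arg = task
            if tool == "done":
                return None
            return self.tools[tool](arg)

    def read_inbox():
        # Stand-in for an email/calendar connector.
        return {"unread_mail": ["Can we move Tuesday's meeting to 3pm?"]}

    def draft_reply(message):
        return f"Drafting reply to: {message!r}"

    # Chaining agents "like an assembly line" just means one agent's output
    # becomes the next agent's observation; here a single agent runs to completion.
    triage = Agent("triage", tools={"draft_reply": draft_reply})
    observation = read_inbox()
    while True:
        result = triage.act(triage.decide(observation))
        if result is None:
            break
        print(result)
        observation = {"unread_mail": []}  # nothing left to do on the next pass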
The summary is that there are applications everywhere: in every type of domain, every type of business, every type of university. You name it, there are applications for AI.

Okay, so let's come to pharma, which is the domain I know best. Surin is going to talk a lot about drug discovery, so I'm not touching that; I'll start with what happens once the drug discovery work is done. The next step for pharma is getting the drug through clinical trials, and we all know this is an arduous journey of several years. They have to get through Phase 1, Phase 2, and Phase 3, and only after a successful Phase 3 trial, and then FDA approval, can they actually get ready to launch the drug. The whole thing can take as much as 10 years, not counting the time for drug discovery itself. It's a long lead time, and clearly AI can make a difference in shortening it.

The problem with clinical trials is that patient recruitment alone can take a couple of years, because patients have to meet all kinds of conditions depending on the type of drug being tested, and a lot of that process still happens manually today, believe it or not: there are people on the phone making calls to patients. It's a very manual, painful, time-consuming process. Many startups are already doing a ton of work to optimize and automate as much of that patient recruitment as possible, creating a more or less ideal patient pool so you can get a much better clinical trial outcome as well. AI can also help during the design of the trial itself. There's a lot of complexity in these trials; specialists and consultants are used, but AI is already playing a major role, and there are lots more improvements it can bring to the space around clinical trials.

Then, once you're through, you've gotten your approval and the FDA has blessed you, and you're ready to launch, you have to figure out the whole manufacturing process. I saw recently that Sanofi reported using AI agents to improve the yield of their manufacturing process. If you can improve quality, reduce wastage, and improve testing, that matters enormously, because these are medicines: you have to be sure the quality is good and there's no contamination, and the standards are extremely high. Using AI and robotic mechanisms, they can absolutely improve yield and quality and reduce cost, and with the huge political focus right now on drug prices, it's really important for them to get the maximum yield. There's much more work for AI to do in this area as well.

Now, coming to the commercial side: there are usually two main parts of pharma, clinical and commercial. We just talked about clinical (we didn't talk about drug discovery, which Surin will), and commercial is really everything to do with the market.
Once they decide to launch the product and have gotten the approvals, it's all about marketing: which channels do you use, and how do you optimize them? Then the most important thing is what's known as market access, which means partnering with all the payers, the insurance companies, so that they cover your product. These drugs can cost thousands, sometimes tens of thousands, of dollars, and you need insurance to pay for them; otherwise you're going to have a very small volume of customers. So market access is a whole channel of its own, and today there are AI models that can help you determine which payers are more likely to approve your drug, so you can go after them first to create partnerships.

Then there's customer service, which is similar to any kind of product: you have a product and customers have questions you have to deal with, except in pharma you have many more players. You have the providers, the doctors, who are really your customers, because they are the ones writing the scripts; the patients are indirectly your customers, but really the doctors are. Then you have questions from patients to handle, and then all your internal people, your sales reps and FRMs, because this is a complex product you're selling. Everyone has to have access to all of the information, on both your product and all the competitor products.

The final item I put there was documentation. Pharma companies have to generate tons of literature and read tons of literature end to end, on both the clinical and the commercial side, so AI that helps digest all that information and generate the literature they need is a huge, huge time saver. I don't know if I'm out of time already. (You can go ahead, Shanti.) Okay, I'm nearly done, but the last point I put in there is about sales. We have actually built an agentic AI application to help improve sales reps' call plans. Just like in any other business, you have reps making calls to the providers, the doctors in this case, but there's a lot of complexity: unlike other types of businesses, sales in pharma is much more complicated, with regulations on whom you can approach, what you can say, and all sorts of things. There's a whole planning process, and the amount of literature reps need is quite large. And finally, I mentioned prediction. This is similar to most other businesses, where you really want to have the data, do real-time analytics, and be able to make predictions on all fronts. So, I'll stop there. Thank you, Amy.

Thank you, Shanti. Next we will move on to Surin, who will talk about AI in drug discovery. Am I right, Surin? (Yeah, generally.) So I'll leave it to Surin to do the presentation.

Hi, my name is Surin Mong, and I am an Associate Director of Licensing at University of Michigan Innovation Partnerships.
That's the tech transfer office here at the University of Michigan. I work primarily in the life sciences, with a large part of my technology portfolio lying at the interface of artificial intelligence and life sciences, so I was asked to speak about some of what we're seeing at our office in this regard. A little more about me: in a previous life, I helped develop intellectual property portfolios for biotech startups in the greater Boston area, and then came over to U-M last year to be more on the transactional side of things. Some of the startups I worked with had a heavy digital component and applied cutting-edge AI techniques to problems in drug discovery.

At the University of Michigan, the function of Innovation Partnerships is probably familiar to many of you in the webinar audience, but just in case you're not involved in academic tech transfer: like most university tech transfer offices, we work with faculty who submit invention reports to our office, and we help them protect, develop, and hopefully commercialize their innovations, either through licensing to an existing entity or possibly through startup formation. As an office, we're pretty busy: we receive a large number of invention reports every year from practically every college at the university. On the left-hand side of this slide, which I believe shows the 2025 fiscal year, you'll notice that the majority of these invention reports come from faculty affiliated with either the medical school or the engineering school. On the right-hand side, the bar graph highlights the volume of invention reports we've seen over the past several years, and you can see the steady trend upwards. But maybe the most important thing I wanted to highlight is that the percentage of invention reports we receive that are tagged as being in the domain of artificial intelligence has been increasing: it has more than doubled from what it was five years ago, and I would say the trend is still creeping upwards over time. And if you break this down further, by physical sciences or life sciences, we're seeing increasing trends in both cases.

So at the University of Michigan, what does AI innovation look like in the life sciences? We're seeing a lot of use cases and implementations, particularly when it comes to drug discovery. Not everything we see is geared toward drug discovery; faculty are working on a lot of different projects, but a large percentage of it is. What I'm showing on this slide is just your typical roadmap for drug development, starting with initial target identification, going through drug optimization, and then eventual testing in patients. I put this up because I want to emphasize that drug discovery, as everybody knows, is an incredibly challenging task, and it becomes increasingly expensive as you get closer to market and push toward clinical development. The often-cited number is that, after accounting for all the failures, it costs about $3 billion to develop an approved drug. So anything you can do to improve the efficiency and bring down the cost of developing a drug has great value.
And I think that's what a lot of the faculty I work with recognize, and they're trying to leverage artificial intelligence in many ways to improve this process: for example, finding ways to attrit compounds in the drug discovery pipeline and have them fail faster, before they reach the more expensive steps of clinical development.

So, with this framework of drug discovery in mind, what are some specific projects and ways in which our faculty are using artificial intelligence? On this slide I just want to give a very high-level overview of some projects we're seeing. This is by no means comprehensive; the intent is just to provide vignettes of the things I'm seeing, and there are many other projects I could have included, but for the sake of time and brevity I've only included a couple. On the early discovery side, I'll briefly comment on two. We have faculty working on biomedical knowledge graphs that use public and private data to map biomedical relationships that can then be used to train AI models. Another project, which I'll talk about in more detail shortly, is using machine learning to predict novel drug combinations capable of overcoming antimicrobial resistance, a situation which is becoming increasingly dire for patients who become infected with drug-resistant pathogens.

Moving a little further down the drug discovery chart, other projects faculty are working on at the University of Michigan include using generative artificial intelligence to optimize the properties of existing or known compounds and improve their drug-like properties. These could be things such as increased solubility, decreased polyreactivity in the case of an antibody, or better engagement of the drug with its target of interest. Another project I'll just touch on briefly is the use of machine vision to rapidly assess drug toxicity from high-content imaging. I have a faculty member using machine vision to detect very subtle phenotypes that are indicative of drug toxicity and that might not otherwise be detectable using traditional methods; the machine vision gives you a leg up in detecting and selecting the compounds that are most promising and moving them forward, while taking out the ones that are potentially toxic.

Going back to the project I mentioned, where a faculty member is using machine learning to discover drug-drug combinations: this comes from the lab of Sriram Chandrasekaran, who I believe is in the Department of Biomedical Engineering. I want to double-click on this project and go one layer deeper to provide a bit more detail. Just to provide some background, in case you're not aware, antimicrobial drug resistance is a growing crisis. The discovery of antibiotics was an incredible achievement in the fight against infections, but as the bar graph on the right shows, we're increasingly encountering pathogen resistance to current antibiotic therapies, and this problem is only expected to worsen. In response, clinicians are increasingly turning to multi-drug regimens to combat infection.
But this is often a trial-and-error process, so there's significant room for improvement. A lot of the work in Sriram's lab involves leveraging artificial intelligence and machine learning in different ways to discover potentially synergistic drug-drug combinations. One such approach that he has developed and published on, which I've included on this slide, is a platform he calls M2D2, for mechanistic machine learning for drug-drug interactions. I'm not going to dive too deep into the technical details, but at a high level this is a two-stage machine learning pipeline: it takes as inputs computational or experimental data on drug-target interactions, and, using a random forest algorithm, it outputs potentially synergistic drug-drug combinations, which then need to be further validated in a wet lab. The other interesting piece about M2D2 is that, while many machine learning models are thought of as black boxes, in the sense that users cannot tell what is contributing to the model's output, this particular model is set up so that it has the potential to offer mechanistic insight into how the drugs are working, allowing scientists to really understand which pathways within the bacteria are being affected by the proposed drug-drug synergies. This idea of AI with mechanistic insight is a recurring theme throughout Sriram's work.

So that's great: you can have machine learning models that make predictions, but how valid are those predictions? On this slide I just wanted to show one such prediction from his work: one of the drugs, a broad-spectrum antibiotic commonly used to treat urinary tract infections, was predicted to be synergistic with fasudil, a vasodilator often used to treat strokes. Simply put, this particular prediction was validated using in vitro experiments, and on the right-hand side there's a graph demonstrating inhibition of E. coli growth. The way to interpret it is that the lower the bar, the better the result. The black bar shows essentially no effect from fasudil, the vasodilator, by itself. The second bar is the antibiotic, which, being broad-spectrum, you would expect to have some activity against E. coli, and it does. But the really interesting thing is that when you combine the two, you get the synergistic effect shown in the third bar: you really drive down the growth of E. coli and inhibit it at much higher potency than with either of the two drugs alone. The point of talking through this particular project is that I think it neatly captures how artificial intelligence and machine learning are being used by our faculty to explore scientific questions and produce results that are perhaps non-obvious to people in the field, and there are many examples of this across the different faculty I work with at Michigan.
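The pipeline described above lends itself to a compact illustration. The following is a minimal sketch of the general idea only, a random forest trained on drug-target interaction features to score candidate drug pairs for synergy, written with scikit-learn on synthetic data; it is not the published M2D2 implementation, and every number in it is made up.

    # Minimal sketch of the general idea behind a pipeline like M2D2:
    # featurize drug pairs from drug-target interaction profiles, train a
    # random forest on known synergy labels, then rank unseen pairs.
    # Synthetic data only; NOT the published M2D2 code.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_drugs, n_targets = 40, 120
    # Row i = drug i's interaction profile across targets/pathways
    # (e.g. docking scores or chemogenomic measurements).
    profiles = rng.random((n_drugs, n_targets))

    def pair_features(i, j):
        # Simple symmetric featurization of a drug pair.
        return np.concatenate([profiles[i] * profiles[j],           # shared targets
                               np.abs(profiles[i] - profiles[j])])  # complementarity

    # Pretend wet-lab synergy labels exist for pairs among the first 20 drugs.
    train_pairs = [(i, j) for i in range(20) for j in range(i + 1, 20)]
    X_train = np.array([pair_features(i, j) for i, j in train_pairs])
    y_train = rng.integers(0, 2, size=len(train_pairs))  # 1 = synergistic

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # Score held-out pairs and surface top candidates for wet-lab validation.
    test_pairs = [(i, j) for i in range(20, n_drugs) for j in range(i + 1, n_drugs)]
    X_test = np.array([pair_features(i, j) for i, j in test_pairs])
    scores = model.predict_proba(X_test)[:, 1]
    top = sorted(zip(test_pairs, scores), key=lambda t: -t[1])[:5]
    print("Top predicted synergies (pair, score):", top)

    # Feature importances are one rough route to the mechanistic insight the
    # speaker mentions: which target/pathway features drive the prediction.
    print("Most informative features:", np.argsort(model.feature_importances_)[-5:])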
And this is my last slide; I'll just touch briefly on my approach toward protecting innovation around artificial intelligence as it's applied to the life sciences. A very simple framework I use in working with faculty is this. First, it's really important to have a fundamental understanding of the AI methodology they're using for whatever task, project, or research question they're trying to answer. Then, separate from that, consider what the inputs and what the outputs of a given AI model are. In my mind, when I look at each of those three buckets, there's potentially something of value in each that may warrant protection. In the example I just gave of Sriram's work on novel drug-drug combinations, the outputs could obviously support method-of-treatment-type claims that you could pursue in a project like that. The AI methodology itself is potentially patentable if there's something unique about the approach, or perhaps about the application of an AI model in a space where it hasn't previously been used; and if you think of AI as software, you also need to consider and weigh a copyright strategy alongside a patent strategy for the methodology. And something we're increasingly seeing with regard to inputs is instances where the data used for training a model is potentially valuable too. Data is not necessarily something you would patent or copyright, but we're seeing industry members reach out with interest in datasets that are generated internally, or mostly internally, and those present potential licensing opportunities with those corporate entities. So that's all I have in terms of slides; I'm happy to elaborate or answer any other questions during the Q&A session. Otherwise, I'll turn it over to Matt at this point.

Thanks, Surin, that was a great presentation. Matt is on next, talking about quantum and AI, so please go ahead, Matt.

Great, thank you so much. My name is Matthew Kiesen, VP and GM here at IonQ. I won't be presenting deeply on our history, but I wanted to note for this audience that we are a very proud tech transfer from the University of Maryland. I joined IonQ when it was 10 people in an empty warehouse; for those of you familiar with UMD, I believe the building used to be the Terrapin Trader, which handled furniture and so forth, on the edge of campus. In the eight years since then, we've been fortunate enough to build a $20 billion company with over a thousand employees all around the world.

Per today's conversation thus far, generative AI is on all our minds, and you may also be seeing a growing chorus of announcements around quantum generative AI. Today I'll be talking about a couple of hot-off-the-presses research items from IonQ in this space and, importantly, what this might mean for those looking to be early movers, and the opportunities for tech transfer. I have to note, since we are a public company and this is an audience with many legal professionals, that today's presentation contains forward-looking statements; by continuing to watch, you are agreeing to the fine print here. Just kidding, but again, these are forward-looking statements as classified by the SEC.

Okay. IonQ has had unprecedented momentum in building quantum computers since we were spun out of UMD, and we're not slowing down. You may be surprised to learn this, but we are actually starting to see the beginnings of narrow commercial advantage, meaning
cases where quantum computers can do things better, faster, cheaper, and with less energy than classical computers. We actually expect quantum computers to materially outperform classical computers in certain domains in the next two years, and we're extremely close to the point where quantum computers cannot be simulated classically at all. Looking at our roadmap, you'll note that we recently purchased another technology transfer, Oxford Ionics in England, to accelerate it. On this new, updated roadmap I'm showing you two things: qubits and logical qubits. Qubits are quantum bits; I won't go deep into how quantum computers work, but just as classical computers have bits, quantum computers have quantum bits, or qubits. And just as in classical computers, physical qubits have some noise, so we apply error-correcting codes, with some overhead, to get effectively perfect logical qubits. So you're seeing two tracks here: the physical qubits in the system, and the logical qubits below.

For those of you new to quantum computing, a helpful way to think about it is that every time we add one qubit, due to the power of entanglement, we double the processing power of the computer. Our plan is to increase the power of our machines by a factor of 2 to almost the 200th power over the next year, which is much bigger than the linear increase you would get with classical bits, and it seems there are likely useful applications we can run at this scale on physical qubits.

Just to refresh, for those perhaps unfamiliar with the field, there are two kinds of generative AI I'll mention today. There are large language models, like ChatGPT and Gemini, typically built on transformers. And there are GANs, generative adversarial networks, which are basically two competing components: a generator that makes data, and a discriminator that evaluates the data as real or fake, good or bad. Both are useful technologies being used today, as discussed earlier in the panel.

I want to talk briefly about what it actually takes to train these models. BERT, from just over six years ago, consumed the energy equivalent of a round-trip transcontinental flight for one person to train. Okay, not so bad. GPT-2, with 1.5 billion parameters, took a little more. GPT-3, in 2020, had 175 billion parameters and took over 1,000 megawatt-hours, about as much power as a small neighborhood of, say, 100 to 150 homes consumes annually, so it's starting to get material. Then, in 2022, of course, that quickly came to look like nothing, and by GPT-4, with on the order of a trillion parameters, the model sizes are even larger and consume even more energy. The IEA, the International Energy Agency, has estimated we could be hitting over 1,000 terawatt-hours next year, roughly equivalent to Germany's entire energy consumption, and a gigawatt-scale model is coming. This is the trend where we see an opportunity for quantum AI, due to the potential speedups and energy efficiency we can get from quantum computers. In our case, we're manipulating individual atoms, trapped ions, to do quantum computation, and changing the energy state of a single atom takes very, very little energy, so quantum computer energy consumption scales quite nicely.
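A quick back-of-the-envelope calculation makes the doubling point concrete: an n-qubit state is described by 2^n complex amplitudes, so the memory needed just to store it on a classical machine doubles with every added qubit. The numbers below assume double-precision complex amplitudes (16 bytes each), which is only one way to do the accounting, but the exponential shape is the point.

    # Rough illustration of why classical simulation runs out of road:
    # an n-qubit state vector holds 2**n complex amplitudes
    # (16 bytes each at double precision).
    for n in (16, 36, 50, 64):
        amplitudes = 2 ** n
        tib = amplitudes * 16 / 2 ** 40
        print(f"{n:>3} qubits: 2^{n} = {amplitudes:.3e} amplitudes, ~{tib:.3e} TiB")
    # 36 qubits is about 1 TiB; 50 qubits is already ~16 PiB; 64 qubits is far
    # beyond any classical machine, which is the point behind the power-scaling
    # comparison later in the talk.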
Okay, so, a real-world example from a study we did recently. Those of you in materials science may be familiar with this: if you manufacture steel, you want to know its quality, and you actually need to examine it through a special electron microscope, which currently is done by humans doing spot checks. Obviously, as we were discussing earlier for other use cases, an AI should do this rather than a human eyeball. It turns out that training models to do this is very hard, because the training data is expensive to collect and there isn't much of it. So we recently partnered on a solution that uses a quantum GAN, a quantum generative adversarial network, in the existing workflow to train larger and actually more accurate AI models.

Think about the workflow. In a classical GAN, as we talked about earlier, the discriminator and generator work in a loop: we feed in real images and random noise, and slowly, over time, the generator gets better at producing images that look real, while the discriminator gets better at saying which images are real and which are not. This back and forth lets us generate quite accurate synthetic images to help train our models. With a quantum GAN, all we're doing is adding a quantum step up front: we take the output of a quantum circuit run on a quantum computer, and this improves on what a regular GAN does, which is to start with classical random noise. And it turns out we could see material improvement here with just a handful of qubits.

So, five big takeaways. One, quantum can plug into a larger model to help with data augmentation. Two, you can generate very good quality images, as judged by human evaluation. Three, regular GANs can have problems with slow convergence or mode collapse, where the GAN basically just returns the images it was trained on; our quantum GAN applied a Wasserstein technique to avoid model collapse. Four, we saw relatively fast training convergence on just 16 qubits, and you can think of this as having the quantum computer in the loop: we take the output and feed it back, as in backpropagation, into the same model. Five, most quantum AI models thus far use quantum computers only for inference, so what we're seeing here is a path to scaling, where we can get improvements with the quantum computer at scale that we couldn't get from classical alone.

I want to come back to the subject of power briefly. You may be aware that Microsoft has a power plant dedicated to AI, and Meta has a gigawatt model coming; the power consumption here is just unbelievable for a brand-new technology. So we've been working on a new technique, quantum large language model fine-tuning, which is one way to reduce power consumption. Basically, we leverage the sunk cost of training your existing model, since you've already made that gigawatt model, to fine-tune new models without needing nearly as much power: quantum can use less data and less energy to achieve the target outcomes. For example, in a hybrid workflow we add a quantum layer to the fine-tuning step. In this picture, we're seeing classical fine-tuning with a small quantum layer that allows us to get more efficiency than we can get from a classical computer alone.
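As a rough illustration of the quantum-GAN workflow described a moment ago: the structural change from a classical GAN is only where the generator's latent input comes from, classical random noise versus samples derived from a small quantum circuit. The sketch below is a conceptual, hypothetical PyTorch example on placeholder data; quantum_prior() is a classical stand-in for the quantum sampler, and none of this is IonQ's implementation.

    # Conceptual sketch: a GAN whose latent sampler is pluggable. In the hybrid
    # "quantum GAN" described above, the classical noise source is replaced by
    # samples from a parameterized quantum circuit; quantum_prior() below is a
    # classical stand-in only.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64  # e.g. a 16-dimensional latent, one per qubit

    def classical_prior(batch):
        return torch.rand(batch, latent_dim)  # plain random noise

    def quantum_prior(batch):
        # Stand-in: in the real workflow this would come from measuring a
        # parameterized 16-qubit circuit on quantum hardware.
        return torch.rand(batch, latent_dim)

    G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    real_images = torch.randn(256, data_dim)  # placeholder "micrograph" features
    prior = quantum_prior                     # swap in classical_prior to compare

    for step in range(200):
        # Discriminator step: tell real data from generated data.
        fake = G(prior(64)).detach()
        real = real_images[torch.randint(0, 256, (64,))]
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: produce samples the discriminator calls real.
        g_loss = bce(D(G(prior(64))), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()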
And so, to look at some data: training data is expensive and hard to get, and fine-tuning often requires the most expensive and rarest data, because you're working in a very specialized domain, potentially without a lot of training data available. We used a real model, a full token-based prediction model on a known benchmark, to demonstrate what you see here, which is promising signs of efficient scaling, again at just 16 qubits. And if you consider our roadmap, where we're headed for 256-qubit demonstrations next year and 10,000 qubits soon after, you can develop an intuition for why what's on the next slide should be so exciting.

In this picture, we're graphing power consumption on the left against two things: on the bottom we have the number of qubits, and then we see, in orange, the power consumption growing as we add qubits to the system, and, in blue, a classical computer simulating the same thing. Because every qubit doubles processing power, classical simulations of these same methods rapidly head toward effectively infinite energy consumption, in the range of just forty-odd qubits, if they try to keep pace with quantum computing. To put that in perspective, our most powerful computer available online right now on the cloud, called Forte Enterprise, has 36 qubits. Next year we'll be launching a 64-qubit system to general availability on the cloud for anyone to try, and by that point we're already in the regime where the power savings with every qubit are exponential. I didn't even include the 64-qubit curve on this graph, because it's just radically off the charts.

So the message I want to leave everyone with as we wrap up is that quantum is happening right now. Quantum and AI are in this beautiful period of greenfield research where there's a ton of low-hanging fruit. In classical computing, people have to do ever more complex research to push the field forward; in quantum computing, we are still in the very early stages, so there's a ton of opportunity for IP generation, for research, and for really groundbreaking discoveries. And I'll just note, for those of you who might be interested or who have researchers, that we offer a research credit program: universities can get credits for their students and labs to do research on real quantum computers. Please reach out if you're interested. Thank you.

Great, thank you, Matt. So, I will ask people for questions; just put them in the Q&A and we can go from there. It was that clear, huh? No questions. Okay, I'll start off the questioning then. First question: given the early-stage development of quantum and the later-stage development of AI (AI has been here longer than quantum has), would AI substantially help quantum computing achieve its goals in research and development?

Yeah, I can take a first stab at that. We actually use machine learning to develop our quantum computers, in many different facets. For example, we trap ions, and to do high-quality operations on those ions, we need them to be in the right place and not impacted by stray electric or magnetic fields that might be in the environment of the ion trap. For those unfamiliar, you can think of an ion trap
as actually quite a mature technology; it's how atomic clocks work. You trap a positively charged particle, in our case, in a rapidly oscillating electromagnetic field. But it turns out that for doing high-quality quantum gates, we need all of our ions well aligned with each other, and there are sometimes stray fields on the trap that distort the ions, pushing them in a direction that would make for a more difficult gate. So we use, for example, computer vision to identify how well focused and how well placed all our ions are, and then we can apply a feedback loop of compensating fields to address what the computer is seeing. Basically, we have a model of our desired state and use really straightforward machine learning to optimize the performance of the computer, and we do that in a lot of different areas. Broadly speaking, it's like any large-scale or complex experiment, whether in physics or chemistry or anything else: there are tons and tons of parameters, and in the old days grad students would be turning knobs and tweaking variables, trying to get a high-performance system of any kind. Now much of that can be replaced with machine learning.

Great, thank you. I see there are questions. Let's see, what's the first one? Okay: Surin touched on this a bit, but could any of the speakers please further elaborate on how to protect AI-related IP generated by university researchers? What is patentable, and how do we protect and license what isn't patentable? That's cheating. That was my question, by the way, whoever the anonymous attendee is. Go ahead.

Maybe I could speak a little more to this. Going back to how I approach working with any faculty member, and this idea that there are inputs, there's an AI model, and there are outputs: potentially in all of those buckets you have something protectable, whether with a patent or not. Sometimes patent protection is appropriate, particularly in the context of drug discovery, for outputs: if you propose and come up with a solution, a new compound, or a derivative of a compound, a composition-of-matter patent could be had there. AI methodology and methodology-type items are certainly patentable too. Things that are generally not patentable are data sets by themselves. One way you can still bring value to those is essentially to keep them as trade-secret-type assets that you then license out to potential partners; as I mentioned, we're seeing that in a few instances where we've got really valuable data that you can't really patent and aren't going to copyright, but the value is in the data and in its being usable for things like training AI models, and there are conversations around that. There's a wide spectrum of ways you can approach a given innovation with a faculty member, and what's protectable really comes down to a case-by-case basis, in my experience.

Great, anybody else? Okay, let's move on to the next question: how do you see the issues of data and communication security evolving and being addressed with both AI and quantum? So this is about data, communication, and security.
How would you use both? I can maybe try to take the quantum side. Something I didn't mention is that we are getting closer and closer to the point where quantum computing can actually break real RSA encryption. Proofs of concept have been done on very small bit lengths, but think about the 2,048-bit RSA typically used today. A good rule of thumb is that we want about twice the number of bits in the encryption key in logical qubits, so around 4,096 logical qubits to break RSA-2048. Per our roadmap, that's in the next couple of years, if we achieve our scientific goals. So all organizations really need to be thinking about what we in the industry are calling Q-Day, the day RSA is simply no longer viable. Among your CIOs and CISOs, folks should be thinking about moving to post-quantum encryption now, since that's a major migration of internal networks and so forth. And because we see this coming, we've been acquiring quantum networking companies doing quantum key distribution, to offer an alternative. I think these are complementary technologies: post-quantum encryption on the one hand, and on the other, quantum networks that are provably unbreakable. So quantum offers both the poison and the antidote: the breaking of public-key encryption, the form of encryption we've relied on for the last 50 years, and a more powerful networking solution.

Great. By the way, this question was from Mahdi Nikdust; I hope I'm pronouncing your name right, Mahdi. Thank you for your question. Anybody else have anything before we move on?

I can add something on the data side. One of the reasons that AI took so long to be used in enterprises is the concern around data security. Most enterprises do not want their data exposed, especially financial institutions; as you can imagine, they have federal protections they have to meet. That's why it was a slow start in enterprises; it's super important. Companies like OpenAI, which were pretty loosey-goosey with data security initially, have now tightened up and claim that any data you send them remains yours and will not be used, but how securely they guard it is anyone's guess. It is definitely a very important question to ask early on when you're considering which AI technologies to use.

Great, thank you. The next question is: where do open-source models fit into the AI ecosystem versus the proprietary ones?

That's a good question; I can take a first crack at that. We've definitely been working with open-source AI models, and some of them are pretty good. Increasingly, even though OpenAI touts the size of its LLMs, there are lots and lots of applications you can handle with just SLMs and MLMs, that is, small and medium language models. I think we're going to see a split in the use of AI based on the application: use the right type of model for the job, and many of those are open source and can run on much lower hardware requirements. So, speaking to what Matt was talking about, I think that's really going to be important for energy use and things of that sort.
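As a side note on the encryption point a few answers back: the rule of thumb Matt mentions, roughly twice the key length in logical qubits, is easy to tabulate. Treat these as rough planning numbers only; real resource estimates depend heavily on hardware error rates and the error-correcting code used.

    # Rough planning numbers only, using the ~2x-key-length rule of thumb
    # mentioned above; not a real quantum resource estimate.
    for key_bits in (1024, 2048, 3072, 4096):
        print(f"RSA-{key_bits}: ~{2 * key_bits} logical qubits")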
I see you're typing an answer? I was just trying to see whether we're going to be able to get to all of these. No, I'm all good; there's just one more question. Okay, I know we don't have that much time. Well, we have seven minutes. So, this one is from Arundeep; thank you for the question, Arundeep. What are the regulatory issues related to the use of AI in the design, conduct, or analysis of clinical trials? I guess Shanti could say something about that, or Surin.

It's a tough question, because the issue is sort of new. It is a tough question, and I definitely know several startups, for example, that are working on using AI for patient recruitment, optimizing it to meet a particular drug's requirements, and yes, it can be a slippery slope; they have to be extremely careful. It is still early on, I think, but that's a good question and something they have to keep in mind.

Okay, next question. Anybody else? You good? The next question is from Hannah Kerbone; thank you for the question. Is IP law, in the US or elsewhere, keeping up with developments in AI, or quantum for that matter? Any lawyers in the audience? Mark, are you there? Okay, we'll keep this for later; I'm going to get our attorney, Mark, to answer this question in a few minutes. Okay. Nicole Branofsky, I hope I pronounced your name right: how do technology transfer offices evaluate faculty proposals that superficially add AI or quantum elements, often just using LLMs? Are there any rules of thumb for spotting real potential? I think that could go to maybe Matt. Matt, you've been evaluating a lot of proposals coming your way, or working with faculty; that's how it started, right?

Yeah, well, certainly we do fund research, and it's a great question. I think, ultimately, we can't do it alone; we bring in experts in the area, probably similar to how grants work everywhere, and evaluate the proposal to see what the level of novelty really is. So in that sense, I agree that there are a lot of superficial elements that don't really make sense. I don't think I have a rule of thumb, though, other than, at least for us, having a round robin of trusted experts who can do a quick evaluation. I find most folks can skim a proposal in a minute or two and have a sense of whether there's potentially something there, or whether it's vacuous. I'd be interested to hear from Surin about how this actually works in the university context.

Sorry about that. Yeah, I don't have a general rule of thumb either; I approach every technology I come across on a case-by-case basis. I would say one thing: if AI or quantum elements are superficially tacked onto a project and you can't evaluate it yourself, one potential thought partner you could turn to, if you're working with an IP attorney to help draft a patent application, is that attorney: does the AI element actually lend some sort of patentability hook to the invention?
Is there some value in actually trying to protect it, if you're pursuing patent protection? I do encounter a fair number of these, and oftentimes, after evaluating them, we opt not to proceed with protection if there's nothing particularly new there. But, sorry, I can't give a better answer other than that I don't have any rules of thumb.

Okay, coming back to the earlier question, whether IP law in the US or elsewhere is keeping up with developments in AI, or quantum for that matter: I'm going to have Mark, our patent colleague in our office, answer that. Thanks, Mark. While he gets on, let me continue with the other one. This is another good question for all licensing professionals: any thoughts on how to structure the financial consideration in license agreements for data sets, for example for AI training? Running royalties on licensed products are unlikely to cut it; what's the alternative? I think this is for Surin; as a tech transfer office, you must be dealing with licensing these technologies, so maybe you have thoughts on what the deals look like?

I'll just say briefly that some of the things I've witnessed recently, in some of the proposals we've put out, are essentially annual licensing fees to use the data, a one-time payment made annually. If you're licensing it in perpetuity, it will be some multiple of what you might expect for an annual payment. What we've proposed so far has been pretty simple in that regard.

Okay. I know we only have two minutes left, so if anybody has a last-minute question, please go ahead, and I'll still go and find Mark if I can. But here's an interesting response in the meantime, from Mir Massoud Elias: "I can't speak to all of IP law, especially patent law, but on the IP transactional and tech transfer front, where I practice, we are developing provisions as we go along, while at the same time trying to come up with best practices based on practice and experience. This is one of those areas that will evolve organically with the technology, aside from the usual safeguards that we put in all licensing, sponsored research, clinical trials, and so on. Of course, there are also top-down laws being enacted in the EU, and state and federal guidance in the US." I notice this is in the chat, so you can all read it; if you have any comments, please go ahead, you have a minute. I do have a burning question of my own, and I don't know how many lawyers are on the line, but: how should patent attorneys approach claim drafting in quantum-related and AI-related applications? This is for anyone in the audience. Anyone? No? All right; Hannah Kerbone sent that one to everyone. Okay, thank you all for joining the conversation. I will get the answers to the IP questions and email them to…