Speaker 1 0:00
Good afternoon and welcome, everyone, to today's webinar, Leveraging Performance Metrics: A Good Idea, presented by AUTM. My name is Samantha Spiegel, AUTM's Professional Development Manager, and I will be your staff host for today. All lines have been muted to ensure high-quality audio, and today's session is being recorded. If you have a question for our panelists, we encourage you to use the Q&A feature rather than the chat feature. If you have a technical question or a comment, that is the appropriate place to use the chat feature. Before we begin, I do want to quickly recognize our AUTM 2021 online professional development sponsor. We appreciate your ongoing support. And now I have the pleasure of turning it over to today's presenters, Laura, Brian and Alan, who will introduce themselves and begin their discussion. Welcome all. Laura, I'll let you take it away.

Speaker 2 0:53
Thank you. All right, let's get started. Well, welcome everyone, and thank you for your time. Today we're going to be talking about performance metrics, and I have the pleasure of having a couple of experts with me who will talk about their experiences doing it, and we'll try to give a little bit of background as well. As Samantha mentioned, my name is Laura Schoppe. I am the founder and president of Fuentek, which is a virtual technology commercialization consultancy, which means we have been operating like this since 2001. So to the rest of you, welcome to my world of not being able to see everybody on a daily basis and have face-to-face meetings regularly. We are going to miss AUTM being in person this year, but hopefully these webinars will help in keeping everybody connected. I have an aerospace engineering background, but have been doing commercialization now for well over 20 years, and we also do benchmarking and analysis, so that's where the idea for this webinar came to fruition. But I was lucky enough to be joined by a couple of experts in this field, and so I will allow them to introduce themselves. Alan, take it away.

Speaker 3 2:13
Hey everybody, Alan Bentley here. Hope you can hear me okay. I'm the head of technology transfer at Vanderbilt University. I've been at Vanderbilt for about nine years, and I've had the benefit of working both at a private institution at Vanderbilt, at a research hospital at the Cleveland Clinic, and then, before that, started my career at a public institution at the University of Virginia.

Speaker 4 2:32
Thank you. And Brian? So I'm Brian, Brian Wright, Director for Commercialization at Auburn. I've been here a little over 18 years. Also did my undergraduate here, background in engineering, but I oversee the whole portfolio, and we have licensing associates specialized in certain areas.

Speaker 2 2:52
All right, so today's goals are really to help make sure that we are understanding the benefits, the why, and what you do with metrics: not just the analysis portion of it, but what do you do with that data, and how do you best present it in order to make sure that you are influencing people the way that you hopefully want to. So to start digging into that, why would you look at metrics? Well, one of the first reasons, hopefully, is because you want to proactively manage your organization, and that starts with your performance capabilities on down.
Are you doing things the right way that are leading to the most efficient and effective results? But it's also potentially managing your folks and the specifics of what they are doing, as well as departments and colleges, so not just looking at the institution as a whole, but digging a little deeper, which is data that only you would have, and that's not something that you would find out publicly. The other aspect is making sure that we are informing and educating all of our stakeholders, and they are many and varied. So you've got your administration and your faculty, your legislature if you are a state school, but sometimes even in the private sector, depending on what's going on in your government. In fact, I had that circumstance here in North Carolina, where the governor wanted to take a look at all of the institutions, and some were public and some were private. And then also educating the alumni and the public, because your alumni are an important part of your university in the giving, and they want to make sure that your university is getting good press, usually, and getting recognized for the things that they are doing. The usual reason that we see people wanting to look at benchmarks and performance metrics is because they're responding to something, so it's reactive rather than being proactive. And that's because they get some sort of administration change. The president of the university comes in and says, we're going to have a 2030 vision and double all of our research and startups by then, and you're left to react to that. Or you get some budget changes, which are definitely happening now. And then there's making informed decisions: if you do have to make adjustments, what's the best way to do it? Brian, do you have anything that you would want to add to this list, or pinpoint things that are the reason you're doing a metrics analysis?

Speaker 4 5:19
Well, I mean, the only one I would add is curiosity. And that's really how we started going down this path. We weren't really sure what we were looking for. And once you actually start looking at numbers, then you may actually realize what you need to be looking for and then work on it.

Speaker 2 5:35
Yeah. How about you, Alan, what are you guys specifically looking for?

Speaker 3 5:40
Curiosity is a good one. I think that motivating individuals, motivating staff, motivating the office leadership, motivating institutional leadership, certainly can be useful. And motivation has positive and negative drivers. You know, by assessing staff performance, that's a motivation in itself, but also comparing them to other peers elsewhere, to set expectations, to set metrics, things of that nature. And one thing that we've done that I'm not entirely sure I would advise others to do, but on occasion we have placed every one of our licensing people's names on a slide next to all of their metrics, how much money they've made for the institution, how many transactions they've closed, things of that nature, and put them up for all to see. And so if you're a little behind the curve on those things, that can be a motivating factor. But motivation is an important one. Also persuasion; that may fall in the information and education category, but if you can persuade leadership on a new program or on committing additional resources, things of that nature, and you have benchmark data to support that, that's usually quite valuable.

Speaker 2 6:47
Great, thank you. All right. So what are the peer institutions?
If you are going to benchmark and compare yourself to others, how do you determine who those players are? When we do it, we are a third party, so we're looking at you from the outside in, and we look at making sure that you're comparing yourself to the same kind of institution. So if you are a university, obviously we're looking at universities, but we may also want to narrow it down to public versus private, depending on what you're specifically trying to determine and what you're trying to analyze. But typically we don't section that out when we're looking at the larger picture. We do separate out medical school versus non-medical school, so that you are apples to apples: if you do have a medical school, you're being compared with those that also have med schools, and if you don't, you're not being compared to them. And I think we'll see some differences and why that matters, at least when we're looking at it. There are also some anomalies and outliers that we will sometimes remove. Some universities that are reporting into the AUTM data may have very, very low numbers or exceedingly high numbers that are just not in sync with the way that your operation might work, and so depending on who the school is and what their peer group might look like, we might have to remove some anomalies or outliers. The other aspect is we ignore NCAA divisions. We'll get into what that might imply in some cases, but they are very different from what's going on in research. And so even though your administration might care about that, when you're looking at tech transfer performance, it is not the same thing. It doesn't mean you don't have to compare to them sometimes, but that is not the criteria when looking for true peers. So let's hear from our experts on how they define their peers. For Vanderbilt, Alan gave me his list, and on it I noticed that half of them were med schools and half weren't. So Alan, tell us how you came up with this list.

Speaker 3 9:00
Well, to some extent it's not my list. First off, Laura, your point on ignoring NCAA divisions is, I think, a good one. I try to ignore that at Vanderbilt, even if I'm talking about sports, unfortunately. But with respect to actual tech transfer metrics and such, this list is largely a list of peers set by the institution. So this is the list that I would use if I'm trying to inform, educate or persuade leadership on activities, because they've selected their peer group. The most closely related peers that we consider are Emory, Wash U and Northwestern. I think they're similar to us in many ways. Unfortunately, they're all really good, productive institutions, so they're a hard peer group to compare against. The larger research institutions like Hopkins and Duke and Penn, certainly they're aspirational, so we compare to them so that we have other types of programs to aspire to, at least numerically. I had actually put UNC on here myself, simply because I wanted a public institution on here. All the other ones were private, and I wanted to see a public institution with a medical school. For curiosity purposes, when we do internal metric assessment for our office, we can expand upon this list, but certainly this is the list for leadership, and then we can go broader if we're looking at specific issues. We also want to make sure that we're not picking a particular school whose program is so wildly different from ours that it would make no sense to compare to what we do.
I don't think any on this list are really in that category. They do have some different functions. Hopkins has a number of new programs they've launched in the last few years. Penn has a very different structure than we have. But to a large extent, this is still a valuable list for us.

Speaker 2 10:47
And I didn't mention it before: Vanderbilt does have a med school, so we would have compared them primarily to med schools. Now, in Brian's case at Auburn, they don't have a med school, and so we would have chosen non-med schools, which, for the most part, Brian has; there's only one on his list that does have a med school. Brian, do you want to explain your selection of your peers, please?

Speaker 4 11:10
Yes. The first cut was looking at public land grants, and the reason for that is similar to what Alan just said: we want to look at schools that have similar programs, engineering, vet med, agriculture and things like that. And then also, as it says down there at the bottom, we eliminated med schools because we don't have one. We also chose R1, because we just recently became an R1 in the last classification, and because of that, we consider this grouping moderately aspirational. If we did all public land grants with no med school, we'd be rolling in several R2s, and we would actually look better in that analysis. But for our internal use, we choose to go moderately aspirational because it teaches us more, and it is also still good enough that we can show it to our superiors. I probably would not put it up on our website, though. If we were going to put something up on the website, we would probably roll in all public land grants, because the numbers would look a little better, and there would be more scrutiny on something you put up on your website. And so I think what we've got there is a set of defensible criteria, and I think that's important, and it's what we've all touched on. You don't want to cherry-pick. And I think the problem with cherry-picking is you may pick a whole bunch of poor performers to make yourself look better, and you may get exposed on that, or you may subconsciously pick schools that are all aspirational and not realize what you're getting yourself into. So having good, clear, defensible criteria to get a true apples-to-apples comparison is really important. If you could go back there, Laura... all right, let me just take over. Okay. Internal reviews do welcome a harsher analysis, whether it's going as far as putting personal metrics up for everyone to see, or whether it's just really digging in with your staff and figuring out what needs to be improved. And then you may want to reassess what you show your superiors, and then definitely reassess what you show out to the public. You don't want that to be a particularly harsh analysis, unless you don't like your job. There are also other criteria you can consider, and we've actually poked at that a bit. Here are some examples. We've removed schools that are just very different in terms of the amount of research they do; 2x is an example: if they're less than half or more than double, we cut them out. That actually did change the numbers a little bit, and we looked a little bit better. We also tried an analysis where we limited it to college towns, because we are in a college town, so population centers of under 100,000 was how we cut it. And there was surprisingly little change.
We really expected a shift in startup numbers, because NC State, sitting over in Raleigh, should have a lot of advantages, and we threw out four or five of the schools with this cut, but it was hardly even a blip. And you may have your own criteria. You know, Alan talked about some of the ones he uses. If you're in an urban center, you have a med school, you have certain areas that you work in, certain geography considerations, you're going to have a different set at Auburn, Vanderbilt, University of Colorado and Southwest Saskatchewan State. So find out what works for you, and then work with that. And then I'll tack on my thoughts about using an NCAA conference as your grouping, or some other group that is thrust upon you. You may not have a choice. Laura is an outside expert, and she can lecture people all day long about why they shouldn't do that, and they'll listen to her because they're paying her to be an outside expert. But you are answering to someone that is saying, we need the SEC numbers, so you're probably going to have to provide them. So my recommendation there, if you're in that situation, which we are: provide them both. Provide them the SEC numbers and provide them numbers you think are more meaningful and more informational, and explain why. And they can take or leave the additional information.

Speaker 2 15:19
Yep, that's exactly what we do to help them understand the difference. So, good advice. All right, so what data do you use? Where do you pull this information from in order to do the analysis? Well, the AUTM survey data is kind of the gold standard. It has lots of institutions in it, it's got many years of data, so you can do trend analysis as well, and it's accessible to many people. The one thing about the AUTM data is that, on a yearly basis, they will publish their summary, and that's usually when you get the big reaction from your administration or legislature going, how come you're not at the top of this list? So that's the other reason you probably want to use that data, in order to be able to compare and look at how you rank. If you participate in it, you get free access to it, so that's an endorsement for you all to participate, because it helps everybody if more data is in the database. And there is a tremendous amount of internal data that you have that you are not reporting up to AUTM. The institutional-level data is what you report to AUTM, but you also have access to the specifics, like Alan mentioned, of the people and how they're performing and what they're doing, and of departments and colleges. You can get a lot of information as to their participation level: are they submitting all the invention disclosures they should be, and maybe where have they got some opportunities for additional funding?

Speaker 3 16:50
Laura, just to add one thing to that real quick. Cutting things by department or by school can be very valuable, in particular if there's an underperforming school, to get them to really support innovation and engage their faculty. I think that's an important piece. Also, you know, we're all friends with each other, and I can choose three or four other institutions and say, listen, I'm looking for data that's not in the AUTM data. Do you have data on X, Y or Z, on how your engineering school is performing,
for example, at your university? We often share those things with each other, so rely on colleagues. But the AUTM data you mentioned is certainly robust, and it gives you a chance to play with your peer groups. Be strategic about it. Look for those outliers. And it also gives you the chance to look at AUTM data across all institutions, or across all medical centers (like you've done some of that analysis in this presentation, which we'll see later), or all land grant institutions, or all Canadian institutions, whatever you want it to be. Those are valuable ways of slicing the data as well.

Speaker 2 17:52
Yep, the more data the better. We are sponges; we love that. Now, having said that data is wonderful, you should also recognize that not all the data that you're going to look at, from the AUTM data set or even from your own internal data set, is within your influence range. It's a pipeline: getting the deals out the back end really depends on the investment that's going into the front end. It is sequential, and it does matter how that all starts. So you start with the investment, the amount and the diversity, diversity being where the funding is coming from and how much you have. And you, as a tech transfer operation, have almost no control over research expenditures, or RE; you'll see the RE term used through the rest of this presentation. So that's not something that you're going to be able to change. When your new president comes in and says, our goal for 10 years from now is that we're going to double our RE, that's not necessarily something the tech transfer operation is going to be able to impact, and presumably they're not going to expect you to. Now, that leads to how much innovation comes out of it, because about three to four million dollars of RE leading to one invention disclosure is kind of the ballpark. So that's the quantity. But the quality of those innovations is dependent on your faculty, and it also could be improved by you educating your inventors so that they are, in fact, improving the quality of the invention disclosures they submit, which will make your evaluation process easier and faster, and by making sure that you're getting all of the invention disclosures you should be getting. As Alan pointed out, maybe you can identify a department or college that's not submitting as many as you think they would. Now, the last two, the strategy and the deal flow, are things that the tech transfer operation has influence over, and so that's where you really want to focus in on what you can improve, where your strengths are, where some of your weaknesses are, and where you can make some adjustments. It's not apples to apples otherwise, and so normalizing the data and getting ratios are the best way of looking at that data. Looking at Auburn versus Vanderbilt one to one doesn't really provide you any meaningful information, but normalizing the data (one of the factors we typically use is per 10 million in RE) allows a university that's got 200 million versus one that's got 20 million to both be compared in the same class. Invention disclosures, FTEs, things like that are also used for ratios and analysis. And then we do averages across the peer groups, again rather than doing one school versus another, in how we look at the data. Trend analysis is something else that is very helpful in analyzing the data from a different perspective.
So a lot of times you might start with looking at the most recent data, but if you look over time, it can help you see how things have been improving, or whether you've had some anomalies, and help explain that to your stakeholders. Because if you have a spike in one year because you had some sort of monetization, they may expect you to do that every single year, and you need to help them understand why it occurred that year and where your norm has actually been. You're going to see examples of that with our panelists. And also that time period: we'll ask both of them to comment on what's the right time period, or how that can vary depending on what you're looking for. And then you want to look at a variety of different metrics, not just one metric. You want to look at a combination of them, because a single line is not going to tell you the full story, and so you need to look at a variety of different data points in order to really understand what's going on, what that might indicate and where your opportunities might lie. So this chart just lays out the full AUTM average; that's what you see when AUTM gives their 2018 summary, they'll give those full numbers. But we also separate out the medical schools from the non-medical schools as a basis for comparison, depending on what kind of school you have. And so we've got Vanderbilt, which does have a medical school, and Auburn, which does not, and this gives an opportunity for them to compare. I'm not going to go into the details of this chart. If we have time, Alan and Brian are going to talk about it later, but I just wanted to set the stage as to the level of RE we've got with these organizations, so that as you look at their data, it helps put things into perspective. So I'm going to hand it over to Brian to walk us through how they do their metrics.

Speaker 4 23:02
All right, thank you. Trying to get control here. There we go. So when we are working with our data, and this is for peer comparisons, you've got to start with what categories you want to consider, and for us, we prefer ones that are the most easily defined, that are not going to have much variation across universities. Invention disclosures is pretty straightforward. Patent applications, we've seen some variations there, because sometimes people lump in provisionals and sometimes they don't, and you see these wide variances, and it's hard to get good information. So you'll see on the next slide what the categories are, but keep that in mind too. And we do the normalization. We also do a three-year rolling average to help minimize the noise and the aberrations and the spikes, like Laura just talked about; we found that to be very useful for presenting, particularly to superiors and to the public. We like to normalize the data again, once we've run it, with respect to the maximum value in each category. So whatever school has the highest value, you make that one, and so everyone is between zero and one. The advantage there is you can show all of the categories on one chart, so when you're with a superior and they have five minutes, or they have 30 seconds, you can relay the information very quickly. We don't always do that for our internal analysis, when we have more time, but it's very useful for a quick summary. We also like to focus on median values instead of mean values.
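As a rough sketch of the arithmetic being described here, normalizing per 10 million dollars of research expenditures, averaging over three years, and then scaling each category to the peer-group maximum so everything lands between zero and one, a minimal Python example might look like the following. All of the school names and figures are hypothetical, not actual AUTM survey data.

```python
# Minimal sketch of the normalization steps described above.
# All figures are hypothetical, not actual AUTM survey data.

# Three years of raw values per school: (research expenditures in $, disclosures)
raw = {
    "School A": [(250_000_000, 110), (260_000_000, 118), (270_000_000, 125)],
    "School B": [(80_000_000, 45), (85_000_000, 40), (90_000_000, 52)],
    "School C": [(150_000_000, 60), (140_000_000, 75), (155_000_000, 70)],
}

def per_10m_re(re_dollars, count):
    """Normalize a raw count to 'per $10M of research expenditures'."""
    return count / (re_dollars / 10_000_000)

# Step 1: ratio per $10M RE for each year.
# Step 2: average the three annual ratios (with only three years of data,
# the "rolling" average is just the mean; a longer history would use a window).
smoothed = {}
for school, years in raw.items():
    ratios = [per_10m_re(re, n) for re, n in years]
    smoothed[school] = sum(ratios) / len(ratios)

# Step 3: scale to the peer-group maximum so every school lands between 0 and 1,
# which lets several categories share one chart.
top = max(smoothed.values())
scaled = {school: value / top for school, value in smoothed.items()}

for school in raw:
    print(f"{school}: {smoothed[school]:.2f} disclosures per $10M RE, "
          f"{scaled[school]:.2f} of peer max")
```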
I believe Laura, in a previous talk, had talked about using averages, which can be tricky, because you may have these outliers that really throw the mean off and make everyone else look bad. You can address that by identifying outliers and removing them, but once again, she's an outside expert, and she can be relied on to identify true outliers. Internally, if you tell your superiors you removed an outlier, they'll tell you you removed a high performer: why did you remove the highest performer in each group? So it's much harder to justify removing outliers. If you can go with median values, the outliers don't have as much of an impact on the numbers, and it's easier to justify: here are our criteria, here's the category, here's the data, and you've not done much manipulation of it. So here's the data we ran with these 16 schools plus Auburn, in these five categories that run the length of the lifespan of technologies, from disclosure stage to revenue stage. For each category, the brown bar is the maximum value, set at 100%, normalized; the orange is the median of the group; the yellow is the minimum; and Auburn is the blue. As you can see, Auburn is pretty median, pretty well across the board. So that's actually, in many ways, useful information in and of itself, and I'll go into it some more. Our weakest category is the middle one, licenses and options, which I'll talk about some more. We have actually already thrown out one outlier in this category as presented, and I'll go ahead and call them out: it's the University of New Hampshire. You may have heard about their fine arts and humanities licensing program. Great program, wish we could do it, but we can't. I think there was one year they had 200 license agreements and one issued patent. Well, that's not the world we live in, and their numbers were, I think, four standard deviations above the mean, and all these bars would have gotten squashed down to little tiny things if we left them in. So we had to toss them out. So like I said, we're generally median, which, given the fact that this group is moderately aspirational, is really not bad. We were pretty well satisfied with that. And again, we look even better if we look at all land grants with no med schools. The base message there is, there's no need to panic on any single thing, but there is room for improvement. So we can't just sit around and be satisfied being median; we'd like to be better than median. And also, as with so many things at universities, if you're not improving, you are falling behind. So this helped identify the areas where we may actually need the most work and the most focus, just to keep pace, if not to actually move up in the group. So I talked about the outlier in licenses and options. We didn't like that data; we wanted to study it further, and we had to do some dancing when we talked about it. We also identified a group of the schools, very common among land grants, that have big plant breeding programs, which leads to crop varieties, which leads to lots of non-exclusive licensing, and sometimes quite a bit of revenue, and good for them. But Auburn kind of scrapped their breeding program 10-plus years ago, and we don't have that, so it's not the world we live in. So we tried versions where we threw those three or four schools out, but they may also be good-performing schools in other ways, and it was very awkward.
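To put numbers on the median-versus-mean point above, here is a small illustration with made-up counts (these are not any school's actual figures). The outlier screen shown is a simple leave-one-out version of the "modified standard deviation" idea mentioned earlier, not the exact method either office uses.

```python
from statistics import mean, median, stdev

# Hypothetical licenses-and-options counts for a peer group; one school
# sits far above everyone else, like the fine-arts licensing example.
licenses = {
    "School A": 18, "School B": 22, "School C": 15, "School D": 25,
    "School E": 20, "Outlier U": 200,
}
values = list(licenses.values())

print(f"mean   = {mean(values):.1f}")    # dragged way up by the single outlier
print(f"median = {median(values):.1f}")  # barely moves, easier to defend

# Flag any school whose value is far from the mean of the *other* schools,
# a simple leave-one-out variant of an outlier screen.
def is_outlier(name, data, k=3):
    others = [v for s, v in data.items() if s != name]
    return abs(data[name] - mean(others)) > k * stdev(others)

print("flagged as outliers:", [s for s in licenses if is_outlier(s, licenses)])
```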
And then, actually, literally last week, the light came on that, since we started doing this, AUTM has added the separation of exclusive and non-exclusive licenses. So we said, well, why don't we just switch to exclusive licenses and options, because that's mostly the world we live in; we do do some non-exclusive, but not much. And when we switch to that, we have this chart with defensible criteria, defensible data, nothing thrown out, and Auburn is near or above median in every category. So this will be the way we present it to the superiors going forward. Now I'll pass it off to Alan.

Speaker 3 28:55
Thank you. So a couple of quick comments, going back one particular screen. Actually, I'm going to steal some of your slides here, Brian, this slide in particular; I'm getting to it here. I'm going to separate how we use metrics into two categories, the first category being, what do we do to help run the office? How do we assess our performance compared to peers to drive improvement? Our office has a list of 30-something, 32 or 33, different benchmarks that we generate using AUTM data, purely AUTM data, purely against our peers in most cases. And so I have a series of 35 different graphs of our performance versus our peers in different categories for three years; 2016, '17 and '18 was the last version that we did. If anybody has a true interest in what those categories are, I'd be happy to make a list of them and share it with the group. They are very similar to some of the ones on here; some of the ones on Laura's list are ones that we run, but we do a little bit of a deeper dive, just to try to find information. When we assess these, we do look for outliers. We do look to see which are the best-performing institutions and why. Our purpose is to try to identify where weaknesses are, and to identify whether we can launch programs or reallocate resources to improve in certain categories. And we also ask ourselves a simple question: is it necessary to improve in a certain category? So one category everyone looks at is disclosures, or disclosures per 10 million in research expenditures. We tend to be a little on the lower side of average. You know, not bad. But do we care? Are we the part of the institution that motivates the generation of new disclosures? Is that a good thing, or is it a bad thing? If you generate more, are you getting high-quality ones? I can tell you whether or not I can generate more based on different programs that we can launch, but I don't know if we're going to get more quality disclosures. On average, if you get more, perhaps you get more quality ones. But you also have to think about how that's going to affect the workload of your employees. Do you work more intelligently and deeply on fewer technologies, or do you expand that? So those are some things that we think about. But to a large extent, we look at this the exact same way Brian does: three-year rolling averages, we look at means and medians, we even do modified standard deviations on some of our data to try to remove outliers. So there are analytics techniques that you can employ that are fairly straightforward, that the engineers in your office can teach the biotech people in your office how to do. That's just a little jab at the biotech people, sorry. Trying to advance this to our section here.
This is a slide that I grabbed from one of our leadership presentations. It has to do with total revenue. Our organization has a series of five or six goals that are measured by our leadership. Total revenue is one of them, and we employ a threshold, target and reach mechanism. The target is the target; that's what we're trying to generate every year, whether it's 5 million or 10 million or 15 million or whatever. The target is set by the past three-year running average. So if I had two good years previously, my target number is going to go up for the next year, which makes it harder to achieve, but that ratchets up the expectations of the institution for our office year over year. It creates a very difficult thing, because eventually we're going to fail to reach that target. Having talked about the target, the threshold is the target minus a certain percentage, and the reach is the target plus twice that same percentage, whether it's target minus 5% or target minus 10, and target plus 10 or target plus 20 for the reach; the point is that we have a mechanism for setting those things. And I can come in below my threshold in a particular category once; if I do it two years in a row, then I may be speaking to you from a different institution the following year. So these things are taken very seriously by our leadership, but I also try to give them some context for what they're looking at. We look at revenue from running royalties compared to total revenues: what percentage of our revenues is from running royalties versus other types of fees, versus upfront fees, versus liquidation of equity, that type of thing; royalties as a percent of revenue; patent cost reimbursements. These are things that we track, and I reproduced those charts on the right-hand side. Looking at revenue from running royalties: is it increasing with time? Well, the graph that I have shows that over the last three years it is increasing with time. That's a good thing. We don't have any home run generators, so we have to hustle for income every single year. If we had a steady stream of a technology paying 10 million in revenue every year, reliably, year in, year out, we'd be in a bit of a different situation and we'd probably be tracking different metrics. We don't have that, so our focus on short-term revenue generation is an important piece of our metrics, and we have to keep track of that. We also look, however, at when we will get that home run. Are we setting the table? So we do trend analysis, longitudinal analysis, to ensure that we are generating, year over year, the type of transactions (and I have a graph on that next) that will lead to long-term royalty revenue in the future. So you can't just be about generating a million dollars this year; you have to worry about the $10 million deal 10 years from now. That's important as well, if not for me, then for the person who takes my job when I'm gone. At least we'll set them up to be successful. But is there evidence that this trend is upward? Sometimes you have to look beyond three years to determine whether a trend is upward or not; otherwise, perhaps you still have too much noise over a three-year period of time.
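A minimal sketch of the threshold, target and reach mechanism described above, with made-up revenue figures and a made-up percentage (the actual dollar amounts and percentages are set by leadership and are not shown here):

```python
# Hypothetical total-revenue history, in dollars; not Vanderbilt's actual numbers.
revenue_history = [8_000_000, 11_000_000, 12_500_000]  # last three fiscal years
pct = 0.10  # illustrative percentage; the real value is a leadership decision

target = sum(revenue_history[-3:]) / 3   # trailing three-year running average
threshold = target * (1 - pct)           # target minus the percentage
reach = target * (1 + 2 * pct)           # target plus twice that percentage

print(f"target    = ${target:,.0f}")
print(f"threshold = ${threshold:,.0f}")
print(f"reach     = ${reach:,.0f}")

# Two good years push next year's target up, which is what ratchets the
# expectation higher over time.
next_history = revenue_history[1:] + [13_000_000]
print(f"next year's target = ${sum(next_history) / 3:,.0f}")
```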
Royalties as a percent of total income: what does being below 30% mean? We're well below 30%, and that could mean several things. Well, it means that we don't have a home run. It means that the vast majority of our income is not from running royalties. But where does it come from? Why is it low? If the royalty is a very small number, then we're going to be below 30%. If we liquidated equity for $100 million, then the royalty is also going to be below 30%. So you have to understand exactly why it is below 30%. In our situation, we don't have any big moneymakers elsewhere, so it truly is below 30%, and it's something we'd like to build. It's not artificially below because we have large income in other categories; that would be great, but that's not what we're looking at. Revenues from upfront payments: well, if our royalty income is low, how can we most quickly affect new income for the following year? Through total revenue that's from upfront fees. So being able to do benchmarking, being able to do forecasting of where you're going to find that next big deal matters. Forecasting is always very difficult in our business, and people like to talk about doing forecasting, but I tell people, if you can tell me what technology you're going to disclose next year, I'll tell you how much I can make off of it in five years. As soon as you tell me what's going to come in, I can tell you how much it's going to be worth. But if you can't do that, it's difficult; the error bars on these things are very big. Patent cost reimbursement: it's interesting, because if I'm talking to my Provost about office performance, the impact that we have on the bottom line is important. Licensing fees are important. Any money that we can point to that we generated in the form of research support to advance the technology is important to the Provost. The Chief Financial Officer wants to know what he can spend, what we can spend, and patent cost reimbursement is a good thing there, because we can keep 100% of that instead of 50% of it. So things of this nature, whether it's patent cost reimbursement or how to increase the royalties as a percent of total income: you start to think about how you can affect those things, and those lead to new programs. So benchmarking and metrics lead to active thought, which leads to potentially launching new programs that can then rejigger your metrics and how you perform. This is a pretty typical chart of our transactions: the number of licensing transactions. It goes up over time. This is what we report to AUTM; you can get this from the AUTM data. This is a good thing. However, we did acknowledge that in 2013, '14 and '15 we had some outliers. We had one software program that was generating, every time we licensed it, a few thousand dollars a year. So it wasn't going to change our lives, but it generated some income, and it was a quote-unquote AUTM transaction, and so they were counted. We did 17 of them, then 32 of them, then the person left and we canceled the program, but that artificially inflated our numbers for FY13 and FY14. And I thought this was a nice way of explaining that away: you can see the nice all-blue trend, it's still upward, but there was something on top of that that was an outlier for that particular year. In 2015 we had a bunch of licenses to catalog companies for materials, and they generate, again, a few thousand dollars a year apiece. And we noticed that a bunch of them had expired.
Companies were still paying royalties to us, but the licenses had expired, so we went and actively renewed those, and that created 28 new transactions.

Speaker 3 38:45
And all 28 of those transactions were probably the work effort of three normal licensing deals. So it was an outlier, and this was a good way, we thought, of explaining that away. Also, for our metrics, in trying to identify our targets in terms of licensing transactions, I don't want to be burdened with 101 or 98 and have that massively increase our expectation, our bogey. Unfortunately, I didn't think of that until afterwards, so I was screwed by these numbers; I didn't think about treating them as outliers and displaying them this way until a later date. Royalty-bearing transactions: this goes to what Brian had said about separating royalty-bearing or exclusive licenses from non-exclusive ones. We have a category called royalty-bearing, which are the ones that we think have a reasonable expectation of generating long-term revenue for the institution. They don't have to be patent licenses; most of them are patent licenses or software licenses, but they have to be royalty-bearing, and there has to be a substantial expectation. Those are the ones that are going to pay revenue 10, 15 years from now, and we track those. The idea there (this is not cumulative, it's an annual depiction) is that we want to make sure those numbers go up, to 25. And, you know, when we started looking at these things in the early 2010s, they were five or 10 or 15. So, bringing in the right people: we took these numbers to help us properly staff the office and look for the right skill sets for the office, and we think that had a very positive impact; we began to move the merchandise in a different way. Technologies transferred: faculty or departments ask us all the time, what are the chances that you're going to license my technology? And we all think about our deals, and you can do that, you can divide the number of deals by the number of disclosures and come up with a percentage. But actually a better metric is, how many technologies have you actually transferred in any particular year? The deals-to-disclosures ratio is a little bit misleading, because you can have 10 technologies in one deal, or you can license one technology non-exclusively 15 times. Also, there's a substantial time lag in most cases between when you get a disclosure and when that deal is actually transacted. So, technologies transferred: we try to use metrics to answer questions of that nature as well.
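To make that distinction concrete, here is a small sketch with hypothetical transactions mirroring the two situations just mentioned, one deal that bundles 10 technologies and one technology licensed non-exclusively 15 times; counting deals against disclosures and counting distinct technologies transferred give different answers. All names and counts are invented for illustration.

```python
# Hypothetical transactions; not real data.
deals = [{"licensee": "BigCo", "technologies": [f"D-1{i:02d}" for i in range(10)]}]
deals += [{"licensee": f"Cataloger {i}", "technologies": ["D-200"]} for i in range(15)]
disclosures_this_year = 100

deal_count = len(deals)                                            # 16 transactions
techs_transferred = {t for d in deals for t in d["technologies"]}  # 11 distinct technologies

print(f"deals / disclosures                = {deal_count / disclosures_this_year:.0%}")
print(f"technologies transferred this year = {len(techs_transferred)}")
```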
Okay, last point on this. For our trend analysis, we were looking at things like MTAs processed, and we looked at this and realized we had about a 40-something percent increase in the number of MTAs that we've been processing over the last decade, and we have had an increase in staff in that area of exactly zero. So how do you go about a very important but non-revenue-generating function like processing material transfer agreements for an institution? No one ever tells you you're doing great at MTAs; it's very rare that you get praised for it, but you get a lot of flak if you're not doing it well. And so we looked at this and we said, man, we have a problem here. We don't have resources to manage this, but this is a lifeblood element for our faculty. Our licensing transactions may affect 50 inventors a year, but MTAs are going to touch thousands of researchers at our institution, and we've got to figure out what to do here. So this is a useful thing to say: listen, leadership, if you want us to do better here, you have to work with us on increasing the resources that we have, or whatever it takes, to manage this activity. Likewise, disclosures: I got to Vanderbilt in 2011, and since that time we have seen a tangible increase, kind of a new steady state, for disclosures. So what does that mean? When can you use data to identify weaknesses in the program? And a weakness is often being overworked; your workload can be an obvious weakness. It leads to a number of things. It leads to faculty dissatisfaction. It leads to you patenting more technologies, because you don't have time to think about whether to patent them or not, so you do the easy thing and hire an attorney to patent them and you'll think about them later, which increases your patent budget, or your patent expenditures. At the very least, it decreases your ability to get market feedback on technologies. There are a lot of reasons why workload is a problem, and if you're going to be a successful organization, you have to take that into account and work with leadership to help you address it. So we try to dig into this a little more. These are some examples of those 35 slides that I mentioned: annual invention disclosures. On this one, we look pretty comparable to institutions like Emory and Wash U and Northwestern. If you normalize by research expenditures, then we still look like Wash U and Northwestern; Emory actually jumps up here, so they seem to be a little more efficient in their activities, and Emory is a great program, so that's not terribly surprising. Disclosures per licensing professional: what does that tell us? Can we use this to say we need more employees? Can we look at our peers? And the answer to that is no. I mean, us and UNC, and maybe Northwestern (Northwestern is almost identical to us), we're kind of on the lower side; we're actually better resourced in terms of licensing activities. Now, you have to do an assessment of licensing FTEs to total FTEs. You have to confirm that, because some institutions count a licensing FTE for their AUTM data as any professional staff, so new venture people are included; some places don't include new venture people. There are a lot of things that go into that. Some places have business development personnel to generate sponsored research from industry, and that may or may not play into the numbers. So you have to check those types of things. But we did a deep dive on this to try to figure out how we can handle doubling the number of disclosures.

Speaker 4 44:34
And Alan, let me interject a little bit on the licensing professionals too. You may have different job descriptions, but you may have an office where there's a patent group and a marketing group and a triage group, and all the licensing people do is actually do the deals. That's going to be a very different look than an office like ours that's literally cradle to grave, right?

Speaker 3 44:55
That's a great point, you know. So you're soup to nuts; we're soup to nuts, or cradle to grave, or what I
sometimes call vertically integrated, because you do everything up the chain. Those institutions I refer to as horizontally integrated have these bands: you have the marketing band, you have the licensing band, you have the triage band. Yeah, you need different personnel, you need different expertise, certainly, but you also need different numbers of different personnel to make those different models work. And it would be a challenge to compare a vertically integrated licensing staff against a horizontally integrated licensing staff for a number of these metrics. That's a great point.

Speaker 2 45:31
Yes, you almost want to convince AUTM to change it from administrative staff to maybe technical or professional staff, to kind of clump all that together so you don't have to make that distinction between the horizontal and the vertical.

Speaker 3 45:45
Yeah, the last time that I tried, and I know we're on an AUTM call here, but I'm going to poke them in the eye real quick, the last time that I personally tried to get AUTM to change something in their annual survey, I was back at the University of Virginia. And I said, hey, listen, these patent cost reimbursements that we are reporting, we're doing a lot of direct billing for cash flow reasons. Our licensees are still investing in our technologies, but instead of giving us the money and then us giving the attorneys the money, we're setting it up so that they give the attorneys the money directly. So can we expressly include direct billing, because we can track that, we can get copies of the invoices. Can we expressly include that in patent cost reimbursements, just for clarity, so everyone's doing it the same way? And AUTM was great, you know, they reacted to that, and they did change the definitions for the survey: they expressly excluded direct billing.

Unknown Speaker 46:40
Run for the board again.

Speaker 3 46:43
Valerie, I think I was on the board when they did that, too. Anyway, a quick question for you, Laura. I actually didn't know Todd was on the call. There is a Q&A question that was posed about key indicators. How are we handling those? Are you trying to answer those in between talks, or are we answering them at the end? We're going to handle those at the end? Yeah, okay. I just didn't want it to go unnoticed. Thanks.

Speaker 2 47:06
Yep, Todd, we'll get to you, we promise. All right, so, and part of it is because Alan is hard to control, we've got to get through this material before we get to the Q&A. How do you present the data? We've had an overview of two institutions that look at the data differently. They create their peer lists differently, and they're doing some things similarly and other things differently. But one consistency has been that, depending on who you're talking to and what you're influencing, you may want to present different aspects of your data, and what you might show to your staff is not necessarily what you show to the public or the administration. So looking at your audience is key. Who are the stakeholders or the audience that you're trying to influence, and what is it that you are trying to accomplish? Before we do any kind of presentation or discussion, it doesn't really matter what it is, we go through what we call the AMMO: determine who your audience is that you're targeting,
what message are you trying to convey to them, what is the mechanism by which you're going to do that (in this case, what kind of graphs, or which graphs, are you going to use), and what outcome do you want as a result? So don't show data for data's sake. There should be a reason why you are doing it and something you want to get as a result: what change, what influence, or what decision-making do you want? So we're going to present a couple of examples. We are not suggesting that you should be disingenuous with the data, but there are things that you can do in how you present and what you present. One example is using Auburn's data, and Brian, I'll ask you to comment on this, on how you might use this data or interpret it depending on who your audience is.

Speaker 4 49:01
Okay. Well, the thing where I think we are most different from the rest of the group is in the provisional patent space, and that's a case where, actually, I don't think I would change the message that much across groups, because we file first and ask questions later when it comes to provisionals, because we view ourselves as a service unit. You know, everyone's familiar with the provisional coming in two hours before the public disclosure, where you have to file that same day. But you also have others that may be two weeks out or two months out. Unless we can find a really clear reason not to file, we like to file, and we like to refile if the inventor is still engaged at all, because it keeps them engaged and it keeps them happy. And if someone comes in and you won't even agree to file a provisional, they're going to feel like they're really in last place, and they may not come back to us next time. So with the staff, we use this to say, you know, this is fine, keep supporting them, but if there are really some you can cut out, especially on the refiles, go ahead. With administration, we use this to justify the fact that we are a service unit, and if you don't want us to be a service unit, if you want us to be a revenue generator only, then we can reassess. But so far, we've had that support to be a service unit and keep the inventors engaged, and that's one mechanism we use to do it.

Speaker 2 50:22
Yeah. All right, Alan, in this case we've got your peer group, and then we've got the med school average, which is what we would typically look at when we evaluate. What is the difference in what you're looking at here? How do you think that might be interpreted?

Speaker 3 50:42
Well, this is interesting, because this is a comparison that Laura, you and your group created. I haven't done this with this particular one. So there is a big difference between where we are and where our peers are. This is going to be something which we have to spend some time on in our office. We're going to look at this and say, okay, well, why is there such a big difference? Why are they at 850 and we're at 200 in terms of income per 10 million in research? What is the reason for that? And in fact, if you look at the right-hand one, which is over here, there's still a huge difference: $50,000 versus, what, $350,000 of licensing income per license. Now, is that because of a different makeup of the types of transactions we're doing? Are we doing more software licenses that don't generate a lot of milestone payments? Do we have more diagnostics and they have more pharmaceuticals, or vice versa? There are a lot of questions there.
I believe, if I had to guess, it's because Northwestern is part of our peer group, and Northwestern is bending the curve for everybody. If you look at the 30-some benchmarks that I do, Northwestern is in there. They have a home run; they generate something like $70 million in royalty revenue every year off that home run, and that breaks the curve. So I actually have to exclude Northwestern from my graphs, because if I include them, you can't see anybody else's numbers, because theirs are so big. So I think the real reason here is that our peer group includes Northwestern, and my first assessment would be to cut them out as an outlier and see how these numbers look. If it's still heavily skewed negatively toward Vanderbilt, then we've got to jump into the numbers and try to figure out why that is. Are we understaffed, or are people not as productive? Do we not have as close contacts with certain companies or certain market sectors as we should? Is the type of technology we're getting to protect and commercialize different from others? What's the makeup? Those are things that I would look at. Can you go back just one second? I just want to comment real quick on what I mentioned; what I meant to say was, I have not done this assessment for all med schools. We have done it with respect to our peers, and that's why I believe Northwestern is affecting that. Against all med schools, we look a little bit better. And so we may show the left-hand graph to our office and say, hey, look, we're behind the curve, let's all put our brainstorming hats on and figure out what we can do to have a greater impact on these types of outcomes; that may be a motivating graph. I'd much rather show the right-hand graph to our leadership, however. There we're not at the average, but we're not as far below. So that may be a little bit softer of a message, but still something which we need to focus on with leadership.

Speaker 2 53:37
So here's an example of a different peer set altogether with Vanderbilt, and it's using... Alan, do you want to comment?

Speaker 3 53:50
I was going to say, yeah, one possibility is that we just got the wrong set of peers, and if we choose a different set of peers more strategically, then we can show different results. The only thing you have to worry about there is, if you're showing that to leadership and you're choosing a peer group that they're not familiar with, then you need to be able to justify why you chose that peer group. It does you no good whatsoever to choose a peer group which makes you look good if you can't justify why those schools are in your group of peers, especially if it disagrees with what leadership thinks are their peers.

Speaker 2 54:22
Yeah, in this case it happens to be a divisional peer group, so it's an NCAA-selected peer group, and Vanderbilt actually looks pretty good against that, but it's not necessarily a realistic peer group. Sure, sure. Okay, Alan, you want to take over? You've got a few other examples of presentations.

Speaker 3 54:42
Yes, I will try to grab the screen here. Okay, so just for some background for some of the people on the call here: one of the things that I have found to be very effective in working with leadership is moving away from the specific benchmarks. And when it comes to showing things to leadership, how do you present the data?
We have begun combining our royalty revenue with our industry-sponsored research. Now, I'm not talking about all industry-sponsored research; I'm talking specifically about that which we have tangibly contributed to, where we do a license agreement and a research agreement, or we do a combination license and research agreement, either contemporaneously signed or combined into one document. Or, through our contacts, if they don't feel that the technology is right for being licensed but they're willing to spend a little bit of money to lock it up with an option and some research, that type of thing. That's the type of industry sponsorship we're looking at here, and we combine that. It's a different color of money for the institution, but it does resonate with our leadership in assessing total impact. So we're looking at total financial impact here: patent cost reimbursement, royalty revenue and industry-sponsored research. And we can show, for example, as this graph does, that our income from this, our impact, well outpaces our cost to the institution, and our cost to the institution is really, to a large extent, our budget. Revenue generation: all that means is, what are the costs that we incur in the course of generating revenue? That's licensing staff salaries; it may be new venture staff salaries; it may be prototyping funding to add value to a technology; it may be payments for licenses for software databases that we have to make us smarter, things of that nature. Operations and support are the other administrative activities: all the costs associated with our Bayh-Dole compliance, managing the patent portfolio, our marketing and communications, things of that nature. Rent is split between the two by headcount, so it's split proportionally, not evenly. And IP protection is all the expenses for our patenting activities, usually attorneys' fees. So showing things like how our impact outpaces our expenditures is a very positive message that our leadership has responded to. In addition to that, we started to try to gauge where technologies came from and what types of technologies generated more revenue for us. So we started using this graph, and we use it for leadership as well, because it gives them some insight into different technologies, and each of these technologies is from a laboratory that leadership knows the names of, and they like to hear about the names and the stories and whatnot. But basically, the thicker the arrow vertically, the more valuable the technology is; the longer the arrow is from left to right, the closer we are to finalizing or signing a license agreement. And so you can break them down by estimated value, high to low. This one is colored green, which means it's a pharmaceutical; orange means it's a medical device. You can do it for any type of technology; we chose pharmaceuticals and medical devices. The light green one about two thirds of the way down, the addiction treatment, is kind of a combination of the two, so it's a light green. Then we started cutting these things by whether the deal also included research funding. I thought we'd get research funding on 15 or 20% of the licenses we do; it's actually more like 65%, it's much bigger.
Once we started graphing it, we realized we get research funding along with our licensing more often than not, at least for the high-value deals we're showing here. As you can appreciate, there are a lot of low-dollar-value transactions where that's not true at all and there'd be no research attached: materials licenses, some low-value transactions. But this was really compelling, because it speaks to the bottom line. That addiction treatment technology is still light green only because I don't have quality control in my office and I overlook things from time to time. We made a cut by the origin of the technology, school of medicine, engineering, arts and sciences, things of that nature, and that addiction treatment did get colored maroon in this particular case, but I forgot to change the color to white because, again, no quality control. But by making different slices, you can really begin to understand, from a performance perspective, where the innovations are coming from, who needs to be supported, who needs to be engaged more thoroughly, and where you can redeploy resources. It was really eye-opening for us to make these cuts in different ways.
Speaker 2 59:36 All right, thanks, Alan. So we've again looked at the metrics. What were the outcomes after presenting them? What came of what we did? Well, the things you want to be looking for are, first, making sure that you know your data and what it means, because you don't want to be caught off guard when a change does come down, or a new administration comes in and people are asking you to make some changes. You want to be familiar with your data before they start asking questions about it. You also want to monitor and adjust performance as you go, because it will help you at the individual level, at the office level, and also at the institution level. There are things that you can influence by looking at and paying attention to your metrics. And then there is informing and educating your decision makers or your stakeholders, and some of that may be helping them understand that you shouldn't always be comparing against your athletic division. There are certain aspects where we should look at the data a little differently; that may take time, and you may, as Brian mentioned, have to present both sets of data: the peers that they use as well as the peers that make sense from a tech transfer perspective. But one of the main things is you do not want to be reactive; you want to be proactive in how you're managing your office, so avoid those surprises by making sure you're paying attention to the specifics of what is going on in your office. Now, there have been other outcomes that have come from the analyses that both Auburn and Vanderbilt have done. So Brian, tell us about some of the key things that came out of your analysis.
Speaker 4 1:01:13 Okay, so we looked at the five categories, and, like I mentioned, we were around the median. But do we want to do better? And to back up Alan's point, you may have to be careful, because if you try to really max out your disclosures, you may overwhelm your office. And very often, whoever is the leader in a certain category may be the leader because they're sacrificing in other areas. If they're the leader in startups, maybe they're just doing startups for the sake of startups, and not maximizing their revenue or quality deal flow.
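A small sketch of the kind of "cuts" Alan describes above, slicing a deal list by whether research funding came along and by school of origin. The table, field names, and values are hypothetical.

```python
import pandas as pd

# Hypothetical deal list; fields and values are illustrative only.
deals = pd.DataFrame({
    "technology": ["Tech A", "Tech B", "Tech C", "Tech D", "Tech E"],
    "school": ["Medicine", "Engineering", "Medicine", "Arts & Sciences", "Engineering"],
    "estimated_value_usd": [5e6, 1.5e6, 8e6, 0.3e6, 2.2e6],
    "includes_research_funding": [True, False, True, False, True],
})

# Cut 1: how often do these licenses bring sponsored research along with them?
share_with_research = deals["includes_research_funding"].mean()
print(f"Share of deals with research funding attached: {share_with_research:.0%}")

# Cut 2: where does the estimated value originate, by school?
by_school = deals.groupby("school")["estimated_value_usd"].sum().sort_values(ascending=False)
print(by_school)
```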
So you have to consider that carefully and not just shoot to be number one. But when we looked at disclosures, we thought that we could do better. We thought we had the bandwidth to handle it, and we also had quite a few new faculty coming to campus; it was kind of a wave of new hires that we needed to reach out to. So we proactively implemented new strategies, at the very least to maintain our disclosure rate and possibly to increase it. We visited, I think, 25 different faculty meetings, back when you were allowed to do such things. And we also established some on-campus office hours; because we're off campus at the research park, we actually went on campus. That has now been converted to virtual Zoom hours where we're available to faculty, and it has gone really well. We haven't done much with patent numbers; they're a little harder to affect unless your disclosure rate changes or you have a big budget initiative. Do I have control of the screen?
Unknown Speaker 1:02:39 You should. Give it a try again.
Speaker 4 1:02:44 There we go. We also reassessed and revamped our marketing efforts, because we felt that was one area where we could do better anyway. Even with the new look at the numbers, we still think we can do better. So in the last couple of years, we've completely redesigned our web listings with a third party, we have redesigned our summaries, and we've also started using third-party marketing services, which I don't think have actually resulted in a deal yet, but they've resulted in multiple leads. They've also helped to validate our own methods, because we're not an office that has a whole lot of deals walk in the door, so we have to go find our own, and working with these professionals, with their ramped-up systems, has kind of confirmed that we have pretty good systems, even if they can do them better. The startup numbers, even though we were below the median, gave a lot of encouragement internally, because when you're in it every day and you hear about what this school is doing and what that school over there is doing, you can feel like you're at zero. What these numbers showed is that we were not at zero. Look, there actually are two peers that are literally at zero, zero startups in three years, and we're well above that. So it was actually encouraging that we were not as far behind the pack as we thought. And that dovetailed with a university-wide effort to improve, to build an entrepreneurship curriculum and student startups and faculty startups, and we had helped kind of boost and enable that. We have a new accelerator, we have entrepreneurs in residence, we're doing things that were just hopes five years ago. License revenue, that's the one your superiors love to hear about, love to talk about, love to focus on. It is a really hard one to affect directly, because the big hits are so often driven by running royalties, which is largely out of your hands. Yes, you can do more deals; yes, you can do better deals; but we try to do that anyway. So our strategy here is simple: we got a big check last year, we're going to wait till the numbers update, and then we'll look better and we'll have that to talk about. In the meantime, we're telling everyone who wants to hear it that we got that check and that we're doing pretty well on the licensing side in terms of revenue. So what was an outcome of this? How did we actually use it?
Uh, we actually had the unfortunate situation, which I'm sure no one else has ever had, of some rather prominent doubters on campus who go around saying Auburn is just not very good at commercialization, and they would give no specifics and no evidence. I actually heard of one conversation where someone was pressed on this, and I think they brought up an anecdote from around 1994, which, of course, is so relevant to what's happening now. But this data that we've generated has been very useful in countering that train of thought, including with our superiors, who were hearing this from certain people around campus. We were able to show that we're not as bad as people say. I mean, we're not the leader in the clubhouse, but we are doing pretty well, and if someone is going to say we're not very good, they're going to need to be able to back that up.
Speaker 2 1:05:49 Okay, Alan, you want to walk us through your outcomes?
Speaker 3 1:05:55 As soon as this works... trying to... oh, here we go. Yes, you're going backwards, I know, I know. I want to go back to this slide of yours real quick. I thought you made some really good points here, Laura, about knowing your data, avoiding surprises, and being proactive. You talked a bit about being proactive here. One thing I'd like to recommend to people: when you do benchmarking, to the extent you can make that data public, think a little bit about what you want to make public and put it on your public website. It helps, number one, for transparency's sake. Number two, there are people across your state that need access to your information, state bodies, public-private partnerships, foundations, organizations, and when they call and ask for data or information, I love being able to point them to the website. It's all right there, go get it. So we not only put graphs on there, we also put some charts on there as well. We also try to anticipate what certain departments and schools will ask about. We often get requests for data, benchmarking data in particular. In fact, I just got a request for benchmarking data for the Vanderbilt School of Medicine, and they specifically wanted AUTM data. I was like, I'm sorry, AUTM data doesn't come in flavors of school of medicine versus engineering versus whatever; it's just total institution, and so we had to have that discussion. I have found that it has been extremely valuable to anticipate, on a school basis, what people are going to ask, create those reports for those schools, and push them out and/or put them on a firewall-protected private part of our website so we can point them to that as well. It does two things: it allows us to be proactive, and it also allows us to say, hey, this is the information we can provide to you, this is the reasonable information you can ask for. Don't ask for crazy things where we're going to have to spend a thousand years trying to figure out how to pull that data from our databases. So it helps funnel them toward the questions that we have easy answers for, the questions that they should be asking. So I really appreciated your comments on that; that's why I wanted to jump back real quick. With respect to the Vanderbilt material, we did go over the MTA analysis, the trend analysis, a couple of slides ago. That led to two tangible outcomes. It led to creating, as I'm sure some people have heard about, MTA Share. MTA Share is now in its seventh year with us.
We've recently recreated it to be far more robust; we haven't started using the new version yet, but we will next month. It saves at least one FTE's worth of time and effort, and it has dramatically increased satisfaction for the people doing MTAs; we run a satisfaction survey to measure that. So it has been a huge thing for us, being able to manage the much larger number of MTAs we're doing now, compared to where we were 10 years ago, without any real additional resources. We also restructured to create what we call a corporate contracts group. It's a small group, but it handles the MTAs, handles a lot of confidentiality agreements, and handles contract reviews for our sponsored research office and our Office of General Counsel: reviews of sponsored research agreements, industry funding agreements, and foundation agreements as well. And it is not cut off, of course, from our licensing team. If there's an MTA for a prominent investigator and we have to share information, they're all together, at least when we're back in the office they'll all be together, but the contracts group takes the lead on that. So our analysis told us we had to do a couple of things, and those are the two tangible items we've actually done. Looking at total impact versus total expense really changed things for us, in that it allowed us to focus not just on technology licensing but also on marketing our core research capabilities. As a licensing office, we are uniquely qualified to know and appreciate what core research capabilities our faculty, our core facilities, and our institution have, and that makes us a more effective marketing group, because we can market more than one thing, not just new ideas. Technologies are sexy for six to 12 months, whatever it may be; core capabilities are sexy, or useful, or relevant for a decade or more. What that allowed us to do is change our internal metrics, both office metrics and individual metrics, on how we measure our performance, and it spotlights the industry collaboration element of what we do, not just our licensing element.
Speaker 2 1:10:34 Alan, if I could jump in there: the other aspect of the core capabilities is, if you do have to staff up, it helps you know what technical capabilities you need in your staff. So if you're growing in the physical sciences and all of your staff is life science, you may want to hire somebody in the physical science sector, or vice versa.
Speaker 3 1:10:53 That's a great point. And in fact, I will say that we have a manager for physical science and a manager for life science in our office, and they're constantly competing for those limited resources when we hire a new individual: is it physical science, is it dry science or wet science? And they use AUTM data and benchmarks to show why they're right, because they're each arguing a different side of the equation. So that's a lot of fun. Just like Brian, we have departments, in particular what I've got here, our School of Engineering. I gave a public presentation, my first one, to our School of Engineering, and the first question afterwards was: you guys don't patent enough things, why don't you spend more resources on patenting to protect our technologies?
Just for the interview process, I'd done some benchmarking using AUTM data, and it showed me that Vanderbilt was spending more on patenting than a lot of its peers, normalized per $10 million in research. So I had that data available at the back of my deck, and I showed three or four slides on our benchmarking and how much money we actually spent on protecting engineering technologies, and that made a huge difference. It really did. But, like Brian, we do get those comments. One of our most prominent researchers, I won't say who, got mad and said, you know, you guys suck at tech transfer, Vanderbilt is a lousy place to invent and work. That was a pretty negative comment to take, but two days later he was our biggest advocate. So you never know; sometimes these things wax and wane over time. Lastly, on this particular slide, I mentioned this new metric. Oh, no, I don't know if I did. I mentioned that forecasting without a home run was important and that we have to hustle for every dollar. That led us to create a new metric called fixed contract value. Fixed contract value is contrived, completely contrived, and I've included the criteria, as best I quickly could, on the right-hand side. It's basically measuring the value of a contract. It's not the expected royalty; the expected royalty on any contract that I do is zero until something actually happens. It's not sales-based milestones; it is development milestones, contractual obligations for patent reimbursements, minimum royalties, maintenance fees, licensing fees, things of that nature, all added together for a contract. And I realize the milestones are rather speculative as well, but they're less speculative, hopefully, than the royalty we would get. So all the fixed, tangible fees that we would get if there was success and the contract was not terminated go into fixed contract value. To ensure that we were setting the table for long-term success in our office (to generate that home run, you need more bites at the apple), we need to make sure that we're doing more royalty-bearing transactions of this ilk and that our fixed contract value is increasing over time. So I included a chart here from 2014 on to show that we actually are doing what we said and that we're identifying opportunities for long-term-value technologies. So we do measure fixed contract value, and the annual number is the sum of the individual fixed contract values of all the licenses for that year. Obviously, for the majority of deals we don't compute a fixed contract value; we do it just for the ones that, on my last graph, were high-value, high-likelihood transactions. For all the small transactions it's not worth the time, because it's a pretty speculative metric. But it's a new metric for the office that came out of this as well.
Speaker 2 1:14:30 Okay, thank you, Alan. We are running out of time, so I'm not going to do this quiz, but you will get these slides. If you're interested, the next three slides are all the same school, and you can email me and tell me what your interpretation is of the performance of the school and what's going on. We're not going to go into those details; we might come back to it if we have time, but we've only got 10 minutes left, so I'm going to get to the questions. I will go ahead and leave this up while we do the questions.
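A minimal sketch of the fixed contract value calculation as Alan describes it: add up the fixed, tangible payments (license fees, patent cost reimbursements, minimum royalties, maintenance fees, development milestones) and ignore running royalties and sales-based milestones. The function, field names, and figures are illustrative, not an official definition.

```python
# Sketch of the "fixed contract value" metric: sum the fixed, tangible fees a license
# would pay if it stays alive and hits its development milestones.

def fixed_contract_value(contract: dict) -> float:
    """Sum the non-royalty, non-sales-based payments in a single license."""
    included = (
        "license_fee",
        "patent_cost_reimbursement",
        "minimum_royalties",
        "maintenance_fees",
        "development_milestones",
    )
    # Excluded on purpose: projected running royalties and sales-based milestones are
    # treated as zero until they actually happen.
    return sum(contract.get(field, 0.0) for field in included)

# Annual figure: sum of the individual values for the high-value licenses signed that year.
licenses_signed_fy = [
    {"license_fee": 50_000, "patent_cost_reimbursement": 120_000,
     "minimum_royalties": 200_000, "development_milestones": 1_500_000},
    {"license_fee": 25_000, "maintenance_fees": 60_000, "development_milestones": 400_000},
]
annual_fcv = sum(fixed_contract_value(c) for c in licenses_signed_fy)
print(f"Fixed contract value for the year: ${annual_fcv:,.0f}")
```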
So the first one is from Todd, and the reason I'm leaving this up is because I think it's relevant to Todd's question. It's Todd Sherer, by the way. It seems like we all have plenty of volume-based metrics and not enough key performance indicators. We are seeing increased interest from our faculty and leadership in these kinds of metrics, things like processing time, turnaround time, and success rates. Can you shed light on key performance indicators? So, Alan, you were the one who presented more about looking at specific staff and how you might look at their data. Do you have any suggestions for Todd?
Speaker 3 1:15:40 Well, the irony here is that we don't really track these types of indicators, like turnaround times and processing times. They can be done; they're kind of onerous to some extent. We hadn't been tracking them until we had some discussions with Emory and learned what they were tracking, and so now we're tracking them. So our experience with these turnaround times is new. I can tell you that they have a direct impact on satisfaction; there's no doubt about it. And there are different ways of tracking these things. You can numerically track every turnaround time. You can track deals from start to finish. You can track MTAs with other academics differently than MTAs with companies, because there's a different negotiating burden when you're doing an MTA or a license with a company. So there are different ways of looking at it. You can get rid of outliers, because there will always be outliers in this data as well; there'll be one that just sits, and very often there's nothing you can do about it because of the other party. So some people, and I think maybe Emory does this, I don't recall, actually track just their piece of the turnaround time, because that's all they can control. In terms of shedding light on the indicators, I'm not quite sure what you're really getting at or what success really looks like. I do know that these are important metrics. They can be tricky to track, but with a little bit of infrastructure they're very valuable, and they really go to the heart of how you can drill down to how each individual is actually performing. One thing that we do do, and have done for a while, so we have some experience with it, is when our sponsored programs group sends us an IP clause in an industry contract to weigh in on, to modify, or to review, we track those turnaround times, because we want to make sure that we're not the sticking point when another party at our organization is trying to get a research contract done. And it's pretty clear that some people in our office are faster at doing that than others; some people prioritize it over others. Then you can have a discussion with your group about what that priority should be. Should it be top priority? Should it be secondary? Is it secondary to some of the things we do that contribute to our office's measures? How do you assess that, and how do other people, other offices, feel about that? So those led to some really interesting discussions about how you prioritize those types of key performance indicators.
Speaker 2 1:18:05 So with NASA, we actually did quite a few of these types of metrics early on, but they also had a database early on that would allow for some of this metrics tracking.
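As a rough sketch of the turnaround-time tracking discussed above, including trimming outliers and measuring only the office's own piece of the clock, assuming a hypothetical agreement log:

```python
import pandas as pd

# Hypothetical agreement log; the dates and the "days_waiting_on_us" split are
# illustrative of tracking full cycle time vs. only the office's own piece.
log = pd.DataFrame({
    "agreement": ["MTA-1", "MTA-2", "CDA-1", "LIC-1", "MTA-3"],
    "received": pd.to_datetime(["2021-01-04", "2021-01-11", "2021-02-01",
                                "2021-02-15", "2021-03-01"]),
    "completed": pd.to_datetime(["2021-01-15", "2021-03-30", "2021-02-10",
                                 "2021-06-01", "2021-03-12"]),
    "days_waiting_on_us": [6, 10, 4, 30, 7],  # our piece only, logged by staff
})

log["total_days"] = (log["completed"] - log["received"]).dt.days

# Trim outliers (e.g., deals that sat with the other party) before reporting.
cutoff = log["total_days"].quantile(0.90)
trimmed = log[log["total_days"] <= cutoff]

print(f"Median total turnaround (trimmed): {trimmed['total_days'].median():.0f} days")
print(f"Median of our piece only:          {log['days_waiting_on_us'].median():.0f} days")
```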
And your database is a key aspect of this, because if you can't put some of these numbers in, there's no way you're going to track it; you're just not going to get that data from the individuals themselves. So if you do any kind of workflows, like Alan said, or I guess it was Emory that's doing it, you can track your portion of the workflow, and if it stagnates with the external parties, then you don't have to put that into your analysis of how well you are doing on turnaround time and success rate. The problem with the AUTM data that we're looking at here is that it's a snapshot of that year, and it is not one-for-one. The invention disclosure that comes in is not the license that is executed, or even the revenue that is being posted here, because those all stagger by a few years. But internally, you can do that. So within that database, you would be able to track the invention disclosure to the evaluation, how successful and accurate that evaluation was, how long it took to do the marketing, what the licensing revenue was on it, and then your success rate for each individual stage: is it 10% that are getting through, and so forth? So it is much more an internal systems capability that is needed in order to do these kinds of key performance indicators, not AUTM data, from that perspective.
Speaker 3 1:19:35 It's also management, because, like our database, I bet, Brian, your database can track these, but you have to impress upon your employees to actually put the data in, to actually put the turnaround times in. If they're not doing it, then the data is junk, right?
Unknown Speaker 1:19:49 Garbage in, garbage out. Yeah.
Speaker 3 1:19:50 We do. One place where we have tracked turnaround times for years has been our MTAs, because we had to track them before MTA Share and after MTA Share. So we have hard data that shows we're five times faster using MTA Share, and the rate-limiting step is not the MTA Share system, it is us, how often we log on to the system. So we have hard data on that, but we don't have it for some of the other good key performance indicators around research and around licensing.
Speaker 2 1:20:17 Yeah, so I think that will improve in time as systems improve. So, next question.
Speaker 4 1:20:22 Laura, let me add a little sidebar to your point. We've actually experimented with looking at AUTM data with a lag period. So you look at research expenditures for one year, disclosures two years later, issued patents four or five years later. We didn't learn a whole lot from that; it didn't really change our numbers. But it's something you could possibly play with, particularly if you're in a timeframe where your school's research base is rapidly changing, usually increasing, because if your research numbers double in two years, you're going to look really bad, because all your normalized numbers are going to be cut in half. So that's something to consider doing.
Speaker 2 1:20:59 Yep. So Brian, I'll ask you this next question from Todd Keeler: how can you handicap income comparisons based on the age of the office? It's hard to compare an office that is eight years old versus one that's 30 years old.
Speaker 4 1:21:18 Yeah, that's something I'd not considered before. I don't know how to actually handicap it. How do you interpret data from a 30-year-old office and relate it to one that's eight years old? Yeah, how about picking your peers?
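And a small sketch of the lag-period experiment Brian describes above, pairing research expenditures in one year with disclosures two years later and issued patents four years later, normalized per $10 million of research. The time series and all numbers are placeholders.

```python
import pandas as pd

# Hypothetical single-institution time series; numbers are placeholders.
annual = pd.DataFrame({
    "fiscal_year": [2014, 2015, 2016, 2017, 2018, 2019, 2020],
    "research_expenditures_usd": [400e6, 420e6, 450e6, 500e6, 560e6, 600e6, 650e6],
    "disclosures": [90, 95, 100, 110, 118, 125, 130],
    "issued_patents": [25, 27, 26, 30, 33, 35, 38],
}).set_index("fiscal_year")

# Lagged ratios: disclosures two years after the spending, issued patents four years
# after, each per $10M of research.
annual["disclosures_lag2_per_10m"] = (
    annual["disclosures"].shift(-2) / annual["research_expenditures_usd"] * 10e6
)
annual["patents_lag4_per_10m"] = (
    annual["issued_patents"].shift(-4) / annual["research_expenditures_usd"] * 10e6
)
print(annual[["disclosures_lag2_per_10m", "patents_lag4_per_10m"]].round(2))
```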
My first thought is that could be a criterion to define a peer: someone with a similarly aged office. And there may be cases where you actually go back in time; maybe a school is similar, but their office is 10 years older, so maybe you go back 10 years and look at that data. There are other things you'd have to consider, you know, what was the economy 10 years ago, was there a pandemic, and all of that. But it might be worth doing; if you had enough schools, then oddities like that may average out.
Speaker 2 1:22:00 Yeah, another thing that we might do in this circumstance would be to separate out your current peers, where you might pick those young offices, but then do a second comparison column of who you aspire to. Those are the offices that might be about the same size but have been around longer, and you can say, this is our aspirational group once we're up and running. Okay, next question is from John Minor: are you using special software or just dumping data into Excel? We are dumping data into Excel, but it is a mammoth Excel spreadsheet. If you go to our blog and look for webinars, I actually have a webinar that talks about the how-to, so you can see how you actually create this sort of data. You want to be an Excel guru to do it. Are you guys, either one of you, using anything more than Excel?
Speaker 3 1:22:54 Well, Brian, what do you use for your enterprise system? Inteum, KSS, or Wellspring? Sophia, Wellspring? Okay, yeah, so I know a little bit about Sophia. I haven't used it in any detail, but that system, and the Inteum system I know, and some of the newer systems, have a lot of really good analytics functions that can be used. We use a lot of that internally, like developing reports for departments or for schools, creating prefab reports like I mentioned before, and pushing them out so they don't have to ask. But the benchmarking data, the core data that we're talking about here, it's just all Excel.
Speaker 2 1:23:31 Yeah, the systems do usually have good dashboarding and reporting that you can do, but that's only going to give you your own institution, and only for the specific time periods that you might choose. And then Todd Sherer asked us an additional question: we are beginning to hold ourselves accountable for the total time, ours and theirs (I should put my glasses on, I'm sorry), to incent people to follow up, chase down, and drive to completion of contracts. So are you guys looking at that kind of time tracking to incentivize?
Speaker 4 1:24:14 We aren't really tracking it much. No one's asking. We could pull some information out of the Wellspring system, so if they want to know turnaround time on agreements, we should be able to give that to them, and marketing interactions and things like that. But a lot of other stuff we just wouldn't have, because it's not part of our day-to-day flow, and no one's asking about it.
Speaker 3 1:24:40 Okay. And from our standpoint, we have a set of metrics. We have five goals, which lead to office metrics, and those lead to individual metrics. And for every goal, you may have one to three individual metrics that contribute to that goal, whether that's a revenue goal or a licensing
goal. For example, confidentiality agreements being signed as a precursor to doing a license, because you've got to get people on the hook and you've got to share information. We have a goal around satisfaction and turnaround times. We've now included that in our metrics, individual metrics rolling up to satisfaction, but that is a brand-new thing we're rolling out for the current fiscal year, so it isn't actually done yet; Todd's ahead of us in that regard. But I do think it can be used. If you're not raising it to the level of either rewarding people for their performance or hammering them to some extent (holding them accountable is maybe a better way of saying it) for falling short, then it's a nice thing to have, but it's not as important as when you actually use it to drive performance. And the way Todd is describing it, I think, is a very good outcome, because he's using it to actually drive performance.
Speaker 2 1:25:54 Okay, well, we have run out of time, but I want to thank everybody for participating, and you will be getting the slides. So again, if you want to take a look at that quiz and send me your interpretations of the office performance that's exemplified there, I'd be curious to see what you think. And I'll hand it back to Samantha.
Speaker 1 1:26:15 Thank you. On behalf of AUTM, I just want to thank Laura, Brian, and Alan. Thank you so much for this informative discussion, and thank you to all of our attendees for joining us today. As a reminder, a recording of the webinar will be available for viewing within a few days of the event and is included with your registration fee. So on top of the slide presentation handout, you will also have access to the recording, and you can visit the AUTM website to view the recording or purchase a past webinar. And don't forget to sign up for AUTM's Annual Meeting, which is happening virtually this March. As a final reminder, we ask that you complete the webinar evaluation, which will pop up when you close out of the webinar; this helps us serve your needs in the future. Thank you again, and I hope you have a great rest of your day.
Unknown Speaker 1:27:01 Thank you. Thanks, Brian and Alan. Thank you,
Unknown Speaker 1:27:03 everyone. Take care. Bye. Transcribed by https://otter.ai