Speaker 1 0:00 All right, good afternoon and welcome to today's webinar, Generative AI Has Arrived: Essential Knowledge for TTOs, presented by AUTM. My name is Sami Spiegel, AUTM's professional development manager, and I will be your staff host for today. All lines have been muted to ensure high quality audio, and today's session is being recorded. If you have a question for our presenters, we encourage you to use the Q&A feature on your Zoom toolbar. If you have a technical question or a comment, please feel free to use the chat. Our presenters would like to encourage you to submit your questions throughout the session. If they are relevant in the moment, we will be able to pause and address those, but we will also save time at the end of the session to address the questions that come through. Should you need closed captioning during today's session, the Zoom live transcript feature is turned on and available on your toolbar. Before we begin, I would like to take a moment to acknowledge and thank AUTM's online professional development sponsor, the IP law firm Marshall Gerstein; we appreciate your ongoing support. I went too fast on the screen share, apologies, and missed the important part: I'm going to introduce today's presenters. We have Dray McFarlane and Thomas Altman here with us, the co-founders of Tasio. And here we go, back on the screen share; apologies for that. Dray McFarlane has spent nearly two decades providing associations with business and technology solutions. He loves diving into the goals and challenges of a group, driven by a passion to support their mission in the best way possible. Immersing himself in organizations' needs and getting a deep understanding of them has been a constantly evolving and incredibly rewarding experience. Founding Tasio gave him the opportunity to continue on this path, specifically focusing on applying artificial intelligence tools and expertise to associations and their members. Thomas Altman is an association professional with more than a decade of experience in data science and applied AI. His expertise lies in helping associations implement cutting-edge AI and machine learning techniques to improve their organizations. Thomas is committed to empowering organizations to make data-driven decisions and streamline their processes with the latest technology. We are so excited to learn from both of you today as we embrace this new frontier in the generative AI space. So with that, I will turn it over to both of you to kick off the presentation. Speaker 2 2:26 Wonderful. Thank you very much, Sami; we appreciate the introduction. Welcome, everyone, to this session on generative AI. Today we're going to talk about what that is and how it can be applied. As Sami said, I'm Dray McFarlane, joined by Thomas Altman; we're the co-founders of Tasio, and our background is in providing AI tools and expertise to the association and nonprofit world. We're going to kick things off today by walking through some basics and background information. Then we're going to dive into specific and hopefully immediately actionable examples for you, and then close out with some thoughts for the future and a good amount of time for Q&A. So as you have questions or ideas, or you want to add anything, definitely feel free to jump in. We aren't experts in the area of tech transfer. We've tailored some of our examples specifically to help illustrate our points, but we are relying on your input.
That's why we encourage questions. We want you to bring up areas where you could add some context to how you might use this, what your concerns are, and what questions you might have. Basically, all of those things will help make this a better session all around. So definitely get in there and ask questions; we'll interrupt each other if we see a question that really adds to the point we're making in the moment, otherwise we'll get to it towards the end. Now, before we move forward, I just want to take a second to explain why the two of us are so passionate about this particular topic, and what this picture is on the screen right now. In previous discussions with colleagues, we've often returned to one idea when talking about generative AI: that what we're seeing in the advancements around AI technology right now is a shift that could easily be compared to other major historical shifts, such as humans taking flight. Just as flight opened up a world of travel and opportunity for people to move around, see things they might not have realized existed, and exchange goods and knowledge in ways that weren't really an option before, this technology could change our day-to-day lives in very, very big ways. This has been imagined pretty thoroughly in science fiction, but suddenly, especially over the past four months, a lot of that stuff has started to become immediately realistic. This image is a pretty good illustration of the way I've sometimes been feeling: sitting there on my horse, watching the first planes go by, basically seeing something advance beyond what I really expected to be possible. And of course, because this is a discussion around AI, this image is AI generated, courtesy of our good friend DALL-E 2. So with that, let's dive in. Let's start with a really simple question: what is artificial intelligence? I kind of laugh as I say that, because it is not a simple question. There isn't a whole lot of agreement around that particular question. When you talk about AI, you'll often get into things like machine learning, deep learning, neural networks, and big data and how to interpret it. Let's keep things simple today. For today's purposes, when we talk about AI, what we're really talking about is the ability for a machine to do something that traditionally requires a human. That's as simple as: I can look at something and tell what it is; if a machine can do that, that's artificial intelligence. I can look at a sentence, look at data, look at something and understand it in ways that we traditionally haven't expected computers to be able to do. AI has been around for a long time; this is something people have been working on, developing, and applying for a very long time, and under this loose definition, AI has been able to do a lot of things. Take the original AI examples you might know of, where a machine played chess by basically knowing every possible move, going through the list, and figuring out the possible outcomes of the game. That's AI for this purpose: it's planning ahead and doing something. It may not do it the way a human would, but it is something that traditionally humans would do. So all these other pieces, machine learning, deep learning, neural networks, and big data, are part of it; they play a role in discussions around AI.
But the ability for a machine to do something that is fundamentally human is what we're going to get into today. Now, with that baseline set, I'm going to hand it over to Thomas to talk about where we are. Speaker 3 6:38 Yeah. Thanks, Dray. So hi, I'm Thomas Altman, and I am really excited to be here, especially with the tech transfer focus, because I think there are a lot of good applications here. What I want to do right now is set us at a moment in time, because I think that's really important: to see where we are and understand that where we're going to be in even a few months is going to be rather radically different than where we have been. One of the things I always go back to, and I've been playing with AI for a while now, is a blog post I remember reading way back in the ancient times of 2015 on Wait But Why. I recommend it if you get a chance: it's "The AI Revolution" on Wait But Why, and it does a great job of situating what's to come. One of the main points there is that as AI tools get smarter and more capable, the rate of change, the pace at which we make even smarter AI, is going to increase. It's going to get smarter, faster. So what looks like a slow creep, an incremental change in AI's abilities, is actually the lead-up to an exponential curve, a massive shot upward. This image is from that 2015 blog post, and you can see the arrow pointing at a little guy on the progress curve, right before the elbow, right before that joint where it kicks off. It was interesting to revisit this article in preparation for this call, because I think we are no longer just before the curve; I think we're just after it. I think there's been a takeoff moment, a pivot, within the last year, kind of unexpectedly, and I'll get to that in a second, where things are going to change much more quickly. And it's going to change much more quickly because AI is enabling us to shorten the throughput cycle for improvement, so it's going to get smarter and smarter, faster and faster. So, situating where we are right now: we are at a changing point in our ability to do things. Our relationship with information has changed fundamentally, and next year is going to be radically different than today in a lot of ways. Not flying cars and all that stuff, but the way we interact with the world is going to change, especially the way we interact with information. So I think it's worth laying out what happened and how we got here so quickly, because it's a surprising journey, and I want to frame where we are within a recent history of AI. I'd like to plot it out on this timeline here. The key moment, and it wasn't obvious at the time, was around early 2017, late 2016. You may remember, you may have had this experience, I know I did, of going to Google Translate before 2016, and it was fine. It translated word for word pretty well, but it never quite got a good translation out; it was kind of clunky. And then there was this moment, overnight, where it got really good. It got much better. Still not perfect, but there was a noticeable leap in its ability to translate.
And that all has to do with a paper they released slightly afterwards called "Attention Is All You Need." What researchers at Google figured out is how to take a sequence of text, identify the portions of that text that are important for a given task (in the initial case, translation), and then use that to identify what a following sequence of text might be. So what they really did in creating this Google Translate advancement was create basically the world's most hyper-tuned, amazing next-word predictor. It's predictive text. And I think a lot of people are surprised to learn that when you go to ChatGPT, or any of these generative technologies, it's not a thinking machine. It's not something that's receiving an instruction, understanding the instruction, and doing it. What's actually going on, through this attention mechanism, through a model called the transformer, which I'll get to in a second, is that it's really just predicting the next most likely word, appending it, and going again. That breakthrough was really interesting, and it was taken up by another group called OpenAI, which I think most people are familiar with now. They came up with this idea of GPT, a Generative Pre-trained Transformer. It's the same technology Google came up with, but instead of using it for translation, they just predict the next most likely word in a sequence of text. You give it a starting point; anytime you type into ChatGPT, that's your starting text. And what seems like ChatGPT responding and understanding is really just it predicting the next word that would follow, as if it were a chat conversation. They formulated that concept in 2018, and they did it at a small scale. I remember when that came out, it was kind of cute, a neat gimmick. No one expected, there was no way you could have expected, that to lead us to where we are now. You could do fun things like go to Project Gutenberg, get the complete works of Shakespeare, train a GPT on that, and then simulate Shakespearean language and all sorts of stuff. It was a gimmick, it was fun, and then you forgot about it. Fast forward another year, and a leap in technology was made. They started from a relatively small language model, GPT, and increased its size. One way to measure the size of a language model is through parameter counts; they basically added more parameters and trained it on more text. They scraped the whole internet, basically. And instead of just a gimmick, it started to actually produce reasonable text. You could say, write a blog post, or you could start a blog post, and it would more or less get it right. There might be a couple of errors here and there, and you'd have to go through and clean it up, still very manual and hands-on, but it was kind of cool that you could generate a fairly lengthy blog post from just a simple set of instructions or initial text. But it was still nothing crazier than a next-word predictor. The magic happened around 2020, when they came up with GPT-3. What happened is it went from a language model to a large language model; the size of the model itself increased pretty dramatically.
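To make the next-word-predictor idea concrete, here is a toy sketch of just the generation loop, not the real transformer: a model assigns probabilities to candidate next words, the most likely one is appended, and the loop repeats on the longer sequence. The hard-coded bigram table is purely illustrative, a stand-in for a model that actually scores the whole preceding sequence.

```python
# Toy sketch of autoregressive generation, the loop underlying GPT-style
# models. A real model scores the entire preceding sequence with a
# transformer; this stand-in conditions on the previous word only, via a
# hard-coded, purely illustrative bigram table.

BIGRAM_PROBS = {
    "the":  {"cat": 0.5, "dog": 0.3, "<end>": 0.2},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "dog":  {"ran": 0.6, "sat": 0.4},
    "sat":  {"down": 0.8, "<end>": 0.2},
    "ran":  {"away": 0.9, "<end>": 0.1},
    "down": {"<end>": 1.0},
    "away": {"<end>": 1.0},
}

def next_word(prev: str) -> str:
    """Greedy decoding: pick the highest-probability next word."""
    candidates = BIGRAM_PROBS.get(prev, {"<end>": 1.0})
    return max(candidates, key=candidates.get)

def generate(prompt: str, max_words: int = 10) -> str:
    words = prompt.split()
    while len(words) < max_words:
        word = next_word(words[-1])
        if word == "<end>":  # stop token
            break
        words.append(word)   # append the prediction, then predict again
    return " ".join(words)

print(generate("the"))  # -> "the cat sat down"
```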
What happened pretty quickly after is that emergent qualities came out of this. What started as really just a next-word predictor, people started noticing, could do more. You can see that in 2021 there's a pretty critical paper around something called chain of thought: if you frame your question the right way, it starts to simulate reasoning and thought, and the next words it predicts are those that would follow a formal, logical pathway. And when you expanded it past some threshold (we'll get to a graphic in a second, and I'll show you what happens), these emergent qualities come out of these language models: they can start to simulate reasoning, they can start to simulate thoughtfulness, and they can do different things. At that point, people started rapidly iterating, and OpenAI invested in instruct models, being able to give it a command and have it follow it in that way. They released ChatGPT, and this year they released GPT-4, which has been another jump in capability. One interesting item about that: it got a 5 on the AP Calculus BC exam, literally doing calculus just through text. Really recently, and I'll touch on this a little bit too, we've started to see people develop autonomous agents: simulating basic thought around organizing, planning, and taking action; spinning up other agents to accomplish tasks; and taking the results of those tasks and passing them forward. People are actively experimenting with that. So all of this is starting to explode right now; every day something new happens. And that's the history of where we are and how we got here. I think it's good to know where we are, and to understand that how we got here was not expected by any means at all. So with that, I'll pass it back over to Dray. Thanks,
So what is it, as Thomas just went through, really at its core, it's very sophisticated autocomplete, I don't want to oversimplify that. Because what it's doing behind the scenes is extremely powerful. It's not just taking a body of content and saying in this text, this word most often pauses other word, it is quite a bit smarter than that. But it is doing a word follows another word, each step of the way. That means that it's not reasoning through something by looking at an entire body of question or content that you provided, it's being primed by what you said and saying this is the next thing that should follow. It is something that is trained by both data and humans. So when we talk about the bias that goes into this, I mentioned, you know, the body of content from the internet that it was trained on, that's a lot of data. But on top of that, we go through a lot of human training. So when GPT, four came out, it had actually been trained and consumed the internet of data for several months, maybe even most of the year at that point, because it had been going through human training, basically experts coming in and using the model, confirming it when it does the right things, adjusting it when it does the wrong things, giving it feedback so that it can learn and improve. That's how they take it from just here's something that is a very sophisticated autocomplete to something that can do tasks. It is a little surprising to its creators. So Thomas mentioned that this is something where we didn't really expect certain things to happen as these models advanced. But they did. And the creators may not have expected it. And we're still learning about it. The surprise is that it's full of emergent behaviors. So as these models have gotten bigger as that number of parameters have changed, different behaviors have emerged at different thresholds. So this graphic here is a good illustration of how these models have grown over time. This one is particularly focused on Google's different models, at 8 million parameters, you have some functionality kind of cool, it can write things. But as you get up to the 60 to 100 billion parameters, start doing arithmetic, you start understanding language translations and improving. Once they got to they're not even most recent model anymore. Around that 500 billion parameters, you start seeing it do a lot of different things. It can handle instructions, it can go through particular tasks. That's pretty close to what you're seeing with chat GPT. These days, open eyes models have a different number of parameters. They haven't released what GPT four in particular is based on, you hear rumors from anywhere from 1 trillion. I've heard people speculating much higher than that, probably in the 1 trillion area error area, though. And just yesterday, Google released their own newest version, which again, speculation on how many parameters it is. But the idea there is we think that they've actually reduced the parameters. So they're starting to get these emergent behaviors more efficiently as they get better and improve these models. So with all of that, how do we actually get the most out of these tools? So there's two different directions. We're going to be focusing mostly on the generative AI side today, but I figure it's worth talking about the more specific tools for very specific tasks before we get too far down that path. 
I've broken this into basically three different types of things that people are using AI for. A lot of these tools, some of the ones mentioned here, and a lot of AI tools in general, have been available for a very long time. They're improving over time, but they've been around for a while. The key recently is that as these language models have improved, these tools have gotten better, because they've started leveraging that same technology. On the audio-to-text and text-to-audio side, we have tools that do a lot of transcription. From watching the chat, it looks like you've gotten to see one of those in action that maybe needs a little more improvement: Zoom's closed captioning apparently decided we were speaking French at some point in the past few minutes. Maybe they need to start using some new language models. But here are some example tools that do that kind of transcription: Fireflies and Otter.ai will both take audio content and convert it into text. Again, these have been around for a while, but the quality is drastically improving as the language models improve. Descript is one of the newer examples where you potentially have a tool filling a gap that didn't exist before: the ability to take text and create audio from it that is very realistic, sometimes scarily so, because you can train it on your own voice. You can use it to fill in gaps in recorded content. If there's something about this particular webinar where I don't like how I phrased something, a tool like Descript would allow me to change what I actually said, and it would still sound like me. This has been used very heavily in podcasting. But it is an area where, again, the improvement in the language models has increased the flexibility of what these tools can do, in terms of both understanding and generating audio content. Image manipulation is kind of everywhere right now, and I've got a couple of examples. We had that image on the very first slide that was generated by DALL-E. Stable Diffusion is another, very similar model that takes text and creates images: you give it a description of what you want, and it creates it. You can give it feedback on those images, and it can refine them for you. Something like Runway will take an image and allow you to change it based on text you give it. If you want to change the color of something within a picture, you can do that within Runway. If you want to do some editing that you might have had to go to Photoshop for before, there are tools that allow you to do that with language, rather than, again, human expertise having to come into play. These are great examples of cases where, up to this point, we've generally expected humans to have to do these things; now we're still providing direction, but the tool itself has gotten much smarter about how to do it. Writing support is where we start drifting into the areas we're really going to focus on today: generative AI, especially around ChatGPT and text generation. There are tools here that you can plug into your writing tools that help you complete blog posts, give you ideas for new content, or adjust content as it goes through. There are a lot of things that plug into email marketing, a lot of writing assistance you can get through these tools. But these are all very specific tools for very specific tasks.
Where we really want to get with generative AI is the idea that we can go more generic. The AI has gotten smart enough that we don't necessarily have to build a tool for a task. I'm not saying that's not the way of the future; we may well have models that do some specific things very, very well. But the idea that we can talk to AI, just ask it to do something, and not care how it's going to do it or what's behind the scenes: that's what's becoming more and more powerful with these large language models. Here are a couple of generic tasks you can do with these things. Translation might have been something where you used to have to go to Google Translate: I want this in a different language, paste your text on the left side, select your output language on the right. These language models allow you to have lots of content and then just ask for different languages. It supports most languages, it's trained on a lot, and the quality is pretty high. I don't speak many languages, so I can't verify that, but the people I've talked to who do are generally impressed with how it handles idioms and slang, and really even some scientific language. The big thing there is that this is a rapid improvement from where we were a year ago, or two years ago. Honestly, five years ago I wouldn't have thought I'd be saying that translation is now one of the more trivial pieces of functionality here. And it's continuing to improve: as it's trained on more scientific data, the specific terminology it knows in different languages expands too. So I expect this to be pretty close to perfect in the next couple of years as well. One capability that's been very helpful for me is the ability to summarize content. Again, not something I really expected to be able to tell a machine to do for me. But if there's a long paper I'm reading, it's very easy to say: all right, here's the entire paper, summarize it down to a single paragraph, something I can consume in minutes rather than hours. Or just give me the key points: a bulleted list of what I need to be aware of from this particular content. Speaker 2 24:58 That leads pretty naturally into understanding the content. So not just reducing it down to something I can read, but actually starting to interpret that content for me as well. I might have that paragraph that was short and easy to consume, but I might also ask: what was the intent of this particular piece? What was the author trying to convey? What does this mean? Can you explain some of the things I should take away from this content? What's the tone? If this was a conversation, or a webinar, or some recorded content, what was the tone of the people involved? And more; the ways you can use this are pretty open-ended, because you're just asking it to do things. That could include creating content. Those tools I mentioned before that help you write are a derivation of this ability to create content, and that could be brainstorming, or actually writing a first draft for you. We'll get into specific examples of how you should handle this responsibly.
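As a concrete illustration of these generic tasks, here is a minimal sketch in Python, assuming the 2023-era openai package (pre-1.0 API) and an API key in the environment; the model name, file name, and prompt wording are illustrative, not a prescribed workflow.

```python
# Minimal sketch of the generic tasks described above, using the 2023-era
# openai Python package (pre-1.0 API). Model name, file name, and prompts
# are illustrative.
import openai  # pip install "openai<1.0"; expects OPENAI_API_KEY in the env

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # lower temperature for more literal transformations
    )
    return response.choices[0].message.content

with open("paper.txt") as f:  # any long document you want condensed
    paper = f.read()

# Summarize: hours of reading down to a paragraph, or just the key points.
print(ask(f"Summarize the following paper in one paragraph:\n\n{paper}"))
print(ask(f"List the key points of this paper as a bulleted list:\n\n{paper}"))

# Translate: same content, different language, one instruction.
print(ask(f"Translate the following text into Spanish:\n\n{paper}"))
```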
Content creation is an area where, again, I don't think I would have expected to be able to say: create an agenda for me; or, from this agenda, create a set of slides; or, from this set of slides, write a paper for me, write an ebook. But that is very, very possible now. And then there's help with refining your content. Once you have written something, or you've used it to help you write something, you have the ability to go back and forth and actually interact with it. You can do editing, get suggested changes for more impact, or anything else you might want for your content, even if it's just: rewrite this in my voice, based on these pieces of my content. This is a lot of general stuff, so let's take it and apply it to some very specific examples. I'm going to hand it back over to Thomas to start digging into things that might apply more specifically to your day-to-day. Speaker 3 26:46 Yeah, absolutely. And before we jump into that, we have, I think, a pretty relevant question from Gennaro in the chat. The question is: back in the 1980s, my college years, the first programs able to solve advanced calculus in an analytical or numerical manner were released; were those based on AI? And then a follow-up: is Mathematica based on AI? I think this is actually a pretty important framework for thinking about where we are in the timeline. I would say yes, it was based on AI, but a different type of AI than we have right now. In the 80s and early 90s, there was a lot of work around rule-based AI. It's still artificial, it's still a computer program doing things, but it required a human expert to explicitly go in and code the rules to be followed. There are different steps in, you know, solving a derivative, or integrating between two points of some function; explicit steps that a human being would follow. So the way those programs were written back then was through explicit if-this-then-that rules: do this until you get the answer, using things like recursion and interesting computer science techniques. That's the first generation of the kind of AI that could do things like this. Then there was kind of a trough: through the early 90s, we basically reached the limit of what we could do with the technology of the time. Then there was a resurgence in AI research in the mid-2000s that started down the neural network path. As computing power got cheaper and more powerful, we were able to start building these things called neural networks. I won't go into too much detail about what that is, but it's basically a new way for machines to learn: instead of having a human explicitly write the code for the rules, the machine could be given examples and then learn the deep structure of those rules on its own. That brought us through to about now, and I'd say we're entering a third generation. So you could have a machine learn the rules of calculus on its own, provided you gave it enough inputs, enough examples to base that learning on. The area we're entering now, which I think is why we're on that upshot of the curve, is using natural language to mimic thought without having been given examples to begin with: self-supervised learning, learning on its own.
And that is the first toe dipped into the pond of what people are starting to call AGI, Artificial General Intelligence: the ability for the machine to learn, on its own, how to even approach a problem, without being given those instructions to start with. So when ChatGPT solves a calculus problem, it's basically accessing, deep within a neural network built on language, something that mimics thought. That's different from a neural network that was given a set of calculus problems and the correct answers and then learned the structure from them. It's basically mimicking the way a human would stumble through and solve a problem on their own. So we're very much in a new world: we're starting to get hints of a machine learning how to think on its own, instead of being given examples. I hope that answers the question. As for Mathematica, I don't know much about it; I would assume so, but I don't know the answer to that specifically. Okay, so let's get a little more targeted and actionable for your use cases. I like to think of AI usage at three levels: the baseline level, the atomic building blocks of how you can use AI, specifically ChatGPT; then how you can map those atomic blocks together into a higher-level process; and then what's coming next. The very basic first level of using any AI, specifically ChatGPT or any of these text-based models, is the instruct approach: basically, say "do this," and it will follow that path. That's probably the most common way people are using it now. You open up ChatGPT and say, hey, write an email, or maybe write a grant proposal; I'll give you an example of that in a bit. And the way to do that part well is to formulate your prompt, the instructions you're giving it, really, really well. You have to use your knowledge of the domain you're working in and put that language into the prompt so that it answers in the most relevant way possible. It's a very effective approach, and it honestly can save you a lot of time if you just dive into ChatGPT and ask it to do something for you; the output will be decent, fairly good. There are techniques for giving it better instructions. That process is called prompt engineering, and we could do a whole 90-minute session on prompt engineering; in fact, we have one for another group. There's a lot to learn there. Learning how to structure the prompts you give an AI is a really important skill that I think everyone needs to start investing in now. And as you start to approach this, the important thing to remember is this concept, which we're going to hit over and over again: it's not thinking, it's not understanding the words you're putting into it, it's predicting the next word in a sequence. Your job, as a person interacting with these AIs, is to think about how you can structure your input so that it makes it more likely you get the output you want. And the one thing you should not do is assume that it will understand negative feedback. The biggest misuse of ChatGPT I've ever seen, and I see it all the time, is that somebody asks it to do something, and it does it wrong, or doesn't do it the way they wanted.
And the person just says: that's not right, do it again. ChatGPT will never figure that out. Or maybe it will every once in a while, but it's not the most effective way. And the reason why is that you haven't seeded it; you haven't provided it with the information it needs to get the right answer. If you remember that it's just a next-word predictor, then just saying, hey, that wasn't good enough, doesn't give it the context to answer well. So the next level, the way you can overcome that hurdle, is to think in terms of chaining prompts together. If you think of the instruction as the atomic building block, your goal is to build a larger process with it. Think it through: if the AI is a next-word predictor, it needs to have information loaded into the preceding text in order to give me the answer I want. Can I, through a smart sequence of prompts, get it to access, deep within its own neural network, the information that's been trained into it; pull that out; and load it in front of the point where I actually ask it to accomplish the task I really want done? Can I prompt it a few times, in certain ways, to preload the existing information it has about the world, and then, once that information is loaded in, say: now go accomplish the task I want? That's a technique called prompt chaining, and I'll show you a good example of it focused on tech transfer use cases. You basically don't start by immediately asking it what to do. You start by saying: what do you know about this? Or: this is my background, this is what I'm trying to accomplish, can you help me with it? It'll say, yes, this is how I can help. Then you load it up with a "what do you know about this" and go from there. That still requires a level of interaction from you, and you can start having some pre-saved prompts, perhaps; I'll show you what that looks like. The next level is the area we're just emerging into. Within the past month, there have been a few prototypes of automated agents coming out, and they basically structure this chain of prompting in a way that is intelligent and automated. Some examples: something called Auto-GPT, which is very interesting and still very early on, but if you want to look something up, it's very, very cool. Jarvis is something that Microsoft has put out. Microsoft has also announced something called Microsoft 365 Copilot, which starts to automate a lot of these processes using these language models in the background, with language models spinning up other language models and feeding them prompts based on what it knows. So those are the three levels of usage. First: dive in, ask it to do something. Second: think a little more deeply about the output of one prompt, and feed it into the next one. And third: be on the lookout for this, because it's going to take off by the end of this year, is my guess; a lot of automated versions of this where you just provide a high-level goal, and it decomposes that goal into tasks, executes those tasks, learns from the process, and keeps going. So let's think through a couple of tech transfer versions. I'll show you here, and then I'll actually jump into ChatGPT to show you how I did this.
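Here is a minimal sketch of that chaining pattern, under the same assumptions as the earlier sketch (pre-1.0 openai package, illustrative prompts). The key design choice is that the full message history is resent on every turn, so the seeding turns stay loaded in front of the real task.

```python
# Minimal sketch of prompt chaining with the pre-1.0 openai package.
# The whole message history is resent each turn, so earlier turns "load"
# background into the context before the real task. Prompts are illustrative.
import openai  # pip install "openai<1.0"; expects OPENAI_API_KEY in the env

messages = []

def chat(prompt: str) -> str:
    messages.append({"role": "user", "content": prompt})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,  # full history: every turn primes the next one
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# 1. Persona and goal first: never open with the task itself.
chat("I work at a university technology transfer office and need to draft "
     "contracts for licensing agreements. Can you help me with that?")

# 2. Pull its existing knowledge to the forefront.
chat("What do you know about the typical structure of a software licensing "
     "agreement in a tech transfer context?")

# 3. Only now ask for the task, with that context already loaded.
print(chat("We have a new software application we want to license to a "
           "company for commercial use. Write a first draft of the "
           "licensing agreement, following the structure you described."))
```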
I came up with four relevant examples, and you are the experts here, so if you've got other ideas of how to approach this, or if you want to see something in ChatGPT, we can riff at the end in the Q&A. The first example is writing a draft of a licensing agreement. Let's say that's something you want to do: a licensing agreement based on some information you're working with. You can use ChatGPT to initiate that process for you. Obviously, you wouldn't send it off without running it through legal first, or having the lawyers look at it, but you can give it a structure, tell it the high-level goal, and have it write a draft of the licensing agreement for you. I want to show you an example of how I did that, and I'll do some of it in real time for you as well, so you can see as we go. So this is how I did it. If you remember, we've got our instruct level and our prompt chaining; this is an example of chaining. Instead of saying, hey, write this licensing agreement for me, I start by giving it some background about who I am, my persona, what I'm bringing to this. Let's say I'm working at a tech transfer office and I need to draft contracts for a licensing agreement; I set it up with that and ask, can you help me with that? The reason I do this, the reason I seed this first, is that I'm trying to access what it already knows about tech transfer, and what it knows about contracts and licensing agreements, because there's already some information in there from the way it was trained, and I just need to pull it to the forefront. If you want to anthropomorphize a neural network, I'm trying to reach into its subconscious, make some part of that conscious, and load it in front. So I start here, and I always start with some version of this, and it says: of course, I'd be happy to assist you drafting contracts for licensing agreements; and it gives me what it knows about the topic. Then I say, okay, here's the basic background: I've got a new piece of software. Now that we've loaded what it knows about licensing agreements and tech transfer, we approach it with our task: we've got a software application, and we want to license it to a company for commercial use; do this, following this structure here. And because I first seeded it with information about what we wanted to do, it does a fairly good job of producing a bare-bones version. You may have your own structure you'd want to follow here, and you could provide it. You could even seed it with an example of a contract written for another purpose: here's an example, this was the general idea, and this is the structure we followed; here's our new idea, follow the structure but update it. It follows that pretty well. And this is copy-and-paste ready: you could send it over, edit it lightly if you want to, or tell ChatGPT to edit it for you, and then send it on to legal, depending on how much you engaged with it before asking it to generate. This was a quick back-and-forth; it can be more involved. Tech transfer is not my background, so I didn't want to go too deep in that direction.
But if this were something I was doing, I would probably seed a few more questions in advance of asking it to actually execute on that contract. So that's one version: using it to create a draft document. We'll hop back over to Speaker 2 40:15 PowerPoint, actually. Well, while Thomas is heading back over, a couple of questions came in. One of them, I think, is very relevant, and we'll touch on it a couple of times: is there a way to ensure the security of information sent? The more information you provide, the better the content ChatGPT is going to create for you, and that's true for any of these language models; Google and Bing and the others can do the same things. Having that information makes them produce better content, but having that information means they have that information. So how can you be confident in the security? We'll touch on this a bit more as we go, but the ultimate answer is: it depends. Depending on how secure it needs to be, ultimately you're giving that information to the language model provider, and you're trusting that they're going to handle it the way they say they will. I'm going to leave it at that, because I think Thomas covers this and I don't want to completely steal it, but I wanted to surface that question so he's aware of it. Speaker 3 41:09 Yeah, great point. And I'll show you a little feature, OpenAI put out a blog post about it but it's a little buried, where you can turn off whether or not they save your data. Dray is absolutely correct: you've got to trust that they're handling it correctly. I would never personally put anything personally identifiable in there. I'd never put anything like a Social Security number, a credit card number, or even phone numbers in there. If it's patentable information, I'd be a little careful; you could probably use placeholders for specific points you might not want OpenAI to access. But there is an option to switch off whether or not they store your data. It's switched on by default, and I'll show you in my next example how to access the setting where you can switch it off. While I've got a moment, I see a question here from Stacey: how do you seed it with an example? Paste in a template agreement? That's exactly right; that's exactly how you would do it. Depending on the length, if you've got quite a lengthy contract, you may want to paste in sections at a time. But you would basically tell ChatGPT: here's an example of a contract I want you to mimic. You'd say, here's the introduction, here are the relevant sections, and you would seed it with that information, basically copying and pasting. Then it would have that loaded into, again, its conscious mind, to access when you need to create a new one. So Stacey, you're exactly right: it's basically copy and paste, it's as easy as that. So, the flip side of this, another example: rewriting a contract in plain language. First we were generating a contract, saving that time and back-and-forth, or at least seeing what we would hand over to a lawyer or the legal team.
The other way is when you get something back from the legal team. I use ChatGPT quite frequently for this, both for legal documents and, more often, for research documents. If I want to dive into the latest academic research on large language models, a lot of times there's jargon in there I don't know; I don't have a PhD in that. So I feed it into GPT and say: rewrite this in plain language, something a fifth grader could understand. And this is another application. If you are trying to communicate to some external stakeholder what's really in a contract, this is a really good way to do it; you don't have to go through and translate the legalese into plain language yourself, GPT can do that for you. So if you've got stakeholders who need to understand it, or there's a point in there that might otherwise get glossed over, GPT can help the recipient of your contract understand it and get through the redlining process a lot more quickly, because not only do you have the legal document, you've also got, in plain language, what's actually going on in there. If we want to see that interaction, I'll jump in and show you. Again, what I do here, and you can see I use ChatGPT quite a lot, is I always seed it with some initial background information. I never dive directly into the task at hand; I say, hey, what do you know about this? Or: this is who I am and these are my goals; what do you know about what I'm trying to do? That loads the specific, relevant material into its memory. So here I'm saying: we have a draft licensing agreement, and I'm trying to explain it in plain language to our stakeholders. Then I do a basic copy and paste of what I generated last time: here is an example of a basic contract. And then I ask: rewrite this in paragraph form at a third grade reading level, with analogies when possible. That's how I like to do it. I always ask for analogies, by the way; that's another quick hack. If you're using ChatGPT, ask for a third grade reading level and ask it to use analogies or examples. It does a really good job of providing a little more context when you do it that way. And what you get is fairly good: a very readable understanding of what's actually in the contract. So I think that's another good example of how you can use ChatGPT: the flip side of writing the contract is communicating back to external stakeholders whose eyes cross over when reading legalese. I'm guilty as charged on that one; that's why I put it in here.
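A minimal sketch of that plain-language rewrite as a single prompt, with the same assumed setup as the earlier sketches; the file name and wording are illustrative.

```python
# Sketch of the plain-language rewrite, same assumed setup as above.
import openai

with open("licensing_agreement.txt") as f:  # the legalese to translate
    contract = f.read()

prompt = (
    "We have a draft licensing agreement, and I'm trying to explain it in "
    "plain language to our stakeholders.\n\n"
    f"Here is the agreement:\n{contract}\n\n"
    # The "third grade reading level, with analogies" phrasing is the hack
    # described above: it pushes the model to add context, not just simplify.
    "Rewrite this in paragraph form at a third grade reading level, with "
    "analogies when possible."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```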
Can you do a quick scan of the chat there, Dray? Have you seen anything come in? Speaker 2 46:08 There are definitely a few questions in the Q&A. Most of them, actually, I think we're going to cover as we go, so I'm keeping an eye on that, and if we've missed any, I'll let you know. Right now, most of those we're going to get to. Speaker 3 46:19 I am very curious about one, though. Let's see: how familiar is ChatGPT with the laws that guide tech transfer, like Bayh-Dole? I'll show an example of how I would handle this, because I think it's very relevant. We'll start a new chat here; always be nice to your AIs, you know. Speaker 3 46:51 So I always start by seeding a question, and we'll see what it does. The reason I do this, again, is that it's just loading, into the preceding text, what it has stored about tech transfer. And the reason I do that is to hit this point home again: GPT is not thinking about what I asked it, it's generating the next word. Every time you see that cursor go, it's really just generating the next word. So what I'm doing is seeding the words that matter, relevant to what I'm going to ask next: "Can you tell me anything about..." Speaker 3 47:44 And this is, in real time, how I would approach this problem. So, the Bayh-Dole Act itself: it does know about this. And if there were specific questions you wanted to drill into, it would probably be pretty good. It's not a lawyer, and it's good for getting an overview of what you want to do and maybe applying it, but it's still not ready for arguing in front of a judge, if you need that. It's really good for getting high-level information, though. I rarely go to Wikipedia anymore; I rarely even do Google searches for things I need. I go through this process when I want to learn a little more about something. Speaker 2 48:20 The key here, and I think this is a great example, is that you probably don't need it to know about the law; I imagine most of the people in this room are very familiar with it. But it's a case where, by asking about it, by mentioning it in the prompting, Thomas has primed it in a way that follow-up questions around tech transfer will now take that law into account. Because it doesn't reason, because it doesn't think things through, questions without that priming may not take it into account. They might, because the content it knows could already cover it, but by mentioning it explicitly, you've created something much better and much more reliable. Speaker 3 48:53 That's an excellent point. If I had asked a question that needed to reference the Bayh-Dole Act, and I had not loaded this prior to that, it may have gone off the rails; it may have hallucinated something incorrect. But by loading this in advance, it's going to be constrained and more likely to answer with regard to the Bayh-Dole Act. So if that's something you need, I would always load it in first. So yeah, thanks for pointing that out; it's a really good point. Speaker 2 49:21 The ultimate question, the one that I love the most, we'll actually hit a little bit later: how do you trust the information it provides? We'll talk about that. Speaker 3 49:30 A little teaser right there. Let's jump back into the PowerPoint and do a couple more. I know we're running up on time, so I won't go into too much detail here, just give you a sense of what you can do; I want to make sure we're leaving enough time for Q&A. Writing grants, if that's something you need to do: it's very well trained on grants. Part of the information it was trained on was actual public grants; that's public information in some cases. So it understands their structure and their purpose. And maybe what you want to do is write a grant based on some need you have. I know I have lots of friends who write grants; I live in an academic town, and that's basically how people get tenure here. You can use this to shorten the timeline of writing a grant, so that you can focus on the parts you think matter most.
Again, you could also seed it with copied-and-pasted versions you had written for other successful grants, and you could even ask it to critique a grant; I'll show an example of that in a second, too. But maybe this is a moment to take a step back. What we've talked about are some very specific, actionable items, but the way I really approach this is: anytime I've got some sort of cognitive task, some big, high-level goal that is both repeatable and requires me to break a larger project down into smaller, incremental tasks, things I need to do first, second, and third, that is absolutely ripe for using ChatGPT. You can use ChatGPT after you personally have broken it down into the steps you want to follow, or you can prompt ChatGPT to work out the steps itself. So if grant writing is part of what you do, and you know the process you would normally follow for writing a grant, you can prompt it by loading those things into the chat history, the process you would follow, without actually doing it, and then ask ChatGPT to execute each of those steps. It may not be exactly what you would have written, but it would short-circuit the time from start to finish by quite a lot. So I'll show you an example of how I would do that, and then how I would actually critique it as well. Speaker 3 51:59 Again, just an example of this in action. This first part here I won't revisit too much, but then I say: these are the sections I want in my grant, with a little information here. So: an innovative medical device for non-invasive diagnosis of cardiovascular disease, some different information about it; create the abstract and introduction, objectives, all of this. Now, one thing you can do is ask ChatGPT to assume different personas within the same chat. One way to do this, once it generates your first pass, is to say: Great, thanks (always be nice to your AIs). Now, and this is the phrase you're going to want to remember, act as a red-lining critic of proposals: you review proposals and critique them quite harshly. Please review the proposal above. So, something like this: say "act as" somebody that's going to tilt the other way. Because by default, a lot of the time, ChatGPT is a bit of a Pollyanna, very much "oh, this is great, good job." Sometimes that's fine; I like a little boost here and there. But sometimes that's not what you want; you want it to put a critical eye on things, and you have to tell it to do that. So: act as whatever you want it to act as, seeding it with a persona that it will then take on. So let's look at the critique it produced here. It comes up with, in real time, the actual review of the abstract: "however, it could benefit from including specific details regarding the innovative aspects," and so on; probably all pretty good points. It's going to be quite lengthy, so I'll stop at this point. And then you can say: Great, now use the review to rewrite the abstract. So it generates an abstract, it criticizes that abstract, and now it rewrites it using the review. This sort of back and forth is, again, going back to this chaining technique.
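A sketch of that generate, critique, rewrite chain as one conversation, again under the same assumed setup; the persona phrasing follows the "act as" pattern described above, and the prompts are illustrative.

```python
# Sketch of the generate -> critique -> rewrite chain using personas in one
# conversation, same assumed setup as the earlier sketches.
import openai

messages = []

def chat(prompt: str) -> str:
    messages.append({"role": "user", "content": prompt})
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=messages,
    ).choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    return reply

# 1. First pass: draft the grant sections.
chat("Write the abstract, introduction, and objectives for a grant proposal "
     "about an innovative medical device for non-invasive diagnosis of "
     "cardiovascular disease.")

# 2. Flip the persona. By default the model is a Pollyanna, so tell it to
#    be harsh.
chat("Great, thanks. Now act as a red-lining critic of proposals. You review "
     "proposals and critique them quite harshly. Please review the proposal "
     "above.")

# 3. Feed the critique back in and ask for the rewrite.
print(chat("Great. Now use that review to rewrite the abstract."))
```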
What you really want to do, when you're thinking about using this well, is seed it with the information it needs to answer; have it adopt different personas, to be a team, to use anthropomorphic language, so it approaches the problem in a different way; and then have it incorporate that data, synthesize it, and write something better. And when you start thinking about these different prompts as atomic building blocks, you can actually put a full process around them without writing any code. Each of these steps you could more or less copy and paste: keep a text document where you start with the seed, drop in the information relevant to the task at hand, and then follow up with whatever questions you might have. You don't have to reinvent it every time. So that's another way to use it. The last way to use it, and this is a little more aspirational at the moment, but very much possible, is technology scouting and matching. This is something you could absolutely do. You couldn't use ChatGPT for this in a fully automated way, but there are other tools on the horizon; what you see on the screen here is something called Auto-GPT. If you've got a list of patents, research papers, and market data sitting in a folder somewhere, or you're scanning relevant repositories, you could have this run on a loop in the background, going through every patent that might be relevant. You could tell it to filter on some keywords and score each one with some heuristics. If you've got a heuristic in mind for how you would analyze a patent or a research paper to identify technologies for commercialization, you could give it those heuristics, tell it to score each item, and have it loop through all of them, thousands, tens of thousands, over the course of a day or a week, and give you a prioritized list: actionable items for commercializing technology. This is possible; this is something you could do now. You'd probably need a little more tech savvy to implement it on your own at the moment, but that's not going to stay necessary for long; by the end of the year is my guess. Speaker 2 57:09 And there was a question about whether you recommend using Auto-GPT. I'd say this is in development. So yes, in the sense that I'm very selfish and want smart people using this, so we find out what's possible. But I wouldn't necessarily want my job to be dependent on it just yet. Speaker 3 57:25 Great point. I would say yes, you should use it. Should you trust the results? Not yet. But you should use it to get familiar with the process of working with these things, and you will get good results pretty quickly. It's not going to be long before you can trust the results 100% of the time; right now you'll probably get 50 to 70% quality results, enough that it short-circuits your day-to-day but doesn't totally replace it. So I would recommend using it and playing around with it, if you have the chance and the opportunity. So, Dray, I'll let you take it on.
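As a rough illustration of that scouting loop, here is a hand-rolled sketch. This is not Auto-GPT itself, just the same idea wired up manually: loop over patent abstracts, have the model score each one against your own heuristics, and sort. The folder layout, heuristics, and scoring prompt are all illustrative assumptions.

```python
# Hand-rolled sketch of the scouting loop described above. This is not
# Auto-GPT itself, just the same idea wired up manually: score every patent
# abstract against your own heuristics and produce a prioritized list.
# Folder layout, heuristics, and prompt wording are illustrative assumptions.
import glob
import openai  # pre-1.0 API, as in the earlier sketches

HEURISTICS = (
    "Score this patent abstract from 1 to 10 for commercialization "
    "potential, weighing market size, technical readiness, and the "
    "availability of likely licensing partners."
)

def score(abstract: str) -> int:
    reply = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"{HEURISTICS}\n\nAbstract:\n{abstract}\n\n"
                       "Respond with a single integer only.",
        }],
        temperature=0,  # keep the scoring as consistent as possible
    ).choices[0].message.content
    digits = "".join(ch for ch in reply if ch.isdigit())
    return int(digits) if digits else 0  # models don't always follow format

scored = []
for path in glob.glob("patents/*.txt"):  # assumed: one abstract per file
    with open(path) as f:
        scored.append((score(f.read()), path))

for s, path in sorted(scored, reverse=True):  # highest priority first
    print(s, path)
```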
So I would recommend using it and playing around with it if you have the chance and the opportunity. Alright, Dray, I'll let you take it from here.

Speaker 2 58:04
We've hinted at some of this, and there are a lot of good questions, so I'm going to go through this part pretty quickly so we can start answering them; we're not ignoring you, I just didn't want to interrupt completely. Let's talk a little bit about what's coming next. Ultimately, the main point I want to highlight is that everyone making predictions has been wrong. Every single person who's made a prediction over the past while has been wrong. When was GPT-4 going to come out? Wrong. What was it going to be capable of? Wrong. How many parameters would it have? Wrong. Every detail that people were confident about: things are simply moving faster and bigger than we expect. That will be affected as governments start to catch up. Legislation has been slow and regulation has been slow, but they're coming, and we don't know how they'll disrupt the advancement here. They can't stop it; there's no stopping this now, it's too easily available. Even if law-abiding citizens follow regulations and rules, there will be others who do things on the side, and this technology will keep advancing. So what's coming next is a little hard to say. Thomas has been emphasizing autonomous agents, and I completely agree; I think that's where we're headed. As these models get smarter, it's going to be less about you asking questions and walking through that chaining process, which is extremely powerful, and more about the models doing it themselves. You'll be able to give one a series of tasks, walk away, and come back to find the tasks done. And as they get plugged into different tools (I'm trying to remember the term Thomas used), the thing that is either terrifying or exciting, depending on your mood that day, is that they'll be able to get certain things done in the physical world as well. That is going to happen. Hopefully we will know enough about this before it happens to control it in safe and manageable ways, but that relies on people actually using it. That gets into our next slide. A lot of the questions we've been receiving are probably around this, and these are things we haven't emphasized all that heavily: what about the risks, bias, and ethics around all of this? This could be an entire presentation on its own, so I've got a couple of points to bring up, and then I'll dig into some of the questions we have here. We talked a little about the security of information sent to ChatGPT and being able to control it. The same applies to any large language model: you are handing that information to another organization. To me, it's a lot like storing it on SharePoint or OneDrive; you're putting content in the hands of a third party, and the question is how much you trust that third party. OpenAI builds language models, and as ChatGPT took off, they struggled to keep up with demand because running a consumer service was not their core business. So how secure it is, whether there are bugs, whether there are ways for people to hack in and access their storage, those are open questions.
I probably trust Microsoft's management of my data in SharePoint more than I trust OpenAI to control things stored with them. But things are changing pretty rapidly. There were questions about how reliable the information is, and Thomas, you can go ahead and hit the next one just to bring up the bullet points. How reliable is the data? It depends. It knows a lot; it's been trained on a lot of information. There's also the question of how recent the information in ChatGPT is. These models take a lot to train, so the data is out of date. The data in the model itself: Thomas has been using GPT-3.5, the free version of ChatGPT. If you use the paid version, you get access to GPT-4, which has a little more training data, but that still ends in early 2022. That's not current data, so it might be a little out of date, and it might also be pulling information from places you didn't expect. So can you trust the results? I tend to say no, unless you have provided the information it needs to answer your questions. When you give it some piece of content it should understand and ask it to do something with that, then I can trust the results. If I'm asking it to answer very specific questions on technical topics, sometimes, especially on well-established older topics, I'd be happy to trust it. But if it's something new, current, or debated, no, I probably wouldn't trust it.

Speaker 3 1:02:20
I think that's an important reason why you want to seed it with the knowledge, why you want to prompt it to load its own knowledge first: it gives you more visibility into what it's using to answer the question. That's why I always start with "hey, what do you know about this?" or "tell me about your knowledge on this topic." Then I can correct it where it's wrong, and then ask it to do something. And that's when you can trust it.

Speaker 2 1:02:44
Actually, a slight diversion, because I love this question: is Thomas a relative of Sam Altman?

Speaker 3 1:02:49
Maybe distantly; cousin Sam is not funding me in any way. No, I am not related to Sam Altman, as far as I know. It's not a very common last name, so if we went back far enough, maybe we're sixteenth cousins, but no, I have no personal relationship to Sam Altman.

Speaker 2 1:03:07
Other points on this, a little more serious than Thomas's family tree. These technologies have a significant impact on society. There are jobs being affected in a big way, jobs being lost, particularly in the community of artists. Over the past year in particular, as the image generation tools have come on, those have been extremely disruptive, and that's putting it lightly; for some people, this has impacted lives in a very negative way. I still have a generally optimistic view of what the technology is capable of, but in this transition period we have a lot of questions to answer. We don't necessarily know what goes into some of these tools, so there are a lot of lawsuits out there about whether they're taking copyrighted materials; the image generation ones in particular had some pretty clear examples where they used libraries they maybe weren't supposed to have access to. There was a question about Copilot, and I assume you mean GitHub Copilot
in this case, since it's about helping write code, and the concern is that code being taken without attribution. They basically scraped sites where people posted code, and in general, people posted that code without any restriction; it's out there on the public internet, I can go to Google and find it. And not to be dismissive, but the exchange people got for sharing that code was internet points: you got your name tied to the fact that you provided a good answer. When I used that code in my own product, I didn't have to attribute it to you; I didn't have to say it was something you owned or contributed to. So contributors aren't really getting anything different from what they got before, but people are annoyed by it, and you kind of have to respect that: this is something they put out there, they didn't expect it to be used this way, but it was public. I don't know exactly how things are going to fall on this front. Personally, I'd say that anything you put on the public internet, be ready for it to be consumed by these models. That's just something you need to be aware of for now. Going through the list of questions quickly, here's one toward the end: is there concern about funding institutions trying to detect that things are written by AI? This is bigger than funding institutions; schools are dealing with this, trying to figure out whether a student cheated. I'd say that groups will try to do that, especially if they're getting hit with huge volume, because suddenly it's much easier to write these things. But I wish them good luck; I don't think they're going to win that fight. I think AI is going to continually outpace AI detectors, because when you detect AI, you're looking for certain patterns that the AI can then learn to avoid. So yes, they're going to try, it will go back and forth, but it's a losing battle for anyone trying to prevent this.

Speaker 3 1:05:48
Yeah, hard agree on that. I would not trust AI detection; honestly, I would not trust it at all. There's probably some research showing that it's possible, and I'm skeptical about that, but even if it is possible, it won't be possible for long. The next generation of AIs is going to completely obliterate any detectable pattern these tools come up with, and it's just not going to be a thing. I think the assumption should be that AI is going to be used somewhere in the process of any text generation going forward, and whether or not something is completely AI generated is going to be hard to tell. I'm going to say there will be no way to tell; maybe not, but that's my bet.

Speaker 2 1:06:33
So, wrapping this up before going fully into the questions: is opting out a choice? I don't think so. And that ties into our next steps. What can you do next? What can you take away from this presentation and this discussion? We ended with risks, bias, and ethics because it ties back to this: we've been talking about all these amazing things, but there are some cautionary tales. You still want to experiment with this and get familiar with it, just like the answer to that question about Auto-GPT. That is cutting edge, and right now this is relatively low risk.
So right now the technology is in a good position for you to get familiar with it, to get comfortable with it, to figure out what it can and can't do, what you're comfortable with it doing, and what it means to responsibly adopt AI. If we wait, the technology is going to continue to advance, both on the AI front and on the hardware front. If we suddenly stopped advancing our understanding of what AI is capable of and picked it up again in a couple of years, when the hardware is exponentially stronger, faster, and less expensive, which we know it will be because we've been on that track for decades already, this might get out of our control much faster. So what I would suggest doing next is adopting it and making it part of your daily life. Experiment and try things while it's relatively risk free, and keep yourself educated and informed, because that's how we take control of this. Governments are going to get involved, other regulatory bodies are going to get involved, people are going to try to build counter tools, but again, good luck: this technology is advanced enough that staying ahead of it and handling it responsibly is going to be very difficult without people getting involved at a very individual level. I've got a bunch of other questions, but before we get to those, Thomas, anything you want to add?

Speaker 3 1:08:22
No, I think we've got limited time for questions, so it's probably best to just jump into those.

Speaker 2 1:08:30
Let's take a fairly easy one, because you actually covered a pretty good example of this: "We currently hire student interns to create marketing pieces, which are basically simplified versions of technical content." You can absolutely do that with ChatGPT. I highly recommend you still have someone review it, especially if the source was very technical. Like Thomas said, he uses this to reduce things down to something he can understand, even on complex topics.

Speaker 3 1:08:51
Yeah, ChatGPT is my intern; I use it mostly as an intern. That's the way I actually think about it, and I think it's a good framework, because the same way you would provide instructions to a student intern, you should probably do the same with ChatGPT, and it will respond very similarly. That's not to say you shouldn't hire student interns, for other reasons; there are other benefits to having the next generation of people working with your organization and understanding it. But can it do the things an intern would do? Absolutely, and that's personally how I think about using it.

Speaker 2 1:09:23
Yeah, and again, the idea isn't necessarily to replace them. Like Thomas said, there are other benefits, but it lets them do things that are more meaningful, rather than just reducing content down to something that's easy for other people to read. They get to spend their time, to steal language from Thomas, on things that are more human, that require more of that intelligence and input from humanity.

Speaker 3 1:09:46
And connecting disparate dots, bringing different domains together and synthesizing them. GPT is not going to be great at that without a lot of guidance, whereas a student intern brings a lot of their university education and exposure to different domains into the work. So there's a lot of benefit to having the human there.
And they can be more human and do more human things, because ChatGPT is doing the rote stuff.

Speaker 2 1:10:11
I have two questions, one from the Q&A and one from chat, that I'm going to tie together: whether you can train it to use your writing style, and why Thomas is being polite when he's talking to ChatGPT. The answers are similar. Yes, you can absolutely train it on your writing style, again, if you give it examples to work from. You can start with those examples and just say: here's something I've written, here's a series of emails I've written, here are papers, articles, previous grant applications, anything written in your voice. Start with that and say, "You don't need to do anything with this content; I just want you to write new content in that style." A sketch of that priming pattern follows this exchange. The politeness factor is similar: you're priming it, and we do get different results when we say please. The content it's trained on is human-generated content, for the most part, and when people interact politely, you might get better results. I haven't tested this thoroughly, but in general, I like to be polite, because I know it's trained on humanity's data, there's a decent chance it works, and it also makes me feel better.

Speaker 3 1:11:10
Yeah, I generally try to be polite in my communications, and I don't want to change that just because I'm working with a computer. But I have played with it, and, anthropomorphizing a neural network here, I have seen a bit more willingness to go a little deeper when I'm polite. I think that's probably in what it was trained on: in a chat log where someone was polite, the responding human was also polite, or was more likely to give more information. So I think there's some benefit to it, but can I prove that? No. I'm also going to string together a couple of questions around the paid version of ChatGPT. What I was showing you was the free version. As far as OpenAI has said, it will remain free for the foreseeable future, though that's left to their discretion. Having said that, there are open source models coming out all the time that are going to catch up with this, so if ChatGPT stops being free, I guarantee a free equivalent will come along. What can the paid version do that's interesting? Somebody noted that ChatGPT was trained through 2021; the paid version gives you internet-enabled versions of this. The model itself is trained through 2021, but it can search the web: it can put a query to Bing, probably because they're associated with Microsoft, get the top-hit web page for the question you posed, and incorporate that information into its answer, so it's more up to date in the way it responds. And finally, is the paid version worth it? I think probably. It's $20 a month, and it gets you access to this internet-enabled version and, more importantly in my opinion, GPT-4. The free version is GPT-3.5; GPT-4 really is a step above in the quality of its answers, especially if you're asking more technical questions, like writing code or producing technical documentation. It's worth it. That being said, if you don't have the $20 a month, or your employer won't cover it, GPT-3.5 is good enough to get most things done.
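Tying back to the writing-style question, here is a minimal sketch of that priming pattern: load samples of your own writing into the chat first, tell the model not to act on them yet, then ask for new text in that voice. The sample file and prompt wording are assumptions for illustration, not a fixed recipe.

```python
# Hypothetical style-priming example: seed the chat with your own writing,
# then request new content in that voice. Assumes the 2023-era openai package.
import openai

samples = open("my_past_writing.txt").read()  # assumed corpus of your own text

messages = [
    {"role": "user", "content": (
        "Here are several things I have written. You don't need to do "
        "anything with this content yet; just learn my voice.\n\n" + samples
    )},
    # Planting a short acknowledgement keeps the model focused on the next ask.
    {"role": "assistant", "content": "Understood. I've noted your writing style."},
    {"role": "user", "content": (
        "Please write a short email, in my voice, to a faculty inventor "
        "asking for a meeting about their invention disclosure."
    )},
]

reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
print(reply.choices[0].message["content"])
```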
Speaker 2 1:13:22
And I think we're at our last question here, and then we'll have to close it out: can ChatGPT or other tools be used to help identify companies who might want to license IP? Ultimately, I'd say the answer is yes. It's a little outside my expertise to say exactly how; I know Thomas did some research and mentioned some examples. But I'd say yes, provided you find the right source of information to feed it.

Speaker 3 1:13:48
Yeah, absolutely agree. Okay, and with that, I know we're at 12:15. Sami, if you want to come up with some closing thoughts?

Speaker 1 1:13:56
Absolutely. On behalf of AUTM, thank you both so much, Dray and Thomas. I know you gave our attendees so much to think about today, and hopefully a lot of encouragement to go play with this tool and see how it can help them. Attendees, thank you all for contributing and asking questions. I know this is a new frontier for a lot of us, so hopefully this sparked some thoughts for you to take back to your offices. As a reminder, a recording of this webinar will be available for viewing in the AUTM Learning Center and will include a handout of the slides, which I believe had some contact information on the last one, so you can still get in touch with Dray and Thomas if you need to. You'll also be able to access a certificate of attendance in the Learning Center if you need that. Please remember to complete our webinar evaluation, which will open when you sign out of this session; it helps us plan content that better serves your needs. Again, we thank you for joining us today and hope everyone has a great rest of the afternoon and week. Thank you so much.

Speaker 3 1:15:01
Thank you, thanks, have a good one.

Transcribed by https://otter.ai