Venture Everywhere Podcast: Liz O'Sullivan with Noah Spirakus
Noah Spirakus, LP and Partner of Everywhere VC, chats with Liz O'Sullivan, Co-founder and CEO of Vera, on episode 33: Vera-fy with AI
Listen on Apple & Spotify!
Episode 33 of Venture Everywhere is hosted by Noah Spirakus, LP and Partner of Everywhere Ventures, a founder collective and early-stage venture fund investing in the future of money, health and work. Noah chats with Liz O’Sullivan, Co-Founder and CEO of Vera, a platform for controlling the behavior of AI, particularly generative AI, in deployment: it addresses privacy, security, internal threats, and controversial content so companies can put their best face forward while mitigating risk. Liz discusses AI's transformative potential, its evolution from early commercial deep learning to today's large-scale deployments, and her commitment to building AI that's not just powerful, but also transparent and ethical.
In this episode, you will hear:
Vera's focus on safer generative AI and the need for evolving policies to match AI's rapid progress.
The impact of AI regulations on businesses and the importance of preparing for future requirements.
Managing public expectations realistically and openly addressing issues like bias and model failures.
Challenges integrating AI, such as ensuring safe interactions and complying with evolving legal frameworks.
Exploring US AI regulation strategies, including the effects of President Biden’s executive order on government and commercial AI use.
If you liked this episode, please give us a rating wherever you found us. To learn more about our work, visit Everywhere.vc and subscribe to our Founders Everywhere Substack. You can also follow us on YouTube, LinkedIn and Twitter for regular updates and news.
TRANSCRIPT
00:00:00 Jenny: Hi and welcome to the Everywhere podcast. We're a global community of founders and operators who've come together to support the next generation of builders. So the premise of the podcast is just that, founders interviewing other founders about the trials and tribulations of building a company. Hope you enjoy the episode.
00:00:20 Noah: All right. Welcome to the Venture Everywhere podcast. My name is Noah Spirakus, multi-time founder, a bunch of failures and successes that I'm sure we'll touch on today. But I'm super excited to be talking to Liz. Liz is the Co-founder and CEO of Vera. Go ahead, Liz. Do you want to introduce yourself real quick?
00:00:39 Liz: Hi, everybody. And hi, Noah. It's so good to see you. For those of you listening, Noah did diligence on our deal when we decided to work with Everywhere. And so it's really exciting to see you on the opposite side.
00:00:49 Noah: Likewise, it's fun to do the podcast.
00:00:51 Liz: Yeah, super fun. So hi, I'm Liz, Liz O'Sullivan. I'm the CEO of Vera, and my background is that I've been doing AI companies since long before it was cool, deep into the days when it didn't really work very well: it cost a lot of money, it took forever to deploy, and then once you did, you really had no idea whether it was going to work or not. So it's been a real thrill to see this field progress through the deep learning craze of the 2016, 2017 era.
00:01:16 Liz: And especially over the last five years, when it's become more than just a marketing term and people are actually deploying this technology at scale for some really impactful use cases. So the field has changed a great deal just since I've been involved. And of course, we all know it's a much longer history than that.
00:01:31 Noah: Oh, definitely. One of the things I'll touch on here: as a partner over at Everywhere Ventures, it was exciting to see Liz come in last year when we participated in her funding round. We see so many pitches these days from AI companies, and your background is fascinating because the companies you've been at have been kind of at the forefront of, I would say, AI.
00:01:53 Noah: And I think we all have a little bit of a gut response when we use the word AI, because back in the day, that was a big no-no word when we were just trying to solve the game of Go and play chess. But before we get into more of your background and the experiences that led to Vera, go ahead and give us the elevator pitch for Vera.
00:02:11 Liz: Oh, sure. Well, Vera helps you control the behavior of your AI, especially generative AI, when it's in production. For any of you who have done a lot of work and spent a lot of money and time training models that never see the light of day, there are usually really good reasons: your risk or compliance teams have concerns about what will actually happen when those models are put in front of your customers. And our platform is designed to answer those questions.
00:02:35 Liz: So they may be worried about PII or passwords or security vulnerabilities, internal threat leakage, right? Where there's intellectual property that maybe is too sensitive to be shared externally. Or maybe they're worried about your models responding to questions that are controversial, like politics, like sex, drugs and rock and roll. And since these models are now becoming brand ambassadors for whatever it is that you're interfacing with the public to do, our tool will be able to help you ensure that you're only ever putting the best face forward and not creating new security and privacy vulnerabilities in the process.
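To make the kind of screening Liz describes concrete, here is a minimal illustrative sketch in Python of a prompt-and-response filter that redacts obvious PII and refuses off-limits topics. The patterns, helper names, and refusal text are assumptions for illustration only, not Vera's actual implementation.

```python
import re

# Illustrative only: the kind of pre- and post-checks described above,
# not Vera's actual code. Patterns and policy lists are placeholders.

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

BLOCKED_TOPICS = {"politics", "drugs"}  # placeholder policy list


def redact_pii(text: str) -> str:
    """Replace anything matching a PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text


def violates_policy(text: str) -> bool:
    """Naive keyword check standing in for a real topic classifier."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)


def guarded_call(prompt: str, call_model) -> str:
    """Screen the prompt, call the model, then screen the response."""
    safe_prompt = redact_pii(prompt)
    response = call_model(safe_prompt)
    if violates_policy(response):
        return "Sorry, I can't help with that topic."
    return redact_pii(response)
```

A production system would use trained classifiers rather than keyword lists, but the control flow, screen the input, call the model, screen the output, is the same idea.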
00:03:07 Noah: That makes sense. We've seen in the news some of these fascinating stories where people take ChatGPT plugged directly into dealership websites and convince it to sell them cars for a dollar. And I can only imagine the ways that this goes wrong.
00:03:23 Liz: Actually, Noah, shameless plug here: I was very interested to see the first court ruling that said these bots could potentially lead to you actually having to sell that car for a dollar.
00:03:33 Noah: What? So touch on that. What was that? Let's just jump ahead. What was this court case and what happened?
00:03:40 Liz: It was really recently, actually in February of this year. One of the first instances of a chatbot coming under scrutiny was in Canada, where a chatbot gave a man incorrect information about his flight and he had to suffer some consequences because of that. And now the airline has to pay for that mistake because essentially it boiled down to it being an extension of that company's support team.
00:04:04 Liz: So they're not allowed to lie or hallucinate as people are calling it these days with chatbots. It kind of doesn't make a difference if it's wrong and your bot says it, then you might actually have to live up to that commitment.
00:04:15 Noah: What are you hearing from, whether it's your current customers or sales pipeline? How are companies reacting to news about this? How are they thinking about when they hear these stories?
00:04:26 Liz: Well, it depends on which department you're talking about. I know a lot of the CTOs and CISOs that we're talking to are looking for, not excuses, but reasons they could give their risk and compliance teams that this stuff's actually safe. The risk and compliance teams are really trying to keep up with the incredibly fast pace of what's going on. The research and development is just absolutely mind-blowingly fast.
00:04:51 Liz: And so not only do they have to get up to speed with a brand new technology, but they have to understand what data scientists mean when they detail risk, different measures of accuracy, different concerns. It's a very different process from rules-based software, where you can kind of understand the failure modes. Now, with insanely large models, they can fail in one of a million different ways.
00:05:11 Noah: Fascinating. Okay, so let's go back to your background a little bit. So you've been in this space for a little while at various companies. If we go back to your time at, is it Clarifai, Clarif AI? How do you pronounce that one?
00:05:23 Liz: Good question. I believe it's Clarifai.
00:05:25 Noah: Okay, Clarifai. So founded in 2013, you joined there and had a variety of roles. What was your experience like? Because my understanding is they were one of the early players in deep learning around text, but also a lot around images. What was your experience there and what was your role?
00:05:43 Liz: Well, I'd have to say a million different roles, because it's just not a startup unless you have two different job titles, minimum. And I've had more fake ones than real ones at this point. I came from a company that was doing AI the hard way, really pre-deep learning, and had built a great deal of technology that ended up not working very well. My first day at Clarifai, my mind was blown. I was absolutely floored. I don't even remember what it was. It was something silly, like a recommender engine for images trying to find furniture on the Home Depot website. It wasn't even actually for that client or anything. It was just… they were demonstrating the capabilities.
00:06:18 Liz: I was buying a couch at the time, and I just remember being like, holy moly, this would help me so much. I just want to find something with the same color or the same pattern and so on. And I just knew that this technology had so much potential, so much ability to benefit the world. So I started out as a customer facing person, but because we were training custom models for clients, and at the time it was exclusively image recognition, I really had the opportunity to see, A, what the tech was good and bad at, and B, all the weird stuff it can do.
00:06:50 Liz: We had to QA our models before we put them into our clients' hands. And of course there was bias, of course there were just these weird oddities of behavior. They were really good at seeing things that they had seen a lot before and really bad at generalizing to stuff that they had never seen. And I knew that at the time, even though it was sort of taboo to talk about these issues publicly, somebody needed to. We should be just facing these problems head on instead of trying to pretend that they don't exist. If they actually do exist, which they do, then we should instead be trying to work together to find solutions.
00:07:21 Liz: And it's been fascinating to see the resistance of the field to admitting that discriminatory bias can exist, that hallucinations can be harmful. It sort of feels like a new cycle is beginning all over again, and it's a cycle we've already lived through from around the 2018 era through now.
00:07:38 Noah: Interesting. Where do you feel that pushback is coming from?
00:07:42 Liz: I think there's a big part of it where people just believe that it's going to get better and maybe don't know exactly how or when that's going to take place. But Silicon Valley, you know, we always are comfortable putting something into production that's 85% of the way there. I think the challenge is that people don't understand that with AI, 85% of the way there is not done. That last 15%, it can take you 20 years, 30 PhDs and maybe $100 million and still not be possible. We just don't know. It's not the same as regular software.
00:08:14 Noah: Yeah, no, that's interesting. I remember back then we never called it AI, we talked about the machine learning pieces. When we got our beak wet at the company I was running at the time, building kind of a data science team, you would get 85 or even 90% correct, and then you'd have one thing that you or your client wanted to correct, you'd add maybe just 1%, 2% extra to the training set, and everything went off the rails.
00:08:43 Liz: You're giving me flashbacks, Noah.
00:08:46 Noah: Yeah. Late at night, one extra label of an image that looked like a cat and all of a sudden dogs were labeled hot dogs and it just went all over. So, we'll get up to the point where you left industry, started a company to fix some of these problems, but what were the conversations, you were working with customers at the time, you were in charge of QAing these models, what were the conversations back at that time period, 2017, 2019, with the end customers?
00:09:13 Liz: It was such a fun moment. Looking back on it, it was such great people working on such a lofty mission. At the time we had really just begun to scratch the surface of commercial deep learning, and the phone was constantly ringing with the biggest companies in the world, just like, hey, we have this idea, can we use AI to detect whatever it is? It didn't matter what: everything you could imagine, mushrooms, emotions, hair color. We were speaking to all the media brands, all the cosmetic brands too. And the answer to that question, can AI do this, can this solve this problem for us, was I have no idea, but we can try.
00:09:54 Liz: And so we had this incredible couple of years where we were using state-of-the-art technology to try to accomplish some of the more complex things that AI can do, or maybe couldn't do at the time. Retail applications in physical space were a real challenge: trying to identify different products, different people, heat maps of people moving through a store. So we would basically be running experiments constantly. It was this amazing culture of just try it and see, you know, see how it fits, but also set reasonable expectations.
00:10:24 Liz: So that was really my first introduction to the weirdness of selling and building AI, as opposed to selling and building regular software. When regular software fails, a customer has a problem, they're all upset, and they want a retro, a root cause analysis. You can always point to something and say, oh, okay, we identified a flaw here, we have a ticket for it, it'll be done in a couple of weeks, let's move on with our lives. And I'll never forget the first time I asked a data scientist, why does this model think all of these babies are bicycles? And they were like, I don't know.
00:10:56 Noah: Please tell me that was a real situation.
00:11:00 Liz: It was, it was real. It was super real. That was one of my first client calls there. So the client is frustrated, the data scientists are looking at me like, can you fix this for us, why is the client asking this? I'm like, can you help me understand why this happened? But the answer was just, we don't know and we can't know. And that was, I think, really interesting. Nowadays we have a few techniques that help to disentangle some of it, we're talking about feature importance, heat maps, LIME and SHAP, et cetera, that can get part of the way there. But actually fixing the problem is still a pretty big open question mark.
00:11:35 Noah: At that time, were the concerns the same as they are now? Cause it seems like there's a lot more about regulatory concern and compliance, those lawsuits that we're talking about where something, it's so close to being right, but again, we use the term hallucinating, right? To come up with incorrect factual information. At the time, were those the concerns or were the concerns a lot more tech oriented and like, Hey, it's just right or it's wrong. We just want it to get better. But if it's close, we're happy?
00:12:04 Liz: There were some questions about insidious bias for sure. Less so on the image generation or the text generation side because we weren't using deep learning to create anything. We were using deep learning to identify things. So our models, they're just classifiers. So is it a cat or a dog or a duck? Then we can tell you that. So you already had the images and we would just figure out what's in them. Nowadays, maybe this is a commonality across everybody who's been in AI for as long as I have, you're constantly surprised by the pace of it.
00:12:34 Liz: Not that transformers or diffusion models were a new technique, but that simply adding so much additional data and compute power would give us the ability to create an image or a paragraph out of nothing, and we still don't know how that works. So yes, that part of the problem is absolutely the same. I think what's different is that the applications are a great deal more impactful than simply identifying something that's in an image. That has severe civil liberties implications as much as anything else, but we're also now talking about the creation of fake propaganda material, non-consensual sexual material. The implications at a societal level are much grander, and I think that's why we're seeing so much more pressure on the government to act and react quickly.
00:13:16 Noah: So as you moved on from Clarifai, you moved on to Arthur, if I remember correctly. What was that like? Because Arthur is more of a company around, now that we've gotten a little bit further and some of these models are working, we need to monitor them, measure them, see how they're working, deploy them, as opposed to just the pure classification side. Tell us a little bit about your experience there. I'd love to hear how those conversations with end customers evolved as people started to see more capabilities in how to use this technology.
00:13:46 Liz: Totally. When I left Clarifai, it was this weird moment where everybody sort of knew these issues were bubbling up. AI safety wasn't yet a defined category the way it is today, but it was something everybody was thinking about and talking about, because there were a lot of incredible activists and civil society professionals raising awareness about the dangers here. And when we went to market with Arthur, I was the only non-engineer on a team of four when I joined. We were basically building what our former CEO had wished he had at Capital One.
00:14:17 Liz: But if you think about what kind of models banks have, they're really boring. They're usually not deep learned models. They're usually not image or language models. They're tabular data, fraud detection, credit decisioning. And these models are usually very deeply regulated already, but there were still problems. And so just knowing what they were doing at any given moment in time seemed like it would itself be a big leap forward. But because I was in charge of talking to everybody, every lead, every customer support case that we had, every well-wisher and investor that wanted to come by and kick our tires, I started to sense a pattern.
00:14:54 Liz: And that pattern was essentially that the questions people had were: how is this going to help us get more models into production faster? This isn't gonna help me automate the privacy checks or the compliance checks. It's not going to intervene if something goes horribly wrong. We don't even know what horribly wrong looks like. What do we measure? How do we decide how much of whatever metric is too much? That didn't quite go far enough for me to be satisfied, to feel like I was really making a difference in the field. And so that's when I met my team and helped create Vera to do exactly that: something more proactive that gives you actual control over the way your model is going to make decisions and be involved in your product on a day-to-day basis.
00:15:38 Noah: And what were people planning at the time? So when you were at Arthur, before you started Vera, what were the conversations like? What were people trying to put in place? Was it, let's do generative AI on this side, and then, like the Twitter solution to the Taylor Swift problem and the other things that have come up, it's generative AI up against a regex in a search field? What were the conversations at the time? And then let's lead into kind of what you've been working on lately.
00:16:05 Liz: You know, honestly, at that moment we didn't even have generative AI on the radar. Well, let me say, the scientists did have it on their radar, but the enterprises were still struggling to get anything into production. I mean, you'd look at the stats year over year: at least 80 or 90 percent of AI projects would just die on the vine and never go anywhere in the enterprise, because again, it's just a different way of thinking about building models to solve business problems, as opposed to tabular data, as opposed to deterministic rules.
00:16:37 Liz: So they had some image and language use cases that typically had to do with automating support functions or assisting people in the support function and image especially was relevant for double-sided marketplaces, places where you have a lot of user generated content, of course, social media companies as well. But it was the outlier. It wasn't really a commonplace use or function of AI to be A, customer facing and B, to be mission critical to your business. Whereas nowadays, there is a little bit of magic and AI of course is not magic, but it can take something that's unstructured and conform it to a predictable pattern. Like a support use case, like a code generation task that helps you resolve a very common bug, right?
00:17:26 Liz: So the requests don't have to be perfectly uniform like they would in traditional software. If they're close enough, you can probably get the right answer to the user at the right moment in time. And just that tiny bit of oil is enough to unlock a huge number of use cases, even the ones that are customer facing. I think it's relatively new to have real deep learned models interfacing with your users. That's something we haven't seen very much of in an active way. Not in a passive way, where there's a model running in the background, sure, that's a thing, but in an active way that's participating in the customer interaction. That's pretty new.
00:18:02 Noah: Yeah, that makes sense. Do you remember Microsoft Tay, their Twitter AI bot?
00:18:07 Liz: How could I forget?
00:18:08 Noah: And I looked it up while we were talking because I wanted to remember the year. So that was 2016.
00:18:13 Liz: Yep.
00:18:14 Noah: Do you think that was as disastrous as it seemed at the time? For those listening, if you don't know, Microsoft decided in the early days of deep learning to build what was probably an early stage of a generative AI model, one that understood language, but did it on Twitter. And whatever your viewpoints on Twitter, that's not the conversation here: there's a lot of stuff on Twitter that maybe you don't want a model learning from. And it became racist as people interacted with it.
00:18:40 Liz: Pretty quickly. Yeah. It didn’t take long.
00:18:42 Noah: It was pretty crazy.
00:18:44 Noah: Do you think that was, from an industry standpoint, a really good light bulb moment for people starting to see where this could go wrong, as opposed to just the technical side? Or, beyond the lawsuits we just talked about today, what were some of the experiences you saw in the marketplace of things going wrong where people started to take this seriously?
00:19:05 Liz: Great question. Yeah, let me think on that for a sec, because Tay obviously comes up a lot. I don't know how familiar you are, Noah, you probably are pretty familiar with the buyer mentality: people love saying no. So if you ask, are you worried about what happened with the Tay bot, people have a lot of reasons to say no, because Tay was incorporating new conversational data points into its training data in an unsupervised way, evolving itself. Well, not itself, right? It was designed to do this in an unsupervised way, but it still doesn't have agency, just to be clear.
00:19:38 Noah: It's weirdly supervised, but supervised learning by people on Twitter.
00:19:43 Liz: Right. That's very fair. You characterized it better than I did. But that whole concept of incorporating new training data in a way that is in an automated feedback loop, that is obviously dangerous and you just don't know what people are going to be feeding it in. It could be incorrect data. It could be racist data, whatever it is, you're going to get more of that behavior. Most people don't do that. Most people train a model and then they decide what training data to then incorporate for future updates. So that's a good excuse. So it's like, Oh, well our model's not Tay because we have these controls and so on and so forth.
00:20:17 Liz: But I think it's a big part of the reason why we don't see that use case more often, now that it's been clearly demonstrated that this can end poorly. So people don't usually try that. But if you pull back the veil on what people are actually using today, especially the customer facing use cases, a lot of it is just OpenAI, plus the set of controls they allow you, which are really not very robust, right? It's difficult to really deeply customize OpenAI. And that matters, especially if you believe, like we do, that no one model provider is going to be the holy grail of models, that there's a lot of competition in this space and a really vibrant open source ecosystem.
00:20:59 Liz: So if you want policies that apply to more than just one model, you don't want to rebuild those policies from scratch every time you onboard a new vendor. And we see this as actually critical infrastructure, this policy infrastructure: what can you ask a model, what can it reply with, and then seeing every question and response and deciding where you need to continue fine tuning the models. That's something that, I think, needs to be model agnostic. And that's why we're doing what we're doing.
00:21:28 Noah: I like that. And you bring up a really valid point, which is before you used to have control over what you fed into the model. Oh, well, we're not gonna use this stuff. We're gonna filter it before it goes in and then have more confidence that it's not gonna say certain things because we haven't trained on that. Now, when you're just implementing stuff that's off the shelf, for lack of a better word, with an OpenAI or an open source model, or even in a scarier way, you're implementing third party software solutions that then are implementing this and you're losing some of those controls. How do you guys think about, as a company, working with your partners, how does someone come in? How do they prevent some of these things that we've talked about using Vera?
00:22:07 Liz: Well, you can do it in a number of different ways. And I think part of what makes us unique is that we don't assume you have the deep AI knowledge that we have to start. So it's a good MVP toolkit for getting started. And it's also a really robust API that enterprises or small to medium sized businesses can use to control it all programmatically. But if all you want to do is get started with generative AI in a more safe and secure and reliable way, then we have a chat app and it looks a lot like ChatGPT, except you're not just talking to ChatGPT, you're talking to any one of the 1,400 models that we support.
00:22:42 Liz: And we have the same policies that apply to all of them. And we can help you decide which questions will be best served by which models. So it's this very robust solution that looks and feels like a seamless user experience, because it was really important to us that we could give you a chat experience that is at least on par with some of the more common commercial providers. But behind the scenes, there's all this additional infrastructure that goes into it. Are you sending something that you shouldn't be? Is the model trying to say something back that's a little controversial or dubious?
00:23:15 Liz: And I think it's even more critical because, for whatever reason, and we've not been able to identify why, the model providers make a lot of changes and updates on a regular basis, and we don't always get word that they've retrained the model, that it's answering such-and-such questions differently, or that there's some new factor in place that makes it more or less likely to refuse a question. That stuff is all constantly going on behind the scenes. So you really do need that layer of logic in between your models and your users to make sure that, worst case scenario, if something crazy happens, it's still not going to answer questions about politics.
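As an aside, here is a minimal, self-contained sketch of what that model-agnostic "layer of logic" could look like: one policy check wrapping every backend, so swapping model vendors never means rebuilding the rules. All function and backend names are hypothetical, not Vera's API.

```python
from typing import Callable, Dict

# A minimal, self-contained sketch of a model-agnostic policy layer:
# one policy check wrapping every backend. All names are illustrative.

ModelFn = Callable[[str], str]

REFUSAL = "Sorry, I can't discuss that topic."
OFF_LIMITS = ("politics", "medical advice")  # placeholder policy


def allowed(text: str) -> bool:
    """Stand-in for a real policy classifier."""
    lowered = text.lower()
    return not any(topic in lowered for topic in OFF_LIMITS)


def with_policy(model: ModelFn) -> ModelFn:
    """Wrap any model callable with the same pre- and post-checks."""
    def guarded(prompt: str) -> str:
        if not allowed(prompt):
            return REFUSAL
        response = model(prompt)
        return response if allowed(response) else REFUSAL
    return guarded


# The same wrapper applies whether the backend is a hosted API or an
# open-source model served in-house; only the inner callable changes.
backends: Dict[str, ModelFn] = {
    "hosted": with_policy(lambda p: f"(hosted model reply to: {p})"),
    "self_hosted": with_policy(lambda p: f"(local model reply to: {p})"),
}
```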
00:23:48 Noah: Gotcha. Okay. So let's think again about that ability to check stuff that's coming out of an LLM against the LLM generating the content. One of the things I've seen, especially in a lot of startups as we see these pitches, is that if you're a thin layer on top of something like OpenAI and you're building your own custom stuff, it's really easy to go deep into something, whether it's human supervised learning or whatever it is, and then miss the next update that comes out, or get so entrenched in your path that the model just surpasses you, because these things are coming so quickly these days.
00:24:24 Noah: If a company is trying to build their own stuff in-house, obviously we think they should use Vera, but if they're building something in-house, because they've already got something integrated, what sort of controls, or how do you think about controls, which again, they should use you guys, because it's easier, but how should they think about controls from a static versus dynamic process as these models evolve so quickly?
00:24:50 Liz: Yeah, it's a great question. I think the whole field of AI governance is inspired by fields that are themselves very antiquated. Model governance resulted from SR 11-7, the very famous banking regulation that has three lines of defense and a bunch of different approval processes. And you still get models that behave pretty oddly in banking contexts, and maybe not all the risks are caught. But it's pretty basic stuff, right? Logging, monitoring, checklists, these things have existed for a really long time. And we frankly think they're giving governance a bad name, because it creates the homework of work, as I like to call it, all this extra stuff that you have to do just to get something into production.
00:25:31 Liz: So like you've already trained your model, you've already done all these great checks and measurements and so on. Now you have to go fill out a form and then wait for compliance people to review it and they have all these additional questions. So there are additional tools out there on the market that are this way, right? That they're essentially GDPR compliance for AI, assuming that the rules and regulations that come down are going to be pretty similar to that. And to some degree that's not completely false, right? Regulators are very accustomed to the traditional rules of governance, but I think maybe it all just kind of misses the pace at which this field is progressing and the level of uncertainty that we all have when you put a model into production.
00:26:08 Liz: I think everybody wants to solve these problems at the model level. I'm not entirely sure why. It feels like we're kind of doing alchemy all over again as a human species. It's like, just add more data, it'll come out like gold next time, I promise. It's just more data, it'll work, I swear. But I just don't think that's realistic. There are other technologies that we know can do a really good job of redacting or blocking or transforming, and even some of the newer technologies, like agents, can be really good at restricting different topics and things like that.
00:26:38 Liz: So at Vera, we're not afraid to use the latest and greatest technology. Not everybody can be like us where our CTO is the most impressive, ex AI head of Yelp, Berkeley PhD candidate kind of guy. But that's why we work with customers so they can benefit from our knowledge too.
00:26:53 Noah: That makes sense. Is it fair to say that if your development teams are starting today, you're inside a larger company, you're doing some sandboxing and playing with how this technology could work: if you're starting with a solution like Vera, something people go through from the start, with rules in place at the beginning, it's just as fast to get started. But if you're not doing that, and you're adding these rules kind of after the fact, it's really gonna lock you into March 2023 technology.
00:27:24 Noah: Because to your point from earlier, these things, even GPT-4, and probably more so the open source models, month to month, even without a version change, some of the content is changing. So if you've got these static rules, and you red teamed your model internally for however long, you have no idea what the next version looks like. Or if OpenAI launches something that's half the price, which they seem to like to do lately, you can't even make the shift, because you're trapped in this kind of static protectionism around what you're building.
00:27:58 Liz: That's exactly it. And then, as you say, the pricing in this field is so dynamic too. Do you need to send every part of a customer support chat to GPT-4? Absolutely not. Most conversations start the same way: hi, hello, how are you, or I have an issue. Great, tell me about it. And you know what you can use for that? A super fancy new technological advancement called caching. It's been around for a million years, but hey, it's free, it's instant. So we can actually store these really common answers and serve them as responses to any variant of hello, how are you, and so on and so forth.
00:28:33 Liz: And so it really becomes this orchestration problem, which I think a lot of misguided founders tend to think of as their secret sauce. It's not; you're solving a problem. If you're in HR, focus on solving HR problems. You shouldn't have to be an expert at model ops and all of these other things that are just a consequence of the field we work in, in order to make it to the next level. Use your expertise, focus on the things that are going to really power your business forward, and let us handle the how and the why and the how much of it, because we're here to save you money and time. And you're not going to have to rebuild this infrastructure from scratch every time something new or weird or wonky, or even just very exciting, is released in the field of AI.
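As a toy illustration of the caching idea Liz mentions, the sketch below serves canned replies to the handful of greetings that open most support conversations and memoizes repeated prompts, only falling through to an expensive model call when necessary. The phrases and the placeholder model call are assumptions for illustration, not any vendor's implementation.

```python
import time
from functools import lru_cache

# Canned replies for the openers that start most support conversations.
CANNED = {
    "hi": "Hi there! How can I help you today?",
    "hello": "Hello! What can I do for you?",
    "i have an issue": "Sorry to hear that. Tell me a bit more about the issue.",
}


def normalize(prompt: str) -> str:
    """Collapse trivial variations so near-identical prompts share a cache key."""
    return prompt.strip().lower().rstrip("!?.")


@lru_cache(maxsize=1024)
def cached_model_call(prompt: str) -> str:
    # Placeholder for an expensive model call; identical prompts are
    # answered from the memoized cache after the first request.
    time.sleep(0.1)  # simulate latency
    return f"(model-generated reply to: {prompt})"


def respond(prompt: str) -> str:
    key = normalize(prompt)
    if key in CANNED:                  # free and instant
        return CANNED[key]
    return cached_model_call(key)      # cached after first use
```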
00:29:13 Noah: So let's get to the fun conversation everybody likes to have lately: how does regulation play a part in this? If I'm a company that wants to get into AI, one of my engineers brought me a cool thing they built on ChatGPT, and I want to think about deploying it, yes, there's the governance side, where I just morally believe I want to do the right thing and not get in trouble. But the other side is obviously regulation, which is usually behind and kind of creeping up on us. You are on the National AI Advisory Committee. First off, how do you pronounce the acronym? And second, where is that going, and as a company thinking about deploying something on my own, using Vera or something like that, how should I be thinking about regulation?
00:29:57 Liz: So first of all, to me, I couldn't be any more sad that it's pronounced NAIAC, which I just, of all acronyms, come on, that's NAIAC. It's a town in Long Island, isn't it?
00:30:07 Noah: It's a silent A.
00:30:09 Liz: So we're the NAIAC, and yeah, it's been an incredible experience. It was hard to believe at first that I was receiving email from the Secretary of Commerce, but you live, you learn, and then you get used to it. Myself and I think 26 other members from academia, civil society, and industry, from all the big tech companies, convene a few times a year to write reports for the president and for Congress, and to work with agencies and agency heads: A, answering their questions about how, practically, this tech works, and B, giving varying perspectives as a consensus-seeking body to try to figure out what we do about it.
00:30:44 Liz: And so in recent months it's been absolutely a full court press on trying to figure out the answers to some of the thornier questions. Really none of these very thorny questions were tackled in the first volley of AI regulation that we saw in the United States at the federal level, which was the executive order that President Biden signed last year. The executive order focused mainly on the way the government itself buys and uses and deploys AI, which is a really big deal. I think a lot of people would have wanted to see much stricter rules, but those would require Congress, and let's be frank about it.
00:31:23 Liz: Congress is saying a lot of things. They're posturing as though they're gonna do something big on this space after this newest, maybe the third task force has been created. But at the core, the country is still incredibly divided, and it's hard for me to see a future where they can come to consensus on things like labor rights or deep fakes and free speech, moderation on social media platforms. All of these things have grave implications and there just is still a really deep ideological divide.
00:31:50 Liz: But if we think about the way the security field evolved over the last 20 years, similar to the way things seem to be heading in AI, it started with the government imposing requirements on companies that wanted to sell their technology to the government. And the way the government buys technology has a way of filtering down into the rest of industry. There's really no law requiring that when we go to do business with XYZ Bank, we have to prove SOC 2 compliance. It's just a good idea from a business perspective to check those boxes. And I think we're seeing the same sort of flow from ideas to standards, then to the government regulating itself, and then eventually to that filtering down into commerce. That's kind of the way I see it happening in the US. It's obviously a very different story overseas.
00:32:37 Noah: So if I hate politics and I want to play with this and deploy it in my business, is it fair to say that if I'm using a solution like Vera and I've got this kind of platform intermediary, you guys are going to be letting me know and helping me implement those things without me having to take engineering time a year and a half from now, once we have something in production, because Congress decided to implement some ban on particular words or whatever it might be.
00:33:03 Liz: Absolutely. Now, we're not a perfect one-size-fits-all solution for every single thing, but yes, we check a lot of boxes, especially when it comes to security and data management in production, and of course some of the bigger risks, like using generative AI to commit crimes. All of these things can be stopped or prevented or mitigated with the Vera platform. And similarly with the European Union's regulation, which has recently been signed and accepted: two years from now, once it comes into enforcement, you're going to have to create an impact assessment and demonstrate to the relevant authorities that you have taken steps to mitigate the risks you can identify.
00:33:39 Liz: And Vera is a risk mitigation solution for various points of uncertainty and risk along a deployment's lifetime. So yes, we absolutely are. And it wasn't because we knew any of this stuff was coming. It's just what I would tell the government to build or to require, because they're tractable problems, stuff that software can actually address right now in a much deeper and more pressing way. We're never going to be, and I don't think anything can be, the thing that stops the computers from paper-clipping all of humanity into oblivion. I guess we could put a filter on 'paperclip,' but I don't know that that's exactly the right approach.
00:34:19 Noah: Yeah, no, I'm following for sure. So is it fair to say, if I'm thinking as an executive in a company that wants to start implementing generative AI, there's one thought process, which is: I care about AI safety because, and you can't see my bunny ears in the air on a podcast, because I'm woke, or just for my own morals and whatever. And then the other thought process is: if you want to be in production, you are going to be required to do certain things over the next two to three years from a regulatory standpoint. And if you don't do it now, you're just setting yourself up for cost, tech debt, pain, and you're likely gonna have to rip out solutions and re-implement them later on. So starting with something like Vera lets you know that, if it's working and you get it deployed, you can easily navigate those regulations, do those impact assessments and whatnot, as opposed to having to start over from scratch.
00:35:15 Liz: That's exactly right. Noah, I knew you got it when we met all these months ago, but it's so true. And I am so sad to watch the AI safety, AI risks conversation devolve into whether or not you're woke. It's not that simple. There are great business reasons why anybody would want to control the behavior of models that aren't deterministic, that are actually predictive in production and to do it in a smart way that can help reduce risk, increase your brand safety and still impact the bottom line positively. And we do all of those things, right? Whether or not it's about a particular brand of safety.
00:35:52 Liz: So that's been really sad to watch because it's detracting from the real conversations about what are the consequences if we were to get this wrong? And there are pretty significant consequences for getting it wrong. I mean, a lot of the people who come to us in the sales pipeline are people who went to production without anything in place, and then all of a sudden they're having to interface with the FBI. Somebody tried to create child sexual exploitative material or people are committing crimes and they're getting subpoenaed about whatever it is, like hacking attempts or other forms of deep fake manipulation. So there's a great deal of energy on the enforcement level about AI gone wrong.
00:36:28 Liz: And we don't need new laws to do that. The FTC, the CFPB, the EEOC, and lots of other agencies have a really, really big say: they can call you in front of their agency heads, subpoena all your materials, and figure out whether you're sufficiently governed. The most recent example of this was Rite Aid, which was subject to a very severe consequence when the FTC came calling. The FTC said it was negligent the degree to which they had not monitored or sufficiently mitigated the risks of false positives and false negatives, and it wasn't even generative AI, it was a regular computer vision classification task that went wrong and falsely accused a bunch of people of crimes, which ended up with them being banned from the store.
00:37:14 Liz: So this is just one of many examples that we're seeing this really intense energy around enforcing existing laws and Vera can help you not run afoul of those existing laws in addition to future proofing you against the ones that come.
00:37:29 Noah: And that Rite Aid example, just so people can be properly scared, is an example where I don't believe it was in-house either, right? They were using a third-party vendor to do some of this. Still got in trouble because they didn't have their own checks and balances in place internally. So all the more reason to go take a look at a solution like Vera. All right, so we're coming up on time. So speed round. These are some questions that we throw out to founders on the end of all these podcasts. What is a book you're reading or a podcast you're currently enjoying? Besides this one, because it's probably the top of your list.
00:38:00 Liz: Clearly this one is at the top of my list, especially this episode, Noah. But no, actually, I'm really excited: The Three-Body Problem just came on my radar again, and I'm enjoying the TV show. I think people at Everywhere have interviewed me once before, and I'm the nerdiest of all nerds, I love Golden Age science fiction. So it's just a very cool topic. I think the show is pretty great. I'm excited to read the book.
00:38:26 Noah: Okay, awesome. Number two, if you could live anywhere in the world for one year, where would it be?
00:38:31 Liz: Bali, right? Who doesn't answer Bali?
00:38:34 Noah: Just bring a Starlink satellite, right?
00:38:36 Liz: Oh God.
00:38:37 Noah: Number three, favorite productivity hack.
00:38:40 Liz: You know, honestly, I was just asked this question recently. As a co-founder of a company, I'm working constantly. I'm never not working. But I've recently learned to give myself permission to stop once I reach a point of diminishing returns. Once you start to get that inkling, like, I'm kind of tired, I'm not sure this is my best work, you can either power through and get to the end of it, or you can take a break and come back an hour or two later.
00:39:05 Liz: I'm just not a huge fan of the belief that people have to be sitting in one place for 11 hours to really be productive, to build world-changing technology. We're all human beings, we're all different. So we love being a fully remote culture that helps people get the most out of themselves in their own unique ways. And I have to give myself permission to recognize that I'm also the person who maybe wants to work at midnight or at 6 AM. It kind of doesn't matter. When I feel that motivation, I take advantage of it, and I give myself permission not to when I don't.
00:39:38 Noah: I'll double down on that. Say one of my biggest productivity hacks was having kids and it sounds so counterintuitive, but the forcing function of having to figure out what you have to do in which order and knowing when you're done, you're done was amazing. Number four, and then I'm going to add a four point B after this, but where can listeners find you if they want more information, whether it's on the AI side, the Vera side, or on the work that you're doing from a regulatory standpoint? And then also, what types of companies are you looking for? Who should reach out to you on the company side?
00:40:10 Liz: Yeah, super good. Well, on the AI side, on the government side, you can find us all at AI.gov. And I highly encourage you to sign up for notifications because we have public meetings all the time. And we would love to hear your comments. We actually have to read and respond to every single one. This is a soapbox that I will never not have. And it means that you can also participate and we need you to participate in this conversation about what to do about AI. So that's thing one.
00:40:35 Liz: You can find us at Vera@askvera.io. That's also our Twitter handle. And I'm Liz J. O'Sullivan everywhere from Twitter to Threads to Instagram, all of the different places, is the same handle. Liz O'Sullivan, of course, is my LinkedIn. You can reach out to us there. And right now we're very interested to be hearing from companies that are deploying generative AI, especially in a customer facing way. You don't have to be a big company. You could be a small company. Our API pricing is extremely affordable.
00:41:02 Liz: So less than you would expect by leagues. And you can come in and have us craft these very detailed policies based on templates that we've built over and over again, and we're happy to help you get started and hopefully to do it faster.
00:41:14 Noah: And it's definitely cheaper than adding it later.
00:41:17 Liz: Yeah. Don't rebuild all your infrastructure from scratch. Just do it right the first time. Use Vera.
00:41:23 Noah: Awesome. Well, Liz O'Sullivan, Co-Founder, CEO of Vera, making it so that AI can be productive, cost-effective, and safe. Thank you so much for spending time with us today on the Venture Everywhere podcast, and look forward to talking to you soon.
00:41:36 Liz: Thanks, Noah. This was fun.
00:41:38 Noah: Have you watched Toy Story?
00:41:40 Liz: Yeah.
00:41:41 Noah: Remember at the end when she's like, are they gone? I'm still smiling.
00:41:43 Liz: Yes, totally. You should include that in the podcast.
00:41:48 Noah: That could be the end.
00:41:50 Scott Hartley: Thanks for joining us and hope you enjoyed today's episode. For those of you listening, you might also be interested to learn more about Everywhere. We're a first-check pre-seed fund that does exactly that: invests everywhere. We're a community of 500 founders and operators, and we've invested in over 250 companies around the globe. Find us at our website, Everywhere.VC, on LinkedIn, and through our regular founder spotlights on Substack. Be sure to subscribe and we'll catch you on the next episode.
Check out Liz O’Sullivan in Founders Everywhere.