Venture Everywhere Podcast: Liz O'Sullivan with Sylvia Kuyel
Sylvia Kuyel, founding partner of Fund XX as part of Everywhere VC, catches up with Liz O'Sullivan, co-founder and CEO of Vera, on Episode 15: Vera-fy Your A.I.
Listen on Apple, Spotify, Pandora!
In episode 15 of Venture Everywhere, Sylvia Kuyel, founding partner at Fund XX as part of Everywhere VC, interviews Liz O'Sullivan, founder and CEO at Vera, a startup leader and AI advocate advising early and mid-stage companies on Artificial Intelligence design practices, strategies, and product development. Liz discusses her startup experience and her mission to promote responsible AI. She highlights the unpredictable nature of AI and the necessity for human oversight. Liz also addresses challenges in scaling teams and fostering inclusivity in remote work environments. She believes AI can advance science but stresses the importance of governance and policies to protect marginalized communities.
In this episode, you will hear:
The importance of being practical and realistic about the capabilities and limitations of AI, as well as the need for human control over its use.
Vera as a gateway for companies looking to deploy AI and to foster its adoption while reducing risks.
Potential benefits and risks of AI, including job automation, bias, and the transformation of various industries.
Liz’s personal journey in the AI field, including her work at a computer vision company and her decision to leave a job involving military applications of AI.
The importance of understanding both the long-term risks and the immediate risks of AI, and the need for collaboration and compromise within the AI community.
If you liked this episode, please give us a rating wherever you found us. To learn more about our work, visit Everywhere.vc and subscribe to our Founders Everywhere Substack. You can also follow us on LinkedIn and Twitter for regular updates and news.
FULL TRANSCRIPT
00:00:01 Jenny Fielding: Hi and welcome to the Everywhere podcast. We're a global community of founders and operators who've come together to support the next generation of builders. So the premise of the podcast is just that: founders interviewing other founders about the trials and tribulations of building a company. I hope you enjoy the episode.
00:00:21 Sylvia: Hi, I'm Sylvia. I am founding partner at The Fund XX as part of Everywhere VC. We invest in women CEOs and I've got a long career as an operator running customer success and customer revenue, most notably at Cloudflare. And today I am incredibly excited to be interviewing Liz O’Sullivan, founder and CEO at Vera. Liz, I'll let you say a short intro about yourself before we dive in.
00:00:47 Liz: Hi everybody. Thanks Sylvia. It's nice to be here. Super great to be working with your fund. We're really big fans of Jenny and everything that you guys have been doing up at Everywhere. My background, I've been doing startups my whole career. More than twelve years on the business side of AI companies dating back into the good old days when it didn't work very well at all. Super excited to be here.
00:01:08 Sylvia: Amazing. Well, I will say every conversation I have with Liz is an inspiring one. So excited to see what we uncover today. And maybe with that, Liz, I think it'd be great for the audience to hear a little bit about the work you're doing now, the impact you're hoping to make.
00:01:27 Liz: Yeah, thank you so much. It's, I think, a pretty well known problem at this point that AI is exciting, it's powerful, it's increasing productivity, it's making our imaginations run wild in every industry, from medicine to the legal field, to science and research. But the practical side of AI is that it's kind of unpredictable. And when it screws up, it screws up in some crazy ways that may actually cause damage.
00:01:58 Liz: Not just to companies who are deploying it to their brands, of course, but also to society and to the world at large. And my co-founder and I have been working on AI for long enough to where we remember the times where a lot of what we see today was just science fiction. And nowadays it's part of our mission to be responsible for shepherding this technology into the world in a reliable and predictable way that can help reduce its risks. We believe in AI and we want it to flourish. We want to foster its adoption here and globally, but we're not going to be able to do that unless we're practical and realistic about what it's good at, what it's bad at, what it can do now, what it maybe can do one day, and the places where we need to maintain real human control over what it can and can't do.
00:02:55 Sylvia: Mhmm. That's a huge problem. How and when do you decide you're successful?
00:03:01 Liz: Well, so I know where we'll be successful is where we act as sort of the gateway for any company looking to deploy AI, anybody looking to use it in an enterprise setting, that all of these transactions come and flow in and out of our platform. Honestly, a better measure of success would be where AI is ubiquitous. It's in wherever it is meant to be, where we can demonstrate that it actually is useful and that you truly can trust its responses. And you can trust your team to use it because they all understand what it's good at, what it can't do, and what sorts of things they just shouldn't share with a model for one reason or another.
00:03:42 Sylvia: Yeah, there's a lot of optimism and fear in the market today around AI. Do you orient more towards one or the other?
00:03:52 Liz: It depends on the day, absolutely. Some days I truly do worry about this technology, whether or not we're ready for it. There are plenty of use cases where people ascribe human properties to what are essentially chat bots, and they think that they can trust models to guide them in making really critical human-level decisions about themselves, about their partners, about their mental health, their physical health, and are inclined to trust them.
00:04:24 Liz: Research shows time and time again that people, certain types of people, are more willing to trust machine outputs than they are what I would call the outputs of a human being, or actually a conversation, right? The flip side of it is that we see research and applications of AI that are just absolutely mind blowing and that transform the way that we understand our bodies, ourselves, our cellular structures, the implications for longevity and medicine, and notoriously difficult to cure diseases like cancer are unending, and the list is long. And so it's one of these questions of, yes, it is both a powerful tool that's exciting and a risk.
00:05:07 Liz: But the key part to get right is how do we do it? How do we implement it? What are the tools and frameworks and of course, laws and policies that we need to do it in a reasonable way? That doesn't always mean you have to move slowly, but it does mean that you have to move intentionally. And that's exactly what we built our platform to do.
00:05:26 Sylvia: So, Liz, I know that you, as a human, have a really incredible story as to how you came into AI and have become a meaningful voice in the space. It'd be great if you could share with us those critical points that got you on this path.
00:05:46 Liz: Oh, my gosh, sure. I'm happy to. It is kind of a fun and weird story. It's certainly not what I imagined for myself when I took my first startup job twelve years ago. All I wanted to do was work in a cool tech company that was working on stuff that was interesting. And to be completely frank, that first company, AI was sort of an afterthought. It was, how do we make this automation happen in a way that would reduce the amount of work that recruiters needed to put into reading resumes and potentially reduce what they thought at the time, human bias.
00:06:23 Liz: And I was there for five years. We were like a family. And I only left because I got this incredible opportunity to work at a computer vision company. I didn't realize it at the time, but this was a critical juncture. It was 2017. GPUs had really just become more commonly adopted for this use case. They had made some really significant architectural advancements and benchmarking advancements. And my first day was like, absolutely a nuclear bomb went off in my brain. And it was insane. The things that I saw the technology doing, and this is five years ago now, or six years ago now, and it was just so inspiring that I knew that we had to do whatever it took. And that company was doing so many world changing things.
00:07:09 Liz: We were helping authorities catch really bad actors in platforms that couldn't moderate manually petabytes of data. We were diagnosing diseases. And I look back and wonder, to what degree was it really clinically reliable? And I don't know. At the time, we were just way before anybody knew what the limitations of the technology were. Customers would come to us and they would say, can AI do this thing? And we would say, I don't know. Let's try it. Let's give it a shot. And there were lots of things that it turned out it could do really well, other things that it really did not do very well at all.
00:07:45 Liz: And so when we got this very large contract, the company's largest contract of all time, to work with the government, I was thrilled. I was so excited. My team was responsible for labeling data and creating data sets to prevent and debug a lot of the worst things that can go wrong with AI, like discriminatory bias. And boy, do I have some battle scars from that. But when we found out we were working not just with the government, but with the military, it took on a whole new character. And I continued to support that work. I grew up in a very patriotic family. Everybody was so excited that we had this opportunity to give our country the best tools.
00:08:24 Liz: But again, it just lingered in my head. The ways the technology failed were so hard to predict and so hard to transfer to a new domain. And when I eventually figured out that this was not just a military project, it was to do with drone photography, surveillance, and perhaps even automated targeting. I essentially asked the CEO, are we building Skynet? And he answered me in the affirmative that we were. It wasn't that cute or simple, and I totally respect people who are okay with that, that they feel that this technology is going to save lives. And there's a great argument for that.
00:08:59 Liz: For me, it wasn't right. And I needed to quit that job. And so I did. I ended up writing for the ACLU, accidentally going viral, and joining a campaign to stop killer robots at the UN Convention on Certain Conventional Weapons in Geneva. It was an absolutely insane year that led me to where I am today.
00:09:20 Liz: So I would absolutely do it again. It was a moment that changed my life forever, but I also wouldn't wish it on anybody. It was a very stressful time. Not everybody has to make that decision at that point in their lives.
00:09:34 Sylvia: Yeah. And going viral got you on a path where you continue to be in the center of conversations related to AI.
00:09:45 Liz: That's right. It's incredibly humbling. When I got the email from the Secretary of Commerce telling me that I had been nominated to be an inaugural member of the National AI Advisory Committee and to write reports for the President and for Congress, I thought it was spam. I didn't want to click any of the links. I was so worried. I brought my best friend in and I said, is this real? Is this really happening? And then of course, it turned out to be real.
00:10:14 Liz: So it's been an incredible 18 months working on the committee. And our first report just went to the President's desk about six months ago. Right now we're working on some additional recommendations around worker rights as we start thinking about mass adoption of this technology and the implications it has for not just creative workers, but for workers in every industry.
00:10:35 Sylvia: Amazing. So I think to an outsider, it sounds like you're on top of the world in the hottest space of the moment. But bring us back to the ground. What's the biggest challenge you're facing professionally these days?
00:10:52 Liz: Oh, my gosh. It's going to be so cliche, because I think every startup is facing a lot of the same challenges today, which is the nitty gritty of scaling the team, building the policies, growing an inclusive and wonderful culture that encourages everybody to be their best selves. And doing it in a fully remote setting creates a whole lot of white space that we need to fill with culture, in a text form, right, or video chat form.
00:11:26 Liz: We are racing to market with a lot of eager companies that try to say things like, our models don't need bias protection, they're already safe, and so on and so forth. And a lot of smart companies understand this is just marketing copy. But it can be really easy to drink the Kool Aid when you start to see the productivity gains and all of the benefits to smart automation that AI has been promising for so long.
00:11:51 Liz: So we're just like everybody else. We have a team and a goal and a will to accomplish it. And now it's all just about execution. So check back in a year and some change and we'll let you know how it went.
00:12:04 Sylvia: Amazing. So now let's look past the year. Let's think 5, 10 years. What do you think the big vision is?
00:12:14 Liz: The big vision is: I don't want to be the typical inflammatory tech bro and overstate the claims, but really and truly we are trying to bring about the Star Trek future of abundance for everybody. I think that the near-term risks of AI are severe and urgent. And we think about things like job automation. We think about things like the transformation of art and the way that it's affecting creative workers, and the civil rights implications of the way that the technology mishandles minority categories in every group, whether that's gender-based, racially oriented, or disability-based. And not to detract from any of those things.
00:13:02 Liz: But it just takes a little bit of imagination to see a world, maybe it's not ten years away, but maybe it is, where we can think about lives of leisure that aren't focused around the grind that we have in America, where our basic needs are taken care of for us through machines, through the opportunities, through good government. And I know that seems like it's a difficult thing for us to imagine, as tough as things are right now, but we have to have hope. And that optimism is what keeps me going every single day.
00:13:35 Sylvia: Yeah, that's very much on the pro side. What gives you that confidence that you think this is going to be a net positive on society?
00:13:44 Liz: It's a seriously important question, and to me, it just comes down to the way that this technology is accelerating science. The project that I'm thinking of most often is DeepMind's protein-structure work. The methods that they used were actually not generative AI at all. They were reinforcement learning-based approaches to figuring out the structure of every protein in our body, a task that we previously thought impossible. And especially with the acceleration of healthcare funding due to the pandemic, et cetera, we're starting to see the pace of life-saving technologies making it through FDA approval accelerate.
00:14:22 Liz: And that's just biology. We can also think about AI applications in physics. So I think a lot of people make mistakes in thinking that all AI is racist, toxic, sexist, et cetera. AI is a huge umbrella term that encompasses a great many technologies, and some are a lot more straightforward than others. And so when we think about what is AI good at, it's really good at problem spaces where we actually understand what's going on. So, for instance, in biology and in physics, there are rules that are inviolable. If you understand those rules, you have a clear definition of what correct looks like.
00:15:03 Liz: Even if it can't figure out the right answer, it can rule out a lot of wrong ones. So you take this infinitely large space of possible answers, shrink it down to six or seven, and then apply humans to conduct the experiments that we need. How do we get to fusion and provide power for every person on the planet? How do we make the best use of food resources? How do we desalinate water in a cost effective way? I believe that AI has a lot to contribute to the field of science and to making it move faster and faster.
00:15:35 Liz: The questions are, is our society ready for that? Do we have the right governance? Do we have the right people-based governments and systems in place to protect everybody so that the marginalized don't get left behind? I don't know the answer to that question, and it's why I spend so much time working on policy and trying to educate lawmakers about not just its potential, but how it's causing real harm right now.
00:15:57 Sylvia: Absolutely. What's one area that, let's say, others in the field take as a possible truth and you just totally disagree with it?
00:16:10 Liz: So I have to pick one?
00:16:11 Sylvia: You get whatever you want. You can give us a portfolio.
00:16:16 Liz: There's so many. It's interesting because the field of, I'm going to make a really broad statement here, AI risk incorporates the views of some very, very different communities. On one side there's what's become known as AI safety, which focuses almost exclusively on long-term risks. And then on the other side there's AI ethics, which focuses almost exclusively on the risks to people alive today. A lot of the biggest AI firms have been essentially centered around philosophies of effective altruism, which is like utilitarianism on steroids. I love that I get to use my philosophy degree in my tech career, by the way.
00:17:01 Liz: It's like, couldn't have planned it any better. But these people, they say that the thing that we should be the most concerned about is existential risk when it comes to AI. And right now, the field is kind of at odds with each other, and these two groups do not like each other at all. So I'm kind of an outlier in that I walk both sides of the equation. And I do believe that we should be worried about existential risk. I think there are some real threats here. I also believe that we should be worried about the people who are alive today, probably as the priority. And I don't believe that those are mutually exclusive.
00:17:35 Liz: So it's kind of fascinating to watch this divide widen. And I don't love what it says about our ability to come to a compromise as a society. But I still need to maintain that optimism and think that there's a way. We're all reasonable, we're all human beings. We can see each other's perspectives if we try hard enough. And I think it's just going to take some grand scientific advancements. And these are in the works already, probably have been for many years, to really bring one side to the other side and get everybody at the same table to talk it all out.
00:18:08 Sylvia: Yeah. Would you say that your ability to span both sides is your superpower or would you say something else has really been - what has helped you thus far?
00:18:22 Liz: I think that my ability to understand people, which is honed from a long career in business and trying to serve customers and to interpret their needs and to be proactive about giving them what they want and what they need, that's my superpower. I'm very empathic. I think it makes me a good leader. I think a lot of the things that sometimes men point to in leadership and say, like, this is a weakness. I find those to be my strengths. And so I love that I can bring that to a field that is overwhelmingly male and white and straight and it just makes me a unique player. My co-founder as well. We've both sort of been through the wringer in tech our entire careers and now having a shop of our own is just an absolute dream come true.
00:19:09 Sylvia: I love that, Liz. Okay, speed round time. You're reading or podcast you're enjoying?
00:19:18 Liz: I'm a huge dork, so I love golden age science fiction and my best friend has just forced my hand and I have agreed to start reading the Culture novels. So I have not read them yet. They're very dense and thick and complicated, but I'm really excited to try.
00:19:34 Sylvia: Awesome. And if you could live anywhere in the world for one year, where would it be?
00:19:40 Liz: I think I would live right where I just got back from vacation. My first vacation since we founded the company, and that was in Sicily. So beautiful.
00:19:50 Sylvia: Oh, so jealous. Okay, favorite productivity hack?
00:19:55 Liz: I'm not going to even pretend - it's ChatGPT. I love bouncing ideas off of it. It's really fun. And when I don't like its answers, I just ask it to rewrite it in iambic pentameter or something equally nerdy and it makes it that much more fun.
00:20:09 Sylvia: That is so appropriate for you to have chosen. And last but not least, where can listeners find you?
00:20:18 Liz: I am less and less on Twitter these days, I'm not going to lie, but I do exist there. My handle is @LizJOSullivan. You can find Vera at our website, AskVera.IO, and on LinkedIn.
00:20:33 Sylvia: Thank you, Liz. It's always inspiring to catch up with you.
00:20:36 Liz: So great to see you again. Sylvia, thank you so much.
00:20:41 Scott Hartley: Thanks for joining us and hope you enjoyed today's episode. For those of you listening, you might also be interested to learn about Everywhere. We're a first-check pre-seed fund that does exactly that: we invest everywhere. We're a community of 500 founders and operators, and we've invested in over 250 companies around the globe. Find us at our website, everywhere.vc, on LinkedIn, and through our regular founder spotlights on Substack. Be sure to subscribe and we'll catch you on the next episode.