Hello, and welcome to Decoder! This is Jon Fortt, CNBC journalist, cohost of Closing Bell Overtime, and creator and host of the Fortt Knox podcast. As you just heard Nilay say, I'm stepping in to guest host a few episodes of Decoder this summer while he's out on parental leave, and I'm very excited about what we've been working on.
For my first episode of Decoder, a show about how people make decisions, I wanted to talk to an expert. So I sat down with Cassie Kozyrkov, the founder and CEO of AI consultancy Kozyr. She's also the former chief decision scientist at Google.
For a long time, Cassie has studied the ins and outs of decision-making: not just decision frameworks but also the underlying social dynamics, psychology, and even, in some cases, the role that the human brain plays in how and why we make certain choices. This is an interdisciplinary field that Cassie calls decision intelligence, which mixes everything from statistics and data science to machine learning. Her expertise landed her a top advisor role at Google, where she spent nearly a decade helping the company make smarter use of data.
In recent years, her work has collided with artificial intelligence. As you'll hear Cassie explain it, generative AI systems like ChatGPT are making it easier and cheaper than ever to get advice and analysis. But unless you have a clear vision of what it is you're looking for, and what values underlie the decisions you make, all you'll get back from AI is a lot of messy data.
So Cassie and I really dug into the science behind decision-making, how it intersects with what we're seeing in the modern AI industry, and how her current work in AI consulting helps companies better understand how to use these tools to make smarter decisions that can't just be outsourced to agents or chatbots.
I also wanted to learn a little bit about Cassie's own decision-making frameworks and how she made some key decisions of her own, such as what to pursue in graduate school and why she decided to leave academia for Google and then strike out on her own just as the generative AI boom was really starting to kick off. This is a fun one, and I think you're really going to like it.
Okay: decision scientist Cassie Kozyrkov. Here we go.
This transcript has been lightly edited for length and clarity.
Cassie Kozyrkov, welcome to Decoder. I'm going to welcome myself to Decoder too, because this isn't my podcast. I'm just having a good time punching the buttons, but it's going to be a lot of fun.
Yeah, it's so great to be here with you, Jon. And I guess we two friends managed to sneak on and take over this podcast, so I'm really excited for the mischief we'll cause here.
Let the mischief begin. So "former chief decision scientist at Google," I think, starts to frame what it is you're good at, and we're going to get into the implications for AI and leadership and technology and all that. But first, let's just start with the basics. What's so hard about making decisions?
Depends on the decision. It can be very easy to make a decision, and one of the things that I advise people is, unless you're a student of decision-making, your number one rule should be to try to match the effort you put into the decision with what's at stake in the decision. So, of course, if you're a student, you can go and agonize over, "How would I apply a decision-theoretic approach to choosing my sandwich at lunch?" But don't be doing that in real life, right?
Slowing down, thinking carefully, considering the hard decisions, and doing your best by them is, again, for the important decisions that will touch your life. Or, even more critically, the lives of thousands, millions, billions of other people, which is something that we see with technology that scales.
It sounds like you're saying, in part, that knowing what's at stake is one of the first tough things about making decisions.
Exactly. And knowing your priorities. So one of the things that I find really fascinating about what AI, in the large language model chatbot sense, is doing today is that it's making answers really cheap. And when answers become cheap, the question becomes really important. Because what used to happen with decision-making for, again, the big, thorny, data-driven decisions was that a decision-maker might come up with something and then ask the data science team to work on it. And then by the time that team came back with an answer, it had been, well, a week if you were lucky, but it could have been six weeks, or six months.
In that time, though, you actually got the opportunity to think about what you'd asked, refine what it meant to you, and then maybe re-ask it. There was time for that shower thought, where you're like, "Oh, man, I should not have phrased it that way." But today, you can go and have AI attempt an answer for you, and you can get an answer really quickly.
If you're used to just immediately running in the direction of your answer, you won't think as much as you should about, "Well, how do I test if this is actually what I need and what's good for me? What did I actually ask in the first place? What was the world model, if you like? What were the assumptions that went into this decision?" So it's all about priorities. It's all about knowing what's important.
Even before we get there though, staying at the very basic level, how do people learn to make decisions? There's the fundamental idea that if you touch a hot stove, you do it once and then you know not to do that again. But how does the wiring in our brain work to teach us to become decision-makers and develop our own processes for doing it?
Oh, I didn't know that you were going to drag my neuroscience degree into this. It has been a while. I apologize to any actual practicing neuroscientists that I'm about to offend. But at least when I was in grad school, the models that we had for this said that you have your dopaminergic midbrain, which is a region that's very important for movement and for executing some of what you would think of as the more instinctive behaviors, or those driven by basic rewards, like sugar, avoidance of pain, those kinds of rewards.
So you have what you might think of as an evolutionarily older structure. And isn't it fascinating that movement and decision-making are similarly controlled in the brain? Is a movement a decision? Is taking an action the same thing as making a decision? We can get into that. And then there are other structures in the prefrontal cortex.
Typically, your ventromedial and dorsolateral prefrontal cortices will be involved in various kinds of what you would think of as effortful or slowed-down decisions, such as the difference between choosing a stock because, I don't know, you feel like it and you don't even know why, and sitting down and actually running some numbers, doing some research, integrating all of that, and having a good, long ponder as to what you should do.
So broadly speaking, different regions from different evolutionary stages play into decision-making. The prefrontal cortex is a little newer. But you have these systems, sometimes acting in a coordinated manner, sometimes a little in conflict, involved in decision-making. But what we also really cared about back in those days was moving away from the cartoonish take that you get in popular science, that you just have one region and it just does this one thing and it only does this thing.
Instead, it's an entire network that is constantly taking in inputs and processing all of them. So, of course, memory would be involved in decision-making, and, of course, the ability to imagine, which you would think of more as engaging your visual occipital cortices, would definitely be involved in some way or other. So it's a whole thing. It's a whole network of activations that are implementing human decisions. To summarize this for you, Jon: neuroscientists have no idea how we make decisions. So that's the funny conclusion, right?
What we can do is prod and pry and get some sense of it, but at the end of the day, the actual nitty-gritty of how humans make decisions is a mystery. What's also really funny is that humans think they know how they make decisions, but quite often you can plant a decision, and unbeknownst to your participants, as we call them in the studies (I'd say victims), the decision was made for them all along. It was primed in some way. Certain inputs got in there.
They thought they made a decision, and then afterward you ask them: So why did you pick red and not blue? They will sing you this beautiful song, explaining how it was their grandmother's favorite color or whatever it is. Meanwhile, the experimenter implanted that, and if you don't believe me, go see a magic show. It's the same principle, right? Stage magicians plant decisions in their audiences so reliably that the show wouldn't work otherwise. I'm always fascinated by how seriously we take our human ability to know and understand ourselves, and how much agency we feel we've got, side by side with professional stage magicians entertaining crowds every day.
But it sounds to me like maybe what really drives decisions, and maybe this motion and movement region of the brain is part of it, is want: what we want. When we're babies, when we're toddlers, decisions are: Do I get up? Am I hungry? Do I cry? It's basic stuff that has to do with mostly physical things, because we're not intellectuals yet, I guess.
So you need to have a want or a goal in order for there to be a decision to be made, right? Whether we understand what our real motivation is or not, that's a key ingredient, having some kind of want or goal in decision-making.
Well, it depends how you define it. So with all these terms, when you try to study decision-making in the social and biological sciences, you'll have to take a word, such as "decision," which we use casually however we like, and then you'll have to give it a little box that makes that definition more concrete. It's just like saying "let X equal…" at the top of your page when you're doing math. You can say let X equal the speed of light, and from now on, whenever I write X, it means the speed of light. And then for some other person's paper, let X equal five, and then whenever they write X, it means five.
So similarly, we say, "Let decision equal…" and then we define it for our purposes. Typically, the way decision analysts define a decision, the way they do their "let decision equal…" at the top of their page, is to say that it is an irrevocable allocation of resources. Then it's up to you to think about, again, how you want to define what it means for the allocation to be irrevocable, and what it means for the resources to be allocated at all.
Is this an act that a human must make? Is it an act that a system downstream of a human might make? And what are resources? Are resources just money, or could they include time? Or opportunity? For example, what if I choose to go through this door? Well, in this moment, in this universe right now, I didn't choose to go through that door, and I can't go back. So in that sense, absolutely every movement that we make is an irrevocable allocation of resources.
And in companies, if you're Google, do you buy YouTube or not? I mean, that was a big decision back then. Do I hire this person or that person? If it's a key employee role, that can have a huge impact on whether your company succeeds or fails. Do I invest in AI? Do I or don't I adopt this technology at this stage?
Right, and you can choose how to frame that to make it definitionally irrevocable. If I hire Jon right now, at this point in time, then I'm maybe giving up doing something else, such as eating my sandwich instead of going through all the paperwork of hiring Jon. So I could think of that as irrevocable. Or: if I hire Jon, I might be able to fire Jon tomorrow and release whatever resources I cared about more than time and current opportunity. So then I could treat it as if I'm able to have a two-way door on this decision.
So really, it depends on how you want to frame it, and then the rest will somewhat follow in the math. A big piece of how we think about decision-making in psychology is to separate it into judgment and decision-making.
Judgment is separate from decision-making. Judgment comes in when you undertake all the effort of deciding how to decide. What does it actually mean for you to allocate your resources in a way without take-backsies? So it's up to the decision-maker to think about that. What are we measuring? What's important? How might we actually want to approach this decision?
Even saying something like, "This decision should be made by gut instinct rather than by effortful calculation," is part of that judgment process. And then the decision-making process that follows is just riding the mathematical consequences of whatever judgment setup you made.
So speaking of setup, give me the typical setup. Why do clients hire you? What kinds of positions are they in where they're like, "Okay, we need a decision scientist here"?
Well, typically, the big ones are those involving deployment of AI systems. How would you think about solving a problem with AI? That's a big decision. Should I even put this AI system in place? I'm potentially going to have to gut whatever I'm already using. So if I've got some handcrafted system some software developers have already written for me, and I'm getting reasonably good results from that, well, I'm not just going to throw AI in there and hope for the best. Actually, in some situations you would do that, because you want to say, "I'm an AI company," and so you want to default to putting the AI system in unless you get talked out of it.
But quite often it's effortful, it's expensive, and we want to make sure that it's going to be good enough and right for that company's situation. So how do we think about measuring that, and how do we think about the realities of building it so it has all the features we would require in order to want to proceed? It's a huge decision, this AI decision.
How much do a leader's or a company's values matter in that assessment?
Incredibly. I think that's something that people really miss when it comes to what looks like data or math-y situations. Once we have that bit of math, it looks objective. It looks like "you start here, you end up there," and there was only one right answer. What we forget is that that little math piece and that data piece and that code piece form a thin layer of objectivity in a big, fat subjectivity sandwich.
That first layer is: What's even important enough to automate? What's important enough to do this in the first place? What would I want to improve? In which direction do I want to steer my business? What matters to me? What matters to my customers? How do I want to change the world? These questions have no one right answer, and will need to be articulated clearly in order for the rest to make sense.
Companies tend to articulate those things through a mission statement. But very often, at least in my experience, those mission statements aren't nearly detailed enough to guide the granular and deep series of events that AI is going to lead us down, no?
Absolutely, and this is a really important point that blossoms into the whole topic of how to think about decision delegation. So the first thing leaders need to realize is that when they are at the very top of the food chain in their organizations, they don't have the time to be involved in very granular decisions. In fact, most of the job is figuring out how to delegate decision-making to everybody else, choosing whom to trust, or what to trust if we're going to start to delegate to automated systems, and then letting go of that decision.
So you don't want to be asking the CEO about nitty-gritty topics around, let's say, the cybersecurity pieces of the company's shiny new AI system. But what the company needs to do as an organization is make sure that somebody in the project is thinking about all the components that need to be thought about, and that it's all delegated to the right people. So part of my role then is asking a lot of questions about what's important, who can do this, how we put it all together, and how we make sure that we're not operating with any blind spots or missing any components.
How ready are clients, typically, to provide you with that information? Is that a conversation they're used to having?
Again, we've come a long way, but for the longest time, as a civilization working with data, we've been fascinated by just being able to potentially do a thing even if we don't know what it's for. We thought, "Isn't it cool that we can move this data? Isn't it cool that we can pull patterns out of it? Isn't it cool that we can store or collect it at scale?" All without actually asking ourselves, "Well, where are we going, and how are we going to use it?"
We are growing out of that painful teething phase where everyone was like, "This is fun, and let's do it for theory." It's kind of like saying, "Well, we've invented a wheel, and now we can invent a better wheel, and we can make it into a tire, and it can have rubber on it, but maybe it's made from carbon fiber."
Now we are moving into, "Okay, this thing enables movement, and different investments in this thing enable different speeds of movement, but where do I want to go? Because if I want to go two yards over, then I don't actually need the car, and I don't need to be fascinated by it for its own sake."
Whereas if what I really need is to be in the adjacent city tomorrow, and I don't currently have a car, well, then we're also not going to talk about inventing one from scratch by hiring researchers. We're not going to think about building it in-house. We're going to ask, "Who can get you something that will get you there on time and on spec?" These conversations are new, but this is where we're going. We have to.
It sounds like, and correct me if I'm wrong here, AI is going to help us a lot more with giving us facts and options and less with giving us values and goals.
I hope so. That is the hope, because when you take values and goals from AI, what you're doing is taking an average from the internet. Or, with a system that has a little bit more logic running on top of it to direct its output, you might be taking those values and goals from the engineers who designed that system. So it's like saying, "If I'm going to use AI as my rough draft every time, that rough draft might be a little bit less me and a little bit more the average soup of culture." If everyone starts doing that, then it's certainly a kind of blending or averaging of our insights.
Perhaps you want that, but I think there's still a lot of value in having people who are close to their problem areas, who are close to their businesses, who have individual expertise, think a little bit before they begin, and really frame what the question is rather than take it from the AI system.
So Jon, how this would go for you is, you might ask an AI system, "How do I live the best possible life?" And it's going to give you an answer, and that answer is not going to fit you. That's the thing. It's going to fit the average Joe. What or who is the average Joe, and how does that apply to you?
It's going to go to Instagram, and it's going to look at who's got the most likes and followers, and then decide that those people have the best lives, and then take the attributes of those people (how they look, how they talk, the level of education they say they have) and say, well, here's what you need to do to be like these people who, the data tells us, people think have the best lives. Is that a version of what you mean?
Something like that. More convoluted, because something that is worth realizing is that an advantage machines have over us is memory and attention, right? What I mean by this is, if I flash 50 digits onscreen right now and then ask you to recall them, you're going to have no idea. Then I can go back to those 50 and say, "Yeah, the machine remembered it for us this whole time. It is clearly better at memory than Jon is."
Then we flash these things, and I say, "Quick, what's the sum of these digits?" Again, difficult for you, but easy for a machine. So anything that fits in our heads as we discuss it is going to be a shortcut of what's actually possible when you have memory and attention at scale. In other words, we've described this Instagram process that fits in our heads right now, but you should expect that whatever is actually going on with these systems is just too big for us to hold in there.
So sure, Instagram and some other sources, and probably even some websites about how to live a good life, applied to us, but it's all kinds of things jumbled together into something too complicated for us to understand. But the important thing is it's not tailored to us specifically, not without us putting in quite a lot of effort to feed in the information required for that tailoring, which I encourage us to do.
Certainly. Understanding that advice is cheaper than ever, I will frame up whatever is interesting to me and give it to the system. Of course, I'll remove the most confidential details, but I've asked all kinds of things about how I might, let's say, improve looking at real estate given my particular situation and my particular tastes. I'll get a very different answer than if I just say, "Well, how do I invest?" I've even improved silly things, like discovering that I tie my shoelaces too tight. I had no idea. Thank you, AI. I now have a better technique, and feet that are less sore.
Did you discover through AI that you tie your shoelaces too tight?
Yeah, I went debugging. I wanted to try to figure out why my feet were sore. To help me diagnose this, I gave the system a lot of information about me, such as when my feet were sore, what I was doing at the time, and what shoes I was wearing. We went through a little debugging process: "Okay, the first thing we'll try is using a different shoelace-tying technique from the one that you have used, which was loop and then loosen a little bit." I'm like, "Wow, now my feet don't hurt. How awesome."
So whatever it is that's bugging you, you could go and try to debug it a little bit with AI, and just see what you get. Maybe it's useful, maybe it isn't. But if you simply give the system nothing and ask something like, "How do I become as healthy as possible?" you'll probably not get any information about what to do with your shoelaces. You're just going to get something from the very averaged-out, smoothed-out soup.
In order to get something useful, you have to bring something to the table. You have to know what's important to you. You have to know what you're trying to achieve. Sometimes, because your feet hurt right now, it's important to you right now, and you're kind of reacting the way that I was. I probably wouldn't ask any proactive questions about my shoelaces, but sometimes what really helps is stepping back and saying, "Well, what is there in my life right now that could be better? And then why not ask for advice?"
AI makes advice cheaper than ever before. That's the big revolution. It also helps with all kinds of nuanced advice, like pulling out some of your decision framing: "Help me frame my ideas. Help me ask myself the questions that would be important for getting through some or other decision."
Where are most people making the biggest mistakes, or where do they have the biggest blind spots when it comes to decision-making? Is it asking the right questions? Is it deciding what they want? What would you say it is?
One is not getting in touch with their priorities. Again, when you're not in touch with your priorities, anyone's advice, even from the best person, could be bad for you. And this is something that also applies to the AI sphere. If we aren't in touch with what we need and want, and we just ask the soup to give us back some average first draft and then we follow it to a T, what are the chances it will actually fit us very well?
Let me put a specific situation on this, because I'm the parent of a soon-to-be 17-year-old, a second-semester junior in high school who's getting ready to apply to colleges, and this is one of the first major decisions that young people make. It's two-sided, which is really fraught, because you're deciding where to apply, and the schools are deciding who to let in.
It seems like that applies here too, because some people are going to apply to a school because their parents went there, or because it's an Ivy League school. So through that framing, can you talk about the types of mistakes that people make, from the perspective of a high schooler applying to college?
I'm going to keep trying to tie this back a little bit to what we can learn about our own interactions with LLMs, because I think that's helpful for people in this brave new world of how we use these AI tools. So again, we have three stages, approximately. First, you have to figure out what's worth asking, what's worth doing. Then you need to get some advice or technical help, some execution bit; that might be you, it might be the LLM, or it might be your dad giving you great advice. And then when you receive the advice, you need to have a moment in which you evaluate whether it's actually good for you. Do I follow this? Is it good advice or bad advice? Do I implement it, and do I execute it? It's these three stages.
So the first one, the least comfortable one, is asking yourself, "Well, how do I actually frame what I'm asking?" So to apply it specifically to your kid, it would be: What is the purpose of college for me? Why am I even asking this question? What am I imagining? What are some things I might get out of this college versus that college? What would make each different for me? What are my priorities? Why are these priorities my priorities?
These are questions where, if you are not in tune with your answers, what will happen is you will receive advice from wherever (from the culture, from the internet, from your dad), and you are likely to end up doing what is good for them rather than what's good for you, all from not asking yourself enough preliminary questions.
It's like the magician scenario. They feed you an answer subconsciously, and you end up spitting that back without even realizing it's not what you really wanted.
Your dad might say, as my dad did, that economics is a really interesting and cool thing to study. This kind of went into my head when I was maybe 13 years old, and it kept knocking around in there. So that's how I found myself in economics classes and ended up majoring in economics at the University of Chicago.
Actually, it's not always true that what your parents put in there makes its way out, of course, because both of my parents were physicists, and I very quickly discovered that I wanted nothing to do with physics because of the constant parental "you should do better in physics, and you should take more physics classes." And then, of course, after I rebelled in college, I ended up in grad school taking physics in my neuroscience program. So there you go, it comes around full circle.
But the point is that you have to know what you want and what's important to you, and really be in touch with this, so that you're not pushed around by other people's advice. And, this is important, even what seems like the best advice could be bad for you. So when you think someone is competent and capable, and that you should therefore absolutely take their advice, that's a mistake. Because if what's important to them is not what's important to you, and you haven't communicated clearly to them, or they don't have your best interests at heart, then this intelligent advice is going to lead you off a cliff. I just want to say that with AI, it could be a high-performing system, but if you haven't given it the context to help you, it's not going to help you.
The AI point is where I wanted to go, and I think you've talked about this in the past too. AI presents itself as very competent and very certain that it's correct, with very little variation that I've seen based on the actual output. It's not saying, "Eh, I'm not totally sure, but I think this" when it's about to hallucinate, versus, "Oh, here's the answer" when it's absolutely right. It's sure almost 100 percent of the time.
So that's a design choice. Whenever you have actual probabilistic stages in your AI output, you can instead surface something to do with confidence, and this is achievable in many different ways. For some models, even some of the basic models, what happens is you get a probability first, and then that converts into the action or output that the user sees.
For other situations, you could run that system multiple times in the backend. For example, you could ask it, "What is two plus two?" then run this 100 times in the backend, and discover that 99 out of 100 times, the answer comes back with a four in it. You could then show some kind of confidence around this being at least what the cultural soup thinks the answer is, right?
Let's ask, "What is the capital of Australia?" The cultural soup might say over and over that it's Melbourne, which it isn't, or that it's Sydney, which it also isn't (for those for whom that's a surprise, Canberra is the right answer). But if enough of the cultural soup says Sydney, and we're only sourcing from the cultural soup, and we're not kicking in some extra logic to go specifically to Wikipedia and only draw from that, then you would get the wrong answer with high confidence. But it would be possible to score that confidence.
In situations where the cultural soup isn't so sure of something, you would have a variety of different responses coming back, being averaged, and then you could say, "Well, the thing I'm showing you right now is only showing up in 20 percent of cases, or in 10 percent of cases." Or you could even give a breakdown: "This is the modal answer, the most common answer, and then these are some answers that also show up." Choosing not to do this is very much a user-experience design decision, plus a compute and hardware decision.
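To make that backend idea concrete, here is a minimal sketch of the kind of majority-vote confidence scoring Cassie is describing. The sample_answer function is a hypothetical stand-in for a single model call; this illustrates the technique, not any particular vendor's implementation.

```python
from collections import Counter

def sample_answer(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model response."""
    raise NotImplementedError

def answer_with_confidence(prompt: str, n_samples: int = 100) -> tuple[str, float]:
    # Ask the same question many times; sampling noise means the answers
    # can differ from run to run.
    votes = Counter(sample_answer(prompt) for _ in range(n_samples))
    # Take the modal (most common) answer...
    answer, count = votes.most_common(1)[0]
    # ...and report how often it appeared. Note this scores agreement,
    # not truth: if the cultural soup mostly says Sydney, you get Sydney
    # back with high confidence.
    return answer, count / n_samples
```

The compute trade-off she goes on to describe is visible right in the signature: n_samples=100 means roughly 100 times the cost of producing a single answer.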
It's also a cultural issue, isn't it?
It seems to me that in the US, and maybe this is true of a lot of Western cultures, we value confidence, and we value certainty even more sometimes than we value correctness.
There's this culture in business where we sort of expect, right down to the moment a company fails, for the CEO to say, "I'm really confident that we're going to make this work," because people want to follow somebody who's confident. And then the next day they say, "Ah, well, I failed, it didn't work out." We kind of accept that and think, "Oh, well, they gave it their best, and they were really confident."
It's the same in sports, right? The team's down three games to one in a best-of-seven series, and the team that's only got one win, they're like, "Oh, we're really confident we can win." Well, really, the statistics say you're probably not going to win, but we know that they have to be confident if they're going to have any chance. So we accept that, and in a way we've created AI in our own image in that respect.
Well, we've certainly created AI in our own image. There's a lot of user-experience design that goes into that, but I don't think it's an inevitable thing. I know that, on the one hand, there is this concept of the fluency heuristic: a person or system that appears more fluent, with less hesitation and less uncertainty, is perceived as more trustworthy. This research has been done; it's old research in psychology.
Now you see that the fluency heuristic is absolutely hackable, because if you forget that you're dealing with a computer system that has some advantages, like memory, attention, and, well, fluency, it can just very quickly rattle off a bunch of nonsense you don't understand. And that lands on the user or the listener as competence, and so translates as more trustworthy. So our fluency heuristic is absolutely hackable by machine systems. It's much harder for me to hack it as a human. Though we do have artists who manage it very well, it's very difficult to speak fluently on a topic that you have no idea about, where you don't know how any of the words go together. That only works if it's the blind leading the blind, where no one else in the room knows how any of it works either.
On the other hand, I'll say, at least for me, I think it has helped me in my career to form a reputation that, well, I say it like it is, and I'm not going to pretend I know a thing when I don't know it. You asked me about neuroscience, and I told you that it's been a long time since my graduate degree, so maybe we should adjust what I'm saying, right? I do that. That is not for all markets. Let's just say many would think, "She has no idea what she's talking about. Maybe we shouldn't do business with her," but for sure, there's still value in my approach, and I've definitely found it's helped me to become battle-tested and trustworthy.
That said, when it comes to designing AI systems, that stuttering lack of confidence would not create a great user experience. But similarly, some of the things that I talked about here would be expensive compute-wise. What I see a lot in the AI industry is that we have business people thinking that something is not technologically possible because it is not being given to users, and particularly not at scale, or even offered to businesses. Quite often, it is very much technologically possible. It's just not profitable to offer that feature. There is no good business case. There's no sign that users will respond to it in a way that will make it worth it.
So when I'm talking about running something 100 times and then outputting something like a confidence score, you would have some decision-making around whether it is 100, 10, or 1,000, and this depends on a slew of factors, which, of course, we could get into if that's the problem you as a business are solving. But when you just look at it on the surface, I'm saying essentially 100 times more compute, right? Run this thing 100 times instead of once, and for what? Will the users respond to it? Will the business care about it? Yeah, frequently you'd be amazed at what's already possible. Agents like [OpenAI's] Operator, [Anthropic's] Claude Computer Use, and [Google's] Project Mariner are underperforming, relative to where they could be performing, on purpose, because it is expensive to run them well. So it will be very exciting when businesses and users are ready to pay more for these capabilities.
So back up for me now, because you left Google about two years ago, a little less than that. You were there for about 10 years, starting long before the OpenAI and ChatGPT wave of AI enthusiasm swept across the globe. But you were working on some of this stuff. So I want to understand both the work at Google and what led you there.
I think you said that your dad first mentioned economics to you when you were 13, and that sounds really young, but I think you started college a couple of years later. So you were actually on your way to those studies at the time. What made you decide to go to college that early and what was motivating you?
One of the things we don't talk about enough is that knowing what motivates someone tells you more about that person than pretty much anything else could. Because if you're just observing the outcomes, and you're having to make your own inferences about how they got there, what they did, and why they did it, particularly with survivorship bias occurring, it might look like they're such total heroes. Then you look at their actual decision process, and that may tell you something very different. Or you may think someone's not very successful without realizing that they're optimizing for a very different thing from you. This is all a very long way of saying that it's always just such a private question, but I'm glad we're friends, Jon, because I'll go for it. So yeah, why did I go to college so young? Honestly, it was because I had skipped grades in elementary school.
The reason I skipped grades in elementary school was because I came home (I was nine years old or so) and informed my mother that I wanted to do this. I cannot remember why. For the life of me, I don't know. I was doing something on a nine-year-old's whim, and skipping grades wasn't a done thing in South Africa, where I was growing up. So my parents had to really battle with the school and even the department of education to allow it. So there I was, getting to high school at 12, and I actually really enjoyed being younger. Okay, you get bullied a little bit, but I enjoyed it. I enjoyed seeing that you could learn a lot, and I wasn't intellectualizing it the way I am right now, but you could learn a lot from people who were older than you.
They can kind of push you, and I'm a huge believer in just the act of being surrounded by people who will push you, which is maybe my biggest argument for why college still makes sense in the AI era. Just go be in a place where everyone's on a journey of self-improvement. So I learned this and ended up making friends with 12th-graders when I was 13, and then at 14, they were all out already and in college. And I had spent most of my time with these older kids, and now I'm stuck, and I basically want my friends back. So that is why I went so young. It was 100 percent just a teenager being driven by being a social animal and wanting to be around my peer group, which…
But be fair to yourself. It sounds as if you just wanted to see how fast the car could go, right? That's part of what it was at nine. You realized that you were capable of bigger challenges than the ones you had been given. So you were kind of like, "Well, let's see." And then you went and you saw that you were actually able to handle that, the intellectual part. People probably said, "Oh, but the social part would be hard." But, "Hey, I got friends who are seniors. That part's working too. Well, let's see if I can actually drive this at college speed." That was part of it, right?
I am so easy to manipulate with the words, "You can't do X." So easy to manipulate. I'm like, "No, let me show you. I love a challenge. Let's get this thing done." So yeah, I think you're right in your assessment.
So then you went on to do graduate work, after the University of Chicago, to study neuroscience, with some economics in there too?
So I actually went to Duke for neuroeconomics. That was the field. You know how there's macroeconomics and microeconomics? Well, this was like nano-picoeconomics. This was about how the brain implements decision-making. So, of course, the courses involved experimental microeconomics. That was part of it, but this was from the psychology and neuroscience departments. So it's technically a graduate degree in psychology and neuroscience with a focus on the neuroscience of decision-making, which is called neuroeconomics.
I also went to grad school twice, which is definitive proof that I'm a bad decision-maker, in case anyone was going to think that I personally am a good one. I've just got the technique, folks. I'll advise you. I'm just kidding; it was actually good for me to go to grad school twice, and my second time was for mathematical statistics. My undergraduate work was economics and statistics. So then I went for math statistics, where I did a lot of what we called machine learning back then, and what we would call AI today.
How many PhDs were involved there?
[Laughs] No PhDs were harmed in the making of this person.
Okay, but studying both of those disciplines. What were you going to do with that?
So coming back to college: I was taking courses around decision-making, despite being an economics and statistics major, and I got a taste for this. I'll tell you why I was in the stats major. The stats major happened because at about age eight or nine, just before this jumping of grades, I discovered the most beautiful thing in the world, which everybody knows is spreadsheets. That was for me the most gorgeous thing. Maybe it's the librarian's urge to put order into chaos.
So I had this gemstone collection. Its entire purpose was to give me another row for my spreadsheet. That was the whole thing. I'd get an amethyst, and I could be like, "Oh, it's purple. And how hard is it? And it's translucent." And I still find, though I have no business doing it, that the act of data entry with a nice glass of wine is just such a soothing thing to do.
So I had been playing with data. Once you start collecting it, you also find that you start manipulating it. You start to have these urges like, "Oh, I wonder if I could get the data of all the files on my computer into a spreadsheet. Well, let me figure out how to do that." And then you learn a little bit of coding. So I just got all these data skills for free, and I thought data was really pretty. So I thought stats would be my easy A. Little did I know that it's actually philosophy, and the philosophy bits are always the bits that should kick your butt, or you're missing the point. But of course, manipulating the data bits was super-duper easy. Statistics, I realized as I began to soak in the philosophy, is the discipline of changing your mind under uncertainty.
Economics is the discipline of scarcity, and the allocation of scarce resources. And even if money is not scarce, something is always scarce. People are mortal; time is scarce. So the question, "How are you going to make allocations, or what you might call decisions?" got in there through economics. So did questions like: How do you change your mind, and what is your mind set to do? What actions are on the table? What would it take to talk you out of it?
I started asking these questions, and then: How does this actually work in the human animal, and how could it work better? These questions came in through the psychology and neuroscience side of my studies. So I was studying decision-making from every perspective, and I was hoarding. So here as well, did I know what career I was going to have? I was actively discouraged from doing this. When I was at the University of Chicago, even at that liberal arts place, my undergraduate adviser said, "I have no idea what job you think you're going to get with all this stuff."
I said, "That's okay, I'm learning. I think this is kind of important." I hadn't articulated back then what I'll say now, which is that data is pretty, but there's no "why" in data. The why comes from the decision-maker, right? The purpose has to come from people. It's either your own purpose or the purpose of the people whom you represent, and that is what gives direction to all the rest of it. So [it's] just studying data where it feels like there's a right answer, because the professor set the problem up so that there's a right answer. If they had set it up differently, there could have been different answers.
Realizing that the setup has infinite choices is what gives data its why, and its meaning. That is the decision piece. That's the most important thing I think any of us could spend our time on, though we all do spend our time on it and approach it from different lenses.
So then why Google? Why did you promise yourself you wouldn't work for a company for more than 10 years?
Well, we're really getting into all the things. So Google is a funny one, and now I'll definitely say some things that I don't think I've said on any podcasts. But the true story of that is that I was in a math stat PhD program, and what I didn't know was that my adviser (this was at North Carolina State) had just taken an offer at Berkeley, where he could not bring any of his students along with him. That was a pretty bad thing for me, in the middle of my PhD.
Now, separate from this going on that I had no idea about: I take Halloween pretty seriously. It's my thing. At Kozyr, it's a work holiday, so people can enjoy Halloween properly if they want to. And I had come in on Halloween morning dressed as a punch card, as one does, with proper Fortran to print "Happy Halloween." A Googler was giving a talk, and I was sitting in that audience, the only person in costume, because everyone else is lame.
Let that go on the record. My former classmates should have been in costume, but we can still be friends. And so at 9AM, I'm dressed like this. The Googler lady talking to the head of the department is like, "Who's that grad student who was dressed as a punch card?" The head of the department, not having seen me, still said, "Oh, that's probably Cassie. Last year she was dressed as a sigma field," which is just from measure theory. So I was being a huge nerd. The Googler thought "culture fit," 100 percent, let's get her application in.
And so the application was just for a summer internship, which seemed like a harmless thing to do. Sure, let's try it. It's an adventure. It's Google. Then as I was signing up for it, my adviser was like, "This is a very good thing for you. You shouldn't even hesitate. Don't be asking me if I want you here doing summer research. Definitely go to Google. You can finish your PhD there. Go to Google." And the rest is history. So a much, much better option than having to restart and refigure things with a new adviser.
How did you end up becoming this translator between the data people and the decision-makers?
The role that I ended up getting at Google, the formal internship name, was decision-support intern. I thought to myself, "We'll figure out the support, and we'll figure out the intern." But decision? This is what I've been training for my whole life. The team that I was in was like a SWAT team for data-driven decision-making. It was very, very close to Google's primary revenue. So this was a no-messing-around team of statisticians that called itself decision support. It was hardcore statistics flavored with data science, and it also had a very hardcore engineering group; it was a very big group. I learned a lot there.
I applied to potentially stay in the same group for a full-time role, with strong prompting from my PhD adviser, and I thought I was going to join that group. Then a tangential thing happened, which is that I took a weekend in New York City before going to Mountain View, which is where I had picked out my apartment. I thought I was going to join this group. I was really, really excited to be surrounded by deep experts in what I cared about. These experts were actually working more on the data side of things, because what the decisions are and how we approach them are so regimented in that part of Google. But I took this trip to New York City, and I realized, and this was one of the biggest gut-punch decision-making moments for me, that I was making a terrible mistake: if I went there, I would just not enjoy my life as much as if I went to New York City.
There was so much instinct, so much, "Oh, no, I should actually really reevaluate what I'm doing. Am I going to enjoy living in Mountain View?" I was just so set on getting the offer that I hadn't done what I really should have done, which was to evaluate my priorities properly. So the first thing I did was call the recruiter and say, "Whoa, whoa, whoa, whoa. Can I get a role in New York City instead? It doesn't matter which team. Is there something we can find for me to do here?" So I joined the New York office instead. Very, very different projects, very, very different group. And there I realized that not all of Google had this regimented approach to decision-making. There is so much translation, even at a place like Google, that's necessary for products that are less close to the revenue stream.
So then there has to be a lot more conversation about why and how to do resource allocation, and who's even in charge there, right? These are things that, when you're moving billions around at the click of a mouse, you tend to have answered. But in these other parts of Google, there was so much more color in how you could approach it, and such a big chasm between the people tasked with that and any of the data or engineering or data science efforts we might have.
So to really try to fill that gap, to try to put a bridge over it so that things could be useful, I worked way more than my formal job said I should to build infrastructure. I built early statistical consulting, because that wasn't there. You couldn't just go ask a statistician who'd sit down with you and talk through what your project was going to be.
I convinced people, stats people by specialization, to offer their 20 percent time in support of projects that were not their own, to put some structure to this, and I made resources and courses for decision-makers on how to think about dealing with data folk. I really tried to bring these two areas together, and eventually it became my job. But for the longest time, it wasn't. Sometimes I faced questions: What are you? Who are you? Why are you actually doing what you're doing? But just seeing that things could be made more effective, and kinder for the experts who were otherwise going to work on poorly specified problems unless someone specified the problems well first, was motivating. So that's why I did it.
Trying to tie this all together, it sounds like that values and goals piece, and the philosophy element you talked about as being important in school, were coming back into play, versus just focusing on the external expectation: if you're going to work for Google, of course you're going to go to Mountain View. That's where the power is. That's where the data people go, and you're smart enough to be with the data people.
So if you're going to run the car as fast as possible, you're going to go over there. But you made a different kind of decision than perhaps the nine-year-old Cassie made. You stepped back and said, "Wait a minute, what's going to be best for me? And how can I work within that while pulling in some of this other information?"
Yeah, for sure. I think that something we can say to your 17-year-old is that it's okay. It's okay if it's difficult when you're young to take stock of what you actually are. You're not formed yet, and maybe it's okay to let the wind take you a little bit, particularly when you have a great dad who's going to give you great advice. But it would be good if you can eventually mature into more of a habit of saying, "Well, I'm not the average Joe, so what do I actually want?" And those Mountain View teams, I don't want to offend any internal Googlers, but they did have a reputation for being the top teams.
If you wanted to be number one, and then number one again, and number one some more times, that would've been the way to do it. But again, maybe it's worth having something else that you optimize for in life. And, as it turns out, I'm a theater kid, a lifelong theater kid. I'm an absolute theater nerd. I'm going to London for just a few days in two weeks, and I'm seeing every evening show and matinee. I'm just going to hoard as much theater as I can for the soul. And so living in New York City was going to be just a better fit, not only for theater but for so much more that that city has to offer.
Having lived in both Silicon Valley and the New York area, I promise you that yes, the theater is far better in New York.
I mean, I went to all the plays in Silicon Valley as well, and I did my homework. I knew what I was getting into, or out of. But yeah, it takes practice and skill to know that some of those questions are even worth asking. And I've developed that practice and skill, from originally learning how to do it to help others, having studied it formally, being book smart about it: these are the questions you ask; this is the order you ask them in. It's something else to turn that on yourself and ask yourself the hard questions; book smarts aren't enough for that.
That's good advice for all of us; whether we're running businesses or just trying to figure out life, we've all got decisions to make. Cassie Kozyrkov, founder and CEO of Kozyr, former chief decision scientist at Google, thanks for joining me on this episode of Decoder.
Thanks for having me, Jon.
Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!