Hey podcast listeners, I'm Rob Wiblin, Director of Research at 80,000 Hours. Today I'm speaking with the well-known writer and entrepreneur Julia Galef about the career path she's taken, the research she's doing now, and her opinions on a wide range of topics. We also talked about how people can pursue careers like hers, in which they try to enhance human decision-making and general judgement. We have a lengthy career profile on paths of just that kind slated to be released on our site in the next few weeks, so look out for that.

The conversation was recorded at Effective Altruism Global San Francisco, the largest annual conference for the effective altruism community. You can hear a bit of shouting in the background, but I'm sure that will only add to the ambience. If you'd like to get coaching to help you work on issues similar to what Julia is working on, that is, improving human judgement, our ability to make accurate predictions and to make wise decisions together, then I strongly suggest applying for free one-on-one coaching by clicking the link in the show notes or on the associated blog post. We've helped a lot of people pursue more impactful careers of this kind, and I'm sure we'll be able to lead you in the right direction. As always, I recommend you get the episode on your phone rather than listening to it on your computer; you can do that by searching for '80,000 Hours' on whatever podcasting app you use. And now I bring you Julia Galef.

Today I'm speaking with Julia Galef. Julia is a writer and speaker focused on improving human judgement, especially about high-stakes questions. Julia has been the host of the Rationally Speaking podcast since 2010,
founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements. Thanks for coming on the podcast, Julia.

My pleasure, Rob. Good to be here.

So what have you been up to this year?

I have kind of a mix of projects right now. I'm doing the podcast, as you mentioned; those episodes come out every couple of weeks. I'm working on a book, which won't be out for a little while. Both the podcast and the book are, in various respects, about improving human reasoning and judgement. And then the thing that you mentioned with the Open Philanthropy Project is a kind of independent project that I conceived of, and Open Phil agreed to contract me to do it. It's a part-time project. Basically, what I'm trying to do is identify important questions, important in the sense that the answer to the question has a serious impact on how you try to impact the world, questions over which thoughtful, well-informed people disagree, so that they have different models of the questions. So for example: is superintelligent AI on the horizon or not? If it comes, what's the probability that the outcome is going to be good? How should we be dealing with the housing crisis in San Francisco? Is ending mass incarceration a feasible or desirable goal? Things like that. Not questions like, I don't know, 'is astrology real?'. My questions are ones over which thoughtful, reasonable people can have different models. So the project is identifying those questions, getting to know
and starting dialogues with experts who have different models of those questions, and then hosting conversations to try to get to the root, the crux, of why the experts disagree: comparing their models, holding their models up against each other, and noticing the areas of overlap or non-overlap. And I'm doing this process in conjunction with a bunch of interested non-experts, especially from tech or finance or government or the media, people who are interested in impacting the world positively and have a sort of disproportionate amount of resources or influence in the world, but aren't experts themselves. So hopefully it gives them a richer and more accurate understanding of the topic by hearing the best arguments on both sides and listening to the experts talk to each other.

So have you managed to solve the problem of expert disagreement yet?

I was going to say 'it depends on your definition of solved', but I think by any definition of solved...

What kinds of techniques do you have, and are they bearing any fruit?

Sure. The word 'technique' is a little bit strong, but one
heuristic that I've been using that seems to be good: I've come to think that it's important not to frame these conversations as 'we're trying to change each other's minds', or even as 'we're trying to converge and reach agreement'. It seems to work better to frame the goal as: let's really understand, as precisely as we can, what our models are, where they diverge from each other, and why they diverge. So the goal is framed as understanding the landscape of the different models, not as shifting someone's opinion. And my current hunch is that even if your goal was shifting someone's opinion, this frame of trying to understand the models actually works better than having the goal of shifting an opinion, because you have less resistance to the idea of understanding it if you don't think that understanding means changing your mind. I think that's part of it. I think it's also that, when you're focused on the goal of trying to change someone's mind, you end up missing a lot of important details. You end up trying to make arguments at them, but those arguments aren't actually going to be all that useful or relevant to them, because you've missed something important about why they believe what they believe. And so all of that important groundwork of getting clarity on what the cruxes of their belief actually are, that groundwork seems to happen more readily when you frame the goal as 'let's understand our respective models' as opposed to 'let's try to converge'.

Hmm, is this the same as the double
crux process that was developed at the Center for Applied Rationality?

It's related to it. The double crux process is framed as trying to reach convergence, and what I'm doing is very influenced by that: I talk about cruxes, I talk about trying to find them. And I suppose I should define for the listeners what a crux is. A crux is an underlying belief or assumption or premise that is feeding into your view about the topic at hand, and feeding in in a causally important way, such that if I changed my mind about this underlying thing, it would also change my mind about the higher-level question. So for example, if Rob and I disagree about whether it's okay to eat animals, a crux for me might be: well, I don't actually think animals have the capacity to suffer; if I did, then I would think it's not okay. I currently think it's fine, but if I had a different opinion about whether animals can suffer, then I would not think it's fine. So that's the crux for me. You could have different cruxes about the same topic. Rob's crux might be: it's wrong to cage animals, just in principle, to restrict their freedom.

Yeah, sure.

And if I'm trying to think of what could influence that: if you thought that the animals were just as happy being caged as not being caged, then you might say, well, okay, maybe it's not wrong to cage them. Or maybe you think that the lives of factory farmed animals are worse, that they have negative utility, but if you thought that they
had positive utility, maybe you would be less confident that it was wrong. I don't know. Anyway, the goal is to dig into the disagreement until you find, ideally, the double crux, the thing that...

That you both have in common, such that if this thing were different, then you'd both change your minds?

Exactly, and hopefully in the same direction. So the double crux process that the Center for Applied Rationality has been tinkering with and teaching people and practicing is a more formal process that's related to what I'm trying to do. What I'm trying to do is a little less formal, partly because I host these conversations as dinners, and it seems to be somewhat in tension with the goal of having a convivial dinner if we've got easels and whiteboards out. And also, I think understanding these issues is really valuable, and that's my main goal, but I have this other secondary goal, which is promoting a norm, especially among important or influential people in these different fields, of being curious about important questions, seriously engaging with different models of those questions, and genuinely trying to reach the best, most accurate understanding of a question that you can. Which I think is not a common norm at the moment. For most people, it's not that they're not thoughtful or smart; it's just not our default way of engaging with ideas to seek out differing models and try to understand why the experts don't agree with each other. So I have this broader goal of creating an intellectual community and culture, among at least a subset of people in these different fields, tech and finance and academia and the media and so on, that at least asks these kinds of questions and approaches disagreements with this spirit. And that's somewhat broader and fuzzier than reaching a double crux, but I think it would nevertheless be really valuable to achieve.

So the Open Philanthropy Project only funds research
if they think that it's going to be pretty damn valuable for the world. What kind of outcome are they hoping to see?

Oh, to be clear, I'm a contractor; they didn't give me a grant or anything, so the vetting process is much less stringent and rigorous for contractors.

You've snuck in under the radar. Maybe I should become a contractor. But yeah, what kinds of things are they hoping will come out of it?

I think Holden's main goal is just to get influential people to be aware of and seriously engage with topics that are, if not EA, then EA-adjacent. And by EA-adjacent I mean basically just what I said: important questions that have significant bearing on what we should do to positively impact the world, to reduce risks or to create a lot of value. And I guess ideas more broadly, not just about object-level causes like AI or animal welfare, but EA memes that have to do with how you think about things: the very idea of asking yourself 'what would change my mind about this?', or the idea of asking about evidence, or tagging things with different epistemic statuses, or looking for cruxes. That's not unique to EA, but it's pretty distinctive of the EA and rationality communities, and I think that kind of way of thinking about things is something that he, that Open Philanthropy, would love to be more common in Silicon Valley, or in the world in general.

So was it perhaps more
of an outreach project than a research project? Or is it a bit of both?

I guess I wouldn't call it a research project. You could call it outreach, or you could call it community building, since I am trying to create this intellectual community. It has an element of research to it, in that I'm trying to learn methods of fostering intellectual communities, and also methods of finding these cruxes more effectively, like the thing I mentioned about how to frame the conversation. There's a bunch of other things I could mention along those lines, like kinds of thought experiments or questions that turn out to be really useful in these conversations and make the conversation more productive. So there's an element of research to that, but it's far from an RCT.

Hmm. So on the double crux process: I haven't actually tried it, but I'll tell you why I'm a bit suspicious of it, just having heard it described. I feel like in life you go through and you have
tons of experiences and you're just constantly learning. You build up reference classes of different categories of things, and as a result you end up with different gut judgements, different predictions about how people are going to behave, or what's going to happen if this has changed or that has changed. And very often it doesn't just come down to some single disagreement that you have about a particular 'if X then A, if Y then B'. I imagine that very often you find people just have different worldviews, in a thousand tiny ways through their life that build up a different whole perspective on the world, with a thousand little brush strokes, and no single brush stroke makes the painting. Is that a problem that you encounter when you're trying to find these cruxes?

Yeah. I will say it doesn't usually happen that there is a single double crux such that, if I changed my mind about it, I would totally do a 180 on the important higher-level question,
and the same thing for you. That's pretty rare.

CFAR has usually presented the technique in that way just in the same way that, when you're teaching an economic concept, you simplify it to start out?

Yeah, and I think it's a useful framework to have in your mind as you're talking with someone about the topic. What I find tends to happen... I'm just thinking about the last such discussion that I hosted. It was about how big a problem it is that tech companies can lay claim to our attention to the extent that they can, and keep us kind of hooked on our devices, or hooked on a platform like Facebook, etc. We had one person there arguing that this was a serious threat to human happiness and well-being, and sort of a threat to the fabric of society basically, and that it should be a cause taken more seriously by EAs, by their own standards. And so we were kind of debating that. I sometimes like to call these conversations 'fun debates', because it's similar to a debate in that we're discussing a disagreement that we have, but hopefully dissimilar to a debate in that we are collaboratively trying to understand our respective models instead of trying to argue or win. Anyway, to your question about what this process of looking for cruxes actually looks like: we ended up identifying three or four major cruxes such that, if we believed something different about that crux, it would at least make us less confident in our view. And they were
some of them more empirical things, and some of them were values. So an empirical thing was, for example: I was less confident in the data showing a connection between use of these various apps and depression or anxiety. And I think if that evidence were really solid, I would be taking this much more seriously. And so I can just do that thought experiment: what if I found out this was really, really well done research? In that world, I would be pretty concerned about this, like, wow, this seems like a major detriment to human well-being, and a lot of humans are affected and will continue to be affected more. And then there were these more philosophical cruxes. It turned out we disagreed about what criteria to use to determine whether humans are being hurt or not.

A theory of value?

Yeah, basically our theory of agency or something. So my take was basically: look, if people are self-reflective, if you ask people, 'here's how much time you spend on this app, here's the various evidence about how it impacts you, do you want to continue spending the same amount of time, or would you rather have commitment devices where you could tie yourself to the mast and limit your Facebook use, or at least make it a little more difficult for you to use Facebook?', and if, given that information, they said, 'nope, I'm fine doing what I'm doing, I don't want to limit my access', then I would call that 'people are not being hurt by this', or
at least I would not be willing to claim people are being hurt.

What if they said that in the moment, while it was happening, they were suffering?

Very unlikely. I think it's unlikely that they would say they're suffering in the moment and still endorse it, but if they did endorse it, then I would say, I guess maybe I'm more of a preference utilitarian than a hedonic utilitarian. Whereas this other guy at the dinner, who had been making the strong case for why this problem was really important, felt that, look, we've all been too corrupted by this thing, we have a sort of false consciousness. He was saying we can't really imagine what it would look like to have lives and societies that weren't so dominated by technology, and so we have nothing really to compare this to.

Maybe the 13-year-olds count, but at some point [Laughter] people won't be able to remember the pre-social-media era.

Right, right. So that was a more philosophical crux.

So you said earlier that you're writing a book. Is this something that you think more people should do to get their ideas out there? Should I be writing a book? I wouldn't mind having written a book, but I'm less sure about the process of actually writing one.

Well, I feel you there. As someone in the throes of it, I definitely feel you. The thing that I think books do really well is provide a nice handle for a thesis or set of ideas, such
that it's easy to spread and talk about, and they do this better than blog posts, for the most part. I've heard people sometimes say most books should be blog posts, or most books should be articles, something like that, and I sympathize with that view...

Most podcasts should be movies, maybe.

I sympathize with that view, although even if there is a lot of padding in books, I think padding and redundancy can actually be good for making content stick and impact people. So unless you're really annoyed by padding in books, and some people are... Even if you could have expressed the same point in a blog post, having a book, for whatever reason, with a certain title that's been added to the list of books about this topic, and been written up in some articles, etc., makes it part of a public conversation in a way that's really hard to achieve even if you write a ton of blog posts on a topic.

A good example would be, you know, Guns, Germs, and Steel. It really stakes out some territory.

Yeah, it's a very long book with lots of detail, but it's got kind of a thesis, and it's, as you say, Guns, Germs, and Steel. Even if someone hasn't read it, if they've heard about it, they sort of know the concept, and it provides this nice little handle for a point of view or thesis that makes it easy to talk about and
makes people want to talk about it. That's something I think books can do really well.

Mmm. Now all I need, I guess, is an idea of what to actually write about. So, you do quite a lot of different outreach activities with your ideas. You've got the Rationally Speaking podcast that you've been doing for seven years?

For seven years now, yeah.

How has the podcast compared to other ways of getting your ideas out there, compared to Snapchat, say? Has it been a good vehicle for sharing them?

It really has. I didn't have any particular grandiose plan when I started it, but I've been really happy with how it's gone so far. So, obviously, I try to pick guests who I think have interesting things to say and are doing interesting work that deserves more attention, and so on. But underlying it all, the purpose of the podcast, the driving force behind the choices that I'm making for the podcast, is really promoting this approach to epistemology that I support and that I wish were more common. And so the kinds of questions that I'm always trying to ask, and the stuff that I'm most interested to talk to the guests about, is stuff like: what counts as good evidence? How confidently can we know things? What are the standards of this field, and how good can they be? How much knowledge could we possibly have, with confidence, about questions like the one you're studying? Things like that. And when I can, I really like to get to this point in the conversations with
the guests where they're thinking in real time, basically, about the implications of their research, or the epistemic status of their claims, things like that. Because I wish that people thought in real time more often, as opposed to just kind of regurgitating cached things that they've said again and again in different contexts, or that they have heard and think they're supposed to say or supposed to think. Not to sound too arrogant or anything, or to criticize all modern communication or anything like that.

I think we all do that from time to time; the question is just the balance.

I just wish that, on the margin, I think it would be good for our collective epistemic health if people spent more time talking about and thinking about things where they don't have a cached answer, and are trying to think on the spot. So those are the kinds of practices and questions that I'm trying to use my podcast to promote.

Mmm. How many people did you end up reaching? What's the reach of your podcast?

I think now a typical episode gets about 35,000 listeners. Occasionally one of my episodes will get a lot more than that, like around 100,000, but that's relatively rare; 35,000 is more like the modal number.

So it's quite a lot, given that the amount of effort that goes into a podcast isn't that large; you can, you know, do it in a day. It's like speaking to an enormous auditorium.

Yeah, I guess like a sports stadium, right? Wow. It's fun to visualize
my audience concretely, yeah. And they tend to be smart and thoughtful people, based on the comments I get, or the emails I get, or the people I talk to on Twitter. And also, I've been running ads for GiveWell in the last few months, and GiveWell has told me that they've also tried running ads on other podcasts and other platforms, and a disproportionate amount of the donations they've gotten have come through my podcast, which makes me feel very proud of my audience.

Yeah, you've taught your audience well. I mean, this podcast is also new, but I've generally found that the longer the content, the better the comments become. I think the worst responses tend to come from people who've read only the headline, or maybe only the first few words of the headline sometimes. But if they have to actually go through and listen to a whole hour of conversation in order to get to something semi-outrageous that someone says, that increases the cost of trolling or outrage.
Exactly, yeah.

Speaking of the horrors of social media and people being annoying online, you're also quite a big star on Twitter.

Well, I've definitely been ramping up my use of it, and I like Twitter.

It's dystopian! It's just an opiate.

I mean, look, I've heard the complaints about it, and I don't really doubt them. I don't doubt that people have really bad experiences on Twitter, but my experience has just been great. The comments that I get, they're not all great comments, but they're mostly sincere and engaged, and sometimes they're really thoughtful and interesting. There are just so many interesting people on Twitter, like all these social scientists who have conversations in real time with each other, in public, that we can all listen to and comment on, about the latest papers or the debates in social science, like this move to lower the p-value threshold to 0.005, I think it was. It's just really cool to be able to see these conversations between experts, about a thing that they actually disagree about, in real time. I could sing the praises of Twitter much longer if you let me.

Well, I guess I can't hate it that much, because I read it every day. I'm in its trap, I suppose.

I guess that's evidence against my belief that if someone is consistently unhappy doing something, then they will choose to limit their access to it.

Yeah, I'm evidence against that, I suppose. It depends what part of Twitter you're in, I suppose. I mean, people are usually fairly friendly to me, but then when I read other people's threads, it's
got someone you know is smart saying something very reasonable, like, I don't know, Stephen Hawking's view on quantum physics, and then you just read the responses, and just below you have, 'Well, I just finished high school, but my view of physics is...'. It's a bit frustrating, although perpetually amusing.

I suppose amusement, or I guess bemusement, is an attitude I strive for as a substitute for indignation and outrage and frustration. I don't always succeed. It's not that I don't also get really irritated by people misunderstanding, sometimes seemingly willfully misunderstanding, my points or other people's points. I just try to keep in mind that communication does seem to be really hard, even without only 140 characters at a time. And I've got to say, there are some advantages to the character limit: it has forced me to be much more concise and to the point than I otherwise would be. And it's kind of funny, too. I had this one blog post I'd written that was like four paragraphs, when I realized, wait, I could literally just write this in 140 characters and I wouldn't lose that much. That's kind of a good thing. But it does have this downside that there are some things you really just can't say in 140 characters, and so you have to do these strings of tweets, and it just gets so messy. It really does not feel optimal in that way.

So what have you learned about communication from either being on Twitter or doing a podcast?

I would say, well, I'm continually adding to my stock of ways that people
can misunderstand topic X or topic Y. And one thing that I didn't appreciate enough when I started out writing blog posts or doing interviews, etc., is that if you're worried about people misunderstanding you and thinking you're saying X, it's not enough to just add a sentence in your interview or your blog post or whatever that says 'by the way, I'm not saying X'. People will still think you're saying X and respond angrily as if you did.

Yes.

And maybe some of that is them intentionally misinterpreting you, but a lot of it is just that people don't read super closely, so they may miss or not quite parse that line. If they're going in with the assumption, the expectation, that you believe X, then I think it's just kind of easy for the human brain to reinterpret your line so that it doesn't quite have the corrective impact that you expect it will. And also, it's been helpful for me to model this process
as people having priors about what you believe. Their not reading it carefully is like a separate problem. This other problem is that they have priors about what you believe, and you can give them evidence to try to budge them away from those priors, but if the priors are strong, you often need a lot of evidence to budge them. Let's say I made a post criticizing some government intervention for being ineffective. They may pattern-match me to 'oh, she's probably a libertarian, she hates government', whatever. And that may not be wholly irrational either, because often it is the case that people who criticize government programs are statistically much more likely to be libertarian than people who don't criticize government programs, or something like that. Don't anchor too hard on this example, actually. So they have this assumption about me, and I can say 'this doesn't mean that all government programs are bad', but if I just have that one kind of paltry sentence, that might not update them that much from suspecting that deep down I really hate government. I might have to give stronger evidence, like saying more sincere, positive things about examples of government programs that I think were effective, and just spending more time and more emotionally salient content budging them from their assumption, from their prior, that I hate government. So one major lesson for me was realizing that I can't just say 'I don't support X' and cause people to believe 'okay, she doesn't support X'. And also that
they're not being completely irrational if they have a prior about what I believe, based on what kind of people tend to say what I say.

That makes sense. Did you find ways to make the podcast more popular over time, like adjusting the format or changing your hosting style? I'm asking for a friend, of course.

To be honest, I have done embarrassingly little optimization for a podcast that's been around over seven years. I'm mostly just doing what I enjoy and sort of shrugging and being like, well, whatever audience I get from the thing I enjoy is great, and I'm very pleased that it's as large as it is. That's not a defense of not optimizing, though. I could optimize some, and I think I would still be doing things that I enjoy, maybe just as much, but maybe appealing to more people. So I keep vaguely thinking, yeah, I should really do more research on how to improve podcasts or make them more widely appealing. I have some ideas, like experiments that I want to try with the podcast, where I could experiment and see what sticks.

That was some good caveating. I was about to pattern-match you to people who just hate optimization. So, you spent some time in academia earlier in your career, right? And then you decided that it wasn't the right fit for you.

Oh yeah. I mean, I spent one year in a PhD in economics before dropping out, if that counts
as being in Academia although I also I was a research assistant for several years before that to various social science researchers at Columbia when I was an undergrad and then MIT and Harvard after I graduated I spent a year at the National Bureau of Economic Research as a research assistant and then a year at Harvard Business School writing case studies on International Economics for a professor there so some experience with academia aside from my One you know might like abortive foray into a doctorate so why did you decide not to go down that route because
it seems pretty close to what you're doing. Yeah, it was. I mean, the one year wasn't a sudden turnaround where I was, like, super pumped and then, you know, quit despite that. By the point I started the PhD, I'd kind of started having doubts about whether this was the right career track or field for me, but I thought, you know, I should give it a shot anyway now that I've come this far, because I'd spent my undergrad studying statistics and
doing research for professors with the idea that I would go into academia. I'd already invested a lot into it, so I didn't want to give up too quickly. But the reasons for leaving, I mean, they were both personal and kind of intellectual or ideological. The personal reasons were just, I think I really am a generalist by nature. Like, I've optimized my career so far for getting to spend as much time as possible thinking and talking to people about a wide variety of interesting and important topics, and I love that, and it's really hard in
academia to do stuff like that. Like, I guess until you're really tenured and can just be a dilettante, you have to be really narrow and detail-oriented. So there were the personal reasons, and then the ideological or intellectual reasons were, you know, this was before the replication crisis hit, like a few years before, but nevertheless I had sort of noticed a bunch of these problems with social science methodology. Not completely of my own accord; you know, I talked to people who are really
discerning about research methodology, who had concerns, and, you know, there were some specific papers where I had kind of insider knowledge into how that paper was put together, and it was like seeing the sausage being made. Like, I remember talking to one professor who described how they had run some, like, mini surveys ahead of time to figure out which wording of their question would be most likely to get the results that they wanted in the main study that they actually published, and they felt no compunctions about doing this or telling me about it. And that's not to say that there isn't some good research being done, or that I couldn't have chosen to do good research if I'd really tried, but it felt like the deck would be stacked against me. If the incentives are, like, if you get rewarded for publishing a lot, and trying to be a stickler for research quality makes it harder to get published, then it felt like, you know, academia is already really hard
and competitive, and this would be making it even harder on myself. Do I really want to do that? You know? Yeah. So you've had a kind of unconventional career since then; seems like you've been kind of making your own future, like starting your own projects, basically. Yeah, I think so. Is that a path that you would recommend? Has it felt risky at any point? I mean, I've been fortunate in having, like, friends and family who I could live with, or, you know, my parents gave me some monthly stipends after I left
grad school when I didn't have a job. You know, this isn't something that everyone gets to do; I am definitely lucky and I recognize that. And it was super helpful to have that cushion of, you know, at least a couple years when I didn't have to be fully supporting myself in New York City, and I could just explore and meet people and learn about different opportunities and so on. I think that one generalizable piece of advice, even if you can't, you know, do exactly what I did, is to,
as much as it's feasible for you, just spend a lot of time getting to know interesting and smart people working on cool things. Even if you can't predict exactly how that will end up benefiting you, I have decent confidence that it will in some way. Those connections are how you hear about cool opportunities that, you know, aren't public, or that's how you end up finding people to work with on something that wouldn't have occurred to you if you hadn't known them, that kind of thing. That's been really useful
to me in the long run. So obviously one of the key decision points was deciding to leave your PhD; have there been other kinds of crossroads where you've had really hard career decisions to make? Um, honestly, looking back at all of the shifts, or, I don't know about all, certainly a lot of the career shifts that I've made, or shifts in my plans or how I've been thinking about my career, they've mostly been epistemological in some way. Like, I mentioned the econ one, where I was, you know, nervous about the
quality of research. When I was in college, and this would be an early example, I switched from, I was going to be a political science major, and then switched to economics, and then switched to statistics. Basically, I was very interested in the questions that political science studies, but then just got frustrated with the lack of rigor in answering them. Which is not entirely because political scientists aren't rigorous people; they're just very hard questions to get rigorous answers to, because, you know, you can't really run RCTs on countries or, like, rerun history,
which is unfortunate. So that was one. There was also, I tend to gloss over this period of my career just because it makes for a more complicated story, but I did spend like a year and a half thinking I was going to go into urban design and architecture. Yeah, if you, you know, Google my name you can find stuff I've written for Metropolis or The Architect's Newspaper; it was 2009, 2008, something like that. Okay, recently! Well, I mean, like, nine years ago. What were you thinking, man? This was right after I left
my PhD, um, and basically my plan was, I'll be a freelance journalist as a way to learn about cool stuff being done. And so, you know, some of the freelance writing opportunities I was able to find were about urban design and urban planning and architecture, and I've always kind of been drawn to subjects that are about complex systems, and complex systems interacting with each other, and making complex systems work better. This was kind of what drew me to economics and urban design. And you don't mean visual
design, you mean like thinking through the social science and economics of how you lay out a city, or how it organizes transport? I guess I mean all of those layers. Like, there's definitely a physical design layer. It was pretty cool, actually, to think about how the physical design of a downtown, or the physical design of, you know, a waterfront or a park or campus or something, can make the space work better, either work better socially, like cause people to have better social interactions in that space, or make it work
better economically. So it was just cool to think about the intersection between physical design and, you know, economics or psychology. Unfortunately, the rigor in those fields was also not that great, and I think partly that's because, you know, designers tend to be the ones asking those questions, and people who go into design are usually not the same people who are super interested in really rigorous social science methodology. And also, again, it's kind of hard to do experiments on the downtown of a city. So that was sort of
why I ended up shifting into science journalism, because scientists love to answer questions like "how do you know?" and "what evidence are you using?" And when I would ask those questions of designers talking about their projects, they were, like, confused or put off by me asking the question, or they would give an answer that was kind of orthogonal to what I was asking. All these designers just obsessed with impact evaluation. Oh, I don't really fault them; it's not really their thing. Yeah. So, looking back, say, to when you graduated
from high school, are there any other paths that you wish you might have pursued earlier on, like, I don't know, running for political office? Running for political office sounds horrifying. [Laughter] Are there others? I mean, you know, it's easy to look back and wish that I had done things sooner, or not taken random detours into architecture and urban design. But you know, you don't look back with regret that you didn't, you know, commit yourself to dentistry or something? No, I mean, my life right now is just
pretty amazing by my standards. Like, I remember someone asking me back in 2007 or something, whenever I dropped out of my PhD, like, "well, what would you ideally like to do?" And I said, honestly, I would like to spend as much of my life as possible just talking to smart and interesting people about important things, like, that would be great. And that's not a defined career path, but I feel like I basically do that now. You know, I have the podcast, I give talks sometimes, and this project for Open Philanthropy, for the
Open Philanthropy Project, involves having interesting and important conversations with smart and thoughtful people. And I'm, like, doing the thing I wanted to do. It's hard to imagine it being that much better for me, by my standards. Do you think you ended up in a good place in part because you've explored so widely, you tried so many different things? It is so hard for me to have conclusions about, like, why, to the extent that I've succeeded at my goals so far, why is that. It's really hard to speculate or generalize. I mean, the one thing that I said earlier is something that I'm decently confident in: if I look at the opportunities that I got that helped me progress to where I am now, they seem to be because I just met a lot of smart and cool and thoughtful people working on important things, and ended up getting opportunities I wouldn't have gotten if I didn't have that network of friends, basically. So do you think that's good advice for people in general who aren't already confident about what they
want to do and have a clear path to follow? Broadly speaking, the problem that you're working on is improving human judgement and reasoning, and it seems like one of the places this would be most valuable would be in kind of higher tiers of government or other influential institutions, like the World Bank or perhaps the Bill and Melinda Gates Foundation. How feasible do you think it is that some of the research that you're involved with, or are aware of other people like Philip Tetlock doing, on forecasting,
could actually be applied to significantly improve the way that decisions are made in these, you know, important institutions? I mean, I think it would be amazing if legislators or, you know, policymakers were really training their judgment to improve their ability to be calibrated, to practice best practices of questioning their own judgment, or seeking out people who disagree with them, etc. Unfortunately, I think most of the problem comes down to incentives, and if you as, you know, a congressperson, for
example, don't get rewarded for your accuracy, then it's, you know, just gonna be really hard to get you to try to improve your accuracy. I mean, most members of Congress, like ninety percent or something, are re-elected every cycle, so a lot of them just aren't in that much danger. It's a bit surprising that they don't use the fact that they have very high re-election rates; doesn't that mean that they've in a sense got quite a lot of discretion? They could vote for things, or against things that they don't like, maybe more than they do, and they could, you know, more often just actually express their opinion and try to be reasonable. And, like, some of them would lose their seats, but, you know, many of them would then get to actually do what they believe. I don't actually know how insecure congresspeople should feel about their seats, and maybe they feel more insecure than they should or something, but I just don't see an active force pushing them to be more accurate. Like, so let's say
they knew their seat was secure, and they were well intentioned and really did want to pass the best policies for the country. Still, the impact of your decisions is so long-term and uncertain that it's really kind of hard to tell if you made the right choice or not, and you get adulation or disapproval in the short run based on whether your choice seems good or not. So it just seems like, my rule is basically: anytime the benefits of accuracy are uncertain and in the future, and
the costs of trying to be more accurate are paid upfront in terms of effort or unpopularity, there's gonna be a really strong pressure against accuracy. Yeah, I guess the kinds of people who tend to get elected are probably not the most intellectually fastidious people. And while it's true that most of them are re-elected, when it comes to congressional elections, every two years, they also run the risk of getting primaried if they stick out too much, so their party could vote to not renominate them, but
I guess, right, yeah, the primary step complicates things a bit. Yeah. So has your personal experience given you much insight into what places it might be possible to get more reforms? Are there some institutions that are more open to changing how they think about things and trying to become more rational? Well, the intelligence community has seemed quite interested in this, and in fact IARPA, which is sort of a newer spinoff of DARPA, where DARPA is funding research that could produce innovations helpful to the defense
community, to the military, and IARPA is doing the same thing but for the intelligence community. It's run by Jason Matheny, and he was actually the one who funded Phil Tetlock's work on forecasting that eventually got turned into the book Superforecasting. So, you know, Jason is all about epistemic rigor and accuracy. Yeah, both Jason and Philip Tetlock have agreed to come on the podcast at some point, so hopefully we'll be able to find a time; I'm sorta excited about that. Yeah. And so the interest in this community, do you think that can be explained by the incentives being good for the bureaucrats there? Oh, well, I don't actually think that the current intelligence community, or the intelligence community historically, is that incentivized to try to improve their accuracy. And if you look at the kind of forecasts that people in the intelligence community make, they're often sort of hedgy, and, you know, they're not the kind of thing where you could really tell if the person was right or wrong. But I guess
the reason that I named the intelligence community is for a couple of reasons: one, because there just happen to be people like Jason who are working on changing the incentives by, you know, experimenting with forecasting tournaments and things like that; and two, because it at least seems like in the intelligence community there are fewer disincentives for accuracy than there are in many other cases. Like, you know, unlike, I don't know, if you're a pundit, where you have to appeal to the general public. Yeah, you don't, um, people aren't pressuring you
to be either, you know, really sort of mainstream appealing and likable; they're not pressuring you to be contrarian and, you know, super original in your ideas. So at least in the absence of those pressures, I think there's more hope for instituting new norms of accuracy. Hmm, are there any other places that you can think of where there's been progress made? I mean, it seems like, taking a longer-term view, people are more reasonable than they were 200 years ago, so bit by bit the quality of, you know, discourse in public has mostly been
improving, though perhaps the last few years don't look so good. But, yeah, I'm just thinking, what we read today from those times is kind of the most outstanding work by the very brightest people. Are you comparing the best to the best, or what about the median? Oh no, I don't mean that; I'm thinking of the median, just, people are a lot more educated now. Yeah. You're not even convinced that things have gotten more reasonable? That's very interesting. No, I mean, certainly we know more now, so we know more science. I think the
US has always been, so I guess I'll just talk about the US now to keep it simpler, the US has always been pretty strongly anti-intellectual. Okay, so in one sense we're maybe more reasonable, in that there's more scientific knowledge than there was before. In another sense, I feel like we're less reasonable, in that the way topics are discussed is more linked to entertainment value and sensationalism than it used to be. And from what I've read, we're also more polarized, and the more polarized you are, the
harder it is to have reasonable discussions, because you sort of, you know, instinctively react against whatever people, quote-unquote, on the other side are saying. So maybe we're more reasonable now, but it's not a clear slam-dunk answer to me. Yeah, I guess it would be quite hard to answer this definitively, because you'd have to find a way of, you know, randomly sampling a bunch of discourse from today and from, say, 1820, and maybe look at, like, the opinion pages or something of newspapers from each period and judge them. Yeah, interesting. Okay, well, I'll see if I can
find whether anyone's actually done that. That would be a cool thing to find out; I suspect it hasn't been looked at. But okay, so maybe on the broader scale we're not getting more reasonable, but are there any lights of hope other than the intelligence community? Well, I'm pretty happy with what's happening in the social sciences. Yeah, I mean, in a bunch of scientific fields, I've just been paying attention more to the social sciences. Like, you know, the replication crisis is depressing in one sense, to realize that a large fraction
of studies don't replicate, and that things like p-hacking, or sort of misapplications of statistical tests, happen, you know, universally throughout a particular field in a way that really impacts the truth of the results. Stuff like that is really common, and finding that out in the replication crisis has been a bit depressing. But I feel like I've seen attitudes changing just in the last two years even, that there are much more sort of pro-openness, pro-rigour, anti-p-hacking attitudes being espoused now than
there were even two years ago. I don't know if this next thing I'm about to say is true, but I at least heard a rumor that in, like, job interviews, or when deciding whether to hire someone to a research role or a professorship, people are starting to look at stuff like: do they share their data, do they pre-register, things like that, the way we now pre-register our medical studies. That's been really good. So there's a lot of, maybe, feeling like we're worse off because we're uncovering problems that had always been there
and they're now visible when they weren't before, but from what I can tell, a large fraction of scientists, maybe the majority of scientists, really want to fix this problem and are spending a lot of cycles doing so. That's cool, as long as they don't have to totally ruin their careers in order to do it. Yeah, I mean, you know, you'd have to care about something quite a lot to be willing to pursue it even at the expense
of your own career. I think most of us humans are maybe not quite that altruistic, but you only need some amount of altruism to get a lot of progress collectively. So there's a lot of different ways that people could try to tackle the general problem of human rationality and irrationality. Mmm-hmm. Are there any kinds of paths of study or work that you'd particularly like to highlight, like fields that someone could go into or questions they could pursue? If someone was 20 and they're listening to this and they're thinking, "I really like what Julia Galef is doing," you know, what should they study ideally, and where might they go to work once they graduate? Or is it just, you have to be an eclectic public intellectual? No, no. Um, I mean, I think it would be great... there's been, you know, a lot of research into irrationality, into heuristics and biases; this is the kind of thing that Danny Kahneman and Amos Tversky won the Nobel Prize for a few years ago. There hasn't been a ton of research on interventions, like realistic interventions that might help improve judgement. Phil Tetlock
is kind of one of the few exceptions to that. Other than that, I would say the amount of research on debiasing or improving judgment is maybe an order of magnitude smaller than the amount of research demonstrating the existence of judgment flaws. And even within that subset of research about debiasing, most of the interventions that I've seen are pretty small-scale. They're like: if we tell someone about this bias, do they demonstrate it in, you know, a contrived experiment in the lab that day? Which is a far cry from,
like, can we improve someone's judgment in a lasting way that impacts real-life decisions that they make for their life or their career? And the reason, of course, that that is so rarely studied is that it's a very expensive thing to study. You need these long-term studies; it's hard to test things in real life, in a naturalistic setting, as opposed to in a nice, simple, contrived lab experiment. But that is the kind of research that I think we actually need to, you know, have any shot at
a really rigorous base of knowledge about improving judgment. So that's the kind of research I would love to see someone do in academia, or alternatively fund as an independent funder, because, you know, again, the incentives are somewhat stacked against you if you're trying to get a lot of papers published as a young scientist. So the natural things to study, I guess, would be psychology or economics or some other kind of social science? Oh yeah, so, I mean, I guess technically the kind of studies I'm talking about could be done in a bunch of
different disciplines or departments. It could be done in, like, behavioral economics or cognitive science, um, maybe a few others, maybe business, not sure. But yeah, I think to get a feel for the landscape of topics and what interventions would be promising enough to try, studying behavioral economics and cognitive science is probably what you want to do. So you've talked about IARPA and Philip Tetlock; are there any other really outstanding research groups that you could join once you'd skilled up later on in your career? Hmm, sounds like the field is really small. Well, I mean, research groups, that's tough; I can think of particular professors doing work that seems good. I mean, Tom Griffiths' lab at Berkeley, I think it's the Computational Cognitive Science Lab, I might have gotten the name slightly wrong, but he's doing great work on studying whether the brain's intuitive decision-making heuristics are optimal under certain conditions, and how can we tell. And then there's also Dan Kahan at Yale Law School, who's done a lot of work on, I think his lab is called the Cultural Cognition Lab, and he has a blog; if you just Google "cultural cognition" you can read a lot of his research, and that seems well done and interesting and about important topics. So for the right person, they're potential PhD supervisors or mentors? Perhaps, yeah, a little bit. How do you think about your career going forward? Where do you think it might be in five or ten years' time? Well, in five years or ten... ten years, who knows what the world will look like in ten years. Back doing urban engineering,
perhaps? Yeah, God, maybe I'll be a dentist, I don't know; I've decided social psychology is too unrigorous, I need an industry with more drilling. Yeah, but in five years it seems not wholly implausible to me that we could have a sort of loose-knit, unofficial community of, you know, a hundred people spanning VC and tech and the government and the media, who are really thoughtful and curious and have engaged with, like, the twelve most important issues for the future of the world, and have heard the best arguments on both sides, and
have, you know, revised their views somewhat over time, and are acting on those models that they have forged through this process. To me that seems both plausibly achievable in five years and also like it would be really good for the world. Obviously a hundred is a minority, but it's, you know, a hundred relatively influential people in their different fields, who influence where funding goes, potentially how lobbying money is spent to influence policy, what ideas are being put out into the public discourse.
These are really useful things, and so I think, for people who are in a position to direct those resources and public attention and so on, having even a subset of those people have invested time over the course of several years making their models of these topics more accurate would be really valuable. So you're involved in both the effective altruism and rationality communities; what kind of mistakes do you think they might be making at the moment? I mean, I'm quite a fan of both of those communities, I'll just say off the bat. Okay, so these mistakes are not universal, but things that seem plausibly like mistakes to me, that I've seen at least some large subset of those communities making, would include leaning too heavily on, or putting too much trust in, explicit reasoning. Which is not to say, like, you know, blind guessing or just pure intuition is optimal, but I think there are certain models, like, I don't know, utilitarian frameworks, I guess, which often give counterintuitive answers. And I think the rationality and EA communities are quite good, compared to most
of the world, at saying, okay, well, just because it feels counterintuitive doesn't mean it's wrong; this is what the logic spits out, and so, you know, we should really take that seriously. And I think that's great; I think the world needs a lot more of that. But at the same time, if something feels counterintuitive or suspicious, or it feels maybe, I don't know, sketchy, or like it might have ethical concerns around it or something, I think you should take those concerns seriously too, and try to interrogate, like, what seems
wrong about this. I guess I just don't want people to lean too heavily, or too completely, on any one explicit reasoning framework. I thought that Paul Christiano did a good job in a recent blog post, which, I don't know, maybe you can link to? Link to it, great. I think he called it "Integrity for Consequentialists", and I don't know exactly what his trajectory was of landing at this view, but basically this is the kind of thing that I think can happen sometimes if you allow yourself to be suspicious
of some of the sketchy or counterintuitive conclusions of a framework like utilitarianism. You can say, well, gee, it sort of seems maybe bad to have people breaking promises to each other if they think that's the, like, utilitarian thing to do. And that's a fork in the road, where on the one hand you can say, oh well, it's the utilitarian thing, just do it, or you can say, hmm, this seems maybe bad, let me think some more and see if I should be revising this model somehow. And I
think Paul's post "Integrity for Consequentialists" is a really nice, elegant revision of a standard utilitarian model that I think works better. It's probably not perfect, but it's the kind of thing that you won't come to if you just trust the logic of your current framework even when it feels wrong. Hmm. So yeah, putting more weight on stuff seeming weird, or being uncomfortable with conclusions, is one potential thing I would advise. And then, I guess, it's not clear to me, it seems plausible to me, that it might
be a mistake for the EA community to be trying to grow, sorry, to grow as fast and do as much outreach as it is doing. It seems to me like, if the EA community were more like a political movement, then that would seem good; political movements need money and they need votes, and sort of anyone can give money and votes, and so you want to get as many new people in as possible. But there's this other end of the spectrum that's more like a
scientific community or something, and you don't want to just add as many people as you can to the scientific community, anyone who, you know, wants to join; you want to keep the epistemic standards and the quality of discussion really high, and so you have to be more selective about who you add. And I think EA is somewhere in between those two poles, and, you know, it's not obvious to me what the right answer is in terms of how fast to grow. A little more elitist, or...? Yeah, basically. So, you know, it
may be a mistake. I might, upon more reflection and careful consideration, think it's a mistake to grow as fast as we are trying to grow. Interesting. I read that post by Christiano; it's really good, so I'll definitely link to it. Yeah. Um, I guess I haven't noticed that many people, you know, being dishonest or betraying one another in that kind of way, but maybe I only interact with people who, you know, I trust. Which is my point: that, you know, if you behave poorly, then people are
just not gonna want to be around you. So I've chosen the people who I work with and the people who I'm friends with, carefully chosen to be very, you know, trustworthy, reasonable people. So yeah, I mean, I think there are strategic arguments in favor of integrity and keeping promises even when it's not locally utilitarian, or doesn't seem locally utilitarian. And so, I think, to some extent, I mean, to be clear, I haven't seen a ton of this, of actual promise breaking; I've seen a little bit of it. It's not
clear to me... like, the world in general is often dishonest and breaks promises; I suspect that the EA community is actually better than the, you know, average level of integrity in the world as a whole. So I've seen a little bit of, I don't know, promise breaking in the name of utilitarianism; maybe what I've seen more of is people endorsing that as a rule, as opposed to doing it themselves in a way that I was able to perceive. Okay, interesting. Maybe we can talk about that more another time. So what
do you think is the biggest downside of the career path that you've taken? Mmm, the biggest... I mean, one downside is just lack of certainty. Like, if you have a more well-defined career track, like, let's say you go into academia and you get tenure, or you become a doctor and you have a practice, or you, you know, become a lawyer and you become a partner, etc., there's some stability there and some certainty about what things will look like for you ten or fifteen years down the road, and I don't quite
have that. I feel like I've built up some security just through diversity, like the kind of stability that comes from robustness, where I have sort of a number of different irons in the fire, and, you know, maybe if one of them doesn't work out I can ramp up the others, or it won't be completely catastrophic, because I'm not putting all my eggs in one basket. So I've tried to build in some robustness that way, but it is, you know... I am kind of figuring it out as I go along, and
that's just something that's going to be true any time you do something that doesn't have a standard template. Yeah, someone I was speaking to for another interview today was saying that it's a lot easier to do that when you have a partner who's able to potentially financially support you if you need to run your own projects. I guess you were leaning on your parents earlier in your life? Yeah, right after I left grad school, yeah. Is it true that if you have money in the bank, or if you
have your runway, it's a lot easier? It's true, yeah, absolutely. So are there any things you could imagine learning in the next few years that could really send you off in a different direction with your career, working on different problems or tackling them in other ways? Yeah, I think if I updated significantly in favor of one particular global catastrophic risk being imminent and likely, I might... The stuff that I'm doing seems to me to be useful and valuable in the
medium run, and sort of useful in expectation across a lot of different possibilities. There's no one useful consequence that I think is likely to result from what I'm doing; I just sort of think that in general, if we have influential people and decision-makers following thinking and discussing procedures that are correlated with accuracy, then we get better results in the long run. But it's a very indirect connection to draw, and it may not be the best thing to do if there's one risk that suddenly looms.
I might just shift my attention and resources to working on that one particular risk. Yeah, that makes sense. So we've been at EA Global all day and we're both pretty hungry, so we should go off and get dinner, but one last question: are there any other conferences that you go to regularly where people could potentially meet you, or network with other people, if they're interested in working on the same kinds of topics? Oh, well, one conference that I have been going to every year is the Northeast Conference on Science and Skepticism, which is sort of my roots: it was the origin of the podcast, and thereby the origin of my current career trajectory. It's in New York every year and it's run by, basically, the skeptic community. So there's some overlap between the skeptics and the rationalists or EAs. They tend to focus on evidence and scientific knowledge and scientific literacy and education and things like that, and they don't tackle the same kinds of questions that EA or the rationality community does. They're not
quite as focused on what is the biggest, most important, most impactful thing there is to figure out, and they're maybe somewhat more focused on just promoting the consensus view in a scientific field against misinformation or pseudoscience or fraud, which I think is also valuable. So yeah, that's in New York every year. I tend to do a live podcast taping at NECSS every year, so you can check out information about the previous NECSS, and then as we get closer to the
next one it'll have information about buying tickets and so on. It's at necss.org. Great. Well, my guest today has been Julia Galef. Thanks so much for coming on the podcast. My pleasure, this has been fun, Rob, thank you. I hope you enjoyed that episode. Once again, you can get personalized one-on-one coaching, free, to help you work on the same kinds of problems that Julia is, by applying on the 80,000 Hours website. The link is in the blog post or the episode show notes. If you enjoyed that episode, we have much more where
that came from. You can subscribe, see all the episodes that we have out, and keep track of new ones by searching for 80,000 Hours in your podcasting app. It would also be great if you could let a friend know about the show, or rate us on iTunes. Thanks for listening; speak to you next week.