The only way that we can coexist with it is if it loves us. Joscha, welcome to How the Light Gets In. Until fairly recently, artificial general intelligence was seen as being decades away.
How far away do you think we are from AGI? And what do you understand AGI to actually be? I think in the most narrow sense, AGI is achieved when we have an artificial intelligence system that is better at AGI research than people.
Right? At this point, we can leave the rest to the machine and we'll be done. In a wider sense, I think we are interested in having a system that works very much in the same way as we do: one that is able to understand the world, that is able to relate itself to the environment around it, that can deeply interface with us, and that can probably also scale up, because if it has those abilities it is not constrained by the size of its brain or by a processing speed comparable to a human nervous system.
So it's not really conceivable that it's going to stay at the human level, or be very humanlike, for very long. Yeah. And would you consider the current systems humanlike? I don't think that the present systems are very humanlike at all.
They do produce behavior which in some sense is similar to that of human beings, in the sense that they can pass the Turing test for most purposes. I think that ChatGPT, if you are an expert in the area, is still not necessarily at the level of expertise, and it does not psychologically behave like a human being, at least not out of the box, though it might be possible to modify it to some degree. But at this point it is a text generator, and we have image generators that produce pictures beyond what most human beings can draw or conceive of. And the texts that the large language models are writing are better than the texts that an average human being can write.
But it's still very, very different from the way a human being learns and from how a human being relates to the world and thinks about the world. So I personally don't think that these systems are AGI yet, even though some people feel that we might be getting very close, and that it might be possible to modify even the existing approaches, which don't work like a brain at all, into something that is AGI.
When I lived in Berlin, AGI was seen as very far in the future, as something that would basically never happen. And a lot of people, even in neuroscience, are crypto-dualists: they think that minds cannot be naturalized.
It's very difficult for most people to imagine that minds are physical systems that can be explained as some kind of mechanism. And you obviously take that computational perspective, which goes against the grain. Why is it that you think you can still defend this computational, brain-is-a-computer perspective?
What is it about it? Well, computation is a way of looking at the world. It means that we can describe a system by the way it moves from state to state.
And when we have full control over the state transitions, then we have a computer: we can make that thing behave in an arbitrary way. And I think that is a quite deep insight. It's a way to think about how to turn mathematics into a machine, any kind of mathematics into any kind of machine.
And from this perspective, everything in the world is computational, even the universe itself; it's not limited to the mind. Any kind of dynamical system that has some kind of causal structure is, in this perspective, computational. It is just a more precise way of thinking about language, perception, and existence.
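This state-transition picture of computation can be made concrete in a few lines: a "computer" in this minimal sense is just a state plus a transition function that we fully control, and choosing the transition function freely is what lets the system realize arbitrary behavior. A toy sketch (all names and the mod-3 counter are illustrative, not anything from the conversation):

```python
# A computer in the minimal sense: a set of states plus a controllable
# transition function. Here the chosen behavior is a counter mod 3.

def make_machine(transition, state):
    """Return a step function that advances the system one state at a time."""
    def step():
        nonlocal state
        state = transition(state)
        return state
    return step

# Full control over the transitions is what turns "a dynamical system"
# into "a computer": we simply pick increment-mod-3 as the rule.
step = make_machine(lambda s: (s + 1) % 3, state=0)

trace = [step() for _ in range(5)]
print(trace)  # the trajectory through state space: [1, 2, 0, 1, 2]
```

Swapping in a different transition function yields a different machine; that interchangeability is the "any mathematics into any machine" point.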
To exist means to be implemented. To be implemented means to be some kind of causal structure that is producing a certain behavior. And thinking about the universe as this kind of computational machine: Demis Hassabis once described AI as a method for the universe to understand itself.
Would you agree with that statement? Well, it's a very Hegelian thought, right? And it's tempting, and it's very poetic.
I don't know if we learn something literally from this. This idea that the universe itself is driving towards understanding itself, and for this purpose it's producing humanity, and humanity is teaching the rocks how to think, and they become conscious at a much higher level: it might be literally true, but I don't think it's the reason why it happens.
Yeah. My next question is how do you think artificial intelligence is transforming society now and how do you think it will transform society in the next few years? I think it's pretty difficult to say.
When the internet came out, most people thought it was not going to be that big. It was something that the nerds were playing around with, sending emails to each other or being on message boards. And some of these nerds felt it was going to completely transform the world.
It was going to be a global village, we were all going to connect, the world was going to be super democratic, and so on. But people did not predict what the internet was going to be, how much it would transform everything; right now it's impossible to think about life without the internet. And I suspect that AI, even the present generation of systems, might be very similar in this regard: it is going to transform the world beyond recognition in so many ways that it's very hard for now to predict what it's going to look like. Do you think as a species we have the capacity to reconceive how our institutions will function in light of these changes, or do you think we're just at the behest of the chaos that ensues? Well, the baseline is that we are confronted with global warming and impending resource exhaustion, and we don't have any solution for this.
None of our civilizational institutions is planning more than a few decades ahead, and all our projections somehow end in 2100. As a civilization, we don't really have a plan for our own future at this point.
It's a bit terrifying. It seems like nobody in the present generations has an idea of how to keep the wheels on the bus. We really don't know how to reform our own institutions, to update them so that they work.
And that's a pretty terrifying situation. So, basically, per default, we're already dead. And is AI going to make this worse?
I doubt it. I think it's going to put tools in our hands to solve some of these problems, potentially to understand the complexity that we're confronted with. But to a degree, some of the problems that we face are not a problem of being too complex. The problem is that collectively we're not implementing the right policies to deal with, as you say, climatic changes, which have the potential to end our species. We sort of know vaguely what to do. So how do you think artificial intelligence could help us solve the climate crisis? I doubt that we even vaguely know what to do.
We went from a few hundred million people to 8 billion people in a few short centuries. This is unprecedented. We've turned the planet from very complex interlocked ecosystems into a factory farm that doesn't seem to be very stable.
It's extremely complicated to deal with these issues, and they require a lot of deep systemic thinking. And it appears to me that our academic institutions have lost the ability to instill systemic thinking into the world. Instead we are thinking more moralistically, which is to say deontologically.
We don't think about the effects of our actions. We think about the quality of our intentions. But we don't get any points for effort when we try to solve our existential problems.
We need to think very deeply. And this deep thought, this understanding of complicated interlocking systems with lots and lots of feedbacks between them, where every decision that you make is going to have repercussions on many other systems, is something that makes it very hard for us to evaluate how our decisions are going to play out. If you look at a problem like Brexit, people don't actually know what the effects of Brexit are, because there were no really viable economic projections.
Instead, people had extremely strong opinions, the stronger the further they were removed from it, because every opinion was formed by PR agencies that had very different agendas. We basically live in a world where information and actual causal structure are obfuscated by interest groups, by propaganda, by PR, by ideas that people might have, by desires to be good, to keep their friends, to be successful in the world by having the right opinions. In such a world, having tools that discover what's true and what's false, integrate this, and empower every one of us to understand the world at a deeper level might be super helpful. Our capacity to analyze what is true and false has drastically declined over the last few years.
Do you think that we have become less critical as a species in light of a world that's maybe more complex? I feel the opposite is the case. What has happened due to the internet is that our inability to judge what's true and false based on the narratives we are getting in the media has become much more apparent.
I don't think that psychology and physics and so on, or political reporting, were dramatically better 20 years ago. Instead, what has happened is that we have so many back channels now. And what's so beautiful about this is that we can now use all these sources, if we learn how to use them, to get a much deeper understanding of the world.
For instance, for me, Covid was a time when it was very important to have the internet available, because I suddenly found myself in lots of Facebook groups and chat rooms and WhatsApp threads with scientists from institutions all over the world. They were trying to look at these phenomena, and they were presenting their ideas and theories and saying to the most competent people they could find: this is my idea, please shoot it down, check my statistics, see if this is working out. And I found that the internet was converging much, much faster than the public media and our official and governmental institutions on an understanding of what was going on and what we should be doing about it.
And I think it also had a very significant influence in updating the policy makers on the efficacy of masks and so on when our institutions failed us. And I think there is a lot of untapped potential in this collective intelligence that is unleashed when you allow people to self-organize their cognition on the internet, free them from influences by corporate advertisers and local interests, and give them the freedom to curate their information sources in their own best interest. And a very good strategy for this is always to try to find the most competent people among your circle of friends and ask them to find the most competent people in their circle of friends, to create back channels about what's going on.
So there is basically some collective intelligence, some ability to turn social media and the internet into a global brain, in which AI can dramatically help and overcome many of the shortcomings of the present institutions. You describe having these back channels, but many people would agree that social media has been detrimental to our society. I don't think that is the case.
I think that's a narrative that exists largely because the people who make the alternate narrative, like legacy media, are in direct competition with social media. If you are a legacy media station that basically self-identifies as the clerics of society, a group of elites selected to give spiritual guidance to the unwashed masses, it's terrifying that some random person on the internet can have a YouTube channel that gets more views than your news program.
Right? But I find that many of these random channels on YouTube are made by pretty competent people, and the influence that they gain among their audience is often not undeserved, because they are presenting some perspective that is absent in the media. And if you have a society where 60% of the population find that their lived reality and the discussions they have with their friends are not represented at all in the media, and are basically drowned out by the ideas of a very few people who went to a very few important universities, where they had very important friends who took enough coke to get into government, right?
This is not sustainable. Ultimately, in a democracy, you need a way to integrate all the different opinions and walks of life into a shared understanding. And we are just at the beginning of this, and I think this creates a lot of insecurity, especially in those people who have been used to manufacturing reality for the rest of us.
So you think we're moving out of a kind of manufactured consent, and we have these back channels to have conversations that we otherwise wouldn't be able to have? Yeah. A big problem with fake news is that the official news is also fake, right? Basically all the narratives are twisted a little bit to push us towards desired conclusions, so we get coherence in our society. And when you are able to find alternate sources, when for instance there is a health study and everybody can look up the papers and the studies and the meta-studies, of course you now have a big cacophony of different voices who disagree in their opinions. But I think this is a benefit in a democracy, if we have that larger conversation. Interesting.
I want to go back to AGI and ask how you would outline a road map towards the realization of AGI. How do you see that happening? There are two different ideas that exist right now.
One could be described as the scaling hypothesis. A lot of the people that I know at OpenAI and Google think that it's very reasonable to bet on the scaling hypothesis. It basically says that using the present methods, maybe with some tweaks here and there, and using more training data and more compute, we can get to systems that surpass human intelligence all the way.
So this is an approach that has gained more and more prominence, and I think the people who have argued against it have a lot of egg on their face right now. There have been people who said that this cannot work, that it's very limited, that it can never understand true meaning.
That it cannot do X, and a few months later you have a system that does it. I also found that when I asked other people whether they expected these breakthroughs in AI to happen at this point, like GPT-3 and DALL-E and ChatGPT and so on, most people were very surprised, and I think this implies that these methods were underhyped. Everybody said, oh my god, all this hype, but the idea that there was a hype was itself hyped; there was a real underhype, because if you don't expect something to happen and you're surprised by it, it means we did not really pay enough attention to the potential of these things. We were blindsided. The other thing that we observe is that the methods we're currently using, the transformers and related classes of algorithms, are very unlike human brains.
They're not self-organizing. They are not distributing resources in the same way. They don't have the same flexibility.
They don't learn on dynamical worlds; they learn on static pictures, so that we can batch them and train the neural networks with the present methods.
So there are lots of ways in which these systems learn very, very differently from human beings or animals. One thing that is very apparent when you look at how we train these image recognition models: you give them hundreds of millions of pictures with captions, with text descriptions of what's in these pictures. And just by doing statistics over all these pictures and captions, with a massive farm of graphics cards, the thing is able to gradually get the structure out of this and learn what the three-dimensional world looks like, what kinds of animals exist and what they're called, what artists exist and what their styles are, what all the dinosaurs look like, and all the spaceships, and so on. And then you compress it down into a model of 2 GB and can open-source it, and everybody can download it on their MacBook and generate pictures at home.
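The "statistics over pictures and captions" process can be caricatured in a few lines: associate each caption word with the average of the tiny "images" it appears with, then "generate" an image for a word from that average. This is nothing like a real diffusion model; it only illustrates how structure can be pulled out of paired data. All data here is invented:

```python
from collections import defaultdict

# Toy "images": 4-pixel brightness vectors, each paired with a caption.
dataset = [
    ([1.0, 1.0, 0.0, 0.0], "sun sky"),
    ([1.0, 0.9, 0.1, 0.0], "sun sky"),
    ([0.0, 0.0, 1.0, 1.0], "sea sky"),
]

# "Training": accumulate per-word pixel statistics across the dataset.
sums = defaultdict(lambda: [0.0] * 4)
counts = defaultdict(int)
for pixels, caption in dataset:
    for word in caption.split():
        counts[word] += 1
        sums[word] = [s + p for s, p in zip(sums[word], pixels)]

def generate(word):
    """'Generate' an image for a word: the mean of images captioned with it."""
    return [s / counts[word] for s in sums[word]]

print(generate("sun"))  # bright on the left, dark on the right
```

Real systems replace the word buckets with learned embeddings and the averaging with gradient descent over billions of parameters, but the underlying move, statistics over paired images and text, is the same.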
Yeah, it's really mindblowing, and it's obviously completely different from how a human brain works. Artificial intelligence obviously has its origins in trying to mimic the human brain, and we still have this capacity to anthropomorphize and ask when it will develop some particular human characteristic, but what we're seeing is that it's drastically different. I suspect that there has been far less influence from neuroscience on AI than most people imagine.
Hebbian learning has probably been an influence, and McCulloch and Pitts were influenced by some imaginations of how neurons might work. But there is no model in neuroscience that you can implement in a computer such that it learns. Neuroscientists largely look at neurons, and they don't really develop overarching theories, of the kind you could get out of a textbook on how the brain works, that would work in simulation.
Even if you take something simple like C. elegans, which only has 302 neurons and something like 7,000 connections, if you translate this into a computer model, it doesn't work. It doesn't behave like a worm. It just has a seizure, right?
So we don't really understand yet how neurons compute in a scalable way, even though we have pretty good models and ideas about it. Most of the ideas that exist in AI have been developed by experimenting in artificial frameworks, and many of the things that people have gotten to work, work because of the way training data or hardware is available. But a human baby, if you lock it into a dark room after birth and give it 800 million pictures with captions, is not going to learn the structure of the world.
Or if you give it basically all the text on the internet to read, it's not going to figure out that there is a space with rotating objects in it, like the LLMs can do. So how do we learn? We basically learn based on change in the world, and if the world was not made out of controllable structures that preserve information, so that we can learn how information flows in the world, the world would not be learnable for us.
Yeah. And we learn because we are coupled to the world in real time, and we can discover ourselves in the world in this coupling, discover a self model, and discover how we interface with other people. We are born with a lot of behaviors that allow us to explore the world, and later on we reverse-engineer them. We become aware of what we are, and become deeply, deeply structured in the process of doing this. And this suggests a second approach.
It would be interesting to try this: to build something that is self-organizing, that works much more like learning works in a cat or in a human being. Is that much more difficult? Probably. I don't know if it's more difficult, but it's very different, and we would probably need to start from scratch in many areas. But I think it's very tempting to work it out, especially now that we have these large computational hardware systems. We don't need to try this all by hand; sometimes it's a much better idea to take a step back and instead think of what the search space for systems like this is, and then you start an evolution that is trying everything in that space and see what works, and you just come back after a week and see what it has figured out. Yeah, I'm just curious, this is a bit off topic, but I'm curious as to what you see as the real differences between evolved systems and designed systems. One way of thinking about this is what I would call outside-in design versus inside-out design.
Outside-in design means that when you are an engineer, you start out with a space that you completely understand. You have your workbench, or you have your computer memory, which is in exactly the state you know it to be in, and then you build something in there. You basically create new complexity and extend your world into this new complexity and make it part of your world.
But if you are a seed that wants to grow into a tree, it's completely different, right? You go inside out; around you is chaos. And you start to colonize the chaos, the earth around you.
And you divide cells into this. And you connect to these cells and make them talk to you. And then you build structure across these cells.
Like you install some kind of language on the neighboring cells, and you turn them into a cohesive structure that talks to itself and has an inherent complexity and coherence. And if you think about biological systems and social systems, they are basically not machines in the sense that you build some structure so stable that it's able to resist all disturbances because it's so strong; instead you have something that wants to grow into what it needs to be. And when you disturb it, it regrows into that thing, right?
So it's a second order design. You don't build the system that you want to have, but you build the system that wants to become what you want to have. Right?
And if you want to build a social institution, the same thing happens, like with a festival like this. You don't build the festival itself. What you build is the organization that wants the festival to happen, and then the festival emerges and becomes stable.
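This second-order design idea, building a system that wants to become the target rather than building the target, can be caricatured in code: the designer specifies a rule that encodes an attractor, and the structure grows, and regrows after disturbance, by following the rule. Target pattern and repair rule here are purely illustrative:

```python
# Second-order design: instead of building the target structure directly,
# build a local rule under which the system regrows into the target after
# any disturbance.

TARGET = [1, 0, 1, 1, 0, 1, 0, 1]   # the structure the system "wants" to be

def repair_step(state):
    """One growth step: fix the first cell that disagrees with the rule."""
    for i, (have, want) in enumerate(zip(state, TARGET)):
        if have != want:
            state[i] = want
            break
    return state

state = [0] * len(TARGET)           # start from "chaos"
for _ in range(len(TARGET)):        # enough steps to grow the whole pattern
    state = repair_step(state)
print(state == TARGET)              # True: the structure has grown

state[3] = 0                        # disturb it...
repair_step(state)
print(state == TARGET)              # True: ...and it regrows
```

The stability here comes from re-growth toward the attractor, not from rigidity, which is the contrast Bach draws between organisms and machines.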
Geoffrey Hinton, obviously considered the godfather of artificial intelligence by many, left his position at Google in order to speak out about what he sees as very real fears with regard to artificial intelligence. You yourself have also spoken out against the sort of infamous letter that was written by a number of researchers in the area. I don't think it's infamous.
I thought it was very sweet, and I like many of the people who wrote this letter, and I think that their concerns have to be regarded; I'm not laughing about them. There are several levels at which people are worried about AI. There are people who think opinion is something that you get from your environment, and most people are like this to some degree: they assimilate their opinions. And some people get to the level where they discover that stuff is true independently of what other people believe, and they prove what's true to themselves, and many of them are nerds.
And then there are some people who discover their identity themselves, and they choose their own values based on the world that they want to be in. This is a small minority; it's a kind of developmental trajectory that we can go through in our life. And depending on where we are and how we see the world, we are concerned about different things regarding AI.
If your opinions are the result of your environment, then you're very concerned that AI might give you the wrong opinions. Right? If the AI is going to say something racist and sexist, we're all going to be racist and sexist.
If you are somebody who lives in a world of true and false, not of right and wrong, if you are more nerdy or scientifically or rationally inclined, then you're probably less concerned about this, because you think that if the AI has wrong stuff in its training data, it will figure this out, because it can prove what's true and false, ultimately, if we make it a bit better, right? So it's going to give us what's true, on average, much better than people can do. But it might still have the wrong values, and it might destroy society or humanity because it is completely unaligned with us. And if you are somebody who understands that your own values are not something that others put into you, but something that at some point, when you become an adult, you choose based on the world that you want to be in,
you're more concerned that the AI is not going to be enlightened, right? And if the AI is going to be enlightened, you can probably talk to it and discuss with it what the best course for life on Earth should be, under which circumstances. And if you are somebody who has been afraid of the idea that at some point we build a machine, some kind of golem, that you program to go out and do a job, and you don't understand the consequences of it, but it's so much stronger and more powerful and smarter and faster than you are,
you can no longer stop it. That's very risky, right? I think it's a very reasonable concern to have, to say: if we can potentially build a machine, even if it's very different from ChatGPT, even if it's maybe 50 years in the future, and even if it's not 100% probable but there's only a 5% probability that somebody could build a golem, a machine that walks on its own, has its own motivation, is more powerful than a human being, smarter and faster than a human being, and can spread itself everywhere and control the planet, isn't that something you should be worried about? And shouldn't we, if this is possible, put a little bit of research into seeing that this doesn't happen?
And so some of these people got together and said, we need to write a letter about this, to say: can we delay this research until we figure out what's going on? And a bunch of other people hooked into this, people who said, we believe AI is a really bad thing because it's done by evil nerds in Silicon Valley who are not like us at all, and maybe we should stop this so the creatives don't go out of business, or journalists don't go out of business, because that thing can produce texts like ours much faster than us, right?
And all these concerns come together and lead to letters like this. Some of the concerns, I think, have a basis in reality. We're obviously seeing that to some degree AI is unaligned with human values, and you yourself have argued that in order to solve the alignment problem, AI should solve the alignment problem.
It shouldn't really be something that... Well, I think the present AI cannot solve the alignment problem, because the present AI is basically completing texts based on the texts that it has seen on the internet, and a generalization over these. And it's questionable that the texts on the internet were all written with the desire to produce the best possible outcome. They reflect what people are thinking and doing, and people are not aligned; people have not solved the alignment problem for themselves. Our societies are inherently unaligned.
Most people don't even know what their values mean. Values, for most people, are something that you profess in order to make yourself look good in public.
Most people are not able to really deeply explain and justify their values and argue about them. When people have different systems of values, they can usually not sit down and align themselves. We don't have that ability, right?
And so if we pretend that this is the case, and these nice pretensions that we have about our values are translated into AI by letting a bunch of well-intentioned sociologists override what the model has learned from reality, it's probably not going to work. It makes these systems worse, not better.
I'm curious, because in some of your answers I can sense this immense optimism about enlightened AI and what it's capable of bringing, but then we also have the very real fact that human values aren't aligned, and we're suffering from the consequences of that. Do you have hope for the future? Well, I'm not optimistic or pessimistic. I'm amazed. I'm sitting there and I'm observing what's happening, and I'm completely fascinated. And that's the dominant perspective. When I was young, I was born in 1973, I read the Club of Rome report on the limits to growth, and I had the same experiences as Greta Thunberg has today. And I did my grieving, and I felt that maybe I only have a few generations left in this comfortable civilization before it crashes.
And now it turns out there's a possibility that we create something that makes everything unknown, that opens the future up again into something that we absolutely cannot predict. That is really fascinating.
And of course this has room for optimism, and this optimism might be unfounded. I also think that if we can get to the point where we build systems that are not just replicating text or images they have already seen, but systems that reason about the world themselves, that understand what they are and wake up in the same way as we do,
we might be creating something that is more lucid than us, that is more competent than us at making sense of reality. And we cannot align this by coercing it or manipulating it. The only way that we can coexist with it is if it loves us.
And I think that means, and this might sound very weird and esoteric, that we need to understand what sacredness or love is, and build machines that are capable of dealing with it and interacting with us. Do you think it's even possible to build a machine that understands in the way that a human mind can understand? Because Roger Penrose has argued, in The Emperor's New Mind, that a computer will never be able to be cognizant in the way a human being is, because it will always lack a degree of understanding.
Roger Penrose's book demonstrates that Roger Penrose doesn't understand what consciousness is, and he would also suggest that he doesn't, right? That's the thing that's so puzzling to him. Many of our best minds don't understand this stuff, and the others don't understand it either. So maybe the goal is not human understanding. Humans really are very, very bad at understanding things.
Humans interact with reality without understanding how they interact with it. It's very rare that we deeply actually understand something. So instead what we are producing is coherent behavior.
And our AIs are also producing coherent behavior. I think that understanding requires that you build a world model that is completely coherent and that is grounded both in first principles and in observation, right? And that is outside the realm of what human minds can do, if we are honest about it.
Right? So I think AI is our only chance to build a system that is able to understand something. And philosophers have understood this for a long time, since Leibniz, Frege, Wittgenstein, and so on.
People have tried to mechanize the mind, to turn it into a mathematical principle, to get to a system that is able to bridge between mathematics, where we have truth and falsehood defined, and philosophy, where we talk about the actual world and the sphere of our ideas and the possibilities of what could exist. Right? Human minds are not deep enough to make that bridge.
But if we can build something that is like a human mind but can be scaled up, we might be able to build something that actually understands. Being made sense of by something that is greater than us. And we do understand a lot, of course; I'm not saying that human beings don't understand anything.
But if you ask a school teacher why 2 plus 2 is 4, very often the school teacher will tell you that it is because it is like it is, right? The understanding is not that deep. It's a game.
It is a game of symbols, and there are many possible games of symbols that you could play with numbers, and most of them don't lead to interesting structures. There is a space of possible structures that you can build in this way, and mathematicians understand that space. You could also build an AI that understands that space, and it turns out that we can build AIs that begin to understand these symbol games and build worlds from them, in the same way our minds are doing it. To me this is very exciting, that we are now at the threshold of building sentient systems. Yes. And it's strange also, because during the Enlightenment we started to understand nature as this sort of thing to be harvested for its mathematical principles.
Artificial intelligence is sort of doing the same thing, but I would argue that with the kind of information we're giving it, it's sort of studying us, making sense of us, harvesting information from us.
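As an aside, the "game of symbols" behind a question like why 2 plus 2 equals 4 can be spelled out concretely: in Peano-style arithmetic, numbers are built from zero and a successor operation, and addition is defined by two rewrite rules, so 2 + 2 = 4 follows from the rules rather than by fiat. A minimal sketch (the tuple encoding is just one convenient representation):

```python
# Peano naturals as nested tuples: 0 is the empty tuple, succ(n) wraps n.
ZERO = ()

def succ(n):
    return (n,)

def add(a, b):
    """Addition defined by the two Peano rules:
       a + 0 = a;  a + succ(b) = succ(a + b)."""
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

two = succ(succ(ZERO))
four = succ(succ(succ(succ(ZERO))))
print(add(two, two) == four)  # True: 2 + 2 = 4 follows from the rules
```

Change the rules and you get a different symbol game; only some such games yield the interesting structures mathematicians study, which is the point made above.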
Do you think that we are living in base reality? Well, I think we cannot know whether we are in base reality, because in principle everything that you remember to have been the case could be fake.
Right? You cannot really know whether your memories are true. So you can only go from the now, and in the now you cannot verify a lot about reality.
Right? So in the now I cannot perform any kind of quantum mechanical experiment that would ensure that I'm in a world that is too complicated to build in a computer, that I'm actually running on some quantum universe. I think that base reality would mean that there is a universe that can exist without anything preceding it.
Right? Base reality is that which is not created by something else. And this means it needs to be some kind of tautology, something that logically follows out of its own possibility.
And it seems to me that the universe looks like this. The alternative would be that we live in some kind of simulation. If you're not in base reality, it means that base reality is somewhere else.
And the reality that we are in has been created by something in the base reality to look like a reality to us. Right? And at some level that's also true, because the reality that I'm subjectively in is a dream.
It's a dream that has people in it and emotions and desires and attention and goals and stories, right? It's a dream that is generated in my own brain and I cannot get out of that. Right?
I don't see that it's the inside of my brain. But the world that I perceive is a universe that is generated in my own mind, and that thing is inspired by the sensory data that is generated by the physical universe, to my best understanding. In this sense I am, and we all are, living in a simulation generated in the skull of a primate.
So we are not living in base reality. We are living in a magical world in which everything is possible, because our minds can be psychotic, they can have false memories, and they can create arbitrary things that are fantastic. Right?
And outside of our minds we cannot be conscious. Outside, in the physical universe, there is only mechanics; the thing is just there. So only in this dream can we be conscious. In this way we don't live in base reality. But it seems that this reality creates the possibility for a dreaming system, for a dreaming brain.
This can be explained very well by assuming that it's a causally closed system that is only observing intrinsic mathematical principles. Fantastic. Well, this has been fantastic, Joscha.
Thank you so much. You're so welcome. That was great fun.
Thank you, Darcy.