There's been much discussion about the far-future possibilities of AI, and we will get to that, but it's also good to focus on the here and now: the current impact, challenges and opportunities of artificial intelligence. So let's start there and move into the further future as we go along. This is a session in which I would love to have some of your contributions, so at a couple of points during this conversation I will be reaching out to you for your questions and comments.
We should move quickly on. Let me just make sure my telephone is turned off; that would be bad, wouldn't it, if the moderator's telephone went off?

So, let's focus on current possibilities with AI. Let's start with you, Arisa, if we may. What do you see as the major challenges that AI is presenting to us as humans right now?
Thank you, Adam. So, yes, challenges. There are various challenges, such as privacy, accountability and transparency, but I think the biggest challenge is the challenge to what a human being is. I think this is related to the morning session.
Many of you are probably using ChatGPT, Gemini or even DeepSeek, and you may find them very effective in supporting your work, your lives, or perhaps your research. But that raises the question of what the human role is, and how we can use these tools in a proper way. It challenges what we expect of future society. That is the biggest question, and underneath it there are various domain-specific discussions about privacy, security and safety, but I think we need to tackle the bigger issue first.

Lovely that you start there. One clear example of this being a problem is the automation of many people's jobs. Many people who are not in any way involved in the development of AI, or in thinking about AI, are finding that AI may replace what they do, and that is a huge societal disruptor. And I guess the point is that they are not involved; they are just experiencing this big change.

I think so too. This whole LLM issue has changed the conversation, because around 2015 or 2016 most of the discussion about AI challenges was aimed at AI developers, but what we are talking about now is employment, education, the future of work, what the human role will be. The challenges have become much more user-oriented, and I think that widens the circle of people who need to talk about this topic.

Indeed, exactly. It should be a much more inclusive conversation. Do you want to comment on this briefly?
Yes. I want to mention the recent advancement of AI. As Adam said, automation is progressing. We now have a lot of AI agent services, or we could say agentic AI, which can do things like make reservations for hotels or travel, or make purchases on a website such as Amazon. The previous generation of generative AI did conversation: if we put in a question, it gave us an answer. What these systems can provide now is action, a sequence of actions. They do things for the user. That's one thing; the other thing is physical AI.

With the advancement of so-called robotic foundation models, robots now have generalized behaviors. They can fold laundry, or do many other housekeeping jobs. So from now on the robotics industry may change a lot, and that is happening right now.
Yes. And in terms of AI agents, of course, they can set their own sub-goals, and that raises questions about what goals they will set themselves.

That's true, but currently that is more of a theoretical concern. The reality is that we cannot yet expand the domains in which AI agents work.
For example, making a reservation or purchasing something for the user is a good starting point, but if we expand beyond that, the agent does not work any more. So we are not yet at the level where the sub-goals an agent sets itself are an issue for humans; we are at a much more primitive level than that.

Thank you very much. So much that is positive, and so much that is challenging, around this. Ada, do you want to talk about the impact of AI on research at all?

On research? I don't know. Well, it's up to you.

You work on protein folding, and one of the major developments in AI recently has been the solving of the protein structure prediction problem with AlphaFold 2 from Google DeepMind. But maybe you want to talk about something else; the floor is yours, please.

Thank you. Proteins perform almost all functions of every living cell in every living animal, human, bird, flower, anything that grows. Proteins are able to do this, or rather they are designed to do it, by their structure. The structure accommodates the materials that participate in producing new materials.
When proteins were discovered, and I'm not talking about that long ago, but once protein sequences were known, the sequences of the amino acids that make up proteins, people thought that maybe they could predict what the structure of a protein would have to be to meet this or that challenge. I don't want to say they failed, and I also don't want to say they succeeded. The success was marginal, though very good for the time: between 15 and 20% correct prediction of structure, based on the structures that were known at that time, which were very, very few.
I'm talking now about 50 or 60 years ago, and the level of prediction stayed more or less constant, with a small increase, until it reached about 40 to 45% correct prediction. For the prediction, a specific organization was created, called CASP. It would focus on a specific protein which was at that time a subject of structural research.

In parallel, the participants tried to predict the structure, and as I said, there was an increase in the correctness of prediction, but not beyond about half the cases. CASP was held every second year, each time somewhere else in the world. I thought that 50% correct prediction was fantastic, but Moult, who started this initiative, thought it should be better.
When AI came into the game, the story changed, but not only because of AI. It was also because more structures were known. The basis for this type of research became larger, and there was more information from structures that had been determined experimentally, by crystallography.

It's very nice, actually, that you emphasize the human component in this, because the story could be told, as perhaps I introduced it, that AI and AlphaFold 2 came along and suddenly we had the structures. But yes, the human effort of the hundreds of thousands of people who had been collecting structures and depositing them in the Protein Data Bank, and the human success that had happened prior to that, goes alongside it.

So it's a beautiful example of humans and AI working together, actually.

Yes, and that's the only way structures became part of AI.
Protein structures, and also what proteins do, their performance.

Can I just jump in here? I think what you mentioned is really important: the human endeavor to do this research, and then AI arrives as a tool, and the question of how we can use this technology well, how we can use it correctly as a tool. My research field is more about how people can use AI in the workplace, or in research, and what I see is that this kind of collaboration is not working well in some fields. There are places where it works really well, and there are places where it does not, and that comes not only from technical limitations; it is more about how human beings adapt, and about the environment and its readiness for AI.

So what you mentioned is how researchers accumulate data and use AI as a tool. We may think of AI as a great tool, but even if the technology is good, if people's awareness, or an organization's readiness, the infrastructure, the networking, the security, the data sets, if those things are not ready, and I would say that many places are not ready yet, then the implementation of AI does not go well. So in the short term, we might be expecting too much of the technology. We also need to reconsider our work styles and our awareness. We can adjust ourselves to AI, but we can also adjust how AI works. It's more like a collaboration.
It's not as if you just implement the technology and everything goes well. It's not a silver bullet.

Thank you very much indeed.
That's a very nice point on which to open up to the audience. We don't have very long, so if you would like to make a comment or ask a question about this near-term impact of AI, please do so.
Is there a hand raised? I see a hand raised here.

Could we have a microphone come down, please? I'm told that there are people with microphones running around. They need to run faster.

Hello. Thank you very much. Over here, please. Sorry, all the way over here. Here. Here, please. Okay. I don't know, sorry, I've picked somebody; I hope it's the right person. There we go. Thank you very much. Please, if you could make it a short comment, that would be great.

Okay, thank you.
You mentioned employment, and I think in the current situation society is built around earning revenue as a return for the labor we do. Many years ago there was a really idealistic idea that automated systems would reduce our work, do all the laborious tasks, and let us enjoy our lives. But I think that implicitly stands on the assumption that people still have property or earnings to live on, regardless of this reduced labor.

In reality, do you think we, and especially those who are in power, are able to accept the idea that people receive an income for their daily lives even though they rely on automated machines?

Thank you very much indeed for that very interesting question. I suppose this raises the question of things like universal basic income: the utopian idea that our drudgery should be done by machines and we should be free to enjoy our lives. Would anybody like to tackle that question?
It's a big one. Arisa, do you want to make a quick comment?

When we talk about employment and what the technology can do, one answer is, as you mentioned, universal basic income. On the other hand, when we think about the Japanese situation in particular, we are actually facing a shortage of labor, so to some extent we welcome having some of our tasks and our work taken over by artificial intelligence and robotics.

I think the point is that it's not that our jobs and our work will be totally replaced; the human role is to consider what can be done by human beings and what can be done by machines, and I think that kind of management can still only be done by human beings. Thank you.
Yes, please. Just a little comment to add: between what can be done by human beings and what can be done by machines, there is, in between, what can be done by nature.

Very true, very true. And thank you for raising the point of how the conversation has become one of fear of machines rather than the benefit that was once idealized. Yes, please.
Thank you very much for your interesting talk. I was wondering if you could comment on the dangers of the randomness of AI, and on the fact that its so-called cognitive processes are essentially an enormous black box. Do you feel that this is a true danger?
Thank you very much indeed. We will address the future dangers a little further on in the discussion, but right now one good example of the black box is AlphaFold 2: the protein structure prediction problem was solved by a process we do not understand. We don't know how AlphaFold 2 does this. Do you have any quick comment on the fact that, in the end, what we thought would be solved by understanding was not solved by understanding? You'd prefer not to make a comment? Okay, fine. Would you like to tackle that issue?

Yes.
Of course there are many risks, and we have to think about them. In Japan we have the AI Strategy Council, with Arisa, and recently we had a group called the AI Institutional Study Group, which discusses regulation and legislation for AI. We have discussed AI risks at length, and what can be done to mitigate them. That trend is ongoing globally, and we have to have international collaboration in that respect. In some respects there are misconceptions about the recent technology, but in others we may have to take more cautious steps toward the advancement of AI.

Sorry, do you want to go?

No, no, I'd much rather listen to you.

So now I can say something.
I was waiting. In my opinion, one of the next tasks for AI is to predict which proteins can do what. If we need to do something that is still not done naturally, can we design a protein, according to what we understand, to do that particular assignment?

And the question relates to this idea that, as humans, if we managed to do that, we would understand the processes we used to get there. In this case, we are putting information into what is basically a large language model and getting out an answer, and we don't know how it arrived at that answer. The answer is right, perhaps, but we don't know how it got there.
Does that matter?

It matters to the way we think, the reasoned way we think. But nature came to some incredible designs without studying together with us. So we have to give nature a lot of respect.

True enough.

Maybe I have one comment.
I think this black-box issue is very important, and so are the related issues of transparency and accountability. One thing is that, technically, we need to find ways to turn the black box into something more like XAI, explainable AI. That is one issue. On the other hand, when we consider social risks, the problem is not only that the technology is a black box, but that we don't know who takes responsibility for the risks. Consider when something goes very wrong, or some incident happens: the machine can tell us, "we made this wrong decision because this was the reason", but that is not the explanation we want. We want to know who takes responsibility, and how to mitigate such risks, how to prevent them from happening again. So from a societal point of view, we need to think about the accountability issue as well.

There are lots of ways to tackle this black-box issue, and my opinion, as a social scientist, and this was also the topic of the morning session, is that we need a collaborative approach between engineers, social scientists and politicians to tackle this new challenge, which also brings opportunities.

Thank you very much indeed. Thank you for that point.
We have so little time to cover so much, but anyway, we're touching on things, and we have to move forward a bit. Let's use Geoffrey Hinton to begin that conversation. So, in this podcast recording, if we could put the Geoff Hinton picture up on the screen now.
Thank you. If we could play clip two first of all; this is Geoff talking about whether AI will become conscious. Clip two, please.
That's becoming very central, what it is to be human, because the debate about whether these things will want to take over is all about whether they have desires and intentions. Many people think, for example, that there's something that will protect us, which is that they're not conscious and we're conscious: we've got something special that they ain't got, and that they will never have. And I think that's just gibberish. I'm a materialist. Consciousness emerges in sufficiently complicated systems, systems complicated enough to be able to model themselves. And there's no reason why these things won't be conscious.
Okay. And in fact, let's run straight into clip three as well, and then I'm going to come to you. So if we play clip three next, please. And this is about control.

I'm just worried by the fact that there are very few cases of more intelligent things being controlled by less intelligent things. Once they're a lot smarter than us, I don't think they'll put up with that.
Well, that's what worries me, at least. Now, there's one line of argument that's more promising, which is that a lot of the nasty characteristics people have come from evolution. We evolved in small warring bands of chimpanzees, or of our common ancestor with chimpanzees, and that led to this intense loyalty to your own group and intense competition with other groups, a willingness to kill members of other groups. That sort of shows up in our politics quite a lot right now. These things didn't evolve, and so maybe we can avoid a lot of that nastiness in things that didn't evolve.
It's a nice thought that we could learn how to behave from them.

Yes. In fact, AI mediators are now quite good at getting people with opposing views to come to see each other's view. So there's a lot of good that can be done with AI, and if we can keep it safe, it's going to be a wonderful thing.

AI mediators; maybe we should have AI moderators, I don't know.
Anyway, it was interesting that he took a different view from Rich Roberts earlier, who was pointing out that humans kind of know how to behave towards each other, whereas here Hinton is saying that humans don't know how to behave towards each other. But Yutaka, let me come to you. Will AI become conscious is one question, and then, if it becomes superintelligent, will we be able to control it?
Yes, thank you for asking. With regard to the consciousness issue, my answer is yes, we can build AI with consciousness, because, as Professor Hinton said, I believe consciousness is a mechanism, and that mechanism can be clarified and revealed in an artificial system. So AI could be conscious, and such systems can model themselves. That's my answer.
On the other question, I believe a less intelligent agent can control a more intelligent agent; that is happening all the time in human society. It depends on how reward is distributed, or how value is created. For example, if some agent is created to maximize some objective function, there could be another agent that uses that agent for its own purposes. If the objective function is a very narrow one, the other agent can utilize it for its own purpose. So it's a relation of objective functions, of purposes, and we can design that. In human society we sometimes have a triangular control system, so that no single agent has too much power over the others.
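The point about narrow objectives can be sketched in a toy way. This is purely an editorial illustration, not something presented at the session; the agent, actions and objective below are all invented for the example. The idea is that a more capable agent that simply maximizes whatever objective it is handed ends up serving whoever defines that objective.

```python
# Toy sketch (illustrative only; all names here are invented):
# a capable agent that perfectly maximizes whatever objective it is given,
# and a "less intelligent" controller that steers it purely by choosing
# a narrow objective function.

def capable_agent(objective, actions):
    """The 'smarter' party: picks the action that maximizes the objective."""
    return max(actions, key=objective)

actions = ["clean_room", "cook_dinner", "do_nothing"]

# The controller does no optimization itself; it only defines the reward.
def controller_objective(action):
    return 1.0 if action == "cook_dinner" else 0.0

chosen = capable_agent(controller_objective, actions)
print(chosen)  # the capable agent ends up serving the controller's purpose
```

The sketch only shows that control here lives entirely in the choice of objective: a narrow objective lets the less capable party direct the more capable one, which is also why objective design is a safety question.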
So that kind of device could also be possible.

And do you think enough work is currently going into how to implement all that, basically under the title of safety research?
No, I don't think it's enough. We have to be more careful about the risks of AI, as Professor Hinton always says. This is maybe not a short-term but a middle- to long-term issue, but we have to make enough of an effort to realize that kind of safety. Maybe we need more international research in this respect, and some governmental cooperation to monitor the ongoing development happening in companies globally.
Arisa, would you like to comment on this?

I agree with what Yutaka-san said: we are doing good collaborative work on the safety and security issues. But when I talk with colleagues from overseas, I think we need to talk more conceptually about what safety means, or what consciousness means. In Japanese we call safety "anzen", and some people may think safety applies only to physical, financial or national safety and security. Even though we use the same word, Japanese safety might be slightly different from European safety, because it depends on the circumstances and the situation. In our island society we are well educated about it, but in places with totally different circumstances, what safety means, and the safety criteria people actually want, are totally different. So even if we want to use artificial intelligence in our daily lives, or in education or politics, we first need to discuss safety philosophically, at the conceptual level, and decide what kind of safety we actually want. This is not a technical or technological issue; it is more about ourselves, about what kind of society and what kind of future we want to live in.

So that's the starting point of the question. Do you want to comment?

I want to add something to that.
The question was whether AI can become conscious. Nature is conscious. We can be upset about this point or that point, or try to change it, but nature is consistent without any human being making it so. It is by nature. So all we have to do is follow nature in our AI.

What do you mean by "nature is conscious"?

We are alive, and we can take a flight from Tokyo to London. Nature is conscious; otherwise, the plane would fall into the ocean.
If Geoff were here, he'd say that all of that is very well, but actually we're in an environment where governments are deregulating AI. They're loosening the fetters; they're saying that imposing any restriction will stifle innovation. That, he would say, means we're going in the wrong direction. He'd also say that government advisers are often people with vested interests in seeing AI companies succeed. So how do you change that environment? Do you need to change that environment?
By changing the politicians?

That's another big conversation; we can talk about that at the end of the meeting if you like. Yes, changing the politicians is one way. But would anyone like to comment, and then I'll open it to the floor for a very quick last comment?

Well, I might not directly answer your question, but there's an old saying that I really like: the road to hell is paved with good intentions.
Even though people are not intending to make wrong use of AI, and are thinking of making society good, or better, there may be too little collaboration, or people may think a safe society is one powered by a single giant actor, one country, or one politician. What we need to do is share viewpoints on what kind of society we would like and how we can govern AI systems. The world today is becoming very fragmented, and people have entered an AI race. But precisely because we are entering an AI race, this is the point at which we need to think about collaboration, about how we can make AI more controllable, and about placing it within a governance framework.

So I think what we are currently doing at the governmental level, and at the academic level, is very important. But I think we also need public debate, because it is not only the scientists and the politicians who are making these rules or using this technology; I see lots of people actually using ChatGPT today. So we are all in charge of this kind of governance and control.
Thank you. Thank you. Very quickly.
If Professor Hinton had attended this meeting, maybe he would have taken the risk side, and I would have wanted to take the very positive side, but I have to do both this time. On the one hand, AI advances society a lot. We are using ChatGPT a lot, and maybe science will advance even further with the help of AI; that's true. On the other hand, maybe we have to take more cautious steps. At the last dinner, many people mentioned the Asilomar conference, which was a very good initiative, and in the biology field there has been no accident so far, which is fantastic. Maybe the same thing should happen in the AI field, and we have to make that happen as researchers.

Yes. The Asilomar conference was self-organized by the people starting the biotechnology revolution. They got together and decided on safety principles then and there, and that was 50 years ago. Now the same kind of way forward is needed. We have just a minute or so, but is there somebody who wants to make a comment or ask a question, please?
I'd love to hear from somebody. There's somebody right in the middle. Can we get a microphone to this person?
Sorry if I didn't see anyone else. Just here. Thank you very much.
So I think that pretty much the last word of this session goes to you.

Thank you for the microphone, and thank you for the nice discussion. I have a positive opinion about AI. If AI is cleverer, smarter than humans, and if a government or a company is organized or managed by AI, society could become better. Of course there is a risk, but I think the evolution of AI is positive for humans. So what do you think about organization or management by AI?

Ada, would you like to have your institute managed by AI? Because it might be cleverer than a human: maybe better at management, better at handling interpersonal relations, a good mediator, as Geoff said. What do you think, very quickly?

I'm not sure that I would like to have it formally managed by AI, but in real life, one way or another, it is. In the end, AI is one of the more important factors in running my institute. In practical terms, yes, it does manage you.

But actually, I don't think anyone should manage you, Ada.

Myself, of course; but the institute is larger than just me.

In general, though, would either of you, or both of you, like to comment in about ten seconds on the idea of AI managing companies, or countries?

AI can manage an organization very well if it is given the proper objective. So setting the objective, the purpose, is the human role.

My quick answer is that maybe they can, but if we want to do that properly, we need to change the system, because our current systems are designed so that an organization, a government or a company is controlled and governed by human beings, and what humans can do is totally different from what the technology does. So if we want to shift in that direction, we first need to change the system.
Thank you very much indeed. So we all need to learn from each other, basically. Wow.
Okay. Thank you very much indeed. We got through a lot in a short time with your help.
Thank you all very much indeed.