So, good afternoon, everyone. My name is Ine van Zeeland, and I have the pleasure of presenting a panel to you today on behalf of the Brussels organization of CPDP, or rather the organization behind it, which is called Privacy Hub. This panel will be conducted in English, and that is my fault, because my Brazilian Portuguese is not very good at all. Afterwards, however, you are free to ask your questions in your own language, as long as it is Portuguese, Spanish, or English.
I'm going to give each of these speakers about eight to ten minutes to talk about their topic. I'm very pleased with the panel we have here today, with very high-level expertise on the topic we will be discussing, which is the impact of the EU Artificial Intelligence Act on AI policy and governance on this side of the Atlantic. Specifically, we will be talking about the role of supervisory authorities, and whether it will be necessary to have specific AI authorities or whether it is also possible to rely on authorities that already exist. I'm going to start by giving the floor to Filipe Medon. You have a master's degree and a doctorate in civil law, and you work here as a professor of civil law at FGV. You're a lawyer and a researcher, and also a poet, I heard; the author of the book Artificial Intelligence and Civil Liability: Autonomy, Risks and Solidarity; and you were part of the team of legal scholars who worked on the AI bill for Brazil. So I'm going to give you the floor, and please
welcome him.

Thank you so much. Before anything else, I would like to be mindful of my time, and of your time as well. First of all, thank you to Privacy Salon for this amazing invitation to be here. It's a huge honor to take part in such a panel with all these experts around me, and I can't miss the opportunity to remember our dear friend Danilo Doneda once again. He has been remembered ever since we started this morning, but it's never too late to remember him again, and we will always keep doing that. Well, I'll try to make this as quick as possible: a timeline of AI regulation in Brazil. We cannot start with regulation, at least not with hard law; we have to go back and start with the Brazilian strategy on AI, the so-called EBIA. It was first published in 2021, but it was said to be incapable of dealing with the concrete problems of AI: it did not provide enough tools for us to deal with the concrete problems posed by
AI. Therefore, this strategy is now under review, so we will probably have an updated version in a couple of months; that, at least, is what is expected. At the same time, Congress started to develop different bills on AI, and that is something interesting, because the bills did not specifically take into consideration the work that had previously been done under the Brazilian strategy on AI. The first bill was introduced in the Chamber of Deputies; it was called Bill 21/2020, in its second version, the one approved by the Chamber of Deputies. Something that is rather shocking is that it was approved under an urgency regime. What was the urgency to approve an AI bill back then, in 2020, when it was first introduced in Congress, and when it was approved in 2021? Some might claim that it takes a principle-based approach to AI, but I wouldn't agree with that. I don't think it is a principle-based approach; it is a no-regulation-at-all approach. It is based on general principles, but it brings
no clear definitions, responsibilities, or sanctions, and it does not specifically address the enforcement of AI rules in Brazil. This bill then arrived at the Brazilian Federal Senate, and the Senate realized what was clear to everybody else: the bill was incapable of coping with all the challenges posed by AI. So the Brazilian Congress decided to create a committee of jurists, of experts, which I had the honor to be part of, and we worked throughout the year 2022 to produce a new bill, the so-called Bill 2338, which was introduced only last year, in May. What was the core of that bill? First of all, a human-centric approach. Professor Doneda was one of the members of that Senate commission, and I vividly remember him sitting next to me at the first hearing in Congress when he said: this is a human-centric approach. The second thing: it provides, just like the AI Act, a risk-based approach, but it goes beyond, because it not only brings a
risk-based approach but also a rights-based approach, which is something we call here in Brazil a jabuticaba: a small grape-like fruit that we only have in Brazil. It is something that stands out from the other bills being discussed abroad. Of course, that did not please everybody, especially the private sector, on the grounds that the bill provided, in their view, a lot of extreme measures that could stifle innovation. What was argued back then was that the bill provided so many governance measures that it could stifle innovation. So instead of voting on the bill, the Senate appointed a new committee, this time composed of 13 senators. These senators were appointed last August, and they were supposed to have concluded the discussion last year, but they keep postponing, postponing, postponing. Every week we say, oh, there's going to be a vote now, but the vote never comes. Last week, a couple of us who are here in the audience as well, we were
in Brasília, and a vote was really expected, but once again it was postponed, and so far there is no clear sign of whether or when it will be voted on. The main thing about this new committee is what it has proposed: Senator Eduardo Gomes, the rapporteur, has introduced new, updated versions of the bill. These updated versions have carried over the risk-based and rights-based approach, but they have changed something very important, which is the structure of enforcement, and that is what I'd like to spend my last minutes on. The first version, as I already mentioned, Bill 21/2020 as approved by the Chamber of Deputies, provided no enforcement at all. The first version of Bill 2338, the outcome of the commission of experts, provided a model of enforcement focused on a single body: the so-called competent authority. This competent authority, which would be designated by the executive branch, would
have the power to enforce AI rules in Brazil; it would be a single body. What has changed in the latest versions of the bill? They have now introduced a system of regulation: instead of a single body, there is now a system of AI regulation and governance. What seems clear amid all this is that they have at least agreed on something, which is a multi-sectoral approach to the regulation and enforcement of AI. So who is expected to be part of this ecosystem, this AI system called the SIA? Sectoral bodies; representatives of the private sector, for instance, through a committee and a council that will be created, with many members; and, arguably the most controversial element, the Brazilian data protection authority, the ANPD. The first thing about this solution is that there are concerns about whether it is constitutional for the Senate to designate this authority. That is something that already came up when we were discussing our data protection bill, which is
the question of whether the Senate is competent, in terms of constitutional law, to create a body, that is, to make the government spend money on creating something. That is still under discussion. Another point is that the ANPD would be the coordination body, in charge of fostering dialogue among all those who will take part in this ecosystem. But amid all these concerns, some people are raising questions such as: will the ANPD be capable of dealing with artificial intelligence, given its current structure? It is a small structure these days. On the other hand, there is another very strong argument: if we build from scratch, it will be even harder. So should we take advantage of existing authorities, existing bodies, to enforce AI rules, or should we create something from scratch? That is pretty much what is at stake right now. But to close, in my last minute and a half, and comparing with the AI Act: there is something I recently read, written by Professor Nathalie Smuha, who was even
with us here a couple of weeks ago. She said something about the AI Act that I think we could apply to Brazil as well: we must be careful that the AI Act is not turned into motherhood and apple pie. What she means by that is something that everyone agrees on, as they say in the United States, but is it really effective in the real world? Is this ecosystem, all these people trying to regulate AI at the same time, capable of really enforcing it, of achieving what Roscoe Pound once called law in action as opposed to law in the books? Are we capable of taking law from the books and putting it into action? That is pretty much what we are discussing right now. But while we still don't have an AI office, there is something we can't deny: other agencies, such as the ANPD itself, can act in this gap; they can help bridge this gap. For now, since there is no such office, there are certain areas where there are common grounds for ANPD action. Consumer law can
be applied, for instance. So it's not that we have no law; we have no specific law, but that doesn't mean we have no law. What will happen in Congress is something we can't predict, but hopefully we won't go back. In terms of not going back, what I'm trying to say is: we can't go back to the model of Bill 21/2020; we have to work on the model of Bill 2338. Thank you.

Thank you very much for this overview, and thank you also for raising some of the questions about the ANPD, because we have somebody here from the ANPD, Lucas Borges. Would you, from your perspective and also from your background, explain your view on the questions that were raised about the supervisory authority?

Okay. Good afternoon. Thanks for the invitation; it's a pleasure to be here for the second time today. The second and last time: I'll be at CPDP tomorrow, but not on any panel. I think this is very important for us at the ANPD;
that's why I'm here for the second time: because this is a topic we are following closely, and obviously, as Filipe just said, we are, in a way, the appointed authority in the bill at this moment. So this is a very important topic for us, and that's why I'm here. I have three points. The question given to me was about the role of data protection authorities, DPAs, in AI governance. The first thing I want to mention is something Filipe just said: existing rules apply to AI systems. This may sound obvious, but I think it is something we need to discuss a little, and I'd like to start here by quoting Lina Khan, the Federal Trade Commission chair. She said that although these AI tools are novel, they are not exempt from existing rules, and the FTC will vigorously enforce the existing laws. I know that on this panel we are discussing the EU AI Act, and I'm quoting
a US regulator, but I did this on purpose, just to emphasize that even in a country with a pro-innovation approach, which is the case of the USA, it doesn't mean that no rules apply, or that they are not worried about the impacts of AI, or that they have no concerns about the effects of AI on society. On the other hand, this doesn't mean that we don't need laws to deal with AI, or that there are no issues left improperly addressed by current legislation; that's not my point here. What I want to highlight is that we can and should discuss current legislation and how it applies to AI systems, especially in the situation of Brazil, where there is no specific law dealing with AI. So this is the first thing I want to emphasize: existing laws, and especially data protection laws, apply to AI systems. And what is even more important, data protection law will continue to regulate AI systems even
if, in the case of Brazil, we don't get a new law. The development and implementation of AI systems will have to comply with data protection law regardless of the existence of new, specific AI regulations. Another point is that data protection law can help us achieve our regulatory goals, because there are many tools in data protection law that we can use to establish rules and to deal with the risks and harms that may arise from AI. What are these tools? Data protection principles such as transparency, purpose limitation, and data minimization; data protection impact assessments; legal bases for data processing; and data subject rights. These are rules that already exist, and if correctly applied to AI systems, they may help us regulate them, establish the values these systems should follow, and reduce the impacts they cause in our societies. So I think data protection law can help us understand how AI works and its impacts on people. It is also a very important tool to provide legal certainty to organizations, to find a balance between regulation and innovation, and to guarantee fundamental rights. These tools and rules already exist, and we need to discuss how they apply and what the best interpretations are when applying them to AI systems. Another point I want to bring up is that there is a strong connection between AI and data protection law. That's because AI systems need a massive amount of data, and
especially personal data, and AI systems that use or are trained on personal data are usually classified as high risk. If you look at the EU AI Act or Bill 2338 in Brazil, you'll see that many of the uses classified as high risk involve personal data: that's the case, for example, of facial recognition, the use of AI systems in migration matters, in job selection, or in schools and education. In this context, I think some of the most controversial issues regarding AI systems will be answered by data protection authorities. If you consider that AI systems use personal data and that we have many tools in data protection law that can help answer these questions, I think that will necessarily happen. The last point I want to bring up is that this is already happening; and my time is almost finished, so let me wrap up. DPAs have already assumed a central role in AI regulation, so we're not talking here
about the future: this is already happening. That's the case of facial recognition, for example the Clearview AI cases in Europe. Also regarding generative AI: the Italian DPA, for example, issued a preliminary order against ChatGPT, and in Brazil the ANPD has just issued a preliminary order, an injunction, against Meta. Other regulators, like the CNIL in France or the ICO in the UK, are publishing public consultations on many topics involving data protection law and AI systems. So, to finish, I would like to quote Gabriela Zanfir-Fortuna of the Future of Privacy Forum. She said that DPAs are already AI regulators, whether they like it or not; it really depends on how active they are in this sense, but they have all the tools they need. That's the message I want to bring here. Regardless of the existence of a new law in Brazil, in the case of the ANPD we already have this role, and we are exercising it. I think we need to advance and
foster this discussion, and to apply the legal tools that exist, that are present in our data protection law, to answer at least some of the main questions that AI systems pose to us. Thank you.

Alberto, I saw you nod several times. Brazil is of course not the only Latin American country looking at AI governance. You are an associate professor at the University of Chile Law School, you have a PhD from Georgetown University, and you are currently also a member of the committee of experts advising the Chilean government on AI governance. What is your take on these questions?

Perfect. Hi, everybody. I'm going to skip the pleasantries and just say thanks for the invitation to take part in this panel; I'll skip the rest because I know we are really tight on time here. So I'm going straight into the matter I was asked to speak about. The primary purpose of my presentation is to tell you a little bit about how Chile is dealing with this, and to describe the
institutional arrangement we are working toward for artificial intelligence, and to offer some reflections on that arrangement. I should start by saying that Chile, like most Latin American countries, provided some protection to personal information well before adopting a data protection law: like most countries in the Americas, we protected personal information through constitutional provisions. In the early jurisprudence in Chile in particular, you can find tons of cases from the 1980s in which courts used the right to privacy to provide protection against undue processing of personal information, and that was primarily the situation until 2018, when we introduced a constitutional amendment establishing an autonomous right to control personal information. The problem with these general and abstract constitutional provisions is that they are really difficult to apply, because they are extremely ambiguous and abstract, and that made very evident the need for a layer of more granular, concrete normative rules. That explains why, in 1999, Chile adopted its first data protection law. At that time, it
was regarded as a very successful experience: it was the first Latin American law providing comprehensive protection of data privacy. However, the law had a lot of shortcomings, the main one being that it was riddled with numerous permissive exceptions to the principle of consent; we didn't have any provision regarding cross-border data flows; and, more important for the purposes of this conversation, it didn't have any provision regarding an authority on the matter. Over the last six to seven years, the National Congress in Chile has been discussing a new data protection law that would deal with all those shortcomings and others. The new data protection law is meant to incorporate into the Chilean framework the recommendations of the OECD, and there we are talking about rules from the 1980s, but also principles, and many of its provisions are closely aligned with the European Union's GDPR. These are things you will find in this bill, which, fingers crossed, should be approved soon and enter into force two years after its publication.
This law introduces a new set of data protection principles, incorporates a new set of rights for data subjects as well as new obligations for data controllers, and improves enforcement, with a regime of infringements and sanctions. It also improves the institutional arrangement, particularly because it deals with international flows of data, but also because it creates a competent, independent authority in charge of supervising and enforcing data protection. This is important because, as you will see, in the design being put together for the artificial intelligence bill, this authority will play a significant role. The data protection authority, or agency, in Chile will be an independent authority dealing basically with supervising and enforcing the law. It is a collegiate body formed by three members, chosen by agreement between the President and the Senate, which ensures some political legitimacy but also independence. The agency will have authority, and this is important, to interpret and oversee compliance with the law and to issue regulations,
but also to determine infractions and impose sanctions, as well as to handle data subject complaints, provide assistance, advise the legislature on legal modifications, and engage in international cooperation. It is a very comprehensive set of powers that this authority will have. Two minutes, so I will run through the rest. In May 2024, the government introduced a bill on artificial intelligence. This is not an isolated effort: it can be traced back to 2019, when the Chilean government, under a different administration, put forward for adoption a national policy on the matter. That policy was adopted in 2021 and has since been renewed, and a specific regulation has been adopted for the use of artificial intelligence in the public administration. But we need a more comprehensive approach, and that comprehensive approach is the one concretized in the bill introduced in early May. The bill introduced in Chile follows the European Union standards very closely. It reflects the principles adopted by the European Union's AI Act: supervision, transparency, explainability, and the like. It also adopts the approach of
identifying levels of risk and matching each level of risk with mitigating and preventive measures in order to avoid the risks; that allows distinguishing between certain systems that are forbidden and certain systems that have no significant impact or risk. The bill also establishes a regime of infractions, penalties, and monetary fines in case of infringement. For purposes of enforcement, for purposes of institutional arrangement, it creates two bodies: a technical advisory board, and it builds on top of the data protection authority. The technical advisory committee is a multi-stakeholder entity with representatives from academia, civil society, government, and the business sector, which advises the Ministry of Science and Technology on the specific regulation needed to go further than the text of the law, in order to develop, promote, implement, and improve artificial intelligence systems within the country. It is the Ministry of Science and Technology basically because this has been the ministry leading these initiatives over the course of the last years. But the actual role of enforcing and supervising the
artificial intelligence act falls to the data protection authority in Chile. This is great in one sense, because a number of the challenges associated with artificial intelligence are associated with data protection; it makes sense in that way. On the other hand, however, it may overstate the relevance of personal data in the use of artificial intelligence systems. Let me give you a couple of examples that make the argument. In Chile we are using a lot of artificial intelligence, and not only in the applications you can download on your phones: we are using artificial intelligence to manage water resources, for traffic control, and for predicting industrial waste, uses that involve limited to no personal information. This raises a number of questions. Will the data protection authority be prepared to deal with enforcing the law with respect to the processing of non-personal data? That's the first question. Then, if you look at the powers this authority is being given, the artificial intelligence act, and I'm summarizing my notes here,
provides primarily reactive measures: to supervise, to handle complaints, to determine infractions, and to impose penalties. That's it. And there is a question to be answered about what powers, capacities, and resources the personal data protection authority will allocate to play non-reactive, more proactive roles with respect to artificial intelligence, which is critical. This cannot be underestimated, because a number of government bodies and public services, but also private actors, are looking for advice on how to deal with this moving forward, and the authority may not be the one prepared to provide it. That's a challenge. Thank you.

You brought up a number of very good points indeed: the use of non-personal data, the prevention of impacts rather than reactive responses, and also the independence of the supervisory authority. I'm moving on to Barbara Lazarotto, who is a PhD researcher at the Law, Science, Technology and Society research group at the Vrije Universiteit Brussel, so based in the EU, and I would love to hear what the perspective is from the EU on the independence and also the role of data protection authorities in AI governance.
Well, thank you very much, Ine, and thank you very much to all the panelists. I am part of the team that organized this panel. We organized it because we first thought this was a very interesting topic, and then we organized a conference in Brussels before CPDP Brussels, together with the VUB, and we wrote a paper that will be published here in the CPDP LatAm proceedings. But we don't have conclusive answers to these questions. You are probably thinking: the AI Act was published last Friday, so you must have the answers. No, we don't; we still have the same questions everyone has posed here, the same challenges. So, a little background on the Act. The AI Act was published last Friday, so it is now officially law in the EU. Like most EU regulations, it is grounded in the regulation of the internal market, but also in the protection of fundamental rights and principles of the EU. This double basis is something that is always present in the regulation of data protection and
also digital laws in the EU. In this case, it was very complicated for the AI Act to establish that what it is trying to regulate is not just AI but also fundamental rights, and the Act, as Professor Alberto said, is based on a risk-based approach, so it is not like the GDPR; it was a very complicated matter. And the AI Act says, in effect: we will have AI institutions that will regulate and observe how the AI Act is applied and used, but we leave it to the member states to designate whether that is a dedicated AI authority or, for example, the data protection authority. So certain countries, such as Spain, were pioneers and created an AI agency, while on the other hand other member states, such as Italy, designated a double competence: the data protection authority, the Garante, will be competent for one part, and the cybersecurity authority will be competent for the other part. These things are already complicated, so we can see that although the AI Act tried to
push for a single authority, we are landing in kind of the same place as Brazil, which is not a bad thing, actually; I don't think it's a bad thing. Yesterday the EDPB, the European Data Protection Board, issued an opinion, I guess it's an opinion, on this, saying: okay, we know there are countries that have already created designated AI authorities, such as Spain, but for the ones that haven't, please hear us out; we think it is very important to protect fundamental rights, so we are trying to make up your minds that it would be welcome to us if you gave data protection authorities the competence to regulate AI as well. So whereas the AI Act tried to push for a separate authority for AI, the EDPB is trying to go the other way, again close to the Brazilian approach. So we still don't have answers; we still have questions to ask. I agree that, for example, data protection and personal data are a very big part of
AI, and I think the EDPB also acknowledges that; but we must also take into consideration that AI is more than that, and that we have a mix of personal and non-personal data, going back to what Alberto said. So we still don't have answers, and although the ship has sailed, the ship isn't sailing very far out to sea yet. The idea for this panel was to see whether we have a Brussels effect, and I don't think we have the Brussels effect this time; I think we have more of a data protection effect. Going back to what Purtova said, data protection is the law of everything, and maybe we are going back to the law of everything, in which everything that touches personal data ends up under the data protection authority. Since I have just two minutes: that is dangerous as well, because we know that data protection authorities don't have the money or expertise to regulate and supervise this kind of technology; they have a shortage of
workers, and they sometimes have problems with independence; in Belgium we had a very big scandal involving the independence of the data protection authority, so it was very complicated. I was part of an Access Now study on data protection authorities in Europe, and one of the main findings of that study was that they are short of staff and short of money. So the EDPB is pushing for this competence for them, although we don't see any kind of change giving them more money and capacity to enforce these rights. I think that if we push supervision onto the DPAs, we will probably also have a problem with the GDPR, because then we are putting everything in one basket, and that is going to be complicated. Thank you.

Yes, thank you very much, Barbara. You also mentioned human rights, and I think this is a good bridge to our last speaker, Pedro Martins, who is an academic coordinator at Data Privacy Brasil and also a specialist in profiling and personal data protection. Please give us your opinion.

Well, first, thank
you for the invitation to be on this panel; it's an honor. I'll also try to keep to the time. I'll try to address what I think is a common issue across all the contributions here: the interplay between data protection and AI governance. And I think there is another reason for that. Trying to take a step back: a couple of years ago, when we were talking about data protection as something new, the GDPR and the LGPD, we were very much emphasizing how data protection is no longer a governance of privacy. We are no longer tied to privacy issues such as the knowledge of intimate information, or of information that a person does not want to disclose. We moved from this framework of privacy to a framework of data protection, where what matters is not so much whether a piece of data reveals something intimate about someone, but rather whether the processing of data is linked to someone. So there is an identifiability criterion:
if it's possible to identify someone or someone identifiable will be affected by this data processing so there this new framework of Regulation we put forward we put risk assessment measures we put transparency uh obligations with accountability um uh framework of publishing impact assessments of uh the registering assessments of such things as leg legit that interest assessment so a bunch of documentation that we we we we thought as a society that was important for data processors to to put forward when they are processing personal data so we created this big arrangements to try to get a
grasp on technology technological change so new technologies were processing personal data in heavy amounts and they were impacting people in the end of this process so we put a a a a big um supervisory model that is led by a data protection authority so it's even though it has a lot of obligations there is a central Authority that guides this regulation which is a data protection authority and when I see now the the talk about AI regulation I feel like this this next step on technological regulation which we we use the same language that we
created for data protection so we we have we created a language as Society of what we consider important to get grasp on technology so that technology doesn't uh Advance uh despite societal issues or despite uh in harm of of fundamental rights or concentrating markets or concentrating power or creating power in balancing or further advancing vulner vulnerability so we have this common minimal framework of what this language of technological regulation is with data protection so we have fundamental rights we have rights of bring brought by data protection regulations we have accountability measures we have uh transparency
measures and we have a data protection author of central figure to to put forward this regulation and now we have ai and there is of course this this this thinge that yes AI uses a very heavy amount of personal data but it doesn't necessarily needs to but this doesn't change the fact that it can it can even without you using personal data affect fundament rights so I think what we are now doing we are now passing through a phase of abandoning identifiability as a criteria for regulation just as the same we abandoned privacy as a
as a as a criteria for uh regulation so of course there are still privacy regulations there are still data protection regulations but we are getting on a moment that we need to govern data in a bigger framework uh and AI is May is like uh something a technology that blows this debate wide open so we we cannot I think with AI the thing is it it blows in our faces that we need to advance the language that we use to talk about data governance and to get a grasp on technology and to appropriate technology so
that it is developed and applied in a way that promotes human rights that promotes um things that we value as a society and this is why I think that the interplay between data protection and AI regulations is not only because AI treats personal data process personal data but because we we have a somewhat fairly Comfort we are com minimally comfortable with the language that we created for AI for data protection regulation and to develop develop new verbs and new uh adjectives of of the on this language for AI regulation so I think so putting this
framework into this step back: how do I see the enforcement systems for AI, and how does this apply to the supervisory framework? In data protection we have a central authority, which is very important because it's a new language, so we need someone to drive the regulation forward. Now with AI, as everyone here said, we have a spread of enforcement. Both in the EU and in Brazil we begin to see that a central authority may still have some positive effects, but because we have expanded the language we are talking about, more people need to be brought into this language. So we have sectoral regulations and actors being put forward in the Brazilian enforcement system more plainly than we saw in data protection. No one says that sectoral regulators don't regulate data protection; of course they do, it's a general law, so the financial system regulator also regulates data protection. But we have a central figure in the data protection authority. Now we are less comfortable with this central figure, and we are putting more actors into this enforcement system. The sectoral regulators are obviously being brought into this system, but with this spread of what we want to regulate, we also begin to see new topics that interplay with data processing. We see labor precarization and decent work as a topic that is becoming ever more connected with data governance as a whole. So in some more recent versions of the bill we have the Ministry of Labor as an important actor in the enforcement system, in order to address the effects of data processing and AI systems on labor and on minimum standards of decent work. We see the environmental impacts of AI and its infrastructure, and then we have the Ministry of Environment as an important actor, to make sure that AI development occurs in consonance with climate justice and that climate impact is considered in the impact assessments of AI. And we see copyright also being heavily discussed. So basically what I'm trying to say is that as we take the foundational language of data protection and expand it to talk about AI, we see new issues and new topics that are urgent for us to address. AI has accelerated this process a lot, and because of that more actors are trying to be put in place in this enforcement ecosystem. I'm not quite sure whether that is a good or a bad thing, and I think we would all agree on that here, but at least this is where it comes from. The EU bill intended enforcement to be a little more centralized, and maybe it is not quite being applied that way, but in Brazil there are certainly already many more actors wanting to participate in this ecosystem. And just to finalize, since I think I have some time: this enforcement debate is incorporated differently in each society, depending on socioeconomic status. I believe that in Brazil, and maybe in the Global South as a whole, this is why we are beginning to see systemic issues such as discrimination and the precarization and exploitation of labor, especially data labeling work, which usually occurs in the Global South, as issues that are related to AI; not something lateral, collateral, or parallel, but something completely connected to AI. And because of that, I think the multisectoral approach, which is very important in Brazil, and civic participation in governance, which I think we have established in technology regulation over the last couple of years, cannot be abandoned in AI regulation. So I think that's it for now, thank you. Yes, thank you very much for this slightly more provocative contribution, pointing out
the discourse and what has been called the proceduralization of the protection of humans. I think I should have mentioned that you're also the author of a book on the free development of personality in the era of algorithmic governmentality. Thank you for this focus on what is important: humans. I think we still have some time to open the floor for questions. You can ask questions in Portuguese, Spanish, or English, whichever you prefer; it's up to you. And there's a question over there. Thank you all, it was very enlightening to hear such different perspectives. My name is Marcelo Malaguti. I work as a special adviser at JI, but right now I'm speaking in a personal capacity. As a researcher I have been researching cybersecurity for more than 30 years, and all I know regarding AI and data protection comes from that particular viewpoint, so forgive me if I say something quite wrong. My perception is that considering AI as connected to data protection is looking at something like the tip of the iceberg: we are looking at the consequences of misuse of data that could harm people. But as Professor Alto said, we have a lot of applications that don't even concern individuals and citizens: we have logistics, we have chemistry, we have nuclear research; we have lots of research that never touches personal data. So when we talk about data protection at large, it would be nice to have a DPA controlling it, but it sounds to me like a huge mistake to use a personal data protection agency to control a tool that touches almost every point of society, including my own field, cybersecurity. I was thinking it was a jabuticaba, as Philip said, a very Brazilian thing, to put DPA and AI together, but as far as I understand it is now something quite usual in many countries, including Chile. But I would like to know whether someone sees a different perspective, a different possibility for Brazil right now, since we don't have my supposed cybersecurity agency, which would also be interested in a small part of AI, and we also have some sectoral regulators that are quite limited in their scope. So what do you think would be possible for Brazil right now? Or should we do as the Americans have decided: not talk about AI regulation for a few years, see what happens worldwide, and then see what we can do after that. Thank you very much for your attention. Thank you for that question. Maybe let's go first to
a representative of a data protection authority. I think you're right when you say that AI isn't all about personal data; you have a point, and that's probably why we need a new law. But I also think this line of thought doesn't resolve the problems, because that's just how things are: there is a strong connection between AI and data protection because, as I said, many high-risk AI systems use personal data or impact people. So we need to face these high-risk AI systems using, to quote my friend here, the language of data protection law. At the same time, it's true that we have different uses, different AI systems, that just don't use personal data, and that's probably why, in my view, we need to regulate and take one more step to deal with these new technologies. Just as Pedro said here, we have a new technology, we have impacts on people, and we have big areas of concern about how these systems will impact society. So we need to regulate this; we need to face the problems and the questions they bring. One of the tools we have is data protection law, but it's not enough; I think we need a new law to deal with these specific problems and these specific impacts. So I see these as complementary points of view: data protection law, and a new law that brings new tools that may help us deal with the risks and prevent harm to people. I completely agree with Pedro on the point that the nature of the authority we put in place is not that relevant. The point is how we are going to make sure that the law will be enforced, and enforced in order to protect fundamental rights or to fulfill other public interest purposes.
That's the issue. Okay, I tend to believe that the personal data protection authority may face a challenge, because it needs to realize that there is a lot of work associated with artificial intelligence systems that do not involve personal data, which it may have to deal with if it assumes the competence. And tomorrow you will have autonomous cars driving here in Rio, and you will have the automobile administration trying to regulate the use of personal data, and probably the consumer protection authority also trying to regulate the rights of consumers. It can be a real mess. At the very least we need certainty that we will have a coordinating authority or mechanism to ensure we don't have to go through that messy process, which would ultimately translate into a lack of enforcement to protect people and an inability of governments to fulfill public needs. Yes, and I think Barbara also mentioned the European Data Protection Board, which said that it basically wants to be a single point of contact; it does not necessarily want to rule them all, but it wants to make it clear
that there is one point of contact. Sorry, I would just like to make a clarification before anything: when I referred to the jabuticaba, I was referring to the rights-based approach as the jabuticaba, not to the authority mechanism, the supervision and the enforcement. I think there are two things we need to think about. I think we all agree that, if it were possible, everybody would like to create a new agency from scratch, but that's the ideal world. We have to think about the real world, with the real problems: how hard it is for the ANPD, and here we have a representative, to really do its work; how long it is taking for it to have the resources it needs to operate in the real world and make a difference. We are facing the exact same problem with AI, so creating a new body from scratch might run into the same problems. Now, if it were the ANPD alone enforcing AI as the single competent authority, maybe that would be a problem, because there would be a lack of expertise there. But since it won't be the only authority taking part in this ecosystem, maybe the other agents engaging in this regulation can help the ANPD carry out this task, especially where it goes beyond the data protection concerns in which it already has expertise. But my final remark would be the following: do we want to apply data protection reasoning to AI? Because, Barbara, you mentioned that maybe we don't have the Brussels effect in Brazil, which I would slightly disagree with, at least in terms of AI. The point is, even though we might not have a Brussels effect on AI, we have a data protection effect on AI: our bill is structured on a data protection basis. But there's something even trickier that comes before that: our data protection law is pretty much based on our consumer code. So, at the end of the day, do we want to make AI regulation consumer-code based? Are we looking for product safety legislation, which was the initial aspiration
of the European Union, to build on a product safety approach? That would be my final remark: to think about what we want as AI legislation. Do we want to take advantage of data protection and also of the consumer code, or do we want to make something different, something new, adapted to AI? As a closing word, I know that I'm over time, but I remember that last week we had a judgment, Case C-757/22, which ruled that a German consumer association could go and try to enforce the GDPR. So we're kind of going back again: even though they have the DPA, the German consumer association can enforce it too. So it's a mess, but I just wanted to drop this bomb and leave. Thank you. Yes, indeed, and then it always comes back to the matter of resources and so on. I still have loads of questions, and I saw there were still some other questions in the room, but I'm sorry, we have to close it off. We are still around, so ask us your questions in a more informal manner in the coming days. Thank you very much for your attention, and let's hope we can continue this conversation. Thank you.