All right, I think we can start. I'll take a few minutes with the introduction so that other people can join us in the meantime. Yeah, please come in. Oh, but you're from the organization, right? That's a shame. Do you want to stay anyway? All right.

Okay, thank you everyone for joining us. My name is Franco Giandana, I am a policy analyst at Access Now, and today we have a very interesting session titled "AI: From Data Collection by Default to Collective Privacy by Design". We're going to be addressing a rather important question around collective privacy, and before we dive into it I want to read a few short paragraphs that I prepared for this occasion as an introduction to the subject we will be addressing.

Privacy and data protection have great importance as both individual and collective pillars that support free, equal and fair societies. While individual data rights emphasize personal autonomy and control over one's information, collective data rights focus on empowering vulnerable communities and providing broader societal benefits, like trust in institutions, upholding democratic principles, and fostering social cohesion. Current data regulation frameworks primarily highlight individual consent and data rights, often overlooking the wider societal implications of data processing activities and the potential for collective action and redress. Our goal in this panel is to reimagine privacy and data protection to ensure a fairer, more equitable digital future that benefits individuals, communities, and society as a whole.

To do that, I am here with a wonderful set of speakers, whom I will introduce very briefly, beginning
with Ellie McDonald, who is joining us from the UK. She is the advocacy lead at Global Partners Digital, where she works primarily on AI and emerging technologies, trust and security, and encryption. It would be great, I don't know if we have someone from technical support; we have Ellie and Juan cast here, and we do have both of you, Juan and Ellie, on the big screen in front of us, but I'm not sure folks can actually see you. There, wonderful.

Also joining us online we have Juan de Brigard from Colombia. He holds a degree in philosophy and a master's in bioethics, and he is an independent consultant on issues of ethics, technology and human rights, focusing specifically on DPI, digital ID, AI regulation, and labor and technology. I'd like to say that Juan is the philosopher in the room.

Joining us here in person we have Paula Guedes, who is a PhD candidate in law and AI at the Catholic University of Portugal in Porto, a researcher at the Núcleo Legalite of the Pontifical Catholic University of Rio de Janeiro, and the focal point of the AI working group of the digital rights coalition here in Brazil. And last but not least, we have Lucas Marcon, who is a lawyer and a master's student in civil procedural law at the University of São Paulo, and he works as a lawyer for the telecommunications and digital rights program of the consumer protection institute Idec.

All right, so to begin with, I think we should address the very concept of collective privacy. So Juan, can you please tell us why AI pushes policymakers and civil society
to consider collective privacy as a concept, and eventually as a potential new right?

Sure thing, thank you. First of all, it's great to be here. I'm sad I can't join you in person in Rio; the yearly trip to the city was becoming a very neat time of the year, but well, maybe next year. I will answer your question, but I won't answer it until the very last part of the presentation, because I want to begin somewhere else in order to get to collective privacy, with what I think is the basis that gets us there. I don't know whether the philosopher in the group will give the clearest presentation, but I hope it at least offers some ideas on questions we may want to answer throughout the panel.

So instead of talking about collective privacy from the start, I want to begin by addressing how we currently try to protect privacy. The main mechanism we have for that is informed consent, or I should say voluntary informed consent, and I think the crisis we're going through indicates that that whole procedure for defending privacy is clearly obsolete. Consent nowadays is not voluntary, it's not consented in most cases, and it's definitely not informed. A couple of examples should illustrate why I think that structure for protecting privacy has reached this point. It's certainly not voluntary, in that in many instances in which we surrender personal
information, we are not doing it for a voluntary reason, because we want to give the information; we are doing it because of what we will gain by giving it. The most obvious example could be public-interest gathering of information, such as digital ID, where you cannot withhold consent because you would be cast out of public benefits. So it's not voluntary in that sense.

It's certainly not informed, and this is particularly important for AI, as we're not able to tell how our own information is going to be used in the future. That's particularly the case with AI systems, because nobody really knows how data is actually being processed and used, due to the black box problem. So even if they inform you that they are going to use your information to, for instance, train a model or create generated images out of it, they are not in a position to tell you specifically how your information is going to be processed. So it's not informed in that sense either.

And it's certainly not consented, in that many uses of your personal information through AI models are posterior to the data gathering that took place in the first place, so you're not consenting to that particular use of your information in the context of AI. So I think the basis we currently have to protect privacy is definitely not doing its work; it's not working properly even for protecting individual privacy, and that leaves us in a landscape that requires us to move forward with a different paradigm, a
different way of thinking about how to protect privacy.

With that basis, I'll now try to address the question regarding group privacy. I think there's an underlying question that we haven't completely answered yet as a society, which is whether we treat privacy as an individual right. The most intuitive answer, of course, is that we do: informed consent is an individual document that you sign as a natural person, and that should grant you some sort of control over your own information. But there are some instances, and I want to stay with those examples for a little bit, in which we think of data protection not as an individual right but as a collective right: not in terms of a collection of individuals, but of a collection of data points. Of course, data points don't have rights, but the individuals who hold those data points do. So in some instances we already treat certain collections of data points as being more sensitive, and as deserving protection in a particular way.

I think that paradigm should help us move towards a group protection, or group privacy, paradigm. I'm thinking of the way we protect biometric data or health data, the sort of paradigm we have for that in the AI Act, which my colleagues will probably address briefly. Biometric data are particularly protected because of the concerns we have about how they can be used, and this sort of paradigm, I think, points us
in the right direction for how we want to protect privacy as a collective right. The issue is not how we group individuals, or whether we grant a particular group of individuals a higher standard of privacy, but rather whether there are data points that shouldn't be able to be used in certain ways. What has changed with AI is not the type or the amount of data being gathered, because big data practices have been going on for a long time; what has changed is how those data sets are processed and how they can be used. So I think privacy should be more concerned with that second part, how we use data, rather than how we collect it, and the burden should definitely not be on the individual.

To achieve that higher standard for protecting the privacy of certain data groups, we should be taking more seriously a couple of principles that appear in most data protection legislations
around the world, but which I think are sort of overruled by AI: purpose limitation and the minimization of personal data. We usually think of those as prerogatives of the individual, something the individual can choose: I limit how an entity, some big tech company or the state, can use my data, and I limit the extent of the data gathered from me. So minimization and purpose limitation are usually thought of as a responsibility of the individual. I think the shift should be there: we should apply those principles to the question of whether big tech can actually gather those data at all, and how they can use them, because today too little emphasis is placed on how data is used, as opposed to whether people surrender it.

I don't want to go into too much detail now, because I know we'll have time to address more questions, but to close this first introduction I'd love to share with you a metaphor that I read in a recent
article, and which I thought was very accurate: a fisherman's metaphor. The idea is that when you're gathering data, you may cast a very wide fishing net aimed at capturing large prey, say whales, but most of us are actually sardines caught in the net. What we have in place are regulations, or protections, that allow each particular sardine to get out of that net. I think we shouldn't be aiming for that. We shouldn't be aiming at stronger informed consent forms or that sort of thing; we should rather be looking at the fisherman who is casting the net, and asking whether we should permit nets to be cast at that sort of prey at all, rather than merely allowing each sardine to escape the net. I hope that metaphor serves well throughout the rest of the conversation, and for this first introduction I'm going to leave it there. I'm sure we'll have time to tackle a lot more questions.

Thank you, Juan, that
was wonderful. I think now everyone understands why I say you are a philosopher, ending your initial remarks with that metaphor. I would like to highlight and quote one thing you said that struck me, which is understanding "collective" not as a collection of individuals but rather as a collection of data points. That is one thing I would like us to reflect on as we move along with this session.

But now I want to listen to Ellie, and ask her how this question of collective privacy has been addressed in the EU, in the UK, or globally, and whether she can share visions or experiences regarding this way of addressing such an important question. So Ellie, can you share with us?

Thank you so much, Franco, and thank you for allowing me to join you in Rio from London. I'm very privileged to be
speaking after Juan, who made such astute remarks. Much like Juan, I'd like to reflect first on the question and on the broader theme of the session, and then I will certainly answer the question. I think this is such an important and valuable discussion, because the nature of data-driven technologies has challenged the premise of individual rights and individual redress. That's because of the very way they work: by profiling and grouping people according to their characteristics and preferences. Often that can align with existing protected characteristics like race or gender, but, to Juan's very valuable point, the groupings can also be more novel and unpredictable, such as arranging us by attributes like being a slow mouse-scroller or a dog walker. So considering how the novelty of these technologies impacts the existing framework is really important, particularly because they also have the potential to increase at scale those violations, those collective harms that impact particular groups so gravely. In that context, the individualized model that we have, where individuals must bring a claim once their data has been extracted or misused, is very difficult and very challenging.

But you asked me particularly about strategies, and particularly to contemplate the landscape outside of Latin America: in Europe, in the UK, and globally. The short answer is that collective privacy is not reflected in the regulation we currently have. But I hope that in my intervention I can show that there are strategies
that we can use to deal with collective harms within existing legislation, as well as avenues through which we could stretch or reconceptualize the legislation we do have to accommodate collective rights and collective harms.

To give the context: in the EU we have, of course, the General Data Protection Regulation and the new EU AI Act. These are really valuable instruments, and their importance shouldn't be understated, because we're very fortunate to have a comprehensive data protection framework in Europe. But as I mentioned, they don't conceptualize privacy or data protection collectively, and they don't facilitate collective redress, and that is a challenge. However, I do think there is a notable aspect of them, a strategy you could say, which warrants some contemplation, and that's the requirement to include an ex ante, or preemptive, impact assessment. In the GDPR that's the requirement to undertake a data protection impact assessment where data processing is likely to be high risk, and the EU AI Act has a similar mechanism, the fundamental rights impact assessment. Looking a bit more particularly at the GDPR, the data protection impact assessment requires assessing the necessity and proportionality of processing, assessing the impacts on rights, including their severity, and assessing what measures should be put in place to address potential negative impacts on rights. This concept of risk is linked directly to the harm or damage to individuals, so it is conceptualized in an individual manner. But I do think there's some interesting work happening among data protection authorities to conceptualize it a bit more broadly. In the UK,
the Information Commissioner's Office, which is the data protection authority, interprets the clause around significant economic or social disadvantage to refer to societal or systemic impacts. That gives us some room, and I think that, used robustly and used well, this mechanism of preemptive impact assessment can be useful, because it allows us to identify, and hopefully to address preemptively, systemic or collective harms before they occur. However, it is limited: it's still individualized and it doesn't provide for collective redress. So what do you do once the harm happens? Of course there are other mechanisms in the law for that, but I do think this preemptive rights impact assessment is an interesting one, and one we can do more with.

It's also reflected, looking at the global level, in the Council of Europe's convention on AI, which is the first AI convention that takes an explicitly human rights approach, and which can be global because it is open to accession globally. The Council will soon be developing its methodology for rights impact assessment. So while observers like ourselves at GPD to the Council of Europe's convention on AI do have some concerns about the convention's shortcomings, we think this impact assessment mechanism could be a really interesting place to push.

I also wanted to reflect briefly on the impact of AI-related legislation. I mentioned the AI Act, but I think it has also given scope for some really interesting work by data protection authorities to undertake algorithmic impact assessments. An example would be the Netherlands, where after
a number of really grievous AI-related discrimination cases, the data protection authority is now introducing algorithmic impact assessments. So I do think that's an interesting and pragmatic mechanism through which to approach this. I also wanted to note briefly the work of other organizations in Europe. There was a really interesting conference this year, the European Data Protection Summit, where this same question was considered, and there was some discussion about how to reconceptualize the European collective redress directive, a consumer-rights-related mechanism that permits cases to be brought by a group of claimants. There was some thinking at that Summit about how it could be stretched or reconceptualized, taking it from its consumer rights focus to more of a collective group focus. However, connecting to Juan's presentation, I will make the important caveat that that is very much a collective of individuals bringing a claim, as opposed to a truly collective approach. So while there are strategies within the existing laws we have, I think this discussion, and the kind of reimagining work that we're doing, that you're doing, is
so important. I'll leave it there, and I'm looking forward to coming back for more questions.

Thank you, Ellie, that was wonderful. I think you're absolutely right when you point out that attributes such as race, gender, and other personal attributes can be put at risk, specifically when processed by AI systems, and that this should be taken into account when designing policy remedies for such an important societal challenge. I would also like to pick up on your mention of DPAs in Europe doing a good job with algorithmic assessments as a way of reducing risks. I was thinking about the example you mentioned of the Netherlands, and one report they published recently stating that almost any data scraping activity could be against the GDPR. That was a great starting point for conversation, and it happened maybe a month ago, so it's still very fresh. I would encourage folks here in the room to go and check it out. Unfortunately it's only in Dutch or English, but English is going to be easier than reading it in Dutch, so if you Google it you will be able to find it.

I would like now to come all the way back
to Brazil and ask Paula: what is the data protection landscape here in Brazil, and can you tell us a little bit about what's going on in LATAM? I also know she has been working a lot around the AI bill here in Brazil, so it would be great if she can tell us what has been going on and what we can expect from the Brazilian Congress in that sense. Paula, thank you.

Good morning. Just to begin, I want to apologize
for my English at this time of the morning, so be patient with me. But yes, talking about personal data regulation here in Latin America: I think we are now in a scenario where Brazil is a kind of flagship example of personal data regulation, with the LGPD, while the other countries in the region are in the process of trying to create new legislation or update the laws they have. In the meantime they are also trying to create legislation for AI, maybe putting the cart before the horse, I don't know how to say it in English, trying to anticipate things they will have to do in the future without the basis to do it. Because I think that to start thinking about AI regulation, you have to have a proper field of data protection in the country. I totally agree with Juan that the consent situation we have is not adequate for today's scenario, with huge technology companies just scraping our data online.

In Brazil,
thinking about collective rights and actions, I think we have had a scenario of protecting rights collectively for years, especially because of our consumer code and all the authorities that bring claims in a collective way. We have a microsystem of collective redress, I don't know if that is the term in English, but we have a good scenario of this collective protection. In the LGPD we have a mention, in Article 22, of the possibility for data subjects to bring claims individually or collectively. But I also think that today we may have to rethink the concept of personal data, because the way we have it today, it really has to relate to an identified or identifiable person, I don't know how to say it in English, but it has to be related to a person, not to groups. And today, with algorithms making decisions and so on, maybe it's not so important to know who you are individually, but rather in which group you are positioned. So we
have to think about this, maybe about new concepts of personal data. And in this case I think it's good to think about AI regulation as a way to address things that are not regulated or that we have questions about.

So, thinking about Brazilian AI regulation: nowadays we have Bill 2338, currently in the Brazilian Senate, which is the best project we have here in Brazil. We had one in the past that was not so good; it was really principles-based, because in the end they were just trying to write soft law into hard law, if I can put it that way. The bill we have right now is really inspired by the AI Act, although, as a person from the Global South, I don't like to say that we are just importing things from Europe. We have an inspiration, but we have our particularities that the bill is trying to address. I think it could do more, but for now it's good, because it looks at our scenario of structural discrimination, it has a concept of indirect discrimination, and we have a lot of articles that try to protect more vulnerable people. So I think that aspect is really interesting. But we also have criticisms: we have a bill that protects fundamental rights, that takes a rights-based and not just a risk-based approach, and yet it now almost authorizes the use of facial recognition in law enforcement, so that is a huge conundrum we have right now.

In general,
the bill as it stands today is really good, because it takes a risk-based and rights-based approach, in which the obligations on AI agents depend on the level of risk. So we have AI systems that will be prohibited because they pose excessive risks, and high-risk systems that are authorized but must comply with a lot of governance measures, something like the algorithmic impact assessments Ellie was describing, along with more attention to data protection and all these preventive measures. So it's a good approach, and it also puts in place measures to foster innovation. A huge criticism from some sectors of society was that the bill would just hamper innovation and economic development in Brazil, but now we have, especially in the bill, measures to foster innovation. We are also trying to create a kind of new enforcement system, with a central authority and other sectoral authorities all working together. So
it's just a bill right now, but it is really good. I know I have just one more minute, because I told Franco I would be done in eight minutes. Right now, with this bill about to be voted on in the Senate, we have to wait for the next episodes of this novela. But I just want to end my first speech by saying that I work for a coalition, the Coalizão Direitos na Rede, and we just released an open letter defending this bill in Brazil. If you want to download it, it's available on our website; we published it together with Access Now, so it's in Portuguese, in English, and also in Spanish, though we still have to review some things in the Spanish version. To conclude: Access Now also published a report about AI regulation and the regulatory scenario here in Latin America, and they just published it yesterday in English, so please check it out. I will be happy
to continue this conversation afterwards. Thank you.

Thank you, Paula, that was great. One thing that I think we have to say as many times as necessary, no matter how many times regulators may hear it: when it comes to Latin America, we need data protection first and AI regulation second. We need this order in terms of what's necessary to deliver comprehensive protection for citizens. We are in a time when AI hype has completely taken over the policy development agenda, and data protection has been somehow forgotten, or left behind, or put on hold, say it as you wish. There are a lot of AI bills being introduced in almost every country in the region; you can check that information in the AI report Access Now published, in English just yesterday, and available in Spanish since January or February, I don't remember. So from civil society we need to flag that we must address data protection first; if not, any AI regulation that comes into place will be somehow incomplete, or will not be able to deliver the very purpose for which it was created in the first place. That's where we're standing in Latin America right now.

Thankfully we have Lucas here in the room, who can tell us how to navigate collective privacy protection
from a consumer rights point of view. That is also very interesting, since there was a recent decision from the Brazilian DPA to suspend the deployment of Meta's new privacy policy to train its generative AI systems using users' public content. And just to make clear, I think the organization for which Lucas works was behind that decision. So Lucas, can you tell us more about this?

Thank you. First of all, I'd like to thank you, Franco, for inviting me and
providing such an interesting panel, and I'd like to thank Paula for her work on the Brazilian AI bill; she gives civil society a voice in the Senate, and if not for her work and that of some other colleagues, civil society would not be represented in this AI bill. So, Brazil has a relatively young DPA. It has been establishing itself since 2020, and I use the word "establishing" deliberately, because it is unfortunately not adequately funded by the government to efficiently regulate and supervise personal data protection in Brazil. Only last year, for example, did our DPA first exercise its regulatory powers to sanction and fine violators of data protection law, and its first fine was against a small company that doesn't even exist anymore: around R$14,000, which is something like US$3,000.

But then we had this interesting news regarding Meta's policy to train its generative AI using users' public content. This new Meta privacy policy came into force on June 26 and essentially allows the company to use publicly available information and content shared by
its consumers to train and improve its generative AI systems. On the same day, June 26, Idec filed a report with our DPA analyzing how this Meta policy violates our data protection and consumer laws, and on July 2nd, about a week later, our DPA issued a preventive measure ordering its immediate suspension in Brazil, and set a daily fine of R$50,000 for non-compliance, after evaluating the risk this privacy policy poses to data subjects. In its analysis, the DPA considered the legal basis used to justify the processing of personal data, the company's legitimate interest, to be inadequate, and held that it cannot be used by the company to process sensitive personal data, such as consumers' biometric data. Another argument was based on purpose and necessity; our law is somewhat based on the GDPR, so those principles are similar, which connects to what Ellie was explaining. The argument was that consumers expect the information they share on Meta's platforms to be seen by their relationships, their friends and family, not used to improve generative AI, especially since the data was provided years before this new privacy policy came into place.

On July 17th, so yesterday, Meta announced it had suspended generative AI training with Brazilian user data. But we have some issues there. First of all, our DPA's measure was issued days after the policy had already come into effect. A few days later Meta responded and asked for a few more days to comply
with the order and on July 10th uh our DPA allow agreed to allow five working five more working days for uh meta to comply so for at least three weeks Brazilian personal data could be used to Trin generative Ai and we don't know anything about meta isn't isn't trans transparent about its use and they haven't confirmed they just announced they uh stopped training yesterday uh um even though our DPA had uh analyzed and said it could provoke damages to data subjects and was deemed as irreversible uh even though the quality of the analysis by our
DPA should be recognized in this case and it was a correct interpretation of the law uh the measure came a bit too late yeah and in this case we fear something quite similar might happen to what we've seen in the past with our DPA uh such as the WhatsApp privacy policy case in 2021 which our DPA emitted a preventive measure appointing appointing sever infringement of our data Protection Law um by WhatsApp because of their excessive collection of data from consumers and it's sharing with other meta companies for advertisement and content Rec Commendation so in that
case, our DPA actually had an administrative investigation together with three other authorities in Brazil, but months later it changed its mind and found the privacy policy adequate, after just a few cosmetic changes from WhatsApp. So this Tuesday, the 15th of July, we at IDEC filed an action against WhatsApp and our DPA, demanding from the DPA transparency rules and regulation of how secrecy is granted in administrative proceedings, as this has been a major problem in their administrative investigations: civil society cannot access their investigations, we cannot know how much work they've done, we just see what gets published, and we never see the responses of the companies or even, sometimes, of the government. We have had data leaks in Brazil, even involving sensitive data such as health data, and we don't know exactly how the government has defended itself. So, in this overview, as I said, we need to stop treating a collective problem of data sharing and processing with AI from an individual perspective; as consumers, as Juan beautifully explained to us, we cannot consent free of design bias and inducement from companies, and DPAs, especially in this case our Brazilian DPA, need to stand up and assert themselves differently from what happened in the WhatsApp case in 2021, as it's not enough to have a law written and an authority set up if it isn't efficient in its investigations. We need our authorities' guidance to develop a recognition of collective privacy. That's it, thank you. Thank you, Lucas, that was wonderful. I'm always amused to see how strategic litigation can pull the trick
sometimes, you know; it's a lot of fun, but it's also a reason to keep working and to engage with organizations such as IDEC, which can navigate the complexities of litigation. We have seen this also in my home country of Argentina, and it always strikes me as something amazing, and that's why we wanted to hear from you. Just for the audience to know, the Brazilian DPA's decision suspending Meta's privacy policy is also in line with what happened in the EU, where noyb (None of Your Business) filed 11 complaints, which ended up with the Irish DPA suspending Meta's policy; Meta later stated that such a decision was a shame, a missed opportunity, and against innovation. So, just for you to have an idea of how a company like Meta responds when DPAs actually do their work, those arguments are very light, in my perspective at least. All right, so we have a few more minutes for questions, if anyone wants to raise their hand. Okay, we do have a question over there, do you mind? Yep. Thank you. Hi everyone, nice panel, thank you to all the speakers, you're awesome. So my question is for IDEC. I know that the legal basis of legitimate interest cannot be used for sensitive data such as biometrics, so I have two questions, because I think we are
in a complicated landscape with this decision. Actually, I don't think it's a problem, I think it's an advance, and I thank you for your work, as always, because IDEC is awesome; I was also part of the organization, so I share the feeling. But I was a bit worried about how to navigate the proper legal basis for using public information to train AI. Is this a general problem, or can we use other legal bases, such as legitimate interest, for other types of data? Or should civil society and the DPA, in your opinion, consider it a problem as well, even if we have transparency measures being adopted by the private sector? Is it a problem at all? Should we always use, or look to use, consent, or can we rely on other legal bases, such as legitimate interest, but on others as well? Because otherwise I think it would be kind of hard to advance technologies like generative AI. I mean, it's hard to obtain consent, and it's not always the best legal basis, as you don't have a proper assessment of the rights that are being damaged or of how consumers can interact with the company. I think we have a kind of ghost around legitimate interest, but sometimes it burdens the companies with assessing so carefully which rights are being damaged, and how they can provide transparency measures and everything else, that it's even better than consent, even in Brazil, where people are not literate enough to consent freely and in an informed way. So I would like to hear your thoughts on that, because I think
it's interesting. That's a really interesting question, and I think the answer there is that we need some baseline established by our DPA on legitimate interest. Our Brazilian DPA has at least published some studies on concepts such as legitimate interest, and other studies on concepts from data protection, and they are really interesting; they establish some guidelines for using legitimate interest, but they can't be enforced right now. So we need some written baseline, as it's our tradition in Brazil to have things written down. But you definitely can process data under legitimate interest for uses inside the company: data generated using apps, platforms, and companies' websites can be processed inside those platforms under legitimate interest, for example, but you can't use this legal hypothesis to process the data outside the platform or to exploit this data economically. I don't know if that was clear enough. Yeah, so basically you need guidelines from our DPA to be able to use legitimate interest, and you need to know when you can or can't use it as a legitimate legal hypothesis to process data. But I believe that if we have this baseline, at least it'll be clearer when you can or can't, because as it stands right now, you have to look case by case. Wonderful. I think we are one minute past time, but we can stay for a little bit longer, right? Ten minutes? All right, is there any other question from the audience? Yeah, we
do have one here. Hi, good morning, my name is Isabelle, I work as a journalist at a website called Mobile Time, and yesterday Tunu asked the audience and the speakers if they have the same feeling she has, that big techs don't want to be regulated at all, on any basis. So I want to ask the same thing of you: do you have this feeling as well, that in Europe, or in the UK, or in your country, the big techs don't want to be regulated at all? Because we have this feeling here in Brazil. Wonderful. Ellie, do you want to take this one? I can, yeah. I mean, in short, yes, I think we're seeing that it feels this way. If we look at some of our advocacy related to the EU AI Act or the Council of Europe's Framework Convention on AI, we see these really glaring gaps in the legislation, despite really hard-fought battles, including by all of the organizations represented on this panel and those of you in the room, to ensure that those pieces of legislation are robust, that they can cover the private sector, that they can cover national security uses; and yet we see these really glaring gaps where they don't. But I don't wish to be so pessimistic: because of that really hard work, we also see that significant gains have been made, for example including biometric data within the scope, as Juan mentioned in his intervention, and also the fact that there is even a fundamental rights impact assessment as part of the EU AI Act, which was a really hard-fought development by civil society and by affected communities. So I want to be optimistic, but I suppose it speaks to the point that we can't always rely on these legal tools, given these power asymmetries, to grant us the rights that we need; we need to have these broader conversations where we do the imaginative work, and where we act collectively, because it won't happen if we don't do that work. I hope that can be a helpful and not too pessimistic answer. No, that was wonderful. I'd also like to add that there were certain times when Big Tech actually approached civil society and regulators and said, please regulate us; that might have been a strategy to create more asymmetry and to make competition even harder for players behind in their capacity for computation and innovation. So I think this is a very complex question, and I don't think we would get a proper response if we went and asked Big Tech directly what their position is. So yeah, I think that's a wonderful way to wrap up this session. I'd like to thank everyone for joining us, and especially our speakers; Juan, Ellie, thank you for joining us online, I hope you have a wonderful rest of your day, and let's keep the conversation going in the coffee break, let's go for coffee before the other sessions start. Thank you, bye-bye. Thank you, we'll join you in person next time. Thank you so much.