Privacy Camp 22 – Schedule (version 1.0)
Conference: PrivacyCamp 22 (privacycamp), 25–26 January 2022
Day 1: Tuesday, 25 January 2022 – Rooms Alice and Bob
All sessions are held in English under the Creative Commons Attribution 3.0 licence.

09:00–09:30, Room Alice
Opening
Opening of Privacy Camp 22.
Speaker: Privacy Camp 22 Team

09:30–10:30, Room Alice
Stop Data Retention – now and forever!
Our fundamental rights to privacy and freedom are at the core of our constitutional order, and they should apply effectively in the digital world as well. But even after countless attempts have been struck down by the European courts, the European Commission and some EU member states are holding on to the idea of introducing data retention measures in the Union. In July 2021, the Commission published a "non-paper" considering the legal options for introducing the indiscriminate retention of user data in some form or another. We do not want to let this happen. But even if, for many activists, the case against data retention is settled, it remains a task to reach the new generation of internet users and make them aware. Is it enough to describe the legal situation, namely that the practice of data retention is not compatible with our fundamental rights to freedom and privacy? What about the new data economy and all the glittering attempts to introduce data governance for all as the new normal? Artificial intelligence cannot exist without data, but what will this leave private? Since the European Commission is not listening, we once again need the power of many to make clear that EU citizens do not want their personal data to be retained. Current case law such as SpaceNet/Telekom will also be part of the discussion, as well as interesting stories from where it all started (and what we could learn from them!).
Speakers: Patrick Breyer, MEP, European Parliament; Rena Tangens, Activist and Artist, Digitalcourage; Noémie Levain, Lawyer and Member, La Quadrature du Net

10:40–11:40, Room Alice
Surveillance tech as misclassification 2.0 for the gig economy?
For almost a decade, gig economy employers have relied on sham contract terms to misclassify workers as independent contractors and deny them their statutory rights. Recently, some workers have been successful in asserting their rights in court. But the key to legal victory is proving that such workers were indeed under the direct management control of the employer and not truly independent. Since these victories, gig employers have taken steps to conceal the true nature of the relationship. Language at work is much more buttoned-up, and the controlling hand of management is hidden in a management function. Automated decision-making drives key processes that impact workers, such as recruitment, performance management, work allocation, pay and even dismissals. We have seen a rise in surveillance tech in the name of platform safety and anti-fraud detection. Platform employers even profile staff based on predictive 'fraud probability scores' and use such profiles not to dismiss an employee thought to be engaged in criminal fraud, but instead to prioritize automated work allocation decision-making. In other words, it is not really a worker fraud prediction score so much as a management performance rating in disguise.
Wider adoption of facial recognition systems with a known unacceptable failure rate when used with people of colour has led to discrimination and unfair dismissals. In this session, we will consider: How have gig employers adapted by using tech to conceal management control? What are the regulatory and legal drivers of gig employers' adoption of worker surveillance tech? How, and why, are gig platforms cooperating with state security services to share personal data? How can workers and unions organize to take back control of their digital footprint at work?
Speakers: Ayoade Ibrahim, President, National Union of Professional App-Based Transport Workers (NUPABTW), Nigeria; Kate McGrew, Co-Convenor, ESWA; Aida Ponce Del Castillo, Senior Researcher, ETUI

11:50–12:50, Room Alice
The DSA, its future enforcement and the protection of fundamental rights
The Digital Services Act (DSA) proposal currently under discussion aims at providing a harmonized regulatory framework for addressing online harms, while at the same time creating accountability for service providers and protecting users' fundamental rights. A specific set of challenges – in the DSA proposal as well as in other initiatives relating to the regulation of platforms – concerns the effective enforcement of forthcoming and existing rules and, more generally, the capability of such rules to meaningfully protect fundamental rights. The experience with the enforcement of the GDPR, marked by major stalemates and shortcomings, is exerting a significant influence on the DSA negotiations, where the idea of centralized enforcement for dominant platforms, and of increased cooperation among national authorities, is gaining political support in the legislative process. The individual and societal harms stemming from the platforms' adtech-centred business model have taken centre stage in the DSA debate. However, more radical substantive restrictions on surveillance-based ads are at this point not expected to make it into the final text. In any case, in light of the DSA's stated goals, these questions remain absolutely relevant and urgent: can the DSA provisions (particularly the systemic risk management mechanism), when properly enforced, effectively address the mass-scale and systemic violations of fundamental rights occurring as a result of business model choices? This session will focus on a set of questions related to how the DSA can be expected to have an impact, in particular given its enforcement in practice: Does the enforcement structure provided for by the DSA proposal(s) have the potential to ensure adequate protection of fundamental rights and other democratic values? What potential challenges can we identify and anticipate with respect to the meaningful implementation and enforcement of these provisions? Can the DSA's enforcement chapter prevent the recurrence of some of the biggest failures seen in the enforcement of the GDPR (for example with regard to the systemic violations and harms, on an individual and societal level, brought about by the ad-tech industry)?
Moderators: Joris van Hoboken, Associate Professor / Professor of Law, University of Amsterdam / Vrije Universiteit Brussels, and Ilaria Buri, Researcher, Institute for Information Law (IViR), University of Amsterdam, DSA Observatory
Speakers: Jana Gooth, Legal Policy Advisor, European Parliament, Assistant to MEP Alexandra Geese; Paul Nemitz, Principal Advisor, DG Justice, European Commission; Eliska Pirkova, Europe Policy Analyst and Global Freedom of Expression Lead, Access Now; Eliot Bendinelli, Senior Technologist, Privacy International

12:50–14:00, Room Alice
EDPS Civil Society Summit 2022: Your rights in the digital era have expired: Migrants at the margins of Europe*
In the past years, the European Union (EU) and the Member States have shifted asylum and migration policies even further to prioritise the prevention of new arrivals, the detention and criminalisation of people who enter, the return of people with no right to stay, and the externalisation of responsibilities to third countries. In that context, authorities have increased the collection, retention and sharing of the data of people on the move as a core aspect of the implementation of EU migration and border management policies. As a result, the EU has multiplied and rapidly expanded databases and networked information systems for use by immigration authorities. These databases play a key role in increased deportations, notably by ensuring the biometric registration and monitoring of almost all non-EU nationals. The data held in these systems is also interconnected as part of the "interoperability framework", which allows access by a growing range of law enforcement authorities, further cementing the trend of "criminalising" people on the move. Crucially, this interoperability framework blurs the distinction between the different policy areas of asylum, migration, police cooperation, internal security and criminal justice. Furthermore, the EU's focus on returns justifies the systematic exchange of information among national authorities, EU agencies and third countries. Changes currently under discussion as part of the EU Pact on Migration and Asylum mainly seek to deepen these "securitisation" trends. Lastly, the EU and the Member States are increasingly turning to novel techniques to 'manage' migration, funding and carrying out technological experiments, including through AI and other data-driven technologies, for mass surveillance at the border, predictive analytics of migration trends, and assessing the 'risk' posed by people on the move, in ways that deeply affect experiences in the migration system. In addition, those new tools are deployed despite serious doubts about the efficiency of at least some of them, such as the use of lie detectors in the iBorderCtrl project. The fundamental rights implications are far-reaching for asylum seekers and migrants: their data is used to prevent their arrival, to track their movements, to detain them in closed centres, to deny them entry or visa applications, and much more. Considering the vast power imbalances they face against the EU migration system, the multiplicity of the authorities involved and the complexity of interconnected IT systems, it is harder for them to exercise their fundamental rights, notably their rights to privacy and data protection.
The Summit will ask: How are data-driven technologies used in the field of migration and border control? How does this impact people on the move, their safety and their fundamental rights? What impact do these policies have on the right to seek asylum, the principle of non-refoulement, and other duties of Member States under international human rights conventions? How can the EU data protection framework protect the rights of people on the move, what are the challenges to realising their rights as data subjects, and how can we overcome them? What impact may the forthcoming legislation on AI have on their rights to privacy and data protection? The EDPS Civil Society Summit provides a forum for exchange between the EDPS and CSOs, to share insights on trends and challenges in the field of data protection and to engage in a forward-looking reflection on how to safeguard individuals' rights.
Speakers: Dr Teresa Quintel, Lecturer, Maastricht European Centre on Privacy and Cybersecurity; Alyna Smith, Deputy Director, Platform for International Cooperation on Undocumented Migrants (PICUM); Wojciech Wiewiórowski, European Data Protection Supervisor; Sarah Chander, Senior Policy Advisor, EDRi

14:00–15:00, Room Alice
Drawing a (red) line in the sand: On bans, risks and the EU AI Act
The EU AI Act contemplates a risk-based approach to regulating AI systems, where: (i) AI systems that cause unacceptable risks are banned and prohibited from being placed on the market; (ii) AI systems that cause high risks can be placed on the market subject to mandatory requirements and conformity assessments; and (iii) AI systems that pose limited risks are subject to transparency obligations. While this seems like an intuitive approach, the current classification of systems under each of these categories reveals a dangerously inconsistent schema in which fundamentally dubious technologies like emotion recognition are characterized as "limited risk", and systems can only be classified as posing unacceptable risks if they meet unreasonably high thresholds and arbitrary standards, exposing individuals and communities to nefarious AI use cases. There is an urgent need for civil society participants to: come up with a collaborative strategy for more successfully setting red lines for technologies that should not be designed, developed, standardized or deployed in democratic societies; ensure that the high thresholds and wide exceptions carved out for technologies posing "unacceptable risks" are revisited and more thoughtfully drafted to prioritise the protection of fundamental rights; and learn from experiences in non-Western jurisdictions and from civil society actors working in jurisdictions beyond Europe. Questions to be asked include: 1) What are the main hurdles on the path to establishing red lines, and does civil society currently have a viable action plan for overcoming them? 2) What successes and challenges have arguments advancing the imposition of red lines faced so far? 3) What are the best ways to reveal the inherent inconsistencies vis-à-vis risk classifications within the current proposal? 4) What key factors should civil society keep in mind while strategizing on an advocacy plan? 5) Given the global supply chain of AI technologies, what can we learn from the experiences of jurisdictions around the world?
6) How can civil society build a compelling narrative in favour of imposing red lines on some AI applications?
Speakers: Professor Lorna McGregor, Human Rights, Big Data and Technology Project, University of Essex; Daniel Leufer, Europe Policy Analyst, Access Now

15:20–16:20, Room Alice
Connecting algorithmic harm throughout the criminal legal cycle
Predictive automated decision-making throughout the criminal justice cycle impacts freedom, liberty and other protected rights. Although risk assessment algorithms, predictive policing and biometric surveillance by law enforcement have attracted significant attention, the broader ways in which algorithmic harm is inextricable from the criminal justice cycle have received far less. This panel will explore how automated decision-making in housing, education, public benefits and commerce impacts the criminal justice cycle, and the systemic failures that allow those uses to exacerbate negative impacts and perpetuate societal inequities. The panel will also discuss what this means for advocacy and whether it is even possible to use automated decision-making equitably in this context. Specifically, panellists will discuss different regulatory solutions addressing both criminal justice algorithms and AI broadly, to inform advocates' efforts to understand and mitigate algorithmic harm.
Speakers: Dr. Nakeema Stefflbauer, CEO, Frauenloop; Clarence Okoh, Civil Rights Legal Fellow, NAACP Legal Defense Fund; Silkie Carlo, Executive Director, Big Brother Watch

16:30–17:30, Room Alice
Regulating surveillance ads across the Atlantic
In this day and (digital) age, rumours spread easily. Be it false rumours, creepily targeted services or politically dubious claims, surveillance advertising is a practice that encourages them all. At the core of Big Tech's business model, and of an entire AdTech industry, sits a process that relies extensively on tracking and profiling individuals and groups, and then microtargeting ads at them based on their behavioural history, relationships and supposed identity. In Europe and the United States alike, the debate around how information circulates in our society is intensifying. Coalitions of civil society organisations, as well as multi-stakeholder, bipartisan forums, are emerging to advocate for stricter regulation of tracking-based ads. The EU's Digital Services Act (DSA) has the chance to change the status quo and bring better protections for online platforms' users and customers alike. Similarly, a new bill proposed by US Democrats aims to tackle the use of digital advertising targeting on ad markets hosted by platforms like Facebook, Google and other data brokers. The aim of this session is to find new avenues of cross-Atlantic cooperation between academia, civil society and decision-makers, in order to increase the impact of surveillance ads regulation in the US and the EU, from adoption to enforcement and impact for people. For this, the following questions will be explored: How can stakeholders use fora such as the Transatlantic Consumer Dialogue (TACD), the Trade & Technology Council (TTC) or other transnational groups to leverage better regulation of tracking-based ads? Are we witnessing a shift in platform regulation towards addressing exploitative business models?
What are Big Tech's defence and lobbying responses? What different roles should civil society, industry and investment associations play in this debate? How can European and US government representatives take a leadership position in advancing better regulation of tracking-based ads?
Speakers: Jan Penfrat, Senior Policy Advisor, EDRi; Nicole Gill, Co-founder, Accountable Tech (US); Jon von Tetzchner, CEO, Vivaldi

09:30–10:30, Room Bob
Centring social injustice, de-centring tech: The case of the Dutch child benefits scandal
What is the role of technology and automated systems in exacerbating existing social injustices? How can we identify the real harms automated systems can generate without disregarding the historical and social context that produced these systems in the first place? With the increased attention to the potential discriminatory and harmful effects of automated systems, especially in the context of government, comes the tendency to over-focus on the role of tech in systemic injustices. Clearly, critically examining the role of technology, and developing the necessary vocabulary to talk about the harms it generates, is of vital importance for holding digitising governments to account. Nevertheless, the historical context of systemic injustice, and the concrete harms experienced within it, should be the focal point of the debate. With this session we want to contribute to making the debate on digital rights, specifically in relation to marginalised voices, less technocentric. There is a need for nuance on the question of the driving role technologies play in exacerbating and perpetuating social injustices. Using a specific case study from the Netherlands, the child benefits scandal, as a starting point, the session brings to the fore the entanglement of a longer history of racist practices by governments, the increasing use of new technologies such as automated decision-making systems in government agencies, and the (potentially) outsized role of algorithms in these discussions. By engaging with, and prioritising, the local expertise of anti-racist organisations in the Netherlands, we believe that they can provide a deeper understanding and contextualisation of the concrete harms and social injustices experienced by marginalised groups, specific to their localities. Their vital contributions can help reconfigure the tech-driven harms debate and move it towards wider social justice goals. This panel brings local anti-racist organisations into the discussion on digital rights, first through the concrete case of the Dutch child benefits scandal, then by reflecting on the broader context of digitalisation in the public sector. This will provide greater nuance in articulating and addressing the problems of technology use and social injustice.
Panel goals: The panel aims to bring local and on-the-ground anti-racist organisations from the Netherlands into conversation with digital rights organisations. The overall objective is to identify concrete harms, as well as to provide nuance to the discussion by de-centring the role of technologies vis-à-vis social injustices. As a concrete outcome of this panel, we hope to identify key considerations for framing and addressing social justice concerns and tech-driven harms by centring local knowledge and expertise.
This includes identifying the areas in which local anti-racist organisations lack capacity, how digital rights organisations can support their efforts, and how these communities can be built and sustained beyond these discussions.
Guiding questions: Broadly, how are social justice concerns and tech-driven harms being discussed in digital rights spheres? How is the discourse of digital rights and technologies shaped? How does it differ from conversations and work in anti-racist organisations? How has the Dutch child benefits scandal been framed and discussed? What are the shortcomings and missing perspectives in these discussions? How does this relate to broader issues in the use of technology by government and its related agencies? How can the perspectives of anti-racist organisations be centred in these discussions? What kinds of structures and resources are needed to amplify the voices of local anti-racist organisations and activists in tackling some of the issues caused by the use of technologies? What kind of capacity-building do they need to build sustainable communities, and how can digital rights organisations help with that?
Speakers: Sanne Stevens, Co-Director, Justice, Equity and Technology Table; Merel Koning, Senior Policy Officer on Human Rights and Technology, Amnesty International; Nadia Benaissa, Policy Advisor, Bits of Freedom

10:40–11:40, Room Bob
Ministry of Microsoft: Public data in private hands
Concerns about digital sovereignty and the dependency of public powers on private infrastructures are on the rise. From universities receiving "free" cloud storage from Google to states' increasing reliance on Microsoft or Amazon cloud services, questions arise about sovereignty when crucial information is in the hands of for-profit organisations. The EU has made regaining digital sovereignty a priority and launched the Gaia-X cloud services project to compete with the big giants, only for it to be taken over by Huawei, Alibaba, Microsoft and Amazon. Are we moving towards another power grab by big companies over public infrastructures? Who should run our digital services, and how should they be run to be truly "sovereign"? What would human-centric sovereignty entail? What is the role of open-source software as a political tool?
Speakers: Seda Gürses, Associate Professor, Faculty of Technology, Policy and Management, TU Delft; Frank Karlitschek, Founder and CEO, Nextcloud GmbH; Estelle Massé, Legislative Manager and Global Data Protection Lead, Access Now

11:50–12:50, Room Bob
A feminist internet
This panel advocates for a feminist internet and the need to empower the communities that work with the technologies to build it. We, as artists who run a network of autonomous servers, have acknowledged a desire by creatives and activists to make and publish their work on our platforms, which are aligned with our identity politics, collaborative ethics and privacy concerns. However, our communities lack the structural resources to enable feminist hosting platforms to become sustainable in the longer term. Our invited speakers will elaborate on the urgency of technofeminist infrastructures in relation to the wider context of digital rights, and will address the challenges their mission entails in relation to internal and external agency.
The discussion will unfold around the following topics: gender bias in the FOSS and digital rights movements; governance for collectivised agency and reaching out to allies; and resources for long-term sustainability.
Speakers: Nate Wessalowski, Researcher, Leuphana University Lüneburg; Mallory Knodel, Chief Technology Officer, Center for Democracy and Technology; Andrea Zappa, Web Developer, Freelancer; Maddalena Falzoni, Founder, MaadiX ISP; Anaïs Berck, Artist

14:00–15:00, Room Bob
Regulating tech sector transgressions in the EU
The Covid-19 pandemic has spurred intense 'sector creep', with firms such as Google, Facebook, Amazon and Palantir seeking new markets and opportunities in global public health. These 'sphere transgressions' embed new possibilities for the monitoring and control of public and private life that will not disappear with the waning of the pandemic. We will bring together researchers and practitioners to discuss and debate the effects of this phenomenon, and will also share research from the Global Data Justice Project at Tilburg Law School. We have three aims for this session: to surface this new and rapidly developing phenomenon as a specific issue for the rights community globally, consulting with participants about its manifestations in different places; to connect it to a range of civil and political rights issues, including but going beyond privacy; and to debate possible responses on the part of civil society and rights groups. Through debating this at the panel, we hope to understand what leverage should be brought to address it: data protection and privacy claims, regulatory measures, civil society awareness-raising and resistance, pressure on governments for transparency and democratisation of decision-making, or norm-building in international fora. Our aim is to build a community around this issue, both by surfacing related issues from different countries and regions, and by involving participants actively in the search for responses. We want to connect the privacy community, who have been working on these issues for a long time, with those coming from other rights perspectives who may have new insights and responses from their own fields. The benefits of this session for participants are, first, the opportunity to collaboratively build knowledge and participate in research on this emerging phenomenon; second, to empower them to identify it as a new problem and discuss ideas about how to act on it in their local environments; and third, to create the opportunity for a network to grow around this problem and to follow it, and civil society responses, over time in different environments. The session will be interactive: we hope to use it as an opportunity to learn about new cases of sector creep around the world, and different views on responses.
Speakers: Usha Ramanathan, Independent Legal Researcher; Ouejdane Sabbah, Lecturer / Project Associate, University of Amsterdam / Global Data Justice; Mariana Rielli, General Project Manager, Data Privacy Brasil Research Association; Aaron K Martin, Postdoctoral Researcher, Global Data Justice Project, Tilburg University; Scott Skinner-Thompson, Associate Professor, University of Colorado Law School

15:20–16:20, Room Bob
Regulation vs. Governance: Who is marginalised, is "privacy" the right focus, and where do privacy tools clash with platform governance?
Many internet services are designed to collect personal data and exploit established and novel marketing techniques to nudge users into surrendering ever more of their undivided attention. While users may expect that, in return for their data and attention, they will receive content tailored to their interests, what they get is content selected and moderated based on the services' business interests, irrespective of user enjoyment or societal welfare. Indeed, many internet services enforce moral and societal frameworks that the target audience may neither be subject to nor agree with. Instead of serving the needs of users and treating them as ends, ad-driven services objectify users as means for profit, reducing their purpose to that of consumers to be manipulated into consuming more content, and specific content, at the choice of international corporations. Thus, in order to build respectful technologies free from structural exploitation, we must go beyond considerations of data privacy to examine the ways in which technology fails to meet users' expectations of what they will receive in return for their personal data and engagement. Specific issues for certain groups in society show that detrimental and discriminatory effects are pervasive, and indicate that the underlying issues require novel approaches to regulation and to community-driven platform governance. Examples of these effects: Children: manipulative online services for children do not cater to their best interests, but may present a threat to their development and freedom of thought. Sex workers: digital services frequently deplatform and censor discussions of sex and sex work, preventing sex workers doing legal work from being visible to broader society, and from having access to a supportive community, harm reduction information and digital financial services. In this panel we will look at digital infrastructures reflecting the needs of these two groups, children and sex workers. Our analysis is driven by the understanding that a sole focus on privacy and data protection may not be the appropriate way to regulate digital platforms and guarantee a safe environment for users. We will discuss different personal, legal and technological aspects of personal safety, internet governance and regulatory ideas beyond the General Data Protection Regulation and the Digital Services Act, to work towards new community-driven infrastructures that cater for intersectional justice. Specifically, we want to explore the boundary where the limits of regulation and community-driven privacy tools clash with platform governance.
Speakers: Elissa Redmiles, Research Faculty, Max Planck Institute for Software Systems; Tommaso Crepax, IT Researcher, Scuola Superiore Sant'Anna; Patricia Garcia, Assistant Professor, School of Information, University of Michigan, USA; Laïs Djone, Board Member, Utsopi, Belgium

16:30–17:30, Room Bob
How it started / how it is going: Status of Digital Rights half-way to the next EU elections
We will take stock of how the current EU legislative term has gone for digital rights so far, draw two or three takeaways on what to expect by 2024, and consider what to expect from the next elections. Through this analysis we aim to provide analytical and strategic tools for civil society to face existing and future challenges. We encourage self-criticism of strategies and an honest acknowledgement of what could have worked better, so that we can address these failures in our current and future work.
Speakers: Alexandra Geese, Member of the European Parliament; Asha Allen, Advocacy Director for Europe, Online Expression & Civic Space, Center for Democracy and Technology (CDT); Joris van Hoboken, Professor of Law at the Vrije Universiteit Brussels (VUB) and Senior Researcher at the Institute for Information Law (IViR); Anna Fielder, EDRi President and Senior Policy Adviser, Chair Emerita of Privacy International