AGOPOL ONLINE CONFERENCE
Algorithmic Governance and Cultures of Policing (2)
Diffusion of Policing in the Algorithmic Society
November 18, 2022, 09:00–18:00 Oslo time / Online event
Please click the link below to join the webinar:
Passcode: 782254
Organizers
Veronika Nagy, Utrecht University, Netherlands
Ella Paneyakh, College of International and Public Relations Prague, Czech Republic
Tereza Østbø Kuldova, Oslo Metropolitan University, Norway
Policing practices have arrived at a new stage of digital transition in late modern societies. The tasks of maintaining social order and public safety are gradually being extended from traditional crime control and investigation measures towards invasive prevention strategies across many segments of society and everyday life, spanning social, economic, and even environmental issues. Traditional policing actors are losing their legitimacy and monopoly on crime control, while new providers of security and intelligence services proliferate. This fragmentation and diffusion of policing, along with the shift towards pre-emption, generates both a culture of responsibilization and new opportunities for state-corporate collaborations in and outside of the scope of traditional security governance. Certain policing functions are gradually being taken over by civic and state agencies responsible for licensing and the maintenance of registries and databases, while others are directly delegated via regulation to private sector actors, in-house corporate security, civil society organizations and diverse public bodies beyond traditional law enforcement (Kuldova, 2022). Hence, diverse actors become plugged into networks of intelligence work (Ben Jaffel & Larsson, 2022) and into the logic of pre-crime and pre-emptive policing, often relying on data-driven technologies (McCulloch & Wilson, 2016). At the same time, law enforcement agencies rely in their everyday policing work on data created and maintained within civic agencies and private corporations.
With the growing dominance of nonstate and often transnational actors in the security, intelligence and consultancy markets, digitization, AI, big data analytics and automation have become the technosolutionist (Morozov, 2013) buzzwords and selling points – coveted by the public and private sector alike – promising efficient, transparent, and objective control and prevention measures. These buzzwords, and the fear of missing out, are used not only to justify the use and expansion of different identification, surveillance, authentication, verification, and monitoring technologies to fight criminal threats, but also to promise seamless preventive and pre-emptive solutions for threats associated with specific target (high-risk) groups, situations, and locations. While these technologies are frequently developed with good intentions, they can and do contribute to criminalization and discrimination, further fueling the crisis of law enforcement on the one hand, and societal mistrust and injustice on the other. This conference seeks to interrogate these and related issues.
References
Ben Jaffel, H., & Larsson, S. (2022). Introduction: What’s the Problem with Intelligence Studies? Outlining a New Research Agenda on Contemporary Intelligence. In H. Ben Jaffel & S. Larsson (Eds.), Problematising Intelligence Studies: Towards a New Research Agenda (pp. 3-29). Routledge.
Kuldova, T. Ø. (2022). Compliance-Industrial Complex: The Operating System of a Pre-Crime Society. Palgrave Pivot.
McCulloch, J., & Wilson, D. (2016). Pre-crime: Pre-emption, precaution and the future. Routledge.
Morozov, E. (2013). To Save Everything, Click Here. Public Affairs.
Conference Program
09:00 – 09:15
Opening Statement
Tereza Østbø Kuldova, Veronika Nagy and Ella Paneyakh
09:15 – 10:00
Algorithmic Policing of Synthetic Media: Deepfakes to DALL-E
Ignas Kalpokas
Vytautas Magnus University, Lithuania
Synthetic media abound in today’s digital-first world. Broadly defined as any form of content (audio, visual, textual, or any combination thereof) created primarily through the use of AI-enabled tools, synthetic media first came to prominence in the form of deepfakes (although they were, by most counts, predated by some text and music generators). More recently, image generators like DALL-E or Midjourney have attracted the public’s attention. On the one hand, the proliferation of such tools could democratise creativity in unprecedented ways. On the other hand, such generative capacity gives rise to new incarnations of old fears: fake news and synthetic non-consensual porn. As legal regulation is either lacking or difficult to enforce, the emphasis has shifted to online platforms and their algorithmic content moderation tools to ensure that the public sphere is cleansed of actual or alleged threats. This is also part of a broader move towards ever-greater delegation of quasi-judicial powers to online platforms. While such delegation has frequently been problematic due to the lack of accountability and due process guarantees by platform companies, the regulation (and policing) of synthetic content raises new issues: platform companies are now tasked with determining, in an automated fashion, matters of artistic value, decency, permissible and restricted speech, fair use and copyright infringement, etc. While in courts and other regulatory and judicial bodies such questions are often a matter of interpretation, the expectation that such decisions could be made algorithmically seems misguided at best.
Ignas Kalpokas is Associate Professor at the Department of Public Communication, Vytautas Magnus University where he also heads the MA program Future Media and Journalism. His research focuses on the social and political impact of digital technologies, algorithmic and platform governance, fake news and information warfare, and media theory. Ignas’ teaching stretches across the domains of journalism and media studies, disinformation and propaganda, and geopolitics of the internet. He is the author of Creativity and Limitation in Political Communities: Spinoza, Schmitt, and Ordering (Routledge, 2018), A Political Theory of Post-Truth (Palgrave Macmillan, 2019), Algorithmic Governance: Politics and Law in the Post-Human Era (Palgrave Macmillan, 2019), Malleable, Digital, and Posthuman: A Permanently Beta Life (Emerald, 2021), and co-author of Deepfakes: A Realistic Assessment of Potentials, Risks, and Policy Regulation (Springer, 2022) and Governing the Metaverse: A Critical Assessment (Routledge 2023).
10:00 – 10:45
The Hidden Consequences of Predictive Policing for Everyday Police Work
Lauren Waardenburg
IESEG School of Management, France
Predictive policing is a much-debated topic. Ever since the first implementation at the LAPD in 2008, popular press and academic interest alike have focused on the moral and ethical questions surrounding this AI-based solution. While bringing these issues to light is of undoubted importance, little attention has been paid to the consequences of these tools for everyday police work. In this presentation, I will discuss the often hidden and unexpected consequences of developing and implementing a predictive policing AI system. Drawing on insights from my 31-month ethnography of the Dutch police, I show that new and potentially very powerful roles are emerging.
Lauren Waardenburg is an Assistant Professor at IESEG School of Management in Lille, France. Her main research interests are related to the role of technology for occupational emergence and change, the reconfiguration of work and organizing due to intelligent technologies, and the duality of the physical and the digital. She has a specific interest in using ethnography as a research method for studying technology in practice.
10:45 – 11:00 Break
Panel 1: Data Production and Manipulation
Moderator: Ella Paneyakh, College of International and Public Relations Prague, Czech Republic
11:00 – 11:15
Anticipating Crime: Standardization, Discretion and Datafication
Helene Oppen Ingebrigtsen Gundhus
University of Oslo, Norway
Christin Thea Wathne
Oslo Metropolitan University, Norway
Pernille Erichsen Skjevrak
Oslo Metropolitan University, Norway
Organizations use algorithmic prediction and decision-making in different ways, and these uses also differ over time. In Norway, intelligence-led policing and risk assessment tools are implemented to prevent crime. There is a lack of empirical research examining how police officers use data-driven technologies and how their varied usage affects not only approaches to potential criminals and crime prevention but also the technologies themselves. We will particularly explore the tension between invisible objects, such as discretion, and the increased standardization that risk assessment tools enable, by critically looking at crime prevention efforts using risk indicators to predict crime. The theoretical approach is inspired by social theory assuming that contextual aspects and communities of practice, together with non-human things, do ‘make a difference in the course of some other agent’s actions.’ The backdrop is the increased push for standardization of police data tools and methods, and for the integration of data, to reduce biases and make police knowledge more neutral and objective. This is prominent on different scales, from the data systems and formats made by global firms to the actual co-production of the outcome by the technology and its users. The paper aims not to focus exclusively on the role of digital data and software in the creation of potential criminals and crime, but also to explore co-construction as a process that involves both kinds of actors: human ones, such as programmers, end users and experts, as well as non-humans. The paper aims to contribute to the discussion of how the datafication of the future changes present policies and collective practices of imagining, planning, and controlling future(s).
Helene Oppen Ingebrigtsen Gundhus is a Professor and Head of the Department of Criminology and Sociology of Law, Faculty of Law, University of Oslo. Her research interests include policing and society, security and social control, crime prevention, knowledge regimes, science and technology studies (STS), and qualitative methods (discourse analysis, ethnography).
Christin Thea Wathne is a Research Director and Research Professor at the Work Research Institute (AFI), Oslo Metropolitan University, Norway. Her research interests include leadership and management, New Public Management, organizational development, organizational learning, professions, social identity, working environment, and mastery.
Pernille Erichsen Skjevrak is a PhD student at the Centre for the Study of Professions, Oslo Metropolitan University. Her research interests are social deviations and professional actors, particularly professional judgments in crime prevention. In her PhD project, Pernille aims to examine the interplay between standardized assessment tools and professional discretion in the Norwegian police.
11:15 – 11:30
Retrofitting Predictive Identification Models to Political Demands
Fieke Jansen
Data Justice Lab
The advent of predictive policing systems has been a key object of study for theorizing new forms of algorithmic governance and their impact on society. With the term predictive policing systems, I refer to tools that use data sets of different sizes to feed an algorithmic model intended to predict either places where crime is most likely to occur in the near future (hotspot policing) or persons who are likely to get involved in crime (predictive identification). These tools come with the promise of doing more with fewer resources: they should allow the police to better allocate resources, adjusting patrol frequency in ‘risky’ neighbourhoods, which will lead to crime reduction (Van Brakel, 2016; Brayne, 2017, 2020; Hardyns and Rummens, 2018). Scholarly critiques of the turn to predictive policing raise concerns about the reliance on datasets that reflect historic inequalities and perpetuate racialised policing (Williams and Clarke, 2018). These systems create policing futures in which police attention is increasingly directed at already over-policed communities, as these tools do not analyse or predict crime but analyse and predict police activity (Lum and Isaac, 2016). What is missing from these debates are the political and institutional drivers shaping why and how police turn to predictive policing systems, which in turn shape how these data models are constructed. This paper discusses the case study of the Top 400, a predictive identification program hosted by the city of Amsterdam in partnership with a number of public authorities, including the Amsterdam police department. Here, a home-grown data model is used to identify 400 youngsters who are at risk of embarking on a criminal career. The research is based on an analysis of over 4000 pages of FOI documents provided by the municipality of Amsterdam and the Amsterdam police. These offer insights into the notion of retrofitting, whereby the political demand to fill the list up to 400 individuals places continuous pressure on the municipality to change the variables of the data model.
Brayne, S. (2017) ‘Big Data Surveillance: The Case of Policing’, American Sociological Review, 82(5), pp. 977–1008.
Brayne, S. (2020) Predict and surveil: Data, discretion, and the future of policing. New York: Oxford University Press.
Hardyns, W. and Rummens, A. (2018) ‘Predictive Policing as a New Tool for Law Enforcement? Recent Developments and Challenges’, European Journal on Criminal Policy and Research, 24(3), pp. 201–218. Available at: https://doi.org/10.1007/s10610-017-9361-2.
Lum, K. and Isaac, W. (2016) ‘To predict and serve?’, Significance, 13(5), pp. 14–19. Available at: https://doi.org/10.1111/j.1740-9713.2016.00960.x.
Van Brakel, R. (2016) ‘Pre-Emptive Big Data Surveillance and its (Dis)Empowering Consequences: The Case of Predictive Policing’, in B. Van der Sloot, D. Broeders, and E. Schrijvers (eds) Exploring the Boundaries of Big Data. Amsterdam: Amsterdam University Press, pp. 117–141. Available at: https://www.ssrn.com/abstract=2772469 (Accessed: 2 March 2021).
Williams, P. and Clarke, B. (2018) ‘The Black Criminal Other as an Object of Social Control’, Social Sciences, 7(11), p. 234. Available at: https://doi.org/10.3390/socsci7110234.
Fieke Jansen is a Postdoctoral Researcher at the Data Justice Lab. She holds a PhD from Cardiff University, where she looked at the institutional and societal implications of the introduction of predictive identification and biometric recognition in Belgium, the Netherlands, and the UK. She is the author of the mapping study ‘Data driven policing in the context of Europe’ and co-author of ‘Biometric identity systems in law enforcement and the politics of (voice) recognition: The case of SiiP’ (alongside Lina Dencik and Javi Sánchez-Monedero). Throughout her work, Fieke has sought to repoliticize and decenter technology in discussions around harms, justice and rights. Fieke is the Chair of the board of the Digital Freedom Fund. Prior to starting her PhD, Fieke worked at Tactical Tech, a Berlin-based NGO, as the project lead for their Politics of Data programme, and at the Dutch development organisation Hivos on the intersection of human rights, the internet and freedom of expression.
11:30 – 11:45
Disrupting the Lifecycle of Data: Data Practices and the Early Intervention Dilemma
Vanessa Ugolini
School of International Studies, University of Trento, Italy
By normalizing the collection of personal data beyond security purposes, governments have prepared populations to live in a permanent state of risk by trading privacy for security. While personal data figure as transversal solutions to the management of crises, security authorities endeavour to consistently detect all possible criminal harms before they materialise. This paper engages with the role of data as threat enabler, looking particularly at the EU’s high-tech information architecture. The aim is to investigate the socio-political, legal and technical dynamics behind the production of data for the governance of security issues. Arguing that personal data are subjected to security judgements in a redundant manner has implications for how we think about their circulation in the digital environment. I advance the concept of the ‘data lifecycle’ to better capture the various moments in the life of data. Data are captured and stored within different databases, at different moments and across different spaces. However, between the collection and the use of data for a defined purpose, multiple activities, such as processing, analysis and transfer, affect the life of data in ways that inherently transform their ontology. Through these practices, data are constantly reassembled, recombined and re-purposed to become re-usable entries across different information systems for “crime management”. The management of crises in the ex-ante scenario, that is, before a crime occurs, is facilitated by the introduction of datafication technologies that offer the opportunity to collect information and learn facts critical to predicting crimes. Accordingly, framing information systems as instruments for crime prevention opens up an avenue for inquiring into the effects of data-driven governance.
Vanessa Ugolini is a PhD Candidate at the School of International Studies at the University of Trento, Italy. She is currently undertaking PhD research on datafication technologies implemented in the European Union for security purposes. She achieved a master’s degree with distinction in Intelligence and International Security at King's College London and was awarded the 'Barrie Paskins Award' for Best MA Dissertation by the Department of War Studies.
11:45 – 12:00 Q&A for Panel 1
Panel 2: Face Recognition
Moderator: Veronika Nagy, Utrecht University, Netherlands
12:00 – 12:15
Diffusion of Policing Meets the Unrule of Law: Russian Mobilisation Case
Ella Paneyakh
College of International and Public Relations Prague, Czech Republic
Observations from the current ‘partial mobilization’ in Russia, in the context of the war in Ukraine, demonstrate how digital technologies can be used and abused by a government when legal restrictions lapse in a state of exception. In the past, the Russian government made multiple attempts to regulate the use of new digital technologies, including preventing the dissemination of personal data and restricting the use of facial recognition systems to functions that presumably do not violate citizens’ rights. Overall, despite significant misuse in politically loaded cases, the use of digital technologies even within the law enforcement system seemed to be overregulated rather than underregulated, to the extent that some police officers preferred to obtain digital data from the shadow market and only later, if the data worked for their investigation, to legalize them by performing a formal query to an official database. The state of exception, though, has practically nullified these restrictions. The conscription campaign declared by the government, namely recruiting several hundred thousand adult men to the military in addition to the regular draft that targets young men over 18 years old, involves wide use of facial recognition technologies as well as multiple databases, including those created for taxation purposes, citizen and international passport registries, airport screening systems, databases of delivery companies and taxi services, etc. The integration of public and private databases created mostly for civic purposes with those created initially for law enforcement allows the police involved in the operation to detect and deliver citizens who appear in the system as subjects for conscription. Surprisingly though, while making the citizens targeted by recruitment more vulnerable, these integrated digital technologies do not introduce more order into the process of involuntary conscription. Instead, due to the opportunistic behaviour of the street-level personnel performing different tasks in the process, they seem to multiply chaos and vulnerability, including for those not actually targeted by conscription.
Ella Paneyakh, PhD (kandidat nauk), is a Researcher at the College of International and Public Relations Prague, Czech Republic. Before that, she was a Docent at the Sociology Department, National Research University Higher School of Economics, St. Petersburg campus (2015-2020). In 2009-2015 she worked as a professor in the Department of Political Science and Sociology at the European University at St. Petersburg, and as Director and Senior Researcher at the Institute for the Rule of Law at the same university. Previously, she studied as a doctoral candidate in the Sociology Department of the University of Michigan (2002–2009) and received an M.A. from the Department of Political Science and Sociology at the European University at St. Petersburg (2001). In 1996, she received a specialist diploma in Economics from the St. Petersburg University of Economics and Finance. She was a columnist for Vedomosti from 2002 to 2020 and received a “Liberal Mission” journalism award in 2020. She is a member of the Redkollegia Award panel. She has also written for Forbes-Russia, Eurozine and many other media.
12:15 – 12:30
Policing Faces – Perceptions of Facial and Emotion Recognition Technologies
Diana Miranda
University of Stirling, UK
Lachlan Urquhart
University of Edinburgh, UK
In this qualitative study, we explore UK police force perspectives on the use of automated facial recognition (AFR) technologies. We consider the narratives of front-line officers (from semi-structured interviews) on how AFR would impact their professional practice. We observed scepticism and disbelief among officers around the reliability, accuracy and effectiveness of AFR. Surprisingly, this often corresponds with wider critiques of AFR observed in the narratives of societal stakeholders, including mass media, citizens and civil liberty groups. However, this scepticism fades as officers consider the future of AFR, where optimism and confidence in its use grow. We develop an empirical and legal analysis that reflects on how these future visions shape the potential use of AFR in UK policing. We also consider the related technique of emotion recognition, which seeks to augment AFR by reading citizens’ emotive states. As it is an emergent tool, we draw on experiences from AFR to reflect on possible futures and expectations of what we term intelligent facial surveillance (IFS). This allows us to extend our focus to AI-enabled surveillance targeting the face in policing and to consider other emerging technologies beyond AFR (e.g. emotional AI). We conclude with 10 practical lessons that aim to inform any future law enforcement use of IFS in the UK and beyond.
Diana Miranda is a Lecturer in Criminology at the University of Stirling, UK.
Lachlan Urquhart is a Senior Lecturer in Technology Law and HCI at the University of Edinburgh, UK.
12:30 – 12:45
With Great Power Comes Great Responsibility: Proportionality within the Use of Facial Recognition Technology by Law Enforcement
Natalia Menéndez González
European University Institute, Italy
Facial Recognition Technology (FRT) is a disruptive technology with huge capability and impact. Its potential for remote deployment allows for contactless biometric identification. It is not the only contactless biometric technology, but it is particularly unnoticeable to its subjects. This characteristic raises several legal issues, such as the difficulty of obtaining informed consent. It also enhances FRT’s ubiquitous presence, which is particularly prone to enabling biometric mass surveillance scenarios. As a result, biometric mass surveillance apparatuses have been used to oppress minorities, as in the case of the Uyghur people in China. Precisely these characteristics have made FRT particularly attractive to Law Enforcement (LE) agencies, which have shown great interest in FRT as a valuable tool for crime prevention and investigation as well as for general security purposes. The Uyghur case perfectly illustrates a potentially abusive and discretionary use of FRT by LE; hence some sort of supervision must take place. In this scenario, the Principle of Proportionality (PoP) arises. The PoP is an argumentative legal framework to avoid discretionary activity by public powers. It allows the judiciary to limit the action of, for instance, police activity. According to the European Court’s jurisprudence, the PoP has three elements. First, ‘suitability’ appeals to the appropriateness of the measure to fulfil the purpose of its adoption. Second, ‘necessity’ implies recourse to the less onerous means if there is a choice. Third, ‘proportionality stricto sensu’ means that a measure is disproportionate if, although suitable and necessary, it puts an unreasonable burden on the individual. Hence, my contribution will analyse which proportionality criteria apply when FRT is used by LE.
Natalia Menéndez González is a PhD candidate at the European University Institute where she researches the proportionality within the use of Facial Recognition Technology by law enforcement authorities. She is also a teaching assistant at the School of Transnational Governance, a research fellow at the Center for AI and Digital Policy and the Information Society Law Center at the University of Milano, co-founder of The DigiCon blog and a board member of the PhD students in AI Ethics research group. She has participated in numerous research projects, and conferences and has several publications. Her other research interests include AI Ethics, especially for Natural Language Processing models.
12:45 – 13:00 Q&A for Panel 2
13:00 – 13:30 Lunch Break
Panel 3: Algorithmic Policing Across Political Contexts
Moderator: Tereza Østbø Kuldova, Oslo Metropolitan University
13:30 – 13:45
Everyday Policing and Governance in Kerala: Understanding the Terrain for Algorithmic Transition
Ashwin Varghese
O.P. Jindal Global University, India
To effectively understand the transition to algorithmic governance and policing in specific societies, it is important to first comprehend everyday policing and governance in their particularistic contexts. We cannot assume that states adopting smart policing and data-driven pre-emptive policing are doing so from similar experiences, histories, cultures and politics. The state of Kerala, India is a case in point. Modern policing in India is significantly conditioned by its experience of colonialism and the subsequent expansion of capitalism. Within the postcolonial context of India, the experience of governance and policing in Kerala has been unique on several fronts. Since its formation, the state has charted an alternative development trajectory, popularly known as the Kerala model. While literature on the Kerala model has focused heavily on the state’s social welfare measures, such as access to food, healthcare and education, little attention has been paid to its policing. I focus my attention on policing and governance and how they have fared within the Kerala model so far. Drawing on part of my doctoral study – an ethnographic study of a police station in Kerala – I will map the terrain of everyday policing and governance in Kerala, laying out the social, political and historical context through which the everyday practices of the state have evolved. Today, the state government in Kerala and its police force are embarking upon ambitious projects of smart governance and policing. I argue that the scope and limitations of these projects can be fully grasped only with a thorough understanding of existing everyday practices, which these initiatives aim to modernise/change.
13:45 – 14:00
Biometric (Data) Governance and Digital Surveillance: A Comparative Analysis of Bio-Politics in India and China
Gurram Ashok
University of Hyderabad, India
This analytical paper critically engages in a comparative analysis of the operational aspects of Michel Foucault’s bio-politics through the algorithmic rationalities of governance in India and China. The objective of the study is to examine how India and China, despite being different regime types, could adopt a similar method of governance, namely biometric (data) governance, under the guise of providing national security, before gradually extending into the delivery of welfare services to the poor, which eventually paved the way for digital (mass) surveillance, especially against “the dissent”. The study aims to bring out the non-western experience of the surveillance state and its legitimacy among the masses of the global south. The study also traces the role of data governance by both state and market agencies in the changing nature of capitalism, from industrial to financial to surveillance capitalism, in the neoliberal era. The methodology of critical discourse analysis is used to analyse policy practices such as Aadhaar-based services and DNA profiling in liberal democratic India, and the “social credit system”, “genomic surveillance” and internet control in totalitarian China. The convergence of algorithmic governance techniques in liberal and totalitarian regimes is the result of an increasing urge for bio-politics, where state and market agencies compete to control ‘every aspect of human life’. Thus, biometric (data) governance coupled with surveillance technologies is leading to a “backsliding of democracy” in India towards a “tyranny of technology” that is visible in China’s “digital authoritarianism”.
Gurram Ashok is a Doctoral Fellow at the Department of Political Science, School of Social Sciences at the University of Hyderabad, India.
14:00 – 14:15
Revolutionary Roots of Algorithmic Policing? A Police Ethnography on Contemporary Grassroots China
Lingxiao Zhou
University of Illinois at Urbana-Champaign, USA
By 2022, China had fundamentally reorganized its system of local governance to serve the party-state’s increasing demand for preserving stability. The new grassroots governance repertoire centers on a spectrum of digital technologies, utilizing various digital information platforms to collect a wide range of police intelligence across a manageable set of databases. The analysis focuses on the mode of information gathering and integration evident in daily policing praxis. Based on 12 months of ethnographic research in various police functional branches within a single county, combined with historical research, this study reveals that current algorithmic policing, including the identification of everyday disputes among the people that may threaten social order as well as COVID-19 prevention and control, is historically and culturally informed by Maoist technologies of policing class adversaries. Specifically, the operational principle of “labelling” (daimao, literally translated as “wear a hat”), a Maoist practice of differentiating and hierarchizing subjects, remains embedded in the algorithms by which local party-states decide which approaches (i.e., the required police surveillance, quarantine measures, etc.) are applied to which individuals. Analytically, this study suggests a new concept, revolutionary policing, to make sense of the contradictory linkage between the revolutionary ideal of changing the future and the contemporary domestic agenda of maintaining the status quo. Drawing on conversations in the literature on governmentality and surveillance studies, this study of frontline agents contributes to our understanding of the subjective experience shaped jointly by an assemblage of revolutionary political technologies and policing by app.
Lingxiao Zhou is a PhD candidate in the Department of East Asian Languages and Cultures at the University of Illinois at Urbana-Champaign, USA. His research is at the intersections of policing, law, and East Asian studies. He is currently conducting his dissertation research on grassroots social governance (shehui jiceng zhili) with the focus on digital technologies in the P.R. China.
14:15 – 14:30 Q&A for Panel 3
Panel 4: Predictive Policing
Moderator: Helene Oppen Ingebrigtsen Gundhus, University of Oslo, Norway
14:30 – 14:45
Critical Digital Literacy in an Age of Big Data Policing: What is Needed, What is Possible?
Jelena de Rijck, Pieke de Beus & Paul Mutsaers
Radboud University, Netherlands
In the 15 years or so in which we have been engaged in criminal justice research, we have seen a groundswell of opposition to criminal justice agencies almost unequalled in history. The countless protests that have erupted have been predominantly focused on the police as the frontline organization that has become a security provider to some but a security risk to others (Mutsaers 2019). Increasingly, these protests have gone digital, with the ‘archival power’ (Trouillot 1995) of metadata such as hashtags at protestors’ fingertips, quite literally. However, with the equally paced digitalization of policing itself, new challenges are posed to ‘policing the police’ initiatives. A distinct feature of this digitalization of policing is the proliferation of predictive policing tools, such as the Crime Anticipation System (CAS) developed by the Dutch National Police. CAS provides a spatiotemporal risk assessment of high-impact crimes to allocate police resources accordingly and prevent crime. However, such risk assessment tools have risks of their own. Not only can they threaten citizens’ rights to non-discrimination and privacy, but they often lack transparency and explainability. The black-box nature and opaqueness of their internal logics make it difficult for the police themselves – officers and data scientists alike – to explain predictions, making it almost impossible for the public to understand them and fight against potentially discriminatory effects. How can citizens protest against high-dimensional datasets and obscure algorithms? What sort of critical digital awareness and literacy is needed to know if, why and how one is profiled? In addition to studying CAS and its workings in a classic ‘beat ethnography’, we are carrying out a digital ethnography that examines online activism and focuses on ‘local’ online knowledge of CAS. We ask whether online (minority) communities are already trying to protect themselves against the risks of predictive policing techniques like CAS and what is needed for such protection to work.
Jelena de Rijck, Pieke de Beus and Paul Mutsaers are all three affiliated with Radboud University’s department of Cultural Anthropology and Development Studies and with iHub—Radboud’s interdisciplinary research hub on digitalization and society: Jelena as an undergraduate student doing a research internship; Pieke as a PhD candidate in the project Predictive Policing and its Discrimination Risks; and Paul as a police anthropologist and supervisor in the PhD project. We are involved in various research projects covering algorithmic terrains such as the Dutch big data policing system CAS, anti-police protests in the digital sphere, and the online-offline nexus of (criminalizing) drillrap in Dutch youth detention centres.
14:45 – 15:00
The Post-Racial Politics of Predictive Policing
Sanjay Sharma
University of Warwick, UK
Jasbinder Nijjar
University of Surrey, UK
Policing strategies aiming to identify and nullify risks to national security in western nations have become central to the biopolitical control of racialised populations. While the disproportionate impact of pre-emptive counter-terrorism policing on ‘Muslim’ populations has been highlighted, there is a lack of research that unpacks the racial techno-politics of predictive policing as securitisation. Counter-terrorism is governed by a state of crisis and perpetual contestation against (unknown) future threats. And predictive policing is progressively dependent on the computational production of risk to avert acts of terrorism. Extant forms of counter-terrorism algorithmic profiling are being expanded in ways that increasingly obfuscate their racialised epistemologies. Black-boxed abductive algorithms deployed in predictive policing are mobilising post-racial calculative logics that renew forms of racism and oppression while appearing to be race-neutral. Moreover, these predictive algorithms seeking to identify potential suspects entrench and expand race-making, and thus inadvertently foment the dread of terrorism. This paper interrogates how dominant notions of prediction and risk in counter-terrorism discourse are entangled with profound fear of racialised Muslim populations, to argue that technologically-driven pre-emptive policing efforts to thwart future acts of terrorism exemplify a strategy of racial containment (which is inexorably predicated on its own failure). We demonstrate that, in predicting imminent terrorist threats, the security state institutionalises Muslim subjects as abnormal, rendering certain behaviours, religious expressions, modes of dress, relationships, political beliefs and other innocuous aspects of everyday life as racialised markers of risk of terrorist violence. This post-racial mode of algorithmic profiling as prediction expands and intensifies the carceral state vis-à-vis the control of abject bodies. We maintain that predictive policing produces Muslim subjectivities through martial logics of imminence and threat that rationalise targeted and escalated modes of disciplinary and biopolitical corrective intervention in defence of western civilisation.
Sanjay Sharma, Centre for Interdisciplinary Methodologies, University of Warwick, UK, researches digital technologies and social justice from decolonial and abolitionist perspectives. His recent work critically explores how applications of 'Artificial Intelligence' (AI) are deepening social harms across society.
Jasbinder Nijjar, Department of Sociology, University of Surrey, UK, is currently examining institutional racism in London's Metropolitan Police Service.
15:00 – 15:15
Automating Police Discretion: The End of a Truly Reasonable Suspicion?
Kelly Blount
University of Luxembourg
The use of risk assessment tools by police, and their growing sophistication with the addition of AI for collecting and sorting data, have led to the proliferation of predictive, or algorithmic, policing. Predictive policing, defined as the prediction of where or by whom crime is most likely to occur next, allows for the allocation of police resources based on perceived risk. Predictions are generated according to relative risk and based on correlations in data, rather than provable or observed causation. In addition, because predictions are necessarily relative as well as correlative, it is important to note their overall generic quality. Further, discounting causation in favour of an ever “smarter” understanding of relative risk has changed the calculus by which police interact with the community. Predictive policing may supplement, but also fully supplant, the use of police officer discretion, a core component of the policing toolbox. The output of risk assessments therefore colors the expectations and perceptions of an officer, causing the creation of suspicion to be fully or partly dependent on automation. As a result, the reasonable suspicion standard, so crucial to criminal justice and fundamental rights standards throughout international and national legal regimes, may become a derivative of automated data processing. This paper will argue that police reliance on risk scores automates the processes precedent to building suspicion. Therefore, though police decision making is not automated per se, the suspicion necessary for taking particular action is automated in that officer discretion is influenced by algorithmic calculations. The paper further argues that the end result is ‘automated suspicion’, by which the legal frameworks applicable to policing no longer match the current state of policework. The paper will conclude by questioning whether predictive policing may be lawfully used within existing legal frameworks and whether de facto automated suspicion is in practice a form of automated decision making.
Kelly Blount is a Doctoral Researcher at the University of Luxembourg, as a member of the Doctoral Training Unit on Multi-Level Enforcement. Her research focuses on identifying the applicable legal framework(s) for predictive policing and specifically police uses of AI. She is a licensed attorney in the United States, having earned her Jurist Doctor in 2017, and a student coordinator for the Jean Monnet European Union Law Enforcement Network since 2019.
15:15 – 15:30 Q&A for Panel 4
15:30 – 16:00 Break
Panel 5: Ethics
Moderator: Tereza Østbø Kuldova, Oslo Metropolitan University, Norway
16:00 – 16:15
How Is Deep Fake Threatening Public Safety and What Can Law Enforcement Do About It?
Aurelija Pūraitė
Mykolas Romeris University, Lithuania
The number of cyber incidents in Lithuania, as in the whole world, is increasing every year. According to the National Cyber Security Center at the Ministry of National Defense, in 2020 cyber incidents of different kinds increased by 25 percent, and the number of incidents related to the distribution of malware increased by as much as 49 percent, with the same trend observed in other states. Categories such as deepfakes, propaganda, and disinformation are related to cybersecurity as well. Propaganda, systematic disinformation campaigns and deepfake content designed to misinform the public and to influence politics and democratic processes contribute to fraud and other crimes. Deepfake images created with the help of AI may be amazing when they add value to a movie, but the same technology quickly turned very threatening and provocative when a fake, heavily manipulated video depicting Ukrainian President Volodymyr Zelenskyy circulated on social media and was placed on a Ukrainian news website by hackers in early March 2022. This presentation discusses whether the law offers at least some tools that can help victims of crimes linked to the use of deepfakes (such as defamation, intentional infliction of emotional distress, privacy torts, and some others) to protect themselves; whether law enforcement officials are ready to identify and prevent these possible violations; and what other possible threats deepfakes may pose to law enforcement activities.
Aurelija Pūraitė is a Professor of Law at Mykolas Romeris University Public Security Academy (Lithuania), Vice-Dean for Science and Project Activities, and the Head of the Sustainable Security Research Centre. At present she is implementing several research projects, such as AGOPOL (“Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa”, funded by the Research Council of Norway); HYBRIDC (an Erasmus+ project in the area of higher education cooperation, funded by the European Commission, creating a Master’s curriculum specialising in fighting international hybrid threats); HELCI (Higher Education Learning Community for Inclusion, which will elaborate, validate and share innovative content by developing MOOCs addressed to university communities on the topics of non-discrimination, ethnic and cultural diversity, affective and sexual diversity, and gender discrimination); and NAAS (Project for National Information Impact ID & Analysis Ecosystem). Her latest publication is the edited volume “Europe Alone: Small State Security Without the United States” (Rowman & Littlefield).
16:15 – 16:30
Detect, Redact, Archive: Algorithmic Policing from Below?
Cheshta Arora
Centre for Internet and Society, India
The paper draws from the experience of developing a user-facing browser plug-in that deploys ML and non-ML approaches to help mitigate text-based online abuse targeting sexual and gender minorities on Twitter in India. It supports three Indian languages: Hindi, Indian English, and Tamil. The tool, called Uli, was built using feminist, bottom-up approaches that involved members of the community in the design and annotation of the ML model. Eighteen annotators—activists, journalists, and community influencers—were invited to participate in the development process and build a training dataset based on their experience of social media. During the first phase, a host of design ideas were crowdsourced, out of which three non-ML features were selected for the first iteration. Using ML and non-ML approaches, the tool can essentially help detect, redact and archive abusive content on the user’s feed. The paper reflects on these design features, how they were selected and justified, and situates these decisions in the political climate of governance, surveillance and policing prevalent in the contemporary discourse of internet governance. Through a self-reflective account of this tool, I look at the extent to which the dynamics of privatization, pluralization, and hybridization of algorithmic policing have entered political imaginations, and whether the appropriation of diffusive policing by bottom-up tools helps subvert the notion of algorithmic policing.
Cheshta Arora holds a PhD in Social Sciences and is a Researcher at the Centre for Internet and Society, India.
16:30 – 16:45
Diffusion of Algorithmic Decision Making and Lack of Transparency in Policing Practice
Ushnish Sengupta
Algoma University, Canada
This paper describes the diffusion of algorithmic decision making in policing practice by examining different implementations of technology in a specific city, namely by Toronto Police Services (TPS) in Toronto, Canada. By examining the trajectory of implementation, and eventual rejection, of three different technologies, the paper highlights the process of diffusion of algorithmic decision making within policing practice and the competing interests that determine the adoption of technology. The three technologies examined are Shotspotter, Clearview AI and NEC Nuface, all of which have been implemented in multiple jurisdictions across the world. An examination of the implementation of these three technologies highlights a lack of transparency on the part of the TPS towards the citizens of Toronto. The context of the evolving technology adoption strategies of TPS reveals a lack of consideration of issues of bias, particularly bias in historical data, which is then amplified through algorithmic decision making. Data published by TPS has repeatedly demonstrated existing racial biases in policing practices. At the same time, the published data has focused on in-person interactions and has not included technology-based surveillance. The biases that are evident in in-person interactions are likely to be replicated in technology-based surveillance practices without due consideration of historical data bias. TPS has utilized surveillance technologies including Shotspotter and Clearview AI, and some applications, such as NEC Nuface, continue to be used today. The article concludes with a recommendation for an Algorithm Register of all technologies used by TPS, published as Open Data, which would provide more transparency. An Algorithm Register is a list of algorithms in use by a public entity such as a police service. Publishing an Algorithm Register using Open Data principles will make the process more transparent to citizens, enabling effective interventions when required.
Ushnish Sengupta is an Assistant Professor in Community Economic and Social Development at Algoma University. He has a PhD from the Ontario Institute for Studies in Education, an MBA from the Rotman School of Management, and a degree in Industrial Engineering from the University of Toronto. Ushnish Sengupta is an award-winning teacher and has taught courses at post-secondary institutions and at community-based organizations. In addition to his academic experience, he has worked in various private sector, public sector, and social sector organizations including Atomic Energy of Canada Limited, Cedara Software Corp, Canadian Broadcasting Corporation, Centre for Addiction and Mental Health, OntarioMD, Ontario Telemedicine Network, and eHealth Ontario. Ushnish’s research interests include Nonprofits, Cooperatives, Entrepreneurship, Blockchain, Artificial Intelligence, Open Data, Diversity, and the Social and Environmental impact of technology projects.
16:45 – 17:00
Can Privacy and Ethics-by-Design be Adapted for Law Enforcement Technologies?
Joshua Hughes & David Barnard-Wills
Trilateral Research
The impacts that technologies have on us as individuals and on society at large can be significant. It is, therefore, important that technologies are designed and developed in appropriate ways. This is particularly the case with law enforcement technologies, due to the exceptional role that law enforcement plays in our societies, especially where data-analysis tools are used to reveal private information about suspects. Two design approaches that can assist in appreciating and mitigating the risks raised by law enforcement technologies are Privacy-by-Design and Ethics-by-Design. However, these approaches are primarily focussed on commercial technologies where the end-users are the focus of attention. With law enforcement technologies, by contrast, the end user is likely to be a law enforcement officer, such as a detective or crime data analyst, while the focus of attention for Privacy and Ethics-by-Design approaches is the subject of a criminal investigation. How should these approaches be adapted to deal with this change in focus? Another key issue is the lawful ability of law enforcement to uncover private details of individuals present in their investigations: how should Privacy and Ethics-by-Design be implemented in a situation where conventional standards of privacy do not apply, and where the standards of what behaviours might be ethical and acceptable are different? This paper seeks to answer these questions and provide an outlook on how Privacy and Ethics-by-Design approaches can be adapted and applied in researching and developing data-analysis technologies for law enforcement investigations.
Joshua Hughes is a senior research analyst at Trilateral Research, where he leads the research cluster for law enforcement and community safeguarding. He has worked across a number of research projects exploring new technologies intended for police investigations and digital forensics. He contributes research on the ethical, privacy, legal, and societal impacts of technology and advice on implementing Privacy-by-Design and Ethics-by-Design approaches. His PhD thesis examined the legal issues of weapon systems controlled through artificial intelligence.
David Barnard-Wills is a Research-Innovation Lead at Trilateral Research. He works to connect Trilateral’s interdisciplinary research projects with its commercial services in ethical AI and data protection and to encourage knowledge sharing across the company. At Trilateral, he has led various applied technology development projects, where he contributed policy knowledge, privacy- and ethics-by-design, technology foresight and research data management experience. He has also led research projects on cyber conflict, international collaboration between regulatory authorities, data protection training, and SME experiences with the GDPR. David has an academic research background in the politics of information and security technologies. He has a PhD in the politics of identification systems from the University of Nottingham, an MA in Political Science with a research methods specialism, and a BA in Politics. He has previously been a Research Fellow in the Department of Informatics and Systems Engineering at Cranfield University, the School of Politics and International Relations at the University of Birmingham, and the UK’s Parliamentary Office of Science and Technology.
17:00 – 17:15 Q&A for Panel 5
17:15 – 17:30
Book Series Presentation: Routledge New Intelligence Studies
Hager Ben Jaffel & Sebastian Larsson
This book series offers a comprehensive and innovative account of contemporary intelligence. It gathers scholarship that takes the study of intelligence professionals and practices as the point of departure and investigates its current configuration as a heterogeneous practice, overlapping with surveillance, counterterrorism, and broader definitions of security. In doing so, the series provides a renewed understanding of intelligence that conceptually and empirically challenges Intelligence Studies’ traditional ontological and epistemological foundations.
Hager Ben Jaffel is a research associate at the National Center for Scientific Research in Paris. Her research focuses on the sociology of intelligence, with a particular focus on its relationships with law enforcement and politics. She initiated and leads a collaborative project with Dr. Sebastian Larsson aimed at reformulating empirical and conceptual debates on intelligence. She is the co-editor-in-chief of the new book series New Intelligence Studies.
Sebastian Larsson is a Lecturer in War Studies at the Swedish Defence University. His research concerns the international, political and sociological dimensions of security and the military profession, as well as new approaches to intelligence and surveillance. He is the co-editor, together with Dr. Hager Ben Jaffel, of the forthcoming New Intelligence Studies series with Routledge.
17:30 – 18:00
Way Ahead and Discussion
Moderated by: Tereza Østbø Kuldova
Contact
Veronika Nagy V.Nagy@uu.nl
Tereza Østbø Kuldova tereza.kuldova@oslomet.no
Ella Paneyakh paneyakh@gmail.com
This workshop is funded by the Research Council of Norway under project no. 313626 – Algorithmic Governance and Cultures of Policing: Comparative Perspectives from Norway, India, Brazil, Russia, and South Africa (AGOPOL). The workshop reflects the institutional collaboration between, and is jointly organized by, the Work Research Institute, Oslo Metropolitan University, Norway, and the College of International and Public Relations Prague, Czech Republic.