The AI Localism Canvas: A Framework to Assess The Emergence of Governance of AI within Cities

Stefaan G. Verhulst
7 min read · Jun 6, 2022


Stefaan Verhulst, Andrew Young and Mona Sloane

(Earlier version published as part of Künstliche Intelligenz — Zwischen Erwartungen und Unbehagen).

The proliferation of artificial intelligence (AI) technologies continues to raise challenges and opportunities for policymakers, particularly in cities. As the world continues to urbanize, cities grow in importance as hubs of innovation, culture, politics, and commerce. More recently, they have also grown in significance as innovators in the governance of AI and AI-related concerns. Prominent examples of cities taking the lead in AI governance include the Cities Coalition for Digital Rights, the Montreal Declaration for Responsible AI, and the Open Dialogue on AI Ethics. Cities have also seen an upsurge of new laws and policies, such as San Francisco’s ban on facial recognition technology and New York City’s push to regulate the sale of automated hiring systems, as well as new oversight initiatives and organizational roles focused on AI, such as New York City’s Algorithms Management and Policy Officer and numerous local AI ethics initiatives in institutes, universities, and other educational centers.

Considered together, these initiatives and developments add up to an emerging paradigm of governance localism[1], marked by a shift toward cities and other local jurisdictions to address a wide range of environmental, economic, and societal challenges. With AI Localism, cities take the lead in problem solving, developing context-specific approaches to growth, governance, and innovation[2]. Cities are, in effect, filling gaps left by insufficient state, national, or global governance frameworks for AI and, more generally, technology. For example, we are also seeing signs of what we might call a “broadband localism,”[3] in which local governments address the digital divide by filling service gaps left by major broadband providers and take on a central role in integrating the Internet into everyday municipal life, as well as a “privacy localism”[4] that encompasses new privacy laws, typically enacted in response to pressing developments such as the increased use of data for law enforcement or recruitment.

Photo by Cem Ersozlu on Unsplash

AI Localism focuses on governance innovation surrounding the use of AI on a local level. Examples of AI Localism include local bans on AI-powered facial recognition technology, new local procurement rules pertaining to AI technology, public registries of AI systems used in local government, and public education programs on AI that are offered by local governments or institutions.

AI Localism offers both immediacy and proximity. Because it is managed within tightly defined geographic regions, it affords policymakers a better understanding of the disparate needs of citizens and of the technology’s potential and shortcomings, which can vary widely from region to region. By calibrating algorithms and AI policies to local conditions, policymakers have a better chance of creating positive feedback loops that result in greater effectiveness and accountability.

AI Localism and Decision-Making

Local actors involved in the governance of AI systems, including those associated with COVID-19, are faced with many competing imperatives, and must make difficult decisions weighing opportunity and risk. Their decisions across the design, implementation, and lifespan of an AI system can have significant ramifications — both positive and negative — on people’s lives and local economies.

As it stands, however, the decision-making processes involved in the local governance of AI systems are neither systematized nor well understood. Scholars and local decision-makers lack an adequate evidence base and analytical framework to guide their thinking. To address this shortcoming, we have developed the “AI Localism Canvas” below, which can help identify and assess the different areas of AI Localism specific to a city or region, in the process helping decision-makers weigh risk and opportunity. The overall goal of the canvas is to rapidly identify, assess, and iterate on local governance innovation around AI to ensure that citizens’ interests and rights are respected.

The first category, transparency, broadly relates to efforts by local governments to create transparency around the acquisition and application of AI systems across different government domains. A good example is the use of public AI registries that list the algorithms, AI systems, and tools used in public service. Public registries have been published by the cities of Helsinki and Amsterdam, and, more recently, the City of New York published a directory of algorithmic tools used by City agencies.

Procurement relates to any innovation pertaining to the procurement of algorithmic or AI products at the local level. A prominent example is the regulation of the acquisition of surveillance technologies by local government agencies, as in the City of Berkeley, California.

Innovation in engagement focuses on novel ways to engage publics in conversations and decisions about AI and AI-related concerns (such as data). This can mean partnering with local research and education organizations to develop and deliver events or courses on the functionalities of AI and their ethical implications, such as public events hosted by local universities across the globe. It can mean citizen-focused training in AI, such as the Elements of AI course that originated in Finland in collaboration with the University of Helsinki. Or it can mean creating platforms for public deliberation about AI, such as the Data Assembly.

Accountability and oversight initiatives at the local level focus on enforcing accountability for the use of AI systems. These initiatives are operationalized either internally, for example via internal review boards, ethics codes, or positions within organizations dedicated to oversight and accountability, or externally, for example via audits or external review boards. Examples include the Seattle Surveillance Advisory Working Group and the New York City Algorithms Management and Policy Officer.

Local regulation pertains to local AI laws and policies. These can regulate government use of AI (see “Procurement”), or they can govern how certain AI applications may be used in particular sectors. For example, New York State has temporarily banned the use of facial recognition technology in K-12 schools, and the City of San Diego switched off street light sensors while drafting regulation on public surveillance technology.

The category of principles covers guidelines that local agencies may develop and use, sometimes in tandem with other agencies or city partners, to ensure the responsible use of AI at the local level. Prominent examples of these non-binding agreements are the Barcelona Declaration for the Proper Development and Usage of Artificial Intelligence in Europe and the Montreal Declaration for a Responsible Development of Artificial Intelligence.

The canvas is a tool to capture all these local initiatives. It has multiple functions: it makes it possible to record innovation and to consider the relevant and dynamically changing elements together, while also serving as a research template. As such, it helps identify points at which fragmentation occurs. The canvas also has a prescriptive function, in that it provides a comprehensive framework for checking all the elements that comprise AI Localism. Information and insight iteratively collected and analyzed via the AI Localism Canvas can facilitate the much-needed pragmatic and critical approach to local policymaking in the age of AI. It accounts for the fast-moving nature of both the technology and the local environments in which policymakers must rapidly innovate. The AI Localism Canvas can help frame reality and inform action.

The AI Localism Canvas can be applied to a specific AI application (e.g., facial recognition software), a specific challenge or problem (e.g., mitigating the spread of the SARS-CoV-2 virus in an urban setting), or a geographic context (e.g., a neighborhood). Those who wish to use the canvas should fill in as many tiles as possible and use it iteratively.

This canvas is a living document, and we anticipate that it will change as AI becomes increasingly important at the city level, and with it the need to understand the governance response. New questions will continue to emerge, such as whether we should have particular governance innovations for particular functions of AI, or for particular areas in which AI is used locally, for example resource triage in emergencies, public safety and law enforcement, public consultation (whereby the democratic process is delegated to an algorithm), or “digital twins” used in simulations for urban planning. The AI Localism Canvas can help surface these important questions by identifying the emerging governance responses and structures for these new technologies. We welcome your views on the issue of AI Localism, and in particular on the possibilities (or limitations) of a canvas-based approach.

[1] Davoudi, Simin, and Ali Madanipour, eds. Reconsidering Localism. New York and London: Routledge, 2015.

[2] Katz, Bruce, and Jeremy Nowak. The New Localism: How Cities Can Thrive in the Age of Populism. Washington, D.C.: Brookings Institution Press, 2018.

[3] Sylvain, Olivier. “Broadband Localism.” 73 Ohio St. L.J. 795 (2012).

[4] Rubinstein, Ira S. “Privacy Localism.” 93 Wash. L. Rev. 1961 (2018).

[5] Allam, Zaheer, and Zaynah A. Dhunny. “On Big Data, Artificial Intelligence and Smart Cities.” Cities 89 (June 2019): 80–91; Kirwan, Christopher, and Fu Zhiyong. Smart Cities and Artificial Intelligence. Amsterdam: Elsevier, 2020.

[6] Verhulst, Stefaan, and Mona Sloane. “Realizing the Potential of AI Localism.” Project Syndicate, February 7, 2020.

[7] Singer, Natasha. “The Hot New Covid Tech Is Wearable and Constantly Tracks You.” The New York Times, November 15, 2020.