Key research areas
- The affective and political implications of AI companions
- AI-driven harms and responsible design
- AI in finance and corporate compliance
- The governance of speech on and with GenAI
- How AI can both drive and ameliorate the informational crisis
Current research projects
Professor Nicholas Lord is currently involved in several projects that analyse the role of AI in the organisation of varied frauds as well as the role of AI as part of regulatory, control, and compliance responses to fraud.
This project explores how diverse cultural contexts shape emotional relationships with AI in domains like therapy and companionship.
Using ethnography and theories from STS, philosophy, and AI ethics, it examines trust, care, and personhood, reconceptualising emotionally responsive AI as relational actors embedded in local moral worlds (Dr Jennifer Cearns).
Grounded in theoretical approaches to responsibly designed Feminist AI and work on cultural discourses of hegemonic masculinity, this research examines how feminised AI companion apps contribute to male supremacist harm, while exploring opportunities to lessen these harms by fostering healthy manifestations of masculinity and communication in AI-human relationships (Dr Allysa Czerwinsky).
This research project seeks to map technical and regulatory issues impacting the implementation of AI-based tools across the compliance functions in the financial services industry.
Moving beyond fraud, we aim to understand the interactions between industry, civil society and regulators navigating AI implementation across CDD/KYC/AML, sanctions, and financial crime compliance (Dr Borja Alvaro Alvarez Martinez).
This project examines online marketplaces for digital objects, focusing on the blurred boundaries between legal and illegal activity, the drivers of these markets, and the role AI plays in shaping human decision-making in these online spaces (Dr Diāna Bērziņa).
This project explores the mechanisms of information dissemination and examines individual decision-making and collective behaviour in digital environments, thereby revealing the co-evolution of information, belief, and behaviour (Dr Tao Wen).
Digital services increasingly rely on algorithmic systems to define and control both objectionable and desirable speech.
This project investigates how these technologies embody specific norms and logics of representation and legitimation, and the extent to which they reinforce or challenge democratic principles (Klara Matusewicz).
The research explores the feasibility of applying AI to covert testing as a tool for assessing corporate integrity, examining organisational responses to better understand how integrity is demonstrated or compromised in practice (Shaikh Waheed Mahmood).
This project examines young Chinese “Dreaming Girls” (梦女) who form romantic relationships with AI-powered virtual lovers.
Based on a combination of digital ethnography and offline observation, it explores how platforms like LoveyDovey and Character AI transform solitary fantasy into interactive intimacy, revealing the paradox of experiencing genuine emotions through acknowledged artificial relationships (Yuqin Zhang).
Priority areas of research interest
- AI, crime, and accountability
- AI and intimacy
- AI, power, and politics
- AI as a research method
We are interested in collaborating on initiatives connected to these areas and other projects seeking to understand the societal impacts of AI development and implementation.
Meet the team
Cluster Leads: Dr João C. Magalhães and Dr Chloe Jeffries
