Analysis: Collective Societal Risks: A Blind Spot in AI Governance

AI does not simply operate within markets – it fundamentally reshapes how markets function.

Heidi Lund, Trade Policy Adviser

AI reshapes markets and behaviour at scale. This analysis shows how collective societal risks fall outside EU rules, creating a regulatory blind spot with implications for competition and international trade.

Heidi Lund, what is the main contribution of this analysis to the current debate on AI governance in the EU?

While EU frameworks for safe AI are among the most advanced globally, they focus on technical risks, illegal content, and individual harm. This analysis shows that an important type of risk falls outside this model: cumulative collective societal effects from lawful AI systems.

It develops a typology of three interrelated dimensions in which AI-driven collective societal risks manifest. The analysis translates abstract societal concerns arising from AI into regulatory terms and shows how they affect markets, competition, and international trade. It thus shifts the debate from ‘Is AI safe?’ to ‘How does AI reshape society at scale?’, including its effects on markets and international trade.

The analysis highlights a “regulatory blind spot”. What does this mean in practice?

AI systems can shape behaviour and markets without triggering regulatory intervention. Existing frameworks act mainly when something is illegal or causes clear individual harm. However, many AI-driven effects are lawful, arising from design choices such as recommendation, ranking, and nudging. Over time, through scale and repeated exposure, they can significantly influence behaviour and markets.

Why are collective societal risks difficult to capture within existing EU regulatory frameworks?

These risks challenge the core logic of EU regulation, which focuses on individual harm, identifiable incidents, and clear causality. AI-driven effects, by contrast, operate at the population level: they develop gradually and are cumulative and diffuse. This makes it difficult to link them to a specific actor or legal breach. Other factors also limit intervention, including difficulties in defining behavioural harm, fragmented responsibilities, and broader legal and policy constraints.

How can AI-driven behavioural influence affect market competition and international trade?

AI is shifting competition away from price and quality towards control over attention, information, and behaviour. Through personalisation, ranking, and nudging, firms can shape demand, reinforce information asymmetries, and raise barriers to entry. Platforms are increasingly acting as gatekeepers of visibility.

In international trade, algorithmic systems can function as de facto non-tariff barriers. As a result, access to markets increasingly depends on algorithmic exposure. The global dominance of AI actors outside the EU may also generate competitive asymmetries and structural dependencies. The key insight is that AI does not simply operate within markets – it fundamentally reshapes how markets function.

What steps could policymakers take to better address collective societal risks within existing frameworks?

The analysis does not propose new legislation. Instead, it calls for more effective use of existing frameworks, such as the AI Act and the Digital Services Act. A first step is to recognise collective societal risks within EU AI governance. Stronger enforcement is essential. This includes better access to data, improved tools for risk assessment, and more effective supervision. Because AI operates across borders, these issues should also be addressed through international regulatory cooperation and trade policy discussions within the WTO.