Europe’s Missing Large Language Model: Why the EU and UK Must Invest in Their Own Foundation Model 

Senior Lecturer, UEL
Credit: REUTERS/Dado Ruvic/Illustration

Europe and the UK stand at a crossroads in the AI revolution, yet we risk becoming spectators in a race dominated by American and Chinese tech giants whose power now stretches far beyond market share into governance, ethics, and even sovereignty. The world’s most advanced AI systems are increasingly trained, owned, and controlled in jurisdictions where the principles we hold central in Europe (human dignity, fundamental rights, privacy, transparency, fairness, and the rule of law) are often secondary to profit, state control, or geopolitical ambition. We have already seen this tension play out: from Silicon Valley’s AI models vacuuming up personal data without consent, to China’s use of AI for mass surveillance.

Despite these risks, Europe still lacks a Large Language Model (LLM) at scale: no flagship system capable of rivalling GPT-4 or Ernie Bot. Why? Because public investment here has never matched the scale or ambition seen across the Atlantic or in Asia. If we fail to build a robust, homegrown LLM ecosystem, we won’t just be outsourcing innovation; we will be surrendering the future of our economies, our security, and our ability to shape how technology serves society.

A duopoly with global consequences 

Recent research confirms the growing ideological and infrastructural dominance of the US and China in LLM development. This geopolitical shading of LLMs matters, especially when these systems are used to support decision-making in education, law, and public communication. 

Moreover, the corporate landscape around AI is deeply concentrated. US Big Tech firms such as Google, Amazon, Apple, Microsoft, and Meta hold near-monopoly power over digital infrastructure. According to Professor Annabelle Gawer, Google alone controls 90% of the global search market. These firms now wield more economic and technological influence than most nation-states.

In China, the AI landscape is no less strategic. Despite a fragmented regulatory environment, the Chinese government has strongly supported national champions in LLM development to ensure the country does not become dependent on US-led platforms.

This duopoly is not just economic; it shapes the global norms, biases, and limitations of digital knowledge.

The myth of European weakness and missed opportunity

Europe has the intellectual and institutional capacity to lead in AI. From the ELLIS network and Gaia-X to the UK’s Turing Institute, our universities and labs are home to top-tier AI talent. Regulatory frameworks like the EU’s AI Act are seen globally as gold standards for trustworthy innovation. 

In a recent article, Politico journalist Ellen O’Regan argued that public distrust of Big Tech is especially high in Europe, where concerns about data privacy, algorithmic bias, and democratic oversight are pronounced. And yet, paradoxically, our governments and institutions still rely on those very platforms to power their digital infrastructure.

As author and digital economy advocate Lord Tim Clement-Jones argues in his book Living with the Algorithm (2024), unless democratic societies build their own trusted, transparent digital systems, they risk ceding control of public life to opaque, profit-driven technologies. It is time for Europe to stop regulating from the sidelines and start building from the centre. 

Security budgets are rising. Let us use them wisely! 

Europe is not short on funding; it is short on strategic prioritisation. The EU’s Horizon Europe programme, with a proposed budget of €175 billion from 2028–2034, is designed to fund precisely this type of “moonshot” innovation. The UK, now re-associated with Horizon and ramping up its AI and defence budgets, has pledged £1 billion to expand its supercomputing capabilities. 

At the same time, most NATO countries, including EU members, are increasing security spending in response to mounting global instability. A strategic allocation from these expanding defence budgets could fund the development of a secure, sovereign LLM, one that strengthens Europe’s digital resilience and reduces dependence on foreign powers. This would not be a luxury. It would be a new form of critical infrastructure. 

What a European LLM could deliver

A European LLM, developed through a strategic UK–EU partnership, would be far more than a technological artefact; it would represent a new kind of digital infrastructure, aligned with democratic values and public accountability.

Such a model should be: 

  • Multilingual by design, capable of understanding and producing content across the EU’s 24 official languages, ensuring inclusivity across cultures and member states. 
  • Transparent and auditable, with architecture and training data open to scrutiny by researchers, SMEs, regulators, and civil society, avoiding the “black box” problem of commercial models (a short sketch after this list shows what this means in practice). 
  • Compliant with the EU AI Act, the GDPR, and human rights standards, embedding privacy, fairness, and accountability at every level. 
  • Purpose-built for public service delivery, supporting applications in law, healthcare, education, local government, and civic engagement. 
  • A platform for sustainable AI, optimised for energy efficiency, developed using low-carbon infrastructure and hosted in environmentally responsible data centres. 
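
To make “transparent and auditable” concrete, here is a minimal sketch of what open scrutiny looks like in practice, assuming Python and the Hugging Face transformers library. The model ID “bigscience/bloom-560m” is a real, openly released multilingual model from the French-hosted BigScience project, used here purely as a stand-in for the European LLM proposed in this article.

    # A minimal sketch of auditability. Assumes the "transformers" library
    # is installed; "bigscience/bloom-560m" is an existing open model
    # standing in for the hypothetical European LLM.
    from transformers import AutoConfig, AutoTokenizer

    MODEL_ID = "bigscience/bloom-560m"

    # Openness means the full architecture specification is public and
    # downloadable by any researcher, regulator, or SME:
    config = AutoConfig.from_pretrained(MODEL_ID)
    for key in ("vocab_size", "hidden_size", "n_layer", "n_head"):
        print(key, "=", getattr(config, key, "not published"))

    # Multilingual coverage can likewise be probed directly at the
    # tokenizer level, across official EU languages:
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    for text in ("the rule of law", "l'État de droit", "Rechtsstaatlichkeit"):
        print(text, "->", tokenizer.tokenize(text))

None of these checks are possible with a closed commercial model; that difference, not any single benchmark score, is what avoiding the “black box” means.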

The UK’s AI Security Institute (AISI, formerly the AI Safety Institute), already positioned as a global leader in frontier model evaluation and red-teaming, could play a vital role in this endeavour. As a neutral, public body with expertise in auditing and testing, AISI can ensure that a European LLM meets the highest standards of safety, robustness, and alignment with societal values. Its frameworks for transparency, systemic risk assessment, and model evaluation could form the backbone of a trusted, sovereign LLM ecosystem.

This European model would embody our shared values: equity, democracy, multilingualism, and sustainability. It would offer a viable, open alternative to the proprietary, centralised platforms developed in Silicon Valley and Beijing, putting people, not profit, at the centre of AI development. 

Digital sovereignty needs action, not slogans 

As Clement-Jones warns, we are already “living with the algorithm”, but too often that algorithm is imported, unaccountable, and opaque. If Europe wants a say in how AI evolves, from the data it learns from to the values it encodes, it must stop outsourcing its future.

A publicly governed LLM, co-developed by the EU and UK, would safeguard democratic values, boost local innovation, and enhance security in an increasingly polarised digital world. 

This article calls for: 

  • A joint task force between the European Commission and the UK government to scope and fund an LLM initiative. 
  • Strategic use of Horizon Europe, national innovation budgets, and defence funding. 
  • Appointment of the AI Security Institute as a coordinating body for safety, evaluation, and cross-border cooperation. 

Europe has the capacity. It has the funding. It has the need. Now it must act before the window of opportunity closes.