Fraunhofer Study Recommends Federal Administration Prioritise In-House AI Development

Sun 7th Dec, 2025

A recent analysis by the Fraunhofer Institute recommends that the German federal administration invest in its own generative artificial intelligence (AI) systems, particularly large language models (LLMs). The study, funded by the Federal Ministry of the Interior, examines the current landscape of LLM adoption within public sector agencies and assesses their capacity for digital sovereignty.

According to the report, modern LLMs, such as those behind popular tools like ChatGPT, require extensive datasets, substantial computing resources, and significant energy; these assets are largely controlled by a small group of major, mostly non-European, technology companies. In light of this, the study underlines the strategic necessity for the German state to maintain autonomy, transparency, and control over these foundational technologies.

Researchers from the Public IT Competence Center (Öfit) investigated the extent to which digital sovereignty is preserved in federal LLM projects. Digital sovereignty is defined in the study as the capability of Germany, alongside European partners, to independently develop, operate, and manage essential digital infrastructures, data, and computing systems in accordance with local regulations and standards.

The evaluation focused on three strategic objectives derived from national digital policy: the freedom to switch between alternative solutions and interchangeable system components, the development of internal technical and organizational expertise, and the ability to influence suppliers through market leverage, especially during procurement processes.

The findings indicate that, in contrast to prior vulnerabilities observed in office software and database applications, federal LLM projects exhibit no critical dependency on any single major provider. The federal administration has successfully developed proprietary LLM-based applications for numerous routine tasks, reducing reliance on products from large, predominantly non-European corporations and lowering the risk of new third-party dependencies emerging.

Current risks to governmental operational capability are considered minimal, as these systems primarily support administrative staff rather than being integral to mission-critical public services. The study notes that most LLMs are operated on government-owned hardware, allowing for relatively straightforward replacement or updates when necessary, thereby supporting continued sovereignty over essential digital assets.

Despite these positive developments, the study identifies a strategic gap. Most LLMs used by the federal administration are based on non-European open-source models, which are deployed within internal government infrastructure. While this supports the flexibility to switch technologies and manage systems independently, it still leaves the administration reliant on external innovation cycles. The report recommends exploring the development of a European open-source LLM, which would be made publicly available and tailored to European values and regulatory frameworks, aiming for lasting independence from dominant market players.

The study also outlines several challenges that impede the growth and broader adoption of LLM projects within public agencies. Legal uncertainties and complex AI-related regulations are seen as barriers, often resulting in delays and necessitating advanced legal expertise. These factors have limited the ability to release new developments as open source. Additionally, project leads expressed a need for a dedicated cloud infrastructure for AI, staffed with trained personnel, to streamline deployment and management processes.

To further strengthen digital sovereignty, the report makes several recommendations. These include expanding shared LLM infrastructure across governmental departments, enhancing open-source initiatives, and introducing standardized legal guidelines, such as mandatory sovereignty assessments for critical AI projects. The study also suggests consolidating procurement processes across the federal structure to enforce sovereignty criteria and strengthen bargaining positions with large providers.

The findings suggest that the federal administration is on a promising path toward establishing a robust foundation for independent AI solutions. Continued commitment to in-house development, open-source strategies, and the creation of a European LLM are seen as pivotal steps in securing digital autonomy for Germany and its public sector.
