Europe’s Cloud Debate Is Looking the Wrong Way: It’s Not Concentration – It’s Lock-In
By: Matthias Bauer and Dyuti Pandya
Research Areas: Data, AI, and Emerging Technologies; EU Single Market, Institutions, and Governance

Some policymakers argue that Europe should “correct” cloud market concentration. The intuition is straightforward: more providers should mean less systemic risk. But in cloud computing, improving resilience and security is not a question of how many suppliers exist, but of whether customers can switch and diversify when circumstances demand it.
Europe is already holding a serious conversation about cloud concentration, operational resilience and cyber security. Yet the principal risk lies elsewhere. Market share, on its own, is not the vulnerability. The problem arises when market power is used to entrench technical and contractual lock-in – through restrictive licensing, egress fees, switching penalties or limited transparency over alternatives. These frictions make it harder for customers to migrate workloads and integrate third-party software and security tools.
A concentrated market can therefore remain resilient if it is contestable. Policy would be better directed at removing the sources of lock-in that impede switching, rather than fixating on concentration ratios. In the end, cloud resilience depends on customer choice, underpinned by contractual openness, technical standardisation, open APIs and flexible licensing – not on counting market shares.
Concentration Is Not the Risk – Constrained Exit Is
Revisiting how resilience and security are understood is useful because these terms mean different things to different communities, and that matters. For cyber security agencies and standards bodies (such as BSI, NCSC, ISO and ENISA), resilience is not the absence of failure but the ability to continue operating through and after disruptions. For industry groups (such as CISPE, CIGREF, BusinessEurope, EUCLIDA and others), resilience is framed more practically: the freedom to redistribute workloads, maintain exit options and avoid dependency traps. And for competition authorities, resilience is a condition of market quality: switching must be possible, and no firm’s contractual terms should turn continuity into theory rather than practice.
The underlying observation is that resilience ultimately depends on users’ freedom to adapt. The central policy question is therefore not how many cloud providers exist, but whether organisations can move workloads and run third-party applications quickly and at low cost when their risk profile changes.
Real Resilience Comes from Choice, Not Isolation
The traditional telecom logic – that more competition automatically delivers greater resilience – translates poorly to the cloud. Unlike networks, cloud services differ markedly, not only in price but also in performance, functionality, security capabilities, data residency options, compliance regimes, and AI tooling. What undermines customer choice in the cloud are structural obstacles that can make resilient and secure multi-cloud configurations economically irrational or technically unworkable.
These barriers include:
- Proprietary APIs that make workloads non-portable
- Restrictive licensing rules that penalise dual-running during migration
- Identity and security tools tied to one provider’s stack
- Encryption models and cloud configurations that deny independent access to security-relevant data streams
- Spend-commitment discounts that make redundancy financially unviable
These are not questions of market structure. They are questions of system design and of the conditions under which cloud services are procured.
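To make the portability point concrete, consider object storage – one of the few cloud services with a de facto standard interface (the S3 API), which several providers implement. The following minimal sketch, with placeholder endpoints and credentials, shows how a standard interface confines provider differences to configuration; where only proprietary APIs exist, every equivalent call would have to be rewritten in order to migrate.

```python
# Minimal sketch of API-level portability: identical client code targets
# any S3-compatible object store simply by swapping the endpoint URL.
# Endpoints and credentials below are illustrative placeholders.
import boto3

def make_storage_client(endpoint_url: str, access_key: str, secret_key: str):
    """Return an object-storage client bound to an S3-compatible endpoint."""
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )

# Primary and fallback providers differ only in configuration, not in code.
primary = make_storage_client("https://s3.primary-cloud.example", "KEY", "SECRET")
fallback = make_storage_client("https://s3.secondary-cloud.example", "KEY", "SECRET")

# The write path is identical against either provider.
primary.put_object(Bucket="backups", Key="report.pdf", Body=b"...")
```

The same logic runs in reverse: where a workload is built on interfaces that only one provider offers, the cost of this one-line configuration change becomes a full rewrite.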
Outages Show Why Switching Matters
Resilience is not only about preventing outages; it is also about recovering from them. The scale of recent disruptions underlines this reality. One recent weekly internet outage report, covering December 8–14, recorded approximately 364 global network outage events affecting ISPs, cloud service provider networks, collaboration platforms, and edge services such as the Domain Name System (DNS), content delivery networks, and security-as-a-service platforms – a 78 per cent increase over the 205 outage events recorded the previous week. Public cloud outages followed the same trajectory, climbing from 3 to 41 in the week of December 1 and then to 109 in the week of December 8.
Many recent cloud outages have stemmed from routine operational failures, such as DNS or configuration errors, rather than extraordinary events. Layered dependencies can further amplify the impact; failures in underlying storage or infrastructure services, sometimes backed by third-party cloud providers, can spread across multiple products and services relied upon for configuration, authentication, and asset delivery. Several incidents highlight the systemic risks arising from DNS and traffic-routing dependencies in large-scale cloud platforms. In one case, an AWS DNS-related failure resulted in widespread service disruptions across at least 14 services, including AWS Global Accelerator, AWS VPC Endpoint (PrivateLink), and AWS Security Token Service, along with numerous dependent customer applications.
In another incident, a misconfiguration in Microsoft’s Azure Front Door led to a global outage that affected Azure infrastructure, Microsoft 365, Xbox Live, and a broad range of customer-facing services. Similarly, IBM Cloud experienced a major outage that disrupted 41 services, including IBM Cloud DNS Services, Watson AI, AI Assistant, Global Search Service, Hyper Protect Crypto Services, database services, and the Security and Compliance Center.
Collectively, these incidents underscore persistent challenges in cloud resilience engineering, including insufficient isolation between critical control-plane components, limitations in failover and recovery mechanisms, and the tendency for configuration or DNS-related faults to cascade across tightly coupled services.
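The cascade mechanism itself is easy to state precisely. In the toy model below – an illustrative graph, not any provider’s real topology – every service that depends, directly or transitively, on a failed component falls inside its blast radius, which is why a single DNS or control-plane fault can surface as dozens of apparently unrelated outages.

```python
# Toy model of fault propagation through a dependency chain. The graph is
# illustrative, not real provider topology: a fault in one shared service
# reaches everything that depends on it, directly or transitively.
from collections import deque

# service -> services that depend on it
DEPENDENTS = {
    "dns": ["load-balancer", "auth"],
    "auth": ["storefront", "admin-console"],
    "load-balancer": ["storefront", "api-gateway"],
    "api-gateway": ["mobile-app"],
}

def blast_radius(failed: str) -> set[str]:
    """Return every service transitively dependent on the failed one."""
    impacted, queue = set(), deque([failed])
    while queue:
        for dependent in DEPENDENTS.get(queue.popleft(), []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

# In this toy graph, a single DNS fault takes six downstream services with it.
print(sorted(blast_radius("dns")))
```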
This architectural fragility is compounded by a commercial one – the limited ability to move. When organisations are deeply embedded in tightly coupled, provider-specific services, highly interconnected architectures act as failure amplification points, enabling localised faults to propagate globally through dependency chains and increasing exposure to systemic failures and security incidents. For instance, in 2024, roughly 80 per cent of companies reported an increase in the frequency of cloud-related attacks, with around a third of incidents linked to cloud data breaches. The operational lesson is straightforward: resilience requires choice.
Many organisations depend on a very small number of infrastructure providers, with workloads often tied to a single region, licence, or proprietary technology stack. When an outage occurs, capacity cannot simply be reallocated or services spun up elsewhere. Limited portability means that what begins as a localised disruption can escalate into a broader systemic problem. Real resilience rests on the availability of credible and independent fallback options – not on the assumption that the primary system will always hold.
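What a credible fallback looks like in practice can be sketched in a few lines. The illustration below – hypothetical hostnames, not any provider’s actual mechanism – probes a primary deployment and fails over to an independent provider. The logic is only meaningful if the workload can genuinely run in both places, which is precisely what lock-in forecloses.

```python
# Minimal failover sketch: probe deployments in priority order and route
# traffic to the first that answers its health check. Hostnames are
# hypothetical placeholders; real failover adds DNS, TLS and state handling.
import urllib.request
import urllib.error

DEPLOYMENTS = [
    "https://app.primary-cloud.example/healthz",    # primary provider
    "https://app.secondary-cloud.example/healthz",  # independent fallback
]

def first_healthy(endpoints: list[str], timeout: float = 2.0) -> str | None:
    """Return the first endpoint that answers its health check, else None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return url
        except (urllib.error.URLError, TimeoutError):
            continue  # treat any error or timeout as unhealthy, try the next
    return None

active = first_healthy(DEPLOYMENTS)
print(f"Routing traffic to: {active or 'no healthy deployment'}")
```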
Cyber Security Shows Why Freedom of Choice Matters
Operational resilience is only one side of the equation. Cyber security works the same way. If organisations cannot deploy the security tools they need, when and where they need them, vulnerabilities multiply rather than shrink.
Several common commercial practices directly restrict cyber security choices:
- Software licensing rules that explicitly forbid running vendor software on competing clouds, which in practice blocks the deployment of integrated third-party security services in a multi-cloud environment.
- Degraded or restricted access to critical security data streams, where cloud providers limit API fidelity, charge punitive egress fees, or withhold logs, making third-party monitoring and threat detection economically or technically unviable.
- Proprietary APIs and non-standard interfaces that hinder or prevent deep integration with best-of-breed security platforms, forcing reliance on the provider’s own, often less specialised, tooling.
- Financial bundling and credit schemes that favour the provider’s own security solutions, while making independent, potentially more appropriate tools financially non-competitive, creating a “free but inferior” default that undermines security posture.
These are not theoretical concerns. They materially shape an organisation’s defensive posture. If procurement decisions push customers into using only the cloud provider’s native tools, the organisation loses its ability to deploy specialist defences, correlate logs across environments, or automate forensic analysis.
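As a minimal sketch of what that cross-environment correlation requires – all endpoints and payload shapes below are hypothetical, since real export APIs differ by provider – an independent security platform must be able to pull logs out of each cloud and analyse them in one place. Punitive egress pricing or withheld logs break exactly this loop.

```python
# Sketch of cross-cloud log correlation by an independent security tool.
# All endpoints and payload shapes are hypothetical placeholders; the
# pattern fails wherever log access is withheld or priced punitively.
import json
import urllib.request

CLOUD_LOG_SOURCES = [
    "https://logs.cloud-a.example/v1/audit",
    "https://logs.cloud-b.example/v1/audit",
]
SIEM_ENDPOINT = "https://siem.security-vendor.example/ingest"

def fetch_logs(url: str) -> list:
    """Pull a batch of audit events from one provider's export endpoint."""
    with urllib.request.urlopen(url, timeout=5.0) as resp:
        return json.load(resp)

def forward(events: list, source: str) -> None:
    """Send events, tagged with their origin, to the independent SIEM."""
    payload = json.dumps({"source": source, "events": events}).encode()
    request = urllib.request.Request(
        SIEM_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(request, timeout=5.0)

# Correlation is only possible if every environment's logs reach one place.
for source in CLOUD_LOG_SOURCES:
    forward(fetch_logs(source), source=source)
```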
The procurement lesson is equally clear. In both the public and private sectors, buyers should safeguard choice and contestability not by maximising the number of suppliers, but by ensuring that switching remains feasible in practice. At a minimum, procurement should preserve the ability to:
- Integrate independent cyber security tools freely
- Access full data streams and logs to allow real-time threat detection
- Run third-party solutions without punitive pricing or licensing restrictions
- Retain control over architecture, identity, and monitoring choices
Achieving these objectives is not straightforward. Hyperscale providers operate security to a very high standard, drawing on global threat intelligence, advanced telemetry and state-of-the-art defence mechanisms. There are also sound reasons why the deepest layers of their systems are not exposed, since excessive openness could itself create new attack vectors.
Two further practical issues nevertheless remain. First, many recent cloud outages have occurred at large providers – not because of deficiencies in security or engineering, but because their platforms sit at the centre of dense dependency chains, so failures can have wide-reaching effects. Some services, particularly platform-managed and serverless offerings, provide only limited support for external tools and deliberately restrict network visibility. Second, restrictive licensing terms and proprietary interfaces can impede the movement of data and applications. This creates dependency not because alternative providers lack capability, but because users are unable to deploy or substitute those alternatives when circumstances demand it.
Ultimately, cyber resilience depends on maintaining the practical ability to deploy appropriate security tools as threats evolve. When those options are constrained by licensing or proprietary architectures, security risk becomes concentrated in a single ecosystem. Long-term resilience depends on open standards, interoperable monitoring and avoiding deep entanglement with non-portable managed services.
A Pragmatic Path Forward for European Resilience
Europe’s cloud and ICT resilience challenge is not market concentration as such, but concentration created and sustained by lock-in: conditions that prevent exit, block switching, and restrict access to independent security tooling. The policy priority should therefore not be to manage market shares directly, but to ensure that market shares remain contestable through diversity and choice. That requires clear guidance, proportionate competition enforcement against restrictive contractual terms, and explicit support from cyber security agencies for the superior security posture of multi-cloud architectures.
Where Does This Leave Policy?
Prescriptive, command-and-control regulation would be counter-productive. The smarter course is to encourage structural choice, not to mandate supplier counts or force national (or European) isolation. Policies that pursue strategic autonomy through “EU self-preferencing”, such as favouring European cloud providers in procurement or subsidising regional champions, risk replacing one set of concentrated dependencies with another, often without addressing the underlying problem of lock-in. The objective should instead be an open and competitive market in which customers can choose providers based on performance, security, and value, rather than geographic origin.
A practical approach centres on five priorities:
- Ensure that contractual and licensing terms do not punish portability or dual-running – Competition authorities should challenge and prohibit terms that penalise running workloads in parallel during migration, impose punitive egress fees, or otherwise create artificial switching costs.
- Recognise and support multi-cloud as a superior security posture – Cyber security agencies should endorse architectures that allow the integration of independent, best-of-breed tools across environments, as this diversity enhances systemic resilience.
- Preserve access to global innovation rather than creating isolated regional islands – Cyber threats are global; defensive capabilities and threat intelligence must be allowed to scale accordingly.
- Support hybrid architectures combining commercial and open-source tooling – This enhances operational freedom without prescribing which provider to use.
- Prioritise open interfaces, standard APIs and transparent specifications in cloud and software procurement – They keep options open and allow workloads to be reallocated when conditions change.
The general direction of travel should be guidance and guardrails, not rigid prescriptions. Regulators and cyber security agencies can reinforce key resilience conditions without dictating architectures or operational design choices.
This distinction is particularly important when considering horizontal competition instruments such as the Digital Markets Act (DMA). The DMA is designed to address gatekeeper-driven lock-in through prescriptive, ex-ante obligations applied to a limited number of designated firms. It is not, however, a sector-specific instrument aimed at resilience or security, nor is it calibrated to the operational realities of highly integrated cloud infrastructures.
If applied without sufficient restraint, such an approach risks constraining the architectural and business-model flexibility that underpins secure and reliable cloud operations. It may also generate regulatory asymmetries, subjecting designated providers to detailed technical or commercial obligations while other large market actors – whose practices may equally impede portability or switching – remain outside the framework.
Supporting cloud resilience, therefore, requires proportionate, targeted interventions and sector-wide principles that promote openness, portability, and effective risk management across the market, without compromising the security responsibilities and operational integrity of integrated service providers. In this respect, the DMA’s ongoing investigations into cloud services – notably under Articles 17 and 19 – offer a more appropriate vehicle for enforcement, allowing competition authorities to focus on removing commercial and contractual barriers to switching rather than mandating technical standards or architectural choices.