The Shareholder State
In February 2022, as Russian missiles began landing on Kyiv, a series of decisions were made in boardrooms in Seattle, Redmond, and Cupertino that would have been unrecognizable as corporate behavior a decade earlier. Microsoft and Amazon migrated Ukrainian government and military data to cloud infrastructure outside Russia’s physical reach. Apple and Google restricted payment systems and mapping tools for Russian users. Intel and AMD cut off chip sales to Russia entirely — not because they were legally required to, but because they decided to. A technology sector that had spent years insisting it was a neutral platform made a geopolitical choice.
Western governments applauded. The narrative was clean: democratic tech aligning with democratic values. The harder question — what exactly has been normalized here, and for whose benefit — went largely unasked.
We should start asking it now.
How Companies Chose
The academic framing for what happened is “digital corporate autonomy” — the capacity of major technology firms to take actions that fall within the traditional domain of statecraft, based on their own judgment rather than legal mandate or democratic oversight. In the Ukraine context, this operated across two distinct tracks.
The first was what researchers have started calling voluntary or corporate sanctioning. Apple, Google, Intel, AMD, and dozens of others implemented restrictions on Russian access to their products and services that went well beyond the legal requirements of government sanctions. Some of this was anticipatory — getting ahead of regulatory exposure. But much of it was genuinely autonomous: companies making political judgments about which side of a war they wanted to be on and acting accordingly.
The second track was more direct. Microsoft and Amazon didn’t just restrict services to Russia — they actively deployed their infrastructure in support of Ukraine. The migration of Ukrainian government and military data to cloud servers was operationally significant. Ukraine’s digital administrative continuity through the early months of the war, when Russian strikes were targeting exactly the kind of infrastructure that would normally house that data, was partly a function of decisions made by American tech executives in informal conversations with Ukrainian government officials. Not through diplomatic channels. Not through any treaty structure. Through meetings that happened because the executives in question decided to show up.
The result was what one analyst has described as a “digital alliance” between Western tech companies and the Ukrainian state — functionally equivalent to a security partnership, arrived at through a process that involved no public deliberation and no formal accountability structure.
Legitimate Targets
There’s a legal dimension to this that hasn’t been fully reckoned with.
Under international humanitarian law, the principle of distinction requires parties to a conflict to differentiate between combatants and civilians, and between military objectives and civilian objects. Infrastructure that makes a direct contribution to military operations is a legitimate target. Infrastructure that provides general civilian services is not.
When Microsoft stores Ukrainian military data, when Amazon Web Services supports Ukrainian defense networks, when Starlink terminals enable Ukrainian drone operations — these companies have moved, legally speaking, from the civilian column toward the military column. Their commercial infrastructure has become, at least arguably, a military objective.
Russia has acknowledged this logic, if not always in formal terms. Russian officials threatened Western commercial satellites early in the war. Meta's assets in Russia were frozen. These aren't random escalations — they reflect a legal and strategic calculus that goes like this: if your satellites are helping the enemy kill our soldiers, they are not neutral commercial objects, and we will treat them accordingly.
The companies involved are, on some level, aware of this. Starlink’s terms of service explicitly prohibited military use at the time Ukrainian forces were running targeting operations through the network. The prohibition was not seriously enforced, and Starlink became central to Ukrainian battlefield coordination anyway. The gap between the company’s formal position and its operational reality reflects exactly the kind of ambiguity that makes corporate actors genuinely dangerous in conflict environments — dangerous to the conflict’s parties, to the companies themselves, and to whatever norms are supposed to govern the conduct of war.
Accountability & Technology in War
In September 2022, during a Ukrainian offensive against Russian forces in Crimea, Starlink connectivity was suspended in the operational area. Elon Musk had decided, on his own, that enabling Ukrainian military operations in Crimea would constitute an escalation he wasn’t prepared to facilitate. He consulted no government. He sought no legal guidance. He made a personal judgment about the appropriate limits of his company’s involvement in a war and acted on it.
Ukraine’s military operation was disrupted. Personnel were left without communications at an operationally sensitive moment. A private individual had exercised a unilateral veto over the military strategy of a sovereign nation.
This is the other edge of digital corporate autonomy, and it cuts in a direction that Western governments have been less eager to examine. The same structural capacity that allowed Microsoft to protect Ukrainian data and Apple to sanction Russia also allows a single executive to restrict a country’s military operations based on his personal risk calculus. There is no appeal mechanism. There is no democratic override. There is no treaty that governs what happens when the shareholder interests of a satellite operator diverge from the military requirements of an allied state.
Musk’s stated rationale — concern about nuclear escalation — may have been sincere. It may also have reflected other interests: his public sympathy for Russian positions on territorial settlement, his stated desire to maintain Starlink as a platform acceptable to all parties, his personal politics. The point is that no one outside SpaceX knows, and no accountability structure exists to compel transparency. A decision with direct battlefield consequences for a U.S.-supported ally was made by a private actor with no obligation to explain himself to anyone in the U.S. government, the Ukrainian government, or the public.
The story didn’t end in 2022. Through 2023 and into 2024, Russian forces began acquiring Starlink terminals through gray-market channels — black-market purchases, battlefield capture, third-party suppliers in countries outside the sanctions perimeter. By early 2024, Ukrainian military intelligence had confirmed thousands of Russian terminals active along the contact line. Russian units were using Starlink to coordinate assaults, improve artillery accuracy, and — critically — to outfit strike drones with satellite connectivity that allowed them to fly at low altitudes, resist electronic warfare jamming, and operate in real time at ranges approaching 500 kilometers. The Molniya-2 attack drone, Shahed variants, and eventually the BM-35 deep-strike platform were all documented with Starlink Mini terminals strapped to their airframes. When the Pentagon announced in June 2024 that it had disabled several hundred unauthorized Russian terminals, the problem had already grown well beyond that scale. Through all of this, Musk publicly dismissed the evidence. Reports of Russian Starlink use were “categorically false,” he said; SpaceX “does not do business of any kind with the Russian Government or its military,” so there were no Russian terminals to track.
The crackdown finally came in February 2026, and the immediate trigger was an event that could have been scripted to illustrate the problem precisely. In late January 2026, a Russian BM-35 drone — equipped with a Starlink Mini terminal, allowing real-time operator control from inside Russia — slipped through Ukrainian air defenses and glided into the Kyiv government district, passing close enough to the Cabinet of Ministers building that officials on the seventh floor could watch it go by. Ukraine’s digital transformation minister, Mykhailo Fedorov, then met with Musk directly, presenting evidence of Russian strike drones using Starlink to penetrate deep into Ukraine. Musk acted. On February 3, SpaceX implemented a whitelist system: only terminals verified through Ukraine’s Diia digital government platform or the military’s DELTA battlefield management system would remain active. Everything else went dark.
The battlefield results were documented almost immediately. Russian communications sputtered along multiple sectors of the front. In one incident on the Zaporizhzhia front, twelve Russian soldiers were killed by friendly fire after a Starlink terminal failure disrupted coordination. Ukrainian officials in the south reported a drop in kamikaze drone strikes within days. “After the disconnection, the enemy experienced certain problems with communication and coordinating infantry assaults,” said a spokesperson for Ukraine’s Southern Defense Forces. In the three weeks following the whitelist going live, Ukraine seized over 300 square kilometers of territory. In Russia, the fallout sparked a public scandal — critics calling the army’s dependency on an American commercial satellite system both a national humiliation and a strategic blunder.
The February 2026 crackdown is being cited as vindication of corporate accountability working as it should. It is not. Consider what the sequence actually demonstrates. Russia used Starlink against Ukraine for over two years before comprehensive action was taken. The Pentagon’s June 2024 terminal disablement was partial and insufficient. Musk denied the problem publicly throughout. The intervention that finally ended Russian Starlink access came not through regulation, not through treaty obligation, not through any legal mechanism — but because a drone nearly hit a government building in Kyiv and prompted an informal meeting between a cabinet minister and a tech executive. The mechanism that produced the good outcome in February 2026 is structurally identical to the mechanism that produced the harmful outcome in September 2022: one man’s judgment, made in private, with no formal accountability to any government or public.
That the judgment happened to be right this time does not change the fact that it was one man’s to make — and that it will be again next time.
Precedents Set
Western governments have been quietly grateful that Big Tech’s autonomous geopolitical interventions have, in Ukraine, aligned with Western interests. The gratitude is understandable. It’s also strategically myopic.
The capacity for digital corporate autonomy is not directional. It doesn’t come preset to support liberal democracies. It’s a structural feature of the relationship between infrastructure-controlling private actors and the states that depend on that infrastructure. In Ukraine, that structure happened to work in the West’s favor. The same structure, in a different conflict, with different executives making different calculations, will not necessarily do so.
Consider the range of scenarios. A major tech company with significant revenue exposure in China faces pressure during a Taiwan contingency. Executives decide that maintaining market access matters more than continuing services that support Taiwanese defense capabilities. They don’t need to do anything dramatic — they simply don’t act, or they quietly degrade service quality, or they invoke terms-of-service provisions to restrict military use. There is no legal mechanism to compel them otherwise. There is no treaty that covers this. There is only whatever calculation the board makes about where its interests lie.
Or consider a conflict in which Western governments are not the sympathetic party — a scenario where U.S. policy is contested, where the adversary has cultivated relationships with key technology executives, or where the financial calculus runs against sustained support. The same companies that acted decisively in Ukraine might act very differently in contexts where acting decisively is commercially costly.
The companies themselves are aware of some version of this tension. Their responses have largely taken the form of terms-of-service language that prohibits military use and provides legal cover for whatever decision gets made after the fact. This is not a governance framework. It’s liability management.
In Practice
For governments, the lesson is that dependency on commercial digital infrastructure for critical military and state functions creates a structural vulnerability that alliance relationships and arms transfers don’t resolve. Ukraine’s digital resilience was real and meaningful — and it was contingent on decisions made by private actors. The appropriate policy response involves building redundancy, developing interoperability with government-controlled infrastructure, and establishing legal frameworks that clarify — in advance, not during a crisis — what obligations commercial digital infrastructure providers have when their services become militarily significant.
NATO has started working through some of this. The alliance’s public commitments on cyber defense and the inclusion of cyber attacks under Article 5 are steps in the right direction. They don’t address the specific problem of private actors making autonomous choices about whether and how to support allied military operations.
For the companies themselves, the trajectory is toward increasing regulatory pressure, particularly in Europe. The Digital Services Act and emerging EU frameworks around critical infrastructure designation are beginning to treat some technology services as carrying public obligations rather than purely private discretion. This is probably the right direction, and it will be resisted.
For investors in defense-tech and dual-use companies, the corporate autonomy dynamic creates a specific due diligence question: when a portfolio company’s product or service becomes militarily significant, what decisions will the company’s leadership make about how to manage that significance? The Starlink example isn’t exceptional — it’s illustrative of a class of decisions that dual-use companies will increasingly face. How those decisions get made, by whom, and on what basis is a governance question that the market hasn’t priced.
The Accounting
Big Tech’s intervention in Ukraine provided real military and institutional benefit to a country that needed it. That’s true. It’s also true that the intervention happened outside any governance structure, created legal exposure that hasn’t been resolved, established a precedent for corporate autonomy that has no built-in directional constraint, and demonstrated that private infrastructure owners can unilaterally restrict the military options of sovereign states.
The West has been celebrating the upside of digital corporate autonomy while declining to reckon with its architecture. The architecture doesn’t change based on which side benefits. Starlink can protect Ukrainian drone operations and it can veto them. Microsoft can migrate Ukrainian military data and it can, under different circumstances, decline to. The same structural capacity that produced the digital alliance with Ukraine would produce something very different in a conflict where the executives involved reach different conclusions about where their interests lie.
This is not an argument against the specific decisions that were made in 2022. It’s an argument for governing the capacity those decisions revealed — before the next conflict, with a different cast of executives and a different set of commercial incentives, demonstrates what happens when it runs the other way.





