Autonomous Robots Are Already Here — Who's Really in Charge?

AI-powered machines are delivering your parcels, fighting wars, and running your supply chain. But as hostile states embed backdoors in the hardware that powers them, the real question isn’t whether autonomous systems can be turned against us — it’s whether we’d even know until it was too late.
The Battlefield Has Come Home
Not long ago, the idea of AI-guided machines making life-or-death decisions on the battlefield felt like the stuff of science fiction. Today, it is documented reality. AI-guided drones are killing people in Ukraine right now — a fact confirmed by Russian technical experts themselves, who acknowledge that autonomous drones determining their own targets are already in combat. And the security implications of that shift do not stop at the front line.

According to research published by the Center for Strategic and International Studies (CSIS), Ukraine purchased 10,000 AI-enhanced drones in 2024 alone, with a target of having half of all procured drones carry AI guidance in 2025. That is a jump from under half a percent to fifty percent in a single year. The Modern War Institute at West Point reports that these autonomous systems can navigate without GPS, reacquire targets after jamming, and engage without a live operator signal. On the Russian side, AI has been integrated into Shahed strike drones to improve navigation accuracy and penetration rates, with Putin personally ordering deeper AI cooperation with China.
Ukraine has described itself as a “war lab for the future.” It is worth taking that seriously. What is being tested on the fields near Kharkiv today will be in a supplier’s product catalogue tomorrow.
The Supply Chain Is Already Under Attack
On 12 March 2026, the world learned that Stryker Corporation, one of the largest medical technology companies in the United States, had been hit by a destructive cyberattack. The Iran-linked group Handala claimed responsibility, alleging they wiped over 200,000 servers and devices and stole 50TB of data, forcing system shutdowns across 79 countries. Stryker’s products touch the lives of an estimated 150 million patients every year. The American Hospital Association confirmed it was actively exchanging information with hospitals and federal agencies to assess the damage.
This is not an isolated incident. It is a deliberate escalation. By targeting a medical device supplier rather than a government system, hostile actors exploit the weakest link in a “just-in-time” supply chain — one where hospitals order custom surgical equipment exactly when they need it, with little buffer. The lesson for any organisation supplying or using automated or robotic systems is hard to miss: your suppliers are your attack surface.
To understand why that risk is structural rather than theoretical, you need to understand where AI drones actually come from. According to CSIS, virtually every drone on both sides of the Ukraine conflict contains components originating in Chinese factories — carbon fibre frames, rare-earth magnets for motors, lithium-ion cells, gallium-nitride chips, and the sensors that power AI targeting systems. As recently as early 2024, nearly 89% of Ukraine’s drone component imports by value came from China, and Chinese manufacturers still control roughly 80% of global drone component production.
China’s position is one of calculated duplicity. It officially maintains export controls on military drones while exploiting dual-use loopholes to keep components flowing to Russia. In December 2024, China began restricting shipments of motors, batteries, and flight controllers to US and European companies — while simultaneously shipping hundreds of thousands of miles of fibre-optic drone cable to Russia. NATO formally named China a “decisive enabler” of Russia’s war in July 2025. In October 2024, China halted battery sales to Skydio, the largest US commercial drone manufacturer, over its sales to Taiwan — forcing production rationing and limiting the supply of US-made drones available to Ukraine. China is, in short, supplying AI-powered weapons to both sides of the conflict, on its own terms, and withdrawing that supply when it chooses.
The drone supply chain has been a cyberespionage target in parallel. Between 2023 and 2024, a Chinese-linked threat actor known as Earth Ammit ran two coordinated campaigns — codenamed VENOM and TIDRONE — against drone manufacturers and their upstream software vendors in Taiwan and South Korea. Their strategy, as documented by Trend Micro, was to infiltrate trusted software suppliers first, then use that foothold to reach the actual drone manufacturers downstream. Custom backdoors were installed silently inside enterprise software, giving attackers persistent, hard-to-detect access to military and satellite-industry targets.
Meanwhile, the US Cybersecurity and Infrastructure Security Agency (CISA) and the FBI have formally warned that Chinese-manufactured drones risk exposing sensitive data to Chinese authorities. A Sandia National Laboratories study, cited in congressional letters, identified potential backdoors in devices that at their peak controlled around 90% of the US commercial drone market. The FCC has moved to ban Chinese drone hardware from US networks — a regulatory response that reflects genuine classified intelligence, not political posturing.
The “Ghost Fleet” Scenario Is Not Fiction
In P.W. Singer and August Cole's novel Ghost Fleet, Chinese-made chips embedded in US weapons systems contain hidden backdoors. When conflict breaks out, those backdoors are activated remotely — and American weapons simply stop working. In the Battlestar Galactica reboot, the Cylons do not need to outgun humanity; they gain backdoor access to every networked system simultaneously, bringing fleets and planetary defences offline in an instant. In neither story do machines make autonomous decisions to turn on their operators. The more chilling point is that the attacker is already inside, waiting.
Now apply that to today’s autonomous machines. A drone that can navigate, identify, and engage targets without a human in the loop is extraordinarily capable. It is also extraordinarily dangerous if an adversary has pre-positioned access to its firmware. The scenario is no longer about a machine going rogue — it is about an attacker quietly reprogramming your weapons to work against you at the moment of their choosing. When critical chips, firmware, and communications hardware are manufactured in countries with opaque relationships to state intelligence services, the question is not whether backdoors exist — it is whether you would know if they did, and whether you would know in time.
For businesses supplying or operating robotic and autonomous systems, the commercial version of this risk is already materialising. A compromised supplier can give an adversary persistent access to your command and control infrastructure, your telemetry data, and ultimately your physical systems. An autonomous machine that can be silently reassigned is not an asset. It is a loaded weapon pointed in the wrong direction.
What Can Be Done: A Practical Framework
None of this is cause for paralysis. It is cause for disciplined action. Here is a working framework for boards and senior leaders.
1. Air-gap your command and control systems
Autonomous systems — whether drones, robotic manufacturing lines, or automated logistics — should not be managed from internet-connected infrastructure. Invest in private, segregated networks for operational command and control. The same defence-in-depth approach that protects traditional IT infrastructure applies here. The inconvenience is real. So is the cost of doing nothing.
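For teams implementing this, a small sanity check can catch obvious misconfiguration. The sketch below — illustrative only, with arbitrary probe targets and timeouts — attempts outbound TCP connections from a supposedly isolated host and reports any that succeed. A real air gap is enforced in network architecture and physical design; this merely detects when that architecture has quietly been bypassed.

```python
# Sketch: detect whether a command-and-control host can reach TCP
# endpoints it should not be able to reach. Probe list and timeout
# are illustrative assumptions, not a standard.
import socket


def can_reach(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, unreachable, or timed out: no egress on this probe.
        return False


def unexpected_egress(probes: list[tuple[str, int]]) -> list[tuple[str, int]]:
    """Return the probes that succeeded. On an isolated network,
    this list should always be empty."""
    return [(host, port) for (host, port) in probes if can_reach(host, port)]
```

Run from the control network on a schedule, a non-empty result is an alert, not a log line.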
2. Prioritise local and allied-nation hardware
Where feasible, source control systems, chips, and communications hardware from domestic or allied-nation suppliers. Ukraine itself has demonstrated this is achievable: drone producer Vyriy released a batch of 1,000 units in March 2025 built entirely from domestically produced components, cutting reliance on foreign supply chains under battlefield conditions. If they can do it under fire, there is little excuse for a well-resourced commercial organisation not to have a roadmap.
3. Vet your third parties rigorously
The Earth Ammit campaigns succeeded because downstream manufacturers trusted their upstream software vendors without adequate scrutiny. Every third party with access to your systems — especially those supplying embedded software or firmware — represents a potential entry point. A thorough third-party audit should be standard practice. Cost savings achieved by choosing the cheapest supplier rarely survive the first serious breach; weigh them accordingly.
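One concrete way to operationalise supplier vetting is to check every component in a software bill of materials (SBOM) against an approved-supplier list. The sketch below uses made-up supplier names and a simplified flat structure; a production check would parse a standard SBOM format such as CycloneDX or SPDX and feed exceptions into procurement review, not just a script output.

```python
# Sketch: flag components whose supplier is not pre-approved.
# Supplier names and the dict layout are illustrative assumptions.

APPROVED_SUPPLIERS = {
    "Acme Robotics GmbH",   # hypothetical domestic supplier
    "Nordic Controls AS",   # hypothetical allied-nation supplier
}


def unapproved_components(sbom: list[dict]) -> list[str]:
    """Return names of components whose supplier is missing or
    not on the approved list."""
    return [
        component["name"]
        for component in sbom
        if component.get("supplier") not in APPROVED_SUPPLIERS
    ]
```

The value is less in the code than in the discipline: every component must name a supplier, and every supplier must have passed review before its code ships.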
4. Test components for backdoors and attack vectors
This is not a one-time exercise. Firmware updates, software patches, and new hardware batches should be tested against known attack vectors on a regular schedule. The Atlantic Council’s analysis of UAS supply chain security recommends coordinated testing frameworks across allied nations. Your own testing regime should mirror that intent at an organisational level.
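At the most basic level, testing new firmware batches starts with integrity verification: never flash an image whose digest does not match a hash published by the vendor through a separate, trusted channel. The helper names below are hypothetical, and this is a minimal sketch of that single step — it proves the image is the one the vendor published, not that the vendor's image is free of backdoors, which is why the deeper behavioural testing described above still matters.

```python
# Sketch: verify a firmware image against a vendor-published SHA-256
# checksum before flashing. Function names are hypothetical; the real
# control belongs in the update pipeline, not a standalone script.
import hashlib
import hmac


def sha256_hex(image: bytes) -> str:
    """Return the SHA-256 digest of a firmware image as hex."""
    return hashlib.sha256(image).hexdigest()


def verify_firmware(image: bytes, published_hash: str) -> bool:
    """Compare the image digest to the vendor's published hash.

    hmac.compare_digest gives a constant-time comparison; for an
    offline check a plain == would also do.
    """
    return hmac.compare_digest(sha256_hex(image), published_hash.lower())
```

Signed firmware (vendor signatures verified against a pinned public key) is the stronger version of the same control where the vendor supports it.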
5. Revise your operational security and human factors
Technology is only part of the picture. The people who operate, maintain, and provision autonomous systems are targets too — for blackmail, social engineering, and phishing. Battlestar Galactica illustrates this more sharply than any corporate security briefing. The Cylons did not breach the Colonial defence mainframe through technical means. They seduced Gaius Baltar, the scientist with direct access to it. Number Six — appearing entirely human — manipulated him into compromising the very systems he was trusted to protect, without Baltar ever understanding what he had done until it was too late. The attack that nearly wiped out humanity started not with a line of malicious code, but with a personal relationship and misplaced trust. Key personnel with privileged access to autonomous systems represent exactly this kind of target. Regular, realistic security training matters — good digital hygiene is as important for operators of autonomous systems as it is for office workers. So do clear procedures for reporting suspicious contact, free of any stigma around doing so. The human layer is where the most sophisticated adversaries will always look first.
Governance Matters as Much as Technology
The ethical and accountability questions raised by autonomous systems are not abstract. They are already playing out on the pavements of Milton Keynes.
Since 2018, Starship Technologies has operated a fleet of small six-wheeled delivery robots across the city, completing millions of deliveries along the Redway network. DPD followed in 2022 with Cartken-powered robots, and in late 2024 upgraded to the Ottobot — a larger Level 4 autonomous robot capable of carrying up to 70kg across eight separate compartments.


Most recently, Just Eat launched its own rival robots in the city in February 2026. Milton Keynes is, by any measure, the UK’s most roboticised city.
These machines are charming. They are also largely ungoverned in any meaningful sense. Ask a simple question: if a Starship robot collides with a child on a shared footpath, who is liable? Is it the operator — DPD or Just Eat? The technology provider — Starship, Cartken, or Ottonomy? The city council that permitted them to use public infrastructure? The software team that trained the navigation model? Current UK law has no clean answer. These robots exist in a regulatory grey area where accountability has not caught up with deployment.
Now extend that question to a malicious attack. The same robots navigate autonomously using AI, sensors, cameras, and wireless communications. They are, in effect, small networked computers on wheels operating in public spaces — not unlike the smart cameras already watching our streets from private homes. If an attacker were to compromise the software stack — rerouting a robot, disabling its obstacle avoidance, or simply using it as a surveillance platform — who detects it, who responds, and who is responsible for the consequences? Carrying up to 70kg at pedestrian speed, a compromised Ottobot is hardly a weapon. But the principle scales. A hijacked autonomous vehicle, a compromised logistics robot in a warehouse, or a reprogrammed industrial machine in a manufacturing plant represents an entirely different threat profile — and the governance frameworks for all of them are equally unprepared.
Business leaders should not hand accountability questions off to the technical team or wait for regulators to catch up. Autonomous systems that can take consequential actions without real-time human oversight require defined ownership of risk at board level, not just operational procedures buried in a supplier contract. Who in your organisation is accountable if an autonomous system under your control causes harm — whether through malfunction, misconfiguration, or a malicious third-party attack? If the answer is not immediately clear, that is the governance gap that needs closing first.
The wars in Ukraine and the Middle East have accelerated the technology by a decade. The delivery robots of Milton Keynes represent the civilian end of the same continuum. That acceleration is irreversible. What is still within your control is how seriously you treat the infrastructure — and the governance — that makes these systems run.
Key Takeaways
- Air-gap operational systems. Autonomous machine command and control should not be on internet-connected infrastructure.
- Treat your supply chain as your attack surface. The Stryker attack and Earth Ammit campaigns demonstrate that upstream vendors are the preferred entry point for sophisticated adversaries.
- Localise where you can. Source chips, firmware, and communications hardware from domestic or trusted allied suppliers.
- Test regularly and specifically. Backdoor testing is not a checkbox exercise. It should be continuous and targeted at newly introduced components and software updates.
- Build governance structures now. The ethical and regulatory environment around autonomous systems is moving fast. Lead rather than react.

Axel Segebrecht is founder and director of Be Braver Ltd, a UK-based technology consultancy specialising in digital sovereignty, self-hosted infrastructure, and FOSS migration for European businesses.
Featured photo by Mika Baumeister on Unsplash