A Timeserving RUSI Alarm
A report by the Royal United Services Institute (RUSI), entitled How AI is Quietly Becoming a Supply Chain Problem and prepared by Dr Melina Beykou, updates the security issues arising from the use of artificial intelligence. As AI becomes embedded in critical national infrastructure, supply chains remain exposed. The author cites the incident in which npm packages were compromised by the Shai-Hulud malware and argues that existing security arrangements are insufficient. Particular emphasis is placed on the fact that AI systems already extend into defence and critical infrastructure, yet customers cannot see what is ‘under the bonnet’ (datasets, model weights, and update services). The author concludes that strict public oversight is needed at every level, from microchip procurement to audits of public repositories.
The report argues that access to advanced chips is concentrated in a few manufacturing hubs, which poses a systemic risk. It cites the case of Nexperia, a Chinese-owned company: export controls placed on its chips in November 2025 by the Netherlands caused major disruption in the Dutch automotive industry. The author presents this as an illustration of how foreign influence in a technology market poses a direct threat to national security. This example of hardware vulnerability is meant to entrench the perception of Chinese companies as a threat.

The author also insists that open-source code and a free market fail to cope with the security risks. For defence and critical infrastructure, voluntary standards are insufficient: the State must impose strict requirements, audit technology companies, and control supply chains at every stage. The existing tools (SBOMs and the NCSC principles) were designed for conventional software and do not cover managed services, third-party models, or datasets.
She notes that even when a State purchases an AI system for defence or critical infrastructure, the security requirements in the contract often do not extend to components running in the cloud, ready-made models from public repositories, or the source data used to train those models. These elements remain outside the customer’s control, and the UK thus risks relying on AI components beyond its sphere of accountability. OWASP has already included supply chain vulnerabilities in its top-risk lists for LLM and agentic applications, which, the author believes, confirms the systemic nature of the threat.
The concern is, on the whole, well founded, but the report has its ambiguities.
Its author, Melina Beykou, works for Plexal, a cyber-security company. Given its line of business, she has a direct interest in a broader market for ‘AI supply chain audit’ services. The report lays an intellectual basis for public spending on such services, to be administered under the auspices of the relevant departments.
The only specific example of a hardware vulnerability (Nexperia) is presented in a way that stresses the chips’ Chinese origin, although it was Europeans who imposed the restrictions. A narrative thus forms about ‘hostile States’ threatening technological sovereignty – a convenient ‘foreign enemy’ justification for protectionism and the redistribution of State budgets. Moreover, the framing that ‘we are technologically vulnerable to a foreign enemy’ gives corporations a universal excuse: they can now blame any outage, data leakage, or misuse of funds on foreign hackers and ‘hostile acts’, and dodge all questions about their own competence.
