Blog

What the White House’s AI Action Plan Means for Infrastructure and Cybersecurity Leaders

The White House’s AI Action Plan, titled “Winning the AI Race,” marks a strategic shift in how the U.S. government aims to lead in artificial intelligence while securing its technological foundations. Among the most urgent directives are mandates around collaboration, infrastructure hardening, and secure compute environments, all with direct implications for the hardware, firmware, and embedded systems powering AI innovation.

Eclypsium’s recent research highlights why these layers must be central to any AI security roadmap.


Enabling the Private Sector to Protect AI from Security Risks

Policy Directive:
The DoD, DHS, Center for AI Standards and Innovation (CAISI) at Commerce, and others will work with AI developers to guard against cyber risk, insider threats, and infrastructure attacks.

Why This Matters Now:

AI chips have recently been shown to be vulnerable to hardware-level attacks that can seriously degrade the performance of AI models. Our GPUHammer teardown exposed how GPU hardware central to AI model training can be tampered with to alter performance or introduce covert persistence mechanisms. Unlike traditional CPU-based malware, GPU threats often evade detection and prevention by conventional tools.

At the same time, our findings around BMC (Baseboard Management Controller) vulnerabilities (CVE-2024-0548) show how attackers are actively exploiting out-of-band management controllers to gain full device control even when the OS is patched and secure.

Recommendations:

  • Integrate GPU firmware verification into standard security and compliance checks
  • Monitor and restrict BMC access to only trusted networks and operators
  • Leverage firmware telemetry to detect anomalous GPU or BMC behavior indicating tampering or lateral movement
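The first recommendation above can be made concrete with a simple integrity check: hash each firmware image and compare it against a known-good baseline. The Python sketch below is a minimal illustration; the `KNOWN_GOOD` table and file names are hypothetical stand-ins for a vendor manifest or a digest captured from a trusted golden image.

```python
import hashlib
from pathlib import Path

# Hypothetical known-good SHA-256 digests, e.g. taken from a vendor
# manifest or captured from a trusted golden image.
KNOWN_GOOD = {
    "gpu_vbios.rom": "placeholder-digest",
}

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a firmware image file."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_firmware(path: Path) -> bool:
    """Pass only if the image's digest matches the recorded baseline."""
    expected = KNOWN_GOOD.get(path.name)
    return expected is not None and sha256_of(path) == expected
```

In practice this kind of check would run inside a compliance scanner against images dumped from the device, not against files on disk, but the comparison logic is the same.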

Creating New Technical Standards for High-Security AI Data Centers

Policy Directive:
DOD, NSC, IC, and NIST will develop new technical standards for AI-critical data center infrastructure.

Why This Matters Now:

As GPUs become core to AI performance, they are also becoming an attack vector. GPU firmware often lacks basic protections such as code signing, integrity validation, or rollback prevention, creating exploitable gaps in the AI stack.

Similarly, network devices such as routers, switches, gateways, and VPN and security appliances remain a silent risk, often overlooked by existing security controls. For example, the Baseboard Management Controller (BMC) vulnerability CVE-2024-0548 was recently added to CISA’s Known Exploited Vulnerabilities list and is being used in the wild to compromise systems remotely via unpatched management firmware.

Recommendations:

  • Adopt baseline firmware security standards (e.g., NIST SP 800-193) for all AI-capable hardware, including GPUs and BMCs
  • Require network device vendors to disclose firmware provenance, update cadence, and security lifecycle
  • Include firmware runtime validation and secure boot as prerequisites in secure data center design
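The vendor-disclosure recommendation can be enforced mechanically once provenance data exists in machine-readable form. The sketch below assumes a hypothetical JSON manifest format (field names such as `signing_key_id` and `end_of_support` are illustrative, not part of any standard) and flags policy violations such as stale builds or components past end of support.

```python
import json
from datetime import datetime, timedelta, timezone

# Hypothetical provenance manifest a vendor might publish alongside a
# firmware image; the field names are illustrative, not a standard.
MANIFEST = """
{
  "component": "switch-os",
  "version": "4.2.1",
  "built": "2025-06-01T00:00:00Z",
  "signing_key_id": "key-2025",
  "end_of_support": "2028-06-01T00:00:00Z"
}
"""

REQUIRED_FIELDS = {"component", "version", "built", "signing_key_id", "end_of_support"}
MAX_AGE = timedelta(days=365)  # illustrative update-cadence policy

def check_provenance(manifest_json: str, now: datetime) -> list[str]:
    """Return a list of policy violations (empty means the manifest passes)."""
    data = json.loads(manifest_json)
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - data.keys())]
    if "built" in data:
        built = datetime.fromisoformat(data["built"].replace("Z", "+00:00"))
        if now - built > MAX_AGE:
            problems.append("firmware build older than cadence policy allows")
    if "end_of_support" in data:
        eos = datetime.fromisoformat(data["end_of_support"].replace("Z", "+00:00"))
        if now > eos:
            problems.append("component past end of support")
    return problems
```

The same pattern extends naturally to richer disclosures (SBOM references, CVE remediation timelines) once vendors publish them.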

Advancing Classified Compute Environments for Scalable AI

Policy Directive:
Agencies are directed to adopt classified compute environments that support secure and scalable AI workloads.

Why This Matters Now:

The need for confidentiality doesn’t stop at encryption. It extends into hardware-level assurance. Without trust in firmware, classified workloads are vulnerable to supply chain attacks or persistent implants that evade OS and hypervisor visibility.

Furthermore, AI workloads are no longer confined to cloud environments. From field-deployed systems to national labs and air-gapped networks, many agencies and contractors run sensitive AI applications on-premises, where physical access risks, outdated firmware, or unmonitored management interfaces create persistent security gaps. Hardware-level trust must extend across the entire deployment footprint, whether in hyperscale facilities or edge environments.

Recommendations:

  • Deploy continuous device attestation for systems operating in classified or sensitive enclaves, verifying integrity prior to workload execution
  • Leverage out-of-band firmware security analytics to maintain integrity across system components, ideal for disconnected, segmented, or air-gapped environments where agent-based tools cannot run
  • Harden and monitor firmware on GPUs, BMCs, and network appliances, which often go unmonitored yet control critical compute, update, or access paths
  • Build secure update pipelines for firmware in on-prem environments, ensuring signed, validated, and traceable changes aligned with SBOM and NIST SP 800-193 guidance
  • Incorporate firmware-level threat detection into incident response plans and compliance frameworks for facilities running sensitive or regulated AI workloads
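To illustrate the secure update pipeline recommendation, the sketch below accepts an update only if the image signature checks out and the version is strictly newer than what is installed (rollback prevention). HMAC with a shared key stands in here for a real asymmetric vendor signature (RSA or ECDSA verified against the vendor's public key); this is a simplification for illustration only.

```python
import hmac
import hashlib

# Illustrative only: a production pipeline would verify an asymmetric
# signature against the vendor's public key, not use a shared secret.
SIGNING_KEY = b"demo-key"

def sign(image: bytes) -> str:
    """Stand-in for a vendor signature over the firmware image."""
    return hmac.new(SIGNING_KEY, image, hashlib.sha256).hexdigest()

def apply_update(image: bytes, signature: str,
                 new_version: tuple, installed_version: tuple) -> bool:
    """Accept an update only if the signature is valid and the version
    is strictly newer than what is installed (rollback prevention)."""
    if not hmac.compare_digest(sign(image), signature):
        return False  # tampered or unsigned image
    if new_version <= installed_version:
        return False  # rollback attempt
    return True
```

Version monotonicity is what blocks the rollback class of attacks called out earlier: a correctly signed but older image is still rejected.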

Complex supply chains that span international borders increase the risk of compromises and backdoors in a critical AI arms race.

Bolster Critical Infrastructure Cybersecurity

Policy Directive: Ensure collaborative and consolidated sharing of known AI vulnerabilities from within Federal agencies to the private sector as appropriate. This process should take advantage of existing cyber vulnerability sharing mechanisms.

Why This Matters Now:

Recent Binding Operational Directives from CISA, including mandates on mitigating threats to network devices, underscore a growing national focus on securing the entire digital supply chain, not just software. This includes:

  • BIOS/UEFI, BMCs, and GPU firmware, all of which can be exploited for stealthy persistence or to disrupt AI operations
  • Exploits targeting out-of-band management interfaces, which evade traditional endpoint protections
  • Advanced threat actors abusing firmware vulnerabilities to gain persistent access to infrastructure used for AI development or inference

Recommendations:

To align with these directives and the AI Action Plan’s guidance on securing critical infrastructure, Eclypsium recommends:

  • Continuous firmware monitoring to detect unauthorized modifications, unsigned updates, and performance deviations
  • Hardware-rooted attestation for AI systems, enabling assurance of device integrity before executing sensitive workloads
  • AI-specific threat intelligence sharing, including indicators of compromise (IoCs) tied to firmware attacks, through trusted channels like ISACs
  • Establishing baseline firmware configurations and automating policy enforcement for GPUs and network appliances at scale
  • Participating in vulnerability disclosure coordination for AI-relevant firmware and embedded components
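Establishing baseline firmware configurations, as recommended above, can be automated with a simple drift check: compare each device's observed firmware versions against the approved baseline and flag mismatches. The component names and version strings below are hypothetical.

```python
# Hypothetical approved baseline: component -> expected firmware version.
BASELINE = {"bmc": "1.14", "gpu_vbios": "95.02", "nic": "22.31"}

def find_drift(observed: dict[str, str]) -> dict[str, tuple]:
    """Return components whose observed firmware differs from baseline,
    mapped to (expected, observed); missing components report None."""
    drift = {}
    for component, expected in BASELINE.items():
        actual = observed.get(component)
        if actual != expected:
            drift[component] = (expected, actual)
    return drift
```

At fleet scale the same comparison runs per device, feeding policy-enforcement tooling that quarantines or re-flashes anything out of baseline.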

Securing the AI Stack from the Ground Up

AI innovation depends on trustworthy infrastructure. The White House’s AI Action Plan calls for stronger protections, but to succeed, organizations must extend their threat models below the OS, into firmware, hardware, and embedded components that attackers are increasingly targeting.

AI vulnerabilities do not stop at the OS; they extend across the entire hardware supply chain, much of which is hidden from view. Eclypsium provides the tools to detect, validate, and defend this critical layer — from GPUs to BMCs, network infrastructure, and beyond.

In the AI race, supply chain security at the bare metal layer must be a strategic priority for AI cloud providers, data centers, and enterprises building their own AI infrastructure.

For a deep dive into hardware and firmware security at the chip-level foundations of every AI data center, view our recent two-part webinar series on AI Infrastructure Cyber Security.