CrowdStrike’s View on the New U.S. Policy for Artificial Intelligence
Thoughts on the White House Executive Order and its implications for cybersecurity
The major news in technology policy circles is this month’s release of the long-anticipated Executive Order (E.O.) on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. While E.O.s govern policy areas within the direct control of the U.S. government’s Executive Branch, they are broadly important because they inform industry best practices and can even shape subsequent laws and regulations in the U.S. and abroad.
Accelerating developments in AI — particularly generative AI — over the past year or so have captured policymakers’ attention. And calls from high-profile industry figures to establish safeguards for artificial general intelligence (AGI) in particular have further heightened attention in Washington, D.C. In that context, the E.O. should be viewed as an early and significant step addressing AI policy rather than a final word.
Given CrowdStrike’s extensive experience with AI since the company’s founding in 2011, we want to highlight a few key topics that relate to innovation, public policy and cybersecurity.
The E.O. in Context
Like the technology it seeks to influence, the E.O. itself has many parameters. Its 13 sections cover a broad cross section of administrative and policy imperatives. These range from policing and biosecurity to consumer protection and the AI workforce. Appropriately, there’s significant attention to the nexus between AI and cybersecurity, which is covered at some length in Section 4.
Before diving into specific cybersecurity provisions, it is important to highlight a few observations on the document’s overall scope and approach. Fundamentally, the document strikes a reasonable balance between exercising caution regarding potential risks and enabling innovation, experimentation and adoption of potentially transformational technologies. In complex policy areas, some stakeholders will always disagree with how to achieve balance, but we’re encouraged by several attributes of the document.
First, in numerous areas of the E.O., agencies are designated as “owners” of specific next steps. This clarifies for stakeholders how to provide feedback and reduces the odds of gaps or duplicative efforts.
Second, the E.O. outlines several opportunities for stakeholder consultation and feedback. These will likely materialize through Request for Comment (RFC) opportunities issued by individual agencies. Further, there are several areas where the E.O. tasks existing — or establishes new — advisory panels to integrate structured stakeholder feedback on AI policy issues.
Third, the E.O. mandates a measured progression for next steps. Many E.O.s require tasks to be finished in 30- or 60-day windows, which are difficult for agencies to meet at all, let alone in deliberate fashion. This document in many instances provides for 240-day deadlines, which should enable 30- and 60-day engagement periods through RFCs, as outlined above.
Finally, the E.O. states plainly that “as generative AI products become widely available and common in online platforms, agencies are discouraged from imposing broad general bans or blocks on agency use of generative AI.” This should help ensure that government agencies explore positive use cases for leveraging AI for their own mission areas. If history is any guide, it’s easy to imagine a talented junior staffer at a given agency identifying a key way to leverage AI next year that no one could easily forecast this year. It would be unwise to foreclose that possibility, as innovation should be encouraged inside and outside of government.
AI and Cybersecurity Provisions
On cybersecurity specifically, the E.O. touches on a number of key areas. It’s good to see specific callouts to agencies like the National Institute of Standards and Technology (NIST), Cybersecurity and Infrastructure Security Agency (CISA) and Office of the National Cyber Director (ONCD) that have significant applied cyber expertise.
One section of the E.O. attempts to reduce the risks of synthetic content — that is, generative audio, imagery and text. It’s clear the measures cited here are exploratory in nature rather than rigidly prescriptive. As a community, we’ll need to develop innovative solutions to this problem set. And with U.S. elections around the corner, we hope to see rapid advancements in this space.
In many instances, the E.O.’s authors paid close attention to enumerating AI policy through established mechanisms, some of which are closely related to ongoing cybersecurity efforts. This includes the direction to align with the AI Risk Management Framework (NIST AI 100-1) and the Secure Software Development Framework. This will reduce risks associated with establishing new processes, while enabling more coherent frameworks for areas where there are only subtle distinctions or boundaries between, for example, software, security and AI.
The document also attempts to leverage sector risk management agencies (SRMAs) to drive better preparedness within critical infrastructure sectors. Specifically, it mandates:
Within 90 days of the date of this order, and at least annually thereafter … relevant SRMAs, in coordination with the Director of the Cybersecurity and Infrastructure Security Agency within the Department of Homeland Security for consideration of cross-sector risks, shall evaluate and provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in critical infrastructure sectors involved, including ways in which deploying AI may make critical infrastructure systems more vulnerable to critical failures, physical attacks, and cyber attacks, and shall consider ways to mitigate these vulnerabilities.
This is important, but we also encourage these working groups to consider benefits along with risks. There are many areas where AI can drive better protection of critical assets. When done correctly, AI can rapidly surface hidden threats, accelerate the decision making of less experienced security analysts and simplify a multitude of complex tasks.
At CrowdStrike, AI has been fundamental to our approach from the beginning and has been built natively into the CrowdStrike Falcon® platform. Beyond replacing legacy AV, our platform uses analytics to help prioritize critical vulnerabilities that introduce risk and employs the power of AI to generate and validate new indicators of attack (IOAs). With Charlotte AI, CrowdStrike is harnessing the power of generative AI to make customers faster at detecting and responding to incidents, more productive by automating manual tasks, and more valuable by learning new skills with ease. This type of AI-fueled innovation is fundamental to keep pace with ever-evolving adversaries incorporating AI into their own tactics, techniques and procedures.
This E.O. represents a key step in the evolution of U.S. AI policy. It’s also particularly timely. As we described in our recent testimony to the House Judiciary Committee, AI is key to driving better cybersecurity outcomes and is also of increasing interest to cyber threat actors. As a community, we’ll need to continue to work together to ensure defenders realize the leverage AI can provide, while mitigating whatever harms might come from threat actors’ abuse of AI systems.
This article was first published in SC Magazine: The Biden EO on AI: A stepping stone to the cybersecurity benefits of AI
- Keep up-to-date with cybersecurity policy developments in the CrowdStrike Public Policy Resource Center.
- Learn more about the powerful CrowdStrike Falcon® platform by visiting the webpage.
- Get more information on how CrowdStrike protects federal government agencies: CrowdStrike for the Federal Government FAQ.