R Street Institute focuses on cybersecurity, AI policy priorities for incoming US administration


The R Street Institute acknowledges that as emerging technologies like artificial intelligence (AI) continue to shape the digital landscape, cybersecurity is becoming an interdisciplinary challenge, with evolving threats crossing industries and borders. In response, the institute identifies and advocates for free markets and limited, effective government policies that enhance cybersecurity, such as reducing the regulatory burden on industry while maximizing opportunities for innovation. Although the institute addresses a variety of cyber-related and data privacy issues, two areas are particularly poised for action in 2025.

Such developments come as companies and governments “are not only finding ways to use technology and AI to improve our lives, but also to improve cybersecurity,” Haiman Wong, resident fellow for cybersecurity and emerging threats at R Street Institute, and Brandon Pugh, policy director and resident senior fellow for cybersecurity and emerging threats at R Street Institute, wrote in research published Tuesday. “Still, threat actors, unbound by ethical or regulatory constraints, can leverage AI for nefarious purposes, including targeting critical infrastructure sectors, so it is imperative that we stay ahead of such actors in this arena.”

They also noted that numerous federal agencies, industry regulators, and standalone state and federal laws set forth cyber requirements around incident reporting and baseline security requirements. “This has led to duplicative—even contradictory—requirements and has created an opportunity to improve our cyber posture, free cybersecurity teams of the burden of excessive compliance tasks, and promote government efficiency.”

The R Street Institute research provided a couple of recommended actions, including establishing a clear expectation for cybersecurity regulatory harmonization. It calls for defining a comprehensive end goal for harmonization, including establishing the scope of what should be covered and considering reciprocity or uniform baseline security requirements across sectors in a way that is not onerous but, instead, reasonable for companies of all sizes.

It also called for empowering a coordinating authority for cyber harmonization, as no single central entity currently has the authority to bring independent regulators to the table. “Congress could grant such authority to an entity like the Office of the National Cyber Director so that it could establish and pilot baseline security rules and incident-reporting requirements to reduce duplicative and contradictory rules. The Department of Government Efficiency should coordinate with related Hill committees on deregulation and regulatory-alignment efforts and resolve interagency conflicts in the context of broader efforts,” the researchers added.

Wong and Pugh recognized that continued advances in AI are redefining industries and reshaping national security priorities, offering transformative opportunities for innovation, efficiency, and resilience. “In cybersecurity, AI has already compressed cybersecurity incident analysis time from minutes to milliseconds and promises to identify never-before-seen threats and vulnerabilities through predictive threat intelligence. These benefits, however, are paired with risks that demand our attention.” 

They added that these AI systems are susceptible to exploitation—whether through adversarial attacks, weaponization, data poisoning, or unintentional breaches of the infrastructure supporting their use. “Left unchecked, these vulnerabilities can disrupt critical infrastructure operations and expose sensitive data.”

The R Street Institute research noted that the White House, Congress, and relevant agencies should account for cybersecurity risks in AI systems while evaluating their applications, such as digital twins, along with their limitations and benefits. This includes avoiding inadvertently limiting AI uses in cybersecurity, as some proposals have done, and distinguishing between exaggerated fears and genuine risks.

The White House, Congress, and relevant agencies should incorporate risk-tolerance principles, which involve defining acceptable risks in any AI regulation and governance solutions. Congress should direct the National Institute of Standards and Technology (NIST) to collaborate with stakeholders across industry, academia, and the public to develop and advance voluntary best practices for improved transparency and oversight in AI applications. 

The White House, with support from Congress, should modernize legacy systems throughout the government to ensure that they can support emerging security updates and responsible-use standards.

The R Street Institute research noted that NIST and the Cybersecurity and Infrastructure Security Agency (CISA) should define permissible actions for researchers in security-centric AI development across public and private sectors. The Department of Defense, in collaboration with other agencies like the National Security Agency (NSA), should expand offensive cyber capabilities to improve deterrence and attribution. The researchers observed that NIST should establish risk-tolerance parameters for federal entities to guide AI implementation. NIST, in collaboration with CISA, should also develop a voluntary framework to guide state and local governments in evaluating and addressing AI-related security and safety concerns.

The research also called for Congress, in coordination with the Department of Energy (DOE), to prioritize the expansion of secure, domestically operated data centers to reduce U.S. reliance on foreign-sourced hardware and infrastructure across public and private sectors.

Wong and Pugh wrote that the White House should direct agencies like NIST and CISA to establish clear standards for secure collection, storage, and handling protocols for AI training datasets to prevent tampering, data poisoning, and unauthorized access across private and public sectors.

R Street Institute also mentioned that Congress should fund partnerships between agencies such as NIST, the DOE, and the National Science Foundation, and stakeholders across industry, academia, and the public to gain insights into the real-world impact of AI technologies, including opportunities and risks of combining large language models with other AI and legacy cybersecurity capabilities.

It suggested that NIST develop reliable metrics for assessing data security, AI model protection, and resilience against attacks, providing a foundation for consistent audits across sectors. CISA, in coordination with the Federal Bureau of Investigation and the NSA, should leverage AI-driven cloud security innovations for enhanced threat detection, posture management, and configuration enforcement.

The research also called for CISA, working with sector-specific agencies, to develop tailored AI security frameworks and risk assessments that address unique industry needs and vulnerabilities. These efforts should prioritize the proactive identification and mitigation of supply chain vulnerabilities, critical infrastructure risks, and security threats to AI ecosystems, while avoiding a top-down regulatory approach.

Lastly, the R Street Institute research said that Congress should support legislation to upskill the workforce and develop incentives for businesses to implement training and reskilling programs that prepare non-technical workers to work in an AI-enabled economy. It added that federal agencies should revise hiring policies to attract technical talent, streamline security clearance processes, and offer competitive incentives to recruit and retain top AI and cybersecurity talent.

The research comes amid reports of Chinese hackers from the Salt Typhoon cyberespionage operation breaching more U.S. telecommunications firms, such as Charter, Consolidated, and Windstream, underscoring the growing threat to critical infrastructure.