Biden's executive order on AI has 'extensive to-do list' for Homeland Security
President Joe Biden issued an executive order on artificial intelligence this week that is intended to reduce security risks, protect privacy and prevent use of the technology to discriminate.
The executive order “directs nearly the entire alphabet of federal agencies” to address the risks of AI, according to Just Security. But the blog said the directive “can be read primarily as a national security order” that gives an “extensive to-do list” to the U.S. Department of Homeland Security.
The agency must assess how AI can make infrastructure more vulnerable to failures and cyberattacks; must create a new AI safety and security board; must address risks that AI could be used to design or deploy chemical, biological, radiological or nuclear weapons; and must explore immigration pathways for people with AI skills, according to Just Security and the executive order.
Companies that develop advanced AI systems will be required to conduct tests to make sure that the systems can’t be used to create biological or nuclear weapons, according to the New York Times. The test results must be shared with the federal government.
“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said at the White House event Monday, according to Reuters. “In the wrong hands, AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run.”
To protect Americans from fraud, the U.S. Department of Commerce is tasked with developing guidance for authentication and watermarking of AI-generated content.
To protect privacy, the order calls on Congress to pass data privacy legislation and tells federal agencies to prioritize federal support for the development of privacy-protecting techniques.
The idea of a federal law appeals to Alan Butler, the executive director of the Electronic Privacy Information Center. He told Law360 that a law is needed to counter incentives for “capturing personal data on a massive scale” through AI.
“Without a strong privacy law, there will be no meaningful limits on how companies collect and use our data for artificial intelligence systems,” Butler said.
To address discrimination, the order tells federal agencies to address algorithmic discrimination in tenant screening and the advertising of housing and credit. The order also tells the U.S. attorney general to address the use of AI in the criminal justice system, including in sentencing, parole, pretrial release, risk assessments, police surveillance and predictive policing.