The American Civil Liberties Union (ACLU) has filed a Freedom of Information Act (FOIA) request to find out how America's national security and intelligence agencies are using artificial intelligence (AI).
The ACLU said it made the request out of concern that AI was being used in ways that could violate Americans' civil rights.
The request follows the March 1 release of a 16-chapter report containing recommendations on how AI, machine learning, and associated technologies should be used by the Biden administration.
Prepared by the National Security Commission on Artificial Intelligence (NSCAI), the report warns that the US government "is not prepared to defend the United States in the coming AI era."
The report urges the federal government, including intelligence agencies, to introduce "ubiquitous AI integration in each stage of the intelligence cycle" by 2025. Recommended uses for AI systems detailed in the report include conducting surveillance, exploiting social media information and biometric data, performing intelligence analysis, countering the spread of disinformation via the internet, and predicting cybersecurity threats.
The ACLU says the report raised concerns that government agencies may introduce technology that encroaches on a number of rights held by US citizens, including the rights to privacy and free expression.
In its FOIA request, the ACLU stated: "In June 2020, the Office of the Director of National Intelligence released its Artificial Intelligence Framework for the Intelligence Community. One of the core principles endorsed in that framework is transparency.
"Yet the public knows almost nothing about what kinds of AI systems are being developed or used by the Intelligence Community, what policies constrain the deployment or operation of AI systems in practice, and what risks these systems may pose to equality, due process, privacy, free expression, and public safety."
The ACLU also voiced concerns that AI could be used to unfairly scrutinize America's non-white communities.
"Of particular concern is the way AI systems may be biased against people of color and marginalized communities, and may be used to automate, expand, or legitimize discriminatory government conduct," stated the ACLU.
"In addition, AI systems may be used to guide or expand government activities that have long been used to unfairly scrutinize communities of color, including intrusive surveillance, investigative questioning, detention, and watchlisting."