Patchwork of policies not syncing into one federal AI strategy, report warns

A tangle of artificial intelligence policies is emerging from the White House, Congress and agency leaders, but those policies do not yet sync into a single strategy for how the federal government should develop or implement AI tools.

In recent years, agencies have launched dozens of AI and machine learning tools and pilots that make back-office functions easier for federal employees. But a recent report from the Advanced Technology Academic Research Center (ATARC) finds that agencies are focused on getting the policy right before moving ahead with broader adoption of AI systems.

The report, from ATARC’s AI Data Policy Working Group, found dozens of separate AI ethics, policy and technology working groups scattered across the federal government.

The working group includes members from the General Services Administration, the Defense Health Agency, the National Institutes of Health and the Department of Veterans Affairs.

“Although a few overarching governance structures for AI policy have begun to emerge, we are concerned that the resulting policies may be incomplete, inconsistent or incompatible with each other,” the report said.

The report identifies six groups working on AI policies that apply across the federal government:

  • White House Office of Science and Technology Policy
  • General Services Administration
  • National Security Commission on Artificial Intelligence
  • Commerce Department
  • Office of Management and Budget
  • Senate and House Artificial Intelligence Caucus

The Trump administration in January established the National AI Initiative Office to promote research and development of AI systems across the federal government. The office, which the Biden administration has kept in place, is also focused on improving data availability and assessing issues related to the AI workforce.

In June, the Biden White House, together with the National Science Foundation, also created the National AI Research Resource (NAIRR) Task Force. The task force, required under the National AI Initiative Act of 2020, will examine how to expand access to AI education and other essential resources.

The task force includes members from NIST, the Department of Energy, and top universities.

Meanwhile, the National Institute of Standards and Technology is working on a framework to help agencies design and implement ethical and trustworthy AI systems.

However, it is more common to see agencies developing their own AI strategies. The working group found that at least 10 agencies are developing AI policies or tools focused on internal use, rather than developing an interagency framework.

This group includes the Department of Defense, which has its Joint AI Center (JAIC) and its AI Center of Excellence. DoD adopted a set of ethical AI principles in February 2020, developed by the Defense Innovation Board, a panel of science and technology experts from industry and academia.

In March, the JAIC also reached initial operational capability of its Joint Common Foundation, a DoD-wide development platform designed to accelerate the testing and adoption of AI tools across the department.

The intelligence community borrowed elements of DoD’s AI ethics principles to create its own AI ethics policy in July 2020.

The DoD and the intelligence community stood up these policies at the urging of the National Security Commission on AI, which called on both to be AI-ready by 2025 to maintain a tactical advantage over international adversaries. The NSCAI released its final report in March and will wind down on Friday.

Agencies take a cautious approach to ‘high-risk’ AI

Agencies that already field AI tools are still taking a cautious approach to the technology.

Oki Mek, head of AI at the Department of Health and Human Services, said HHS is training its staff to better understand what AI and machine learning can do for the agency, while ensuring it builds and acquires AI systems in a way that meets legal requirements.

“We want to make sure that we have this support ecosystem, because AI and machine learning is a new, innovative and transformative field,” Mek said Tuesday during an ATARC panel. “The failure rate is going to be high. Even for IT projects the failure rate is quite high, but with newer technologies emerging it is going to be really high.”

Pamela Isom, director of the Energy Department’s Artificial Intelligence and Technology Office, said her office is holding listening sessions as part of an effort to create an AI risk management playbook for the agency.

The playbook, she added, will highlight some of the agency’s best practices for building ethical AI systems and avoiding bias when training algorithms.

“These are issues that might not necessarily arise intentionally. It’s all about raising awareness and understanding some of the things we can do to prevent inherited bias,” Isom said.

As DOE examines how AI can provide insights on climate change and its health impacts on communities, Isom said the agency must also master fundamentals such as AI labeling and annotation standards.

“AI itself is not about technology. AI itself is about the challenges of the mission and the responses to those challenges. And to do that, we can’t think of it as we have traditionally thought of software development. The whole lifecycle is much more incremental, it’s much more iterative, it’s much more agile. You train with production data, you train with the real deal, because the AI is going to make real safety decisions. You have to test it, you have to train it well,” Isom said.

Ed McLarney, NASA’s digital transformation manager for AI and ML, said the agency is supporting AI pilots for use cases that assist the workforce in roles such as mission support, human resources and finance. He said NASA has also created an ethical AI framework.

“It’s really important that we have a larger discussion about AI ethics, AI risks, mitigation measures and approaches. It’s kind of a turbulent new frontier, a Wild West, and there is just a lot of talk and debate that our global communities have to have,” he said.

Brett Vaughn, the Navy’s chief AI officer, said the service is looking to AI to enable autonomous and unmanned systems, as well as to improve the efficiency of back-office functions that support readiness.

The Navy, Vaughn said, faces its biggest AI challenge in ensuring these tools continue to perform in harsh environments with limited connectivity.

“To us that may literally mean the middle of the Pacific Ocean, so it’s a tough technological and operational environment,” he said.

But the Navy must also overcome organizational and cultural challenges.

“We are an organization that is hundreds of years old. For us, going digital, which is a predicate to being effective with AI, let’s say we will have to work on it. It’s a constant challenge, and it’s widespread,” Vaughn said.

