Manchin Questions Witnesses on Rapid Development of Artificial Intelligence, Energy Department’s Role in Ensuring Security

Home/Manchin Questions Witnesses on...
Manchin Questions Witnesses on Rapid Development of Artificial Intelligence, Energy Department’s Role in Ensuring Security
Manchin Questions Witnesses on Rapid Development of Artificial Intelligence, Energy Department’s Role in Ensuring Security Admin CG September 08, 2023

Today, the U.S. Senate Energy and Natural Resources Committee held a hearing to examine recent advances in artificial intelligence (AI) and the Department of Energy’s (DOE) role in ensuring U.S. competitiveness and security in emerging technologies. During the hearing, Chairman Joe Manchin (D-WV) discussed the DOE’s expansive role in research and development of AI and supercomputing, competition with China and the risks posed by the rapid growth of AI.

“Over the past few years, six National Labs with world leading capability have been working to understand the challenges around AI and related issues. The labs’ work is bringing together both fundamental science and national security missions,” said Chairman Manchin. “If we want to invest in AI in a cost-effective way, we must build on these existing programs. Most people think about the Department of Energy for its work advancing energy technologies, like nuclear reactors, energy efficiency, carbon capture and hydrogen. But DOE does more than just energy. The Department is also the largest supporter of scientific research in the Federal government — conducting research and developing technologies across a range of fields from quantum computing to vaccine development to Astro-physics.”

Chairman Manchin continued, “Artificial intelligence stands out across DOE’s vast mission. It has the potential to revolutionize scientific discovery, technology deployment, and national security. In fact, AI is already changing the world at a remarkable pace. We are seeing it deployed in battlefields across the world. Ukraine has successfully used AI-enabled drone swarms against Russian forces. AI also helped us fight COVID-19. DOE’s Oak Ridge National Laboratories used its artificial intelligence and computing resources to model proteins in the coronavirus to help develop the vaccine.”

During the hearing, Chairman Manchin also discussed the Energy and Natural Resources Committee’s work advancing DOE’s AI mission through the Exa-scale computing program and China’s developing AI technology. 

“Our Committee recently played an important role in advancing DOE’s AI work. Recognizing that the United States must not fall behind in the supercomputing race, we authorized the Exa-scale Computing Program at the Department of Energy in the 115th Congress. In May of last year, the Frontier supercomputer at Oak Ridge National Laboratory in Tennessee passed Exa-scale — the ability to perform one billion-billion calculations per second — making this the fastest supercomputer in the world. Before we authorized the Exa-scale Computing Program, China had the fastest computers. Now, the U.S. has regained the lead,” said Chairman Manchin.

“Between 2015 and 2021, Chinese AI companies raised $110 billion, including $40.2 billion from U.S. investors in 251 AI companies. In 2017, China released their “New Generation AI Development Plan” which includes R&D and infrastructure targets. The U.S. currently does not have a strategic AI plan like this. In addition to government spending, China’s workforce advantage is significant — it has twice as many STEM PhDs and twice as many STEM master’s degree holders than the U.S. China has created Artificial Intelligence PhD programs in every one of their top universities. In regards to the Exa-scale Computing Program this Committee championed, the Chinese government could be set to operate as many as 10 exa-scale supercomputers by 2025,” continued Chairman Manchin.

During the hearing, Chairman Manchin questioned witnesses about a study at the Massachusetts Institute of Technology (MIT) that found that AI chatbots could provide clear and detailed steps to non-experts in creating a pandemic or bioweapon. The DOE and its 17 National Labs are in a position to do extensive work in detecting and mitigating emerging technological threats related to an array of biotechnologies and nuclear security. 

“What can the Department and the Labs do to address these safety and security concerns?” asked Chairman Manchin.

“AI can do a lot of good, but it can also do a lot of harm here. It allows actors that aren’t as sophisticated, scientifically or technologically, to do certain things that could have huge, huge harm. From the DOE side, I think we’ve got some ability to be incredibly helpful working with others — Department of Defense, HHS [Health and Human Services] and others as well. We’ve got to remember our National Labs don’t just work for the Department of Energy, they work for all the other agencies and a lot of other agencies already have a lot of programs, including in the biodefense, biotech area,” said Mr. David M. Turk, Deputy Secretary, U.S. Department of Energy. 

“Basically, I look back and you all remember when the internet was coming on board, was born out of the labs, and then by the early 90’s we created Section 230 thinking we would let it develop, be all it could be. We look back, it’s even more than what we thought it could be. It’s been used very effectively to help economies and help people all over the world, but it’s been used very detrimentally too. So, we’re trying to not recreate that same environment here with AI. What can you do to stop something like that?” asked Chairman Manchin. 

“This is exactly why we need to invest in these capabilities and need to be ahead of the curve,” replied Deputy Secretary Turk.

“There are actually two key problems that we have to solve. One is that we have to have the ability to assess the risk in current models at scale. There are over one hundred large language models in circulation in China. There is more than one thousand in circulation in the U.S. A manual process for evaluating that is not going to scale. So, we’re going to have to build capabilities using the kind of supercomputers we have and even additional AI systems to asses other AI systems so we can say, this model is safe, it doesn’t know how to build a pandemic or it won’t help students do something risky. That’s one thing we have to do. Second thing we have to do is we have to understand the fundamental issues of alignments, that is building these models that align with human values and are reliable in aligning with human values. And that’s a fundamental research task, it’s not something that we can just snap our fingers and say we know how to do it. We don’t know how to do it, companies don’t know how to do it, Labs don’t know how to do it, universities don’t know how to do it,” said Professor Rick L. Stevens, Associate Laboratory Director, Computing, Environment and Life Sciences (CELS), Argonne National Laboratory.

“It’s growing so quickly and expanding…how can we put the cat back in the box?” continued Chairman Manchin.

“I don’t think we can put it back in the box. I think we have to get smarter about how we manage the risks associated with advanced AI systems and using the term that people have been using quite a lot about eyes wide open, there’s no putting Pandora back in the box. Every person within the next few years is going to have a very powerful AI assistant in their pocket to do whatever it is they can get that assistant to do. Hopefully most of that will be positive advances for society, some of that will be negative,” said Prof. Stevens.

“I think that it’s also important as we look at AI as a tool of discovery, and in some ways, you can say that the study the classroom did was a discovery, that there are a lot of steps though that need to happen from the time you go from a sequence into something that can really have the large-scale damage that is talked about. That is one of the things that we are actually taking a closer look at it is, having the sequence is one thing but then what are those follow-on steps, and what biology has to go on between that,”said Ms. Anna B. Puglisi, Senior Fellow, Center for Security and Emerging Technologies, Georgetown University.

“There are many layers to this. I think there is both a policy aspect to this as well as a research component. As an example, on the policy side, our own company spent over a year and a half developing what we call our AI ethics principles, and this is all about getting our thousands of engineers and users to go through training around what does it mean to use AI in our product developments? How are we going to deploy solutions that harness AI? Now that can’t solve every problem because, as you mentioned, there are bad actors that maybe won’t follow that same line of reasoning. That’s where the research and investment come into play. There’s a broad field of study around this trustworthy AI, which ultimately can provide some of those guardrails you’re asking about, but we’re still really in the early days of some of that and deploying some of those solutions and there’s a lot of work that’s left,”said Mr. Andrew Wheeler, Fellow and Vice President, Hewlett Packard Labs and HPC & AI Advanced Development, Hewlett Packard Enterprise.

The hearing featured witnesses from the U.S. Department of Energy, Argonne National Laboratory, Georgetown University and Hewlett Packard Enterprise. 


PUBLISHING PARTNERS

Tags