What would happen if an extreme heat wave hit your city, making life difficult or dangerous for everyone there—especially for people with limited incomes, few ways to stay cool, or limited access to food and resources? It’s not hard to imagine. The northern hemisphere just experienced its hottest summer on record, and the world is on track to keep warming for decades. But artificial intelligence could reduce the impact. The National Oceanic and Atmospheric Administration is experimenting with AI tools to identify at-risk neighborhoods and to develop better ways to protect residents from extreme weather.
Although most discussions of artificial intelligence focus on its impacts on business and research, AI is also poised to transform government in the United States and beyond. AI-guided disaster response is just one piece of the picture. The U.S. Department of Health and Human Services has an experimental AI program to diagnose COVID-19 and flu cases by analyzing the sound of patients coughing into their smartphones. The Department of Justice uses AI algorithms to help prioritize which tips in the FBI’s Threat Intake Processing System to act on first. Other proposals, still at the concept stage, aim to extend the applications of AI to improve the efficiency and effectiveness of nearly every aspect of public services.
The early applications illustrate the potential for AI to make government operations more effective and responsive. They illustrate the looming challenges, too. The federal government will have to recruit, train, and retain skilled workers capable of managing the new technology, competing with the private sector for top talent. The government also faces a daunting task ensuring the ethical and equitable use of AI. Relying on algorithms to direct disaster relief or to flag high-priority crimes raises immediate concerns: What if biases built into the AI overlook some of the groups that most need assistance, or unfairly target certain populations? As AI becomes embedded into more government operations, the opportunities for misuse and unintended consequences will only expand.
Rachel Gillum, who is vice president of ethical technology at the software company Salesforce and who advised the U.S. Chamber of Commerce on AI, is optimistic that artificial intelligence will, on balance, be a huge benefit. “AI can deliver better, faster constituent services that will transform the way governments serve our communities in really exciting and meaningful ways,” she says. “AI allows government employees, who are often resource-constrained, to focus their time on the high-touch work that is needed most.”
Integrating AI into government operations will require major changes in the federal workforce of nearly 3 million employees. It will also require anticipating the future direction of a fast-changing technology, a kind of flexibility that even Silicon Valley has been struggling with. “AI is horizontal, and it will influence everything—including transformative innovations yet to come, like quantum computing,” says Michelle Lopes Maldonado, a member of the Virginia House of Delegates who sits on the AI subcommittee of the state’s Joint Commission on Technology & Science.
In 2023, the Biden-Harris Administration issued an Executive Order that established a new AI and Tech Talent Task Force. From January to March of 2024, the number of government AI job applications doubled compared to the year before. By April, more than 150 individuals had been hired for AI-related federal positions. Alejandro Mayorkas, Secretary of the U.S. Department of Homeland Security, recently announced a new AI Corps, which is hiring workers with AI skills to counter major challenges—including fentanyl trafficking, child sexual exploitation and abuse, and cybersecurity—and to help secure travel and protect critical infrastructure. Homeland Security received more than 3,000 applicants for 50 available positions within a few months.
Automation may eliminate some of today’s jobs while creating novel opportunities.
The National AI Talent Surge, a hiring effort created as part of that same Executive Order, aims to build on these promising results by recruiting more AI professionals all across the federal government. That effort focuses on data scientists, legal experts, social scientists, economists, and engineers—individuals who will not only improve the government’s AI capabilities but also help shape regulations for its safe and ethical use. Maldonado is particularly excited about AI.gov/apply, an online portal created by the AI Talent Surge team to streamline the often-daunting federal job application process.
Recruitment alone isn’t enough, however. The federal government must also be able to retain AI-skilled individuals to keep its projects running smoothly. Betsy Cooper, founding director of the Aspen Tech Policy Hub, points out that salary discrepancies between government and private-sector jobs make it challenging for the federal government to attract and hold on to top AI professionals. But government jobs could draw workers with other benefits. Experts have noted that private-sector salaries often come with unstable or stressful work environments. Data from the 2022 Culture 500 project, which studies primarily for-profit companies, indicate that a toxic workplace culture is 10 times more predictive of turnover than compensation. Cultivating an office culture that prioritizes worker well-being, satisfaction, and job security could make government jobs more attractive.
Mutale Nkonde, CEO of the nonprofit communications agency AI for the People, adds that the government has an obligation to provide equitable access to AI jobs, particularly for marginalized groups. She warns that gentrification is pushing Black people out of the urban centers where many AI-related jobs are located, which could reduce their opportunities for both public and private-sector work. “You can’t take part. You can’t contribute,” Nkonde says. “So how can the federal government invest in some of these other areas and create housing or other strategies to ensure that the entire U.S. population can afford to live where the jobs will be?”
In parallel with the push to bring in new talent, Nkonde emphasizes the importance of developing AI skills among existing federal employees. Several new initiatives offer training or professional development opportunities for federal employees, including the AI Federal Workforce Initiative and the Office of Personnel Management’s AI Training Initiatives. Shalin Jyotishi, a policy strategist at the think tank New America, advocates for job-development programs along the lines of the Workforce Innovation and Opportunity Act, which helped workers gain access to education and training and connected employers to the right talent.
Matthew P. Shaw, a lawyer and law professor at Vanderbilt University, notes that AI is already reshaping job roles and transforming what “work” looks like, both inside and outside the government. At some level, much of the existing federal workforce will need to adapt to the era of AI. Yet Cooper cautions that managers planning job-training programs need to recognize that “the vast majority of federal government workers are not going to have experience with AI or related technologies.”
Many workers will require training in the basics of artificial intelligence, “often for positions that don’t exist yet,” Shaw says. Automation may eliminate some of today’s jobs while creating novel opportunities—especially in areas like regulation and compliance, where building and maintaining AI systems will be crucial. The Department of Defense and the Department of Homeland Security are actively launching training programs to develop workers’ AI skills. Still, far more targeted effort will be required to build AI technical proficiency across the federal government.
AI could eventually improve a wide range of government transactions, from Medicare to Social Security to taxes.
Looking further ahead, Jyotishi highlights the need for an education system that provides the expertise that workers will need for the AI-driven jobs of tomorrow. He advocates for investments in AI education in community colleges, calling them an “underestimated vehicle for workforce transition in the AI era.” The AI Education Act of 2024, a bill that would direct the National Science Foundation to support such training, aims to bolster those efforts. Jyotishi sees great potential in AI workforce partnerships, such as those between labor unions and community colleges, “to ensure vulnerable and marginalized communities aren’t left behind.”
Maldonado stresses the importance of the earlier stages of education as well. In 2023, the Biden-Harris Administration announced $277 million in grants to achieve “educational equity and innovation,” with $90.3 million going toward STEM. This builds on the federal YOU Belong in STEM initiative, designed to “help implement and scale equitable, high-quality STEM education for all students from Pre-K to higher education—regardless of background.” At the state level, Maldonado highlights GO TEC, an initiative in her home state of Virginia that prepares middle school students for jobs in IT, advanced manufacturing, and other STEM fields. “We currently have robotics and other tech initiatives that are often considered extracurricular activities. These should be part of the regular curriculum,” she says.
AI education programs also offer an opportunity to draw in groups that are often underrepresented in STEM. With a background in sociology, Nkonde has spoken with students from high school through graduate school about algorithmic bias and biased tech design. A social justice approach to STEM education, such as Aspen’s Our Future is Science program, allows future science leaders to grasp the societal impacts of their work, Nkonde notes: A workforce that blends technical skills with ethical awareness will help ensure AI serves the public good.
Artificial intelligence is already making noticeable improvements in the way the government works. Salesforce’s Gillum reports that AI tools have helped the Transportation Security Administration “reduce response time and enrich security efficiency,” taming some of the frustrations of the more than 2 million people who pass through U.S. airports daily. AI could eventually improve a wide range of government transactions, she notes, from Medicare to Social Security to taxes. But that scenario requires more than the right employees. It also requires the right systems and applications.
The sprawling, fragmented nature of the federal government makes it impractical to implement a unified set of AI policies and practices. On the other hand, AI itself could be effective at breaking down institutional barriers. Maldonado, who was a tech lawyer before she entered politics, asserts that “AI can absolutely bridge gaps between federal agencies, but we need to move past the ‘fiefdom’ mentality, where each agency is territorial about its work. To fully leverage AI, we need to foster a mindset of collaboration and co-creation, which is already more common in the private sector.”
AI teams need members with varied educational, cultural, racial, and gender backgrounds.
The National Artificial Intelligence Research Resource, a National Science Foundation pilot program launched in January 2024, brings together 13 federal agencies and more than two dozen private, nonprofit, and philanthropic partners. Its goal is to expand access to AI tools for researchers and students, focusing on work that addresses major societal challenges. The two-year program will assess the feasibility and value of such large-scale collaborations.
Such efforts have their work cut out for them. “It’s no secret the federal government lags significantly in digital transformation,” Jyotishi says. “Government websites, grants management systems, and reporting processes are woefully outdated.” Current college students have experienced those lapses firsthand with the recent implosion of the Department of Education’s Free Application for Federal Student Aid (FAFSA) system, which helps students apply for higher-education financial aid. The system ran on antiquated technology, and the department’s ambitious effort to overhaul it rendered it largely inoperable. Schools were unable to process financial aid applications for weeks, and in some cases months, this past year, leaving students unsure whether they could afford to enroll.
The FAFSA fiasco is a cautionary tale about the need for multi-level safeguards against disruptions in vital federal systems. Maldonado suggests that the government create centralized protocols and guidelines that establish best-practice standards (including technological infrastructure and workforce training), which can then guide AI implementation in various agencies. She proposes a federal agency or office focused on AI and emerging tech to lead these efforts. “This way, not everyone will have their own AI czar or a separate set of processes,” she says.
Federal agencies also need to adopt long-term AI strategies, which will require reducing their dependence on external contractors and suppliers, according to B Cavello, a former AI developer at IBM and TechCongress fellow. Their experience is a case in point. The TechCongress program was created in 2016 to provide the federal government with emerging tech experts. Since then, it has brought 109 scientists, engineers, and technologists into Congress in temporary advisory positions.
“Rapid advancements in science and technology demand dedicated expertise,” says Michael Akinwumi, Chief AI Officer at the National Fair Housing Alliance. “Unlike temporary roles, permanent positions provide institutional knowledge, foster long-term relationships, and promote proactive policy development.”
Amazon’s experience with its experimental talent recruitment tool showcases one of the great challenges with expanding the role of artificial intelligence: Bias can easily infiltrate allegedly neutral technologies, distorting or subverting their intended goals. A decade ago, Amazon began developing machine-learning algorithms to automate its hiring process. Company officials abandoned the project in 2018, after an internal review revealed that the system disproportionately favored men. The problem was that the AI system was trained on data reflecting the company’s early, male-dominated workforce.
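To see how this happens, consider a deliberately simplified sketch in Python. The resumes, keywords, and scoring scheme below are invented for illustration, and this is not Amazon’s actual system; the point is only that a screener that learns keyword weights from past hiring decisions absorbs whatever imbalances those decisions contain.

```python
# A toy illustration (hypothetical data, not Amazon's system) of how a
# resume screener trained on a male-dominated hiring history learns to
# penalize words associated with women.
from collections import Counter

# "Historical" decisions: (keywords on the resume, was the candidate hired?)
# Because the past workforce skewed male, gendered keywords like "women's"
# appear mostly on rejected resumes.
history = [
    ({"python", "chess"}, True),
    ({"java", "football"}, True),
    ({"python", "women's", "chess"}, False),
    ({"c++", "debate"}, True),
    ({"java", "women's", "soccer"}, False),
    ({"python", "robotics"}, True),
]

# "Train": each keyword gains +1 when it appears on a hired resume,
# -1 when it appears on a rejected one.
weights = Counter()
for keywords, hired in history:
    for word in keywords:
        weights[word] += 1 if hired else -1

def score(resume):
    """Rank a new resume by summing its learned keyword weights."""
    return sum(weights[word] for word in resume)

# Two candidates identical except for one gendered keyword are ranked
# differently: the bias in the data is now a bias in the model.
print(score({"python", "chess"}))             # 1
print(score({"python", "women's", "chess"}))  # -1
```

The fix is not obvious, either: deleting explicitly gendered keywords leaves proxies, such as clubs, colleges, and sports, that correlate with the same groups, which is reportedly one reason Amazon scrapped the tool rather than trying to patch it.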
The issue is not unique to gender or to hiring, Nkonde notes. Other automated recruitment tools have been found to discriminate against candidates with “African-sounding names.” Similar biases have emerged in software systems for criminal justice, immigration, and housing. The scale of the federal government, combined with its mandate to serve all the people, gives it a unique obligation to root out such damaging potential impacts of AI.
Akinwumi’s work at the National Fair Housing Alliance is an example of how AI can be applied the other way, to counteract systemic biases. He developed a system that identifies discriminatory patterns in housing and lending decisions and that enables the NFHA to design appropriate legal responses. His goal is “to promote responsible AI use, ensuring fairness, transparency, and public trust.” Similar AI techniques could be used to analyze and reform outdated zoning codes, to streamline permitting, and to optimize construction that improves housing affordability and accessibility.
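One standard screen for such patterns is the “four-fifths rule” used in U.S. disparate-impact analysis, which flags a decision system when any group’s approval rate falls below 80 percent of the highest group’s rate. The sketch below uses invented loan records, and the NFHA’s actual tooling is certainly more elaborate; it shows only the shape of the test.

```python
# A minimal four-fifths-rule check over hypothetical loan decisions.
# Invented data for illustration; not the NFHA's system.
from collections import defaultdict

# (applicant group, was the loan approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally approvals and totals per group.
tallies = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    tallies[group][0] += int(approved)
    tallies[group][1] += 1

rates = {g: approved / total for g, (approved, total) in tallies.items()}
best = max(rates.values())

# Flag any group whose approval rate is under 80% of the best rate.
for group, rate in sorted(rates.items()):
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "potential disparate impact"
    print(f"{group}: approved {rate:.0%}, ratio {ratio:.2f} -> {status}")
```

A threshold test like this could also gate procurement: run it before rollout and refuse to deploy a system that fails, which is the spirit of the impact assessments described below.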
Nkonde argues that software engineers should pursue well-defined principles of ethics and equity in each phase of AI development. The National Institute of Standards and Technology is attempting to institutionalize such principles by requiring all of the agency’s projects to adhere to a list of federal “AI Commitments.” But enforcing such standards requires significant effort.
Cooper, who served as the founding executive director of the University of California, Berkeley’s Center for Long-Term Cybersecurity, says that “AI’s reliability remains questionable, and every AI-generated output still requires human vetting.” One way to ensure that AI systems have been carefully evaluated is to require government impact assessments. Nkonde, an AI policy advisor, has pushed this approach as the lead supporter of the proposed Algorithmic Accountability Act, introduced by Representative Yvette Clarke from New York City. This legislation would require assessments to confirm that machine learning technologies adhere to non-discrimination laws before public rollout. Although the Act has not passed, its principles have been integrated into all major privacy proposals from the U.S. House Energy and Commerce Committee.
“If an AI system fails to meet this standard, it should not be procured or used—not because it’s the ethical thing to do, but because it’s required by law,” Nkonde says. Gillum agrees, noting that “the risks governments face implementing AI are not all that different from those companies face—though in some cases, the stakes are even higher.” For instance, AI will very likely be incorporated into government services and benefits that directly impact people’s livelihoods.
Weeding out bias requires diligent, ongoing effort, as evidenced by Google’s recent missteps with its experimental Gemini AI tool. Images generated by Gemini inaccurately depicted historical figures, representing Black men as Nazi-era German soldiers, the Pope as an Asian woman, Native Americans as Vikings, and President George Washington as a Black man. Google apologized and launched a new version of Gemini in August 2024. Some critics saw the historical misrepresentations as a failed attempt to correct earlier biases that had suppressed images of minorities; the overcorrection simply produced new stereotypes.
“If you only have technologists at the table, you’re going to have multiple blind spots. If you only have policymakers at the table, you’ll also have multiple blind spots,” Maldonado observes. AI teams need members with varied educational, cultural, racial, and gender backgrounds, she notes, because otherwise “we may not recognize certain gaps because of our life experiences.” Nkonde also recommends incorporating social scientists into AI development. “Without the involvement of social scientists,” she says, “we risk creating AI systems that are technically advanced but socially disconnected.”
Ultimately, AI will be useful in government only if it improves how government serves the people—all of the people. Reflecting on that high-level goal, Cavello, who now directs Emerging Technologies at Aspen Digital, pushes back on a common complaint that assessing the ethics of AI systems slows the pace of innovation. “Innovation is rushing toward the hard problems,” they say. “To me, working on protecting civil rights and privacy … that is innovation.”
This article is part of Science at the Ballot Box, an initiative of the Aspen Institute published in partnership with Nautilus.
Lead image: jamesteohart / Shutterstock