A five-step plan for securing generative AI at government agencies


The U.S. Air Force and U.S. Army have launched Generative artificial intelligence (AI) chatbot platforms to increase workforce productivity when they write correspondence, publish papers and even code software. Homeland Security Investigations (HSI) is leveraging a Large Language model (LLM)-based system to improve the efficiency and accuracy of investigatory lead summaries. And the Federal Emergency Management Agency (FEMA) is deploying LLM to support state and local communities as they develop hazard mitigation plans.

These and other examples reveal how Generative AI (GenAI) – even though the technology is still considered relatively nascent – is benefitting government users across-the-board. Simply defined, GenAI is a form of AI which produces a wide range of content, including text, images, audio and video. With the U.S. Office of Personnel Management (OPM) citing advantages such as increased efficiencies, better brainstorming and more refined writing, GenAI is poised to automate and/or augment 39 percent of working hours for the public sector and is forecast to account for $100 billion in productivity benefits for agencies.

Given this, it should come as no surprise that nearly nine of ten public sector IT decision-makers believe it’s important for their organization to adopt GenAI, with 58 percent indicating they’ve either already adopted the technology or they expect to within the next two years, according to research from Amazon Web Services (AWS). However, 83 percent say their agency is concerned about public trust in GenAI, with about one-half citing the potential for data security and privacy issues as the leading causes of the concerns.

Such perspectives are valid, as AI-enhanced malicious attacks are now considered the top emerging risk for enterprises, according to Gartner. What’s more, 73 percent of workers believe GenAI introduces new security risks, according to Salesforce. Of those who plan to use the technology, nearly 60 percent admit that they don’t know how to do so using trusted data sources, nor can they ensure sensitive data is protected.

All of which means the pressure is on for agencies to contain risks while not hindering the speed of innovation. With this in mind, here’s our five-step security-standards plan for optimal – and protected – GenAI deployment:

  1. Gain observability across GenAI touchpoints. Establish real-time monitoring of all GenAI touch points throughout your agency to closely track the usage of these tools. Continuous observations will identify anomalies which could signal suspicious activity.
  2. Assess the threat landscape. It’s essential to acquire a complete understanding of your agency’s existing threat landscape. Resources such as the OWASP Top 10 LLM Vulnerabilities and Security Checklist will help identify possible issues – including malicious prompt injections, insecure output handling and data poisoning – while anticipating emerging risks. As a result, government tech and security teams are better positioned to safeguard source code, third-party GenAI-based applications and original model development, among other components.
  3. Implement guardrails. As agency employees seek to leverage or integrate these tech tools, security teams must establish classification and access controls to determine authorized and unauthorized roles, departments and classes. Similarly, they have to define roles and responsibilities for those involved with GenAI development and deployment. As part of a zero trust (ZT) strategy, they should limit access to strictly authorized personnel – with only those with proper clearance accessing these powerful capabilities.
  4. Invest in routine training/awareness programs. In fostering a culture of awareness, agencies should invest in training so employees know how to responsibly use GenAI, with sessions focused on safe and ethical deployments. In addition, agencies should install a real-time alert system to proactively deter staffers from taking part in potentially troublesome practices and/or exposing sensitive data.
  5. Take a dedicated LLM approach. LLM and GenAI are constantly evolving. That’s why seamless integrations with dedicated GenAI security posture management, governance, AI firewalls, automated red teaming, and other GenAI components prove critical – empowering government teams to proactively identify, assess and continuously mitigate risks which result from this developing technology.

Agencies are only starting to unleash the possibilities of GenAI. However, cyber criminals are already targeting the many vulnerabilities in GenAI and LLMs as they present new and appealing opportunities for harm. To help inform AI Security strategies agencies can consult the NIST AI Risk Management Framework, OWASP Best Practices, MIT AI Risk Repository and other useful industry guidance. 

By implementing a GenAI security plan – with comprehensive observability threat assessments, guardrails, training/awareness, and dedicated LLM approach – agencies will ensure that the wealth of content they generate is trustworthy and protected.



Source link

Leave a Reply

Your email address will not be published. Required fields are marked *