Cracking the Code: Secure Software Architecture in a Generative AI World
Generative AI (GAI) technologies are being applied across a range of sectors: national defense, healthcare, transportation, and more. While the surge brings advantages, it also presents unique cybersecurity challenges. As a software architect deeply involved in designing secure, complex systems, I recognize the urgency of understanding and addressing these challenges. I'm not the only one! U.S. Secretary of Commerce Gina Raimondo announced the launch of a new Public Working Group on Generative Artificial Intelligence sponsored by the National Institute of Standards and Technology (NIST). NIST, the Cybersecurity and Infrastructure Security Agency (CISA), and OWASP all continue to release guidance.
Publishing guidance alone is not enough. GAI adoption must start with stakeholder education so everyone grasps the potential security risks. This is not just an IT issue; it's an organization-wide concern affecting everything from operational security to compliance. Beyond GAI education, what architectural considerations and principles need focus when adopting GAI?
The bedrock of any resilient system is laid at design time. Security cannot be bolted on or tested into existence through automated pipelines; it must be embedded in the architecture by applying secure design principles. An industry go-to reference is CISA's comprehensive "Secure by Design" guide.
Identify Security Objectives: Begin the design process by identifying key security objectives, focusing on the assets that require protection. This helps to categorize and prioritize resources based on their criticality and vulnerability. By doing so, you can better align your security architecture with the specific threats your organization faces.
Security-by-Layer: A multi-layered defense, known as 'defense in depth,' should be central to your architecture. This involves implementing security controls at different layers of your system, ranging from the perimeter to the application level. Distributing defenses makes it harder for attackers to penetrate the system and provides alternative security measures in case one layer is compromised.
Automate and Verify: Leverage automated security controls to monitor and respond to anomalies and potential threats in real time. Organizations must reduce human lag in detecting and neutralizing attacks. In tandem, adopt 'Zero Trust' architecture (ZTA). ZTA requires verification for every entity trying to access resources in your ecosystem, regardless of its origin. Complement this with the 'Least Privilege' principle, limiting access rights to what's strictly required by role. This minimizes the potential damage from accidental mishaps or intentional malfeasance.
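The verify-everything, deny-by-default spirit of Zero Trust and Least Privilege can be sketched in a few lines. This is a minimal illustration, not a production authorization system; the role names and resource labels below are assumptions made for the example.

```python
# Minimal sketch of "Least Privilege": access is denied by default and
# granted only through an explicit allow-list per role. Role names and
# resource labels are illustrative, not prescriptive.

ROLE_PERMISSIONS = {
    "analyst": {"read:reports"},
    "data_engineer": {"read:reports", "write:training_data"},
    "admin": {"read:reports", "write:training_data", "deploy:model"},
}

def is_authorized(role: str, action: str) -> bool:
    """Deny by default; allow only actions explicitly granted to the role."""
    return action in ROLE_PERMISSIONS.get(role, set())

# Every access attempt is verified, regardless of where it originates.
print(is_authorized("analyst", "read:reports"))   # True
print(is_authorized("analyst", "deploy:model"))   # False
```

The important design choice is the default: an unknown role or unlisted action yields a denial, so forgetting to configure something fails closed rather than open.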
Secure by design is a philosophy that encourages a holistic approach to security. The resulting architecture and software will be adaptable to evolving cybersecurity challenges.
The data used to train GAI models is more than just input—it’s the foundation for algorithmic understanding. If this data is compromised or mismanaged, the implications can be catastrophic. There are tactics to reduce risk.
Data Classification: You must classify the data based on sensitivity and accessibility. Data classification tools can automatically categorize data. This enables confidential or classified information to be adequately secured. This not only safeguards the data but also aids in compliance with various security protocols.
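A rule-based classifier is one simple way to begin; real data classification tools use far richer rule sets and machine-learning models. The sensitivity labels and detection patterns below are assumptions for illustration only.

```python
import re

# Illustrative sketch of rule-based data classification: each record is
# tagged with the first (highest) sensitivity level whose rule matches.
# The labels and regex patterns are assumptions for demonstration.

RULES = [
    ("restricted", re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),      # SSN-like pattern
    ("confidential", re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")),  # email address
]

def classify(text: str) -> str:
    """Return the sensitivity label for a record, defaulting to 'public'."""
    for level, pattern in RULES:
        if pattern.search(text):
            return level
    return "public"

print(classify("Contact jane@example.com for details"))  # confidential
print(classify("Quarterly metrics improved 12%"))        # public
```

Once records carry labels like these, downstream controls (encryption, access policies, retention rules) can key off the classification rather than off ad hoc judgment.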
Ethical Audits: Data bias is another security concern with far-reaching consequences. Ethical audits scrutinize training data for any kind of implicit bias. This is not just a social responsibility but a security imperative, as biased algorithms can make unpredictable and potentially hazardous decisions.
Proactively implementing data classification and ethical audits mitigates the risks inherent in training a GAI system and improves the security posture of the resulting model. Models trained this way are better able to aid in complex decision-making without unforeseen consequences.
Keep a Strong Backbone
GAI is increasingly sophisticated, rendering some traditional security measures insufficient. Generative pre-trained models like OpenAI's GPT-4 are based on neural networks that create content mimicking human language and decision-making. These models are engineered to understand and respond to context. The trouble is that nefarious actors are leveraging GPT models as well.
GAI systems require continuous supervision of system activities and data flows to quickly detect and address any irregularities. The first step is to understand what constitutes "normal" behavior for a system. Anomalies against that baseline trigger alerts, including situations where a GAI algorithm is simulating human behavior.
Comprehensive logging: Historical data on the system's behavior is needed to train alerting mechanisms; it also enables forensics when something does go wrong. It's crucial to track the behaviors and audit trails of authenticated users to detect anomalies that could indicate compromise. A tough decision is determining how long to store logs. Why? A secure, long-term record of system logs is invaluable for forensic analysis when, not if, a security incident happens.
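One hedged sketch of audit logging, assuming a JSON-lines format so that later forensic queries and alert-training jobs can parse entries mechanically. The field names are illustrative, not a standard.

```python
import json
import logging
import time

# Sketch of structured audit logging: each authenticated action becomes
# one JSON line. Field names ("ts", "user", "action", "resource") are
# assumptions for this example, not a prescribed schema.

logger = logging.getLogger("audit")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def audit(user: str, action: str, resource: str) -> str:
    """Emit one machine-parseable audit record and return it."""
    record = json.dumps({
        "ts": time.time(),      # timestamp for timeline reconstruction
        "user": user,           # authenticated principal
        "action": action,       # what was attempted
        "resource": resource,   # what it was attempted on
    })
    logger.info(record)
    return record

audit("t.user", "read", "model-weights-v4")
```

Writing one self-describing record per action keeps the retention question separable: the format stays cheap to store and compress, whatever retention window your incident-response policy settles on.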
Behavior Analysis and Alerting: Behavior analysis tools can monitor typical user behavior and flag any deviations, which could indicate a potential security breach. Automated alert mechanisms can be designed to flag unusual patterns in data traffic or user behavior. Immediate notifications can potentially stop a data breach. These tools use algorithms to learn from ongoing user activities, making them more efficient over time.
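A toy baseline-and-deviation check illustrates the core idea behind such alerting. The metric (hourly request counts), the history values, and the three-standard-deviation threshold are all illustrative assumptions, not a production detector.

```python
from statistics import mean, stdev

# Toy sketch of baseline-and-deviation alerting: learn what "normal"
# looks like from history, then flag observations far from the mean.
# The 3-sigma threshold is a common but arbitrary starting point.

baseline = [102, 98, 110, 95, 105, 99, 101, 97]  # hypothetical hourly request counts

def is_anomalous(value: float, history: list, threshold: float = 3.0) -> bool:
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

print(is_anomalous(104, baseline))  # False: within the normal range
print(is_anomalous(500, baseline))  # True: likely worth an alert
```

Real behavior-analysis tools replace this static baseline with models that keep learning from ongoing activity, which is what makes them more effective over time.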
Tangentially, monitoring and alerting should also be triggered if there are changes in the system’s compliance to standards and regulations; this is especially valuable when working with government agencies and highly regulated domains. Real-time monitoring, comprehensive logging, behavior analysis, and alerting form the backbone of robust cybersecurity.
Code vulnerabilities serve as entry points for attackers. Given the complexity of GAI models, these vulnerabilities can be nuanced. We are in the early days of using code generation to inject vulnerabilities. Now is the time to take action by keeping humans in the loop with static code analysis and code reviews.
Static Code Analysis: Conducting static code analysis (SCA) can help identify vulnerabilities in the code without running the program. This is crucial, as running a program with vulnerabilities could compromise the entire system. SCA also enables compliance monitoring against standards such as the Federal Information Processing Standards (FIPS) and other NIST guidelines.
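To make the idea concrete, here is a minimal sketch of static analysis using Python's standard `ast` module: it walks a program's syntax tree and flags calls to risky builtins without ever executing the code. The two-entry rule set is deliberately tiny; real static analysis tools apply thousands of such rules.

```python
import ast

# Minimal static-analysis sketch: parse source into a syntax tree and
# flag calls to dangerous builtins -- the code is never executed.

DANGEROUS_CALLS = {"eval", "exec"}

def find_dangerous_calls(source: str) -> list:
    """Return line numbers of calls to dangerous builtins."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id in DANGEROUS_CALLS):
            findings.append(node.lineno)
    return findings

sample = "x = 1\nresult = eval(user_input)\n"
print(find_dangerous_calls(sample))  # [2]
```

Because nothing is executed, a check like this can run safely in a CI pipeline on every commit, including on code produced by a GAI assistant.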
Code Reviews: Peer-reviewed coding practices allow for a second set of eyes to catch potential vulnerabilities, reducing the likelihood of a security breach. Make this a mandatory step in your DevSecOps process to catch and fix issues before they escalate.
The intricate nature of GAI models amplifies the risks associated with code-level vulnerabilities. In an era of automation, it may seem counterintuitive to maintain human oversight, but code reviews, ethical audits, and static code analysis are proven techniques for fortifying your cybersecurity posture.
Call to Action for All!
The next frontier in cybersecurity is here, and it’s moving at the pace of Generative AI. The many considerations provided are the tip of the iceberg. What is most important is to get moving now. Don’t wait for the threats to evolve beyond your defense capabilities. As W. Edwards Deming wisely said, “Plan, Do, Check, Act.”
Plan: Educate your team, understand the risks, and plan your “Secure by Design” architecture. No plan survives contact with reality; however, failing to plan is planning to fail.
Do: Implement your security measures, layer your defenses, and launch your Generative AI projects. Execution without foresight is aimless, but planning without execution is useless.
Check: Monitor in real-time, log comprehensively, and audit periodically. Vigilance is not just a task, but a culture that needs to be infused into your organization.
Act: Adjust, improve, and evolve your strategy as you learn from the checks. This includes revisiting your foundational plans to ensure they adapt to new threats and vulnerabilities.
The time for a proactive, comprehensive approach to cybersecurity is now. Equip your organization with the knowledge and tools to defend against tomorrow’s threats today.
The future of cybersecurity starts with you. Join forums, participate in NIST’s Public Working Group on AI, or reach out to cybersecurity experts for consultation.
Embracing the potential of generative AI technologies means acknowledging and mitigating the inherent cybersecurity risks early. Pay close attention to these considerations and ideas now. GAI is still in its infancy; although the pace of innovation and improvement is staggering, there is still time to prepare and act.
From the foundational architecture to real-time monitoring, every facet of system design requires a “Secure by Design” philosophy. This is not a challenge for IT departments alone; it’s a collective responsibility that demands understanding and action from every stakeholder involved. Whether you’re working on national defense, healthcare, or any other industry, the considerations discussed in this article are universal and urgent.
About the Writer:
Passionate Architect!!! Tracy (Trac) Bannon is a Senior Principal with MITRE Labs and an IEEE Senior Member. She is an accomplished software architect, engineer, DevSecOps practitioner, and digital transformation advisor, having worked across commercial and government clients. She thrives on understanding complex problems and working to deliver value at speed.
She’s passionate about mentoring and training and enjoys community and knowledge-building with teams, clients, and the next generation. Trac guest lectures at universities, leads working groups, and shares experience stories.
Trac is a long-time advocate for diversity in technology, helping to narrow the gaps as a mentor, sponsor, volunteer, and friend.
Ms. Bannon is a featured industry speaker, blogger, author, and accomplished podcaster, including Real Technologists and It's 505! Daily Podcast (part of Sourced Network Production).
NOTE: The image in this article was created using OpenAI’s DALL-E model.
Disclaimer: The author is completely responsible for the content of this article. The opinions expressed are their own and do not represent IEEE’s position nor that of the Computer Society nor its Leadership.