Resources for Professionals Interested in Security and Privacy

Security and privacy are getting renewed attention. The strong defenses and reduced exposure that AI demands are too risky for organizations to deny or delay, and the security and privacy job market is beginning to boom again.

There’s no overstating AI’s role in security and privacy today, whether it’s being used by

  • hackers to optimize attack techniques and volume,
  • cybersecurity teams to automate threat detection and response,
  • or identified by both as a new attack surface in organizations.

The result? According to Robert Half, the security and privacy job market is beginning to boom again for one main reason: the strong defenses and reduced exposure that AI demands are too risky for organizations to deny or delay.

On this resource page you’ll learn…

  • Which security and privacy knowledge areas are foundational? Master core principles and knowledge.
  • Which trends are shaping security and privacy’s future? Managing the risks of agentic AI, implementing zero trust, and going all-in on privacy-by-design.
  • What challenges does the field face? Adaptive AI threats and vanishing network perimeters top the list.
  • What are promising career paths in security and privacy? Look for key roles at the intersection of risk management, data strategy, and large-scale AI transformation.
  • Which ethical challenges are most urgent? AI-driven deceptions, excessive data collection, and the challenge of accountability in the AI black box.
  • How can I stay up-to-date on security and privacy news and research? Access the latest standards, SME insights, and industry trends.

What Are Key Knowledge Areas in Security and Privacy?


“Modern security and privacy careers require deep knowledge of computing systems, networking, and cloud infrastructure, along with the ability to analyze and secure distributed systems against real-world threats.” – Dr. Yuhong Liu, Associate Professor of Computer Engineering, Santa Clara University

Security and privacy address how organizations use and protect digital information:

  • Cybersecurity focuses on protecting systems, networks, and data from unauthorized access or disruption.
  • Privacy focuses on protecting an individual’s personal information and how it is collected, used, stored, and shared.

Because personal data protection requires robust security measures and responsible data governance, security and privacy are often addressed together. Understanding differences and points of intersection can help practitioners better design technically secure systems that respect user rights.

Cybersecurity: Protecting Systems, Networks, and Data


As cybersecurity expert Eugene H. Spafford famously noted:

“The only truly secure system is one that is powered off, cast in a block of concrete and sealed in a lead-lined room with armed guards—and even then I have my doubts.”

As is also widely noted, cybersecurity is a process, not a product or a fully achievable goal. It is a process that evolves along with the threat landscape and is built on the following principles and foundational knowledge areas.

Cybersecurity’s core CIA principles

  • Confidentiality: Protect against unauthorized access to information.
  • Integrity: Ensure that data is trustworthy, complete, and unaltered accidentally or by unauthorized users.
  • Availability: Ensure that data is accessible when it is needed.
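The integrity principle, for example, can be enforced in code by attaching a message authentication tag to data and rejecting anything that fails verification. Below is a minimal sketch using Python’s standard hmac module; the key and record are illustrative placeholders, and a real deployment would manage keys in a secrets store.

```python
import hashlib
import hmac

SECRET_KEY = b"example-shared-key"  # hypothetical key; store real keys in a vault

def sign(data: bytes) -> str:
    """Produce an HMAC tag so later readers can detect tampering (integrity)."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, tag: str) -> bool:
    """Recompute the tag and compare it in constant time."""
    return hmac.compare_digest(sign(data), tag)

record = b"account=1234;balance=100"
tag = sign(record)
assert verify(record, tag)                           # untampered data passes
assert not verify(b"account=1234;balance=999", tag)  # altered data fails
```

Using hmac.compare_digest rather than == avoids leaking information through timing differences during the comparison.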

Essential cybersecurity knowledge areas

  • Authentication and identity management
  • Encryption and secure communications
  • Network and system security
  • Vulnerabilities, threats, and risk management
  • Security monitoring and incident response

Privacy: Protecting Personal Data and User Rights


Privacy focuses on how personal information is collected, used, governed, and protected.

Core privacy principles

  • Data minimization: Collect only necessary information.
  • Purpose limitation: Use data only for defined purposes.
  • Transparency: Be clear and truthful about how data is collected, used, shared, and stored.
  • User rights: Give users the ability to access, correct, and delete their data.
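Data minimization and purpose limitation can be made concrete with a simple field allowlist: each declared purpose maps to the only fields a system is permitted to keep. This is a hypothetical sketch; the purposes and field names are invented for illustration.

```python
# Hypothetical allowlist mapping each declared purpose to permitted fields
# (purpose limitation).
ALLOWED_FIELDS = {
    "shipping": {"name", "address", "postal_code"},
    "newsletter": {"email"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Keep only the fields needed for the stated purpose (data minimization)."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    return {k: v for k, v in record.items() if k in allowed}

profile = {"name": "Ada", "address": "1 Main St", "postal_code": "95050",
           "email": "ada@example.com", "birthdate": "1990-01-01"}
print(minimize(profile, "newsletter"))  # {'email': 'ada@example.com'}
```

An unknown purpose yields an empty record, so data collection fails closed rather than open.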

Essential privacy knowledge areas

  • Personal data and sensitive data classification
  • Data governance and lifecycle management
  • Privacy-by-design practices
  • Regulatory frameworks and compliance
  • Ethical data use

Security and Privacy Integration: Protecting Data and People


In practical terms, cybersecurity and privacy are interconnected:

  • Security protects data from breaches or attacks
  • Privacy ensures personal data is handled responsibly

Integrated security–privacy practices

  • Secure data architecture
  • Privacy-by-design implementation
  • Data protection engineering
  • Risk assessment and compliance management
  • Incident response and breach notification

Learn more:

What Are Key Trends in Security and Privacy?


“The next security frontier is agentic and physical AI. We’ve moved beyond chatbots to autonomous agents that can vibe code, independently generating software and deploying it.” – Dr. Yuhong Liu, Santa Clara University

As is true across technical fields, the expansion of AI and cloud computing is fueling numerous trends in the security and privacy space. Following are three examples, along with resources to learn more.

Managing the Risks of Agentic AI


Increasingly practical and accessible, AI agents are proliferating across organizations and creating new avenues for attack. To counter this, strong governance and cybersecurity oversight are essential.

According to Gartner, key priorities for cybersecurity teams include the following:

  • Identify authorized and unauthorized AI agents
  • Develop robust systems to manage, monitor, and restrict agents
  • Create codified incident responses to address agent-related risks
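The first two priorities above, identifying authorized agents and flagging unauthorized ones, can be approximated with a simple agent registry. The sketch below is purely illustrative; a real deployment would tie the registry into identity management and incident-response tooling.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRegistry:
    """Hypothetical registry: track authorized AI agents, flag unknown ones."""
    authorized: set = field(default_factory=set)
    flagged: list = field(default_factory=list)

    def register(self, agent_id: str) -> None:
        """Record an agent that has passed governance review."""
        self.authorized.add(agent_id)

    def check(self, agent_id: str) -> bool:
        """Allow only registered agents; queue others for incident response."""
        if agent_id not in self.authorized:
            self.flagged.append(agent_id)
            return False
        return True

registry = AgentRegistry()
registry.register("support-summarizer-v2")      # invented agent name
assert registry.check("support-summarizer-v2")  # authorized agent passes
assert not registry.check("unknown-agent-7")    # unknown agent is flagged
```

The flagged list is the hook for the third priority: a codified incident response can consume it to investigate or restrict agents.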

Learn more:

Implementing Zero Trust and Cloud-Native Security


A zero trust architecture removes trust from system design and implementation, replacing it with a “never trust, always verify” approach that assumes every access request is a breach until verified. As cloud adoption continues to accelerate, zero trust is becoming the norm for risk management.

Zero trust’s three principles and their practical implementation are as follows:

  • All resources are accessed securely, regardless of network location.
  • Access is granted per-session based on the principle of least privilege.
  • Authentication and authorization are dynamic and strictly enforced before access is allowed.

Learn more:

Embedding Privacy-By-Design


Privacy is no longer limited to regulatory compliance and breach avoidance or recovery. In an age of insatiable AI data consumption, with identity-based attacks and global regulations both evolving and multiplying, privacy is a strategic imperative for all organizations.

Privacy by design proactively integrates this imperative by embedding data protection into systems from the start. It is guided by seven principles:

  • Proactive not reactive; preventative not remedial
  • Privacy as the default setting
  • Privacy embedded into design
  • Full functionality: positive-sum, not zero-sum
  • End-to-end security: lifecycle protection
  • Visibility and transparency: keep it open
  • Respect for user privacy: keep it user-centric
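The “privacy as the default setting” principle translates directly into code: optional data sharing starts disabled and requires explicit opt-in. A minimal sketch, with hypothetical setting names:

```python
from dataclasses import dataclass

@dataclass
class SharingPrefs:
    """Privacy as the default: all optional sharing starts disabled (opt-in)."""
    analytics: bool = False
    personalized_ads: bool = False
    third_party_sharing: bool = False

prefs = SharingPrefs()  # a new user gets the most private configuration
assert not any((prefs.analytics, prefs.personalized_ads,
                prefs.third_party_sharing))
```

Encoding the defaults in the type itself means no onboarding flow or migration can accidentally ship with sharing turned on.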

Learn more:

What Are Key Challenges in Security and Privacy Today?


“Today, AI produces perfect, hyper-personalized deepfake audio and video. This makes the human ability to distinguish between a legitimate request and a sophisticated fraud very challenging.” – Dr. Yuhong Liu, Santa Clara University

The security and privacy field is being reshaped by ongoing and emerging challenges, including growth and evolution in AI, regulations, and system complexities.

Weaponizing AI: Adaptive Threats Expand


Using AI, attackers are now automating phishing campaigns, generating convincing social engineering content, and scaling both malware development and vulnerability discovery. Examples of these increased capabilities include

  • Anthropic’s November 2025 report that AI had reached an “inflection point” after the company discovered an espionage campaign in which attackers used AI agents not just to advise on but to fully execute sophisticated cyberattacks.
  • CrowdStrike’s annual Global Threat Report christened 2026 the year of evasive adversaries who are “supercharging their attacks using AI,” including an 89% increase in AI-enabled intrusions.

To combat this, organizations are responding in kind, deploying AI-powered cybersecurity tools to improve access control, threat detection, anomaly monitoring, and incident response.

Learn more:

Vanishing Network Perimeters

As use of cloud computing, mobile devices, and third-party services increases, the defined, internally controlled, firewall-protected network perimeter is largely dissolving. As organizations lean in to hybrid and multicloud architectures, the complexity of this security landscape only increases.

Further, using various cloud providers, operating models, and security tools can fragment a security team’s ability to sustain a unified view of risk. Centralizing teams and security platforms can help mitigate this, as can mapping a responsibility model for each vendor to clarify who is securing which stack layers. Other options for managing the complexity of the security landscape include

  • Listing all third-party components and open-source libraries to clarify attack surfaces and vulnerabilities
  • Implementing AI solutions to monitor third-party tools for security issues

Learn more:

Surging Privacy Regulations


Organizations and security teams today face a maze of laws and compliance requirements around how and why data is collected and stored. The goal is to ensure that data collection is transparent, minimized, and purpose-driven. Regulators are also shifting their focus from verifying compliance concepts to measuring how an organization’s compliance plays out in practice, including why data is collected, how that data aligns with specific business goals, and whether narrower data collection would suffice.

Learn more:

Building a Career in Security and Privacy Today


Organizations are struggling to manage AI adoption while also tracking the radically expanding attack surfaces ushered in by AI, cloud ecosystems, and always-on devices. The upside of this organizational pain is a gain for students and early-career professionals interested in cybersecurity.

According to the World Economic Forum’s Future of Jobs report, networks and cybersecurity are among the top three fastest-growing job skills. Following are some of the key positions to consider.

Data Privacy Officer


Related research: IEEE Transactions on Dependable and Secure Computing and IEEE Transactions on Privacy

Information Security Analyst


DevSecOps Engineer


Digital Forensic Examiner


Ethical Hacker


Cloud Security Engineer

What Are Ethical Issues in Security and Privacy?

“One urgent ethical issue is the central conflict between data utility and privacy. AI models require vast, diverse datasets to be accurate, but every piece of data added increases the risk of individual exposure.” – Dr. Yuhong Liu, Santa Clara University

Defending against threats and breaches introduces many ethical challenges, and understanding and addressing them is essential to ensuring that security efforts protect privacy, standards, and trust.

To strike a balance between robust security and a respect for individual rights, organizations and their security teams must carefully consider both the technical and ethical dimensions of cybersecurity.

Minimizing Data: A Matter of Privacy—and Security

Digital services collect vast amounts of personal data, often with insufficient transparency or user control. Further, the increasing use of AI tools for network monitoring and threat detection can result in excessive surveillance that captures sensitive employee information unrelated to their jobs.

In the broader digital ecosystem, as in the workplace, minimizing data collection is essential not only for privacy protection but also for data security. On the other hand? Money.

As Tech Equity points out, data collection has created a surveillance economy, wherein organizations and services collect digital, movement, biometric, and other data to make money through targeted marketing or by simply selling that data to the highest bidder.

Learn more

Interrogating AI Fairness and Accountability

Cybersecurity teams are increasingly deploying AI tools in threat detection, identity verification, and automated decision-making. This automation increases efficiency, but it can also introduce risks by obscuring transparency, introducing bias, and making high-stakes decisions without human oversight.

Whether in pedestrian or high-risk situations, accountability, explainability, and lack of bias are crucial, as is having a human in the loop, especially for safety-critical decisions and systems.

Learn more

Detecting AI-Driven Deceptions

Using GenAI, malicious actors can create ever more realistic deepfakes to impersonate trusted individuals and easily bypass voice recognition and facial authentication security mechanisms. Given these developments, security teams must rethink identity assurance and continuously monitor endpoints. Combined with automated threat intelligence and rapid response mechanisms, this approach can help to reduce the damage window and strengthen overall cyber resilience.

Learn more

Resources: The Security and Privacy Knowledge Hub

Dr. Yuhong Liu: Further Thoughts on Security and Privacy

Dr. Liu elaborated on several of the topics above; her full responses follow here. For more on her work, see Santa Clara University’s Trustworthy Computing Lab page.

Essential Security and Privacy Knowledge Areas

In addition to her emphasis on a deep understanding of computing systems, networking, and cloud infrastructure, Dr. Liu discussed the following hard and soft skills as particularly valuable today.

Hard skills

“Most recently, with the rapid adoption of AI systems that process sensitive user data, engineers must also develop a solid understanding of how machine learning/AI models work. This includes topics such as adversarial machine learning, as well as privacy-enhancing technologies (PETs) like differential privacy and federated learning, which help protect user data while enabling large-scale model training.”

Soft skills

“The ubiquity of computing means that security is not a ‘one-size-fits-all’ discipline. Modern engineers must possess domain-adaptive expertise. It requires the ability to quickly absorb the operational context of new sectors to ensure that security measures are both robust and relevant to specific high-stakes environments, such as critical infrastructure or bioinformatics.”

She also cited “socio-technical intuition” as particularly important, and defined it as “expertise in bridging the gap between technical requirements and human behavior. Understanding the psychological drivers behind how people share and protect data is essential to design and develop security measures that serve as intuitive guardrails rather than obstructive barriers.”

Emerging Security and Privacy Technologies

Dr. Liu cites the ability of agentic AI to generate and deploy code as the next security frontier, noting that because these agents can impact the physical world, “we are no longer just defending against human hackers; we are securing the autonomous digital teammates that now have the keys to our physical and digital infrastructure.”

This, she says, “requires a shift toward zero trust for AI, where we must continuously verify the intent and integrity of machine-led actions in both the digital and physical realms.”

Key Challenges in Security and Privacy

“While technical safeguards have improved, the human-in-the-loop remains the most vulnerable and unpredictable element in the security chain.”

In addition to the tremendous challenges human overseers face, including in discerning legitimate system requests from sophisticated deepfakes, Dr. Liu also cited two other major security and privacy challenges:

The Convenience-Privacy Trade-Off

“The rapid adoption of AI has exacerbated the privacy paradox. Users often prioritize immediate utility, such as an AI agent scheduling their life or summarizing medical records, over long-term data sovereignty. This creates a massive ‘data exhaust’ where personal identifiers are fed into black-box models, often without a clear way to ‘unlearn’ or delete that data later.”

The Rise of “Agentic Dependency”

“A new challenge is the delegation of authority. When users give AI agents the power to act on their behalf (e.g., making purchases or accessing accounts), they aren’t just sharing data; they are sharing identity. Humans tend to over-trust these systems, leading them to ignore security warnings or grant excessive permissions that an AI agent might exploit if it is compromised.”

Urgent Ethical Issue in Security and Privacy

Regarding the conflict between data utility and individual privacy, Dr. Liu said that we need to ensure that “as AI systems become more useful, they do not erode the fundamental rights of the individuals providing that data.

“Another urgent ethical challenge is ensuring the secure, trustworthy coordination of heterogeneous agentic AI systems. As these agents, developed by competing stakeholders and trained on disparate datasets, coexist in shared physical environments, the risk of ‘alignment friction’ increases.

“Whether it is autonomous vehicles from different manufacturers navigating the same highway or independent energy management systems feeding into a shared distribution grid, the danger lies in goal narrowness. If an agent prioritizes its manufacturer’s specific objectives, such as ‘minimizing travel time’ or ‘maximizing local energy profit’ without accounting for the broader ecosystem, it can trigger systemic failures.

“These uncoordinated actions don’t just lead to inefficiency; they lead to societal-level disruptions, such as widespread traffic gridlock or critical power grid instability, compromising both public safety and the ethical integrity of AI deployment.”