In the ever-evolving world of cybersecurity, one truth stands out: it's not just about firewalls and fancy algorithms. At its core, cybersecurity is a human endeavor. We've all heard the adage that the weakest link in any security chain is the people behind it – and the data backs this up. Drawing on the People, Process, Technology (PPT) framework, a staple in risk management, let's dive into why human behavior drives most breaches and what that means for organizations in 2026.
Whether you're a CISO grinding through daily threats or a business leader trying to safeguard your assets, understanding this breakdown isn't just academic – it's essential for building resilient defenses. In this article, we'll explore the stats, dissect the framework, and offer practical insights to shift the odds in your favor – with a special focus on integrating AI responsibly through strong governance and security measures.
The PPT Framework: A Quick Primer
The People, Process, Technology model isn't new – it's been a cornerstone of IT and security strategies since the 1960s, popularized by Harold Leavitt. It posits that effective cybersecurity requires balance across three pillars:
- People: The human element, including employees, vendors, and end-users. This covers behaviors like clicking phishing links, using weak passwords, or falling for social engineering.
- Processes: The policies, procedures, and workflows that guide operations, such as incident response plans, access controls, and compliance protocols.
- Technology: The tools and systems, from antivirus software to AI-driven threat detection.
The catch? These aren't equal. While tech grabs headlines with shiny new gadgets, breaches rarely stem from tech alone. Instead, they exploit gaps in how people interact with processes and tools. And with AI's rise, this dynamic is amplified – tech like AI can supercharge defenses, but without proper governance and security, it introduces new risks that loop back to human oversight.
The Stats: Human Behavior Dominates Breach Causes
Recent reports paint a clear picture: people are involved in the lion's share of cybersecurity incidents. According to a 2025 analysis, 60% of breaches involved human factors, such as errors or manipulation. This aligns with broader trends in which human-driven vulnerabilities outpace purely technical flaws.
Breaking it down by PPT based on industry benchmarks:
- People: 60-95%. Human error, social engineering, and misuse account for the bulk. For instance, phishing attacks – a classic human exploit – surged by 1,265% in recent years, fueled by generative AI tools that make scams more convincing. Verizon's 2025 Data Breach Investigations Report (DBIR) highlights credential abuse as a top initial access vector at 22%, often tied to human lapses like poor password hygiene. Even conservative estimates peg human involvement at around 68% of breaches.
- Processes: 10-20%. Flaws here include misconfigurations, inadequate policies, or third-party risks. The same DBIR notes third-party involvement in 30% of breaches, often due to process gaps like unvetted suppliers. Ransomware, present in 44% of analyzed incidents, frequently exploits process weaknesses like delayed patching or poor backup protocols.
- Technology: 5-20%. Pure tech vulnerabilities, like unpatched software, make up a smaller slice. DBIR data shows vulnerability exploitation at 20% for initial access, but attackers increasingly pivot to human targets over brute-forcing tech. With AI accelerating both attacks and defenses, tech failures are dropping – IBM's 2025 Cost of a Data Breach Report notes the global average breach cost fell to $4.44 million, thanks to faster AI-driven containment.
These figures aren't static; they vary by industry and threat landscape. In healthcare, for example, costs soar to $9.77 million per breach, often due to human-targeted attacks. But the skew toward people is consistent: over 75% of targeted cyberattacks start with email, a direct hit on human vigilance.
Why the Human Element Looms Largest
Humans aren't just error-prone; we're predictable. Attackers know this – why hack a fortified system when you can trick someone into handing over the keys? Social engineering tactics, amplified by AI deepfakes, exploit trust, curiosity, and fatigue. In 2025, 97% of companies reported GenAI-related security issues, many tied to human adoption without safeguards.
The fallout? Breaches now cost trillions globally, with cybercrime projected to hit $15.63 trillion by 2029. Yet, organizations using AI in security cut detection times by 108 days, showing tech can help – but only if people and processes align.
Real-world examples from 2025 underscore this: The SalesLoft breach via third-party OAuth integrations affected millions, blending process flaws with lapses in human oversight. Similarly, PowerSchool's incident exposed 62 million student records, likely through human-enabled access.
AI in Cybersecurity: Harnessing Power with Governance and Security
As AI integrates deeper into the Technology pillar of PPT, it reshapes the entire framework. AI accelerates threat detection, triage, and response, fundamentally transforming security operations. However, it also amplifies risks: attackers use AI for precision phishing, data poisoning, adversarial attacks, and AI-driven malware, with 72% of security leaders noting heightened risks. By 2026, AI will power core security operations while also becoming a prime target, making governance and security non-negotiable.
Governance Essentials: Frameworks such as the NIST AI Risk Management Framework (RMF) and ISO/IEC 42001 provide a structure for managing AI risks, emphasizing auditability, ethical use, and compliance with regulations such as the EU AI Act. Governance extends beyond technology to board-level oversight: 54% of boards neglect AI data governance, and those organizations trail their peers on key security metrics by 26-28 points. As a cyber expert, I stress that security leaders must be involved in all AI decisions to embed security from the start.
Security Measures: Protect AI models, data, and pipelines against threats such as prompt injection and data leakage. Implement advanced agent governance with real-time monitoring, kill switches, and isolation for autonomous AI systems. Securing AI requires tailored risk assessments and embedding security in the development lifecycle (SecDevOps for AI).
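To make the kill-switch and real-time-monitoring ideas concrete, here is a minimal sketch of a guardrail wrapper for an autonomous AI agent. All class and function names are hypothetical illustrations, not a real product's API, and the prompt-injection check is a deliberately crude heuristic; production systems layer many defenses.

```python
import re
import time

# Illustrative prompt-injection patterns; real deployments use far richer
# detection (classifiers, canaries, output filtering), not just regexes.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard your (system )?prompt",
]

class AgentGuardrail:
    """Hypothetical wrapper: screens inputs, rate-limits actions,
    and exposes a kill switch that permanently halts the agent."""

    def __init__(self, max_actions_per_minute: int = 30):
        self.killed = False
        self.max_rate = max_actions_per_minute
        self.action_times: list[float] = []

    def kill(self) -> None:
        """Manual kill switch: once tripped, no further actions run."""
        self.killed = True

    def screen_input(self, text: str) -> bool:
        """Crude prompt-injection heuristic; returns False on a match."""
        lowered = text.lower()
        return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

    def allow_action(self) -> bool:
        """Real-time monitoring: block if killed, and trip the kill
        switch automatically on an anomalous burst of actions."""
        if self.killed:
            return False
        now = time.monotonic()
        self.action_times = [t for t in self.action_times if now - t < 60]
        if len(self.action_times) >= self.max_rate:
            self.kill()  # isolate the agent on a runaway action rate
            return False
        self.action_times.append(now)
        return True
```

The key design point is that monitoring and termination live outside the model itself, so a compromised or misbehaving agent cannot disable its own controls.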
The Right Way to Implement AI: Start with a governance framework that defines roles and aligns with regulations. Conduct AI-specific risk assessments, hire experts in adversarial ML and ethics, and integrate zero-trust models. Train teams on AI awareness, automate repetitive tasks, and ensure continuous monitoring. Avoid unchecked adoption – 63% of organizations lack controls to enforce AI purpose limitations, creating a governance and containment gap. Done right, AI strengthens PPT by empowering people and refining processes.
Strengthening the Chain: Actionable Steps
To tip the scales, prioritize people without neglecting the rest. But remember, it's not a popularity contest – hire the right people with proven expertise, not just likable personalities. As a cybersecurity expert, I advocate that we should be involved in all decision-making processes, from strategic planning to tool selection, to ensure security is baked in from the ground up – especially for AI implementations.
- Hire the Right Talent: Focus on skills, experience, and a security-first mindset over charisma. Vet candidates rigorously for their ability to handle real-world threats, including AI-specific expertise like adversarial machine learning. Build diverse teams that challenge assumptions. This isn't about filling seats; it's about assembling a fortress of competent defenders.
- Invest in Training: Regular, engaging awareness programs reduce incidents by up to 40% when paired with AI tools. Simulate phishing, teach MFA fatigue resistance, and foster a "report, don't regret" culture. Include AI education on risks such as deepfakes and on safe adoption. Involve cyber experts in designing these programs to align with evolving threats.
- Refine Processes: Audit third parties rigorously, enforce zero-trust models, and automate where possible to minimize human touchpoints. For AI, establish governance frameworks such as NIST RMF to support lifecycle management and regulatory alignment. Cybersecurity leaders must sit at the table for all policy decisions to spot risks early.
- Leverage Tech Wisely: AI for threat hunting is great, but govern its use – 16% of 2025 breaches involved attackers using AI. Implement secure AI development, real-time agent monitoring, and kill switches. Consult experts like me in procurement to avoid tech that looks good on paper but fails in practice.
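The purpose-limitation gap mentioned above is one of the easier controls to close. Here is a minimal sketch, assuming a simple allowlist policy: each AI call is authorized against approved purposes per role and written to an audit log. Role names, purposes, and the class itself are illustrative assumptions, not a specific product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical policy: which roles may use AI tooling for which purposes.
APPROVED_PURPOSES = {
    "threat-triage": {"soc-analyst", "ciso"},
    "phishing-simulation": {"security-awareness"},
}

@dataclass
class AIUsagePolicy:
    """Authorizes AI use against an allowlist and records every decision."""
    audit_log: list[dict] = field(default_factory=list)

    def authorize(self, user_role: str, purpose: str) -> bool:
        # Deny by default: unknown purposes map to an empty role set.
        allowed = user_role in APPROVED_PURPOSES.get(purpose, set())
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "role": user_role,
            "purpose": purpose,
            "allowed": allowed,
        })
        return allowed
```

Even a simple deny-by-default check like this, paired with an immutable audit trail, gives governance teams the evidence base that most organizations currently lack.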
Aim for equilibrium, but double down on humans. As threats evolve – think quantum risks and AI disinformation by 2026 – the human factor will remain the frontline. By embedding cyber experts in every layer of decision-making and implementing AI with robust governance and security, organizations can turn potential weaknesses into strengths.
Wrapping Up: People First in a Tech-Driven World
Cybersecurity isn't won with tech alone; it's a behavioral battle. With humans driving 60-95% of breaches, shifting focus from gadgets to grit is key. Hire wisely, train relentlessly, govern AI responsibly, and involve experts at every turn – that's how we stay ahead.
What are your thoughts? Have you seen human behavior turn the tide in your org? Drop a comment or share on X – let's keep the conversation going.
Erich Horst (@CISOGrit) is a cybersecurity expert passionate about organizational resilience. This article is for informational purposes; always consult experts for tailored advice.
References
Cybersecurity and Infrastructure Security Agency. (n.d.). Shields up: Guidance for families. https://www.cisa.gov/shields-guidance-families
Hoxhunt. (2025). Phishing trends report. https://hoxhunt.com/guide/phishing-trends-report
IBM. (2025). Cost of a data breach report 2025. https://www.ibm.com/reports/data-breach
International Organization for Standardization. (2023). ISO/IEC 42001:2023 – AI management systems. https://www.iso.org/standard/42001
Knostic. (2025). The 10 biggest statistics and trends for GenAI security. https://www.knostic.ai/blog/gen-ai-security-statistics
Lakera AI. (2024). GenAI security readiness report 2024. https://www.lakera.ai/genai-security-report-2024
National Institute of Standards and Technology. (2023). AI risk management framework (Version 1.0). https://www.nist.gov/itl/ai-risk-management-framework
SlashNext. (2024). The state of phishing report. https://slashnext.com/press-release/slashnext-mid-year-state-of-phishing-report-shows-341-increase-in-bec-and-advanced-phishing-attacks/
Tanium. (2025). Salesloft drift data breach: What we know and what we're doing. https://www.tanium.com/blog/salesloft-drift-data-breach-what-we-know-and-what-were-doing
TechTarget. (2025). PowerSchool data breach: Explaining how it happened. https://www.techtarget.com/whatis/feature/PowerSchool-data-breach-Explaining-how-it-happened
Verizon. (2025). 2025 data breach investigations report. https://www.verizon.com/business/resources/reports/dbir/