AI, Humanity, and the Future of Security


As we continue exploring the human-centered approach to security, we can’t avoid the question of AI.

The growth of artificial intelligence is reshaping everything — the pace of innovation, the landscape of threats, and the way we as leaders must think about risk, responsibility, and readiness.

From my perspective, there are three key areas that shape how AI impacts a human-centered approach to security today.

1️⃣ The Speed of AI — and the Speed of Threats

The first is speed.

AI is accelerating faster than any other technological shift we’ve seen, and threats are evolving just as quickly.

Recently, we saw what Anthropic reported as the first large-scale AI-orchestrated cyberattack, a signal of what's coming. It won't be the last.

In this new era, traditional awareness training — “Look at this phishing email. Does it look fake?” — will no longer suffice.

Because the truth is, it won’t look fake anymore.

Deepfakes, cloned voices, and AI-crafted phishing will appear flawless. The new skill we must teach our teams is not recognition, but validation.

People on the front lines, receiving messages, requests, and media, need to know how to test what they see and hear before taking action.

That may mean going back to the source, verifying through a second channel, or using validation tools built for this new era.

The human role will shift from spotting anomalies to testing truth.
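To make that shift concrete, here is a minimal sketch of what a "validate before acting" gate could look like. Everything in it is hypothetical: the Request shape and the confirmed_via_second_channel helper are illustrations, not a real tool; in practice this logic would live inside your ticketing, identity, or communications systems.

```python
# Hypothetical sketch: a "validate before acting" gate for high-risk requests.
from dataclasses import dataclass


@dataclass
class Request:
    sender: str       # claimed identity of the requester
    channel: str      # channel the request arrived on, e.g. "email"
    action: str       # what the requester is asking for
    high_risk: bool   # payments, credential resets, data exports, etc.


def confirmed_via_second_channel(request: Request) -> bool:
    """Ask a human to verify the request through an independent channel:
    call a known number, walk over, or use a separate chat. The request's
    own channel is never trusted to vouch for itself."""
    answer = input(
        f"Reach {request.sender} on a channel other than {request.channel}. "
        "Did they confirm this request? [y/n] "
    )
    return answer.strip().lower() == "y"


def handle(request: Request) -> None:
    # Low-risk requests proceed; high-risk ones are blocked until a
    # second, independent channel confirms the sender really asked.
    if request.high_risk and not confirmed_via_second_channel(request):
        print(f"Blocked: could not validate '{request.action}'.")
        return
    print(f"Proceeding with '{request.action}'.")


if __name__ == "__main__":
    handle(Request(
        sender="CFO",
        channel="email",
        action="urgent wire transfer to a new vendor account",
        high_risk=True,
    ))
```

The point is not the code itself but the pattern it encodes: the channel a request arrives on never gets to vouch for itself.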

2️⃣ Governance, Inclusion, and Security’s Expanding Role

The second area is AI governance or, in many organizations, the lack of it.

Some companies are investing billions and moving fast to integrate AI into products and processes.

Others have slowed down, trying to understand how AI fits into their culture, infrastructure, and ethics.

In both cases, security must have a seat at the table.

Whether AI is being developed in-house or sourced from a third party, security should be embedded in the full lifecycle — from data collection and model training to deployment and monitoring.

Even in organizations without formal data governance teams, security leaders can and should help define ethical and operational guardrails.

That includes ensuring:

• Proper vetting of training data and model inputs.

• Secure integration of AI tools into systems and workflows.

• Alignment between AI innovation and human values — including fairness, privacy, and accountability.

Security doesn’t replace legal or governance teams, but it complements them — grounding innovation in awareness and ethical responsibility.

3️⃣ Readiness — Moral, Emotional, and Societal

The third area is the most complex and the most human.

With the speed of AI’s expansion, I find myself asking:

Are we truly ready for this?

Not just technically, but morally, culturally, and from an evolutionary perspective?

We are living through a transformation that challenges our sense of self, our sense of truth, and our sense of control.

AI is not just changing how we work — it’s changing what we trust, how we think, and how we define being human.

The question isn’t whether AI will transform security. It already is.

The question is: Can we evolve our consciousness fast enough to meet it?

The Human Lens in the Age of AI

In this moment, human-centered security becomes more essential than ever.

Because as AI systems grow more autonomous, it is our humanness, our awareness, our integrity, our intuition, that must grow in equal measure.

We can’t automate wisdom.

We can only cultivate it.

That is the role of leadership now: to ensure that technology evolves with consciousness, not just capability.

Does this perspective resonate with you: that to move faster, we may need to slow down first?
