How has the current technology revolution of Generative AI reshaped your leadership philosophy?
Leadership in the age of Generative AI is no longer about having all the answers; it is about asking the right questions and building the right guardrails. My philosophy has transitioned from “Command and Control” to “Enable and Orchestrate.” In the past, a CISO’s leadership was often judged by the strength of their “No.” Today, a dynamic leader is judged by the velocity of their “How.” As I often say, “Content is a commodity, inspiration is the true catalyst.” In an era where AI can generate content instantly, a leader’s value lies in providing the spark that moves people to action.
GenAI has forced a shift toward what I call “Agile Governance.” We are leading through a period of extreme risk velocity, where a new model or vulnerability can emerge overnight. Navigating this requires understanding what I call the “10% Milestone”. This highlights two pivotal moments in any transformation: the first 10% when uncertainty looms largest, and the final 10% when fatigue risks autopilot. I believe that while “no one knows what the world will look like in 20 years, you can definitely influence that.” This influence comes from recognizing that “there’s a delicate balance between being fixed and focused”. Furthermore, a security leader must avoid being a “smoking doctor”, we must live the security we preach. Ultimately, “the best preparation is to prepare for the fact you won’t be prepared”.
What does “mission-driven leadership” mean to you in a corporate setting?
Mission-driven leadership is the bridge between a company’s bottom line and its societal contribution. In a corporate setting, especially one as impactful as healthcare technology, the “mission” is the ultimate North Star that prevents short-term pressures from compromising long term integrity. To navigate this, I use my own proprietary model: MORE – Meaning, Opportunity, Responsibility, and Empathy.
Meaning: I emphasize the meaning of our work. We aren’t just managing data; we are making the world slightly better, safer, or healthier. Leadership is being a role model for your kids and making them proud of you.
Opportunity: Every challenge or crisis is an opportunity. I always look at the bright side. As the value of time only rises, I tell my team: “Your time’s value appreciates, invest in yourself today.”
Responsibility: We must do things responsibly. Think twice before reacting or advising. In the military, the mission is binary: “it’s always either achievements or excuses”. I apply that same gravity here.
Empathy: Never forget putting yourself on the “other side.” Whether coaching or making hard decisions, I put myself in their shoes.
This model is deeply rooted in my background. In the IDF, I learned that you are responsible for your actions. If our mission at Philips is to improve lives, then our leadership must ensure that the technology facilitating those improvements is unshakeable. It transforms the security function from a compliance exercise into a department of trust that extends its protection beyond the company walls.
As a leader at Philips, what are your top priorities in securing enterprise-wide digital transformation? To secure a transformation of this scale, I utilize my FDL Model (Fences, Doors, and Locks) to visualize security as a dynamic architecture. A critical component is resolving the “Knights Dilemma”, the traditional trade-off between the weight of protection and the need for agility. My guiding principle is: “If it won’t be simple, it simply won’t be.”
- Fences (Defining the Territory): These are our software-defined, intelligent perimeters. We map the digital landscape to understand where our crown jewels reside. However, we must recognize that our “fences” now extend to the home offices and personal cloud environments of our employees. Our priority is building fences that provide visibility without boxing the user in.
- Doors (The Agile Armor): Doors represent Identity and Access Management (IAM). This is where we solve the Knights Dilemma by creating “Agile Armor”, protection that doesn’t hinder mobility. In 2026, the “door” should recognize intent. If an identity is compromised in an individual’s personal life, the enterprise “door” must react and challenge that entry gracefully.
- Locks (Granular Protection): These are our last line of defense – deep data protection and encryption. Implementation means moving security from the network layer to the data layer. This includes rigorous authorization logic that prevents “prompt injection” from tricking an AI. Our priority is ensuring that our locks are future-proofed against evolving threats.
How is the rise of Generative AI reshaping the cybersecurity threat landscape for global enterprises?
We are witnessing the “Industrialization of Cybercrime.” The arms race is nearing its end because “Microsoft is THE Iron Dome” for many enterprises. However, the paradigm is changing: “companies are not looking anymore for a single pane of glass, they are looking for a single (shielding) glass”.
The greatest challenge is the “perceived responsibility gap.” As a father to a 13-year-old daughter, if she showed me a suspicious WhatsApp, the answer would be instant:
“Absolutely not.” Yet at work, an employee might click a suspicious link for a gift card without a second thought. “At home, the risk feels personal, but at work, it feels abstract.” This gap is exactly where insider threats are born. Furthermore, as Mike Tyson said, “Everyone has a strategy, until they get punched in the face.” AI has made that “punch” faster and more frequent. To counter this, we adopt a W.A.R. (Watch, Assess, Report) mentality.
What does responsible and secure GenAI adoption look like in practice?
Responsible GenAI adoption requires a realization: “AI is magic, but not anyone can be a great magician”. It takes time and skill to master this magic securely.
In practice, this means providing “Sanctioned Sandboxes”, secure, enterprise-grade AI environments where staff can innovate without the risk of data leaking into public models. Secondly, we must implement “Human-in-the-Loop” (HITL) protocols. For any high-stakes decision, the AI should be an advisor, not the final adjudicator. Secure adoption also requires “Data Lineage” tracking. Finally, it involves a commitment to the individual’s digital safety. My time is my most valuable resource, and I don’t offer it freely to other businesses. Secure adoption means giving our staff the tools to protect their own time and identity from AI manipulation, both at work and at home.
How do you align data security strategies with innovation rather than slowing it down?
The key is to treat security as a strategic business partner rather than a technical silo. Traditionally, security was a “brake” on innovation. I believe security should be the “all wheel drive” that allows you to go faster on a dangerous road. We align security with innovation by “Shifting Left” and speaking the language of business value.
By integrating security architects directly into product development teams, we identify risks at the whiteboard stage, not the deployment stage. We also utilize “Security Orchestration and Automation” to handle the mundane compliance tasks. If we can automate 90% of the security checks, the innovators can spend 100% of their time on the 10% of the project that truly moves the needle.
Furthermore, we treat security features as competitive advantages. In 2026, customers buy a “Secure Experience.” When you frame security as a value-add, the “value of your time will only increase” as you spend less on remediation and more on creation.
How do you approach AI governance to ensure trust, transparency, and compliance across global operations?
AI governance must be a “Living Framework” that is globally consistent but locally compliant. Strategy at this level has three dimensions: The Inside (our products), Sideways (competitors/partners), and Outside (market/customers).
We utilize a tiered governance model. At the top are our “Universal Ethics Principles.” Below that, we implement “Context-Specific Guardrails.” To ensure transparency, we mandate “Algorithmic Impact Assessments” for all high risk AI systems. This documentation explains how a model reached a conclusion. We also use automated governance tools that scan for “Model Drift.” Crucially, our governance addresses the “Identity Risk.” To maintain this level of control, you must “define your 1% club”, those key members, partners, and technologies you trust implicitly.
What frameworks or principles do you believe are essential for secured AI deployment in healthcare and beyond?
“all models are wrong, but some are useful.” While we use ISO 42001 and NIST, we must remember that frameworks are only as good as their implementation. In healthcare, “Reliability” is a safety requirement. We must ensure models do not “hallucinate” in clinical settings. “Explainability” is a moral requirement; a clinician must understand the “why” behind an AI suggestion. Finally, “Privacy by Design” must be the default. We are heavily utilizing “Privacy-Enhancing Technologies” (PETs) like federated learning and differential privacy. These allow us to train robust AI models on distributed data sets without ever actually moving or exposing the sensitive patient data itself. These principles ensure that as we deploy AI, we are upholding the digital version of the Hippocratic Oath: First, do no harm.
Looking forward into 2026, how will Generative AI digital security risk evolve beyond the enterprise and workplace?
he risk will migrate from “Enterprise Data” to “Identity and Truth.” We are entering a “Post-Truth Security” era. General Eisenhower said, “Plans don’t make sense, but planning does.” In 2026, we must be constantly planning for the unexpected. “Be prepared for not being prepared”.
This is also where the second half of the “10% Milestone” comes into play. In 2026, many organizations will be near the end of their first major AI implementation cycles. This is when only 10% remains, the moment when fatigue sets in, and you risk going on autopilot. A leader’s job in 2026 will be to combat that complacency. The most significant evolution will be “AI-to-AI Risk.” As we delegate our lives to personal agents, the attack vector will be the manipulation of these agents. Security will be about protecting the “Digital Twin.” I believe that “though the future is unknown, our actions today have the power to shape a better world for tomorrow. Make every moment count!” Finally, I live by a simple rule for resilience: “Don’t be upset by anything that you won’t care about in 2 weeks.” Focus your energy on what matters – yourself and your loved ones.



