As we step into 2026, I joined the inaugural Behavioral AI Horizon webinar hosted by the Behavioral AI Institute—and it was, quite simply, interesting, well-curated, and genuinely valuable.
A big thank you to the organizers (Luana de Mattos Gabriel, Samuel Salzer) and Professor Ganna Pogrebna for framing a conversation that went beyond hype and focused on what actually matters: how AI systems interact with human behavior at scale, and where tensions are emerging rather than being resolved.
The format worked especially well:
I left the session with a simple conclusion, one that confirmed my own thinking:
While the technological challenges of AI are very real and far from solved, the behavioral layer is just as critical, and still too often underestimated. That challenge is also a powerful opportunity to address adoption, trust, agency, and how decisions actually get made when AI is embedded into everyday work.
This is a perspective I have long carried from my work with behavioral science, including actively integrating it into brand, communication, and transformation projects during my time at Ogilvy. What Behavioral AI Horizon did particularly well was to put structure, data, and real-world signals behind that intuition.
Here is my recap — structured around the four tensions that stood out most and that I believe leaders should pay close attention to in 2026.
One of the strongest signals is the shift from “occasionally prompting a tool” to continuous co-piloting—AI that sits alongside work in real time.
One slide that stuck with me highlighted how work time is actually distributed:

Why this matters (behaviorally): if attention is fragmented and the default mode is reactive, then always-on copilots will shape decisions by reducing friction—often pushing people into faster, more automatic “System 1” patterns. That can increase productivity, but it can also reduce reflection unless the system is intentionally designed to slow people down at the right moments.
Practical implication: For leaders, the question is not “should we deploy copilots?” but “where do copilots reduce noise, and where do they accidentally reduce thinking?”
The webinar raised the question directly: Will AI companionship become more normalized in 2026, or will we see meaningful pushback?
This is not just a consumer trend. Companion dynamics can show up in the workplace too: people start relying on an assistant for reassurance, feedback, and emotional regulation (even if they call it “productivity”).
Why this matters (behaviorally):
Practical implication: Organizations rolling out AI need to treat “companion behaviors” as part of governance: not only “what the tool can do,” but how it shapes motivation, confidence, and dependency.
One slide framed a core contradiction between what we say and what we do.
Why this matters (behaviorally): people want agency, but they also default to convenience. When tools reduce effort, the behavior shifts—even if beliefs and intentions lag behind.
Practical implication: If you want augmentation, you have to design for augmentation.
That means:
The final tension was the most “enterprise real-world” one: AI agents.
A slide summarized the adoption gap starkly:
Regardless of the precise definitions behind each category, the storyline is clear: agents are easy to demo and hard to operationalize.
Why this matters (behaviorally + operationally):
Practical implication: “We need AI agents” is not a strategy.
A strategy is: Which decisions can be delegated safely, under what constraints, with what accountability, and what behavioral training is required for teams to adopt them?
What I appreciated about Behavioral AI Horizon is that it didn’t treat “behavior” as a soft side topic. It treated it as the missing core: AI and behavioral science must co-evolve—because AI systems increasingly shape decisions, emotions, and everyday life.
And if that’s relevant for your organization, feel free to reach out to me via bempowered.de.