👥 Featuring:
Host: Alex (The Strategy Stack)
Opening & Setup
Alex opens Part 4 of his live series on Agentic Strategy—this session focuses on people and culture in the agentic enterprise. He notes most AI programs stall for human reasons (fear of replacement, loss of control, lack of trust, old mental models) rather than technical ones. A recent bank rollout of a 92%-accurate AI risk agent still failed on adoption because the team didn’t understand its reasoning or the new workflow. The fix: start with a people plan, run anonymous readiness surveys, and frame agents as force multipliers for judgment.
New Roles in the Agentic Enterprise
Leaders shift from command to design—encoding intent, constraints, and ethics (e.g., boundaries like CSAT ≥ 8.5, margin ≥ 18%, local rules enforced).
Managers move from supervision to coaching—overseeing humans and agents, routing escalations, resolving conflicts, and stewarding trust.
Employees move from repetitive execution to judgment—agents compile; people interpret, choose scenarios, and apply ethics. Analysts stop stitching data and start stress-testing recommendations.
Week 1 artifacts: a Decision Charter (who decides what), an Autonomy Matrix (what agents can do, when to escalate), and role playbooks (leader/manager/employee behaviors).
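To make the Autonomy Matrix concrete, here is a minimal sketch of what "encoding intent and constraints" can look like as policy-as-code, using the CSAT ≥ 8.5 / margin ≥ 18% boundaries from the leadership example above. The names (AutonomyTier, Guardrail, can_auto_execute) and thresholds are illustrative assumptions, not a specific product or library.

```python
# Illustrative sketch of a week-1 Autonomy Matrix encoded as policy-as-code.
# All names and thresholds here are hypothetical examples, not a real framework.
from dataclasses import dataclass
from enum import Enum

class AutonomyTier(Enum):
    PROPOSE_ONLY = 0        # Tier 0: agent drafts, a human decides
    AUTO_WITHIN_LIMITS = 1  # Tier 1: agent acts inside hard guardrails
    BROAD_AUTONOMY = 2      # Tier 2: agent acts, humans audit after the fact

@dataclass
class Guardrail:
    metric: str
    minimum: float

# Boundaries from the leadership example: CSAT >= 8.5, margin >= 18%
GUARDRAILS = [Guardrail("csat", 8.5), Guardrail("margin_pct", 18.0)]

def can_auto_execute(tier: AutonomyTier, forecast: dict) -> bool:
    """Allow autonomous action only if the tier permits it and every guardrail holds."""
    if tier == AutonomyTier.PROPOSE_ONLY:
        return False  # always escalate to the named human owner
    return all(forecast.get(g.metric, float("-inf")) >= g.minimum for g in GUARDRAILS)

# A Tier 1 agent forecasting CSAT 8.7 and margin 19.2% may act; anything below escalates.
print(can_auto_execute(AutonomyTier.AUTO_WITHIN_LIMITS, {"csat": 8.7, "margin_pct": 19.2}))
```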
Building Trust Between Humans and Agents
Trust rests on four pillars: Transparency, Accountability, Reliability, Ethics. Every agent decision should log inputs, constraints, and rationale; every agent has a named human owner; autonomy is earned via performance; policy-as-code plus a trust layer blocks unsafe actions. Publish an Agent Ledger (decisions taken, success/failure, escalations, reasons) and define autonomy tiers (Tier 0 propose-only; Tier 1 auto within limits; Tier 2 broader autonomy) with promotion based on evidence.
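As a rough sketch of what an Agent Ledger entry and evidence-based tier promotion could look like in practice (field names and the 50-decision / 95%-success thresholds are assumptions for illustration, not a standard):

```python
# Hypothetical Agent Ledger entry plus an evidence-based promotion check (Tier 0 -> Tier 1).
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LedgerEntry:
    agent: str
    owner: str              # every agent has a named human owner
    decision: str
    inputs: dict            # what the agent saw
    constraints: list[str]  # which guardrails applied
    rationale: str          # why it chose this action
    outcome: str            # "success" | "failure" | "escalated"
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def ready_for_promotion(entries: list[LedgerEntry],
                        min_decisions: int = 50,
                        min_success_rate: float = 0.95) -> bool:
    """Promote autonomy only when the ledger shows enough evidence of safe performance."""
    if len(entries) < min_decisions:
        return False
    successes = sum(1 for e in entries if e.outcome == "success")
    return successes / len(entries) >= min_success_rate
```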
Handling the First Mistake
Start with risk-tiered tasks. When an agent errs, deliver fast, specific feedback; retry; and if needed, involve engineering to modify the system. Run blameless postmortems: Was intent properly encoded? Were guardrails missing? Patch rules, update the ledger, and communicate fixes to rebuild trust.
Cultural Principles (that make it real)
Transparency over secrecy (default-open logs and shared memory),
Continuous learning (every interaction trains human + agent),
Distributed decision-making (decisions made closest to data),
Ethical responsibility (ethics embedded in prompts/policy, not slides).
Add to values: “We document agent decisions by default,” and “We escalate uncertainty early.” Run monthly Agent Transparency Days across product, ops, and compliance.
A Ritual That Shifts Behavior
Leaders visibly treat mistakes as learning events: owning issues, fixing root causes, and still drawing clear lines on what is not acceptable when the problem is a lapse in discipline rather than an honest error. Maintain daily checks that catch drift; don't wait a week, or issues compound.
Managing Fear & Resistance
Name common fears (“agents will take my job,” “I can’t work this way,” “this is cost-cutting”) and counter with: a visible upskilling guarantee (budget + hours per person), volunteer pilots, peer success stories, “fuck-up walls,” and anonymous channels for concerns. Run a six-week “Work With Agents” program: Week 1 mindset; Weeks 2–3 hands-on; Week 4 oversight drills; Week 5 ethics labs; Week 6 demos to execs and certification.
Incentives & Metrics
If you pay for old behavior, you’ll get old behavior. Track: decision speed, signal-to-action time, escalation rate, cognitive load reduction, adoption rate, autonomy ratios, and safe decisions. Reward teams for training agents, not just using them; tie bonuses to cross-functional outcomes (e.g., churn reduction via support + marketing + agents). Publish a collaboration scorecard (decisions made with agent support, safe autonomy %, post-action corrections) and celebrate monthly improvements.
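A small sketch of how the collaboration scorecard metrics could be computed from decision records; the input format and field names are assumptions for illustration.

```python
# Hypothetical monthly collaboration scorecard; record fields are illustrative.
from typing import Iterable

def collaboration_scorecard(records: Iterable[dict]) -> dict:
    """Each record describes one decision, e.g.
    {"with_agent": True, "autonomous": True, "within_guardrails": True, "corrected_after": False}
    """
    records = list(records)
    total = len(records) or 1
    with_agent = sum(r["with_agent"] for r in records)
    autonomous = [r for r in records if r.get("autonomous")]
    safe = sum(r["within_guardrails"] for r in autonomous) / (len(autonomous) or 1)
    corrections = sum(r["corrected_after"] for r in records)
    return {
        "decisions_with_agent_support_pct": round(100 * with_agent / total, 1),
        "safe_autonomy_pct": round(100 * safe, 1),
        "post_action_corrections": corrections,
    }
```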
12-Month Cultural Roadmap (condensed)
Months 1–2: Awareness + shared terminology.
Months 3–6: Hands-on pilots; trust layer rituals; regular reviews.
Months 7–12: Ethics drills; promote agents from Tier 0 → Tier 1; year two expands autonomy where the evidence supports it.
Scaling Pitfalls to Avoid
Before scaling, ensure data integrity and clean inputs at every entry point. Many programs slip on implementation and training (the ERP lesson): timelines run long if teams aren’t prepared to use the tools.
Psychological Safety (in the Agent Era)
Without psychological safety, people hide mistakes, under-report bias, and avoid feedback—crippling learning loops. Define it as the belief that no one is punished for raising concerns or honest mistakes, even with AI. Practices: public error reviews (aviation-style), anonymous feedback channels, and rewarding those who surface issues.
Digital Twins of Employees
Each role will gain a digital twin agent—a partner that mirrors workflows and preferences (e.g., a supply-chain twin monitors shipments, flags risks, and prepares decisions). Start with shadow mode, then run regular sync reviews (human ↔ twin).
Cultural Archetypes
Fearful (“AI will take our jobs”) → risk: sabotage; remedy: heavy training and reassurance.
Blindly optimistic (“AI fixes everything”) → risk: poor governance, runaway agents; remedy: build trust/oversight early.
Command-first (everything needs exec approval) → risk: bottlenecks; remedy: gradual trust-building + distributed autonomy.
Balanced partners (AI + humans) → most effort but most sustainable. Use surveys to map your archetype, baseline your maturity, and tailor the change program accordingly.
Global Cultural Variations
Local culture and law matter. US: speed prized, looser autonomy tiers. Japan: hierarchy/harmony—more escalations. Germany: works councils must approve employee-affecting changes (including agent deployments). Best practices: regional culture liaisons; policy-as-code by region; train agents on local etiquette and compliance.
Redesigning the Employee Lifecycle
Recruiting: hire for AI collaboration; assess candidates on interpreting and critiquing agent outputs.
Onboarding: every new hire gets a digital onboarding agent.
Performance: measure decision quality in collaboration with agents; include a “feedback-to-agents” metric.
Exit: twin agents retain institutional knowledge to reduce brain drain. Create an AI literacy curriculum and make collaboration a core competency in job descriptions.
Handling Ethical Dilemmas
Prepare for dilemmas such as pricing during disasters, biased recruiting, or reputation-risking trades. Use a four-step ethics drill: Detect (policy-as-code flags), Pause (auto-freeze), Deliberate (cross-functional review), Learn (update rules & retrain).
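One way to picture the drill as control flow, in a minimal sketch. The helper names (policy_flags, freeze_agent, review_board, update_rules) are hypothetical stand-ins for whatever policy engine and review process you already run.

```python
# Four-step ethics drill as control flow: Detect -> Pause -> Deliberate -> Learn.
# Helper callables are hypothetical placeholders, not a real API.
def ethics_drill(action, policy_flags, freeze_agent, review_board, update_rules):
    violations = policy_flags(action)           # 1. Detect: policy-as-code flags the action
    if not violations:
        return "no_dilemma"
    freeze_agent(action["agent"])               # 2. Pause: auto-freeze before execution
    verdict = review_board(action, violations)  # 3. Deliberate: cross-functional review
    update_rules(verdict)                       # 4. Learn: patch rules, retrain, update ledger
    return verdict

# Toy usage: a disaster-pricing change gets flagged, frozen, and overturned.
verdict = ethics_drill(
    {"agent": "pricing-bot", "price_increase_pct": 40, "region_in_emergency": True},
    policy_flags=lambda a: ["no_surge_in_emergencies"]
        if a["region_in_emergency"] and a["price_increase_pct"] > 10 else [],
    freeze_agent=lambda name: print(f"froze {name}"),
    review_board=lambda a, v: "revert_price_change",
    update_rules=lambda decision: print(f"rule updated after: {decision}"),
)
```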
Multi-Level Communication
Operational channel: day-to-day human↔agent updates (e.g., Slack notifications on shipments).
Strategic channel: monthly trends (e.g., cases resolved, escalation %, error reductions).
Narrative channel: company-wide storytelling of wins and failures to build culture (e.g., internal videos of employees + agents collaborating). Assign owners per channel to avoid noise.
Agentic Leadership Habits
Daily intent checks (goals/constraints), mid-day decision audits, evening feedback reviews; weekly cross-functional alignment; monthly public Q&A on agent decisions. Stop approving micro-decisions; design the decision system. Keep a personal delegation log (what you’ve delegated to agents and why).
Handling Public Failure
When a public agent error happens (e.g., a pricing blow-up), run a live postmortem explaining the failed rule, what humans missed, and the fixes to prevent recurrence. Prepare a crisis playbook before deployment.
Key Takeaways
People before tech — most AI programs fail due to fear, mistrust, and misaligned human workflows, not technical issues.
Design new roles — leaders encode intent, managers coach, employees focus on judgment and oversight.
Build trust deliberately — transparency, accountability, reliability, and ethics must be embedded from day one.
Start small, prove value — use pilot programs with clear rules, then expand autonomy gradually.
Create safety nets — psychological safety and clear escalation paths enable learning without fear.
Align incentives — reward teams for training and collaborating with agents, not just for old outputs.
Think global, act local — customize policies and agent behavior for cultural and legal differences.
Continuously adapt — treat culture, trust, and oversight as living systems that evolve with the agents.
Hit subscribe to get it in your inbox. And if this spoke to you:
➡️ Forward this to a strategy peer who’s feeling the same shift. We’re building a smarter, tech-equipped strategy community—one layer at a time.
Let’s stack it up.
A. Pawlowski | The Strategy Stack