👥 Featuring:
Host: Alex (The Strategy Stack)
Opening & Setup
Casual opening from Alex, warmly welcoming the audience. He mentions recent travels and conversations with leaders about AI, sets the scene for this “book session,” and explains that it is part of a continuing deep dive into Agentic Strategy.
Why I Wrote Agentic Strategy
AI tools alone aren’t enough — what matters is systems and architecture. The book responds to a gap: companies adopting AI without rethinking how they plan, decide, and learn. It’s relevant now because autonomy is shifting digital transformation from efficiency → cognition.
Chapter 1 – The Strategic Case
AI is not a strategy; systems are.
Local efficiency gains plateau without integration into decision loops.
Systems thinking reframes the question: not “what can AI do?” but “how do we make decisions better?”
Example: marketing teams using AI as copy assistant vs. agent networks that test, learn, and adapt.
Case: LVMH’s MaIA system integrates across pricing, personalization, and supply chain.
Cognitive overload is the real bottleneck — agents reduce mental overhead and compound intelligence.
Agents form the next thinking infrastructure — persistent, adaptive, cross-functional.
Chapter 2 – Levels of Autonomy
Five levels of autonomy:
1. Rule-based systems
2. Basic AI automation
3. Adaptive workflows
4. Goal-oriented agents
5. Self-sufficient agents
Most firms today: Levels 1–2. Strategic leverage starts at Level 4, transformation at Level 5.
Trade-offs: more autonomy brings more value but requires oversight and trust.
Autonomy is not binary — it’s about calibrating the right level for context.
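The five-level ladder above can be sketched as a tiny enum. The names and the oversight rules below are illustrative assumptions for calibration, not definitions from the book:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Illustrative labels for the five levels discussed above."""
    RULE_BASED = 1        # fixed if/then rules, no learning
    BASIC_AUTOMATION = 2  # AI automates narrow, bounded tasks
    ADAPTIVE_WORKFLOW = 3 # workflows adjust to changing inputs
    GOAL_ORIENTED = 4     # agents pursue goals and choose their own steps
    SELF_SUFFICIENT = 5   # agents set sub-goals with minimal supervision

def required_oversight(level: AutonomyLevel) -> str:
    """Hypothetical mapping: more autonomy trades hands-on control
    for review and trust mechanisms."""
    if level <= AutonomyLevel.BASIC_AUTOMATION:
        return "spot checks"
    if level <= AutonomyLevel.GOAL_ORIENTED:
        return "human-in-the-loop review"
    return "governance and audit trails"
```

The point of the enum is the ordering: calibrating autonomy means picking a rung on the ladder for each context, not flipping a switch.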
Chapter 3 – Agent Selection & Training
Choosing foundation models: generalists vs. specialists.
Training agents means designing their thinking processes, not just feeding them tasks.
Integration of external tools: balancing reasoning with tool use.
Memory architectures (short-term, long-term, retrieval).
Guardrails and escalation paths are essential — autonomy without boundaries = chaos.
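A minimal sketch of the memory tiers and guardrails described above, under stated assumptions: a bounded short-term buffer, a key-value long-term store with naive substring lookup standing in for vector retrieval, and a confidence threshold as the escalation path. None of this is the book's implementation; it only makes the design choices concrete:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Three memory tiers: short-term, long-term, retrieval (hypothetical shapes)."""
    short_term: list = field(default_factory=list)  # recent context, bounded
    long_term: dict = field(default_factory=dict)   # durable facts by key
    max_short_term: int = 10

    def remember(self, item: str) -> None:
        self.short_term.append(item)
        # Evict oldest items so the working context stays bounded.
        self.short_term = self.short_term[-self.max_short_term:]

    def retrieve(self, query: str) -> list:
        # Naive retrieval: substring match stands in for vector search.
        return [v for k, v in self.long_term.items() if query in k]

def needs_escalation(confidence: float, threshold: float = 0.7) -> bool:
    """Guardrail: low-confidence actions are routed to a human."""
    return confidence < threshold
```

The escalation check is the "boundary" in miniature: the agent acts freely above the threshold and hands off below it.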
Chapter 4 – Agent Thinking
Planning: breaking down goals into smaller actions.
Reasoning: internal monologues and transparent logic trails.
Feedback loops: agents that learn from results vs. static automations.
Frameworks compared: ReAct (reason + act iteratively) vs. Plan-Execute-Evaluate.
No universal loop — thinking design must match context.
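The ReAct pattern mentioned above (reason, then act, iteratively) can be sketched as a generic loop. Here `reason`, `act`, and `done` are caller-supplied stand-ins for an LLM call and tool execution, not any framework's actual API:

```python
def react_loop(goal, reason, act, done, max_steps=5):
    """Minimal ReAct-style loop: interleave reasoning and acting,
    keeping a transparent logic trail as the chapter suggests."""
    trace = []
    observation = None
    for _ in range(max_steps):
        thought = reason(goal, observation)   # decide the next step
        trace.append(("thought", thought))
        observation = act(thought)            # execute it, observe the result
        trace.append(("observation", observation))
        if done(observation):                 # feedback loop: stop when the goal is met
            break
    return trace

# Toy demo: count up to 3, one reason/act cycle per number.
trace = react_loop(
    goal="count to 3",
    reason=lambda goal, obs: 1 if obs is None else obs + 1,
    act=lambda thought: thought,  # tool-call stub: echo the thought
    done=lambda obs: obs == 3,
)
```

A Plan-Execute-Evaluate agent would instead produce the full step list up front and only re-plan after evaluating results, which is the trade-off the session contrasts: tighter feedback per step versus cheaper, more predictable execution.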
Content & Community Roadmap
Book sessions as interactive deep dives, covering two chapters at a time.
Future sessions will cover practical design, governance, and scaling.
Broader plan includes masterclasses, podcasts, and a 5-book series expanding agentic strategy across organizational and societal contexts.
💡 Takeaways
AI itself is not a strategy. Systems and feedback loops are.
Autonomy is a spectrum, not a switch — leaders must calibrate oversight.
Training an agent = designing its cognition: planning, reasoning, memory, guardrails.
Thinking frameworks matter: the way agents reason determines the value they create.
Agents are becoming the next layer of organizational infrastructure — not for execution, but for cognition.
The role of leadership shifts from micromanaging tasks to architecting how the organization thinks.
Hit subscribe to get it in your inbox. And if this spoke to you:
➡️ Forward this to a strategy peer who’s feeling the same shift. We’re building a smarter, tech-equipped strategy community—one layer at a time.
Let’s stack it up.
A. Pawlowski | The Strategy Stack