I appreciated the logic threading through your piece. It reminded me of how chess players train with machines: AI engines, coaching platforms, game databases, smart boards...all designed to simulate complexity until the player is ready to face a human opponent. That system works because the rules are fixed, the feedback is immediate, and the goal is precision.
But talent management isn’t chess. It’s not rule-bound; it’s relational, behavioral, and riddled with contradiction. I remain unconvinced that AI, no matter how refined the workflows, can resolve the behavioral shortcomings that plague HR. Kahneman’s Thinking, Fast and Slow came to mind as I read, because we’ve known for decades that HR decisions are shaped more by bias and cognitive drift than by process. And I’m not sure we’ll ever have the data architecture needed to redesign that.
Tech companies sell a rosy future, but I continue to see end-users misfire. AI is used to draft job descriptions that repel the very candidates they claim to attract. Conditionals and qualifiers are set by humans still operating inside biased systems, so the results remain structurally unsound. You’ve listed valuable best practices, and I agree they’re needed. But I never underestimate the depths of human ineffectiveness...especially when the system demands less cognition, less critical thinking, and less accountability.
AI doesn’t fix that. It amplifies it. Unless we redesign the architecture.
I look forward to learning more from the two of you.
I agree with your assessment: the messiness of real life can’t be fully explained or predicted by any framework. That’s why the biggest challenge for management is leading in paradox.
On one side, systems and new tech adoption demand clear structure and design. On the other, the human side is about messiness, moral dilemmas, and unpredictable behavior. Each is complex enough on its own; integrating the two is a whole new level and will require a new mindset.
The attempt here is to propose some kind of terrain map before we embark on this journey.
But the most important skills are harder to conceptualise into a clean framework: adaptability, mental flexibility, empathy, emotional maturity - all the inner capabilities that help leaders deal with reality as it unfolds.
Edwin, thank you for this deeply thoughtful comment. 🙏
Your chess analogy really resonated with me. It perfectly captures the core difference between a closed system like a game—where rules are fixed and outcomes are measurable—and the open, messy, deeply human system of talent management. In chess, complexity is simulated but ultimately bounded. In organizations, complexity is relational, emotional, and, as you put it so well, “riddled with contradiction.”
I completely agree with your point that AI doesn’t magically resolve human bias or organizational dysfunction. In fact, as you noted, it often amplifies it. Garbage in, garbage out—but faster, at scale. This is why in our piece, we put so much emphasis on the Early Design phase: if the underlying workflows and data architectures are structurally unsound, AI will accelerate the very problems we hope it will solve.
Where I see a glimmer of hope, though, is in reframing this not as a purely technical challenge, but as a leadership challenge and an architectural one:
Redesigning the system before scaling the tech.
If organizations rush to automate broken processes—like biased job descriptions—they’re just paving cow paths. The harder, slower work is to re-architect the hiring and talent systems first, which demands uncomfortable conversations about power, incentives, and accountability.
Pairing technology with judgment.
Kahneman’s “Thinking, Fast and Slow” is spot on here. AI can be the “fast” system—processing at speed—but we need deliberate human oversight as the “slow” system, especially in high-stakes areas like hiring, promotion, and termination. It’s why I believe hybrid teams need explicit escalation paths and ethics committees, not just automation pipelines.
Reinvesting in human critical thinking.
You raise a powerful warning: systems that demand less cognition risk eroding the very human capabilities organizations need most. This is why leaders must actively design for reflection, feedback, and skill-building—otherwise, AI simply turns people into passive executors of machine outputs.
I don’t think AI will ever “fix” the behavioral challenges of HR. But it can give us leverage—if we take the time to rethink the architecture of how decisions are made and who holds the pen.
Your point reminds me of an old systems thinking mantra: “Every system is perfectly designed to get the results it gets.”
If we want different results, we have to redesign the system, not just supercharge it with new tech.
Curious to hear your take: do you see any examples—however small—where organizations are starting to break out of these bias loops rather than just automate them?
PS: please also consider picking up my book "Agentic Strategy" either via my subscription or from Amazon.
Thanks!
—Alex