Episode 5: Featuring David Eliot
In this essential episode, SparkEffect President Kim Bohr sits down with David Eliot, PhD candidate and author of Artificially Intelligent: The Very Human Story of AI, to explore why democracy matters in AI implementation. Drawing on SparkEffect’s Trust Study—which found AI-driven systems among the most disruptive forces to employee trust—they reveal the stark contrast between corporate-controlled AI rollouts that erode trust and democratic approaches that strengthen it. The centerpiece: Google’s failed Toronto smart city versus Barcelona’s thriving citizen-driven smart city—two radically different paths with radically different outcomes.
This isn’t another generic AI conversation. This is about understanding why most AI implementations break trust, and what leaders can do differently.
In This Episode
The Democracy Problem in AI: Why AI advisory boards filled exclusively with tech executives create dangerous knowledge gaps—and how this shifts democratic participation onto those building the systems rather than those affected by them.
History of the Present: Why understanding AI requires starting at 700 AD with the invention of algorithms—and how knowing foundational technologies helps leaders make sense of rapidly changing AI applications.
Barcelona vs. Google: The complete story of two smart cities—one corporate-controlled and opaque (failed), one democratically run with hidden technology (thriving)—and what this reveals about transparency and inclusion in AI implementation.
The Entry-Level Pipeline Crisis: How AI automation of entry-level jobs threatens organizational talent development, with companies admitting “we’re going to run out of the well to promote from within.”
Retooling vs. Deskilling: Why the current moment is different from past industrial revolutions, and what needs to change in education and corporate training to prevent worker resentment and political radicalization.
AI as Amplifier: How AI amplifies both good and bad in organizations—making existing problems worse but also creating opportunities in medicine, accessibility, and freeing humans from dehumanizing work.
Episode Highlights
[02:14] – Why confusion benefits those in power: “The powers that be in AI actually really benefit from that state. They benefit from people being confused.”
[06:47] – The train metaphor: “A history of trains cannot be disentwined from mercantile trading, Roman roads, moving armies. The train is part of this greater history.”
[17:23] – The two smart cities: Corporate Google Toronto (failed) versus democratic Barcelona (thriving)—and why opacity versus transparency determined outcomes.
[20:10] – Barcelona’s invisible success: “The idea behind their smart city was not to show off the technology. It was to blend into the city and actually uplift people’s lives.”
[25:52] – Job displacement reality check: “We’re in uncharted territory. Companies over-hired during the pandemic and now claim AI justifies layoffs—but is that real or convenient?”
[32:01] – The pipeline problem: “Entry-level jobs are where you get base skill sets for the next job. Eventually we’re going to run out of the well to promote from within.”
[34:07] – Why hope matters: “For everything we talk about the negative, we overlook the positive amplifications—expanding medical access, helping people with disabilities, freeing us from dehumanizing labor.”
[37:32] – The future vision: “If we can design society that takes advantage of this opportunity, it gives us the ability to go back to what makes us more human.”
Resources Mentioned
- SparkEffect Trust Study: 71% of organizations faced disruption; only 36% emerged with stronger trust
- Virtual Health Hub in rural Saskatchewan (Dakota territory)
About Our Guest
David Eliot is a PhD candidate at the University of Ottawa researching the social and political effects of artificial intelligence. His work focuses on making AI understandable and participatory for everyday people, rejecting both fear-driven and naively optimistic narratives. David has advised organizations on AI implementation, spoken at universities about technology ethics, and serves on the board of a literacy charity in Canada. His book Artificially Intelligent: The Very Human Story of AI traces the history of the present from 700 AD to today, making complex AI concepts accessible to non-technical audiences.
Take Action
Download free resources from this conversation at couragetoadvancepodcast.com, including our complete Trust Study findings on AI-driven disruption.
Ready to lead AI implementation that builds trust instead of breaking it? The difference between the 36% who emerge stronger and the 64% who don’t starts with democratic participation.
Courage to Advance is produced by SparkEffect. New episodes drop every Tuesday.