Shopify’s AI Memo Shows Why We Need to Pair Innovation With Responsibility
When Tobi Lütke’s internal memo on AI adoption leaked, it struck a chord across the tech world. His message was clear: AI usage is now a "baseline expectation" at Shopify, not an optional extra, and employees are urged to integrate AI into their daily workflows.
The memo reflects a growing sentiment in boardrooms everywhere: adapt or die. The general fear is that AI is moving fast and companies that don’t embrace it risk being left behind. The memo captures the urgency and opportunity of this moment, but also exposes a tension many leaders are struggling with: how do you push for AI adoption in a way that is responsible?
What the memo gets right
1. Experimentation is vital
In his memo, Tobi emphasises the importance of tinkering with AI tools and continuously learning. This messaging is spot on because AI isn’t a plug-and-play solution; it requires curiosity, trial and error, and critical thinking. Treating it as part of the "Get Stuff Done" phase is a smart move that encourages creativity and iteration.
2. Reflexive learning is a core skill
In addition, positioning AI as a collaborator is good framing. People who can think with AI, not just through it, will be better positioned to innovate, adapt, and lead. This aligns with the future of work, where knowledge isn’t just in what you know, but how quickly you can learn.
3. Everyone should be at the table
Lastly, the memo emphasises that AI integration applies to everyone, including leadership, ensuring that AI adoption is not siloed within technical teams but embraced across the organisation.
“Everyone means everyone. This applies to all of us, including me and the executive team.” - Tobi Lütke
What the memo misses
While the tone of the memo is visionary, it also verges on urgency without safety. Encouraging teams to “replace” workflows with AI wherever possible sounds efficient, but it risks skipping over a critical question: at what cost?
AI adoption isn’t just a technical or operational challenge, it's a governance one.
1. What is the cost of mandating AI enthusiasm?
Requiring AI usage and tying it to performance reviews could backfire. When experimentation becomes mandatory, it stops being exploration and starts to feel like surveillance. People may use AI not because it makes sense for the task, but because they feel pressured to be seen using it. That’s not innovation, that’s performative compliance.
In addition, using AI “a lot” doesn’t guarantee mastery; it often just scales our gaps. Without thoughtful training, employees may make well-intentioned but risky choices. For example, several high-profile incidents have already shown employees at other companies inadvertently uploading confidential data to public AI tools or downloading malicious AI tools for work, opening the door to data breaches and reputational damage.
2. What are the unintended consequences of tying AI to performance reviews?
Attaching AI usage to evaluations can unintentionally:
Penalise neurodivergent employees who may struggle with rapidly evolving tech
Pressure junior staff to cut corners to appear “AI fluent”
Reinforce tech elitism, creating a divide between early adopters and thoughtful skeptics
This environment may unintentionally encourage shadow AI use, where employees quietly test tools without oversight, guidance, or safety nets.
3. Are we deferring critical thinking to AI?
One of the most overlooked risks of pushing AI reflexively into workflows is decision deferral, where people increasingly defer judgment to AI recommendations instead of thinking critically.
When AI suggestions are taken as “smart defaults,” employees may:
Stop questioning outputs, even when something feels off
Bypass review steps, assuming AI already optimised them
Rely on AI-generated language in sensitive areas like hiring, compliance, or customer support without checking for bias or nuance
This is especially dangerous in high-stakes, complex environments where human judgment, context, and accountability matter. AI can assist our decision-making process, but it should never replace discernment.
Without clear guidelines on when to trust AI and when to question it, companies risk creating a culture of outsourced thinking.
The real takeaway from Shopify’s memo shouldn’t be “use AI or else.” It should be: Let’s talk more. Let’s learn together. Let’s govern better.
If leaders want people to use AI effectively and ethically, they have to provide more than a directive; they need to build capacity.
That means:
Structured, role-specific training on responsible AI usage
Communities of practice where people can share both wins and failures
Clear governance frameworks to balance speed with safety
Psychological safety to report issues without fear
Because yes, AI is a multiplier. But it only multiplies well when paired with human judgment, ethical consideration, and a strong internal culture.
Why This Matters Now
We’re entering an era where AI could enable more entrepreneurs and businesses than at any other time in history. But AI isn’t neutral: it reflects our inputs, our values, and our blind spots. Used well, it’s a force for inclusion and innovation. Used poorly, it could just reinforce the same inequities at scale.
Shopify’s memo is an interesting call to action. But without the proper guardrails, culture, and conversations, it risks becoming a cautionary tale. The future of work isn't just about surviving AI. It’s about thriving with it, responsibly, reflectively, and together.
You can read the full memo here: https://x.com/tobi/status/1909251946235437514