De-Risking AI Starts with Culture
At a Glance
- AI failure is rarely technical – it’s cultural.
- Align ambition with culture before scaling AI.
- Get the free AI Cultural Response Framework to find your fit.
- Radical reinvention ≠ right for everyone – intentional evolution wins.
Hey folks,
Everyone’s talking about their AI strategy. But here’s the question I rarely hear:
Can your culture actually handle the speed and ambiguity AI brings?
Over the last few months, I’ve been working with teams trying to make AI real, not just theoretical. The pattern is clear: the biggest AI risks aren’t technical. They’re human.
That’s why I created something new:
The AI Cultural Response Framework
– Jeff
Why Culture Determines AI Success
When organizations talk about their “AI strategy”, it’s often framed in vague terms: models, aspirational plans for data and infrastructure, and glowing descriptions of employee and customer productivity. But in practice, the biggest failure mode for AI implementation and adoption isn’t technical—it’s cultural. You can invest in the latest large language models, AI-driven productivity tools and the associated hype, but if your company culture can’t adapt to the speed, ambiguity and continuous change AI brings, you risk building something that stalls, gets ignored, or worse, causes damage.
In other words: the tech is tempting and may even be ready for use—but is your company?
Before you roll out a big AI initiative, your real question should be: Can our culture handle the pace and uncertainty that AI demands?
To help you with this, I am introducing the AI Cultural Response Framework: a diagnostic tool to help organizations align their culture and ambition, thereby de-risking their path into AI.
The AI Cultural Response Framework
What it is
The greatest benefits of the AI revolution come from broad adoption of the technology rather than simply automating tedious tasks. This framework is a way to map how your culture might respond to broad AI adoption—whether you lean toward radical reinvention or incremental integration. Instead of asking “what’s the best AI playbook?” it asks, “What’s the best AI playbook for us given our culture, appetite and risk tolerance?”
The two archetypal ends of the spectrum*
Radical Reinvention: Treat AI as a forcing function for a bold, restructuring bet that reboots the culture.
Incremental Integration: Embed AI into existing workflows, build trust, evolve the culture, effect change gradually.
Most organizations fall somewhere in between—but the key is to pick a point on that spectrum intentionally and move from there.
*This model assumes that AI adoption is inevitable, which is why there is no “don’t adopt AI” option on this scale. The overarching belief is that not integrating AI into your culture is a recipe for stagnation and, ultimately, irrelevance.
Why this matters
Culture acts like a filter on AI. If your culture is rigid, bureaucratic and risk-averse, going all-in on AI will likely fail. Conversely, if your culture is nimble and experimentally minded but you impose slow, cautious adoption, you lose momentum and the goodwill of your staff, and you may ultimately be outpaced.
The aim: align your ambition with your culture, rather than fighting it. Do that and you de-risk your path to harnessing real AI value.
Five Case Studies: How Different Cultures Shape AI Strategy
Below are five companies—and how their culture shaped their AI approach.
| Company | Cultural Response | Risk Appetite | Differentiating Move |
|---|---|---|---|
| Intercom | Radical reboot with startup urgency | High | AI-first support pivot (Fin) |
| Atlassian | Incremental, bottom-up integration | Medium | Developer trust, autonomy |
| Salesforce | Marketing-led, cautious adoption | Low | Trusted enterprise AI position |
| Adobe | Empowering creatives; ethics focus | Medium | AI embedded in creative DNA |
| Airtable | Dual-speed culture: fast AI platform team + slow infra group | Medium–High | Balancing bold iteration with long-term stability |
Intercom
Intercom (a customer-support SaaS company) made a pronounced shift: it treated AI not as a feature add-on but as a core strategic bet. The introduction of its “Fin” AI agent was framed as an all-in transformation for customer service and for the company itself. (Bessemer Venture Partners)
This came with cultural implications: moving fast, making decisive bets, reorganizing (and laying off staff). For many companies this would feel radical—labour shifting, roles changing, workflows disrupted. Intercom’s risk appetite was high—but the payoff has been differentiation and velocity.
Atlassian
Atlassian (known for Jira and Confluence) has been following a more evolutionary path. On the product side, “Atlassian Intelligence” and embedded AI apps show gradual integration into existing workflows, from meeting management to knowledge-base assistants. (Atlassian)
On the culture side, though, Atlassian emphasises autonomy, developer ecosystems and incremental improvement. Instead of tearing up everything and starting over, they embed AI into what the culture already excels at. One team is tasked with putting together AI building blocks for the rest of the organization to use; they track which blocks get used most, maintain those, and deprecate the ones that don’t. The risk appetite is medium: faster than the incumbent “move cautiously” approach, but not a full-startup blitz.
Salesforce
Salesforce is emblematic of the late-stage enterprise approach: marketing-led, with a deeply established culture and a large installed base. Their “Einstein” brand and recent “Agentforce” initiatives embed AI across their sales, service and marketing clouds—with a strong emphasis on trust, ethics and enterprise readiness. (Salesforce)
The cultural stance here favours stability, trust, rigorous control. Risk appetite is low compared to a nimble startup—but that’s appropriate to their scale. The differentiator: enterprise-grade AI delivered in a way that fits their culture and customer expectations.
Recently, Salesforce CEO Marc Benioff acknowledged (paywall) that the capabilities of available AI are “outstripping customer adoption” and that it “takes time” for companies to make good use of the technology. This is a change from his initial stance that AI integration is an “easy and quick process.”
Adobe
Adobe has gone all-in on AI — not as a side project or an innovation lab experiment, but as a core part of how people work and create. One of the smartest things they’ve done is treating employees as customer zero for AI. Before anything is released to the public, Adobe’s own people are the ones testing, experimenting, and breaking things.
Since the first beta of Firefly launched in March 2023, thousands of employees have participated in more than 30 internal AI beta programs. That kind of engagement sends a clear signal: experimenting with AI isn’t just for technical teams — it’s everyone’s job. Employees across departments are co-creating Adobe’s AI future, surfacing new use cases, sharing feedback, and building confidence and trust in how AI can boost both productivity and creativity.
To make this work, Adobe didn’t just add AI tools — they redesigned the way work happens. A cross-functional working group called AI@Adobe brings together people from engineering, design, legal, and other functions to guide responsible AI adoption. It’s effectively an internal center of excellence where teams share learnings, coordinate experiments, and ensure knowledge doesn’t get stuck in silos.
They also embedded ethics directly into the structure. Adobe’s AI Ethics Committee and AI Ethics Review Board oversee the development of AI features and research through the lens of the company’s values. These groups collaborate with product teams to ensure that every AI initiative is reviewed and refined before it ships.
The key takeaway: de-risking AI isn’t about buying the right tool — it’s about building the right culture and structure. Adobe made AI safer and more scalable by creating space for experimentation, collaboration, and ethical oversight. The result is a workforce that doesn’t just use AI, but actively shapes how it’s used.
Sources:
- Great Place To Work
- India New England News
- Business Insider
- Adobe
Airtable
Airtable provides a compelling recent example. The CEO publicly described the company’s re-organisation into two segments: a fast-thinking AI platform team shipping bold, weekly releases, and a slow-thinking group making deliberate, long-term infrastructure bets. (Lenny’s Newsletter)
This “dual-speed” model enables Airtable to hedge risk: move boldly where they can, but preserve stability where needed. It matches their medium risk appetite – pushing the boundaries of AI while maintaining a business-as-usual approach to ensure stability.
One of the big questions with this approach is what happens to the employees in the “slow-thinking” group. You’d expect ideas from the AI-forward team to make their way to the BAU team; however, there is a risk that BAU staff get left behind in the AI story and are eventually replaced by the AI half of the company. Time will tell here.
Diagnosing Your Organization: The AI Cultural Response Worksheet
To begin aligning culture + AI, you can use a simple worksheet.
Below are six prompts you can use individually or with your leadership team:
1. What is our company’s current cultural stance toward new technologies?
Are we cautious? Bold? Bureaucratic? Agile? Be honest.
Example answer: “We tend to wait for proven ROI before acting.”
2. Are we willing to restructure or simplify to move faster with AI?
Would we consider reorganizing teams, reducing layers, sunsetting old processes?
3. Do we have the urgency of a startup or the stability of a late-stage incumbent?
Startup urgency fuels speed (but risks chaos); incumbent stability builds trust (but may slow adoption).
4. Which AI initiatives feel like table stakes vs. true differentiation for us?
What do we need to do just to keep up, vs. what will truly set us apart?
5. What cultural obstacles might prevent us from moving faster on AI?
Bureaucracy? Risk aversion? Skill gaps? Unclear leadership signals?
6. Where do we want to position ourselves on the spectrum from “incremental integration” to “radical reinvention”?
Place yourself: perhaps 2/5 today, but aiming for 4/5 in 12 months.
How to use it
Bring your leadership team together to work through this conversation, with the worksheet as the facilitation tool.
Ask each participant to answer the 6 questions individually (5–10 mins).
Then share in small groups and surface patterns or tension points.
The aim isn’t full consensus—it’s to make visible the beliefs, assumptions and tensions around culture + AI.
Mapping Your Position on the Spectrum
After reflection, map your organization on a spectrum:
1-2: You’re mostly incremental—embedding AI into current workflows, low disruption, moderate ambition.
3: You’re in the middle—you’re willing to change but doing so within the framework of existing systems.
4-5: You’re embracing reinvention—AI as a driver of transformation, possibly reorganization, new products, new business models.
Example Interpretation
If you rate yourself a 2/5, you’re likely launching pilots, maybe adding AI into marketing or operations, but you’re still cautious and oriented toward stability.
If you aim for 4/5, you might be hiring an AI platform team, changing product logic, restructuring around AI, and moving into higher-risk, higher-reward territory.
De-risking insight
Moving too far too fast—without matching cultural readiness—creates resistance, failure and wasted investment.
Moving too slowly—despite competitive urgency—may leave you behind.
The sweet spot is intentional evolution: pick the step you can handle, then stretch to the next.
From Reflection to Action: De-Risking Your AI Path
Principle
The safest AI strategy is the one your culture can actually sustain.
Action themes
Align ambition with culture: Don’t copy what other companies (e.g., Intercom) are doing unless your culture supports it.
Choose scope wisely: A pilot is fine; a full-scale transformation may be premature.
Reduce risk through iteration: Treat AI efforts as hypotheses: test, learn, refine.
Build feedback loops: Monitor not just performance metrics but cultural signals—adoption, sentiment (e.g., what are people saying in internal chats), speed of iteration.
Frame culturally: Present AI as enhancement, not replacement. For example, Adobe positions AI as empowering creatives, not replacing them.
Practical Steps
If your culture values deliberation and trust, start with a controlled internal AI use case (for example, enhancing internal documentation search).
If your culture thrives on speed and innovation, you might launch a visible, customer-facing AI pilot—but pair it with explicit learning loops around governance and ethics.
Set your ambition and roadmap with your culture—not despite it. Document your current rating (say 2/5), your target (say 3 or 4/5), and the cultural shifts needed to get there (e.g., faster decision-making, smaller teams, more experimentation).
Use the case studies above as reference points: “Look, Airtable restructured into fast/slow teams around AI” or “Salesforce embedded AI into core products but kept trust & enterprise culture top-of-mind.”
Conclusion: Culture as the Ultimate AI Risk Mitigator
At the end of the day: AI magnifies what you already are. If your culture is slow, conservative, risk-averse—AI will make you slow, conservative, risk-averse faster. If your culture is aligned, adaptive and change-oriented—AI will accelerate it.
Therefore, the work of de-risking AI adoption starts not with models or tech, but with culture and organizational design. Do the reflection first. Map your stance. Decide where you want to go. Then move one cultural step at a time—consistently, deliberately, aligned.
Because an AI strategy without culture is just a gamble. And in the world of enterprise transformation—safe bets win.
P.S. – As I was wrapping up this newsletter, a fascinating new interview showed up in my inbox. Once again, Lenny’s podcast proved its value with a great discussion with Block’s (fka Square) CTO, Dhanji R. Prasanna. I thought Prasanna’s core point was worth adding to this newsletter as an underlying theme.
As I mentioned at the beginning, adoption trumps automation when it comes to AI enablement. Prasanna notes two very important factors in Block’s move to become an AI-forward organization.
To drive AI adoption, leaders have to actually use it. At Block, Jack Dorsey, CTO Prasanna, and the exec team all use Goose, the company’s internal AI automation agent, every day. That hands-on usage is what drove cultural change — not memos, not decks. Prasanna’s advice to other leaders: stop reading think pieces, pick a real problem in your own workflow, and solve it with AI. That’s how you figure out where it’s valuable for the org.
Particularly relevant for large enterprise companies, Prasanna notes that individual business units operating as mini-companies, each with its own mini-CEO, risk developing unevenly and underleveraging AI. Instead of “just adding AI” to each of Block’s business units, he described how Block reorganized by discipline – one engineering org, one design org, etc. The benefits: a company-wide shared language, resource and talent sharing across the company as needed, shared platforms, shared AI tools and a focus on discipline-as-craft. This drove not only productivity gains but also a deep pride in the work – AI-enhanced or not.
Source: YouTube
Upcoming Workshops from Sense & Respond Learning
Objectives and Key Results: Who Does What by How Much?
November 7th – Live Workshop (in English)
Learn to connect strategic ambition to measurable outcomes.
Register here
Lean Product Management
November 17th – Live Workshop (in Romanian)
Flip the script and prioritize learning over output.
Register here
Interested in working together? Please reach out.
In case you need it, here's a description of what I do.
