
On November 6 in London, we co-organized a private dinner with OpenAI, bringing together 30 leaders from pharma, software, technology, telco, and other industries. The discussion centered on one theme: how to move beyond proofs of concept and scale AI responsibly, safely, and profitably into production.
We gathered leaders, operators, and practitioners navigating transformation inside some of Europe’s most innovative organizations. Our own experts, Michał Iwanowski (VP Technology & AI Advisory), Tomasz Rytel (Sr. Director, Partnerships & BD), Tomasz Głowacki (AI Engineering Manager), David Powell (VP Business Development), and Dawid Nguyen (GTM Strategy & Operations Lead), also shared what they’re seeing across industries right now.

Across the conversations, four insights stood out as the most urgent for enterprises moving beyond PoC fatigue:
- AI value must be measurable within ~90 days; otherwise, the scope is wrong.
- Speed and adaptability outperform budgets and headcount.
- People enablement is the true bottleneck, not technology.
- Advisory only works when strategy and implementation stay tightly connected.
Below is a synthesized view of the most important lessons from those discussions.
1. Defining and Measuring AI Success in Business
There is no universal framework for AI value, but disciplined teams share one trait: a short time-to-value cycle, often targeting meaningful impact within ~90 days. This constraint forces clarity, prioritization, and ruthless scoping.
Why AI ROI Fails
When AI projects disappoint, it’s usually because teams:
- measure too early,
- measure the wrong things, or
- pursue “AI for AI’s sake.”
Value emerges when AI is woven into an end-to-end workflow, not when a model is simply added to an unchanged process. Early scoping must be ruthlessly narrow: one high-friction task, automated end-to-end, expanded only once data proves uplift.
How to Define Value Upfront
Value discussions should align with the company’s top-level goals. Cost, revenue, and time savings resonate with CFOs and users alike. Yet it is crucial to separate two types of initiatives:
- Revenue-side: top-line growth (e.g., more customers, more sales)
- Cost-side: operational efficiency, headcount reduction, or process optimization
Revenue-side initiatives almost always produce a stronger business impact.
However, value realization often requires organizational decisions, such as workforce reallocation or process redesign, which must be surfaced early.
Proxies for Early Success
When financial metrics lag, look for operational signals:
- improved speed,
- fewer errors,
- reduced steps in the workflow,
- stronger user adoption or engagement.
Operational wins precede monetary ones.
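To make these proxies concrete, here is a minimal Python sketch; the class names, fields, and figures are illustrative assumptions, not metrics from the dinner. It turns before-and-after operational measurements into the signals listed above.

```python
from dataclasses import dataclass

@dataclass
class WorkflowSnapshot:
    """Aggregated measurements for one period (pre- or post-AI rollout)."""
    avg_cycle_time_min: float   # average time to complete the task
    error_rate: float           # share of runs needing rework (0-1)
    steps_per_run: int          # manual steps in the workflow
    active_users: int           # people using the tool in the period
    eligible_users: int         # people who could be using it

def early_success_report(before: WorkflowSnapshot, after: WorkflowSnapshot) -> dict:
    """Turn raw operational measurements into early proxy signals."""
    return {
        "speed_gain_pct": round(100 * (1 - after.avg_cycle_time_min / before.avg_cycle_time_min), 1),
        "error_rate_delta": round(before.error_rate - after.error_rate, 3),
        "steps_removed": before.steps_per_run - after.steps_per_run,
        "adoption_rate_pct": round(100 * after.active_users / after.eligible_users, 1),
    }

# Hypothetical example: a document-processing workflow two weeks after rollout
baseline = WorkflowSnapshot(42.0, 0.12, 9, 0, 80)
current = WorkflowSnapshot(28.5, 0.07, 5, 46, 80)
print(early_success_report(baseline, current))
```

Reviewing a report like this every sprint gives the team a defensible read on uplift well before the financial metrics move.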
Should Every AI Project Target 90-Day Impact?
For applied AI, yes, unless constraints (data, regulations, infrastructure) make this impossible. Some processes, such as budget planning or marketing mix modeling (MMM), inherently require 6–12 months to show impact. These should still be pursued, but expectations must be clearly set from the beginning.
Avoiding Sunk-Cost Traps
The remedy is simple and powerful:
- Define kill criteria before kickoff.
- Review metrics every 2–3 weeks.
- Pivot or shut down when the signal is flat.
Advisory partners safeguard outcomes, not hours spent.
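As an illustration of what kill criteria defined before kickoff can look like in practice, here is a minimal Python sketch; the thresholds, field names, and dates are hypothetical assumptions, not a prescribed framework.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class KillCriteria:
    """Thresholds agreed before kickoff and reviewed every 2-3 weeks."""
    min_speed_gain_pct: float      # e.g., at least 15% faster than baseline
    min_adoption_rate_pct: float   # e.g., at least 30% of eligible users active
    review_deadline: date          # hard date by which the signal must appear

def review_decision(speed_gain_pct: float,
                    adoption_rate_pct: float,
                    today: date,
                    criteria: KillCriteria) -> str:
    """Return 'continue', 'pivot', or 'kill' based on pre-agreed thresholds."""
    on_track = (speed_gain_pct >= criteria.min_speed_gain_pct
                and adoption_rate_pct >= criteria.min_adoption_rate_pct)
    if on_track:
        return "continue"
    # Flat signal before the deadline: adjust scope; past the deadline: stop.
    return "pivot" if today < criteria.review_deadline else "kill"

criteria = KillCriteria(min_speed_gain_pct=15.0,
                        min_adoption_rate_pct=30.0,
                        review_deadline=date(2026, 2, 1))
print(review_decision(speed_gain_pct=4.0, adoption_rate_pct=12.0,
                      today=date(2025, 12, 15), criteria=criteria))  # -> "pivot"
```

The point is that the decision rule is written down before the first sprint, so a flat signal triggers a pivot or shutdown rather than a debate.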
2. Speed and Adaptability Define Winners
Speed is becoming the ultimate competitive advantage in AI transformation. The fastest learners, not the biggest budgets, set the pace for their industries.
How to Move Fast Without Breaking Governance
Teams that accelerate safely do so by:
- piloting features with controlled user groups,
- excluding sensitive processes in early iterations,
- applying robust AIOps guardrails,
- ensuring observability and traceability from day one (a minimal sketch follows this list).
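As a small illustration of that last point, here is a minimal Python sketch of day-one traceability, assuming a generic callable that produces a model completion; the wrapper, field names, and cohort label are hypothetical and not tied to any specific stack.

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_trace")

def traced_call(model_fn, *, model_name: str, user_group: str, prompt: str) -> str:
    """Wrap any model call so every request leaves a traceable record.

    `model_fn` is whatever callable produces the completion; the wrapper only
    adds a trace id, latency, and pilot-group metadata around it.
    """
    trace_id = str(uuid.uuid4())
    start = time.perf_counter()
    output = model_fn(prompt)
    logger.info(json.dumps({
        "trace_id": trace_id,
        "model": model_name,
        "user_group": user_group,      # e.g., the controlled pilot cohort
        "latency_ms": round(1000 * (time.perf_counter() - start), 1),
        "prompt_chars": len(prompt),   # log sizes, not raw sensitive content
        "output_chars": len(output),
    }))
    return output

# Example with a stubbed model; in practice model_fn would call your LLM client.
print(traced_call(lambda p: p.upper(), model_name="stub-model",
                  user_group="pilot-cohort-A", prompt="summarize the ticket"))
```

Logging sizes and latencies rather than raw prompt content keeps sensitive processes out of early iterations while still making every call auditable.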
What Separates Fast Movers from Slow Ones
Fast organizations:
- validate solutions with real users early,
- build an internal coalition of early adopters (a foundation for future Centers of Excellence) with clear executive sponsorship and stakeholder buy-in,
- define success criteria upfront,
- iterate quickly when expectations are not met.
Slow organizations tend to rely on vague metrics, long planning cycles, and delayed exposure to real environments, with little to no accountability for measurable outcomes.
Can Speed Be a KPI?
Absolutely, but it should focus on how quickly validated value is delivered, not just how fast something ships. A strong KPI is:
“How quickly does the solution demonstrate real, measurable value for users?”
This reframes speed as a function of business impact rather than output volume.
Reliable AI partners internalize urgency, shorten feedback loops, and guide clients through rapid cycles of validation and value realization, not just delivery.
How deepsense.ai Reinforces Speed
- A modular delivery approach shipping value within 10–12 weeks.
- Parallel business and tech workstreams (discovery/design + implementation).
- A boutique, high-touch model supported by proprietary accelerators (such as our own GenAI framework, ragbits).
Where Speed Beats Planning: A Real Example
A rapid MVP rollout to a small group of doctors surfaced unexpected real-world constraints, like microphone quality affecting diarization, and insights impossible to uncover in design-only phases. Iteration unlocked progress far faster than extended planning ever could.
3. People Enablement as the True Differentiator and the Biggest Barrier
Across all our discussions, one theme overshadowed technology:
AI success is, above all, a people transformation challenge.
The companies winning today are not those with the biggest models but those that invest in:
- capability building,
- change management,
- cross-functional collaboration.
Organizational readiness can be assessed using models such as Gartner’s AI Maturity Model, but the real differentiator is intentional AI advocacy. Internal champions who are empowered, trained, and strategically supported accelerate adoption far more effectively than budget or tools.
Why People Are the Hardest Part
Most AI failures come from unchanged processes, misaligned incentives, or cultural friction — not from model quality. Executives expect immediate impact; employees fear disruption (“I don’t want to automate myself out of a job”). Managing that fear through communication and expectation-setting is essential.
How to Build the Right Organizational Muscle
Leaders increasingly rely on Centers of Excellence (COEs) or AI Culture Programs. But these cannot be symbolic. Effective programs include:
- active executive sponsorship,
- representation from key business functions,
- KPIs tied to both culture and outcomes,
- a living AI adoption roadmap,
- a sandbox for rapid experimentation,
- bi-monthly reviews focused on iteration rather than perfection.
A COE must also be anchored in an IT-backed tech stack and organizational alignment across Finance, Operations, and Strategy. Frameworks such as G.O.S.P.A. (Goals, Objectives, Strategies, Plans, Actions) help ensure alignment “from the mail room to the board room.”
Mobilizing people, not models, is the real infrastructure of AI transformation.
4. The Role of AI Advisory
When organizations seek AI advisory, they aren’t asking for more ideas; they’re asking for clarity. Many companies have a general sense of the outcomes they want to achieve, but the scale and pace of the AI ecosystem make it difficult to know:
- where to start,
- what to prioritize,
- which tools or architectures are future-proof,
- how to avoid costly missteps.
That said, it is equally important to acknowledge a reality highlighted by our experts: companies are increasingly looking to partners and consultants not only to prioritize existing ideas, but to propose new, high-potential AI use cases altogether.
The abundance of technologies, frameworks, and emerging possibilities means many organizations expect advisory partners to both frame the opportunity and chart the path toward it.
As one participant jokingly said:
“Trying to keep up with AI news gives me headaches.”
Why Some Organizations Hesitate
Traditional advisory often:
- stays at a high level,
- lacks engineering depth,
- underestimates implementation challenges.
This creates skepticism. Leaders want guidance that reflects real delivery experience, not slides.
Why Our Approach Differs
Classic consulting typically focuses on producing business strategy frameworks and polished reports, valuable for direction-setting, but insufficient for the realities of modern AI. These teams deliver decks, outline high-level operating models, and recommend processes, yet they rarely ship working solutions.
In AI, where tools, best practices, and architectures evolve monthly, this gap becomes problematic. Recommendations age quickly, and long advisory cycles, especially those costing tens of thousands of dollars for a static report, delay real learning. In the same timeframe, and often at comparable cost, an AI-specialized partner can already build and test a proof of concept, generating evidence instead of assumptions.
That is why strategy and strategic AI implementation must operate under one advisory umbrella. Effective AI advisory today requires not only defining what to do but validating it rapidly in production-like conditions. This combination ensures that recommendations remain actionable, current, and grounded in technical feasibility, not theoretical models.
Risk Management in AI Adoption
Without early guidance, companies risk:
- prioritizing low-impact use cases,
- choosing architectures that block future scaling,
- investing in incompatible tools,
- creating patterns that fail at scale.
AI transformation is foundational. Early mistakes compound.
Wrap-Up: Beyond the PoC Era
The dinner reinforced what we see every day in the field:
Enterprises don’t struggle with AI pilots – they struggle with AI scale. Moving from experimentation to scalable AI solutions remains the true barrier, and organizations that succeed share four common traits:
- They measure value in weeks, not years, bringing discipline to measuring AI success in business.
- They move quickly but safely, using guardrails rather than bureaucracy.
- They treat AI as a people transformation, not merely a technology upgrade.
- They rely on advisory models grounded in real implementation, not theory.
As the AI wave accelerates, these disciplines will separate leaders from laggards. The companies that master them will not only scale AI, they will also reshape industries.