AI Won’t Kill Your Company. Imitation Will.

written by Adrian Maharaj

(Views mine, not Google’s.)

Everyone can rent intelligence now. The edge is variance: making better, different calls than the shop next door. Most teams are doing the opposite: Hype → Tool Rush → Plateau. This time the plateau comes fast because everyone buys the same stack.
For context: Accenture’s Technology Vision 2025 says 36% have scaled gen‑AI, 13% see big enterprise impact, and 77% say trust is the gate to value (overview and full PDF).

The Silent Cliff: “Synthetic Consensus”

When the same public models push the same “optimal” answer, prices bunch, target customers look identical, and everyone ships the same copilot feature. AI isn’t replacing people; it’s erasing competitive differences.

Where leadership is failing (plain talk)

  • Buying theater: No cost‑per‑closed‑task math, only glossy pilots.

  • Pilot museums: No service goals, no rollback plan.

  • Open writes: Bots can change anything they touch, a breach waiting to happen.

  • Trust without tests: No weekly accuracy/safety checks.

  • Vendor lock‑in: Your “brain” can’t move models or clouds.

The Anti‑Imitation Stack (six simple moves)

1) Name it & staff it.
Four parts, four owners: Info (what we know), Brains (models), Helpers (task bots), Plumbing (systems). One accountable owner each. (Accenture calls this the “cognitive digital brain”; see the diagram on p. 5.)

2) Put it on a budget.
Bots read widely but write narrowly (drafts/tickets). Cap time, actions, dollars, and data per task. Keys expire fast.
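The “budget” idea above can be sketched in a few lines. This is a minimal illustration, not a prescribed implementation; the class name, caps, and defaults are all hypothetical, and the dollar figures are placeholders:

```python
import time

class BudgetExceeded(Exception):
    """Raised the moment a task crosses any cap; the bot pauses here."""

class TaskBudget:
    """Per-task caps on wall-clock time, write actions, and dollars.
    Defaults are illustrative, not recommendations."""
    def __init__(self, max_seconds=30, max_actions=15, max_dollars=0.30):
        self.max_seconds = max_seconds
        self.max_actions = max_actions
        self.max_dollars = max_dollars
        self.start = time.monotonic()
        self.actions = 0
        self.dollars = 0.0

    def charge(self, actions=0, dollars=0.0):
        """Record usage after each tool call; stop at the first cap hit."""
        self.actions += actions
        self.dollars += dollars
        if time.monotonic() - self.start > self.max_seconds:
            raise BudgetExceeded("time cap")
        if self.actions > self.max_actions:
            raise BudgetExceeded("action cap")
        if self.dollars > self.max_dollars:
            raise BudgetExceeded("dollar cap")
```

The point of the sketch: the bot never decides when to stop spending; the budget object does, and every `charge` is a natural place to log.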

3) Run a weekly scoreboard (tie comp).
Publish: Success %, Incidents/1k, Escalation %, Hallucinations caught, $/job vs cap, Time ask→done, Customer‑happiness Δ. Trust is a number you track, not a poster.
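To make “trust is a number” concrete, here is one way to roll per-task logs into the weekly numbers. The field names and log shape are assumptions for illustration, not a standard schema:

```python
def scoreboard(tasks, cap_dollars):
    """tasks: list of per-task log dicts with keys: ok (bool),
    incident (bool), escalated (bool), hallucination_caught (bool),
    cost (dollars), seconds (ask -> done)."""
    n = len(tasks)
    return {
        "success_pct": round(100 * sum(t["ok"] for t in tasks) / n, 1),
        "incidents_per_1k": round(1000 * sum(t["incident"] for t in tasks) / n, 1),
        "escalation_pct": round(100 * sum(t["escalated"] for t in tasks) / n, 1),
        "hallucinations_caught": sum(t["hallucination_caught"] for t in tasks),
        # ratio of average cost to the per-task cap; > 1.0 means over budget
        "avg_cost_vs_cap": round(sum(t["cost"] for t in tasks) / n / cap_dollars, 2),
        "avg_seconds_ask_to_done": round(sum(t["seconds"] for t in tasks) / n, 1),
    }
```

If the logs from rule 4 of the house rules below exist, this table is a one-liner to publish every week.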

4) Fix the data before more bots.
Shared names/formats, freshness SLAs, lineage. Keep an approved‑tools list. No list → no production.

5) Be yourself—and disclose.
Teach tone + policy + what not to do. Disclose AI, ask consent, escalate on money/safety/emotion. (Accenture warns about bland, same‑sounding bots: personify to stand out; be transparent. See pp. 22–33.)

6) Stay portable.
Have a Plan‑B model, portable data, policy‑as‑code. If your “brain” can’t move clouds, you’re renting it. (Accenture’s “Binary Big Bang”: agents + digital core + generative UI; pp. 14–21.)
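A Plan‑B model is only real if the switch is rehearsed in code. A minimal sketch, assuming you write one small adapter per vendor so every provider exposes the same call signature (the adapters here are hypothetical stand-ins, not real vendor SDK calls):

```python
def call_with_fallback(prompt, providers):
    """providers: ordered list of (name, callable) pairs, primary first.
    Each callable is a thin per-vendor adapter sharing one signature;
    keeping prompts and policies vendor-neutral is what makes the swap cheap."""
    errors = {}
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # outage, rate limit, pricing shock
            errors[name] = str(exc)
    raise RuntimeError(f"all providers failed: {errors}")
```

Run the fallback path on purpose (a drill, not an emergency) so the portability claim is tested, not assumed.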

One‑liners your team will remember:
  • Bots get a credit limit, not a blank check.

  • Trust is a scoreboard, not a slogan.

  • Read widely, write narrowly.

  • Fix the data kitchen before hiring more chefs.

  • Generic models make generic brands.

  • If your brain can’t move, you’re renting it.

Lead like this (role by role)

CEO — Stop asking “How much will AI save?” Start asking “Where will we be undeniably different?” Mandate the scoreboard. Green‑light one proprietary data asset and one public, personified customer experience. (See the brand/persona section above.)

CFO — Enforce $‑per‑closed‑task and payback; set hard spend caps per agent; practice model portability to avoid vendor pricing traps.

COO / GM — Pick 3 workflows and run them end‑to‑end under clear service goals and draft‑by‑default writes. Bonus on cycle time and fewer exceptions. Promote agents canary → shadow → guarded‑write → full.

CTO / CIO — Build the approved‑tools list + data contracts; log every tool call; keep two model tracks (open + commercial). (Agents as first‑class users of the digital core; see p. 16.)

CRO / CMO — Ship intent‑first flows (not demo chat boxes). Personify + disclose. Measure time‑to‑value, not clicks.

CHRO (People leader) — Pay bounties for automations; design human‑agent tandems; train when to escalate. (See The New Learning Loop, pp. 47–58.)

Field notes (fail → fix)

Airfare convergence optics (composite scenario).
Several carriers adopt the same dynamic‑pricing vendor + AI add‑on. Fare bands converge after common signals (events, load factors). Social chatter calls it “price fixing.”
Tech fail: 0. Leadership fail: 100%.
Fix: Add price variance guardrails (diversity checks + small randomization), put price moves behind human approvals, and track a unique decision rate so automation doesn’t herd to the same answer set.
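The two guardrails in that fix can be stated in a few lines. A minimal sketch with illustrative names and bounds (the 2% jitter is a placeholder, not a pricing recommendation):

```python
import random

def unique_decision_rate(decisions):
    """Share of distinct outputs across comparable decisions.
    If this drifts toward zero, your automation is herding with
    everyone else's: same vendor, same signals, same answer."""
    return len(set(decisions)) / len(decisions)

def jitter_price(price, pct=0.02, rng=random):
    """Small bounded randomization so identical inputs don't always
    produce identical fares; keeps you inside a human-approved band."""
    return round(price * (1 + rng.uniform(-pct, pct)), 2)
```

The unique decision rate belongs on the weekly scoreboard next to the trust numbers; the jitter only matters if its bounds sit behind human approval.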

Renewal fiasco.
Discounts converged across rivals using the same vendor; customers cried price fixing.
Fix: Pricing‑variance guardrails, sensitive‑write reviews, human sign‑off.

Concierge backlash.
Undisclosed bot applied a policy your humans wouldn’t; refunds went viral.
Fix: Tune tone + policy, disclose, set escalation thresholds.

Shadow copilot breach.
Unvetted browser extension scraped PII.
Fix: AI allowlist, short‑lived keys, telemetry.

Pilot museum.
14 demos; 0 production.
Fix: Data contracts + tools list + promotion gates.

Budgeted Autonomy House Rules for AI Helpers

Plain idea: Treat every AI like a new hire with a company card and a learner’s permit.
Small keys. Short trips. Clear limits. Freedom is earned by the numbers.

The 5 rules

  1. Small keys, short time. Minimal access; logins auto‑expire.

  2. Set a spend limit per job. Cap minutes, actions, and dollars.

  3. Drafts first, not live. It prepares emails/quotes/tickets; people send.

  4. Money & policy need a human. Prices, refunds, personal data = approval.

  5. Prove it, then promote it. Lab → ride‑along → supervised writes → trusted.

The 5 numbers to check weekly

  • Works Rate (finished well out of 100)

  • Uh‑ohs (incidents per 1,000)

  • Hand‑offs (% a human took over)

  • Cost per job (under cap?)

  • Speed (ask → done)

Tie bonuses to these. If trust goes up, freedom goes up.

Two quick examples

A) Renewal‑Quote Helper (B2B)

  • Limits: ≤30s, ≤15 actions, ≤$0.30

  • Writes to: Draft quote in CRM only

  • Human OK if: price change >5% or discount >10%

  • Extra guardrail: shows two different quote options to avoid sameness
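Example A’s approval rule is simple enough to encode directly. A sketch with the thresholds from the example as parameters (the function name and signature are made up for illustration):

```python
def needs_human_ok(old_price, new_price, discount_pct,
                   max_change_pct=5.0, max_discount_pct=10.0):
    """True if the draft quote crosses either threshold from example A:
    price change > 5% or discount > 10%. Thresholds are parameters so
    each workflow can set its own; the bot only ever drafts either way."""
    change_pct = abs(new_price - old_price) / old_price * 100
    return change_pct > max_change_pct or discount_pct > max_discount_pct
```

The useful property: the rule lives in one reviewable place instead of inside a prompt, so changing the threshold is a code review, not a prompt tweak.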

B) Customer‑Reply Helper (Retail/Travel)

  • Limits: ≤20s, ≤8 actions, ≤$0.10

  • Writes to: Draft reply; can’t issue refunds

  • Human OK if: refund > $25 or customer is angry/legal/medical/at‑risk

Paste‑and‑Go “One‑Page Policy”

Access: least privilege; logins expire ≤24h (sensitive ≤1h).
Per‑job limits: time / actions / dollars; hit a cap, the bot pauses.
Writes: Draft by default; sensitive actions require approval (two people for big ones).
Logs: track what the bot did and why.
Weekly check: the five numbers above; freedom up or down based on results.

Promotion ladder (like a license): Practice → Ride‑along → Supervised → Trusted.
Level‑up when: Works Rate ≥96% for 2 weeks; 0 serious incidents for 14 days; cost under cap for almost all runs; hand‑offs trending down.
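The level‑up bar above can be made a mechanical check against the weekly logs. A sketch, assuming one stats dict per day; the 5%‑of‑runs reading of “almost all runs” is my assumption, not from the policy:

```python
def can_level_up(daily, works_rate_floor=96.0, window=14):
    """daily: chronological list of per-day dicts with keys:
    works_rate, serious_incidents, cost_over_cap_runs_pct, hand_offs_pct.
    Encodes the ladder's level-up bar over the last `window` days."""
    recent = daily[-window:]
    if len(recent) < window:
        return False  # not enough history to judge
    return (
        all(d["works_rate"] >= works_rate_floor for d in recent)
        and all(d["serious_incidents"] == 0 for d in recent)
        # "almost all runs" under cap, read here as <= 5% of runs over
        and all(d["cost_over_cap_runs_pct"] <= 5.0 for d in recent)
        # hand-offs trending down: end of window no worse than start
        and recent[-1]["hand_offs_pct"] <= recent[0]["hand_offs_pct"]
    )
```

Demotion is the same function run in reverse: if the check stops passing, the agent drops a rung.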

Your Trust Scoreboard (template you can post)

Workflow · Owner · Agent/Version · Volume · Success % · Incidents/1k · Escalation % · Hallucinations caught · Budget Cap $/task · Actual $/task · Avg Actions/Task · Time‑to‑Intent · Time‑to‑Close · Customer‑Happiness Δ · Notes.
(Accenture’s core point: trust gates value; make it visible, weekly. See the intro and stats.)

Respectfully—what Accenture gets right, and what leaders still need

Right: the four‑layer digital brain, the Binary Big Bang, and trust as the limiter.
Still needed (the Monday‑morning HOW):

  • From frame to owners. Four layers, four accountable owners with weekly numbers. p.5.

  • From gas to brakes. Function registry, draft‑by‑default, budget caps so autonomy won’t outspend strategy. pp.14–21.

  • From voice to policy. Personify and disclose; encode values + consent + escalation. pp.22–33.

  • From pilots to physics. For robots: design around energy & latency tiers (on‑device reflexes, edge context, cloud planning). pp.34–46.

  • Name the blind spot. Synthetic consensus: when everyone buys the same stack, decisions herd. Add variance guardrails (price diversity checks, unique decision rate).

Net: Accenture gives you the what and why; this is the how: the playbook for not becoming a commodity when everyone adopts the same tech.

If you only do 5 things in 30 days

  1. Publish the scoreboard weekly; tie comp.

  2. Stand up an approved‑tools list + 3 data contracts.

  3. Enforce Budgeted Autonomy everywhere.

  4. Personify & disclose one frontline experience; measure lift.

  5. Promote one agent from shadow → guarded‑write; tell the story.

30‑second team script

“AI is an assistant, not a free agent. It gets small keys and a spending cap. It drafts; we approve the important stuff. We watch five numbers each week. If the numbers look great, it earns more freedom. If not, we dial it back.”

Early movers set moats that last a decade. Laggards inherit other people’s systems and their risks.
Share this with the exec forwarding AI vendor decks. Save it if you plan to lead, not copy.
