re:cinq

You Can't Transform What You Can't Read

A companion to Your Engineering Org Is a Prompt Now


The previous post made a simple claim: your engineering organisation is now an instruction set, and most instruction sets running today are garbage. Not because the people are bad. Because the structure was never designed to be executed by anyone who wasn't already inside it. No documented conventions. No structured knowledge. No tooling coherent enough to act on without a colleague to fill the gaps. Human engineers compensate through asking around. Agents cannot.

The response we kept getting was some version of: yes, but how do we know what to change?

Which is the right question. But most of the organisations asking it are already failing the prerequisite. They can't answer it because they don't have an accurate picture of what they're currently running. They have a self-image. An org chart. A set of values on a wall. None of that is the same as an honest read of how work actually moves, where knowledge actually lives, and which structures are actively working against them.

You can't rewrite a prompt you haven't read. And most engineering leaders haven't read theirs.


A vocabulary for things you already know are broken

Our pattern library contains 119 cards across five categories: Transformation, Waterfall, Cloud Native, AI Native, and Anti-Patterns. Each card names something real: a structure, a habit, a dynamic that shows up at a particular stage of organisational maturity.

The naming is the point. Most engineering organisations already know something is wrong. They feel the friction. They see the coordination overhead. They notice that certain conversations happen over and over without resolution. What they lack is vocabulary that lets them talk about it without it immediately becoming personal.

"We have too many managers" is an accusation. "I think we're deep in AP19 – Siloed Handoffs" is a diagnosis. The cards don't make the conversation comfortable, but they make it easier to start.

Here are four dynamics from the previous post, and what the cards have to say about each.

You are paying for meetings that agents don't need

Count them. The roles in your organisation whose primary function is to decompose work, track status, or pass information between humans. Team leads coordinating between squads. Scrum masters running ceremonies. Architects translating business intent into technical direction. Program managers aligning priorities across streams.

Be honest about what that layer costs. Not just in salary. In latency. In context loss at every handoff. In the gap between what gets decided and what gets built, which grows by a small amount at every boundary these roles exist to manage. And then be honest about this: the cost of agent-executed work is falling toward commodity pricing. The cost of the coordination layer above it is not. That ratio gets worse every quarter.

AP19 – Siloed Handoffs is the card that names what this looks like from the outside: communication that flows through formal handoffs, documentation as the default currency of transfer, context that degrades every time it crosses a boundary. It's a structural choice that made sense when the constraint was execution capacity. When a team of ten needed to coordinate with another team of ten, you needed humans to manage the interface.

The constraint has changed. The structure hasn't.

AIN10 – Intent-Driven Development describes what replaces it: capability units that specify intent, agents that execute, humans that review output. The steps that the coordination layer existed to manage between specification and execution compress into near nothing. Which means the roles that managed those steps are managing a gap that is closing.

Pull out those two cards with your leadership team. Ask honestly which one better describes how value moves through your organisation right now. Not aspirationally. Right now. If the answer is AP19, you need to change something.

Your platform team is still building for 2019

Most Cloud Native organisations built something real. An Internal Developer Platform. Golden paths. Self-service provisioning. CI/CD pipelines that actually work. CN08 – Platform Engineering / IDP is the card for this, and it represents genuine, hard-won progress. For the Cloud Native era, it was exactly the right answer: abstract the infrastructure, reduce cognitive load, let capability teams focus on business logic. Teams that have it move faster than teams that don't.

The problem is not that it was wrong. The problem is that it was right for a paradigm that is no longer the frontier. The IDP abstracts infrastructure. It provisions environments. It does not provision, version, validate, or govern the agents that your capability units are about to need at scale. It was built for the Cloud Native wave, and the AI Native wave is already breaking.

AP25 – Platform as Bottleneck is the card that names what happens next. Developers queue for things the platform doesn't self-serve. The platform team becomes a gatekeeper not by choice but by default, because the demand arrived before the capability did. You built a platform for one paradigm and are now running a second one on top of it with no infrastructure underneath.

AIN24 – Agentic DevOps Teams describes the evolution: specialised teams that extend the platform model up one layer, building and maintaining the agents, prompt libraries, validation pipelines, and orchestration patterns that capability units consume. Same principles as the IDP. Different layer. The platform team that gets there first turns CN08 into a competitive advantage. The one that doesn't turns AP25 into a tax on every team they're supposed to be enabling.

The question is not whether your platform team needs to evolve. It's whether they're already moving or waiting to be told.

If your agents can't use your tools, you don't have an AI problem

The most instructive thing happening in engineering organisations that are actually running agents at scale is not the model they chose or the orchestration framework they built. It is the design principle they converged on almost universally: agents should use the same tools, environments, and information systems that human engineers use. Not a simplified version. Not a bespoke retrieval layer built specifically for AI. The same thing.

That principle sounds obvious until you ask what it actually requires. Code search that returns useful results. Internal documentation that is current and structured enough to act on. CI pipelines with clear, actionable signals. Tickets with enough context that someone who wasn't in the room when the work was scoped can still execute on them. For organisations that have invested seriously in developer experience over years, wiring agents into that infrastructure is relatively straightforward. For organisations that haven't, the agents expose every shortcut that human engineers were quietly compensating for.

AP41 – Data Governance Failure is usually read as a data quality problem. It isn't only that. It is a description of any organisation where the information systems are too fragmented, too inconsistently maintained, or too dependent on informal knowledge transfer to be reliably used. When that describes your engineering infrastructure, the consequence for human engineers is friction. They compensate. They ask a colleague, track down the person who knows, or make a reasonable guess. When it describes your engineering infrastructure and you are trying to run agents on top of it, the consequence is systematic failure. Agents cannot ask a colleague. They cannot notice what they don't know. They work from whatever is in the environment, and if the environment is degraded, the output will be too, in ways that look plausible right up until they don't.

This reframes the question entirely. Most leadership teams are asking "how do we give agents access to our knowledge?" The better question is "are our existing engineering systems good enough that an agent could use them the same way an engineer would?" For most organisations, the honest answer to that question reveals problems that predate AI by years.

AIN04 – Agentic Architecture describes what you are building toward: autonomous agents that perceive, reason, and act using the same tools and context that humans do, not a bolted-on integration designed specifically for AI. Getting there does not require a new knowledge layer. It requires your existing engineering infrastructure to be good enough. That is a different and harder problem, because it means the investment is not in AI tooling. It is in the quality and consistency of everything you already have.

Most organisations are not ready for that conversation. They would rather buy an AI product than fix their documentation, their ticket hygiene, their internal search, and their CI signal quality. AP34 – Shiny Object Syndrome names this instinct precisely: chasing the newest tool without assessing strategic fit, mistaking procurement for progress. The organisations running agents successfully got there not by buying better AI tooling. They got there by doing the boring work on the substrate first.

Your best engineers are already doing the math

The previous post raised the talent hollow: what happens when you stop hiring juniors because teams of three don't need them. There is a more immediate version of the same problem: what happens when your seniors read the economics and decide to leave.

The argument for AI-native operating models has gone mainstream. The effective cost of agent-executed development is falling toward commodity pricing. Lean, model-first startups can stand up competitive products in months. Specialised roles are dissolving into generalist AI capabilities. The advice senior engineers are passing around is blunt: if your company is resisting this shift, leave.

AP54 – Human-Last Collaboration is the card that describes what your best people are watching for. Not whether you have adopted AI tools. Whether you have thought about the role of human expertise in an AI-augmented workflow. Organisations that treat AI as a replacement rather than a collaborator don't just lose effectiveness. They lose the people who understand the difference.

AIN19 – Human-AI Collaboration Design describes the alternative: workflows designed so that humans do the things humans are good at and agents do the rest. The organisations getting this right aren't eliminating human judgement. They are concentrating it where it matters most and automating everything around it.

The pattern cards are not just a structural diagnostic. They are a retention signal. If your engineers pull out AP54 and recognise their own organisation, they will not wait for your transformation roadmap. If they pull out AIN19 and see a credible path toward it, they will stay to help build it.

How to use the cards before the situation uses you

The pattern library and workshop toolkit are built for exactly these conversations. Here is the order that works.

Start with the Anti-Pattern cards, not the aspirational ones. Have your leadership team individually sort them by how recognisable each one is in your current organisation. The disagreements between people in the same organisation about whether a pattern applies are the most informative part of the exercise. They show you exactly where your shared understanding breaks down.

Map your current state honestly using the Current State Analysis poster. Not your target state, not your roadmap, not your strategy deck. What is actually true right now. This step is where most leadership teams flinch, because the current state is uglier than the self-image. That discomfort is the point. You cannot close a gap you haven't measured.

Then use the AI Native cards to build a vocabulary for where you're going. Not a roadmap. A vocabulary. Which patterns represent the operating model you are building toward? Which are prerequisites for others? What is the smallest concrete move in the next quarter that shifts you meaningfully in that direction?

The workshop guide walks through this sequence in full, from current state through journey mapping and risk identification, with all 119 cards as your working material.


The window is not permanently open

The previous post ended with a provocation: your engineering org is a prompt now, so write it deliberately.

Here is the part that didn't make it in: prompts that are not written deliberately get written by default. By the accumulated weight of decisions made for other reasons in other eras. By structures that outlasted the problems they were built to solve. By engineering infrastructure that was never quite good enough but was good enough for humans to compensate for. AP32 – Accidental Transformation is the card for this: change that happens without strategic direction, steered by drift rather than intent. Most organisations are not choosing to transform. They are being transformed, accidentally, by a shift they have not yet diagnosed.

Agents do not compensate. They execute on what is there.

And the competitor that should worry you most is not the incumbent you are watching. It is the startup that does not exist yet, one that will start at AIN01 – Model-Centric Architecture by default. No coordination overhead to compress. No siloed handoffs to untangle. No platform debt to pay down. They will not need to read themselves first because there is nothing accumulated to read. They will build the operating model your org is still trying to diagnose, and they will do it in months.

The organisations that close this gap in the next 18 months will have a structural advantage that is genuinely hard to overcome from behind. Not because of the tools they chose. Because of the clarity with which they read themselves first, and the discipline with which they fixed what they found.

The cards exist to help you do that. The window for doing it before it becomes reactive is shorter than most leadership teams want to believe.


Explore the full Transformation Pattern Library, download the free workshop toolkit, or take the free AI assessment to find out where you stand. We also run these workshops in-house — get in touch if you'd like us to run one with your team.
