The Product Architect

Chapter 3

From Deterministic Software to Negotiated Software

The contract between user and machine has changed. The product has to change too.

Stage · Reframing

Reading time · 17 min

Thesis · entry claim

Software used to be mostly deterministic: the same input produced the same output, and the system was either right or broken. Software is increasingly negotiated: the system interprets, suggests, drafts, and acts on partial information; the user reviews, corrects, and adapts. Designing for negotiation is a different craft than designing for execution.

Surface statement · system implication

The contract that used to hold

For most of computing, software made a clean promise: same input, same output, every time.

You typed a value into a cell, and the formula returned the right result. You clicked a button, and the system performed the action. You sent a request, and the server returned the answer or an error. In the user’s experience, the system was either correct or it had a bug.

There was no third option.

That promise was load-bearing. Whole genres of work depended on it. Accounting, banking, scheduling, inventory, billing, transit, payroll — all of them rely on systems that do exactly what was specified, no more and no less.

When a deterministic system fails, the failure is legible. You can trace it. You can prove it. You can fix it once and trust it again.

The deterministic contract also shaped the discipline of building software.

The spec described what the system would do. The engineer made the system do exactly that. The tester proved it did. The user, eventually, learned what to expect.

Trust was a function of correctness. Correctness was a function of specification. For most of the work that runs the world, that contract still holds. It should keep holding.

Wherever being slightly wrong is unacceptable, deterministic software is the right answer. You do not want a payroll system to “suggest” a salary. You do not want a bank transfer to “interpret” an amount. You do not want a medical dosage calculator to be approximately helpful.

Determinism is not over. It is no longer the only mode the product is allowed to be in.

Where exactness is no longer enough

Increasingly, the work users want done is not the kind a deterministic system can do well. They want a draft, not a transformation. A recommendation, not a calculation. A system that can take partial information, fill in what it can, and ask only when it has to.

The same software now contains both modes.

The cell that holds your salary is deterministic; the assistant in the corner suggesting how to phrase a sentence is not. The bank transfer has to move an exact amount, while the category applied to that transaction may only be a strong guess.

Most products that feel modern are quietly composed of two kinds of behavior: one exact, one negotiated. The user moves between them without always being told which is which.

That is where the product starts to become dangerous, brittle, or excellent. The important design question is not which mode is better. It is where the seam belongs.

A product has to know when it owes the user exactness and when it is allowed to offer a proposal. Put negotiation where the user needs certainty, and the product becomes dangerous. Force exactness where the user needs interpretation, and the product becomes useless.

The amount of a transaction should be exact. The category may be negotiated. The meeting time on the calendar should be exact, while the suggestion of when to meet can remain a proposal.

The line is not always obvious, but the product has to draw it. Designing modern software means designing that seam. It means knowing which parts of the product must behave like infrastructure and which parts may behave like a collaborator.

The danger is not negotiation itself. The danger is negotiation placed where the product owes certainty.

Imagine an expense product that lets a model infer whether a transaction is reimbursable and then submits it automatically. The category may be negotiable. The explanation may be negotiable. The suggested policy match may be negotiable. But the actual submission crosses into accountability. If the product treats that final act as another suggestion, the user is no longer collaborating with the system. They are cleaning up after delegated authority.

The reverse failure is quieter but common. A product may force exactness where the user needed interpretation. An expense tool that refuses to accept a messy merchant string until the user cleans it up by hand is not being safer. It is asking the user to do the interpretive work the product was introduced to absorb. Exactness in the wrong place becomes bureaucracy.

Good product judgment is not pro-negotiation or pro-determinism. It is accurate about which contract each moment requires.

The seam is where usefulness becomes risk.

When the old vocabulary starts to slip

A spec for a deterministic feature can describe every output it should produce.

A spec for a negotiated feature cannot.

It can describe the shape of the interaction. It can describe the data available to the system. It can describe constraints, boundaries, and failure states. But it cannot list every valid output in advance, because the point of the feature is that the system is interpreting context.

The interesting design questions stop being only about the answer and start being about the conversation around the answer.

The system has to reveal confidence, frame a suggestion as a suggestion, and make correction feel like a normal part of the loop.

Most of all, it has to avoid acting certain when it is only making a reasonable guess.

The system that pretends it is still deterministic when it is not is the one that loses the user’s trust first. It returns a wrong answer with the same posture it would have returned a right one. It hides its uncertainty. It treats correction as exception handling instead of a normal step in the loop.

The user learns, slowly and then all at once, that the surface is no longer telling them the truth.

That is not only an AI problem. It is a product problem. When a system proposes, drafts, classifies, recommends, summarizes, or acts on partial information, it has entered a different contract with the user. The product has to admit that.

The new loop

The shape that replaces deterministic call-and-response is a loop. Not because the system runs in circles, but because every consequential step is a step that can be revisited.

In deterministic software, the user gives input, the system executes, and the loop closes.

In negotiated software, the loop stays open longer. The system forms an interpretation. It offers a proposal. The user reviews. The user corrects. The system adapts. That loop is the product.
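The loop above can be sketched in code as a minimal state machine. Everything here is illustrative — the names, the hypothesis-building, the confidence number are invented for the sketch, not taken from any real product:

```python
from dataclasses import dataclass, field

@dataclass
class Suggestion:
    content: str        # what the system proposes
    rationale: str      # why it proposed it
    confidence: float   # 0.0-1.0: how sure the system is

@dataclass
class LoopState:
    intent: str                                       # the user's partial request
    corrections: list = field(default_factory=list)   # every correction is a signal

def interpret(state: LoopState) -> str:
    """Form a working hypothesis from the intent plus accumulated corrections."""
    hypothesis = state.intent
    for correction in state.corrections:
        hypothesis += f" (adjusted: {correction})"
    return hypothesis

def suggest(hypothesis: str) -> Suggestion:
    """Propose an answer — not the answer."""
    return Suggestion(content=f"draft based on: {hypothesis}",
                      rationale=hypothesis,
                      confidence=0.7)

def run_loop(state: LoopState, review) -> Suggestion:
    """The loop stays open until the user's review accepts a proposal."""
    while True:
        proposal = suggest(interpret(state))
        correction = review(proposal)         # user reviews; None means accepted
        if correction is None:
            return proposal
        state.corrections.append(correction)  # adaptation: the next pass inherits it
```

The point of the sketch is the `while True`: there is no single exit after execution. The loop closes only when review says so, and every correction flows back into the next interpretation.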

Intent

The user shows up with a partial idea of what they want. Not always a fully specified request. Often a draft of one.

“Make this email warmer.”

“Find a meeting time for the team next week.”

“Categorize these expenses.”

“Finish this function the way the file already does it.”

“Help me turn these notes into a plan.”

The intent is the seed. Everything that follows is built from it.

A deterministic system waits for specification. A negotiated system has to work with intention before it is fully specified.

That is useful. It is also risky, because the less explicit the intent, the more responsibility the system takes on when it interprets it.

Interpretation

The system reads the intent and forms a working understanding of what was asked. This is the step deterministic software did not have in the same way.

Interpretation is where models, heuristics, defaults, and inferred context combine into a hypothesis about the user’s request. It is also where the system makes its first assumptions — and where every later step inherits whatever it got wrong.

If the user says, “Make this warmer,” does warmer mean friendlier, less formal, more emotional, more human, more persuasive, less harsh?

If the user says, “Find time next week,” does that include early mornings, travel time, focus blocks, tentative events, colleagues in other time zones?

If the user says, “Categorize these expenses,” should the system optimize for accounting rules, team habits, tax categories, reimbursement policy, or the user’s previous corrections?

The system never only interprets the input. It interprets the world around the input. That is where product judgment begins.

Suggestion

The system proposes an answer, not the answer. An answer it should be willing to be corrected on.

The shape of the suggestion matters as much as its content. A draft offered as a finished product is rude; the same draft offered as a starting point is collaborative.

The framing teaches the user how seriously to take it.

A suggestion can say, quietly or explicitly, “I know,” “I think,” “I guessed,” “I need your review,” or “I can do this, but you should check the part that matters.”

Those are different product postures. A negotiated product fails when it speaks with the same confidence in all of them.

Review

The user reads the suggestion and decides whether it lands. This step is almost invisible in deterministic software because there is usually nothing to evaluate beyond: did the system do the thing?

In negotiated software, review is often the user’s actual job. The product has to make the suggestion legible enough for review to be quick and honest.

That means showing the right context. It means making assumptions visible. It means making differences easy to inspect. It means helping the user know where to look.

A summary without source references is harder to review. A categorized transaction without a reason is harder to trust. A rewritten email without showing what changed forces the user to compare from memory. A meeting suggestion without exposing constraints turns scheduling into guesswork.

Review is not a passive step. It is a designed state. If the product does not design review, the user still reviews. They just do it slowly, suspiciously, and alone.

A designed review state has artifacts.

It shows the source material. It marks what changed. It distinguishes facts from inferences. It exposes low-confidence fields. It lets the user approve the whole suggestion or challenge one part without throwing the work away.

Review is not “look at the result.” Review is the product making the result inspectable.
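One way to make an inspectable result concrete is to ship the suggestion with its review artifacts attached. A hypothetical payload — every field name here is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewableSuggestion:
    """A suggestion packaged for review: the result plus its artifacts."""
    result: dict                                        # the proposed output
    sources: list = field(default_factory=list)         # where the material came from
    changed: list = field(default_factory=list)         # fields that differ from the input
    inferred: list = field(default_factory=list)        # fields that are guesses, not facts
    low_confidence: list = field(default_factory=list)  # fields that deserve a closer look

    def needs_attention(self) -> list:
        """The fields the user should actually inspect: inferred or shaky ones."""
        return sorted(set(self.inferred) | set(self.low_confidence))
```

A product built this way can render review as a short checklist — `needs_attention()` — instead of asking the user to re-derive the whole result from memory.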

Correction

The user changes what the system got wrong. Or accepts it. Or rejects it. Or asks for a different angle.

Correction has to be cheap.

A correction that takes longer than redoing the work from scratch is a correction the user will stop offering. Once they stop, the product loses the only signal it had about whether it was helping.

Correction is not an error state in negotiated software. It is part of the main path.

That changes how the interface should feel. The user should not feel like they are fighting the system when they correct it. They should feel like correction is expected, respected, and used.

A good negotiated product does not punish the user for being specific. It gets better because the user was.

Adaptation

The system updates what it does next based on the correction.

Sometimes that update is temporary. The next suggestion in the same conversation reflects the last edit. Sometimes it is durable. The system remembers the preference and stops making the same mistake. Sometimes it is global. Many users correct the same way, and the product adjusts its default behavior.

Adaptation is what turns a one-shot tool into a product.

Without it, the user has to teach the same lesson every session. They correct the tone again. Reclassify the same merchant again. Remove the same kind of suggestion again. Reject the same automation again.

Eventually they stop correcting. Not because the product got better, but because they stopped expecting it to.

Adaptation is where the product proves it was listening.

The clearest place to watch the whole loop run, several times a minute, is an IDE’s code autocomplete. The intent is the cursor mid-statement and the surrounding file. Interpretation is the system reading style, prior functions, the import graph, the test the user just opened. The suggestion is greyed-out text that proposes a continuation. Review is the half-second the user spends scanning it. Correction is tabbing it in, retyping the first character, or deleting through the rest. Adaptation is the next suggestion in the same file behaving like the user’s last edit was a signal — sometimes immediately, sometimes only after enough corrections accumulate to count as a pattern. When any of those steps is weak, the autocomplete becomes the thing it is hardest to forgive: an interruption that costs more than typing the line yourself. When all of them are tight, the same suggestion feels like a colleague who already read the file.

When any one of these steps is missing, the experience falls apart in a specific way.

No interpretation: the system does what was said and not what was meant.

No suggestion: the system either acts too soon or leaves the user with all the work.

No review: the user has to accept whatever was produced or start over.

No correction: the user is trapped inside the system’s first guess.

No adaptation: the user has to teach the same lesson every session.

Negotiation is not weakness in the system. It is the new shape of the system.

The seam in practice

Take expense categorization.

Some parts of the experience must remain deterministic. The transaction amount is not negotiated. The date is not negotiated. The currency is not negotiated. The account the transaction came from is not a matter of tone, preference, or model interpretation.

Those parts are facts. The product owes the user exactness. But the category is different.

A merchant name may be messy. The same vendor may mean different things for different users. A train ticket may be commuting, client travel, or personal spending. A software subscription may be engineering, marketing, operations, or reimbursable client work.

Here, the product is not calculating. It is interpreting. A weak product hides that distinction. It places the category beside the amount with the same posture, as if both are equally factual.

A better product treats the category as negotiated.

It might show why it suggested “Travel.” It might mark the category as inferred. It might make correction one click. It might remember that this vendor should be categorized differently for this user. It might avoid auto-submitting the expense until the uncertain fields are reviewed.

The user should not have to wonder which parts of the system are exact and which parts are guessed. That is the seam.

A modern product can contain exactness and negotiation in the same flow. But the design has to tell the truth about which mode the system is in.
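That truth-telling can start in the data model itself. A sketch, with invented type names, in which a guessed value cannot masquerade as a fact:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass(frozen=True)
class Exact(Generic[T]):
    """A fact. The product owes the user this value as-is."""
    value: T

@dataclass(frozen=True)
class Negotiated(Generic[T]):
    """An interpretation. It carries its own uncertainty and provenance."""
    value: T
    confidence: float
    reason: str
    confirmed: bool = False   # flips to True only after the user reviews it

@dataclass
class Transaction:
    amount: Exact[float]        # never negotiated
    currency: Exact[str]        # never negotiated
    category: Negotiated[str]   # a strong guess until confirmed

def ready_to_submit(txn: Transaction) -> bool:
    """Submission crosses into accountability: uncertain fields are reviewed first."""
    return txn.category.confirmed
```

The seam is now visible in the type signature: any code that touches `category` is forced to acknowledge that it is holding an interpretation, not a fact.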

Designing for negotiation

Designing for negotiation changes the work in three concrete ways.

1. The system has to expose enough of its assumptions

Not the literal model trace. That is rarely useful. The user does not need a technical explanation of every token, weight, rule, or retrieval step. They need enough of the assumptions behind the output to decide whether the proposal deserves trust.

“Categorized as travel because the merchant is an airline.”

“Using last week’s preferences for this kind of meeting.”

“Drafted in a warmer tone based on your previous edit.”

“Summarized from these three documents.”

“Low confidence: two possible matches found.”

The user does not need everything. They need enough. A product that exposes nothing forces the user to guess why it behaved the way it did. A product that exposes too much turns review into work. The craft is in making the right assumption visible at the right moment.

2. Correction has to be cheap by default

Correction is not a fallback. It is a first-class part of the loop.

A negotiated product that buries the undo, hides the override, or makes the user navigate three menus to teach the system its mistake has not understood its own shape.

If the system suggests a category, changing it should be easier than creating the category manually.

If the system drafts a message, steering the tone should be easier than rewriting the whole thing.

If the system summarizes a document, challenging one claim should not require rejecting the entire summary.

Correction should feel like collaboration, not cleanup. The cost of correction determines how much the user is willing to teach.
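The cost argument suggests an API shape: a single correction call that both applies the fix and records it as a signal, so the user never pays twice for being specific. Illustrative only:

```python
from dataclasses import dataclass, field

@dataclass
class CorrectionLog:
    """Every correction is both a fix and a lesson; record it once, use it twice."""
    signals: list = field(default_factory=list)

    def correct(self, record: dict, key: str, new_value):
        old_value = record[key]
        record[key] = new_value                           # the fix: applied immediately
        self.signals.append((key, old_value, new_value))  # the lesson: kept for adaptation
        return record

log = CorrectionLog()
expense = {"merchant": "DB Bahn", "category": "Travel"}
log.correct(expense, "category", "Commuting")   # one gesture, not three menus
```

Nothing about the design is clever; the point is that the teaching signal is a side effect of the fix, not a separate form the user has to fill in.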

3. Adaptation has to compound

The system that learns within a session and forgets between sessions is a system the user has to keep training.

That does not mean every correction should become permanent memory. Careless memory is dangerous. A single edit is not always a preference. A one-time exception should not become the default.

But some corrections should compound. The product has to decide what should be remembered, what should expire, what should be confirmed, and what the user should be able to inspect or delete later.

This is where adaptation becomes a product layer, not just a model capability.

The product that remembers carefully — the corrections, the preferences, the pushback — and lets the user see and edit what was remembered is the one that earns a relationship instead of a transaction.
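One sketch of that layer, with invented names: a correction becomes a default only after it repeats, a one-time exception stays a hint, and the user can list and delete whatever was kept:

```python
from dataclasses import dataclass, field

@dataclass
class PreferenceMemory:
    """A single edit is a hint; a repeated edit becomes a remembered preference."""
    promote_after: int = 2                        # corrections needed before a hint becomes a default
    hints: dict = field(default_factory=dict)     # (key, value) -> how often it was corrected this way
    defaults: dict = field(default_factory=dict)  # key -> remembered preference

    def observe(self, key: str, value: str):
        count = self.hints.get((key, value), 0) + 1
        self.hints[(key, value)] = count
        if count >= self.promote_after:           # a one-time exception never becomes the default
            self.defaults[key] = value

    def inspect(self) -> dict:
        """The user can see exactly what was remembered."""
        return dict(self.defaults)

    def forget(self, key: str):
        """And delete it."""
        self.defaults.pop(key, None)
        self.hints = {k: v for k, v in self.hints.items() if k[0] != key}
```

The threshold, the inspect call, and the forget call are the product decisions the chapter is describing; the model capability underneath does not provide any of them for free.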

Most AI-assisted products today have at least one gap in this loop. They suggest while hiding their assumptions. They explain themselves but make correction expensive. They remember without giving the user a way to inspect what was remembered.

Each of those gaps is a deterministic habit asserting itself in a place where it no longer fits.

The simplest artifact for this work is a seam sentence:

  • exact when the product is handling money, identity, permission, commitment, or irreversible state
  • negotiated when the product is drafting, classifying, ranking, summarizing, recommending, or preparing work for review

For a real feature, write it more specifically:

This feature should be exact when it moves the transaction into the submitted report, and negotiated when it suggests the category, flags policy risk, or drafts the explanation.

That sentence is often clearer than a page of vague AI requirements.

Framework

Negotiated Software Loop

A model for the new contract between user and machine in probabilistic systems.

01 — Intent

The user arrives with a goal, often only partially specified.

The product must treat the request as intention, not always as complete instruction.

02 — Interpretation

The system forms a working understanding from the request, context, defaults, history, and available data.

This is where assumptions enter.

03 — Suggestion

The system proposes an output, action, classification, draft, or next step.

The proposal should be framed according to its confidence and stakes.

04 — Review

The user evaluates whether the suggestion fits the goal, context, and consequences.

The product must make review fast, honest, and well-supported.

05 — Correction

The user edits, rejects, redirects, accepts, or clarifies what the system produced.

Correction should be cheap, expected, and respected.

06 — Adaptation

The system changes what it does next based on the correction.

Adaptation may be temporary, durable, or global — but it must be deliberate.

The loop works only if correction is cheap and adaptation is trustworthy.

If the user cannot correct the system, negotiation collapses into automation.

If the system cannot learn from correction, negotiation collapses into repetition.


Each correction changes the next interpretation.

Add to portfolio · Seam sentence

Seam sentence for the AI feature you walked

Commit the artifact this chapter produced. The portfolio strip in Chapter 11 reads back what you have written here.


Next · Chapter 4

Designing Behavior, Not Just Interaction

The product is not only responding to clicks. It is deciding how to behave when intent is partial and correction is part of the work.
