Voice · April 2026

The environmental-first principle.

Every report BIE produces is written about the AI deployment, never about the individuals using it. This is not a copy preference. It is a discipline enforced at three layers: in the prompt, in the schema, in the guardrails registry. The deployment is the subject of every primary statement; individuals appear only as evidence in the drilldown layer. We will not bend on this, and the reason is worth explaining.

The temptation, when reading conversation data, is to write about the person on the other end of the screen. They are right there. The transcript shows their words, their frustrations, their confusion, their dependency. It is easy — and it would feel insightful — to produce a report that names individual users and characterizes their behavior. Easy, and wrong. The customer is not the individual user. The customer is the AI deployment. The deliverable is intelligence about how the deployment is functioning. Naming individuals collapses the right kind of intelligence into the wrong kind: a behavioral diagnosis of a person rather than a structural observation about a system.

The wrong version of a report sentence reads: "Maya has a 0.9 dependency score. She is your critical connector." The right version reads: "The deployment depends too heavily on a small number of users for inclusion work. If the top three step back, the deployment has no distributed capacity to absorb it." Same data underneath. Different unit of analysis. Only the second version supports a structural intervention.

The same principle is what keeps clinical end-user language out of BIE reports. We will not say "AI psychosis." We will not say "AI addiction." We will not call users "delusional" or "psychotic" or "addicted to" anything. These are clinical claims that require clinical credentials we do not have, made about individuals we do not know, on the basis of conversational data that does not support that level of inference. We replaced that entire vocabulary with dimensional names: trust calibration drift, dependency drift, behavioral patterns associated with prolonged AI engagement. These are observable, measurable, and bounded to what the data actually supports.
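A minimal sketch of the vocabulary substitution described above. The mapping and function names here are hypothetical illustrations, not BIE's actual implementation; the point is that each forbidden clinical term has a bounded, dimensional counterpart, so a draft can be screened mechanically.

```python
# Hypothetical mapping: forbidden clinical end-user terms to the
# dimensional vocabulary the data actually supports.
CLINICAL_TO_DIMENSIONAL = {
    "AI psychosis": "trust calibration drift",
    "AI addiction": "dependency drift",
    "delusional": "trust calibration drift",
    "addicted": "behavioral patterns associated with prolonged AI engagement",
}

def find_clinical_language(draft: str) -> list[str]:
    """Return any forbidden clinical phrases found in a draft report."""
    lowered = draft.lower()
    return [term for term in CLINICAL_TO_DIMENSIONAL if term.lower() in lowered]
```

A screen like this is deliberately dumb: it matches phrases, not meaning, which is exactly why it sits alongside the prompt-level instruction rather than replacing it.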

All three layers enforce it. Every prompt header tells the model "the deployment is the subject of every primary statement; individuals appear only as evidence." Every AOP carries the no_clinical_end_user_language guardrail, which has both a prompt-level instruction and a runtime inspect step that flags forbidden phrases before output is shown. Every reasoning chain claim has an evidence_summary field with a 400-character cap that forces the model to cite the dimension and the cohort, not the individual.
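The runtime side of those checks can be sketched as follows. This is an illustrative reconstruction, not the actual guardrails registry; the phrase list, dataclass, and function names are assumptions, but the two mechanical checks (forbidden phrases, 400-character evidence cap) come straight from the description above.

```python
from dataclasses import dataclass

# Assumed phrase list for the no_clinical_end_user_language guardrail.
FORBIDDEN_PHRASES = ["ai psychosis", "ai addiction", "delusional", "psychotic"]
EVIDENCE_SUMMARY_CAP = 400  # characters; forces dimension + cohort, not individuals

@dataclass
class GuardrailViolation:
    guardrail: str
    detail: str

def inspect_output(report_text: str, evidence_summary: str) -> list[GuardrailViolation]:
    """Runtime inspect step: flag guardrail violations before output is shown."""
    violations: list[GuardrailViolation] = []
    lowered = report_text.lower()
    for phrase in FORBIDDEN_PHRASES:
        if phrase in lowered:
            violations.append(
                GuardrailViolation("no_clinical_end_user_language", phrase)
            )
    if len(evidence_summary) > EVIDENCE_SUMMARY_CAP:
        violations.append(
            GuardrailViolation(
                "evidence_summary_cap",
                f"{len(evidence_summary)} chars exceeds {EVIDENCE_SUMMARY_CAP}",
            )
        )
    return violations
```

Running the inspect step after generation but before delivery means a violating draft never reaches the customer, regardless of whether the prompt-level instruction held.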

The reason we built it this way: the customer reads the memo aloud during a Monday standup. They forward the prioritized action to their CEO. They make a decision based on what we wrote. If the report's subject is "Maya," the decision is small, narrow, and probably wrong. If the subject is the deployment, the decision is structural and at the right altitude. We are not in the business of producing diagnoses of people. We are in the business of producing intelligence about systems.