Engineering · Security Tooling · Compliance · Software Architecture · Testing · DevSecOps

The Engineering Principles That Keep Security Code Honest

K. K. Mookhey · April 18, 2026 · 8 min read

Most engineering principles documents read like laminated office posters. They are full of nice words, almost never tied to real failures, and easy to ignore when the sprint gets messy.

The Engineering Principles document in Shasta is different. It does not try to sound timeless or neutral. It reads like a set of scars translated into rules. That makes it far more useful.

What makes the document strong is not just that it is opinionated. It is that the opinions are anchored in failure modes that matter in security and compliance tooling: stale documentation, false-clean results, misleading counts, weak abstractions, and engineering rituals that look tidy until they break under real customer pressure.

This is the kind of engineering doctrine more teams should write down.

Good Principles Start With Real Failure Modes

The best part of the document is its posture. It does not claim to define software engineering in the abstract. It defines what this team learned after specific things broke or nearly broke.

That framing matters. Principles become useful when they do at least one of these:

  • prevent a class of recurring failure
  • force the team to encode quality into the system
  • reduce ambiguity when engineers make tradeoffs under time pressure

That is why lines like "every number in your docs is a test waiting to be written" land so well. It is not a slogan. It is a practical response to documentation drift.

For security products, this is especially important. Customers make decisions based on what your tooling claims to cover. If the README says one thing and the scanner actually does another, that is not a cosmetic problem. It is a trust problem.
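One way to act on "every number in your docs is a test waiting to be written" is to parse the claim out of the README and assert it against the code. A minimal sketch, with a hypothetical `CHECKS` registry and an invented README sentence (Shasta's actual wording and registry are assumptions here):

```python
import re

# Stand-in for a real check registry; 42 is an illustrative count.
CHECKS = {f"check_{i}": object() for i in range(42)}

def readme_check_count(text: str) -> int:
    """Extract the claimed check count from a README sentence."""
    match = re.search(r"ships (\d+) built-in checks", text)
    assert match, "README no longer states a check count"
    return int(match.group(1))

def test_readme_count_matches_registry():
    claimed = readme_check_count("Shasta ships 42 built-in checks.")
    assert claimed == len(CHECKS), (
        f"README claims {claimed} checks but the registry has {len(CHECKS)}; "
        "update the README or the registry."
    )
```

The point is less the regex than the coupling: once this test exists, the documented number can no longer drift silently away from the code.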

Diagram showing how documentation claims drift away from code unless tests continuously verify them.

Discipline Is What Prevents Silent Rot

The first cluster of principles focuses on discipline, and it gets to the heart of a problem most mature teams eventually face: not dramatic outages, but quiet decay.

The document argues that documentation claims should be testable, stub-shaped functions should fail CI, and empty results must never be conflated with execution errors. Those are not stylistic preferences. They are controls against false confidence.

That distinction between "no findings" and "could not assess" deserves emphasis. In ordinary product code, ambiguity is annoying. In compliance tooling, ambiguity can be dangerous. A customer reading a green report will often assume coverage and correctness. If your scanner quietly swallows permission errors and returns a clean result, you did not build a harmless bug. You built a system that can mislead auditors and operators at exactly the wrong moment.

This is why good engineering discipline in security systems is less about neatness and more about honesty. The system should say what it knows, what it does not know, and why.
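The "no findings" versus "could not assess" distinction can be made structurally impossible to blur by encoding it in the result type. A sketch under assumed names (`ScanStatus`, `scan_bucket`, and the client interface are all hypothetical, not Shasta's actual types):

```python
from dataclasses import dataclass, field
from enum import Enum

class ScanStatus(Enum):
    CLEAN = "clean"                 # assessed, nothing found
    FINDINGS = "findings"           # assessed, issues found
    NOT_ASSESSED = "not_assessed"   # could not run: permissions, timeout, etc.

@dataclass
class ScanResult:
    status: ScanStatus
    findings: list = field(default_factory=list)
    error: str = ""

def scan_bucket(client, bucket: str) -> ScanResult:
    try:
        acl = client.get_acl(bucket)
    except PermissionError as exc:
        # An access failure is NOT a clean result; surface it as its own state.
        return ScanResult(ScanStatus.NOT_ASSESSED, error=str(exc))
    findings = [grant for grant in ("public-read",) if grant in acl]
    status = ScanStatus.FINDINGS if findings else ScanStatus.CLEAN
    return ScanResult(status, findings)
```

Because the report layer has to pattern-match on three states, a permission error can never render as a green row.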

Structure Decides Whether a Codebase Scales or Fossilizes

The middle section of the principles document is really about leverage.

The strongest ideas here are:

  • build the hardest version first so the abstractions are stress-tested
  • prefer cross-cutting walkers over duplicating service-specific checks
  • encode repeatable control logic as declarative tables
  • put framework mappings on the data model, not inside prose

This is exactly the kind of thinking that separates a scalable compliance platform from a pile of check functions.

Teams often underestimate how fast one-off checks become institutional debt. The first three copies look harmless. By the tenth, every feature request becomes a search-and-replace campaign across the codebase. What the Shasta principles argue, correctly, is that recurring security patterns should be modeled once at the right level.

That pattern shows up everywhere in cloud security:

  • endpoint coverage across many services
  • log and diagnostics expectations across resource types
  • framework mappings across findings
  • lifecycle and deprecation checks across vendors

Once you see the repetition as shape rather than syntax, the right abstractions become obvious.
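The walker-plus-table idea can be sketched in a few lines. Everything here is illustrative, not Shasta's actual schema: resource types, attributes, and control IDs are invented to show the shape:

```python
# One declarative table replaces N hand-written, service-specific checks.
LOGGING_CHECKS = [
    # (resource_type, required_attribute, control_id) — hypothetical values
    ("s3_bucket", "access_logging_enabled", "LOG-01"),
    ("lb",        "access_logging_enabled", "LOG-02"),
    ("vpc",       "flow_logs_enabled",      "LOG-03"),
]

def walk_logging(inventory: dict) -> list:
    """Apply every row of the table to every matching resource."""
    findings = []
    for rtype, attr, control in LOGGING_CHECKS:
        for res in inventory.get(rtype, []):
            if not res.get(attr, False):
                findings.append({"resource": res["id"], "control": control})
    return findings
```

Adding coverage for a new resource type becomes a one-row change to the table instead of a new function to write, test, and keep in sync.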

Deterministic Detection Is a Non-Negotiable Design Choice

One of the most important principles in the document is also the one many teams will be tempted to violate: the detection layer must be deterministic, even if the user experience layer is allowed to be smart.

This is the right line to draw.

In an era where every engineering team is under pressure to add AI everywhere, it is easy to forget that reproducibility matters more in security tooling than novelty does. If the same infrastructure can produce different findings on different runs because an LLM reasoned differently, you have built a demo, not a dependable system.

The document does not reject AI. It places it where it belongs. Use it to explain findings, summarize risk, or help a human navigate a large body of evidence. Do not use it as the source of truth for whether the finding exists in the first place.

That design choice is more than conservatism. It is a recognition that auditors, customers, and incident responders need traceable evidence. A deterministic detection engine gives them that. An opaque inference layer does not.
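The boundary can be expressed as an interface rule: detection is a pure function of its inputs, and the model only narrates a verdict that is already fixed. A sketch with assumed interfaces (the `llm.summarize` call is a placeholder, not a real API):

```python
def detect_public_bucket(acl: list) -> bool:
    """Deterministic: same ACL in, same verdict out, every run."""
    return "public-read" in acl or "public-write" in acl

def explain(finding: dict, llm=None) -> str:
    """Presentation layer: may use an LLM, but can never change the verdict."""
    if llm is not None:
        return llm.summarize(finding)  # assumed interface; purely cosmetic
    return f"Bucket {finding['resource']} allows {finding['grant']} access."
```

Nothing returned by `explain` feeds back into `detect_public_bucket`, so a differently-worded explanation can never become a differently-existing finding.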

Diagram showing the boundary between deterministic detection logic and AI-assisted explanation.

Severity Without Context Is Just Noise

Another principle worth carrying into any security product is the warning against checks that only return counts.

A count feels useful because it compresses reality into something easy to scan. But it often strips away the thing that matters most: what is actually dangerous right now.

That is why the document's position on severity is strong. Do not blindly inherit a vendor's numeric priority. Factor severity by operational impact and finding type. In real security work, an event the vendor ranks as low priority may, if it is tied to credential theft, matter more than a top-ranked reconnaissance event.

This is a broader product lesson: dashboards are often optimized for neatness when they should be optimized for decision quality.

The right question is not "how many findings do we have?" The right question is "what do I need to act on first, and why?"
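One way to factor severity this way is to treat the vendor's number as only one input and weight it by finding type. The weights and the assumption that a lower vendor number means higher priority are both illustrative:

```python
# Hedged sketch: combine vendor priority with operational impact
# instead of inheriting the number blindly. Weights are invented.
IMPACT_WEIGHT = {
    "credential_theft": 3.0,
    "privilege_escalation": 2.5,
    "reconnaissance": 0.5,
}

def effective_severity(vendor_priority: int, finding_type: str) -> float:
    """Assumes lower vendor_priority = more urgent, so invert it."""
    base = 1.0 / max(vendor_priority, 1)
    return base * IMPACT_WEIGHT.get(finding_type, 1.0)
```

Under this scheme a vendor-priority-3 credential-theft event (1/3 × 3.0 = 1.0) outranks a vendor-priority-1 reconnaissance event (1/1 × 0.5 = 0.5), which is exactly the reordering the document argues for.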

Process Is Engineering Memory, Not Ceremony

The final section of the document shifts from code to team behavior, and it gets another thing right that many organizations miss: process is valuable when it preserves memory.

The best examples are:

  • failure messages that tell the reader exactly what to change
  • commit messages written as future incident reports
  • historical narratives that remain intact instead of being retroactively edited
  • closed issues treated as institutional memory rather than dead paperwork

This is how strong teams reduce re-learning costs.
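"Failure messages that tell the reader exactly what to change" is easy to illustrate. A sketch with hypothetical names (`FRAMEWORK_MAPPINGS`, `mappings.py`, and `make verify` are invented for the example):

```python
def require_mapping(check_id: str, mappings: dict) -> None:
    """Validate that a check has a framework mapping, failing loudly if not."""
    if check_id not in mappings:
        # Written for the engineer who hits this at 2 a.m.: what broke,
        # where to fix it, and how to confirm the fix.
        raise ValueError(
            f"{check_id} has no framework mapping. Add a row to "
            "FRAMEWORK_MAPPINGS in mappings.py, then re-run `make verify`."
        )
```

Compare that to `ValueError: missing mapping`, which forces the reader to rediscover the fix from scratch every time.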

A mature codebase is not maintained only by tests and abstractions. It is also maintained by its written history. The next engineer should be able to infer not just what changed, but what went wrong, what tradeoff was chosen, and what assumptions still hold.

Security teams benefit even more from this because audits, incidents, customer escalations, and compliance reviews all depend on reconstructing intent after the fact.

Diagram showing how tests, failure messages, commit messages, and issues stack into durable engineering memory.

The Deeper Thesis: Encode the Rule or Expect Drift

Underneath the numbered principles is a simpler philosophy: discipline does not scale through memory, and good abstractions only matter when repetition is real.

That is the real core of the piece.

If a behavior matters, encode it:

  • as a test
  • as a data model field
  • as a walker
  • as a declarative table
  • as a failure state the system cannot misreport

If you leave it as tribal knowledge, it will drift.
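The "stub-shaped functions should fail CI" rule mentioned earlier is a good example of encoding rather than remembering. One possible (assumed, not Shasta's actual) implementation walks the AST and flags functions whose entire body is `pass` or `raise NotImplementedError`:

```python
import ast

def find_stubs(source: str) -> list:
    """Return the names of functions that are stubs shipped as real code."""
    stubs = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef) and len(node.body) == 1:
            body = node.body[0]
            if isinstance(body, ast.Pass):
                stubs.append(node.name)
            elif isinstance(body, ast.Raise):
                exc = body.exc
                if isinstance(exc, ast.Call):  # raise NotImplementedError(...)
                    exc = exc.func
                if getattr(exc, "id", "") == "NotImplementedError":
                    stubs.append(node.name)
    return stubs
```

Wired into CI as `assert not find_stubs(src)`, the rule stops being a convention someone has to remember during review and becomes a gate nobody can forget.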

This is why the document feels more useful than generic engineering advice. It keeps pulling ideas out of team folklore and forcing them into systems that can be verified.

Why This Matters Beyond Shasta

Even if you never touch the Shasta codebase, the principles are broadly relevant to anyone building:

  • cloud security products
  • internal compliance tooling
  • control validation frameworks
  • infrastructure scanning platforms
  • agent-assisted developer tools

The recurring lesson is straightforward: engineering quality is not mostly about cleverness. It is about reducing the number of ways your system can quietly lie.

That may mean tests for doc claims. It may mean refusing to treat errors as clean results. It may mean deleting unused code instead of preserving a polite fiction. It may mean drawing a hard boundary between deterministic detection and AI-assisted explanation.

None of that sounds glamorous. All of it compounds.

Closing Thought

The most credible engineering principles are the ones written after the team has already paid tuition in production.

That is what makes this document worth reading. It does not present ideals from nowhere. It describes a worldview shaped by drift, ambiguity, retrofits, misleading outputs, and the constant pressure to choose speed over structure.

The takeaway is not that every team should copy these twenty rules exactly. It is that every serious engineering team should be able to explain, in equally concrete terms, what their codebase has taught them not to do again.

If your team cannot do that yet, write the document. If you already have the scars, turn them into rules before someone else has to learn them the expensive way.
