Overprotection Breeds Greater Vulnerability | Generated by AI
Question: How does excessive emphasis on stability and security in engineering, such as in banks that fear upgrading packages and libraries, ultimately lead to outdated systems and more cybersecurity issues? Is this related to Morita Shoma’s ideas, like people who are overly sensitive about cleanliness ending up becoming dirty?
Answer:
Yes, the phenomenon you describe is a well-known paradox in software engineering, operations, and risk management — especially in highly regulated environments like banks, insurance companies, government systems, and critical infrastructure.
The core pattern is:
- Intended goal: Maximize stability and security
- Extreme strategy: Avoid almost all change (rarely upgrade libraries, frameworks, OS packages, middleware, etc.)
- Actual long-term outcome: The system becomes more fragile and more vulnerable over time
This happens through several mechanisms:
1. Known vulnerabilities accumulate
   - CVEs are published every day.
   - If you freeze dependencies at a 3–5-year-old version, you inherit all publicly known exploits that were fixed in later releases.
2. The attack surface keeps growing
   - New attack classes keep appearing (Log4Shell, Spring4Shell, MOVEit, Text4Shell, etc.).
   - Old versions usually do not receive patches for brand-new vulnerability types.
3. Supply-chain and transitive dependency risk increases
   - Even if your direct dependencies look “stable”, their transitive dependencies rot.
   - After 4–7 years, almost no library is realistically maintainable without upgrades.
4. Tooling and observability fall behind
   - Modern security tools (software composition analysis, runtime application self-protection, exploit detection) often no longer support 5–10-year-old stacks, or give poor results on them.
5. Organizational knowledge decays
   - The people who originally built and understood the old stack leave.
   - New engineers treat it as a scary black box → it becomes even riskier to touch.
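The first mechanism, inherited CVEs, is easy to make concrete. The sketch below uses a made-up advisory feed (the CVE ids and version numbers are illustration data, not real advisories): a pinned version inherits every vulnerability whose fix landed in a later release.

```python
# Sketch: a frozen dependency inherits every CVE fixed after its pin.
# The advisory list below is hypothetical illustration data.

def parse_version(v: str) -> tuple:
    """Turn '2.14.1' into (2, 14, 1) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

def inherited_cves(pinned: str, advisories: list) -> list:
    """Return CVE ids whose fix shipped in a release newer than the pin."""
    pin = parse_version(pinned)
    return [a["cve"] for a in advisories if parse_version(a["fixed_in"]) > pin]

# Hypothetical advisory feed for a library frozen years ago at 2.3.0.
advisories = [
    {"cve": "CVE-A", "fixed_in": "2.4.0"},
    {"cve": "CVE-B", "fixed_in": "2.10.2"},
    {"cve": "CVE-C", "fixed_in": "3.0.0"},
    {"cve": "CVE-D", "fixed_in": "2.2.5"},  # fixed before the pin, not inherited
]

print(inherited_cves("2.3.0", advisories))  # ['CVE-A', 'CVE-B', 'CVE-C']
```

Real tools (e.g. `pip-audit`, `npm audit`, OWASP Dependency-Check) do essentially this comparison at scale against live vulnerability databases.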
This leads to the ironic outcome:
The organizations that scream loudest about “stability and security” often end up running some of the least secure and least stable long-term environments.
This is very close in spirit to the ideas expressed by Morita Shoma (森田正馬), the founder of Morita therapy (森田療法), even though he was talking about psychology rather than software:
- People who are excessively afraid of dirt / impurity / anxiety → become hyper-focused on avoiding any contact with “danger”
- This obsession creates restriction of life, avoidance behavior, and mental rigidity
- Paradoxically, they often end up more distressed, more dysfunctional, and in a worse psychological state than people who accept a normal amount of dirt/risk
Key parallel ideas from Morita therapy (paraphrased):
- The more you try to eliminate anxiety, the stronger it becomes.
- Trying too hard to be clean / perfect / safe eventually makes you dirtier / more imperfect / less safe.
- Effort (努力, doryoku) that is too self-focused and fear-driven produces the opposite result.
So in both cases we see the same structure:
Over-protection → constriction → loss of adaptive capacity → worse outcome than moderate risk acceptance
In software this is sometimes called:
- “Stability tax” — paying a huge price for the illusion of stability
- “Fear-driven dependency management”
- “Security debt accumulates faster than technical debt when you freeze everything”
Healthy middle path in both psychology (Morita) and engineering:
- Accept a realistic amount of uncertainty / dirt / change
- Move deliberately but continuously (small, frequent, reversible upgrades)
- Build systems that can tolerate change rather than systems that forbid it
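The "small, frequent, reversible" principle can be sketched as an upgrade policy. The thresholds below are an assumption for illustration (they are not a standard), but the shape matches common practice: patch bumps flow through automatically, minor bumps get a human look, majors get planned work.

```python
# Sketch of a semver-based upgrade policy: classify each available bump
# and route it accordingly. The routing choices are illustrative assumptions.

def classify(current: str, candidate: str) -> str:
    """Classify the bump from `current` to `candidate` by semver distance."""
    cur = [int(x) for x in current.split(".")]
    cand = [int(x) for x in candidate.split(".")]
    if cand[0] > cur[0]:
        return "major"
    if cand[1] > cur[1]:
        return "minor"
    return "patch"

def decide(current: str, candidate: str) -> str:
    """Patch bumps auto-merge, minors get a review, majors get a spike."""
    return {
        "patch": "auto-merge after CI",
        "minor": "open PR for review",
        "major": "schedule a migration spike",
    }[classify(current, candidate)]

print(decide("2.3.1", "2.3.4"))  # auto-merge after CI
print(decide("2.3.1", "2.7.0"))  # open PR for review
print(decide("2.3.1", "3.0.0"))  # schedule a migration spike
```

This is roughly what bots like Dependabot or Renovate automate: a steady stream of small, individually reviewable, individually revertible changes instead of a terrifying once-a-decade big-bang upgrade.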
That is why many security-conscious organizations today (Cloudflare, Google, Stripe, GitHub, etc.) actually upgrade very aggressively — because they understand that stagnation is the bigger risk.
References:
- None (conceptual reasoning based on widely discussed patterns in software engineering and Morita therapy principles)