I’ve spent a lot of my career looking at the friction between how we want to work and how we actually deliver, and I have noticed a recurring issue that has nothing to do with the quality of the applications we build. We are hiring incredibly talented engineers to write high-quality code, yet the security of those applications is frequently undermined by mismanaged infrastructure settings. It is rarely a case of incompetence; it is usually a case of volume.
The latest OWASP Top 10 reflects this reality. Security Misconfiguration has moved from fifth to second place. This shift indicates that our technical environments have reached a level of complexity that has begun to exceed what a person can reasonably manage through manual checks or basic oversight.
I see this as a Complexity Tax.
In earlier stages of my career, we managed a relatively static set of variables. Today, a standard deployment might involve hundreds of parameters across cloud services, container orchestrators, and identity providers. Some argue that this complexity is self-inflicted and that we should build simpler systems. While I agree with the sentiment, the reality of global scale and rapid delivery usually makes that a difficult path to walk retroactively. We have to secure the world we actually live in, not the one we wish we had.
Why are we still seeing data exposed through open storage buckets or overly permissive internal roles? Is it a lack of effort, or have we reached a point where the architectural requirements of our systems have outpaced our traditional methods of governance?
We need to be realistic about where the actual risk sits. For years, the industry focus has been on application security and finding flaws in the logic of the code. While that remains important, the configuration of the environment has become the more frequent point of failure. You can invest heavily in secure coding practices, but if the underlying platform is not hardened and monitored, those efforts are largely negated. Securing the code while ignoring the configuration is like locking the front door but leaving the side of the house missing.
Now, the standard pushback here is that automation is dangerous because it scales mistakes. If an engineer misconfigures one server, it is a problem. If an automated script misconfigures a thousand servers, it is a catastrophe. This is a valid concern, but it misses the point of what modern governance actually looks like. The goal is not to give human error a larger lever. The goal is to move the point of oversight.
When we move security into code and configuration files, we bring it into the light. We can peer review it, version control it, and test it before it ever hits production. A manual change in a console is invisible and ephemeral; a change in a configuration file is an auditable record.
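To make that concrete, here is a minimal sketch of what "testing a configuration before it hits production" can look like. The bucket definition and field names below are purely illustrative, not tied to any specific cloud provider; the point is that a reviewable file plus a small check like this replaces an invisible console click.

```python
import json

# A hypothetical bucket definition as it might live in version control.
# Field names here are illustrative, not any real provider's schema.
BUCKET_CONFIG = json.loads("""
{
  "name": "customer-exports",
  "public_read": false,
  "encryption_at_rest": true,
  "allowed_roles": ["data-pipeline"]
}
""")

def check_bucket(config: dict) -> list[str]:
    """Return a list of policy violations for one bucket definition."""
    violations = []
    if config.get("public_read", False):
        violations.append(f"{config['name']}: public read access is enabled")
    if not config.get("encryption_at_rest", False):
        violations.append(f"{config['name']}: encryption at rest is disabled")
    if "*" in config.get("allowed_roles", []):
        violations.append(f"{config['name']}: wildcard role grant")
    return violations

if __name__ == "__main__":
    problems = check_bucket(BUCKET_CONFIG)
    # A non-empty list here would fail the pipeline before deployment.
    print(problems or "configuration passes policy checks")
```

Because the check runs against a file in the repository, every change to the configuration is peer reviewed, every failure is reproducible, and the history of who relaxed which control is preserved in version control rather than lost in a console session.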
This reality suggests that both manual configuration and manual security are no longer viable strategies for an organisation that wants to scale. Manual configuration is too opaque to be trusted, and manual security is too slow to be effective. Any team still relying on a person to manually verify settings or follow a static hardening guide is essentially accepting a high level of residual risk. If your security posture is dependent on every engineer getting every configuration right every single time, you have built a process that is statistically likely to fail.
The move to the number two spot on the OWASP list is a signal for leadership to rethink governance. It shows that our push for delivery speed is creating a gap in our systemic integrity. We are deploying faster than we can verify.
Security in this environment is less about finding individual bugs and more about the automated, consistent orchestration of the platform. If you cannot define your security standards as code and enforce them through automation, they are not really standards at all.
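One way to express "standards as code" is to declare each standard as a named rule and enforce the whole set mechanically across an inventory. The resource shapes and rule names below are assumptions for illustration; real deployments would feed this from an actual asset inventory or a policy engine.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A security standard expressed as code: a named compliance predicate."""
    name: str
    check: Callable[[dict], bool]  # returns True when the resource complies

# Illustrative standards; resource fields ("type", "public", "tls") are assumed.
STANDARDS = [
    Rule("no-public-buckets",
         lambda r: r.get("type") != "bucket" or not r.get("public", False)),
    Rule("tls-required",
         lambda r: r.get("type") != "endpoint" or r.get("tls", False)),
]

def enforce(resources: list[dict]) -> list[str]:
    """Run every rule against every resource; return violation messages."""
    return [
        f"{res.get('id', '?')} violates {rule.name}"
        for res in resources
        for rule in STANDARDS
        if not rule.check(res)
    ]

if __name__ == "__main__":
    inventory = [
        {"id": "assets-bucket", "type": "bucket", "public": True},
        {"id": "api-endpoint", "type": "endpoint", "tls": True},
    ]
    failures = enforce(inventory)
    for failure in failures:
        print(failure)
    # In a CI pipeline, any failure would block the deployment:
    raise SystemExit(1 if failures else 0)
```

The design point is that the standard lives in one place as data, and enforcement is a loop rather than a person: adding a rule changes the posture of every resource at once, which is exactly the consistency a manual hardening guide cannot deliver.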
Manual oversight is no longer a sign of diligence; it is a systemic vulnerability.