Application Development Blog Posts
When I speak with customers or other security professionals, I often hear security described as a defensive or offensive exercise. Depending on our role in an organization, we do our best either to harden our own infrastructure or to penetrate a target infrastructure. I observe many organizations take this red team / blue team approach to security, yet I wonder how effective it really is at improving security. I am not saying it is a wrong approach; rather, I dare not call it wrong or ineffective. As many of you know, the red team / blue team concept originates from the military. Taking that military origin into account, I suggest that peace-making, or other forms of arbitration, can also be an effective mechanism for improving security.

In essence, I suggest that all security matters are social by nature. Think of how we describe a security incident, breach, or hack: we often use the term 'actors'. These actors can be human or non-human, yet the most important point is that security happens when actors interact. Seen this way, vulnerability coordination is very broad and covers much of what we do in security: it begins when a researcher discovers a flaw and continues through successful patch application and ongoing vulnerability management within an organization.



The point I am hoping to get across is that security can't be done in a silo. It requires a team effort, and communication is key. At the same time, trust plays a vital role for the whole concept to work. To borrow from a product design concept, empathy is necessary for us to understand each other. I may wish software vendors had told me about their security vulnerabilities earlier, yet I also understand the underlying complexity of confirming, narrowing down, and ultimately fixing a vulnerability.



There is an increasing trend for security research organizations to adopt strict disclosure policies in which the disclosure timeline is non-negotiable. From a consumer perspective, this can be good news, because it gives me better visibility into the security vulnerabilities I am concerned about. There is always a risk, however, that security patches can't be released in time, creating more zero-day exposure. I have also heard the argument that the vulnerabilities are there already, so disclosing them on a pre-defined timeline does not bring on more risk, given that bad-acting hackers are well aware of these zero-days. Vendors who resist strict disclosure policies are often portrayed as adopting an ostrich policy towards their security problems. At the same time, I have observed security appearing more frequently in product updates. I can rationalize such effort as the result of diligent security engineering; however, could it also be that time pressure induces second-rate engineering that merely patches over security flaws?

I am not here to judge who is right or wrong. Yet I propose that our industry as a whole needs more collaboration and discussion to find the middle ground. There is no single right answer, but I see too little engagement among stakeholders to identify better solutions for improving security across our industry.