It’s the beginning of 2019 and the new buzz-ware from the security solution vendor community is ‘zero trust’. What do we think that means? It must be important because they’re all rushing to tell us how their products and services support this new initiative in security architecture. It’s being presented as the new holy grail, something never before imagined, that will solve all your corporate security problems. Google has apparently implemented the approach in a form named BeyondCorp. This shifts the access control decisions away from the corporate perimeter and repositions them close to the resources being protected.
What? You mean you are still doing that ‘perimeter security’ thing? Didn’t you ever come across the Jericho Forum commandments on deperimeterisation for the enterprise? And now you’re buying into this latest fad which is just the same thing served up in smaller slices. The clue is in the word ‘microperimeters’.
Forrester Research credits one of its research analysts, John Kindervag, as creator of the zero-trust concept in 2010. It is claimed that Zero Trust is a data-centric architecture that puts micro-perimeters around specific data or resources so that more granular access control policy rules can be implemented and enforced. This is the first problem that The Attributer has with this model of utopia. A true ‘data-centric’ security architecture would embed security in the data structures themselves. The version of Zero Trust being promoted by the network vendors is just another version of network-centric security architecture – one where you carve the network up into smaller segments and use them as the means to control access to your data. That is not what The Attributer calls data-centric.
The solution based on micro-perimeters in the network is a fundamentally flawed architectural approach. The job of the network is to provide transport services: protocol data units are moved from one place to another, reassembled in sequence order, with certain performance targets to be met. It is NOT the job of the network to protect application data from theft, corruption, or fabrication. This looks very much like the network-embedded security products and architectural approaches that we threw out decades ago.
IPSec was heralded by Cisco and others in the 1990s as the future for protecting application security. Network-layer encryption cannot do that job, even for confidentiality services, because there is no way for the application to know whether or not the network encryption is turned on. More important still is the fact that applications need to store audit trails, and networks don’t do that – they deliver the packets and throw them away.
Now we are told that the network segmentation gateways (aka next generation firewalls) will open every application packet, inspect it (by whose rules?) and log everything in a ‘data lake’. Hmm, more like a data ocean. Where is the tooling to connect data lake analysis with applications? It seems to The Attributer like an unscalable dream, and an inelegant one too. This is not the right architectural approach to achieve data centricity.
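To see what that inspect-and-log model actually implies, here is a minimal sketch of a segmentation gateway in Python. Everything in it is invented for illustration – the packet structure, the rule, and the in-memory ‘data lake’ are assumptions, not any vendor’s product – but it shows the essential shape: the gateway, not the application, opens the payload, applies its own rules, and logs every decision.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Packet:
    """A toy application packet as seen by the gateway."""
    src_segment: str
    dst_segment: str
    app_payload: bytes

# A rule inspects the application payload and votes allow/deny.
Rule = Callable[[Packet], bool]

def block_card_numbers(pkt: Packet) -> bool:
    # Crude payload inspection: the gateway guesses at application
    # semantics it does not really understand.
    return b"4111" not in pkt.app_payload

# The 'data lake': every packet's verdict is logged, for every flow.
audit_log: List[Tuple[str, str, bool]] = []

def gateway(pkt: Packet, rules: List[Rule]) -> bool:
    verdict = all(rule(pkt) for rule in rules)
    audit_log.append((pkt.src_segment, pkt.dst_segment, verdict))
    return verdict

allowed = gateway(Packet("hr", "finance", b"payroll run 42"),
                  [block_card_numbers])
```

Even this toy makes the scaling question concrete: every packet of every application crosses the `gateway` function and lands in `audit_log`, and a single wrong rule silently breaks an application the gateway knows nothing about.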
Let’s be clear. The only thing that network security architecture should be protecting is the network itself and its ability to meet its service level agreements with users and applications that make use of its transport services.
When we dig a little deeper, we discover that the term ‘zero trust’ refers to ‘zero trust networking’, in which you never trust the network, or anything connected to the network. It takes the principle of ‘trust but verify’ that was popularised by President Reagan in the 1980s in his dealings with the Soviet Union over nuclear disarmament and twists it round into the phrase ‘never trust, always verify’. One of the basic concepts of zero trust is that trust is dependent on technical strength and that somehow this trust is controlled by technical components in the architecture.
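Mechanically, ‘never trust, always verify’ reduces to something simple: every request carries its own credential and is checked on every call, with no decision based on where on the network the request came from. The sketch below illustrates just that mechanic; the key handling and token format are assumptions for illustration only, not a real protocol.

```python
import hmac
import hashlib

# Illustrative shared secret -- in practice this would be properly
# provisioned and rotated, never hard-coded.
SECRET = b"shared-secret-for-illustration-only"

def sign(user: str) -> str:
    """Issue a credential bound to the caller's identity."""
    return hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()

def verify_request(user: str, token: str) -> bool:
    """Re-verify the credential on EVERY request -- no standing trust,
    no reliance on network location or prior checks."""
    return hmac.compare_digest(sign(user), token)

token = sign("alice")
ok = verify_request("alice", token)        # genuine caller passes
spoofed = verify_request("mallory", token) # borrowed token fails
```

The point of the sketch is that the check lives with the resource and runs per request; whether that amounts to ‘zero trust’ is exactly what the column goes on to dispute.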
All of this is nonsense. Without trust, society and business could never operate. Trust is a human concept, not a technology one. We trust different people for different things and the levels of trust are measured in microlevels. This attempt to sell us pre-packaged ‘architecture’ in the form of more technical products shows just how immature the security architecture industry really is. Technology, technology and more technology. If you stack your business full of technology, then you’re bound to solve the problem – eventually.
Let’s look at a suggestion for implementing zero trust. Firstly, it is proposed that zero trust is a strategy not a solution. The Attributer agrees with that idea and believes that moving control into the data itself (not just close to the data) is the correct strategy to pursue. SABSA has been proposing and describing data centric security architecture since 2010.
Then we are told that the tactics for implementing the zero-trust strategy are network segmentation using next generation firewalls and VPN technology to bind the end-points into the logical virtual network with its embedded microperimeters. This is where we part company on the ideas of zero trust networking. SABSA Thinking says that nothing in the network should be trusted to protect application data – real zero trust. It just isn’t what the network is supposed to do. That’s a SABSA concept going back to a publication in 2005.
For data-centric security architecture, SABSA uses cryptographic security embedded in the data itself and a network of trust brokers to provide trusted execution platforms for the data to be processed. The network is only a transport tool, not a protection tool.
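The idea of protection embedded in the data itself can be sketched in a few lines. This is a toy illustration using only an integrity tag (stdlib HMAC, not production cryptography, and with a hard-coded key standing in for what a trust broker would manage): the tag is bound into the data object, so any platform can detect tampering regardless of which networks carried the data.

```python
import hmac
import hashlib
import json

# Stand-in for a key managed by a trust broker -- an assumption
# for illustration, not a key-management design.
KEY = b"data-owner-key"

def seal(record: dict) -> dict:
    """Bind an integrity tag into the data object itself."""
    body = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return {"body": record, "tag": tag}

def verify(sealed: dict) -> bool:
    """Check the data's own embedded protection -- the network
    played no part in this decision."""
    body = json.dumps(sealed["body"], sort_keys=True).encode()
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sealed["tag"])

pkg = seal({"account": "12345", "balance": 100})
intact = verify(pkg)
pkg["body"]["balance"] = 1_000_000  # tampered somewhere in transit
tampered_ok = verify(pkg)
```

Confidentiality would need encryption layered on the same principle; the sketch only shows that the control travels with the data, which is the property no micro-perimeter can provide.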
Just imagine where this solution is heading. It suggests that every application data packet will be opened and inspected by network segmentation gateways, accepted or rejected by a set of rules in the gateway (a next generation firewall), and the results stored in a shared data lake. Is that scalable? What performance will be needed in the gateways? How will the data lake be analysed and with what tooling? How many applications will discover that they don’t work anymore because of a rule error in the gateway? The Attributer can see so many reasons why this approach will fail, but not before the network security vendors have sold a lot of next generation firewalls and a lot of new VPN technology. Is this a sales campaign? You bet it is.
SABSA-educated and certified architects will recognise that first you must model the business before you can start selecting security products. In fairness, Kindervag does talk of being driven by ‘business outcomes’, but without detail of how that is to happen. They will also recognise that a network-centric architecture will never be a substitute for a true data-centric security architecture. We need the vendor community to supply components that really do populate a business-driven security architecture model to solve real business problems. SABSA is the way.
References:
- SABSA: Trust, Security and Risk Management in Cloud Computing; John Sherwood, The SABSA Institute, COSAC 2010.
- Enterprise Security Architecture: A Business-Driven Approach; Sherwood, Clark and Lynas, 2005.
- https://www.brighttalk.com/webcast/10903/344314?autoclick=true&utm_source=brighttalk-promoted&utm_campaign=mysubscriber_weekly_email&utm_medium=email&utm_content=comingup-recommended&utm_term=032019