Risk Tolerance: Good to have – Hard to do
I’ve shared some of my experiences working as a data security officer for the British government. In this piece, I want to highlight my experience of getting to grips with risk tolerance. By risk tolerance, I mean reaching a broad agreement on how far staff can go in risking corporate data assets before seeking high-level approval. Again, this is very much my personal view, and I’ll confine this piece to my personal experiences and perceptions.
I said before that governments are generally risk averse when it comes to security matters. Bad headlines mean a lot of remedial work for civil servants, but ultimately it will be the elected lawmakers in charge of government departments who must answer for mistakes, both publicly in the bear-pit of the Westminster Parliament and, every five years or so, at the ballot box. That does not mean, of course, that all civil servants feel risk averse; I have come across numerous senior staff members with progressive outlooks on how data should be used.
But such attitudes are the exception. Another growing factor that pushes towards a more conservative approach to risk is the increase in legislation protecting data on (living) individuals. Think of HIPAA, but applied in Britain on a scale that extends across public and private services, not just medical ones. As well as not wanting to risk red faces, civil servants from the late 1990s onwards have also had to consider seriously how the loss or misuse of data might end in an expensive (and embarrassing) lawsuit.
All of these factors – unconsciously or not – impact a government department’s risk tolerance.
Early in my career, the thought of drafting a statement setting down a government department’s view of risk tolerance (and then keeping that statement under regular review) would have been outlandish to the culture. When I began nearly thirty-five years ago, the civil service was much less accountable for its actions. This was partly due to a residual respect for public institutions, but mainly, I believe, due to a lack of information in the public sphere and – perhaps more importantly – the more limited means the public had of extracting that information.
In those days, the last resort for members of the public seeking answers (though perhaps also the first) was to write to an elected lawmaker. This was effective, since any question raised by a lawmaker got (and still gets) priority treatment. But there was little or no information to ‘mine’ as a public resource, and only the most persistent and skilled correspondents could avoid being turned aside by the limitations of a paper-based bureaucracy.
A paper basis for everything was also naturally helpful to security. When I started my first government security job in the mid-eighties, my office was actually called the Physical and Document section – there was not much in the way of data to worry us, and what there was could be held inside secure perimeters with little risk of interference or interception. The few staff members who did use IT had a separate grading system and higher pay, acknowledging what were usually only typing and inputting skills, not necessarily any knowledge of computing.
Papers would be stored in special containers and safely locked away. Any copies had to be accounted for (even photocopying was done by a specialist in those days) and careful records had to be kept whenever the most sensitive documents were moved between offices.
The transition to virtual data undermined these tried and trusted systems of physical security and access management. Not only did data move invisibly, it could no longer be confined to data centers or even to physical storage media. All of this led to security challenges that could only be managed by assessing and accepting some risk. One of the first realizations in my transition from paper to data security officer was that these risks should fall to the people who owned the data – they could no longer rely solely upon the locks and walls put in place for them by security experts, and they would need to accept greater responsibility for the management of their information assets.
If you’ve never worked in government, this picture of a transition to a risk-managed approach to security should not be strange. Government security has a sort of mystique about it, yet most of its data is no more or less sensitive than many records held by private sector organizations such as medical offices, banks and schools. But like all such organizations, governments have to update their control of the information they hold to stay in sync with new threats to electronic data and systems. Apart from some national security assets, the tools that enable them to do this do not fundamentally differ from the commercial off-the-shelf solutions available to private sector services.
Establishing a secure system, or at least a system that lends itself to secure management, is however difficult without some understanding of what risks can be taken – in other words, the organization’s risk tolerance. Ultimately, these decisions need to be endorsed by the highest echelon of an organization; otherwise more effort – and thus expenditure – than necessary may go into guarding a data asset that does not really justify the level of security it is given.
It could also mean that too much security is applied to some data, making it difficult to access when it is needed and thus creating more problems for the organization. For example, it could be seen as administrative malpractice to lock information away so tightly that no one – including those with legal rights to see it – can get to it.
For government security, the problem with risk tolerance is, once again, the traditional silo-shaped offices, each created years ago to deal with a specific portfolio. The shift to electronic records has eroded the need for a silo-based approach to government and, as I have said, governments have started to recognize this by creating a one-stop shop for their public customers, where it is not necessary to first identify the government office you are dealing with in order to procure your service.
However, the government offices behind this virtual counter each have their own levels of management and control, and thus inevitably, they will have differing views of what risks might be tolerated with different types of data.
Older institutions tackle reform at a slower, more cautious tempo than an aggressive private sector institution that must change to stay competitive. So progress towards single-stream ownership of government records is likely to be slow and, in the meantime, multiple agencies can expect difficulty in agreeing a risk profile for their data assets.
Why does this matter? For now, let’s leave aside those records of individuals: there is going to be little appetite for taking risks with these, since they are increasingly protected by legislation – and of course, some individuals might not like having one government agency holding all of their details.
Let’s instead concentrate on records of how things should be done (policy) and how they are done (programs and projects). It might be that one department is very liberal about sharing the development of policies with the public and (contractual issues aside) about openly reporting progress on its targets. That department might adopt a higher risk tolerance for information being accidentally leaked because, on the whole, it is seeking a reputation for openness that will not be harmed by the occasional unguarded slip.
Another department might feel its thinking on policy would be inhibited by accidental exposure. For instance, it might be dealing with sensitive subjects that could easily be misconstrued and turned against it by the media. Should these two departments need to work with each other, including sharing information systems, the mismatch could create difficulties in effective co-operation that senior staff and lawmakers would then have to reconcile.
This is just an example of how the broad corporate call for tolerating risks can be broken on the rocks of departmental silos that have differing attitudes to risk, based upon their own reflexes, experience and yes – history. This differing attitude is even apparent at an individual level (see my piece – published elsewhere – on the Psychology of Risk Taking for more about that). It is also impossible to be very confident that any statement of risk tolerance will survive contact with reality. It is a feature of modern government that decisions made at a low level, the costs of which cannot always be realistically foreseen, can have very far reaching consequences.
In 2007, two CDs containing information on 25 million British residents were lost. The scale of the loss caused political fallout and, as a consequence, a senior civil servant resigned. Thorough – and costly – public reviews of government data security followed. In addition, expensive countermeasures, including a centrally organized education and awareness drive, were instigated. The results led to some turmoil and touched off further reviews.
Though this incident certainly earned a place in the security officer’s blue book of justifications for security procedures, it seemed that no one in government had accurately foreseen the consequences of losing mass storage media containing public records. And the decision not to strip the individuals’ accompanying bank details from the data had been taken on the grounds that it would be too expensive to do so. The incident left me believing that security in government was based on reflex.
To summarize, risk tolerance is a difficult concept to grasp accurately. It can easily become detached from reality, serving no purpose when an incident occurs that undermines the basis on which it was agreed (and thus calling into question the judgment of those who set it). It cannot assess the consequences of every possible permutation of risk, and it is not a ‘get out of jail free’ card for employees and managers. Risk tolerance requires regular review, particularly in the light of events outside the organization.
So it needs a champion with an ongoing view of what is happening outside the organization – but that champion must then compete with other pressing corporate needs. Finally, risk tolerance is only effective if it originates at the top of the corporate ladder, since the lower functions of a big organization tend to have differing attitudes to risk (even when data is common to different functions within the same corporation). This can be difficult to manage effectively: the attention span of corporate boards – especially in regard to security matters – can be short.
In spite of these difficulties, I believe that a well-formulated and regularly revisited statement of risk tolerance is an extremely handy tool and a best buddy for security officers. It firmly places the responsibility for decision making about broad risks at the top of the organization. Ultimately, that is where it belongs, since staff in large organizations should not be left in any doubt about how they may handle the information they administer every day.
It is the security manager’s duty to keep staff aware of security risks, and it helps to base this around a clear statement of dos and don’ts that can quickly be absorbed by everyone responsible for handling the organization’s data. But be ready to ensure that it is updated with real-world examples and that your board keeps it under regular review. That will be a challenge in and of itself.