[liberationtech] Schneier: Focus on training obscures the failures of security design

Carol Waters wish117 at gmail.com
Wed Mar 27 16:45:45 PDT 2013


At the risk of igniting an inbox-exploding smackdown thread, I think the
following piece by Schneier
<http://www.darkreading.com/blog/240151108/on-security-awareness-training.html>
is definitely worth a read and thoughtful discussion, particularly from the
POV of both trainers and developers.  It was posted last week, but I don't
believe it made it to Libtech -- apologies if it did and I simply missed
it.  (It's also kicked off a pretty healthy thread of comments on Slashdot
<http://it.slashdot.org/story/13/03/20/015241/schneier-security-awareness-training-a-waste-of-time#comments>.)

On Security Awareness Training

*The focus on training obscures the failures of security design*
By *Bruce Schneier*
*Dark Reading*

Should companies spend money on security awareness training for their
employees? It’s a contentious topic, with respected experts on
both <http://www.csoonline.com/article/711412/why-you-shouldn-t-train-employees-for-security-awareness>
sides <http://searchsecurity.techtarget.com/news/2240162630/Data-supports-need-for-awareness-training-despite-naysayers>
of
the debate. I personally believe that training users in security is
generally a waste of time and that the money can be spent better elsewhere.
Moreover, I believe that our industry’s focus on training serves to obscure
greater failings in security design.

In order to understand my argument, it’s useful to look at training’s
successes and failures. One area where it doesn’t work very well is health.
We are forever trying to train people to have healthier lifestyles: eat
better, exercise more, whatever. And people are forever ignoring the
lessons. One basic reason is psychological: We just aren’t very good at
trading off immediate gratification for long-term benefit. A healthier you
is an abstract eventuality; sitting in front of the television all afternoon
with a McDonald’s Super Monster Meal sounds really good *right now*.

Similarly, computer security is an abstract benefit that gets in the way of
enjoying the Internet. Good practices might protect me from a theoretical
attack at some time in the future, but they’re a bother right now, and I
have more fun things to think about. This is the same trick Facebook uses
to get people to give away their privacy. No one reads through new privacy
policies; it’s much easier to just click “OK” and start chatting with your
friends. In short: Security is never salient.

Another reason health training works poorly is that it’s hard to link
behaviors with benefits. We can train anyone — even laboratory rats — with
a simple reward mechanism: Push the button, get a food pellet. But with
health, the connection is more abstract. If you’re unhealthy, then what
caused it? It might have been something you did or didn’t do years ago. It
might have been one of the dozen things you have been doing and not doing
for months. Or it might have been the genes you were born with. Computer
security is a lot like this, too.

Training laypeople in pharmacology also isn’t very effective. We expect
people to make all sorts of medical decisions at the drugstore, and they’re
not very good at it. Turns out that it’s hard to teach expertise. We can’t
expect every mother to have the knowledge of a doctor, pharmacist, or RN,
and we certainly can’t expect her to become an expert when most of the
advice she’s exposed to comes from manufacturers’ advertising. In computer
security, too, a lot of advice comes from companies with products and
services to sell.

One area of health that *is* a training success is HIV prevention. HIV may
be very complicated, but the rules for preventing it are pretty simple. And
aside from certain sub-Saharan countries, we have taught people a new model
of their health and have dramatically changed their behavior. This is
important: Most lay medical expertise stems from folk models of health.
Similarly, people have folk models of computer
security <http://prisms.cs.umass.edu/cs660sp11/papers/rwash-homesec-soups10-final.pdf>
(PDF).
Maybe they’re right, and maybe they’re wrong, but they’re how people
organize their thinking. This points to a possible way that computer
security training can succeed. We should stop trying to teach expertise,
pick a few simple metaphors of security, and train people to make decisions
using those metaphors.

On the other hand, we still have trouble teaching people to wash their
hands — even though it’s easy, fairly effective, and simple to explain.
Notice the difference, though. The risks of catching HIV are huge, and the
cause of the security failure is obvious. The risks of not washing your
hands are low, and it’s not easy to tie the resultant disease to a
particular not-washing decision. Computer security is more like hand
washing than HIV.

Another area where training works is driving. We trained, either through
formal courses or one-on-one tutoring, and passed a government test to be
allowed to drive a car. One reason that works is that driving is a
near-term, really cool, obtainable goal. Another reason is that even though the
technology of driving has changed dramatically over the past century, that
complexity has been largely hidden behind a fairly static interface. You
might have learned to drive 30 years ago, but that knowledge is still
relevant today.

On the other hand, password advice from 10 years ago isn’t relevant
today <http://web.cheswick.com/ches/talks/rethink.pdf> (PDF).
Can I bank from my browser? Are PDFs safe? Are untrusted networks OK? Is
JavaScript good or bad? Are my photos more secure in the cloud or on my own
hard drive? The “interface” we use to interact with computers and the
Internet changes all the time, along with best practices for computer
security. This makes training a lot harder.

Food safety is my final example. We have a bunch of simple rules — cooking
temperatures for meat, expiration dates on refrigerated goods, the
three-second rule for food being dropped on the floor — that are mostly
right, but often ignored. If we can’t get people to follow these rules,
then what hope do we have for computer security training?

To those who think that training users in security is a good idea, I want
to ask: “Have you ever met an actual user?” They’re not experts, and we
can’t expect them to become experts. The threats change constantly, the
likelihood of failure is low, and there is enough complexity that it’s hard
for people to understand how to connect their behaviors to eventual
outcomes. So they turn to folk remedies that, while simple, don’t really
address the threats.

Even if we could invent an effective computer security training program,
there’s one last problem. HIV prevention training works because affecting
what the average person does is valuable. Even if only half of the
population practices safe sex, those actions dramatically reduce the spread
of HIV. But computer security is often only as strong as the weakest link.
If four-fifths of company employees learn to choose better passwords, or
not to click on dodgy links, one-fifth still get it wrong and the bad guys
still get in. As long as we build systems that are vulnerable to the worst
case, raising the average case won’t make them more secure.
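[Moderator's note: the weakest-link arithmetic above can be sketched numerically. This is my own illustration, not from the essay; `breach_probability` is a hypothetical helper, and the click rates are made-up numbers assuming each employee decides independently.]

```python
# Illustration (not from the essay): probability that at least one
# employee falls for a phishing lure, assuming independent decisions.
def breach_probability(click_rate: float, employees: int) -> float:
    """P(at least one click) = 1 - (1 - click_rate)^employees."""
    return 1 - (1 - click_rate) ** employees

# Suppose training cuts the per-employee click rate from 25% to 5% ...
untrained = breach_probability(0.25, employees=100)
trained = breach_probability(0.05, employees=100)

# ... yet an attacker who needs only one click still wins almost surely.
print(f"untrained: {untrained:.4f}")  # ~1.0000
print(f"trained:   {trained:.4f}")    # ~0.9941
```

Raising the average behavior barely moves the worst-case outcome, which is exactly the essay's point.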

The whole concept of security awareness training demonstrates how the
computer industry has failed. We should be designing systems that won’t let
users choose lousy passwords and don’t care what links a user clicks on. We
should be designing systems that conform to their folk beliefs of security,
rather than forcing them to learn new ones. Microsoft has a great rule
about system messages that require the user to make a decision. They should
be NEAT <http://blogs.msdn.com/b/sdl/archive/2011/05/04/adding-usable-security-to-the-sdl.aspx>:
necessary, explained, actionable, and tested. That’s how we should be
designing security interfaces. And we should be spending money on security
training for developers <http://www.cigital.com/justice-league-blog/2013/01/15/does-software-security-training-make-economic-sense/>.
These are people who can be taught expertise in a fast-changing
environment, and this is a situation where raising the average behavior
increases the security of the overall system.
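[Moderator's note: a toy sketch of that design stance, my own illustration rather than anything from the essay. `is_acceptable` and the tiny `COMMON_PASSWORDS` sample are hypothetical; a real system would check candidates against large breach corpora rather than a hand-picked list.]

```python
# Minimal sketch (illustrative only): a registration flow that refuses
# lousy passwords outright instead of training users to avoid them.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}  # tiny sample

def is_acceptable(password: str) -> bool:
    """Reject short, known-common, or single-character-class passwords."""
    if len(password) < 12:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    if password.islower() or password.isdigit():
        return False
    return True

print(is_acceptable("letmein"))                       # False: common, short
print(is_acceptable("correct horse battery staple"))  # False: all lowercase
print(is_acceptable("Correct Horse Battery Staple"))  # True
```

The point of the sketch is where the check lives: in the system, so no user awareness is required for the policy to hold.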

If we security engineers do our job right, then users will get their
awareness training informally and organically from their colleagues and
friends. People will learn the correct folk models of security and be able
to make decisions using them. Then maybe an organization can spend an hour
a year reminding their employees what good security means at that
organization, both on the computer and off. That makes a whole lot more
sense.

*Bruce Schneier is chief security technology officer at BT, and the author
of several security books as well as the Schneier On Security blog. Special
to Dark Reading*