Thursday, 28 January 2016

Fucked pretty much all the way up

Here's a particularly nasty story of online harassment.

Bomb threats have been made under their names. Police cars and fire trucks have arrived at their house in the middle of the night to respond to fake hostage calls. Their email and social media accounts have been hacked, and used to bring ruin to their social lives. They’ve lost jobs, friends, and relationships. They’ve developed chronic anxiety and other psychological problems. More than once, they described their lives as having been “ruined” by their mystery tormenter.

They attribute this abuse to an IRC argument between their son and other hackers; they themselves seem to be collateral damage. Read the whole thing; it goes on and on and on, poorly-written page after poorly-written page.

As I've said about a gazillion times here, the ubiquitous idiom that if you've nothing to hide, you've nothing to fear is absolute bullshit. The assumption that attackers' motives are always rational is simply incorrect.

Inside job?

Three of TalkTalk's call centre workers (actually, contractors) have been arrested for allegedly stealing data. TalkTalk says there's no evidence yet that they are linked to last year's big data breach. This is one of the many reasons we need end-to-end encryption: policies don't protect our data from people with bad intentions.
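
To make the point concrete, here's a toy sketch of the principle in Python, using the cryptography library (the data and names are made up; this is the idea, not anyone's actual system). If the customer holds the key, a light-fingered contractor who dumps the database gets nothing but ciphertext.

```python
# Toy sketch: client-side encryption with Fernet (symmetric, for illustration).
# Assumption: the key lives only with the customer, never with the provider.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays on the customer's device
vault = Fernet(key)

# What the provider actually stores:
ciphertext = vault.encrypt(b"name=J. Bloggs; account=12345; dob=1970-01-01")
print(ciphertext)             # opaque bytes - useless to a rogue insider

# Only the key holder can recover the plaintext:
print(vault.decrypt(ciphertext))
```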

Tuesday, 26 January 2016

What could possibly go wrong?

First, a plea to non-geeks: stop stealing our jargon. I guarantee that "real-time" doesn't mean what you think it means. Stick to writing about your various X-related factors and your strict dancing, and stop talking about things you don't understand.

Second, look at this.

Scottish police now have "real-time" access to the Blue Badge database: the national database that records who is qualified to park in designated disabled parking spots. I'm all for enforcing the law on this; I'm somewhat challenged in the walking department myself, and while an extra few hundred yards can sometimes feel like a few hundred miles, I'm - rightly - not allowed to use the more convenient parking spots. Other people need them a shitload more than I do.

But let's examine a couple of ways that this law could be enforced.

  • Police officers and traffic wardens could make a note of cars parked in a disabled spot without a blue badge. They could later enter the number plate into whatever system it is they have, and an automatic fine could be issued. Or,
  • Police could have open access to a list of disabled people, which they'd be free to abuse in any way they desired. And let's not forget that the database will sooner rather than later be hacked by criminals.
The first method of enforcement seems adequate and doesn't put the people it's trying to protect at risk. The second is all over the fucking place. There's nothing to be gained by giving police a list of disabled people, and the claim that they'll be able to deal more quickly with people who park in disabled spaces without a blue badge is patently bogus.
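
For the avoidance of doubt, the first design is trivial to build without exposing anyone. A hypothetical sketch in Python (made-up names and data throughout): the enforcement terminal asks a yes/no question about a number plate and gets a boolean back, with no names, addresses or medical details ever leaving the database.

```python
from datetime import date

# Hypothetical: what the database holds, keyed by number plate.
# Nothing about the person is exposed through the query below.
BADGE_DB = {"AB12 CDE": date(2026, 3, 1)}  # plate -> badge expiry

def plate_has_valid_badge(plate: str, on: date | None = None) -> bool:
    """The only question enforcement needs answered: yes or no."""
    on = on or date.today()
    expiry = BADGE_DB.get(plate.strip().upper())
    return expiry is not None and expiry >= on

# Officer notes a car in a disabled bay with no badge on display:
if not plate_has_valid_badge("XY99 ZZZ"):
    print("issue automatic fine")   # no list of disabled people required
```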

Of course, while police are checking whether a blue badge is legit, they might as well check the holder's criminal record... I don't like where this leads.

H/T @latentexistence

Monday, 25 January 2016

You see, this:

The majority of "hacks" on individuals - stuff like ID theft rather than compromising a company's server - are done through 'social engineering'. This is mostly just an understanding of how people misunderstand the value - and therefore the potential harm - of information.

Look at this, for example.


The social engineer was following a script because it worked. But customer service people follow scripts because they are forced to by their employers, and this is part of the problem. Their calls are regularly monitored to make sure they stick to the script, usually because organisations don't trust their customer service people - largely because they don't value them very highly, pay them very well, or treat them like actual people.

Having customer service people follow an inflexible script often creates an excellent attack vector. Once a social engineer knows the script, they can usually find ways to game it. For example, going off-piste with someone conditioned to follow a script can flummox them; gradually steering back towards what they're expecting lets the attacker cut corners as the poor victim scrambles to get back on script.

If sticking to the script is more important than customer security, then we clearly have a problem.

Here's another example: when I collect prescription drugs from my local pharmacy, protocol clearly dictates that the staff verify my home address. The intention of that rule is to make sure the person at the counter is the one who was prescribed the drugs, and the protocol is this:
  • Ask the customer's address.
  • Make sure it's the same as the one on the label.
It's a terrible protocol for all sorts of reasons, but most of the time it goes like this instead:
  • Read the address on the label out loud.
  • Ask the customer if this is really their address.
I've tried to explain to the pharmacy staff why this is problematic, but they either don't understand or don't care. It's not their fault: they're doing what the protocol says. They're not expected - or paid - to worry about security, because it's all supposed to be taken care of by the protocol. I've picked up prescriptions for other people - and I've stated every time that they were for other people - without ever having to tell the pharmacist the appropriate address.
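
The underlying rule is ancient: the verifier must never hand over the secret it's checking. A toy sketch of the two versions (addresses made up):

```python
LABEL_ADDRESS = "1 Example Street, Exampleton"   # the secret on the label

def verify_properly(customer_answer: str) -> bool:
    # The protocol as written: the customer supplies the address,
    # the staff compare it to the label.
    return customer_answer.strip().lower() == LABEL_ADDRESS.lower()

def verify_badly() -> bool:
    # The protocol as practised: the staff read the secret out loud
    # and the "verification" is the customer saying yes.
    print(f"Is your address {LABEL_ADDRESS}?")   # secret disclosed!
    return True                                  # anyone can say "yes"
```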

Customer service people need to be trusted more and paid more. They need to understand that they are custodians of customers' information and safety. They should be better trained, better regarded, better compensated - and punished when they get it wrong. Aside from the punishment part, this is unlikely to happen.

Guidelines are fine, but rules and scripts are not helpful. If your policy is to single out people queuing for a flight who look Muslim (whatever that means), then the terrorists won't look Muslim. But if you single out people who look 'hinky' - who are behaving oddly, or where something just doesn't seem quite right - there's no easy defence. But, of course, you need properly trained, compensated and motivated people who can put aside their own biases. I'm not saying that's easy.

The same goes for more traditional customer service environments, but the word here is "icky". If something feels icky, don't do it. Why is your bank asking for your password? Why is an employee asking for her payroll number when it's printed on her payslip and ID card? Why is a stranger suddenly telling you deeply personal things as part of a request for information? These things should feel icky. If a conversation feels icky, that's when protocols are useful.

Defining "hinky" and "icky" is a mistake, just more largely worthless protocol. So security has to be done by switched on people who are properly managed, trained, compensated, motivated and disciplined. Organisations need to learn from security breaches by investing in evaluation of threats and training of staff. They need to experiment and to innovate.

This is hard. Which is exactly why firms should pay me vast sums of money to tell them how to do it.