Wednesday, 30 September 2015

How GCHQ tracks you

Why they do it, nobody really knows.  But Schneier reports on how, via The Intercept.

We build a lot of terrible things here in the UK and hardly anyone notices.  Walking into dreadful situations is basically what we do.

Evil Wednesday roundup

  • Ed Snowden is tweeting.  Like I needed any more reasons to be on government watchlists.
  • The BBC gushes about smarthomes.  I sometimes think our obsession with this kind of thing since at least the 60s is due to the fact that it has been historically unrealistic.  That’s the sort of thing that makes us dream without worrying about the consequences.  Now that it’s trivially achievable, we’re all just assuming it’s desirable and still not thinking about the consequences.  Lots of these IoT devices don’t encrypt the intimate data they’re sending around and many have other terrifying security flaws.  And do we really want this stuff?  Well, a lot of it is kind of cool and I’m generally optimistic about the potential for energy saving, but I’ve worked in smart buildings and in my experience they don’t work.  Perhaps they will, in time, and a lot of the security issues will certainly be solved over time, but that doesn’t help us now.  And I have some serious misgivings about the motives of the companies that will be selling these solutions.  They won’t have our best interests at heart.  Ecosystems are great, but when they fail, we’re fucked.
  • People have been hoaxing Facebook.  They’ve been claiming that Facebook is about to start charging for the ability to keep your profile private.  I think it’s probably just mischief but it has generated a lot of noise and caused Facebook to issue statements on policy, saying that it will never charge for such things.  It seems like a decent way to do activism; shouldn’t we be trying to force companies like Facebook, Twitter, Google etc. to adopt an agenda that doesn’t fuck us all over?  With great power comes great responsibility, of course, but it counts both for activists trying to manipulate companies and the companies that are trying to manipulate us.  Perhaps turnabout is fair play.
  • People are hacking medical devices. Because of course they are.  But this article is more about what I’d call medical appliances (MRI scanners and the like) rather than what I’d call devices (such as pacemakers).  In the end, it won’t be the hacking of stuff like this that’ll worry us, it’s the hacking of the places where all the data goes.  My current work shows that none of this stuff is particularly safe at the moment.
  • China and the US agree not to wage cyber-war on each other.  This is the least believable thing I have ever seen.  There is absolutely no way either government would stand down on this.   It’s hard to understand what sort of theatre is going on here.  Why even pretend?
  • Dudes prank drive-thru staff by switching places in the car when the staff turn round.  You have to see it to believe it.  A lot of our security issues involve people like this who challenge our assumptions.  One thousand internets for the people who worked out what was going on, but no penalty for the people who registered something weird but couldn’t quite join the dots.  We’re neither built nor educated to expect this kind of thing and we’re often more likely to doubt ourselves than the evidence of our senses.  We can’t always get the balance right and as a rule of thumb I’d personally be more likely to trust people who doubt themselves.
  • Yeah, I used the word “dudes”. Deal with it, I’m having to.  Family and friends are helping in this time of crisis.
  • I love stories about how easy it is to break into stuff.  Locks are central to our collective delusion about how security really works, which is one of the reasons I learned to pick locks.  The other is that it’s cool to have a skill – especially a mechanical skill – that most people do not.  My suspicion is that most people believe that many locks can be picked, but not fancy locks, by which they mean the locks they themselves were sold.  Or something would have been done about it, right?  All this while knowing that locksmiths pick locks all day as their actual job.  What does this attitude tell us about backdoors in encryption?
  • Apple again deciding what kind of journalism people can consume on the devices they think they own.  This time, it obstructed an app that reports drone strikes and then removed it from the app store when it managed to slip through their arbitrary filters.  This seems like the sort of thing we should know about, Apple.
  • More assumption-challenging: here, a baby works out how to escape.  I have a more stupid anecdote: when I was about that age, my ‘play pen’ (prison) didn’t have a floor and was made from very light materials.  What do you think I did?  It did not require problem-solving skills on a par with that other baby’s, though.  Bonus anecdote: our stair gate had a bolt.  A fucking bolt. How stupid did my parents think I was?  It was the same stair gate that successfully prevented all my siblings making unauthorised trips up and down stairs, though.  Just saying.

I’m back

Things have been slow around here lately because I’ve been busy.

I’m still busy but I miss writing here, even though nobody reads it, so I’ll try harder.  Some stuff coming today.

Wednesday, 23 September 2015

Hack the world

This is a fascinating story. I’m not familiar with the game show in question but someone hacked it and won a lot of money because he noticed that the game’s supposedly random element was actually very predictable.

This is a familiar, systemic security problem, similar to one I’m dealing with at the moment.  The people who designed the game assumed for no good reason that it was sufficiently random.  That is, they assumed nobody would notice that the random element was in fact a simple repeating sequence, possibly because they expected people to play the game within the rules – including the implicit rules they hadn’t thought about – rather than thinking about how people might see the game from outside with the goal of making money rather than playing the game.
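
The article doesn’t spell out the mechanism, but the failure mode is easy to sketch.  Here’s a minimal, hypothetical illustration in Python – the repeating pattern and the period-finding one-liner are mine, not anything from the actual show – of why an observer beats a “random” element that merely cycles:

```python
import secrets

# Hypothetical prize board whose "random" element just cycles through a
# fixed pattern -- the kind of shortcut designers assume nobody will notice.
PATTERN = [3, 7, 1, 8, 4, 2]

def naive_board(step):
    """Looks random to a casual player, but repeats every len(PATTERN) steps."""
    return PATTERN[step % len(PATTERN)]

# An observer who records a dozen rounds can recover the period...
observed = [naive_board(i) for i in range(12)]
period = next(p for p in range(1, len(observed)) if observed[p:] == observed[:-p])

# ...and from then on predict every future outcome perfectly.
assert naive_board(12) == observed[12 % period]
assert naive_board(37) == observed[37 % period]

# A cryptographically secure source gives the observer nothing to latch onto.
unpredictable = secrets.randbelow(9)
```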

Many security problems have this signature; it’s natural for us to build systems around what people are supposed to do.  That’s why it’s so entertaining when people break those systems by doing something unexpected.  I’m the sort of person who instinctively seeks out the places where things break.  You’d think that would be an advantage in the security community but it usually isn’t: typically nobody wants to know.

This is a problem I’m dealing with at the moment.  My client wants me to produce a system as a proof of concept which doesn’t prove any concepts and actively ignores the fundamental security problem it creates.  I’ve explained how the security problem can be solved, but the client isn’t interested, insisting instead that it’s a proof of concept and we can solve the security problems later.  I cannot do this; whatever I have that passes for ethics knows that proofs of concept that ignore inconvenient security problems end up in production software nine times out of ten.  I won’t do it.

Hack the world, everyone. Find out what things people haven’t thought of and play with them. Don’t steal and don’t fuck with people’s privacy, but fuck with things that fuck with people’s privacy. Think laterally, don’t brute force it unless you have no other option.

Wednesday, 9 September 2015

A distributed denial of things part 2

I speculated earlier about the possibility of a distributed denial of things.  It was an off-the-cuff remark but not an entirely idle one.  As we rely ever more on increasingly smart devices, we have more and more to lose if we’re denied their use.  Technically, I won’t be talking only about denial of service attacks, distributed or otherwise, but about the concept of other people denying us the things we think we own.

Think about an internet-connected alarm clock that wakes us up early if the traffic is bad or on especially nice days so we can walk to work.  We already rely on alarm clocks and we’ve all panicked when they didn’t, for one reason or another, go off.  We might rely on a smartclock even more because we no longer need to plan for contingencies, meaning we can maximise our time in bed.

The amount of stuff such a device would need to know (and would surely report) about us is deeply concerning, but so is the possibility that the functionality we’ve come to rely on will be taken away from us. 

That’s a fairly trivial example.  What if our TVs won’t function if they are not connected to their service centre over the internet?  What if our fridges stop keeping our food cold or our thermostats stop working?  What if our car stops working?

When we need to abide by user agreements while we drive our cars, we face two problems.  First, it means we’re going to be under surveillance.  The user agreement will have to be enforced.  The service centre (or insurance company) can take away our car if they don’t like the way we use it.  Second, there’s likely to be a way for attackers to take away our cars, too. 

Denial of things could become a serious problem and it’s not clear what sort of defences we’ll have.  Imagine someone taking away our car the morning of an important interview (which our calendars and clocks told them about).  Would we pay to have our cars unlocked?  We might.

That’s personal. But what if an attacker launches an actual DoS against the server that keeps our cars running or our alarm clocks alarming?  The more customers those servers have, the more they might pay to get their servers back.  I think that’s where the danger of denial of things lies.

A distributed denial of things

I have many concerns about the Internet of Things and have written about some of them before.  IoT devices are usually not built with security in mind.  They are often rushed to market in a highly competitive space.  I expect the manufacturers think of security as something that can be added with a firmware update, which is a hugely problematic attitude, especially when – as is known to be the case with some devices – the firmware update process itself is not secure.
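
A secure update path isn’t mysterious, it’s just rarely prioritised.  Here’s a minimal sketch of the idea in Python, assuming an Ed25519-signed image and using the cryptography library; the key handling, the stand-in image blob and the apply_update function are illustrative, not any vendor’s actual process.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# --- vendor side (build server): sign the firmware image ---
vendor_key = Ed25519PrivateKey.generate()             # stays with the vendor in reality
firmware_image = b"\x7fELF...new fridge firmware..."  # stand-in blob
signature = vendor_key.sign(firmware_image)

# --- device side: only the public key is baked in at manufacture ---
device_trusted_key = vendor_key.public_key()

def apply_update(image: bytes, sig: bytes) -> bool:
    """Refuse to flash anything that isn't signed by the vendor's key."""
    try:
        device_trusted_key.verify(sig, image)
    except InvalidSignature:
        return False
    # flash_to_storage(image)  # hypothetical device-specific step
    return True

assert apply_update(firmware_image, signature)
assert not apply_update(firmware_image + b"tampered", signature)
```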

But a bigger problem is that many devices are specifically designed to actively spy on us.  What other even vaguely plausible reason could there be for your fridge to be internet-connected?  Samsung seems to tout its internet fridge as a replacement for the paper calendars most of us hang on our fridges, but of course you can also run apps and browse the web.  Let’s face it, though: anyone who has a smart fridge most likely has a house festooned with tablets, smartphones and a bunch of other connected devices.  Why would they want to do the same things standing at their fridge that they could do in the comfort of an armchair on their tablet?  Clearly, all the advantages are to Samsung, who are very likely collecting all manner of information about us and our habits.  Fridges are going to be more or less efficient depending on how much food is in them.  This could say much about our shopping habits even without knowing what food is in there.  It could tell Samsung what days we usually shop on.  Combine it with sensors on the doors and it could tell them whether we prefer fresh or frozen foods, what times we tend to be hungry….  And of course, Samsung’s smart fridge has been found to be insecure.  A man-in-the-middle attack could uncover the owner’s Google credentials.
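
Reports at the time pointed at the fridge not properly validating TLS certificates, though I haven’t seen its code, so treat this as a general illustration rather than Samsung’s actual bug.  In Python’s standard ssl module, the difference between the broken client and the sane one is a couple of lines:

```python
import ssl

# What a careless embedded client effectively does: switch verification off,
# so whoever sits between the device and the server can present any
# certificate and read the traffic (credentials included).
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False        # must be disabled before verify_mode
insecure_ctx.verify_mode = ssl.CERT_NONE

# What it should do: keep the defaults -- verify the certificate chain and the
# hostname, ideally against a CA bundle shipped and updated with the firmware.
sane_ctx = ssl.create_default_context()
assert sane_ctx.verify_mode == ssl.CERT_REQUIRED
assert sane_ctx.check_hostname is True
```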

There is one very minor reason for having an internet fridge: firmware updates.  A few years ago, my old fridge needed a firmware update because of that model’s tendency to suddenly burst into flames.  A man came round and plugged his laptop into the fridge with an ethernet cable.  There was a period of several weeks between finding out about the problem and it being fixed.  Automatic updates would have prevented this (and for the record, Samsung’s firmware updates seem to be pretty secure).  But this is surely a rare case.  How often does fridge firmware need updating?  It’s surely worth neither the manufacturers’ money nor the consumers’ risk if updates are the only use case.  It’s much more likely that the fridge is spying on people.

This brings me to a yet bigger worry.  One day, we might find that all fridges are internet-ready and that they won’t work unless they are connected.  If this sounds unrealistic, perhaps you’re right.  Perhaps people will start seeing sense.  But think instead about smart TVs.

A couple of years ago, we bought a new TV.  We weren’t looking for anything special, we just wanted something that would sit in the corner and show pictures of things.  But it was virtually impossible to get one that wasn’t HD.  We have no objection to HD, but we’d rather have saved money and got an SD one instead; having HD hasn’t significantly improved our lives.  We don’t notice that we don’t have it (we don’t have an HD subscription to our main provider). 

I don’t think it will be long before it’s virtually impossible to buy a TV that isn’t internet-connected and that requires an active connection to function at all.  Smart TVs (including Samsung’s) are notoriously insecure.  They store voice commands you issue in their data centres.  They report all sorts of data about our viewing habits.  If they have cameras, these register with dozens of companies when you first fire them up, presumably reporting our activities to those companies.

There’s no doubt that surveillance is the business model of the IoT.  My main concern is that sooner or later there won’t be a practical way to opt out.

And here’s another thing to worry about.  How long will it be before we see a Distributed Denial of Things?

Wednesday, 2 September 2015

Stealing children

The BBC is talking about how sites and apps aimed at children are creepily harvesting data about them.

The UK's data protection agency took part in an international investigation looking at almost 1,500 websites popular with young people.

It found that one in five asked for phone numbers or pictures.

Needless to say, many of them share this data with Random Others.

I’m an advocate of children having things like email addresses and mobile phones and of their being allowed to break things to see how they work.  The day my niece can break my network security will be an especially good one.  The day she breaks it a second time, after I’ve fixed the problem, will be even better.  Some of the kids I’ve spoken to about this routinely circumvent their schools’ firewalls by various means and I strongly encourage them to do so.

But we all make horrendously bad decisions in order to learn, and services aimed at children shouldn’t actively solicit mistaken behaviour; they should set an example.  The big data fetish shouldn’t override treating people like people and sure as shit shouldn’t cause companies to exploit children.

We have sandboxes to encourage children to push their boundaries in relative safety.  Sites and apps aimed at children should have this as a core design principle.  What is wrong with people?

Only metadata

This is interesting.  A journalist posted his phone metadata online and challenged people to find stuff out about his life.  That data included:

  • Who he called and texted (in our dataset, exact phone numbers have been hidden and replaced by unique identifying codes).
  • How long each phone call lasted.
  • The time of the communication.
  • The location of the cell tower contacted when outgoing calls were initiated.
  • The location of the cell tower contacted for SMS and internet connections.

The results were hit and miss but accurate about the big stuff (and some people did very well on the little stuff as well).  Because I’m me, I’m more interested in the stuff they got wrong.  Some highlights:

  • Some people thought the journalist was partying all night on New Year’s Eve because his phone was active all night and all morning.  In fact, he was in bed before 12.  The pings probably came from other people sending him HNY messages, and he had a 5am shift the following day.  It’s interesting that our perception of the norm colours our expectations so completely; nobody guessed the truth.
  • Some people inferred that he’s a member of a particular golf course, which isn’t true.  This is interesting because it would likely be a very easy thing to check by ringing the place up and asking.  They might not tell you outright but there’d probably be ways to wheedle the information out of them.  I’d probably try asking them to give the guy an important message when he was next in.  People like to be helpful.  My point is that although people used other datasets to help their analysis, they apparently didn’t use social engineering.  The first thing I’d have done is work out where he works and then ring them up and ask questions.  Maybe even ring him up.  That would likely have told me at least that he works shifts, which nobody in the challenge got from the metadata alone.
  • Actually, thinking about it, it’s surprising that nobody guessed that from the data, especially since some people accurately guessed his bus route.  I haven’t looked at the data but patterns surrounding when he used the bus, ferry or drove to work seem like they’d stand out (see the sketch after this list).  Perhaps people just didn’t think to look, since shift work is relatively uncommon these days.  Perhaps it was a limitation in the tools people used.  If nothing else, this might give us some insight into how to design better surveillance tools.  Which is all we need, right?
  • It was easy to identify domestic flights but much harder to guess international ones for obvious reasons.  There are ways to track down international flights without access to foreign metadata (law enforcement agencies would rarely face this problem) but they are tricky, often time-limited, and you’d probably need to be there in person.  It’s interesting to think about ways to do this.
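
I haven’t looked at the published dataset, so the rows below are made up, but the sort of query I mean is trivial: bucket the pings by hour and by day and a shift pattern should jump out.  A rough sketch in Python (the field shapes and tower labels are hypothetical):

```python
from collections import Counter
from datetime import datetime

# Made-up rows in roughly the shape of the published metadata:
# (timestamp, id of the cell tower the phone contacted).
events = [
    ("2015-03-02 05:10", "tower_near_home"),
    ("2015-03-02 05:40", "tower_near_ferry"),
    ("2015-03-02 06:05", "tower_near_work"),
    ("2015-03-09 05:12", "tower_near_home"),
    ("2015-03-16 13:55", "tower_near_home"),   # a later start that week
    ("2015-03-16 14:30", "tower_near_work"),
]

parsed = [(datetime.strptime(t, "%Y-%m-%d %H:%M"), tower) for t, tower in events]

# Activity bucketed by hour of day: clusters at 5am on some days and early
# afternoon on others are exactly the signature of shift work.
by_hour = Counter(dt.hour for dt, _ in parsed)

# First sighting at the work tower per day: which days he commuted at all,
# and roughly when each shift started.
first_at_work = {}
for dt, tower in parsed:
    if tower == "tower_near_work":
        first_at_work.setdefault(dt.date(), dt.time())

print(by_hour)
print(first_at_work)
```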

People getting stuff wrong is an aspect of privacy that a lot of people forget.  I’ve written about it before in various places.  I’m out and about so can’t look up the refs just now, but I know I’ve written about an interview with a security expert.  The security expert looked at the journalist’s Foursquare checkins and noted that he checked in often at a deli and a doctor’s surgery.  You can see how an insurance company might construct a narrative involving the journalist eating a lot of fatty deli meat and having to see his doctor often as a result.  The security expert pointed out exactly this narrative.  In fact, the doctor was the journalist’s daughter’s paediatrician.  The journalist liked to check in at the deli because hardly anyone else ever did, making him the Foursquare mayor of the place.  But the fake narrative was fairly convincing.

I’ve given a few talks about this sort of thing at conferences and other venues.  When I do this, I usually also talk about ways to control the narrative, which I’ve also written about before.  It’s an interesting subject.  Being very private can increase your risks of being misconstrued, but false trails and misinformation are surprisingly difficult to pull off convincingly and introduce new risks (that guy must have some really juicy stuff to hide).  Pre-emptive strikes, such as posting embarrassing photos before anyone else does so you can control the context, carry their own dangers.

It’s tricky and it’s what I’m convinced is at the heart of whatever privacy is.
