Why they do it, nobody really knows. But Schneier reports on how, via The Intercept.
We build a lot of terrible things here in the UK and hardly anyone notices. Walking into dreadful situations is basically what we do.
This is a fascinating story. I’m not familiar with the game show in question but someone hacked it and won a lot of money because he noticed that the game’s supposedly random element was actually very predictable.
This is a familiar, systemic security problem, similar to one I’m dealing with at the moment. The people who designed the game assumed, for no good reason, that its random element was sufficiently random. That is, they assumed nobody would notice that it was in fact a simple repeating sequence, possibly because they expected people to play the game within the rules (including the implicit rules they hadn’t thought about) rather than to look at the game from outside, with the goal of making money rather than playing.
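The attack is almost embarrassingly simple to sketch. This is a hypothetical illustration, not the actual game’s mechanism: record enough outcomes of the supposedly random element and look for the shortest cycle that reproduces them all.

```python
def find_period(seq, max_period=None):
    """Return the shortest period p such that seq repeats every p items,
    or None if no repetition fits the observed window."""
    n = len(seq)
    for p in range(1, (max_period or n // 2) + 1):
        if all(seq[i] == seq[i % p] for i in range(n)):
            return p
    return None

# A "random" element that is actually a short repeating cycle:
observed = [3, 1, 4, 1, 5] * 6   # 30 observations of a period-5 pattern
period = find_period(observed)
print(period)  # 5
# Once the period is known, every future value is predictable:
next_value = observed[len(observed) % period]
```

A player who watches long enough to run something like this in their head no longer needs luck, which is exactly what the designers never modelled.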
Many security problems have this signature; it’s natural for us to build systems around what people are supposed to do. That’s why it’s so entertaining when people break those systems by doing something unexpected. I’m the sort of person who instinctively seeks out the places where things break. You’d think that would be an advantage in the security community but it usually isn’t: typically nobody wants to know.
This is a problem I’m dealing with at the moment. My client wants me to produce a system as proof of concept which doesn’t prove any concepts and actively ignores the fundamental security problem it creates. I’ve explained how the security problem can be solved, but the client isn’t interested, insisting instead that it’s a proof of concept and we can solve the security problems later. I cannot do this; whatever I have that passes for ethics knows that proofs of concept that ignore inconvenient security problems end up shipping as production software nine times out of ten. I won’t do it.
Hack the world, everyone. Find out what things people haven’t thought of and play with them. Don’t steal and don’t fuck with people’s privacy, but fuck with things that fuck with people’s privacy. Think laterally, don’t brute force it unless you have no other option.
I speculated earlier about the possibility of a distributed denial of things. It was an off-the-cuff remark but not an entirely idle one. As we rely ever more on increasingly smart devices, we have more and more to lose if we’re denied their use. Technically, I won’t be talking only about denial of service attacks, distributed or otherwise, but about the concept of other people denying us the things we think we own.
Think about an internet-connected alarm clock that wakes us up early if the traffic is bad or on especially nice days so we can walk to work. We already rely on alarm clocks and we’ve all panicked when they didn’t, for one reason or another, go off. We might rely on a smartclock even more because we no longer need to plan for contingencies, meaning we can maximise our time in bed.
The amount of stuff such a device would need to know (and would surely report) about us is deeply concerning, but so is the possibility that the functionality we’ve come to rely on will be taken away from us.
That’s a fairly trivial example. What if our TVs won’t function if they are not connected to their service centre over the internet? What if our fridges stop keeping our food cold or our thermostats stop working? What if our cars stop working?
When we need to abide by user agreements while we drive our cars, we face two problems. First, it means we’re going to be under surveillance. The user agreement will have to be enforced. The service centre (or insurance company) can take away our car if they don’t like the way we use it. Second, there’s likely to be a way for attackers to take away our cars, too.
Denial of things could become a serious problem and it’s not clear what sort of defences we’ll have. Imagine someone taking away our car the morning of an important interview (which our calendars and clocks told them about). Would we pay to have our cars unlocked? We might.
That’s personal. But what if an attacker launches an actual DoS against the server that keeps our cars running or our alarm clocks alarming? The more customers those servers have, the more they might pay to get their servers back. I think that’s where the danger of denial of things lies.
I have many concerns about the Internet of Things and have written about some of them before. IoT devices are usually not built with security in mind. They are often rushed to market in a highly competitive space. I expect the manufacturers think of security as something that can be added with a firmware update, which is a hugely problematic attitude, especially when, as is known to be the case with some devices, the firmware update process itself is not secure.
But a bigger problem is that many devices are specifically designed to actively spy on us. What other even vaguely plausible reason could there be for your fridge to be internet-connected? Samsung seems to tout its internet fridge as a replacement for the paper calendars most of us hang on our fridges, but of course you can also run apps and browse the web. Let’s face it, though: anyone who has a smart fridge most likely has a house festooned with tablets, smartphones and a bunch of other connected devices. Why would they want to do the same things standing at their fridge that they could do from the comfort of an armchair on their tablet?

Clearly, all the advantages are to Samsung, who are very likely collecting all manner of information about us and our habits. Fridges are more or less efficient depending on how much food is in them, which could say much about our shopping habits even without knowing what food is in there. It could tell Samsung what days we usually shop on. Combine it with sensors on the doors and it could tell them whether we prefer fresh or frozen foods, what times we tend to be hungry…. And of course, Samsung’s smart fridge has been found to be insecure: a man-in-the-middle attack could uncover the owner’s Google credentials.
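The reported flaw amounts to encrypting without authenticating: the traffic is scrambled, but the client doesn’t check who it’s talking to, so anyone on the network can sit in the middle. In terms of Python’s standard ssl module (a sketch of the general pattern, not the fridge’s actual code), the difference between a sound client and a vulnerable one is about two lines.

```python
import ssl

def secure_client_context():
    """TLS context that verifies the server's certificate chain and
    hostname -- what any client carrying credentials must use."""
    return ssl.create_default_context()

def broken_client_context():
    """The effective behaviour of a vulnerable client: encryption
    without authentication. ANY certificate is accepted, so a
    man-in-the-middle can present their own and read everything,
    credentials included."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # don't check who we're talking to
    ctx.verify_mode = ssl.CERT_NONE   # accept any certificate at all
    return ctx
```

Both contexts produce a connection that looks encrypted on the wire; only the first one is actually talking to the server it thinks it is.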
There is one very minor reason for having an internet fridge: firmware updates. A few years ago, my old fridge needed a firmware update because of that model’s tendency to suddenly burst into flames. A man came round and plugged his laptop into the fridge with an ethernet cable. There was a period of several weeks between finding out about the problem and it being fixed. Automatic updates would have prevented this (and for the record, Samsung’s firmware updates seem to be pretty secure). But this is surely a rare case. How often does fridge firmware need updating? It’s surely worth neither the manufacturers’ money nor the consumers’ risk if updates are the only use case. It’s much more likely that the fridge is spying on people.
This brings me to a yet bigger worry. One day, we might find that all fridges are internet-ready and that they won’t work unless they are connected. If this sounds unrealistic, perhaps you’re right. Perhaps people will start seeing sense. But think instead about smart TVs.
A couple of years ago, we bought a new TV. We weren’t looking for anything special, we just wanted something that would sit in the corner and show pictures of things. But it was virtually impossible to get one that wasn’t HD. We have no objection to HD, but we’d rather have saved money and got an SD one instead; having HD hasn’t significantly improved our lives. We don’t notice that we don’t have it (we don’t have an HD subscription to our main provider).
I don’t think it will be long before it’s virtually impossible to buy a TV that isn’t internet-connected and that doesn’t require an active connection to function at all. Smart TVs (including Samsung’s) are notoriously insecure. They store the voice commands we issue in their data centres. They report all sorts of data about our viewing habits. If they have cameras, these register with dozens of companies when we first fire them up, presumably reporting our activities to those companies.
There’s no doubt that surveillance is the business model of the IoT. My main concern is that sooner or later there won’t be a practical way to opt out.
And here’s another thing to worry about. How long will it be before we see a Distributed Denial of Things?
The BBC is talking about how sites and apps aimed at children are creepily harvesting data about them.
The UK's data protection agency took part in an international investigation looking at almost 1,500 websites popular with young people.
It found that one in five asked for phone numbers or pictures.
Needless to say, many of them share this data with Random Others.
I’m an advocate of children having things like email addresses and mobile phones and of their being allowed to break things to see how they work. The day my niece can break my network security will be an especially good one. The day she breaks it a second time, after I’ve fixed the problem, will be even better. Some of the kids I’ve spoken to about this routinely circumvent their schools’ firewalls by various means and I strongly encourage them to do so.
But we all make horrendously bad decisions in order to learn and services aimed at children shouldn’t actively solicit mistaken behaviour, they should set an example. The big data fetish shouldn’t override treating people like people and sure as shit shouldn’t cause companies to exploit children.
We have sandboxes to encourage children to push their boundaries in relative safety. Sites and apps aimed at children should have this as a core design principle. What is wrong with people?
This is interesting. A journalist posted his phone metadata online and challenged people to find stuff out about his life. That data included:
The results were hit and miss but accurate about the big stuff (and some people did very well on the little stuff as well). Because I’m me, I’m more interested in the stuff they got wrong. Some highlights:
People getting stuff wrong is an aspect of privacy that a lot of people forget. I’ve written about it before in various places. I’m out and about so can’t look up the refs just now, but I know I’ve written about an interview with a security expert. The security expert looked at the journalist’s Foursquare checkins and noted that he checked in often at a deli and a doctor’s surgery. You can see how an insurance company might construct a narrative involving the journalist eating a lot of fatty deli meat and having to see his doctor often as a result. The security expert pointed out exactly this narrative. In fact, the doctor was the journalist’s daughter’s paediatrician. The journalist liked to check in at the deli because hardly anyone else ever did, making him the Foursquare mayor of the place. But the fake narrative was fairly convincing.
I’ve given a few talks about this sort of thing at conferences and other venues. When I do this, I usually also talk about ways to control the narrative, which I’ve also written about before. It’s an interesting subject. Being very private can increase your risk of being misconstrued, but false trails and misinformation are surprisingly difficult to pull off convincingly and introduce new risks (that guy must have some really juicy stuff to hide). Pre-emptive strikes, such as posting embarrassing photos before anyone else does so you can control the context, carry their own dangers.
It’s tricky and it’s what I’m convinced is at the heart of whatever privacy is.