Friday, 26 February 2016

How not to deal with your own incompetence

uKnowKids, which was already a creepy, dangerous firm, was recently told by white-hat researcher Chris Vickery that their database, jam-packed with detailed information about children, didn't even have password protection.  He was able to download texts, images and "detailed profiles" from, to and about kids without ever having to enter so much as a password.

He then told uKnowKids about it.  They didn't react well.
When Chris Vickery discovered the security risk and alerted uKnowKids, it accused him of hacking its systems.
Um.... and?  The issue isn't whether the site was 'hacked' (however they'd like to define the term). Every prominent site will be hacked sooner or later.  The issue is whether uKnowKids did everything reasonably possible to protect the data.
The MacKeeper security expert said the database was not password protected. uKnowKids' chief executive Steve Woda put this down to "human error" saying a third-party had installed it.
Oh, well of course that's perfectly all right then.

Mr Woda's extraordinary lack of awareness continues.  Vickery deleted most of the data he'd grabbed but held on to a few screenshots as leverage in case the company didn't fix the problem (apparently they did).  Woda said:
"I have no animosity. I just wish he would have respected our customers' data."
I'd say that Vickery respected the data a shitload more than Woda did.
The row highlights the grey area in which ethical hackers operate - seeking out security weaknesses and vulnerabilities and informing the data owners rather than exploiting them. They typically act without obtaining consent in advance, and deal with very sensitive material.
There's no grey area. It's not the ethical hackers who are at fault here, it's the companies who refuse to learn from their mistakes.
"Anyone researching security has a duty of care," said cybersecurity expert Professor Alan Woodward from Surrey University.
I.... don't even know what that means.  Of course they don't. The duty of care rests with the people collecting and storing the data. If they do it wrong, it's their fault, not that of anyone pointing it out.
"As this data concerns children, I would have hoped that the researcher would have exercised great caution and acted in such a way that he was not adding to the risks of the data being copied into the wild - notwithstanding that the data was publicly visible anyway.
Me too, and apparently he did.  He quietly informed the firm that their security didn't exist. He didn't splash the news all over the web until after the problem was fixed. He put exactly nobody at risk whereas uKnowKids certainly did. And would certainly still be doing if Vickery hadn't pointed it out.
"I think both sides in this story could have handled it better."
Bullshit. Vickery handled it just fine.  He told uKnowKids that their security was - to put it mildly - broken and they accused him of nefarious acts.

From the uKnowKids site:
uKnowKids Makes Parenting Easier, and Keeps Kids Safe Online and on the Mobile Phone 
uKnowKids has helped parents protect more than 260,000 kids in more than 50 countries around the world. 
Better than parental controls, uKnowKids is the world's leading parental intelligence service.
Not....particularly safe. It seems that the company has contributed significantly to the potential danger of its charges.  Fuck this kind of service anyway.  Spying on your kids will not keep them safe and any service that offers to make parenting "easier" ought to be ashamed of itself.

Thursday, 25 February 2016

Hacking the robots that carry us around

These days, cars are computers that happen to have wheels and engines and stuff.  In fact, they're more like robots because they have all kinds of fancy sensors for perceiving the world and actuators for interacting with it. Also, it's better to think of cars as robots because riding around inside robots is awesome.

Increasingly, our cars are just another Thing in the Internet of Things.  They collect data about their movements and have an internet-accessible interface to get at it.  Sometimes, parts of a car's functionality are also exposed through an internet-accessible API.  The Nissan Leaf is one such car.

This is an interesting and cool story.  Someone hacked his own Nissan Leaf and found he could access information about his car (battery status, air conditioning status and so on) without using the official companion app.  He found that these requests were anonymous; they didn't require any kind of authentication and no session ID was used. Then he found that he could do the same to other people's Leafs.

Then he found that he could control some of the car's features, such as air conditioning and heating.  Some of - that is - other people's cars' features.

This might not sound too bad (what's the likelihood of someone turning off your aircon and what's the harm if they do?) but the data hackers could get and the things they could control are less important than the principle.

It wasn't that Nissan used bad authentication, it was that they didn't use any authentication at all. Oh, and interactions with other people's Leafs are completely anonymous.

It works like this.  'Authentication' is done by the app using the car's VIN code.  This is fucking etched on the car's windscreen.  It's a longish number, so you might think you're unlikely to be attacked unless a hacker sees your car. Which they certainly might if they think you blocked them in or stole their parking spot.  It's much worse than that, though. VINs are structured, which means that most of the large number is taken up by codes for the manufacturer, country, year, plant, production run etc., so only the last few digits (5 or 6) actually vary between similar cars.  Brute-forcing an attack would take a few lines of code.
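To put some numbers on that, here's a minimal sketch of the enumeration. The VIN prefix is invented for illustration; it isn't a real Nissan code:

```python
# Why VIN-based 'authentication' fails: most of a VIN is shared by
# every car in the same production run, so only the serial-number
# tail needs guessing.  The prefix below is made up for illustration.
from itertools import product

def candidate_vins(prefix, digits=5):
    """Yield every VIN that shares `prefix` and varies only in its
    last `digits` positions (the sequential serial number)."""
    for combo in product("0123456789", repeat=digits):
        yield prefix + "".join(combo)

PREFIX = "XX1NZ99X9XX0"  # 12 invented characters: maker, model, year, plant
vins = list(candidate_vins(PREFIX))
print(len(vins))  # 100000 candidates, generated in well under a second
```

An attacker would simply fire each candidate at the unauthenticated API and see which ones respond; a hundred thousand requests is a trivial workload for a script.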

According to the BBC, Nissan has said that there's no real problem and that drivers are totally safe.  There's no need for anyone to panic, but I wouldn't say it's safe.  An attacker could turn on the car's heating and aircon while the car was parked, draining the battery and leaving the driver stranded.  In addition, an attacker could get at the driver's username, which might give a clue to their identity.

As I said, it's the principle.  Security wasn't even an afterthought. It wasn't even a thought.

Going all the way...

It looks as though Apple is determined to go all the way with refusing to comply with the FBI.
Elsewhere, the New York Times reported that Apple had begun working on an upgrade to its devices which would make it impossible to break into an iPhone using the method proposed by the FBI in this case.
Unless they're lying, of course, but I doubt it.
The FBI has argued that Apple is overstating the security risk to its devices. FBI Director James Comey said Apple had the technical know-how to break into Farook's device only in a way that did not create a so-called "backdoor" into every Apple device.
When is a back door not a back door?  Ooh, ooh, I know that one! Never.  You can call it a window if you like, but burglars can still get in through it.

It's important that Apple is taking a stand on this, especially given Bill Gates' bizarre comments on the subject.  Surveillance creep is far too tempting for governments and security forces alike.
Let’s say the bank had tied a ribbon round the disk drive and said ‘don’t make me cut this ribbon, because you’ll make me cut it many times.’
I think that says the opposite of what Gates intended.  Surely once you've cut the ribbon, you're done and can get at the whole disk?  Or to shake off Gates' terrible analogy, once the authorities can get at one record, they can get at them all.

Hack my home

Our homes are difficult places to secure.  Regardless of the fanciness and expense of our defences, if someone is determined to get in, they will.  This doesn't, of course, mean that we shouldn't secure our houses.  It means that we should understand the risks and what we have to lose, then act accordingly.
My house has three especially weak links, one of which is me (I won't say what the others are).  I open the door to anyone who knocks on it without using the security chain.  I occasionally let strangers into my house without asking for ID or telling someone else what I'm doing.  This doesn't mean I'm stupid, it means that I've weighed the risks against the potential consequences and prioritised my defences alongside other concerns, quality of life and so on.  When I lived in more dangerous places, I made different choices.

Physical and operational security of one's home is fairly easy to understand.  It needs a little thought and we'd all be wise to take advice, but the most important considerations are usually the most obvious.  What's more, if someone breaks into your house, you're likely to know about it.  Something will probably be broken.  Something else will probably be missing.  We can fix what broke and replace what was stolen.

Before I move on, a couple of anecdotes about home security:

When I was a student many years ago, I lived in a rough street in a rough area of a town which was - in those days - fairly rough.  The house had a big bay window on the ground floor and since the summer was very hot, I opened that window while there were several people in the room.  There were opaque curtains and nobody could see in.  Suddenly, one of the neighbour kids leaped in through the window, saw us staring at him, looked panicked and leaped back out again.  He was presumably hoping to find the room empty and grab the first thing he could get his hands on.  We didn't open the window after that.

Another time, I answered the door to another neighbour kid.  He asked if I had a bicycle pump he could borrow.  As it happened, I did have a pump but as I was about to say so, I realised that he was casing the joint.  He was trying to find out if we had bikes so he could come back later to steal them.

These kids were around 8 years old and came close to defeating our 'security' through very simple but undoubtedly often effective means.  Think how much more vulnerable we'd have been if they had been seasoned adult thieves instead of opportunistic children.

The physical threats to our homes are largely straightforward.  Where they are not, easily-implemented policies can be employed to defend against most of them. It's harder to evaluate the risks and come up with a security system that balances security against the other important factors.  It's harder still to re-evaluate the environment and keep our security systems up to date.  This is one of the reasons elderly people are often targeted for doorstep (and other) scams.  The world has changed a great deal in their lifetime and they haven't necessarily noticed those changes, since they were gradual.  Strangers exploiting old-fashioned courtesy used to be rare and the risk of helping people was lower than it is today.

Securing our homes against network attacks is a lot harder for a variety of reasons.   One of the most important is complexity.  Most people don't understand where the vulnerabilities in our networks lie.  Perhaps the (good) message about password security has been stressed so often that we've lulled ourselves into a false sense of security; we think we're safe as long as we have good passwords when in fact there's a lot more to worry about than that.

One of the most important is the same as with physical security: change. Many of us have moved from perhaps a single computer connected to a dial-up modem by a wire to a house full of wireless devices with a broadband connection.  These devices have changed, too.  First they started to look like phones or tablets.  Now they can look like anything.  Light bulbs, alarm clocks, ovens, even kettles can be computers connected - via our home networks - to the internet.  We call this the Internet of Things (IoT).

One of the main problems of the IoT is that the computers connected to it don't look like computers.  They look like consumer products.  We're not used to thinking of consumer products as a security risk.  We plug them in and they work.  But if our alarm clocks are connected to the internet so they can wake us up early if the traffic is bad, then they - and therefore the rest of our networks - might be vulnerable to attack.  We don't see what's going on behind the scenes, we see a consumer product like any other.

The second main problem is that in many cases, the security of IoT devices has been an afterthought at best and there are numerous vulnerabilities that could be exploited by an attacker.  You might not worry too much if a stranger starts turning your lights on and off from the other side of the world, but you might if the attacker could use your smart bulbs as an entry point to your network.  You should definitely worry if hackers could enter your network via your insecure smart kettle and turn on the webcam on your laptop or access your network storage drive.  Especially since, unlike a telltale broken window, it might not be at all obvious that anyone malicious was ever there.

It's wrong, however, to blame the IoT for all security problems.  Far and away the biggest threats to our home networks come from our routers.  Routers are the devices, usually installed by our ISPs, which connect on one side to the internet and on the other (usually wirelessly, these days) to our various devices.  They are the core of our networks and therefore a highly prized target for hackers.

Worryingly, they are also very often distressingly easy to compromise.  Lots of routers (including two in wifi range of where I'm sitting right now) use the default manufacturer password.  Many use out-of-date firmware with known security holes.  Many have WPS turned on by default.  This is a technology that uses PINs instead of passwords to allow easy connection of devices. PINs are much more susceptible than passwords to brute force attacks and in many, many cases, the algorithm that generates PINs on a particular router is known, making it easier still for hackers to gain access.
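The arithmetic behind that claim is worth seeing. A WPS PIN has eight digits, but the protocol confirms the two halves separately and the final digit is a checksum, which collapses the search space dramatically. A sketch of the well-known calculation, not router-specific code:

```python
# An 8-digit WPS PIN looks like a hundred million possibilities, but
# the protocol acknowledges each half of the PIN separately, and the
# eighth digit is a checksum derived from the first seven.
naive = 10 ** 8            # what eight digits naively suggest
first_half = 10 ** 4       # first four digits, confirmed on their own
second_half = 10 ** 3      # last four digits, minus the checksum digit
worst_case = first_half + second_half

print(naive, worst_case)   # 100000000 vs 11000 guesses in the worst case
```

That's roughly a nine-thousand-fold reduction, which is why PIN-guessing tools can exhaust the whole space in hours rather than years.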

Attacks like these are analogous to a burglar entering a home or a creepy person installing hidden cameras while we're out.  But there are attacks that are more similar to the doorstep con, where a con artist gains access to someone's house or personal data under false pretences.  We know about phishing scams, where we're led to believe that our bank wants us to enter our security details at a conveniently supplied link.  But we often underestimate how sophisticated these attacks can be.  This is especially true these days when vast amounts of data about us are available to anyone who wishes to buy it.  Phishing scams can be very specifically tailored toward individuals with hardly any effort at all.

We know that we shouldn't open attachments unless we're sure they're safe, but we all make false assumptions from time to time about what 'safe' means.  For example, my friends know that I have a background in security and a tinfoil-hat-level interest in privacy, so they might reasonably assume that any attachments I send them are legitimate and safe.  However, none of my friends ever check that the email is really from me before opening attachments.  Even if they did, I might be even more lazy and incompetent than they already think I am and my computer might be infected with malware that sends out plausible-looking, malware-laden emails to my contacts.
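For what it's worth, 'checking the email is really from me' proves very little anyway, because the From header is whatever the sender claims it is. A minimal sketch using Python's standard library (the addresses are made up; nothing here sends any mail):

```python
# Forging a From header takes one line: mail clients display it, but
# nothing in the message format itself verifies it.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "trusted.friend@example.com"  # anyone can claim this
msg["To"] = "victim@example.com"
msg["Subject"] = "Those photos you asked for"
msg.set_content("See attachment.")

print(msg["From"])  # prints the claimed sender, verified by nobody
```

Actually verifying a sender means checking DKIM and SPF results on the receiving side, which almost nobody does by hand.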

To put it more succinctly, there's a lot to be worried about regarding our home networks, even if we know what we're doing.

Here's a BBC article on exactly this sort of thing.  It documents some (white hat) hackers demonstrating how they could easily gain access to a network-connected camera on the journalist's network. The journalist seems fairly tech-savvy but still had numerous holes in his network.  The last line of the article is important:
Now I'm not sure if I am more secure, or just more paranoid.
And that's the take-home point.  You'll never know whether your network is secure.  You won't have the time and resources to keep it as cast-iron secure as possible and if you do devote superhuman effort to doing so, at best you won't be able to use your devices for the things you want to use them for.

That's why we need to learn how best to evaluate risks and consequences and to practise good operational security, just as we try to when protecting our physical spaces.  It isn't easy, but it's all too often neglected in articles about home network security.  We should change this.

Wednesday, 10 February 2016

Unsafer Internet Day

I'm a bit behind the times because lots of stuff is happening here.  I have several longish posts that need a final proof-check, which I'm hoping to do today.  In the meantime, I couldn't resist a quick post about this:

As part of 'Safer Internet Day', Google is offering 2GB of cloud storage to anyone who completes their security check, as reported here:

There's obviously a good side to this: people need to take security more seriously, to understand the available security options and to protect themselves as well as they can.

But 'safety' is a relative term and rather depends on what you have to gain and the price you need to pay for it.  In this case, users gain a laughably small amount of free storage plus an (arguably) more secure account and, hopefully, a lesson learned.  But the downside is that Google will index the data users put in that space and sell the metadata.

As I've written countless times, metadata is often very sensitive, often in unexpected ways.  In Google's case, the metadata is linked to activity and content in all the other Google services and is therefore particularly valuable.  Plus, of course, Google also has data from the security review which might reveal user attitudes toward security.  This is potentially very dangerous, and therefore valuable, information.  A lot more valuable to Google - and to whoever they sell it to - than 2GB of cloud space is to the average user.

For perspective, I have two USB sticks small and nice-looking enough to wear on a string around my neck at all times with a combined capacity of 128GB.  This cost in privacy is a high one for such a cheap commodity as storage.  We're very bad, as a species, at making sensible privacy bargains.

Tuesday, 2 February 2016

As we knew

The internet of things isn't getting any more secure: