Thursday, 24 December 2015

Spying on children will make them feel less disenfranchised, apparently

The BBC says:
Schools in England must set online filters and monitor pupils' internet use under plans to protect them from radicalisation, education secretary Nicky Morgan said.
Because spying on people coupled with having power over them has proved such a great way to prevent harm throughout history.  If I were a kid today, do you imagine for a second that an underpaid, under-appreciated school network admin would be able to stop me doing pretty much whatever I wanted?  I'm talking about a kid who used to spend hours a day in Currys learning to program on their demo-model BBC Micros and Spectrums.  For any teachers out there, by the way, I run a course in how kids can very probably get past your internet filters.  It's also useful for teachers who (for legitimate reasons) want to get past their school's internet filters.

Besides, what kind of 'monitoring' would be required to prevent all this radicalisation the government is so frightened of?  It's not as though kids will be visiting radicalisethefuckoutofme.com, or anything.  Clearly, there is some of this stuff going on, but it seems more likely to be happening in places where all the kids hang out.  Social media, educational sites, music sites....  Unless schools are keylogging, they aren't going to be able to stop kids being influenced by scary people on the internet.  I think that level of surveillance is too high a price to pay.
Mrs Morgan said: "As a parent, I've seen just what an important role the internet can play in children's education. But it can also bring risks, which is why we must do everything we can to help children stay safe online - at school and at home." 
The proposed measures include showing young people how to use the internet responsibly and making sure parents and teachers are able to keep youngsters safe from exploitation and radicalisation, she added.
Yep, bring that on.  Right up to the "making sure" part, anyway.  Does Mrs Morgan not understand the definition of "responsibility"?  As usual, it's the "making sure" that's the problem.  Look at what she said earlier in the article:
Mrs Morgan said some pupils had been able to access information about so-called Islamic State at school.
The horror! Kids able to find out about what's going on in the world around them!  MAKE IT STOP.

I'm neither a parent nor a teacher but it strikes me that the more information kids have about what's really going on in Syria and elsewhere, the more likely they are to make good decisions.

Filtering might be necessary.  For example, if a kid wants to access a particular site for research or just general interest, there should be an easily followed pathway that ends in a timely decision and immediate implementation.  The decision should be based on an assessment of the risk of the sites concerned and on the responsibility and maturity of the child making the request; perhaps it should also consider whether some sort of monitoring is appropriate.

But surveillance as a knee-jerk reaction is almost always a bad idea.  It's more likely to expose vulnerabilities than to protect people from them.  Not all teachers or other school employees are entirely benevolent towards children.  Suppose, for example, a child is being abused at home and wants to use the school's computers to find out what to do, connect with other victims and so on.  Are schools going to monitor that activity?  There are times when they probably should, but obviously it must be done with the utmost care.  Schools shouldn't get to charge into a difficult situation, guns blazing.  And if schools are going to monitor pupils' browsing activity, then the monitoring itself needs to be monitored; what school employee has the time or energy for that?

Surveilling kids is not the answer to radicalisation.  Actual answers will involve properly understanding the risks and weighing them against a whole bunch of other things.  It will involve understanding the lengths kids will go to to get around restrictions.  It takes only one kid at a school to be slightly smarter than the IT people, after all.  I'm pretty confident that I could visit any school in the UK and get around its filter without a great deal of effort or technical skill.  And I'm in no way suggesting that school network admins aren't good at their jobs.  It's just that it really isn't that hard to get around these things.
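To make that concrete, here's a toy sketch (in Python, with made-up domains) of the hostname matching that most filters boil down to, and two of the oldest tricks for sailing straight past it:

    from urllib.parse import urlparse

    # Hypothetical blocklist; real filters just have much longer lists.
    BLOCKLIST = {"socialmedia.example", "videos.example"}

    def is_blocked(url):
        host = urlparse(url).hostname or ""
        return host in BLOCKLIST or any(host.endswith("." + b) for b in BLOCKLIST)

    print(is_blocked("https://socialmedia.example/feed"))  # True: direct hit
    print(is_blocked("https://203.0.113.7/feed"))          # False: same server, raw IP
    print(is_blocked("https://proxy.example/?u=socialmedia.example"))  # False: via a web proxy

Anything the filter has never heard of (a fresh proxy, a VPN, a raw address) wins by default, and kids trade those like football stickers.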
Their head teacher has said there is no evidence they were radicalised at school as pupils cannot access social media on the academy's computers.
I bet I could prove that statement wrong in under five minutes, and if I can, hordes of kids certainly can too.

But the main answer is in that word "responsibility" and its dependent condition, trust.  It doesn't matter whether radicalisation or other forms of abuse take place on school computers or elsewhere.  The only way to deal with abuse - or potential abuse - of children is to treat them like humans.  Provide a safe space where they can talk about things without judgement or threat.  Our desire to protect children is not misplaced.  The way we go about it very often is.

Sunday, 6 December 2015

I've been saying for years that our poor security habits can hurt the ones we love more than they hurt us.  This is trivially true: viruses like to steal contact data; attackers are generally after data that describes our social networks.  I'm more or less alone in my conviction that privacy is a thing we should do for other people, but here's a story that vindicates that view.

Police are spying on friends and relatives of prisoners.  They're not even storing those conversations securely.  People's words are being recorded because they happen to know someone who was convicted of a crime.

Saturday, 5 December 2015

This is troubling.  A woman is rightly worried about the security of her pacemaker.  Doctors blither on about how they are totes safe, honest.  Manufacturers refuse to release source code.  It's a nightmare.

There's no good reason for manufacturers not to publish their code.  There's no commercial advantage I can see in keeping it secret, unless they, too, are concerned about security.  By which I mean they are incompetent.
While nations spend hundreds of millions defending critical infrastructure from cyber-attacks, Marie wonders if the computer inside her is secure and bug-free - she still hasn't been able to find the answer.
It's not as though she has a choice about whether to have that device in her.  It's not as though she can easily pick and choose the manufacturer of the machine that keeps her alive.  You'd think she'd have a right to inspect the hardware and software of the device she has no choice but to wear under her skin.  It's not even as though the thing is doing anything secret or obscure (I hope).

We know that open sourcing is an excellent way to find bugs and security flaws.  If I had a pacemaker and access to its source code, damn fucking right I'd inspect it in minute detail.  If the companies that make these things aren't confident enough to publish their code and wiring diagrams, we should be very frightened indeed.
When Marie first had her pacemaker fitted she downloaded the manuals. She discovered it had not one, but two wireless interfaces.
One enables doctors to adjust the pacemaker's settings via a near-field link. Another, slightly longer-range, connection lets the device share data logs via the internet.
That last sentence is... unsettling.  What networks is this damn thing connecting to?  It shares data logs with whom? What data?  Why?
Hearts are now part of the Internet of Things, she realised.
This is an important point.  It's reasonable to ask what the pacemaker manufacturers are really selling.  Or the hospitals, for that matter.  Who gets this data and what do they do with it?  Nobody seems to know.
He believes hacking is a purely theoretical risk: "The only significant effort I've seen took a team of people two days, being within 20cm of the device, and cost around $30,000."
Yeah, that's bullshit.  Want to bet that I couldn't do it with a soldering iron and a few weeks of my time?  Want to bet that almost all of that money wasn't salary for the researchers?  What the fuck is a "theoretical risk" anyway?  It's a risk or it's not.  If someone can hack a pacemaker, they will.

"The good news is that this model is no longer sold and the risks have been addressed," he told the BBC's PM programme.
Oh, that's good news, is it?  The hackable device has been replaced by ones that might also be hackable?  The fact that we don't know whether pacemakers are hackable or not is somehow good news?
In general security is better. It's not a completely solved problem but businesses have "learned quite a bit over the last seven or eight years in improving security engineering", he said.
Um.  Yeah, that's weird.  The guy is talking about security in general while discussing a product that could not possibly be more specific.  The 'fact' that businesses in general have a better handle on security these days (the scare quotes should tell you that I don't believe they have) says exactly nothing about the security of any particular device.
Marie Moe is careful not to overstate the risk of hacking - she fears programming mistakes more. 
Not long after having her pacemaker fitted, she was climbing the stairs of a London Underground station when she started to feel extremely tired. After lengthy investigations, Marie says, a problem was found with the machine used to alter the settings of her device.
I hope it wasn't Covent Garden. I once had to walk up those stairs and there are a lot of them. Marie is right.  She has no idea whether the device keeping her alive is any good, and there's not much she can do about it if it turns out it's shit.  And apparently it's not just the device itself that might have a problem, but also the other machines that talk to it.  And that's all assuming zero human error from overworked and underpaid doctors....
"It's a computer running my heart so I really have to trust this computer and it's a little bit hard for me because I don't have any way of looking into the software of this device."
Marie would like to see more third-party testing. She's a member of I Am the Cavalry, a grassroots organisation that works on cybersecurity issues affecting public safety.
Worryingly, I wasn't previously aware of this organisation.  It sounds like something I should know more about.
The challenge, according to Kevin Fu, is to find a compromise between the commercial interests of manufacturers anxious to protect their intellectual property and the needs of researchers.
But that isn't the problem at all, is it? The problem is that devices people need to keep them alive might be hackable.  There's no intellectual property here and who in all of fuck are these "researchers"?  The 'compromise' Kevin Fu suggests doesn't even involve the patient, who you'd think might have some sort of interest in the whole business.  

And it's not a challenge.  Write good code, make good hardware, publish the details, learn from your mistakes.  Cheap knockoff pacemakers are not your competition and your intellectual property is worth exactly fuck all.
Andrew Grace says the devices are "transformative"; if you need one, he and Marie agree, you shouldn't be put off by colourful cyber-assassination tales in TV dramas. But that doesn't mean security isn't important.
Unbelievable. Yeah, you shouldn't be put off installing a device that makes you not dead because of security concerns, but dismissing legitimate security concerns as fantasy is horrifying.
Andrew's colleague, cardiologist Simon Hansom, believes security has to be a priority.
I'm glad that someone vaguely involved gives at least some lip service to security but I'm unimpressed.  The BBC can do better than this and I'll be contacting the journalist, Chris Vallance, in the hope that he'll follow the story up by interviewing some people who are a little more informed.


Sunday, 25 October 2015

Stop Europe adopting terrible Net Neutrality laws

Barbara van Schewick, a law professor at Stanford Law School, writes here about the terrible net neutrality laws the EU parliament is due to vote on next Tuesday (27th October 2015).  The proposal seems likely to be adopted.  As van Schewick points out, it fails spectacularly to deliver any neutrality to the net.  Here’s the bottom line:

Unless it adopts amendments, the European Parliament’s net neutrality vote next Tuesday threatens the open Internet in Europe.

The ostensible purpose of the new law is to prevent ISPs from charging sites for faster speeds or punishing sites by slowing them down.  This is sensible and good; ISPs are selling us access to networks they do not own and they shouldn’t get to decide how we access them.  According to van Schewick, though, there are four problems that cause the proposal to fall well short of that goal:

    • Problem #1: The proposal allows ISPs to create fast lanes for companies that pay through the specialized services exception.
    • Problem #2: The proposal generally allows zero-rating and gives regulators very limited ability to police it, leaving users and companies without protection against all but the most egregious cases of favoritism.
    • Problem #3: The proposal allows class-based discrimination, i.e. ISPs can define classes and speed up or slow down traffic in those classes even if there is no congestion.
    • Problem #4: The proposal allows ISPs to prevent “impending” congestion. That makes it easier for them to slow down traffic anytime, not just during times of actual congestion.

      All is not (quite) lost, however, and we can still take action:

      Take action: Ask your representatives in the Parliament to adopt the necessary amendments. You can find all the necessary information and tools at SavetheInternet.eu.

      Spread the word: Share this post and others on Facebook, Twitter, or anywhere else. Talk with your friends, colleagues, and family and ask them to take action. If you are a blogger or journalist, write about what is going on.

      van Schewick’s post explains what amendments are needed and why they’ll work.

      If a majority of the members who vote approves this flawed compromise next Tuesday, the rules are adopted and become law. Europe will have far weaker network neutrality rules than the US, and the European Internet would become less free and less open. By contrast, if a majority of the members approves amendments, the text goes back to the Council. The Council can then accept the amendments, and they become law. If the Council rejects the amendments, a joint committee consisting of representatives of the Parliament and the Council has six weeks to come up with a compromise. Any compromise would then have to be adopted by the Parliament and the Council.

      The future of the Internet in Europe is on the line. It’s up to all of us to save it.

      Thursday, 22 October 2015

      Back-door shenanigans

Obama is not pursuing a backdoor to commercial encryption.  As we all know, it was a stupid idea in the first place.  We can’t afford to sigh in relief, though: this ain’t over.

      For one thing, the FBI is still pushing for it, as are roughly comparable agencies in other countries, such as here in the UK.  Also in the UK, David Cameron is still very much in favour of the idea; he seems to have been spurred on by Obama’s previous support but Obama’s withdrawal doesn’t seem to have cooled his ardour.  Not a surprise; he can hardly be seen to be copying the US.

      It doesn’t matter anyway.  This is going to come up again and again, probably until it’s finally implemented in the US.  I wouldn’t be surprised if the UK went ahead and did it regardless.

      This is something that needs to be addressed at a constitutional level.  It’s about our right to have secrets.

      We all want to pretend this doesn’t happen

Another reason you have something to fear even if you think you have nothing to hide.  The BBC reports on the online drug seller Pharmacy2U, which has been fined for selling customer details to marketing companies.

      Pharmacy2U had made a "serious error of judgement" in selling the data, the information commissioner said.

      The pharmacy said the sales had been a "regrettable incident", for which it apologised.

      I’m sure that will comfort their 200,000+ victims.  They sold the data quite cheaply, too: £130 per 1,000 customers.  Although they did sell it to ‘several’ companies, so there were decent sums of money involved.

      ICO deputy commissioner David Smith said it was likely some customers had suffered financially - one buyer of the data deliberately targeted elderly and vulnerable people.

      And there’s the point.  Names and addresses are one thing, but the names and addresses of people who are likely to be vulnerable are quite another.  Once that data is out there, it will be sold on to other companies.  And in the likely event that these companies have other data about the people on the list, there’s a chance they can isolate our vulnerabilities further.  So we have plenty to fear even if we really do have nothing to hide.  Which we do.

      Mr Smith said: "Patient confidentiality is drummed into pharmacists.

      "It is inconceivable that a business in this sector could believe these actions were acceptable.

      Agreed.

      We all want to pretend this doesn’t happen all the time or that it doesn’t matter or that the cat is already too far out of the bag to stuff it back in.  None of those things are true.

      Are you a virgin flesh? More documentation of online abuse

Privacy and abuse are closely related.  Abusers often seek out their targets’ private information and use it against them by doxxing, swatting or worse: it’s hardly unknown for abusers to contact their targets’ colleagues or loved ones, or to turn up at their home or place of work.  But on top of that, online abuse is itself an invasion of privacy; a violation of people’s right to be left alone.

      And it’s horrible. I’ve suffered a little online abuse from time to time but nothing compared to the kind of abuse many prominent and even ordinary women suffer on a daily basis, often for years.  Several women I know have been silenced because of this kind of abuse.  What kind of abuse?  Well here is only the latest example, from We Hunted the Mammoth.

      Mia Matsumiya, an L.A. musician, is also a human female on the internet, and in the latter capacity has been getting — and saving — creepy messages from creepy dudes for a decade, more than a thousand in total.

      Now she’s posting them on Instagram, supplemented by some of the especially creepy ones her friends have gotten as well.

      Consider yourself trigger-warned.  There are some horrible things there.

      We need to do more to address online bullying and abuse, and more to help people protect themselves.

      Thursday, 15 October 2015

They don’t like it up ’em

A tribunal has found that MPs can be spied on by GCHQ just like everyone else.

      But in a landmark decision the Investigatory Powers Tribunal said the so-called "Wilson Doctrine" was no bar to the incidental collection of data.

The Wilson Doctrine was a 1966 assertion that MPs’ calls would never be intercepted without the PM knowing.  According to the BBC, it has been reaffirmed ever since, including by Cameron.

Caroline Lucas MP is calling this “a body blow for parliamentary democracy”.  I can’t quite follow her reasoning.  It’s OK to spy on everyone except the people in charge?

      "My constituents have a right to know that their communications with me aren't subject to blanket surveillance - yet this ruling suggests that they have no such protection.

      "Parliamentarians must be a trusted source for whistleblowers and those wishing to challenge the actions of the government. That's why upcoming legislation on surveillance must include a provision to protect the communications of MPs, peers, MSPs, AMs and MEPs from extra-judicial spying.

      It’s almost as though banning strong encryption is a seriously stupid idea.  I have to say, though, that if you’re whistleblowing to an MP, you are doing it very wrong indeed.

      "The prime minister has been deliberately ambiguous on this issue - showing utter disregard for the privacy of those wanting to contact parliamentarians."

But it’s OK to disregard the privacy of those wanting to contact journalists or, for that matter, anyone else?

      Maybe now we’ll see a few MPs coming out on the side of privacy.

      Wednesday, 14 October 2015

      Parental trust

      The advice here is as terrible as the writing.  It begins by praising ‘the American youth’ for beginning to care about their online security and privacy, rather oddly linking to Wikipedia’s article on Internet Security rather than to evidence of the claim.  Anyone else already sense a bait and switch on the horizon?

      There are many ways to get protected from the fears of social media and parents can really help their children in getting towards the right track.

      They can! They should! Among the methods I advocate is teaching your kids to break security, if you can, so they can better learn how to protect themselves.  Another method is to foster an ongoing dialogue with your children to establish their needs and your boundaries; not a list of rules and punishments but an expectation of good behaviour on both sides. 

      I guess that’s what Vijay Prabhu is talking about, right?  Well, let’s see:

      mSpy is the most trusted software used by parents to track the activities of their children and protect them from all the fears of social media.

      Sigh.  The bait & switch continues:

      There are many children who really trust their parents for resolving their social media troubles and doubts. However, it really depends on the relation of parents and their children.

Again, this almost sounds like a good thing.  If children are able to trust their parents to help them with difficulties arising from social media, that’s likely a good thing.  And the suggestion that this is contingent on the relationship between parents and children is obviously correct.  When parents and children have a good relationship, perhaps those kids can trust their parents to listen to their problems non-judgementally and to help them despite any concerns. Maybe the parents can trust their children to act appropriately and to come to them when they make mistakes.  That’s what Prabhu is talking about, right?

      Seems like it:

      Parents of teenagers need to be attached to their kids and give them a space to discuss their issues with any of the parent.

      Good so far….

      Teens have been getting help from the parents and the reality is that parents need not wait for the kids to reveal things to them. They must be informed about the activities of their children and keep parental control over them.

      Oh dear.

The rest is pure shill for some spying software to install on your kids’ phones.  Installing spyware is the exact opposite of trust, as should be obvious to everyone.

I think spying on your kids is more likely to result in their adopting risky behaviour than in their being safe.  Teaching them how they can control their exposure, and that they can talk to you as a parent about adjusting your mutual expectations of each other, seems a safer choice.

      Either way, not trusting your kids is a really bad idea. TechWorm, which purports to be about security and privacy, should not publish such bullshit.

      Who trusts their government?

There have been a few stories recently about a survey of about 2,000 millennials in the US and UK, asking whether they trust their government’s data security.  About 22% said they had little or no trust.  This has been presented as somehow shocking and I’m not sure why.  I’m notoriously paranoid, but it seems foolish to assume that governments are somehow better at security than companies that do it for a living.  And we know that companies who do this for a living are being breached all the time.

      Indeed, the same survey showed that 61% had little or no trust in social media platforms and 38% said the same of retailers.

      It would be interesting to know why.  My speculation is that it’s probably complicated.  I suspect a number of elements are at play, including:

      • Some people tend to mistakenly think that companies have selfish motives while governments do not.  This is a dangerous attitude.  Governments are by their very nature self-interested and even democracy doesn’t come close to guaranteeing that those interests do not conflict with those of their citizens.
      • Some people feel that the government needs to know all about them in order to do its job of providing us with various services and accept that they’ll do as good a job as anyone else of looking after it.  Personally, I’m deeply suspicious of a model that requires our data without justifying it or allowing us to choose what can and cannot be seen or control how it is used.
      • Some people don’t want to think how bad it would be if (by which I mean when) that data is stolen, so willingly place their heads in the sand.  This is understandable; privacy issues can be overwhelming and it’s comforting to feel that someone better informed is looking after it for us.
      • Lots and lots of people feel that if you’ve nothing to hide, you’ve nothing to fear. They are wrong: everyone has something to hide because there’s always someone who can exploit the data we don’t think we need to hide.  Efforts to demonstrate this to people tend to fail (at least, when I do it).  I think those efforts tend to come across as paranoid, perhaps because of the point above.

There’s other stuff, but the list is already long enough.  All the reasons to trust one’s government seem to be bad.  I have little or no trust in my government’s data security for two broad reasons:

      1. There’s no reason to believe that their security-fu is better than anyone else’s and they are the biggest target possible, with the greatest possible gain for successful attackers.
      2. They do not always act on our behalf.  They pander to ignorance in the guise of protection.  They don’t really have a choice because their competitors are doing it too.  Data usage will creep.  Surveillance will increase whether we like it or not, whether the threats used to justify it are real or not.

      The web authentication arms race

This is really good. It’s a description of the evolution of authentication as an arms race between a developer and an attacker.  I think it does a brilliant job of explaining the issue.

An arms race is exactly the right way to think about the history of authentication.  This should be obvious, but although I’ve always recognised it as an arms race, I’ve never explicitly written down the stages.  It hadn’t even occurred to me what a great and powerful explanatory device that is. It should have.

      Defender: Users will enter a username & password, and I will give them an authentication cookie for me to trust in the future.

      Attacker: I will watch your network traffic and steal the passwords as they come down the wire.

      Defender: I will change the <form> to submit over HTTPS, so you won't see any readable passwords.

      Attacker: I will run an active MITM attack as the user loads the login page, and insert Javascript that sends the password to my server in the background.

      Defender: I will serve the login page itself over HTTPS too, so you won't be able to read or change it.

      Attacker: I will watch your network traffic and steal the resulting authentication cookies, so I can still impersonate users even without knowing the password.

      Defender: I will serve the entire site over HTTPS (and mark the cookie as Secure), so you won't be able to see any cookies.

      Attacker: I will run an active MITM attack against your entire site and serve it over HTTP, letting me see all of your traffic (including passwords and cookies) again.

      Defender: I will serve a Strict-Transport-Security header, telling the browser to always refuse to load my site over HTTP (assuming the user has already visited the site over a trusted connection to establish a trust anchor).

      Attacker: I will find or compromise a shady certificate authority and get my own certificate for your domain name, letting me run my MITM attack and still serve HTTPS.

      Defender: I will serve a Public-Key-Pins header, telling the browser to refuse to load my site with any certificate other than the one I specify.

      At this point, there is no reasonable way for the attacker to run an MITM attack without first compromising the browser.

      Do click through for the rest.  This comes pretty high on the list of things I wish I’d thought of.
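For the curious, the defender’s final position in that exchange amounts to little more than a handful of response headers.  Here’s a minimal sketch in Python’s standard library (the pin values are placeholders, not real hashes, and TLS termination is omitted to keep it short):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class SecureHeaders(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Tell browsers to refuse plain HTTP for a year, subdomains included.
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains")
            # Refuse any certificate other than the ones pinned here.
            self.send_header("Public-Key-Pins",
                             'pin-sha256="primaryKeyHashPlaceholder="; '
                             'pin-sha256="backupKeyHashPlaceholder="; max-age=5184000')
            # The auth cookie never travels over HTTP and is invisible to scripts.
            self.send_header("Set-Cookie", "session=abc123; Secure; HttpOnly")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello\n")

    if __name__ == "__main__":
        # A real deployment would sit behind TLS; plain HTTP here for brevity.
        HTTPServer(("localhost", 8080), SecureHeaders).serve_forever()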

      Former US detainees sue psychologists who designed and oversaw the CIA torture program

      Good.  Cory Doctorow writes about it here.

      James Mitchell and John Jessen were paid $85m for their services.  Nobody knows why, though, because they had no prior expertise in interrogation techniques.

      There is no evidence that their torture, which included anal rape and multiple forms of simulated execution, produced any useful intelligence.

      Lots of links in Cory’s post including this first person account by Mohamed Farag Ahmad Bashmilah, who was tortured by the CIA.  Needless to say, all the trigger warnings in the world are not enough.

      DRM in JPEGs

The group that oversees the JPEG standard (the Joint Photographic Experts Group, apparently) is considering adding DRM to the standard.  This would mean that images could force your computer to stop you from uploading them elsewhere.  Software that displays JPEGs would have to decide whether or not to do so based on whether it thinks you’re allowed.

      The EFF’s Jeremy Malcolm is on it:

      EFF attended the group's meeting in Brussels today to tell JPEG committee members why that would be a bad idea. Our presentation explains why cryptographers don't believe that DRM works, points out how DRM can infringe on the user's legal rights over a copyright work (such as fair use and quotation), and warns how it places security researchers at legal risk as well as making standardization more difficult. It doesn't even help to preserve the value of copyright works, since DRM-protected works and devices are less valued by users.

      So why are they considering it?

      Currently some social media sites, including Facebook and Twitter, automatically strip off image metadata in an attempt to preserve user privacy. However in doing so they also strip off information about authorship and licensing.

      A reasonable concern, but as Malcolm points out, a shitty solution. A better one would be for platforms to allow users to control what metadata is stripped out and what is left behind.
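That sort of user control isn’t even hard.  Here’s a rough sketch with the Pillow imaging library (the choice of tags to keep is mine, purely for illustration): preserve the authorship fields, throw away everything else, GPS position included.

    from PIL import Image  # pip install Pillow

    # EXIF tags the user chose to keep: Artist (0x013B) and Copyright (0x8298).
    KEEP = {0x013B, 0x8298}

    img = Image.open("photo.jpg")
    old_exif = img.getexif()

    new_exif = Image.Exif()
    for tag, value in old_exif.items():
        if tag in KEEP:
            new_exif[tag] = value  # authorship and licensing survive

    # Camera serial numbers, GPS coordinates and the rest are gone.
    img.save("photo_clean.jpg", exif=new_exif)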

      This doesn't mean that there is no place for cryptography in JPEG images. There are cases where it could be useful to have a system that allows the optional signing and encryption of JPEG metadata. For example, consider the use case of an image which contains personal information about the individual pictured—it might be useful to have that individual digitally sign the identifying metadata, and/or to encrypt it against access by unauthorized users. Applications could also act on this metadata, in the same way that already happens today; for example Facebook limits access to your Friends-only photos to those who you have marked as your friends

      We encourage the JPEG committee to continue work on an open standards based Public Key Infrastructure (PKI) architecture for JPEG images that could meet some of the legitimate use cases for improved privacy and security, in an open, backwards-compatible way. However, we warn against any attempt to use the file format itself to enforce the privacy or security restrictions that its metadata describes, by locking up the image or limiting the operations that can be performed on it.

      That way, madness lies.
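The optional signing the EFF describes, by the way, is thoroughly solved technology; no DRM required.  A toy sketch using the third-party cryptography package (the metadata and key handling here are hypothetical; nothing in this snippet is part of any JPEG standard):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    metadata = b'{"author": "A. Photographer", "licence": "CC BY-SA 4.0"}'

    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    signature = private_key.sign(metadata)  # would travel alongside the JPEG

    public_key.verify(signature, metadata)  # no exception: metadata is intact

    try:
        public_key.verify(signature, metadata + b" (edited)")
    except InvalidSignature:
        print("metadata was altered after signing")

Tampering is detectable, authorship is provable, and nothing whatsoever stops you looking at the picture.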

      Something else I learned from that article; apparently scanning, photocopying and image editing software is hardwired to prevent you from scanning banknotes.  Whose great idea was that?

      Tuesday, 13 October 2015

      Life will find a way

In 2007, South Warwickshire General Hospitals NHS Trust decided to let some staff share smartcards with each other to access patient records.  They did this for what sounds like a good reason: logins were taking too long (especially in A&E) and sharing smartcards meant that they could treat emergency cases more quickly.  They must have been serious; the move was in breach of the NPfIT security policy.

      I don’t know the full extent of the repercussions for privacy but not knowing which doctors accessed someone’s medical records hardly seems like it would end well. 

This is a common problem with security systems: they don’t take account of how people will develop (often quite elaborate) behaviour to get around some operational problem the security causes.  They’ll teach the behaviour to new employees and usually they won’t think to tell managers of their brilliant innovation.  I have an example:

Years ago I worked for what in those days we called an e-commerce firm.  We found that inconsistencies were finding their way into the database and showing up in the application.   Products were showing up as being in stock when they weren’t.  The database was ridiculously over-complicated and the software was worse, so it took me weeks to pore through it all, and I came up with nothing.  I was talking about my frustration over lunch with a colleague in the data entry department and she happened to mention how pleased she was with the new database update tool.

      “Er…..w-what new tool?”

It turned out that someone in data entry had complained that the tools they had were no good and that the new tools were about a year behind schedule, so they’d asked a developer to write them a quick hack to fix a particular problem.  It was a half-hour job, so the developer didn’t think to mention it to anyone.  It allowed data entry people to inject SQL into the live server, and none of them were trained in SQL. Fortunately, to my knowledge nobody ever used that e-commerce system, but it cost some weeks of my time and caused me to shout quite a lot at the developer.
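For anyone who hasn’t met SQL injection, here’s a minimal reconstruction (sqlite3 standing in for the real database; the firm’s actual tool is mercifully lost to history) of the difference between what that hack did and what it should have done:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, stock INTEGER)")
    conn.executemany("INSERT INTO products VALUES (?, ?)", [(1, 5), (2, 9)])

    def update_stock_unsafe(product_id, stock):
        # Roughly what the quick hack did: paste user input straight into SQL.
        conn.execute(f"UPDATE products SET stock = {stock} WHERE id = {product_id}")

    def update_stock_safe(product_id, stock):
        # Parameterized query: the driver treats input as data, never as SQL.
        conn.execute("UPDATE products SET stock = ? WHERE id = ?",
                     (int(stock), int(product_id)))

    # An innocent-looking 'stock' value quietly rewrites every row...
    update_stock_unsafe(1, "0 WHERE 1=1 --")
    print(conn.execute("SELECT * FROM products").fetchall())  # [(1, 0), (2, 0)]

    # ...while the safe version rejects anything that isn't a number.
    update_stock_safe(1, 7)
    print(conn.execute("SELECT * FROM products").fetchall())  # [(1, 7), (2, 0)]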

      The data-entry people weren’t at fault.  The development process wasn’t at fault.  The developer certainly was at fault but he wasn’t disciplined other than my shouting at him.  Maybe I was a bit at fault because I should have known that he wrote the tool, but it didn’t go through the CVS so I didn’t know about it.  Maybe I should have been better at training…. But that’s the point; it’s easy to apportion blame after the fact.

      In the Warwickshire hospital case, everyone was complicit up to and including the trust management.  Were they at fault?  Or were the software developers at fault?  Should they have insisted that the system be tested in a live environment?  Or that they talk to and/or observe doctors in A&E before designing the system?  Yeah, they certainly should have done that, but what if they weren’t allowed?  Doctors’ time is valuable and maybe the analysts only got to talk to managers.  Maybe the system went live without proper testing due to some deadlines that were out of the developers’ control.  Maybe goalposts kept moving and the client wouldn’t accept revised schedules….

      And that’s the point.  Systems – and especially security systems – are complicated.  They are complicated further by the environments in which they have to be built.  There’s no such thing as a security system that doesn’t have an unrealistic deadline or conflicts of interest or obstinate people or ignorant people.  Even if there were, something fundamental and probably unnoticed would have changed between agreeing the spec and delivering the solution.

      So security very often gets in the way without any obvious benefit to the core business of an organisation, the people who work there or its customers. 

      But blame is often hard to assign and lessons are difficult to learn. Developers can’t say “we won’t do that again” because they will.  They’ll have to. And clients will probably never understand the realities of software development and security because they think they don’t have to. 

Life will always find a way to flummox systems if there’s a local reason to, even if it’s a net loss to everyone.  It’s a kind of tragedy of the commons.

      Suspiciously specific

      In 2010, the UK Home Office said:

      "The Government has no plans to require owners of mobile phones to be registered with statutory authorities", says the Home Office statement closing the petition, pointing out that the consultation "rejected an option for a single database holding all communications data".

      That’s… a little too specific for my liking.  “Statutory authorities”…. “single database”…. “all communications data”……

      That leaves the door open to, oh I don’t know, making telcos administer the registration of SIMs and the collection of all communications meta-data.

      That was Gordon Brown’s Labour government and we now have a Tory one so the point is pretty much moot, but it’s worthwhile to note just how specific that statement was and the most obvious reason for that.

Can you smell burning?

There are still places in the world where you can buy and activate SIMs without having to provide any sort of ID.  The UK is one of them.  The operators don’t like it and try to incentivise registration by offering discounts.   But weirdly, in 2005 Blair’s UK government proposed ID checks for SIMs and, after consultation, decided against them.  Wow.  That doesn’t sound like the UK government at all.  Any UK government.  Ever.

      A confidential report by experts concluded that “the compulsory registration of ownership of mobile telephones would not deliver any significant new benefits to the investigatory process and would dilute the effectiveness of current self-registration schemes.”

      (http://www.gsma.com/publicpolicy/wp-content/uploads/2013/11/GSMA_White-Paper_Mandatory-Registration-of-Prepaid-SIM-Users_32pgWEBv3.pdf)

Another important point: the evidence shows that registering SIMs doesn’t seem to help law enforcement at all (something the UK government didn’t know in 2005).

      In Mexico, mandatory SIM registration was introduced in 2009 but repealed three years later after a policy assessment showed that it had not helped the prevention, investigation and prosecution of associated crimes. The reasons cited by the senate for repealing the regulation included:

      (i) Statistics showing a 40 per cent increase in the number of extortion calls recorded daily and an increase of eight per cent in the number of kidnappings between 2009 and 2010;

      (ii) The appreciation that the policy was based on the misconception that criminals would use mobile SIM cards registered in their names or in the name of their accomplices. The report suggests that registering a phone not only fails to guarantee the accuracy of the user’s details but it could also lead to falsely accusing an innocent victim of identity theft;

      (iii) The acknowledgement that mobile operators have thousands of distributors and agents that cannot always verify the accuracy of the information provided by users;

      (iv) Lack of incentives for registered users to maintain the accuracy of their records when their details change, leading to outdated records;

      (v) The likelihood that the policy incentivised criminal activity (mobile device theft, fraudulent registrations or criminals sourcing unregistered SIM cards from overseas to use in their target market); and

      (vi) The risk that registered users’ personal information might be accessed and used improperly.

      (same source as above)

I’m sure the impracticalities were the deciding factor, and it’s easy to see how gradual legislation might iron them out.  For all I know, the government was playing a long game.  But one thing’s certain: the moment the government starts making noises about registering SIMs, companies will buy up and register huge stocks of SIMs that don’t need ID, and prices will skyrocket.  Presumably organised crime will find ways to launder SIMs; it doesn’t seem as if that would be hard.  People certainly seem to get access to unregistered SIMs pretty much anywhere on the planet, even China.