Sunday, 25 October 2015

Stop Europe adopting terrible Net Neutrality laws

Barbara van Schewick, a law professor at Stanford Law School, writes here about the terrible net neutrality laws the EU Parliament is due to vote on next Tuesday (27th October 2015).  It seems likely to be adopted.  As van Schewick points out, the proposal fails spectacularly to deliver any neutrality to the net.  Here’s the bottom line:

Unless it adopts amendments, the European Parliament’s net neutrality vote next Tuesday threatens the open Internet in Europe.

The ostensible purpose of the new law is to prevent ISPs from charging sites for faster speeds or from punishing sites by slowing them down.  This is sensible and good; ISPs are selling us access to networks they do not own and they shouldn’t get to decide how we access them.  According to van Schewick, though, there are four problems that cause the proposal to fall well short of that goal:

    • Problem #1: The proposal allows ISPs to create fast lanes for companies that pay through the specialized services exception.
    • Problem #2: The proposal generally allows zero-rating and gives regulators very limited ability to police it, leaving users and companies without protection against all but the most egregious cases of favoritism.
    • Problem #3: The proposal allows class-based discrimination, i.e. ISPs can define classes and speed up or slow down traffic in those classes even if there is no congestion.
    • Problem #4: The proposal allows ISPs to prevent “impending” congestion. That makes it easier for them to slow down traffic anytime, not just during times of actual congestion.

      All is not (quite) lost, however, and we can still take action:

      Take action: Ask your representatives in the Parliament to adopt the necessary amendments. You can find all the necessary information and tools at SavetheInternet.eu.

      Spread the word: Share this post and others on Facebook, Twitter, or anywhere else. Talk with your friends, colleagues, and family and ask them to take action. If you are a blogger or journalist, write about what is going on.

      van Schewick’s post explains what amendments are needed and why they’ll work.

      If a majority of the members who vote approves this flawed compromise next Tuesday, the rules are adopted and become law. Europe will have far weaker network neutrality rules than the US, and the European Internet would become less free and less open. By contrast, if a majority of the members approves amendments, the text goes back to the Council. The Council can then accept the amendments, and they become law. If the Council rejects the amendments, a joint committee consisting of representatives of the Parliament and the Council has six weeks to come up with a compromise. Any compromise would then have to be adopted by the Parliament and the Council.

      The future of the Internet in Europe is on the line. It’s up to all of us to save it.

      Thursday, 22 October 2015

      Back-door shenanigans

Obama is not pursuing a backdoor to commercial encryption.  As we all know, it was a stupid idea in the first place.  We can’t afford to sigh in relief, though: this ain’t over.

      For one thing, the FBI is still pushing for it, as are roughly comparable agencies in other countries, such as here in the UK.  Also in the UK, David Cameron is still very much in favour of the idea; he seems to have been spurred on by Obama’s previous support but Obama’s withdrawal doesn’t seem to have cooled his ardour.  Not a surprise; he can hardly be seen to be copying the US.

      It doesn’t matter anyway.  This is going to come up again and again, probably until it’s finally implemented in the US.  I wouldn’t be surprised if the UK went ahead and did it regardless.

      This is something that needs to be addressed at a constitutional level.  It’s about our right to have secrets.

      We all want to pretend this doesn’t happen

Another reason you have something to fear even if you think you have nothing to hide.  The BBC reports on the online drug seller Pharmacy2U, which has been fined for selling customer details to marketing companies.

      Pharmacy2U had made a "serious error of judgement" in selling the data, the information commissioner said.

      The pharmacy said the sales had been a "regrettable incident", for which it apologised.

I’m sure that will comfort their 200,000+ victims.  They sold the data quite cheaply, too: £130 per 1,000 customers.  They did sell it to ‘several’ companies, though, so there were decent sums of money involved.

      ICO deputy commissioner David Smith said it was likely some customers had suffered financially - one buyer of the data deliberately targeted elderly and vulnerable people.

      And there’s the point.  Names and addresses are one thing, but the names and addresses of people who are likely to be vulnerable are quite another.  Once that data is out there, it will be sold on to other companies.  And in the likely event that these companies have other data about the people on the list, there’s a chance they can isolate our vulnerabilities further.  So we have plenty to fear even if we really do have nothing to hide.  Which we do.

      Mr Smith said: "Patient confidentiality is drummed into pharmacists.

      "It is inconceivable that a business in this sector could believe these actions were acceptable.

      Agreed.

      We all want to pretend this doesn’t happen all the time or that it doesn’t matter or that the cat is already too far out of the bag to stuff it back in.  None of those things are true.

      Are you a virgin flesh? More documentation of online abuse

Privacy and abuse are closely related.  Abusers often seek out their targets’ private information and use it against them by doxxing, SWATting or worse: it’s hardly unknown for abusers to contact their targets’ colleagues or loved ones, or to turn up at their home or place of work.  But on top of that, online abuse is itself a violation of privacy.  It’s an invasion; a violation of people’s right to be left alone.

And it’s horrible. I’ve suffered a little online abuse from time to time, but nothing compared to the kind of abuse many prominent and even ordinary women suffer on a daily basis, often for years.  Several women I know have been silenced because of this kind of abuse.  What kind of abuse?  Well, here is just the latest example, from We Hunted the Mammoth.

      Mia Matsumiya, an L.A. musician, is also a human female on the internet, and in the latter capacity has been getting — and saving — creepy messages from creepy dudes for a decade, more than a thousand in total.

      Now she’s posting them on Instagram, supplemented by some of the especially creepy ones her friends have gotten as well.

      Consider yourself trigger-warned.  There are some horrible things there.

      We need to do more to address online bullying and abuse, and more to help people protect themselves.

      Thursday, 15 October 2015

      They don’t like it up em

A tribunal has found that MPs can be spied on by GCHQ just like everyone else.

      But in a landmark decision the Investigatory Powers Tribunal said the so-called "Wilson Doctrine" was no bar to the incidental collection of data.

The Wilson Doctrine was a 1966 assertion that MPs’ calls would never be intercepted without the PM knowing.  According to the BBC, it has been reaffirmed ever since, including by Cameron.

Caroline Lucas MP is calling this “a body blow for parliamentary democracy”.  I can’t quite follow her reasoning.  It’s OK to spy on everyone except the people in charge?

      "My constituents have a right to know that their communications with me aren't subject to blanket surveillance - yet this ruling suggests that they have no such protection.

      "Parliamentarians must be a trusted source for whistleblowers and those wishing to challenge the actions of the government. That's why upcoming legislation on surveillance must include a provision to protect the communications of MPs, peers, MSPs, AMs and MEPs from extra-judicial spying.

      It’s almost as though banning strong encryption is a seriously stupid idea.  I have to say, though, that if you’re whistleblowing to an MP, you are doing it very wrong indeed.

      "The prime minister has been deliberately ambiguous on this issue - showing utter disregard for the privacy of those wanting to contact parliamentarians."

But it’s OK to disregard the privacy of those wanting to contact journalists or, for example, anyone else?

      Maybe now we’ll see a few MPs coming out on the side of privacy.

      Wednesday, 14 October 2015

      Parental trust

The advice here is as terrible as the writing.  It begins by praising ‘the American youth’ for beginning to care about their online security and privacy, oddly linking to Wikipedia’s article on Internet Security rather than to any evidence for the claim.  Anyone else already sense a bait and switch on the horizon?

      There are many ways to get protected from the fears of social media and parents can really help their children in getting towards the right track.

      They can! They should! Among the methods I advocate is teaching your kids to break security, if you can, so they can better learn how to protect themselves.  Another method is to foster an ongoing dialogue with your children to establish their needs and your boundaries; not a list of rules and punishments but an expectation of good behaviour on both sides. 

      I guess that’s what Vijay Prabhu is talking about, right?  Well, let’s see:

      mSpy is the most trusted software used by parents to track the activities of their children and protect them from all the fears of social media.

      Sigh.  The bait & switch continues:

      There are many children who really trust their parents for resolving their social media troubles and doubts. However, it really depends on the relation of parents and their children.

Again, this almost sounds like a good thing.  If children are able to trust their parents to help them with difficulties arising from social media, that’s likely a good thing.  And the suggestion that this is contingent on the relationship between parents and children is obviously correct.  When parents and children have a good relationship, perhaps those kids can trust their parents to listen non-judgementally to their problems and help them despite any concerns. Maybe the parents can trust their children to act appropriately and come to them when they make mistakes.  That’s what Prabhu is talking about, right?

      Seems like it:

      Parents of teenagers need to be attached to their kids and give them a space to discuss their issues with any of the parent.

      Good so far….

      Teens have been getting help from the parents and the reality is that parents need not wait for the kids to reveal things to them. They must be informed about the activities of their children and keep parental control over them.

      Oh dear.

The rest is a pure shill for some spying software to install on your kids’ phones.  Installing spyware is the exact opposite of trust, as should be obvious to everyone.

      I think spying on your kids is more likely to result in their adopting risky behaviour than in their being safe.  Teaching them to be aware of how they can control their exposure and that they can talk to you as a parent about adjusting your mutual expectations of each other seems a safer choice. 

      Either way, not trusting your kids is a really bad idea. TechWorm, which purports to be about security and privacy, should not publish such bullshit.

      Who trusts their government?

There have been a few stories recently about a survey of about 2,000 millennials in the US and UK, asking whether they trust their government’s data security.  About 22% said they had little or no trust.  This has been presented as somehow shocking and I’m not sure why.  I’m notoriously paranoid, but it seems foolish to assume that governments are somehow better at security than companies that do it for a living.  And we know that companies who do this for a living are being breached all the time.

      Indeed, the same survey showed that 61% had little or no trust in social media platforms and 38% said the same of retailers.

      It would be interesting to know why.  My speculation is that it’s probably complicated.  I suspect a number of elements are at play, including:

      • Some people tend to mistakenly think that companies have selfish motives while governments do not.  This is a dangerous attitude.  Governments are by their very nature self-interested and even democracy doesn’t come close to guaranteeing that those interests do not conflict with those of their citizens.
      • Some people feel that the government needs to know all about them in order to do its job of providing us with various services and accept that they’ll do as good a job as anyone else of looking after it.  Personally, I’m deeply suspicious of a model that requires our data without justifying it or allowing us to choose what can and cannot be seen or control how it is used.
      • Some people don’t want to think how bad it would be if (by which I mean when) that data is stolen, so willingly place their heads in the sand.  This is understandable; privacy issues can be overwhelming and it’s comforting to feel that someone better informed is looking after it for us.
      • Lots and lots of people feel that if you’ve nothing to hide, you’ve nothing to fear. They are wrong: everyone has something to hide because there’s always someone who can exploit the data we don’t think we need to hide.  Efforts to demonstrate this to people tend to fail (at least, when I do it).  I think those efforts tend to come across as paranoid, perhaps because of the point above.

There’s other stuff, but the list is already long enough.  All the reasons to trust one’s government seem to be bad.  I have little or no trust in my government’s data security for two broad reasons:

      1. There’s no reason to believe that their security-fu is better than anyone else’s and they are the biggest target possible, with the greatest possible gain for successful attackers.
      2. They do not always act on our behalf.  They pander to ignorance in the guise of protection.  They don’t really have a choice because their competitors are doing it too.  Data usage will creep.  Surveillance will increase whether we like it or not, whether the threats used to justify it are real or not.

      The web authentication arms race

This is really good. It’s a description of the evolution of authentication as an arms race between a developer and an attacker.  I think it does a brilliant job of explaining the issue.

An arms race is exactly the right way to think about the history of authentication.  This should be obvious, but although I’ve always recognised it as an arms race, I’ve never explicitly written down the stages.  It didn’t even occur to me what a great and powerful idea it is. It should have.

      Defender: Users will enter a username & password, and I will give them an authentication cookie for me to trust in the future.

      Attacker: I will watch your network traffic and steal the passwords as they come down the wire.

      Defender: I will change the <form> to submit over HTTPS, so you won't see any readable passwords.

      Attacker: I will run an active MITM attack as the user loads the login page, and insert Javascript that sends the password to my server in the background.

      Defender: I will serve the login page itself over HTTPS too, so you won't be able to read or change it.

      Attacker: I will watch your network traffic and steal the resulting authentication cookies, so I can still impersonate users even without knowing the password.

      Defender: I will serve the entire site over HTTPS (and mark the cookie as Secure), so you won't be able to see any cookies.

      Attacker: I will run an active MITM attack against your entire site and serve it over HTTP, letting me see all of your traffic (including passwords and cookies) again.

      Defender: I will serve a Strict-Transport-Security header, telling the browser to always refuse to load my site over HTTP (assuming the user has already visited the site over a trusted connection to establish a trust anchor).

      Attacker: I will find or compromise a shady certificate authority and get my own certificate for your domain name, letting me run my MITM attack and still serve HTTPS.

      Defender: I will serve a Public-Key-Pins header, telling the browser to refuse to load my site with any certificate other than the one I specify.

      At this point, there is no reasonable way for the attacker to run an MITM attack without first compromising the browser.

      Do click through for the rest.  This comes pretty high on the list of things I wish I’d thought of.
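The defender’s final position in that dialogue can be sketched in a few lines.  This is a minimal illustration, not a real server: the header names (Strict-Transport-Security, Public-Key-Pins, the Secure cookie flag) are the real ones from the arms race above, but the max-ages and pin values are placeholders I’ve invented.

```python
def harden_response_headers(headers):
    """Add the transport-security headers from the arms race to a response."""
    headers = dict(headers)
    # Tell the browser to refuse plain HTTP for a year, including subdomains
    # (defends against the SSL-stripping MITM step).
    headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    # Pin the site's public-key hashes (defends against a rogue CA).
    # These pin values are placeholders, not real hashes.
    headers["Public-Key-Pins"] = (
        'pin-sha256="base64-of-site-key=="; '
        'pin-sha256="base64-of-backup-key=="; max-age=5184000'
    )
    return headers

def secure_session_cookie(name, value):
    """Build a Set-Cookie value the browser will only ever send over HTTPS."""
    return f"{name}={value}; Secure; HttpOnly; SameSite=Lax"
```

Note that both helpers only matter if *every* response carries them; one unprotected page is the foothold the attacker needs.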

      Former US detainees sue psychologists who designed and oversaw the CIA torture program

      Good.  Cory Doctorow writes about it here.

      James Mitchell and John Jessen were paid $85m for their services.  Nobody knows why, though, because they had no prior expertise in interrogation techniques.

      There is no evidence that their torture, which included anal rape and multiple forms of simulated execution, produced any useful intelligence.

      Lots of links in Cory’s post including this first person account by Mohamed Farag Ahmad Bashmilah, who was tortured by the CIA.  Needless to say, all the trigger warnings in the world are not enough.

      DRM in JPEGs

The group that oversees the JPEG standard (the Joint Photographic Experts Group, apparently) is considering adding DRM to the standard.  This would mean that images could force your computer to stop you from uploading the pictures elsewhere.  Software that displays JPEGs would have to make decisions about whether or not to do so based on whether it thinks you’re allowed.

      The EFF’s Jeremy Malcolm is on it:

      EFF attended the group's meeting in Brussels today to tell JPEG committee members why that would be a bad idea. Our presentation explains why cryptographers don't believe that DRM works, points out how DRM can infringe on the user's legal rights over a copyright work (such as fair use and quotation), and warns how it places security researchers at legal risk as well as making standardization more difficult. It doesn't even help to preserve the value of copyright works, since DRM-protected works and devices are less valued by users.

      So why are they considering it?

      Currently some social media sites, including Facebook and Twitter, automatically strip off image metadata in an attempt to preserve user privacy. However in doing so they also strip off information about authorship and licensing.

      A reasonable concern, but as Malcolm points out, a shitty solution. A better one would be for platforms to allow users to control what metadata is stripped out and what is left behind.

      This doesn't mean that there is no place for cryptography in JPEG images. There are cases where it could be useful to have a system that allows the optional signing and encryption of JPEG metadata. For example, consider the use case of an image which contains personal information about the individual pictured—it might be useful to have that individual digitally sign the identifying metadata, and/or to encrypt it against access by unauthorized users. Applications could also act on this metadata, in the same way that already happens today; for example Facebook limits access to your Friends-only photos to those who you have marked as your friends

      We encourage the JPEG committee to continue work on an open standards based Public Key Infrastructure (PKI) architecture for JPEG images that could meet some of the legitimate use cases for improved privacy and security, in an open, backwards-compatible way. However, we warn against any attempt to use the file format itself to enforce the privacy or security restrictions that its metadata describes, by locking up the image or limiting the operations that can be performed on it.

      That way, madness lies.
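The signed-metadata idea the EFF suggests is easy to sketch.  A real scheme would use public-key signatures (so anyone can verify without holding a secret); the HMAC below stands in only because it’s in the standard library, and the field names are my own invention for illustration.

```python
import hashlib
import hmac
import json

def sign_metadata(metadata, key):
    """Attach a keyed signature to a dict of image metadata."""
    # Canonicalise the metadata so signing and verifying see identical bytes.
    payload = json.dumps(metadata, sort_keys=True).encode()
    sig = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": sig}

def verify_metadata(signed, key):
    """True if the metadata has not been tampered with since signing."""
    payload = json.dumps(signed["metadata"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])
```

The crucial point is where this runs: it protects the *metadata* (authorship, licensing) and leaves the image itself fully usable, which is exactly the open, backwards-compatible approach the EFF is arguing for, as opposed to locking up the image.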

Something else I learned from that article: apparently scanning, photocopying and image editing software is hardwired to prevent you from scanning banknotes.  Whose great idea was that?

      Tuesday, 13 October 2015

      Life will find a way

      In 2007, South Warwickshire General Hospitals NHS Trust decided to let some staff share smartcards with each other to access patient records.  They did this for what sounds like a good reason; logins were taking too long (especially in A&E) and sharing smartcards meant that they could treat emergency cases more quickly.  They must have been serious; the move was in breach of the NPfIT security policy.

      I don’t know the full extent of the repercussions for privacy but not knowing which doctors accessed someone’s medical records hardly seems like it would end well. 

This is a common problem with security systems; they don’t take account of how people will develop (often quite elaborate) behaviour to work around some operational problem caused by security.  They’ll teach the behaviour to new employees and usually they won’t think to tell managers of their brilliant innovation.  I have an example:

Years ago I worked for what in those days we called an e-commerce firm.  We found that inconsistencies were finding their way into the database and showing up in the application.  Products were showing up as being in stock when they weren’t.  The database was ridiculously over-complicated and the software was worse, so it took me weeks to pore through it all and I came up with nothing.  I was talking about my frustration over lunch with a colleague in the data entry department and she happened to mention how pleased she was with the new database update tool.

      “Er…..w-what new tool?”

It turned out that someone in data entry had complained that the tools they had were no good, and since the new tools were about a year behind schedule they’d asked a developer to write them a quick hack to fix a particular problem.  It was a half-hour job, so the developer didn’t think to mention it to anyone.  It allowed data entry people to inject SQL into the live server, and none of them were trained in SQL. Fortunately, to my knowledge nobody ever used that e-commerce system, but it cost some weeks of my time and caused me to shout quite a lot at the developer.
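The galling thing is that a tool like that is barely harder to write safely.  A hedged sketch of what the half-hour hack could have looked like, using parameterised queries so data-entry staff supply only values, never SQL (the table and column names here are invented; I have no idea what that schema actually looked like):

```python
import sqlite3

def set_stock_level(conn, product_id, in_stock):
    """Update one product's stock flag without accepting any raw SQL."""
    # The ? placeholders mean user input can only ever be data, not SQL,
    # which is the whole difference between this and the hack in the story.
    conn.execute(
        "UPDATE products SET in_stock = ? WHERE id = ?",
        (1 if in_stock else 0, product_id),
    )
    conn.commit()
```

The point isn’t the three lines of code; it’s that the safe version still needed to go through review and version control so somebody other than one developer knew it existed.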

      The data-entry people weren’t at fault.  The development process wasn’t at fault.  The developer certainly was at fault but he wasn’t disciplined other than my shouting at him.  Maybe I was a bit at fault because I should have known that he wrote the tool, but it didn’t go through the CVS so I didn’t know about it.  Maybe I should have been better at training…. But that’s the point; it’s easy to apportion blame after the fact.

      In the Warwickshire hospital case, everyone was complicit up to and including the trust management.  Were they at fault?  Or were the software developers at fault?  Should they have insisted that the system be tested in a live environment?  Or that they talk to and/or observe doctors in A&E before designing the system?  Yeah, they certainly should have done that, but what if they weren’t allowed?  Doctors’ time is valuable and maybe the analysts only got to talk to managers.  Maybe the system went live without proper testing due to some deadlines that were out of the developers’ control.  Maybe goalposts kept moving and the client wouldn’t accept revised schedules….

      And that’s the point.  Systems – and especially security systems – are complicated.  They are complicated further by the environments in which they have to be built.  There’s no such thing as a security system that doesn’t have an unrealistic deadline or conflicts of interest or obstinate people or ignorant people.  Even if there were, something fundamental and probably unnoticed would have changed between agreeing the spec and delivering the solution.

      So security very often gets in the way without any obvious benefit to the core business of an organisation, the people who work there or its customers. 

      But blame is often hard to assign and lessons are difficult to learn. Developers can’t say “we won’t do that again” because they will.  They’ll have to. And clients will probably never understand the realities of software development and security because they think they don’t have to. 

Life will always find a way to flummox systems if there’s a local reason, even if it’s a net loss to everyone.  It’s a kind of tragedy of the commons.

      Suspiciously specific

      In 2010, the UK Home Office said:

      "The Government has no plans to require owners of mobile phones to be registered with statutory authorities", says the Home Office statement closing the petition, pointing out that the consultation "rejected an option for a single database holding all communications data".

      That’s… a little too specific for my liking.  “Statutory authorities”…. “single database”…. “all communications data”……

      That leaves the door open to, oh I don’t know, making telcos administer the registration of SIMs and the collection of all communications meta-data.

      That was Gordon Brown’s Labour government and we now have a Tory one so the point is pretty much moot, but it’s worthwhile to note just how specific that statement was and the most obvious reason for that.

      Can you smell burning

There are still places in the world where you can buy and activate SIMs without having to provide any sort of ID.  The UK is one of them.  The operators don’t like it and try to incentivise registration by offering discounts.  But weirdly, in 2005 Blair’s UK government proposed ID checks for SIMs and, after consultation, decided not to.  Wow.  That doesn’t sound like the UK government at all.  Any UK government.  Ever.

      A confidential report by experts concluded that “the compulsory registration of ownership of mobile telephones would not deliver any significant new benefits to the investigatory process and would dilute the effectiveness of current self-registration schemes.”

      (http://www.gsma.com/publicpolicy/wp-content/uploads/2013/11/GSMA_White-Paper_Mandatory-Registration-of-Prepaid-SIM-Users_32pgWEBv3.pdf)

Also important: the evidence shows that registering SIMs doesn’t seem to help with law enforcement at all (the UK government didn’t know this in 2005).

      In Mexico, mandatory SIM registration was introduced in 2009 but repealed three years later after a policy assessment showed that it had not helped the prevention, investigation and prosecution of associated crimes. The reasons cited by the senate for repealing the regulation included:

      (i) Statistics showing a 40 per cent increase in the number of extortion calls recorded daily and an increase of eight per cent in the number of kidnappings between 2009 and 2010;

      (ii) The appreciation that the policy was based on the misconception that criminals would use mobile SIM cards registered in their names or in the name of their accomplices. The report suggests that registering a phone not only fails to guarantee the accuracy of the user’s details but it could also lead to falsely accusing an innocent victim of identity theft;

      (iii) The acknowledgement that mobile operators have thousands of distributors and agents that cannot always verify the accuracy of the information provided by users;

      (iv) Lack of incentives for registered users to maintain the accuracy of their records when their details change, leading to outdated records;

      (v) The likelihood that the policy incentivised criminal activity (mobile device theft, fraudulent registrations or criminals sourcing unregistered SIM cards from overseas to use in their target market); and

      (vi) The risk that registered users’ personal information might be accessed and used improperly.

      (same source as above)

I’m sure the impracticalities were the deciding factor, and it’s easy to see how gradual legislation might iron these out.  For all I know, the government was playing a long game.  But one thing’s certain: the moment the government starts to make noises about registering SIMs, companies will be buying up and registering huge stocks of SIMs that don’t need ID, and the prices will skyrocket.  Presumably organised crime will find ways to launder SIMs; it doesn’t seem like it would be hard.  People certainly seem to get access to non-registered SIMs pretty much anywhere on the planet, even China.