Wednesday, 26 November 2014

The tension between national security and cyber security

Ron Deibert writes:

Buried in a recent Edward Snowden disclosure is a passing remark from a briefing sheet on a program called “Sentry Eagle.”   According to the briefing sheet, “unauthorized disclosure” of its contents would negatively impact the United States’ “ability to exploit foreign adversary cyberspace while protecting U.S. cyberspace.”

For many, such a remark might pass barely noticed, obscured beneath the more salacious operational details in the top secret slides. It definitely should not. It represents a deeply entrenched worldview at the heart of cyber security problems today.

A lot of spying depends on a nation’s intelligence services being able to exploit weaknesses in other nations’ cyber infrastructure.  National security depends on maintaining – or in some cases actively sabotaging – the global infrastructure.

Agencies like the NSA are tasked with defending critical infrastructure on the one hand, while fueling a multi-million-dollar industry of products and services to exploit it on the other. Protecting the integrity of communications systems is a mission imperative, but so is building “back doors” — a kind of insecurity-by-design. Programs designed to proactively weaken information security are justified on the basis of strengthening national security.

Agencies like this, which are obsessed with installing back doors to weaken security, are also the very ones trusted to protect our cyber security. This is a major conflict of interest.  What’s encouraging is that companies are fighting back. Companies like Google and Apple (and most recently, Whatsapp) are implementing end-to-end encryption, much to the annoyance of the security agencies.

Historians like to remind us that intelligence is “the second-oldest profession.”  But in the past decade, we have accorded extraordinary powers and capabilities over society to mammoth military-intelligence agencies that are unprecedented in human history. Their overarching prominence and power have begun to undermine core values upon which our societies rest while exposing us and our communications to widening risks.  It is time we address squarely this syndrome for what it is: the most important threat to cyber security today.

Terms of Service

A very good comic about privacy, depicting a lot of things I’ve been saying for some time.

I especially like the part about controlling the narrative used to explain the data you generate.  We’re used to the idea that what we say we are is what the world sees, but it is becoming ever easier for other people (and companies and governments) to mine data about us and infer from it a different narrative to the one we wish to present. It could be an incorrect narrative and yet affect us adversely.

The comic uses Foursquare as an example.  A man likes to check in at unusual locations to increase the chance of him becoming the mayor of a place.  However, this leads to his profile showing that most of the places he checks in at are restaurants, delis and doctors’ surgeries.  Someone – such as an insurance company – analysing this data might conclude that there could be a link between his eating habits and his medical visits. The comic makes the additional point that he’s checked in at the doctor’s.  If someone were to look at that doctor’s website and discover that she’s a paediatrician, they’ll know not only that he has a child, but who and where the child’s doctor is.  That could be dangerous.

The problem gets worse as we generate more pools of data with more services.  With the Foursquare example, we’re at least in control of the data we generate, even though it might be used in ways we don’t expect.  But we’re generating data all over the place and this can be aggregated in unexpected ways with possibly detrimental effects.  It’s almost impossible to predict how isolated data pools might be combined and what that might reveal about us. 
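The kind of cross-pool inference described above is easy to sketch. Here is a toy Python example with entirely invented data: neither pool says anything sensitive on its own, but joining them does.

```python
# Toy illustration (invented data): two individually innocuous data pools,
# joined to infer something sensitive that neither reveals on its own.

# Pool 1: a location service's check-ins (user -> places visited)
checkins = {
    "alice": ["Joe's Deli", "Park Lane Clinic", "Corner Cafe"],
    "bob": ["Corner Cafe", "City Gym"],
}

# Pool 2: a public business directory (place -> what it is)
directory = {
    "Joe's Deli": "restaurant",
    "Park Lane Clinic": "paediatric clinic",
    "Corner Cafe": "cafe",
    "City Gym": "gym",
}

def infer(user):
    """Cross-reference the two pools and flag a sensitive inference."""
    categories = [directory.get(place, "unknown") for place in checkins[user]]
    if "paediatric clinic" in categories:
        # Neither pool states this; it falls out of the join.
        return f"{user} probably has a child"
    return f"nothing sensitive inferred about {user}"

print(infer("alice"))
print(infer("bob"))
```

The point is not the three lines of logic; it is that neither the check-in service nor the public directory needed your consent for the combination.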

The problem isn’t just that a profiler might get the wrong idea about us.  They might get the right idea about something we wish to protect.  Privacy is the selective revelation of information about ourselves and we just lost the ability to control it.

Monday, 24 November 2014

Bullying is a privacy issue

By definition, bullying is about magnifying or making up something about a person and treating that person as though they were that (magnified or made up) thing and nothing else. It’s about stripping people of dignity by treating them as things and by making them think of themselves as things.

Privacy is (partly) the desire or right to be in selective control of the things one reveals about oneself.  Bullying is a privacy issue.

Here’s an example of bullying (TRIGGER WARNING). It’s not nearly the worst public example I could cite in recent years. I used it because there were actual prison sentences for some of the people involved, so I could point out a couple of things:

  • Convictions for bullying are extremely rare.
  • The damage to privacy has already been done, even more so if legal action is pursued. The victim doesn’t win even in the unlikely event that their bullies are punished.

Bullying is a privacy issue because it takes away people’s freedom to control what’s revealed about them.  Most often, bullying is about the revelation that someone is vulnerable rather than about an actual specific secret. The bullying by proponents of #gamergate and by people who dislike women who speak and by people who find LGBTQ people contemptible or hilarious is about exploiting vulnerability. That’s not to say that the victims of bullying aren’t strong and it’s not to say that the bullies aren’t also vulnerable. It’s about this: bullies are by definition people who exploit other people’s vulnerabilities. Not-bullies are people who don’t do that.  Not-bullies are very often victims of bullying. Work it out.

Respect people’s privacy. Don’t be a bully. 

UK Government wants to force ISPs to keep, reveal IP allocations

The UK Home Secretary, Theresa May, thinks the government needs more powers to tackle terrorism and child sexual exploitation. With a tagline like that, you already know what’s coming.  In this case, they want ISPs to store – and presumably reveal when asked – which IP addresses are assigned to which devices.

This article garbles the technical details but presumably May wants the government to be able to associate IPs with MAC addresses on command. That means that a device’s activity can be traced both within an ISP’s scope and across ISPs. This is problematic.

Devices can be used by more than one person, so a device’s activity doesn’t necessarily identify a person. So if the government can identify a device engaged in occasionally dodgy (by whatever standards it uses) activity, it’s going to need more information – information about users of that device – in order to act. How are they going to get that information? I can think of several ways, each more opaque and subject to abuse than the last.  This sounds like a medium-term strategy to move as quickly as possible toward associating every piece of internet traffic with a specific person, doesn’t it?
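What’s being proposed amounts to a lookup table. Here’s a hypothetical sketch (invented addresses and times, nothing to do with any real ISP’s systems) of the record-keeping that would answer the government’s question:

```python
from datetime import datetime

# Hypothetical sketch of the proposed record-keeping: an ISP logs which
# device (by MAC address) held which IP address, and when, so that an IP
# seen in traffic can later be tied back to a device.
lease_log = [
    # (ip, mac, lease_start, lease_end)
    ("203.0.113.7", "aa:bb:cc:dd:ee:01",
     datetime(2014, 11, 24, 9, 0), datetime(2014, 11, 24, 13, 0)),
    ("203.0.113.7", "aa:bb:cc:dd:ee:02",
     datetime(2014, 11, 24, 13, 0), datetime(2014, 11, 24, 18, 0)),
]

def device_for(ip, when):
    """Which device held this IP at this moment?"""
    for logged_ip, mac, start, end in lease_log:
        if logged_ip == ip and start <= when < end:
            return mac
    return None  # no record: the trail goes cold

print(device_for("203.0.113.7", datetime(2014, 11, 24, 10, 30)))
# Note: this identifies a *device*, not a person -- the gap described above.
```

Even in this toy form the limitation is obvious: the answer is a MAC address, and getting from there to a person requires exactly the extra information discussed above.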

It’s a pity I’m afflicted by the occasional moral. I could earn a fortune telling Theresa May how to make staggeringly bad ideas sound attractive. Although presumably someone else already has that gig.

Here are the people Theresa May wants to identify:

  • Organised criminals
  • Cyber-bullies and hackers
  • Terror suspects and child sex offenders communicating over the internet
  • Vulnerable people such as children using social media to discuss taking their own life

Even recognising the hyper-obviously problematic grouping of these already dubious categories (terror suspects and actual child sex offenders, really?) it’s clear that one of these things is not like the others. What business does the government have identifying vulnerable people without their consent? What do they plan to do with that information? It’s terrifying.

    The BBC celebrates the couple who helped us undervalue our privacy

    The BBC seems to treat this couple as heroes for helping Tesco to spy on its customers and – I argue – to undervalue their privacy.  The couple laid the foundations for the introduction of the Tesco Clubcard.  Storecards are a terrible privacy bargain.  Customer data is worth a lot more to stores than they pay their customers for it.  Of course, our credit card companies are selling data about our buying habits anyway, but the tradeoff there is about convenience and safety. Mileage will vary, but I personally consider that a reasonable tradeoff for many transactions. Besides, storecard schemes collect data about us regardless of what means of payment we choose.  Presumably that’s exactly the sort of data credit card companies want: what sort of stuff people buy on their cards, use cash for etc.  The price we pay is huge volumes of targeted marketing and great big databases chock full of information about things we value, which are bound to be compromised some day, if they haven’t been already.  How would we know?  In return, we get fractions of a penny for every pound we spend and feel like we’re getting something for free.

    It seems like a terrible bargain to me; others value different things. But almost nobody – including me – knows for sure what the price really is. We don’t know how our data is being used or shared and we can’t trace individual pieces of spam back to source. We don’t know what information the storecard’s partners have about us, how or if it’s anonymised or even who they are.

    Companies like this are actively trying to deceive us into giving them highly personal information about ourselves and to actively confuse us about the bargains we’re making. These people aren’t heroes; they’re opportunists of the sleaziest kind.

    Friday, 21 November 2014

    Free CA

    Bruce Schneier reports on a Very Good Thing.  It’s a free CA, a joint project involving the EFF, Mozilla, Cisco, Akamai and the University of Michigan.

    I think it’s bloody brilliant news. The service’s name says it all: Let’s Encrypt. Yes, let’s.

    The challenge is server certificates. The anchor for any TLS-protected communication is a public-key certificate which demonstrates that the server you’re actually talking to is the server you intended to talk to. For many server operators, getting even a basic server certificate is just too much of a hassle. The application process can be confusing. It usually costs money. It’s tricky to install correctly. It’s a pain to update.

    Let’s Encrypt wants to change that.
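For the curious, here is a toy Python sketch of just one step of what a server certificate buys you: checking that the name in the certificate matches the server you meant to reach. Real validation does far more (signature chains, expiry, revocation), and real code should let a TLS library do all of this; this is only to show the idea.

```python
# Toy sketch of the hostname-matching step of certificate validation.
# Real TLS validation also checks signatures, chains, expiry and revocation;
# this only shows why the certificate must name the server you intended.

def hostname_matches(cert_names, hostname):
    """Check a hostname against the names a certificate claims,
    allowing a single-level '*.example.com'-style wildcard."""
    for name in cert_names:
        if name == hostname:
            return True
        if name.startswith("*."):
            # '*.example.com' matches 'www.example.com'
            # but not 'example.com' and not 'a.b.example.com'
            head, sep, rest = hostname.partition(".")
            if sep and head and rest == name[2:]:
                return True
    return False

print(hostname_matches(["*.example.com"], "www.example.com"))  # True
print(hostname_matches(["*.example.com"], "evil.com"))         # False
```

Getting even this one step subtly wrong is a classic source of TLS bugs, which is part of why making certificates easy to obtain and install correctly matters.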

    EFF write about it here.

    Thursday, 20 November 2014

    Detekt craziness

    Detekt tells me it doesn’t support the version of Windows on the VM I ran it in, which is as up to date as can be.

    Update: I ran the compatibility wizard and got Detekt working. It didn't find any government spying malware.  I'm almost disappointed.

    Enfield council issues takedown notice to site that publishes non-existent information

    Enfield council wants to close a bunch of libraries, but it doesn’t know which ones yet. It said:

    “No decisions have been made yet on the type of library or the location of libraries. The final decision on the library service, location and different types of libraries will be made in February or March next year following the conclusion of this consultation.”

    But that hasn’t prevented it from issuing a cease and desist against a site for posting information about the plans, which for the most part seems to be available on the council’s own website anyway.

    Anti government spying software

    Amnesty International has released software that tells you when governments are spying on you.

    Most anti-malware software doesn’t notice some of the software governments use to spy on their (and other countries’) citizens.  Apparently, such spying software leaves some tell-tale signs. I’d love to know what those are. Needless to say, I’m currently speculating wildly.

    It's easier to name the countries that are not using these spying tools than those that are.

    There’s some skepticism – not entirely surprisingly – from someone who advises the government about security:

    Prof Alan Woodward from the University of Surrey, who advises governments on security issues, wondered how easy it would be for Amnesty and its partners to maintain Detekt.

    "It's not really their core business," he said. "Are they going to keep updating the software because the spyware variants change daily?"

    I think the professor is being disingenuous.  If there’s one thing we know about the security and privacy communities, it’s that they will flock to help maintain stuff like this.  He further pooh-poohs:

    He also questioned how useful it would be against regimes that used specially written software rather than commercial versions that were well known and documented.

    What? You mean it can’t magically predict new attacks? This guy isn’t on the level. Anti-malware software is always going to be largely reactive.  That doesn’t mean we don’t use anti-virus software.

    Government spying software has a rather different threat profile to most snooping software. It’s trying to achieve different things for a different reason, probably with a different urgency.

    Get it here.  Read a FAQ here.  Don’t not use it.

    Social networks, scariness of, part 1 of several million

    An executive at Uber suggested that the company doxx journalists who write bad reviews about it. The company has access to data about where people are travelling to and from, and if they’re coming or going somewhere they oughtn’t, release of that information could be very damaging.  This got me thinking about how much power social networks have to silence their critics.  I don’t know whether that’s something that’s likely to happen, but if it did, the fallout could be devastating.  The muck they could rake up could be highly personal and they’d know exactly who to spill the beans to.

    Wednesday, 19 November 2014

    Government believes saying a thing makes it true, surprising nobody

    “The UK's major internet service providers (ISPs) are to introduce new measures to tackle online extremism, Downing Street has said.”

    The ISPs seem bemused because they didn’t agree to any such thing.

    Campaigners called for transparency over what would be blocked.

    Did we? I’m pretty sure we campaigned for there to be no filtering at all and no government interference with ISPs but since this is obviously going to happen I’d certainly prefer transparency, accountability and judicial oversight.  Since the government apparently hasn’t even told ISPs what they’ve supposedly already agreed to, this seems a forlorn hope.

    Prime Minister David Cameron said technology companies had a "social responsibility" to deal with jihadists.

    They have a social responsibility to resist governments telling them what people can and cannot see and do. Government agendas should not influence people’s access to information. We have laws for that sort of thing. Laws that are independent of any particular government. For the most part. In principle. Probably.

    In a briefing note, No 10 said the ISPs had subsequently committed to filtering out extremist and terrorist material, and hosting a button that members of the public could use to report content.

    I’ve no idea what that means. Every time I try to think about it, I picture the CEO of some ISP hitting a big red button on her desk causing lots of alarms to ring and everyone to run around in a blind panic but no terror attacks actually being averted.


    It would work in a similar fashion to the reporting button that allows the public to flag instances of child sexual exploitation on the internet.

    But that reporting button appears to belong to the police, not to the hundreds of ISPs in the UK. That’s because child abuse is a matter for the authorities, as is grooming and violence of other kinds. Why would anyone report stuff like this to their ISP? Who would even think of it? And if they did, it wouldn’t be very safe. I use Twitter to complain about idiots and talk about my cat. I wouldn’t use it to blow whistles. ISPs have no procedures to protect people reporting nasty practices and nor should they. It isn’t their job. And how would you complain if you thought your ISP was complicit? It’s the wrong solution in the wrong place and everyone knows it.

    I don’t even know what threats the government is trying to address and neither do you. Neither does the government.  That might explain why the countermeasures are so blithering, ineffective even in principle and under nobody’s oversight.

    Unsurprisingly, the ORG talks sense:

    We need the government to be clear about what sites they are blocking, why they are blocking them and whether there will be redress for site owners who believe that their website has been blocked incorrectly.

    Given the low uptake of filters, it is difficult to see how effective the government's approach will be when it comes to preventing young people from seeing material they have deemed inappropriate.

    Anyone with an interest in extremist views can surely find ways of circumventing child-friendly filters.

    Well quite. Governments shouldn’t get to weasel out of their responsibilities. ISPs aren’t like gas companies. Gas companies are responsible for people not being blown up unless they deliberately vent a load of gas into their house and strike a match. Actually, I’m not sure where I’m going with this analogy because it would involve gas companies deciding what people are allowed to cook or how they should heat their home. Actually, maybe it’s a decent analogy after all: if my gas company decided I was using too much gas to heat my house I’d probably light all the hobs on my oven to generate some extra heat. I’d probably do it just to piss them off.

    To help deal with the problem, the Met Police set up a dedicated Counter Terrorism Internet Referral Unit (CTIRU), tasked with trying to remove terrorism-related material.

    I have no problem with this in principle. It sounds like the sort of thing the police (not ISPs) ought to be doing.

    Since its inception in 2010, CTIRU has removed more than 55,000 pieces of online content, including 34,000 pieces in the past year.

    Kind of worried about the practice, though.


    Tuesday, 18 November 2014

    Phish tales

    I like stories about phishing scams. I’m not sure why; I suppose I like to hear about scamps being inventive.

    There’s nothing new here, but it’s interesting nonetheless. The guy being phished acted on a feeling that something was wrong and took pains to investigate.  We can all learn from that example.  I’ve found myself – in hectic and distracted moments – nearly falling for phone- and email-based social engineering attacks. My bank telling me my card had been used abroad (I happened to be abroad at the time and the phone scammer adapted to this news by asking me to confirm details of the transaction. A very nice try). Someone claiming to be from HR in a university I had just started working for asking me to confirm details (they called every number in the department. The people who hadn’t just started working there mostly assumed it was a wrong number). Someone asking me to write a reference for a friend (I might have fallen for that one, but I’d already thought it up as a possible attack. It’s kind of a hobby, I’m afraid.)

    We all need to develop that feeling that something’s wrong. There’s no reason to expect that the person on the phone is who they say they are, no matter what they seem to know about us. Cold reading is a skill that isn’t even slightly difficult to develop and I’m under no illusion that I couldn’t be fooled by a moderately talented cold reader.  And I’m constantly on the lookout for that kind of thing.

    Five senses my arse. We routinely and constantly sense when something ain’t right.

    Facebook building 'workplace network'

    I can’t see any good coming of this.


    Irish language website exposes users’ data

    In February, Northern Ireland launched a site to encourage people to learn Irish, which seems like a good goal.  Unfortunately, registered users’ names, addresses, email addresses and other personal information (I don’t know what yet) were available via the site’s search function.

    The government has apologised and shut down the site while it gets fixed. I expect it’s the usual story: when the idea first came about, it didn’t sound like there ought to be any privacy threats so nobody was hired to think about privacy. But sites are not ideas.  Still, you’d think someone would have noticed at some point during development. Who on Earth did they hire to build this thing?

    Whatsapp does end-to-end encryption

    It says it’s the “largest deployment of end-to-end encryption ever”. I’ve no idea if that’s true, which is slightly worrying; it seems the sort of thing I ought to know. There are 500 million downloads of Whatsapp in the Play Store, though, so it seems about right.  Anyway, it’s working on Android for messages that aren’t group, photo or video messages, with other platforms coming soon.  Forward secrecy too. Good news.
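The core idea behind forward secrecy can be sketched in a few lines: each session uses fresh, throwaway Diffie-Hellman keys, so a long-term key stolen later can’t unlock old traffic. The toy below uses insecure, tiny parameters and is nothing like Whatsapp’s actual protocol; it only shows why both sides end up with the same session key without ever sending it.

```python
import hashlib
import secrets

# Toy sketch of forward secrecy via ephemeral Diffie-Hellman. These
# parameters are far too small for real use, and real protocols use
# large groups or elliptic curves plus key ratcheting on top.

P = 2**127 - 1   # a Mersenne prime; toy-sized, NOT secure
G = 5

def ephemeral_keypair():
    """Generate a fresh keypair, discarded after the session."""
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)
    return private, public

# Each side generates a fresh keypair per session...
alice_priv, alice_pub = ephemeral_keypair()
bob_priv, bob_pub = ephemeral_keypair()

# ...they exchange only the public halves, and each derives the same key:
# (G^a)^b == (G^b)^a  (mod P)
alice_key = hashlib.sha256(str(pow(bob_pub, alice_priv, P)).encode()).hexdigest()
bob_key = hashlib.sha256(str(pow(alice_pub, bob_priv, P)).encode()).hexdigest()

assert alice_key == bob_key
print("shared session key:", alice_key[:16], "...")
```

Because the private halves are generated per session and then thrown away, there is nothing for an eavesdropper to seize later that would decrypt recorded traffic.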

    Whatsapp’s rollout of strong encryption to hundreds of millions of users may be an unpopular move among governments around the world, whose surveillance it could make far more difficult. Whatsapp’s user base is highly international, with large populations of users in Europe and India. But Whatsapp founder Jan Koum has been vocal about his opposition to cooperating with government snooping. “I grew up in a society where everything you did was eavesdropped on, recorded, snitched on,”

    Here in the UK we’ve sidled into such a world.  Maybe we can sidle back out of it, maybe we can’t.

    Friday, 7 November 2014


    Hopefully the last thing I’ll have to say to Michael Nugent.

    I’d like to say that I understand Michael Nugent’s claims that he’s been misrepresented. But I don’t. He’s been represented.

    I don’t know why you need to keep posting your CV. We get it, Michael, you’ve done all kinds of good. Nobody ever said otherwise. But we still get to criticise you if we want. We want. We want because you are failing to take a stand on horrible behaviour. Bewilderingly, you insist on claiming that our criticisms are about you, your past and your achievements rather than about your blind – and repeatedly pointed out to you – ignorance. You deliberately and repeatedly fail to see that we don’t need or want heroes; that we admire the good things people do and deplore the bad.

    That seems to me the essence of what it means to be an atheist. Christopher Hitchens was admirable in many ways and I mourn the fact that he is dead. But let’s be clear, he was a dick about some things. Richard Dawkins was responsible for my becoming a scientist. I devoured The Selfish Gene and The Extended Phenotype. The enthusiasm with which Richard communicates science is infectious. He’s always been one of the biggest influences of my life and probably always will be. I’ve met him. He’s utterly charming. But he’s clueless about several important things.

    Michael, this is how we do hero worship, if we’re smart: we celebrate the good and deplore the bad. Personally, I celebrate the things Darwin was wrong about. They seem stupid in hindsight, but they were honest and fairly – at the time – reasonable attempts to solve a problem his theory predicted. That is hugely impressive, more than I’ll ever do. There should be a movie about how and why he was wrong about what came to be genetics. It’s one of the most interesting and human stories there is.

    And there’s another side to this hero business, isn’t there? We know that great responsibility is a consequence of great power. Geeks like us are only just learning what that means. To be an atheist or to be a skeptic has a social consequence that I don’t think we can ignore. To be a putative leader in the atheist/skeptic movements, moreso.

    So, Michael, worship heroes if you like, but recognise their failures and limitations. Worship heroes all you want but don’t be afraid to criticise them when they’re wrong. Don’t tell other people that they’re wrong to criticise your personal heroes.  Don’t let clueless rhetoric blind your otherwise good instincts for social justice.

    And for fuck’s sake stop crying about smears.

    Some people – including me – think you’ve done lots of good things for the atheist movement but have utterly disgraced yourself by tacitly endorsing horrible views and insisting that criticisms are smears. Your cluelessness was first evident to me when you insisted that victims of abuse ought to talk genially with their abusers. Lots of people explained why you were wrong but you didn’t listen. In this new case, there are at least two sides. One side constantly reinforces you because it likes what you say, whatever you say since you’re now a champion of horrible people. The other side criticises some of the things you’ve done.

    Criticisms are not smears, Michael. I can tell you about smears. I can tell you that some of the people commenting on your blog have made entirely untrue and public accusations about me. Those are smears. Criticisms of you are not.

    Thursday, 6 November 2014

    Wrongness all the way down

    People react to the new head of GCHQ’s demand that companies spy on their customers, mostly wrongly and stupidly.

    Is your car spying on you?

    Mine isn’t, it’s nearly as old as I am. But modern cars are probably spying on you.

    who will own all the data they generate, how will it be used, and will our privacy inevitably be compromised?

    Good questions.  Another good question is whether or not we’ll end up actually owning our own cars.  There are already companies leasing cars that shut down if the driver breaks the terms and conditions of the loan used to pay for them.  Regardless of where the driver happens to be, whether they’re picking up their kids or in the middle of nowhere or escaping an abuser….

    This is part of a very unfortunate trend. We need to own our data and to learn how to spend it wisely. This is more or less impossible if we end up not owning our software, media, phones, watches, cars…

    The Samaritans’ Radar app

    A good explanation. I have a few quibbles with the analogy but I won’t quib them.

    Wednesday, 5 November 2014

    It’s a child safety issue

    Kids in a school in the UK are being forced to wear ID cards with RFIDs. The head teacher of that school claims it’s a child safety issue. It isn’t, though, it’s about tracking the movements of children for absolutely no justifiable reason at all. We know how this works: the threat of random surveillance changes the way people behave.  That’s not a good thing because it’s only ever used to coerce lots of people to behave in a way a minority thinks they should.  Fuck that noise. Students, if you’re forced to wear RFIDs or other tracking devices, destroy them. If your school has CCTV cameras, learn to confound them. This kind of tracking is not reasonable.

    Privacy policy is not enough

    Not that we read them anyway and not that we understand them when we do. One of the most important things we can do to protect ourselves is to understand the motives of service providers. A dating site targeting people with STDs seems like it ought to be on the side of people newly concerned with safe sex. But selling data about people with STDs is bound to be tempting. It’s the sort of data lots of people want.

    I don’t know whether PositiveSingles set out to betray its customers, but that’s what it did.

    Its privacy policy stated fairly clearly that it might share its customers’ details with whomever it liked, but its branding and advertising claimed confidentiality.

    "We do not disclose, sell or rent any personally identifiable information to any third-party organisations."

    But they did. You can imagine what sort of company might be interested in STDs and the negative effects that might have on PositiveSingles’ customers.

    They did other things, too. The site used its customers’ profiles on other sites, which were misrepresentative. Those sites included AIDSDate, Herpesinmouth, ChristianSafeHaven, MeetBlackPOZ and PositivelyKinky.

    Privacy policies don’t keep you safe. At best they give you a basis to sue once your privacy has been violated. When you decide to trust a company with your personal details, always consider its likely motivations. How is this company making money? Could it make more money by betraying you? Could it promise you one thing now and then change its mind later, once it has attracted lots of customers on the basis of that policy?

    Tuesday, 4 November 2014

    ORG responds to GCHQ

    The director of GCHQ said this. TL;DR: The internet is bad because sometimes we can’t read everything everyone ever says.

    A few terrifying quotes:

    [Terrorists and in particular ISIS] have realised that too much graphic violence can be counter-productive in their target audience and that by self-censoring they can stay just the right side of the rules of social media sites, capitalising on western freedom of expression.

    There’s little doubt here that the director frowns upon “western freedom of expression”.

    Isis also differs from its predecessors in the security of its communications. This presents an even greater challenge to agencies such as GCHQ. Terrorists have always found ways of hiding their operations. But today mobile technology and smartphones have increased the options available exponentially. Techniques for encrypting messages or making them anonymous which were once the preserve of the most sophisticated criminals or nation states now come as standard.

    Hardly. GCHQ knows very well that phone and internet communications and even presence are just about as leaky as they can possibly be. The increase in mobile comms has been a boon to security services, not a hindrance. Encrypted and anonymous messages are not ‘standard’ at all. The Snowden leaks show us that GCHQ and other agencies are doing things that require ordinary citizens like us to consider encryption and other tools, such as Tor.

    Its major achievement in spying on us is that we now realise that we innocent citizens need to take countermeasures against our own government.

    There is no doubt that young foreign fighters have learnt and benefited from the leaks of the past two years.

    If citation were ever needed… Doubt exists.

    GCHQ and its sister agencies, MI5 and the Secret Intelligence Service, cannot tackle these challenges at scale without greater support from the private sector, including the largest US technology companies which dominate the web

    Let’s be clear. When GCHQ talks about “support” from the private sector, it means it expects companies like Google to spy on us all. They’re trying to spin unacceptable and unproductive surveillance as the duty of successful internet-centric firms. They want permission to mine the data we lay down in our daily comms to generate suspicion. To generate groundless reasons to investigate people further.

    GCHQ goes further:

    I understand why [various companies] have an uneasy relationship with governments. They aspire to be neutral conduits of data and to sit outside or above politics. But increasingly their services not only host the material of violent extremism or child exploitation, but are the routes for the facilitation of crime and terrorism.

    Then you *don’t* understand, or you pretend not to. I agree that global companies have an obligation to act on behalf of citizens around the globe. Companies like Google should take a hard stance on bullying, for example. But that – by definition – must include bullying by governments. Sorry, GCHQ, but Google shouldn’t be doing your job.

    [Random internet firms] have become the command-and-control networks of choice for terrorists and criminals, who find their services as transformational as the rest of us.

    We should have resisted the printing press and sure as shit the global telephone network. Who authorised communication satellites?  Means of communication don’t destroy peace. People who really want to be violent do that. Snooping on the billions of people who don’t isn’t going to stop the violent.

    If they are to meet this challenge, it means coming up with better arrangements for facilitating lawful investigation by security and law enforcement agencies than we have now.

    “They” means firms like Google. GCHQ is charging them with a challenge they don’t necessarily accept and hopefully won’t accept. GCHQ is asking for nothing less than that firms like Google tell law enforcement agencies everything their customers do and say. Innocent users.

    privacy has never been an absolute right and the debate about this should not become a reason for postponing urgent and difficult decisions.

    Privacy doesn’t have to be an absolute right. We can rebel against particular violations of privacy without reference to a fictional absolute right to privacy. But you know what? Concern about privacy absolutely should become a reason for postponing certain urgent and difficult decisions.

    Fuck you, Director of GCHQ. This is what the ORG said:

    “Robert Hannigan's comments are divisive and offensive. If tech companies are becoming more resistant to GCHQ's demands for data, it is because they realise that their customers' trust has been undermined by the Snowden revelations. It should be down to judges, not GCHQ nor tech companies, to decide when our personal data is handed over to the intelligence services. If Hannigan wants a 'mature debate' about privacy, he should start by addressing GCHQ's apparent habit of gathering the entire British population's data rather than targeting their activities towards criminals.”


    Phones being used to open hotel doors

    Bypass hotel check-in entirely and use your phone to open your hotel room door.  It’s about time.  What’s the point in living in the 21st century without stuff like this?

    There are some security concerns, of course. Most people seem to be concerned about whether the encryption is good enough, but I’m more worried about the back end.  There’s no reason it should be less secure than the current (now almost traditional :) swipe card system, but depending on the implementation there might be more identifiable data stored.  Plus: you can check into hotels with a false name, but it’s harder to do so with a false phone.  There might be some anonymity issues.

    Still, it’s pretty cool.  Cooler still if you can use your smartwatch or smartband.  I’m working (when I get round to it) on an app for the Samsung Gear Fit that displays barcodes scanned by a phone to allow easy entrance to events that use barcodes on their tickets.  I love using new technology to facilitate old-skool technology.

    A Samsung Gear Fit, yesterday

    Petition to stop the Samaritans’ Twitter app

    Sign it here if you want.  The petition is asking Twitter to refuse access to their API by this app.  I think I’d prefer to petition the Samaritans directly to withdraw the app, but I signed it anyway.

    The Samaritans’ Twitter app

    The Samaritans recently launched a Twitter app (Radar) which searches your followers’ (public) tweets for phrases that might indicate they need help.  If it finds a match, it sends you email from the Samaritans suggesting that there might be a problem and that you might want to look into it.  It’s a noble and worthwhile idea, but they haven’t thought it through.
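    To see why phrase matching is such a blunt instrument, here's a minimal sketch in Python of the kind of keyword scan an app like this presumably performs. The phrase list and the matching logic are my illustrative guesses, not the actual Radar implementation:

```python
# Naive phrase matching of the sort a tweet-scanning app might use.
# The watch list below is purely illustrative, not Radar's real list.
DISTRESS_PHRASES = ["hate myself", "want to end it", "can't go on"]

def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains any watched phrase."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in DISTRESS_PHRASES)

# Substring matching has no notion of context, so sarcasm, song
# lyrics and idioms all trigger it:
print(flag_tweet("I can't go on holiday this year :("))  # True (a false positive)
print(flag_tweet("Lovely day out"))                      # False
```

    Even this toy version shows the core problem: the app can only see strings, not states of mind, so every match is a guess about someone who never asked to be assessed.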

    There are some serious and pretty fundamental problems.  First, your followers don't know that their tweets are being scanned and that they are being profiled.  Second, the app might lead people to draw conclusions about a follower that either aren't true or that – even if true – the follower didn't want to share with you.  It can change the nature of the relationship between a tweeter and her followers without the follower's consent or even knowledge.  I follow a few people I consider friends, but I also follow a lot of strangers.  I don't think I'd appreciate a stranger ham-fistedly telling me not to do anything stupid.  There's an opt-out option, but this itself is problematic. For one thing, you need to know about the service in order to opt out of it. For another, opting out requires you to give your details to the Samaritans. I don't want them (or anyone who steals that data) to know that I don't want people to know if I'm depressed!

    Worst of all is how Radar will be used in the hands of trolls and other bullies.  The app tells them when their targets might be at a low ebb.  We know that this sort of information is like diamonds to trolls.  It's a dream come true.  We know that they will leap on any perceived weakness – particularly emotional and psychological distress – and try to exploit it.

    This seems like an extraordinary oversight from an organisation that does this kind of thing for a living.  The Samaritans say the app took more than a year to develop (how?) and it's very strange that nobody thought through the implications in all that time.  Fascinatingly, there's a privacy statement on the Radar site.  It is concerned only with the privacy of the person using the app.  It doesn't say whether the Samaritans will retain information about people their app flags as needing help, for example.  It actually presents it as a benefit that your followers won't know you've signed them up for this service without their consent.

    The motivation behind Radar is compassionate and worthwhile, but the implementation is flawed from the outset.  In fact, it’s conceptually terrible from the ground up and I suspect it’s going to end up doing more harm than good.

    Monday, 3 November 2014

    Surveillance begins at home

    Sarah Jeong writes about the need for people – especially women – to protect themselves against surveillance by their partners and the complicity of law enforcement in the abuse of technology.  She says that privacy advocates rarely make this point and I think she’s right.  We’ve (rightly) learned a lot recently about how women are being relentlessly hounded, generally for the crime of expressing an opinion while female. But we haven’t learned enough about when this kind of abuse happens behind closed doors.

    NPR surveyed more than 70 shelters — not just in big coastal cities like New York and San Francisco, but also in smaller towns in the Midwest and the South.

    [They] found a trend: 85 percent of the shelters we surveyed say they’re working directly with victims whose abusers tracked them using GPS. Seventy-five percent say they’re working with victims whose abusers eavesdropped on their conversation remotely — using hidden mobile apps. And nearly half the shelters we surveyed have a policy against using Facebook on premises, because they are concerned a stalker can pinpoint location.

    She also talks about this piece in BetaBoston, which gives a chilling – yet hardly atypical – account of domestic abuse facilitated in part by technology:

    Sarah’s abuser gained access to every password she had. He monitored her bank accounts and used her phone to track her location and read her conversations. She endured four years of regular physical and emotional trauma enabled by meticulous digital surveillance and the existing support services, from shelters to police, were almost powerless to help her.

    Go there for the full story if you have a strong stomach. Jeong explains how the people behind Tor are working with care professionals to help protect victims.

    “Abuses with technology feel like you’re carrying the abuser in your pocket. It’s hard to turn off,” said Kelley Misata, a Tor spokesperson.

    Close to impossible if the victim doesn't know how to protect themselves.  Most charities and other support systems (including the police) aren't really geared up to teach victims how to protect themselves.  It's heartening – but not surprising – that Tor is working in this area.

    Tor is not enough, of course, and neither does it pretend to be. Privacy and security require vigilance and the building of habits. They require an understanding of threats, risks and tradeoffs.

    “The question I always asked was how does someone end up in that situation?” her best friend said. “And the answer — from having witnessed it — is, gradually.”

    That gradual evolution is crucial to understanding abuse, Mednick said.

    Abuse works slowly: First abusers often forbid Facebook, then friends of the opposite sex, then friends altogether, then access to transportation, then privacy of any kind. Without noticing, a victim feels suddenly suffocated and intensely vulnerable.

    We all need to build and maintain the security triangle of prevention, detection and response. When we enter a relationship – any kind of relationship – we should know how to protect ourselves and have a strategy for revealing more about ourselves when we want to. We should understand the tradeoffs we’re making as we do this and what – if any – options we have for rolling back if we want to.

    Unfortunately, while there are various sites I won’t link to which tell abusers how to perform technological surveillance on their partners, there aren’t so many resources around to tell victims – or people entering into relationships – how best to protect themselves. 

    This article has convinced me to put some resources together to help with that.  I can write some stuff and put together some courses.  I can approach charities and agencies to see if they could use some help.  I can connect people who know about this sort of thing in other areas. Does any of this sound good? Does anyone have any links that might help?

    This is one of the many reasons it’s so important:

    To escape, Sarah took about a hundred ibuprofen in an attempt to end her life.

    Please do read the rest of that article to see how difficult it can be to hide from someone who means you harm. Notice in particular how legal systems and law enforcement agencies are not necessarily on the side of victims. When Sarah tried to obtain a restraining order against her abuser, he drove past the courthouse.

    “He knew to drive by a court that was completely in a different town,” said one staff member with detailed knowledge of Sarah’s case.

    The abuser knew where his victim would be thanks to a leaky court and police system and – I expect – his own technological surveillance. Whatever we can do to plug these holes, we should. Urgently.

    But let’s stop saying “victims” and “abusers”. Let’s be explicit:

    Intimate partner violence does not only happen to women, but the hard statistics make it a women’s issue. Women make up 4 out of every 5 victims of intimate partner violence. And women are also disproportionately murdered by intimate partners. About a third of female homicide victims over the age of 12 are killed by an intimate partner, where about 3% of male homicide victims are killed by an intimate partner.

    I don’t bring this up as an obligatory footnote to a discussion about intimate partner violence. The gender skew directly affects how we understand remedies and solutions. It’s not enough to acknowledge that technology is used by abusers, and then to progress directly to “And that’s why police need to address this new menace!”

    That’s exactly right and Jeong explains why.

    Police officers as a body are overwhelming male. They are also more likely to commit intimate partner violence than the general population. Some sources say that police officers are four times more likely to commit domestic violence; others say twice the average rate. Combine this knowledge with the knowledge that technological surveillance is used against victims of intimate partner violence, and suddenly the law enforcement abuse and promotion of surveillance technologies begins to sound more sinister.

    And there’s other stuff, don’t not read it.

    And if you can help me bring together people and resources, or help me talk to networks, or help me help other people talk to better networks, or help me learn from wiser people, then do.

    Inside Anonymous: Cory Doctorow and Gabriella Coleman talk, London, Tuesday, £5

    Any computer, anywhere

    The FBI is seeking permission from the US Courts that would allow it to hack or distribute malware to any computer anywhere in the world.  This seems to be targeted specifically at computers using things like Tor to protect their users’ anonymity.

    Were the amendment to be granted by the regulatory committee, the FBI would have the green light to unleash its capabilities – known as “network investigative techniques” – on computers across America and beyond. The techniques involve clandestinely installing malicious software, or malware, onto a computer that in turn allows federal agents effectively to control the machine, downloading all its digital contents, switching its camera or microphone on or off, and even taking over other computers in its network.

    Civil liberties groups warn that the proposed rule change amounts to a power grab by the agency that would ride roughshod over strict limits to searches and seizures laid out under the fourth amendment of the US constitution, as well as violate first amendment privacy rights. They have protested that the FBI is seeking to transform its cyber capabilities with minimal public debate and with no congressional oversight.

    [Ed Pilkington in The Guardian]

    Sunday, 2 November 2014

    Australian snooping bill

    Australia is copying the UK’s worst habits by introducing a snooping bill to its parliament. At a cursory glance it looks similar to our snooping bill here in the UK: law enforcement agencies get access to 2 years of metadata without a warrant.  There’s a lot of dubious justification, as you might expect:

    The government says the laws could be used to target illicit downloading of movies or music, and make it easier to identify suspected paedophiles.

    The former seems a small gain for such an enormous invasion of privacy. Rather, a very small number of people will benefit at everyone’s expense.  In the latter case, I’m all for preventing children being harmed but it’s by no means clear that blanket surveillance will help.  That phrase “identify suspected paedophiles” is a tricksy one, isn’t it? Does it mean “ascertain the identity of users we already suspect of being paedophiles as we have evidence of their grooming, sharing child porn etc”? Or does it mean “mine the metadata to generate suspicion of paedophilia”?  Because the two are very different.  The latter case is very worrying.

    "Access to metadata plays a central role in almost every counter-terrorism, counter-espionage, cyber security, organised crime investigation," Communications Minister Malcolm Turnbull told parliament.

    Really?  If that’s true, they must already have access to the data they need, using existing procedures (warrants, court orders etc.) Why do they need everyone’s data unless they plan to mine it to generate suspicion?
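    A toy example makes the distinction concrete. With retained metadata, "generating suspicion" is a one-line query: flag everyone who has ever contacted a number of interest, however innocently. Everything below – the records, the names – is invented for illustration:

```python
# Toy retained-metadata records as (caller, callee) pairs. Real
# retained metadata also carries timestamps, durations and locations.
records = [
    ("alice", "bob"),
    ("carol", "bob"),
    ("carol", "dave"),
    ("eve", "helpline"),
]

def contacts_of(records, target):
    """Everyone who has ever called the target number."""
    return sorted({caller for caller, callee in records if callee == target})

# If "bob" becomes a person of interest, everyone who ever rang him
# is suddenly a lead, with no warrant and no individual grounds:
print(contacts_of(records, "bob"))  # ['alice', 'carol']
```

    A targeted warrant inverts this: investigators start with a suspect and request that suspect's records, rather than holding everyone's records in case a query is ever wanted.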

    He said criminal investigations had been hampered by authorities' lack of access to metadata.

    I daresay they have.  That doesn’t mean we should necessarily spy on every citizen.  I’m sure criminal investigations have been hampered by authorities’ inability to torture suspects, but that doesn’t mean we should start.

    "Illegal downloads, piracy, cyber crimes, cyber security, all these matters - our ability to investigate them is absolutely pinned to our ability to retrieve and use metadata," the commissioner said.

    Even supposing that’s true, access to metadata can be achieved in ways other than the blanket retention of everyone’s.

    The Australian government is also introducing or has introduced a new law which allows prison sentences for people who blow the whistle on certain “special intelligence operations”.  This is clearly aimed at silencing journalists who might publish secret information that’s in the public interest, such as the Snowden leaks.  It’s OK though, because the attorney-general will be able to veto prosecutions against journalists:

    "It's a very powerful, practical safeguard for a minister, who is a practising politician, to assume personal responsibility for authorising the prosecution of a journalist,'' he said.

    Surely a politician is the last person who should decide who is prosecuted.  I cannot imagine a larger conflict of interest. A story about corruption in the opposition party?  Oh, I don’t think that needs to be prosecuted…