Tuesday, 23 December 2014

Response to ‘terrorism’ warning

I find this quite funny.  I think that Sony was wrong to pull the movie but not for the same reasons the media and Obama seem to have.

This isn’t a matter of capitulating to terrorism because it in no way resembles terrorism. It’s an attack on a company by people unknown - maybe a nation (perhaps North Korea) - maybe not.  The usual ways we identify attacks as coming from a nation rather than some other group are the choice of target and the sophistication of the attack.

The attacks in this case don’t seem to have been especially sophisticated on the face of it, but that’s very hard to assess.  Sophisticated methods might have been used to find vulnerabilities and choices made to make the attacks look more amateurish. Or a sophisticated attacker might have used the least sophisticated attack that would get the job done. Or it could have been an insider.

The target certainly places North Korea as a prime suspect, but it hardly rules out anyone else, nation, group or individual. It could just as easily have been a random group with or without a grudge or an ex-employee or…

But I’m not sure it matters.  What matters is that America (and lots of Americans and non-Americans) are treating the attack as terrorist. It isn’t. The only terror induced is in Sony executives and workers.  This isn’t to say that’s not a bad thing, but it’s not terrorism.  Heads are bound to roll. I have little sympathy for the top execs, who will probably be shuffled off with a massive payment and no questions asked in their next appointment. For execs at that level, what happened in the last company stays there.  My sympathy is with the workers who are scapegoated and sent home without so much as a good reference and with those who lose their jobs because they’re tied in some way to a particular movie.  Those people are the victims of this attack.

And yet some responses have been extraordinary.  I said earlier that Sony was wrong to pull the movie. It was wrong to pull the movie because there wasn’t – as far as I can tell – a credible threat. Did the attack make it more likely that people would be blown up in cinemas? Not that I can see. So what was the threat? That more information that was damaging to Sony (which I don’t care much about) and to its employees (which I do) would be revealed. That doesn’t seem on the face of it like a good reason to shut down a movie, threatening the jobs of lots of people who wouldn’t have been the ones affected anyway.

That is the reason to not pull the movie, not some bullshit terror defence and sure as shit not some patriotic one.

Student win

Students were allowed to take a 3x5 card of notes into an exam.  One student figured out how to double the amount of text that would fit on the card.

http://boingboing.net/2014/12/21/clever-student-uses-redblue-m.html

It’s not really a story about privacy but it’s an excellent example of being disobedient while following the rules.  Exactly the sort of thing that should be encouraged, especially in young people.

Wednesday, 17 December 2014

Canadian police can search your phone after they arrest you, can arrest you if they want to search your phone

The BBC says:

Canadian police can search the contents of a mobile phone after arrest, the Supreme Court of Canada has ruled. In a 4-3 decision, the court said a warrant was not needed as long as the search is directly related to the suspected crime and records are kept.

I resisted bolding the words that automatically stand out in bold to me.

The gist of the article is that police can search your phone if they have arrested you and really want to and that they can arrest you if they really want to search your phone.

"The intensely personal and uniquely pervasive sphere of privacy in our personal computers requires protection that is clear, practical and effective," Judge Andromache Karakatsanis wrote for the minority.

The minority of people who don’t think it’s cool for police agencies to search our phones just because we’ve been arrested.

The Snowden Effect

Bruce Schneier reports that over 700 million people worldwide are taking steps to avoid government agency surveillance.

And yet the media are reporting that the Snowden revelations have had little effect on internet users’ behaviour.  38% seems like a high percentage to me.  The press always seem to think that less than half is bad and that only nearly all is good.  I wonder if that’s related to the media love of the zero sum game and apparent conviction that every issue has exactly and only two sides, which are always worthy of equal attention.  The articles Schneier cites misrepresent the facts but that’s not the point:

Even so, I disagree with the "Edward Snowden Revelations Not Having Much Impact on Internet Users" headline. He's having an enormous impact. I ran the actual numbers country by country, combining data on Internet penetration with data from this survey. Multiplying everything out, I calculate that 706 million people have changed their behavior on the Internet because of what the NSA and GCHQ are doing. (For example, 17% of Indonesians use the Internet, 64% of them have heard of Snowden and 62% of them have taken steps to protect their privacy, which equals 17 million people out of its total 250-million population.)

Note that the countries in this survey only cover 4.7 billion out of a total 7 billion world population. Taking the conservative estimates that 20% of the remaining population uses the Internet, 40% of them have heard of Snowden, and 25% of those have done something about it, that's an additional 46 million people around the world.

[…] it is absolutely extraordinary that 750 million people are disturbed enough about their online privacy that they will represent to a survey taker that they did something about it.

Agreed. ~10% of the world’s people have changed their behaviour because of Snowden. That is simply astonishing and very definitely a big step in an excellent direction.  Schneier mentions Cory Doctorow’s point that we have reached peak indifference to surveillance. I don’t know whether we’ve reached that point yet, but let’s hope we’re approaching it and accelerating.
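Schneier’s per-country arithmetic is easy to reproduce. A quick sketch in Python, using only the figures quoted above:

```python
# Schneier's Indonesia example: internet penetration x awareness of
# Snowden x proportion who took steps, applied to the population.
population = 250_000_000
internet_penetration = 0.17   # use the internet
heard_of_snowden = 0.64       # of those, have heard of Snowden
took_steps = 0.62             # of those, protected their privacy

indonesia = population * internet_penetration * heard_of_snowden * took_steps
print(round(indonesia / 1_000_000))  # -> 17 (million people)

# His conservative estimate for the 2.3 billion people the survey
# didn't cover: 20% online, 40% aware, 25% acting.
uncovered = 7_000_000_000 - 4_700_000_000
extra = uncovered * 0.20 * 0.40 * 0.25
print(round(extra / 1_000_000))  # -> 46 (million people)
```

Summing the per-country figures like the first one is how he arrives at roughly 750 million people worldwide.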

Monday, 15 December 2014

Quickies. Should probably do these on (evil) Wednesdays

Blackphone app store.

IBM gets bank security wrong.

Corporations misusing our data. People keep telling us this isn’t a big problem or it isn’t the worst problem or that it’s inevitable so we shouldn’t worry about it. Bullshit.

Anti-terrorist algorithms with bonus binary picture.

From now on: What the Fuck Wednesdays will list some of the things I haven’t managed to get round to talking about since the previous evil Wednesday.

Tuesday, 9 December 2014

A Declaration of the Independence of Cyberspace

I was recently reminded of this and had to read it again.

How NSA and GCHQ are tapping internet cables

We’ve known for some time that this is happening and we’ve seen glimpses of some of the methods used, but this is the first time (to my knowledge) that we’ve had an end-to-end account of how they pulled it off.  It’s a great piece of work.

Make sure you check out the rest of the posts there.  Very interesting stuff.

HOLY CRAP that is sinister

A friend told me that his child brought home a note from his school about staying safe online.  Apparently (and quite surprisingly) it wasn’t too bad apart from one thing:  it said “never tell your password to anyone outside school” (emphasis mine).

As the friend said, this gets more sinister the more you think about it.  There’s almost nobody in a better position to groom children than their teachers. The teachers will most likely know what problems the child is having at school, which subjects they’re good at and what achievements they’ve made, whether and how they’re being bullied, what buttons to press etc.  Arming them with even more information is a bad idea. 

I don’t know yet whether the letter was talking about passwords in general or the password to its students’ school accounts.  Of course, schools (probably rightly) will have access to student accounts, but teachers and other staff knowing a student’s password is bad for lots of reasons, including:

  • They won’t leave much of a trail if they log in as the student, providing they’re careful. Which they probably would be if they were up to no good.
  • A staff member could pose as the friend of a student for a whole load of bad reasons.  Or the enemy of a student, for that matter.  Imagine how that could terrorise at least two students.
  • Staff members could edit the students’ personal data without leaving much of a trail.  I don’t know what kind of data can be found on a student’s school account, but the possibility of staff rewriting history to protect themselves or incriminate students is rather worrying.
  • Students are quite likely to use the same password on their other accounts, or their passwords might give clues to what those other passwords might be. They might also reveal how students tend to choose passwords or something personal to them. How many people use something associated with a sport team or a celebrity as a password? I frequently recommend the book Microserfs by Douglas Coupland as a brilliant insight into early-days Silicon Valley/web-bubble culture. In it, one character reveals that his password is ‘hellojed’. Jed was the character’s younger brother who died as a child and the password was the character’s way to remember him and feel sad about the loss every day.  That would be a significant vulnerability, especially for a child, which could easily be exploited.  That’s one of the reasons we shouldn’t reveal passwords or answer memorable questions honestly.
  • “Inside school” is worryingly vague.  It could be read as implying that revealing passwords to other students is OK.
  • Instructions like this could make people feel that there are situations where it’s safe or OK to reveal passwords or that authority figures have a right to know their passwords. Authority figures are exactly the wrong kind of people to know other people’s passwords because they have a greater ability to abuse them.
  • It is never OK to tell someone under what circumstances they must or should reveal their passwords.

I could go on.  I will if and when I can clarify exactly what the letter said.

Thursday, 4 December 2014

The Daily Mail on staying safe online

Surprisingly, it’s not a story about how (female) nipples are evil and will turn your children into rapists.  It’s about safe online banking and shopping.

http://www.dailymail.co.uk/money/bills/article-1585675/How-stay-safe-online-shopping-banking.html

My expectations weren’t high. Correctly, as it turns out.

I’d planned to do a point-by-point rebuttal (the best kind of rebuttal) but I’d be here all fucking day.  The article reads like it was written by a four-year-old who hasn’t quite grasped the concept of sentences or punctuation. Much of the advice is terrible; the suggestion to phone someone who is “good with computers” as a security countermeasure is particularly… well, I don’t know. Is it hilarious or terrifying?

The article contains no useful advice about how to bank or buy safely and its smug assertion that “We explain how you can protect yourself from online fraudsters” isn’t true. It’s deeply irresponsible, even for the Mail.

Monday, 1 December 2014

Ball dropped

Wait…. That could be misconstrued.  To be specific, Google and/or my mobile provider (O2) have dropped the ball on my privacy.  And battery life.

I got an OS update for my phone at the weekend.  At first it seemed that the only thing that had changed was that the battery meter is now white instead of green.  However something else has changed too.  Prior to the update, there was a button I could attach to the drop-down menu to toggle the GPS.  This has been replaced by a button to toggle all location services, including the GPS.  It’s all or nothing, now.

The only reason I can think of to do this is that either Google or O2 or both want even more accurate information about my location. I wouldn’t have minded at all if they’d kept the GPS toggle button and added the location services one, but they didn’t.

So location services are now always off.

Wednesday, 26 November 2014

The tension between national security and cyber security

Ron Deibert writes:

Buried in a recent Edward Snowden disclosure is a passing remark from a briefing sheet on a program called “Sentry Eagle.”   According to the briefing sheet, “unauthorized disclosure” of its contents would negatively impact the United States’ “ability to exploit foreign adversary cyberspace while protecting U.S. cyberspace.”

For many, such a remark might pass barely noticed, obscured beneath the more salacious operational details in the top secret slides. It definitely should not. It represents a deeply entrenched worldview at the heart of cyber security problems today.

A lot of spying depends on a nation’s intelligence services being able to exploit weaknesses in other nations’ cyber infrastructure.  National security depends on maintaining – or in some cases actively sabotaging – the global infrastructure.

Agencies like the NSA are tasked with defending critical infrastructures on the one hand, while fueling a multi-million dollar industry of products and services to exploit them on the other. Protecting the integrity of communications systems is a mission imperative, but so is building “back doors” — a kind of insecurity-by-design — programs designed to proactively weaken information security are justified on the basis of strengthening national security.

Agencies like this, who are obsessed with installing back doors to weaken security, are also the very ones trusted to protect our cyber security. This is a major conflict of interest.  What’s encouraging is that companies are fighting back. Companies like Google and Apple (and most recently, Whatsapp) are implementing e2e encryption, much to the annoyance of the security agencies.

Historians like to remind us that intelligence is “the second-oldest profession.”  But in the past decade, we have accorded extraordinary powers and capabilities over society to mammoth military-intelligence agencies that are unprecedented in human history. Their overarching prominence and power have begun to undermine core values upon which our societies rest while exposing us and our communications to widening risks.  It is time we address squarely this syndrome for what it is: the most important threat to cyber security today.

Terms of Service

A very good comic about privacy, depicting a lot of things I’ve been saying for some time.

I especially like the part about controlling the narrative used to explain the data you generate.  We’re used to the idea that what we say we are is what the world sees, but it is becoming ever easier for other people (and companies and governments) to mine data about us and infer from it a different narrative to the one we wish to present. It could be an incorrect narrative and yet affect us adversely.

The comic uses Foursquare as an example.  A man likes to check in at unusual locations to increase the chance of becoming the mayor of a place.  However, this leads to his profile showing that most of the places he checks in at are restaurants and delis and doctor’s surgeries.  Someone – such as an insurance company – analysing this data might conclude that there could be a link between those two things. The comic makes the additional point that he’s checked in at the doctor’s.  If someone were to look at that doctor’s website and discover that she’s a paediatrician, they’ll know not only that the person has a child, but who and where its doctor is.  That could be dangerous.

The problem gets worse as we generate more pools of data with more services.  With the Foursquare example, we’re at least in control of the data we generate, even though it might be used in ways we don’t expect.  But we’re generating data all over the place and this can be aggregated in unexpected ways with possibly detrimental effects.  It’s almost impossible to predict how isolated data pools might be combined and what that might reveal about us. 

The problem isn’t just that a profiler might get the wrong idea about us.  They might get the right idea about something we wish to protect.  Privacy is the selective revelation of information about ourselves and we just lost the ability to control it.

Monday, 24 November 2014

Bullying is a privacy issue

By definition, bullying is about magnifying or making up something about a person and treating that person as though they were that (magnified or made up) thing and nothing else. It’s about stripping people of dignity by treating them as things and by making them think of themselves as things.

Privacy is (partly) the desire or right to be in selective control of the things one reveals about oneself.  Bullying is a privacy issue.

Here’s an example of bullying (TRIGGER WARNING). It’s not nearly the worst public example I could cite in recent years. I used it because there were actual prison sentences for some of the people involved, so I could point out a couple of things:

  • Convictions for bullying are extremely rare.
  • The damage to privacy has already been done, even more so if legal action is pursued. The victim doesn’t win even in the unlikely event that their bullies are punished.

Bullying is a privacy issue because it takes away people’s freedom to control what’s revealed about them.  Most often, bullying is about the revelation that someone is vulnerable rather than about an actual specific secret. The bullying by proponents of #gamergate and by people who dislike women who speak and by people who find LGBTQ people contemptible or hilarious is about exploiting vulnerability. That’s not to say that the victims of bullying aren’t strong and it’s not to say that the bullies aren’t also vulnerable. It’s about this: bullies are by definition people who exploit other people’s vulnerabilities. Not-bullies are people who don’t do that.  Not-bullies are very often victims of bullying. Work it out.

Respect people’s privacy. Don’t be a bully. 

UK Government wants to force ISPs to keep, reveal IP allocations

The UK Home Secretary, Theresa May, thinks the government needs more powers to tackle terrorism and child sexual exploitation. With a tagline like that, you already know what’s coming.  In this case, they want ISPs to store – and presumably reveal when asked – which IP addresses are assigned to which devices.

This article garbles the technical details but presumably May wants the government to be able to associate IPs with MAC addresses on command. That means that a device’s activity can be traced both within an ISP’s scope and across ISPs. This is problematic.

Devices can be used by more than one person, so a device’s activity doesn’t necessarily identify a person. So if the government can identify a device engaged in occasionally dodgy (by whatever standards it uses) activity, it’s going to need more information – information about users of that device – in order to act. How are they going to get that information? I can think of several ways, each more opaque and subject to abuse than the last.  This sounds like a medium-term strategy to move as quickly as possible toward associating every piece of internet traffic with a specific person, doesn’t it?
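It’s worth being concrete about what such a lookup would involve. Here’s a hypothetical sketch (the lease-log format, IP addresses and MAC addresses are all invented for illustration) of the kind of IP-to-device question an ISP would be asked to answer:

```python
from datetime import datetime
from typing import Optional

# A hypothetical ISP lease log: which device (MAC address) held which
# IP address over which period. All values here are invented.
LEASES = [
    # (ip, mac, lease start, lease end)
    ("203.0.113.7", "aa:bb:cc:dd:ee:01",
     datetime(2014, 11, 24, 13, 0), datetime(2014, 11, 24, 15, 0)),
    ("203.0.113.7", "aa:bb:cc:dd:ee:02",
     datetime(2014, 11, 24, 15, 0), datetime(2014, 11, 24, 17, 0)),
]

def mac_for(ip: str, at: datetime) -> Optional[str]:
    """Answer 'which device held this IP address at this moment?'"""
    for lease_ip, mac, start, end in LEASES:
        if lease_ip == ip and start <= at < end:
            return mac
    return None

print(mac_for("203.0.113.7", datetime(2014, 11, 24, 14, 5)))
# -> aa:bb:cc:dd:ee:01
```

Even when the lookup succeeds, it identifies a device, not a person, which is exactly the gap the government would need more information to close.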

It’s a pity I’m afflicted by the occasional moral. I could earn a fortune telling Theresa May how to make staggeringly bad ideas sound attractive. Although presumably someone else already has that gig.

Here are the people Theresa May wants to identify:

  • Organised criminals
  • Cyber-bullies and hackers
  • Terror suspects and child sex offenders communicating over the internet
  • Vulnerable people such as children using social media to discuss taking their own life

Even recognising the hyper-obviously problematic grouping of these already dubious categories (terror suspects and actual child sex offenders, really?) it’s clear that one of these things is not like the others. What business does the government have identifying vulnerable people without their consent? What do they plan to do with that information? It’s terrifying.

The BBC celebrates the couple who helped us undervalue our privacy

The BBC seems to treat this couple as heroes for helping Tesco to spy on its customers and – I argue – to undervalue their privacy.  The couple laid the foundations for the introduction of the Tesco Clubcard.  Storecards are a terrible privacy bargain.  Customer data is worth a lot more to stores than they pay their customers for it.  Of course, our credit card companies are selling data about our buying habits anyway, but the tradeoff there is about convenience and safety. Mileage will vary, but I personally consider that a reasonable tradeoff for many transactions. Besides, storecard schemes collect data about us regardless of what means of payment we choose.  Presumably that’s exactly the sort of data credit card companies want: what sort of stuff people buy on their cards, use cash for etc.  The price we pay is huge volumes of targeted marketing and great big databases chock full of information about things we value, which are bound to be compromised some day, if they haven’t been already.  How would we know?  In return, we get fractions of a penny for every pound we spend and feel like we’re getting something for free.

It seems like a terrible bargain to me; others value different things. But almost nobody – including me – knows for sure what the price really is. We don’t know how our data is being used or shared and we can’t trace individual pieces of spam back to source. We don’t know what information the storecard’s partners have about us, how or if it’s anonymised or even who they are.

Companies like this are actively trying to deceive us into giving them highly personal information about ourselves and to actively confuse us about the bargains we’re making. These people aren’t heroes, they’re opportunists of the sleaziest kind.

Friday, 21 November 2014

Free CA

Bruce Schneier reports on a Very Good Thing.  It’s a free CA which is a joint project involving EFF, Mozilla, Cisco, Akamai and the University of Michigan.

I think it’s bloody brilliant news. The service’s name says it all: Let’s Encrypt. Yes, let’s.

The challenge is server certificates. The anchor for any TLS-protected communication is a public-key certificate which demonstrates that the server you’re actually talking to is the server you intended to talk to. For many server operators, getting even a basic server certificate is just too much of a hassle. The application process can be confusing. It usually costs money. It’s tricky to install correctly. It’s a pain to update.

Let’s Encrypt wants to change that.

EFF write about it here.
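For a sense of what that certificate actually does, here’s a minimal sketch using Python’s standard ssl module (the hostname in the commented-out call is just an example):

```python
import socket
import ssl

def check_certificate(hostname: str, port: int = 443) -> dict:
    """Connect to a server over TLS and return its verified certificate.

    The default context does roughly what a browser does: it checks
    the certificate chain against the system's trusted CAs and checks
    that the certificate actually names the host we asked for.
    """
    context = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            # Only reached if verification succeeded; a bad chain or a
            # wrong hostname raises an SSL error instead.
            return tls.getpeercert()

# cert = check_certificate("www.eff.org")
# print(cert["notAfter"])  # the expiry date Let's Encrypt automates renewing
```

Every step a cert automates here (issuance, installation, renewal before “notAfter”) is one of the hassles quoted above.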

Thursday, 20 November 2014

Detekt craziness

Detekt tells me it doesn’t support the version of Windows on the VM I ran it in, which is as up to date as can be.

Update: I ran the compatibility wizard and got Detekt working. It didn't find any government spying malware.  I'm almost disappointed.

Enfield council issues takedown notice to site that publishes non-existent information

Enfield council wants to close a bunch of libraries, but it doesn’t know which ones yet. It said:

“No decisions have been made yet on the type of library or the location of libraries. The final decision on the library service, location and different types of libraries will be made in February or March next year following the conclusion of this consultation.”

But that hasn’t prevented it from issuing a cease and desist against WhatDoTheyKnow.com for posting information about it which for the most part seems to be available on the council’s own website anyway.

Anti-government spying software

Amnesty International has released software that tells you when governments are spying on you.

Most anti-malware software doesn’t notice some of the software governments use to spy on their (and other countries’) citizens.  Apparently, such spying software leaves some tell-tale signs. I’d love to know what those are. Needless to say, I’m currently speculating wildly.

It's easier to name the countries that are not using these spying tools than those that are.

There’s some skepticism – not entirely surprisingly – from someone who advises the government about security:

Prof Alan Woodward from the University of Surrey, who advises governments on security issues, wondered how easy it would be for Amnesty and its partners to maintain Detekt.

"It's not really their core business," he said. "Are they going to keep updating the software because the spyware variants change daily?"

I think the professor is being disingenuous.  If there’s one thing we know about the security and privacy communities, it’s that they will flock to help maintain stuff like this.  He further pooh-poohs:

He also questioned how useful it would be against regimes that used specially written software rather than commercial versions that were well known and documented.

What? You mean it can’t magically predict new attacks? This guy isn’t on the level. Anti-malware software is always going to be largely reactive.  That doesn’t mean we don’t use anti-virus software.

Government spying software has a rather different threat profile to most snooping software. It’s trying to achieve different things for a different reason, probably with a different urgency.

Get it here.  Read a FAQ here.  Don’t not use it.

Social networks, scariness of, part 1 of several million

An executive at Uber suggested that the company doxx journalists who write bad reviews about it. The company has access to data about where people are travelling to and from and if they’re coming or going somewhere they oughtn’t, release of that information could be very damaging.  This got me thinking about how much power social networks would have to silence their critics.  I don’t know whether that’s something that’s likely to happen, but if it did, the fallout could be devastating.  The muck they could rake up could be highly personal and they’d know exactly who to spill the beans to.

Wednesday, 19 November 2014

Government believes saying a thing makes it true, surprising nobody

“The UK's major internet service providers (ISPs) are to introduce new measures to tackle online extremism, Downing Street has said.”

The ISPs seem bemused because they didn’t agree to any such thing.

Campaigners called for transparency over what would be blocked.

Did we? I’m pretty sure we campaigned for there to be no filtering at all and no government interference with ISPs but since this is obviously going to happen I’d certainly prefer transparency, accountability and judicial oversight.  Since the government apparently hasn’t even told ISPs what they’ve supposedly already agreed to, this seems a forlorn hope.

Prime Minister David Cameron said technology companies had a "social responsibility" to deal with jihadists.

They have a social responsibility to resist governments telling them what people can and cannot see and do. Government agendas should not influence people’s access to information. We have laws for that sort of thing. Laws that are independent of any particular government. For the most part. In principle. Probably.

In a briefing note, No 10 said the ISPs had subsequently committed to filtering out extremist and terrorist material, and hosting a button that members of the public could use to report content.

I’ve no idea what that means. Every time I try to think about it, I picture the CEO of some ISP hitting a big red button on her desk causing lots of alarms to ring and everyone to run around in a blind panic but no terror attacks actually being averted.

Apparently:

It would work in a similar fashion to the reporting button that allows the public to flag instances of child sexual exploitation on the internet.

But that reporting button appears to belong to the police, not to the hundreds of ISPs in the UK. That’s because child abuse is a matter for the authorities, as is grooming and violence of other kinds. Why would anyone report stuff like this to their ISP? Who would even think of it? And if they did, it wouldn’t be very safe. I use Twitter to complain about idiots and talk about my cat. I wouldn’t use it to blow whistles. ISPs have no procedures to protect people reporting nasty practices and nor should they. It isn’t their job. And how would you complain if you thought your ISP was complicit? It’s the wrong solution in the wrong place and everyone knows it.

I don’t even know what threats the government is trying to address and neither do you. Neither does the government.  That might explain why the countermeasures are so blithering, ineffective even in principle and under nobody’s oversight.

Unsurprisingly, the ORG talks sense:

We need the government to be clear about what sites they are blocking, why they are blocking them and whether there will be redress for site owners who believe that their website has been blocked incorrectly.

Given the low uptake of filters, it is difficult to see how effective the government's approach will be when it comes to preventing young people from seeing material they have deemed inappropriate.

Anyone with an interest in extremist views can surely find ways of circumventing child-friendly filters.

Well quite. Governments shouldn’t get to weasel out of their responsibilities. ISPs aren’t like gas companies. Gas companies are responsible for people not being blown up unless they deliberately vent a load of gas into their house and strike a match. Actually, I’m not sure where I’m going with this analogy because it would involve gas companies deciding what people are allowed to cook or how they should heat their home. Actually, maybe it’s a decent analogy after all: if my gas company decided I was using too much gas to heat my house I’d probably light all the hobs on my oven to generate some extra heat. I’d probably do it just to piss them off.

To help deal with the problem, the Met Police set up a dedicated Counter Terrorism Internet Referral Unit (CTIRU), tasked with trying to remove terrorism-related material.

I have no problem with this in principle. It sounds like the sort of thing the police (not ISPs) ought to be doing.

Since its inception in 2010, CTIRU has removed more than 55,000 pieces of online content, including 34,000 pieces in the past year.

Kind of worried about the practice, though.


Tuesday, 18 November 2014

Phish tales

I like stories about phishing scams. I’m not sure why; I suppose I like to hear about scamps being inventive.

There’s nothing new here, but it’s interesting nonetheless. The guy being phished acted on a feeling that something was wrong and took pains to investigate.  We can all learn from that example.  I’ve found myself – in hectic and distracted moments – nearly falling for phone- and email-based social engineering attacks. My bank telling me my card had been used abroad (I happened to be abroad at the time and the phone scammer adapted to this news by asking me to confirm details of the transaction. A very nice try). Someone claiming to be from HR in a university I had just started working for asking me to confirm details (they called every number in the department. The people who hadn’t just started working there mostly assumed it was a wrong number). Someone asking me to write a reference for a friend (I might have fallen for that one, but I’d already thought it up as a possible attack. It’s kind of a hobby, I’m afraid.)

We all need to develop that feeling that something’s wrong. There’s no reason to expect that the person on the phone is who they say they are, no matter what they seem to know about us. Cold reading is a skill that isn’t even slightly difficult to develop and I’m under no illusion that I couldn’t be fooled by a moderately talented cold reader.  And I’m constantly on the lookout for that kind of thing.

Five senses my arse. We routinely and constantly sense when something ain’t right.

    Facebook building 'workplace network'

    I can’t see any good coming of this.

    Too…many…scary….things…

    Irish language website exposes users’ data

    In February, Northern Ireland launched a site to encourage people to learn Irish, which seems like a good goal.  Unfortunately, registered users’ names, addresses, email addresses and other personal information (I don’t know what yet) were available via the site’s search function.

    The government has apologised and shut down the site while it gets fixed. I expect it’s the usual story: when the idea first came about, it didn’t sound like there ought to be any privacy threats so nobody was hired to think about privacy. But sites are not ideas.  Still, you’d think someone would notice at some point during development. Who on Earth did they hire to build this thing?
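
    For what it’s worth, the usual shape of this bug is easy to sketch. Here’s a minimal, entirely hypothetical Python version (all names invented; I’ve no idea what their stack actually looks like):

```python
# The classic shape of the bug: a search endpoint that returns whole user
# records instead of an explicit whitelist of public fields.
USERS = [
    {"username": "aoife_g", "name": "Aoife Example",
     "email": "aoife@example.com", "address": "1 Example Street"},
]

def search_leaky(query):
    # Returns full records: name, address and email included.
    return [u for u in USERS if query in u["username"]]

PUBLIC_FIELDS = {"username"}

def search_safe(query):
    # Same matching, but each result is stripped to the public whitelist
    # before it leaves the server.
    return [{k: v for k, v in u.items() if k in PUBLIC_FIELDS}
            for u in USERS if query in u["username"]]
```

    The fix costs one line of discipline: decide which fields are public and strip everything else before it leaves the server.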

    Whatsapp does end-to-end encryption

    It says it’s the “largest deployment of end-to-end encryption ever”. I’ve no idea if that’s true, which is slightly worrying; it seems the sort of thing I ought to know. There are 500 million downloads of Whatsapp in the Play Store, though, so it seems about right.  Anyway, it’s working on Android on messages that aren’t group, photo or video messages. Other platforms coming soon.  Forward secrecy too. Good news.
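
    That “forward secrecy” bit is worth unpacking. Ephemeral Diffie-Hellman is the idea underneath it: every message gets throwaway keys, so there’s nothing long-lived to seize later. A toy Python sketch – tiny, insecure parameters, purely illustrative; the real thing reportedly runs TextSecure’s ratchet over Curve25519:

```python
import secrets

# Toy finite-field Diffie-Hellman with throwaway keys, purely to show what
# forward secrecy buys you. These parameters are far too small for real use.
P = 2_147_483_647  # a Mersenne prime
G = 5

def ephemeral_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(their_pub, my_priv):
    return pow(their_pub, my_priv, P)

# For each message, both ends mint fresh ephemeral keys...
a_priv, a_pub = ephemeral_keypair()
b_priv, b_pub = ephemeral_keypair()

# ...derive the same session key from the other side's public half...
assert shared_key(b_pub, a_priv) == shared_key(a_pub, b_priv)

# ...and then delete the private halves. With nothing long-lived able to
# recreate the session key, a device seized later can't decrypt old traffic.
del a_priv, b_priv
```

    Delete the private halves after each message and old traffic stays unreadable even if a phone – or a server – is compromised later.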

    Whatsapp’s rollout of strong encryption to hundreds of millions of users may be an unpopular move among governments around the world, whose surveillance it could make far more difficult. Whatsapp’s user base is highly international, with large populations of users in Europe and India. But Whatsapp founder Jan Koum has been vocal about his opposition to cooperating with government snooping. “I grew up in a society where everything you did was eavesdropped on, recorded, snitched on,”

    Here in the UK we’ve sidled into such a world.  Maybe we can sidle back out of it, maybe we can’t.

    Friday, 7 November 2014

    Oh MICHAEL

    Hopefully the last thing I’ll have to say to Michael Nugent.

    I’d like to say that I understand Michael Nugent’s claims that he’s been misrepresented. But I don’t. He’s been represented.

    I don’t know why you need to keep posting your CV. We get it, Michael, you’ve done all kinds of good. Nobody ever said otherwise. But we still get to criticise you if we want. We want. We want because you are failing to take a stand on horrible behaviour. Bewilderingly, you insist on claiming that our criticisms are about you, your past and your achievements rather than about your blind – and repeatedly pointed out to you – ignorance. You deliberately and repeatedly fail to see that we don’t need or want heroes; that we admire the good things people do and deplore the bad.

    That seems to me the essence of what it means to be an atheist. Christopher Hitchens was admirable in many ways and I mourn the fact that he is dead. But let’s be clear, he was a dick about some things. Richard Dawkins was responsible for my becoming a scientist. I devoured The Selfish Gene and The Extended Phenotype. The enthusiasm with which Richard communicates science is infectious. He’s always been one of the biggest influences of my life and probably always will be. I’ve met him. He’s utterly charming. But he’s clueless about several important things.

    Michael, this is how we do hero worship, if we’re smart: we celebrate the good and deplore the bad. Personally, I celebrate the things Darwin was wrong about. They seem stupid in hindsight, but they were honest and fairly – at the time – reasonable attempts to solve a problem his theory predicted. That is hugely impressive, more than I’ll ever do. There should be a movie about how and why he was wrong about what came to be genetics. It’s one of the most interesting and human stories there is.

    And there’s another side to this hero business, isn’t there? We know that great responsibility is a consequence of great power. Geeks like us are only just learning what that means. To be an atheist or to be a skeptic has a social consequence that I don’t think we can ignore. To be a putative leader in the atheist/skeptic movements, more so.

    So, Michael, worship heroes if you like, but recognise their failures and limitations. Worship heroes all you want but don’t be afraid to criticise them when they’re wrong. Don’t tell other people that they’re wrong to criticise your personal heroes.  Don’t let clueless rhetoric blind your otherwise good instincts for social justice.

    And for fuck’s sake stop crying about smears.

    Some people – including me – think you’ve done lots of good things for the atheist movement but have utterly disgraced yourself by tacitly endorsing horrible views and insisting that criticisms are smears. Your cluelessness was first evident to me when you insisted that victims of abuse ought to talk genially with their abusers. Lots of people explained why you were wrong but you didn’t listen. In this new case, there are at least two sides. One side constantly reinforces you because it likes what you say, whatever you say since you’re now a champion of horrible people. The other side criticises some of the things you’ve done.

    Criticisms are not smears, Michael. I can tell you about smears. I can tell you that some of the people commenting on your blog have made entirely untrue and public accusations about me. Those are smears. Criticisms of you are not.

    Thursday, 6 November 2014

    Wrongness all the way down

    People react to the new head of GCHQ’s demand that companies spy on their customers, mostly wrongly and stupidly.

    http://www.bbc.co.uk/news/technology-29894104

    Is your car spying on you?

    Mine isn’t, it’s nearly as old as I am. But modern cars are probably spying on you.

    who will own all the data they generate, how will it be used, and will our privacy inevitably be compromised?

    Good questions.  Another good question is whether or not we’ll end up actually owning our own cars.  There are already companies leasing cars that shut down if the driver breaks the terms and conditions of the loan used to pay for them.  Regardless of where the driver happens to be, whether they’re picking up their kids or in the middle of nowhere or escaping an abuser….

    This is part of a very unfortunate trend. We need to own our data and to learn how to spend it wisely. This is more or less impossible if we end up not owning our software, media, phones, watches, cars…

    The Samaritans’ Radar app

    A good explanation. I have a few quibbles with the analogy but I won’t quib them.
    https://purplepersuasion.wordpress.com/2014/10/30/me-sam-and-his-magical-radar-booth

    Wednesday, 5 November 2014

    It’s a child safety issue

    Kids in a school in the UK are being forced to wear ID cards with RFIDs. The head teacher of that school claims it’s a child safety issue. It isn’t, though, it’s about tracking the movements of children for absolutely no justifiable reason at all. We know how this works: the threat of random surveillance changes the way people behave.  That’s not a good thing because it’s only ever used to coerce lots of people to behave in a way a minority thinks they should.  Fuck that noise. Students, if you’re forced to wear RFIDs or other tracking devices, destroy them. If your school has CCTV cameras, learn to confound them. This kind of tracking is not reasonable.

    Privacy policy is not enough

    Not that we read them anyway and not that we understand them when we do. One of the most important things we can do to protect ourselves is to understand the motives of service providers. A dating site targeting people with STDs seems like it ought to be on the side of people newly concerned with safe sex. But selling data about people with STDs is bound to be tempting. It’s the sort of data lots of people want.

    I don’t know whether PositiveSingles set out to betray its customers, but that’s what it did.

    Its privacy policy stated fairly clearly that it might share its customers’ details with whomever it liked, but its branding and advertising claimed confidentiality.

    "We do not disclose, sell or rent any personally identifiable information to any third-party organisations."

    But they did. You can imagine what sort of company might be interested in STDs and the negative effects that might have on PositiveSingles’ customers.

    It did other things, too. The site reused its customers’ profiles on other sites in misrepresentative ways. Those sites included AIDSDate, Herpesinmouth, ChristianSafeHaven, MeetBlackPOZ and PositivelyKinky.

    Privacy policies don’t keep you safe. At best they give you a basis to sue once your privacy has been violated. When you decide to trust a company with your personal details, always consider its likely motivations. How is this company making money? Could it make more money by betraying you? Could it promise you one thing now and then change its mind later, when it has attracted lots of customers on the basis of the policy?

    Tuesday, 4 November 2014

    ORG responds to GCHQ

    The director of GCHQ said this. TL;DR: The internet is bad because sometimes we can’t read everything everyone ever says.

    A few terrifying quotes:

    [Terrorists and in particular ISIS] have realised that too much graphic violence can be counter-productive in their target audience and that by self-censoring they can stay just the right side of the rules of social media sites, capitalising on western freedom of expression.

    There’s little doubt here that the director frowns upon “western freedom of expression”.

    Isis also differs from its predecessors in the security of its communications. This presents an even greater challenge to agencies such as GCHQ. Terrorists have always found ways of hiding their operations. But today mobile technology and smartphones have increased the options available exponentially. Techniques for encrypting messages or making them anonymous which were once the preserve of the most sophisticated criminals or nation states now come as standard.

    Hardly. GCHQ knows very well that phone and internet communications and even presence are just about as leaky as they can possibly be. The increase in mobile comms has been a boon to security services, not a hindrance. Encrypted and anonymous messages are not ‘standard’ at all. The Snowden leaks show us that GCHQ and other agencies are doing things that require ordinary citizens like us to consider encryption and other tools, such as Tor.

    Its major achievement in spying on us is that we now realise that we innocent citizens need to take countermeasures against our own government.

    There is no doubt that young foreign fighters have learnt and benefited from the leaks of the past two years.

    If citation were ever needed… Doubt exists.

    GCHQ and its sister agencies, MI5 and the Secret Intelligence Service, cannot tackle these challenges at scale without greater support from the private sector, including the largest US technology companies which dominate the web

    Let’s be clear. When GCHQ talks about “support” from the private sector, it means it expects companies like Google to spy on us all. They’re trying to spin unacceptable and unproductive surveillance as the duty of successful internet-centric firms. They want permission to mine the data we lay down in our daily comms to generate suspicion. To generate groundless reasons to investigate people further.

    GCHQ goes further:

    I understand why [various companies] have an uneasy relationship with governments. They aspire to be neutral conduits of data and to sit outside or above politics. But increasingly their services not only host the material of violent extremism or child exploitation, but are the routes for the facilitation of crime and terrorism.

    Then you *don’t* understand, or you pretend not to. I agree that global companies have an obligation to act on behalf of citizens around the globe. Companies like Google should take a hard stance on bullying, for example. But that – by definition – must include bullying by governments. Sorry, GCHQ, but Google shouldn’t be doing your job.

    [Random internet firms] have become the command-and-control networks of choice for terrorists and criminals, who find their services as transformational as the rest of us.

    We should have resisted the printing press and sure as shit the global telephone network. Who authorised communication satellites?  Means of communication don’t destroy peace. People who really want to be violent do that. Snooping on the billions of people who don’t isn’t going to stop the violent.

    If they are to meet this challenge, it means coming up with better arrangements for facilitating lawful investigation by security and law enforcement agencies than we have now.

    “They” means firms like Google. GCHQ is charging them with a challenge they don’t necessarily accept and hopefully won’t accept. GCHQ is asking for nothing less than that firms like Google tell law enforcement agencies everything their customers do and say. Innocent users.

    privacy has never been an absolute right and the debate about this should not become a reason for postponing urgent and difficult decisions.

    Privacy doesn’t have to be an absolute right. We can rebel against particular violations of privacy without reference to a fictional absolute right to privacy. But you know what? Concern about privacy absolutely should become a reason for postponing certain urgent and difficult decisions.

    Fuck you, Director of GCHQ. This is what the ORG said:

    “Robert Hannigan's comments are divisive and offensive. If tech companies are becoming more resistant to GCHQ's demands for data, it is because they realise that their customers' trust has been undermined by the Snowden revelations. It should be down to judges, not GCHQ nor tech companies, to decide when our personal data is handed over to the intelligence services. If Hannigan wants a 'mature debate' about privacy, he should start by addressing GCHQ's apparent habit of gathering the entire British population's data rather than targeting their activities towards criminals.”

    That.

    Phones being used to open hotel doors

    Bypass hotel check-in entirely and use your phone to open your hotel room door.  It’s about time.  What’s the point in living in the 21st century without stuff like this?

    There are some security concerns, of course. Most people seem to be concerned about whether the encryption is good enough, but I’m more worried about the back end.  There’s no reason it should be less secure than the current (now almost traditional :) swipe card system, but depending on the implementation there might be more identifiable data stored.  Plus: you can check into hotels with a false name, but it’s harder to do so with a false phone.  There might be some anonymity issues.

    Still, it’s pretty cool.  Cooler still if you can use your smartwatch or smartband.  I’m working (when I get round to it) on an app for the Samsung Gear Fit that displays barcodes scanned by a phone to allow easy entrance to events that use barcodes on their tickets.  I love using new technology to facilitate old-skool technology.

    A Samsung Gear Fit, yesterday

    Petition to stop the Samaritans’ Twitter app

    Sign it here if you want.  The petition asks Twitter to refuse this app access to its API.  I think I’d prefer to petition the Samaritans directly to withdraw the app, but I signed it anyway.

    The Samaritans’ Twitter app

    The Samaritans recently launched a Twitter app (Radar) which searches your followers’ (public) tweets for phrases that might indicate they need help.  If it finds a match, it sends you an email from the Samaritans suggesting that there might be a problem and that you might want to look into it.  It’s a noble and worthwhile idea, but they haven’t thought it through.
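
    I’ve no idea how Radar is implemented, but a scanner like this is presumably some flavour of phrase matching. A toy Python sketch (the phrase list is my invention):

```python
# A hypothetical sketch of a Radar-like scanner: match tweets against a
# list of distress phrases. The phrase list here is invented; I don't
# know what the real app uses.
DISTRESS_PHRASES = ["hate myself", "want to disappear", "can't go on"]

def flags_tweet(text):
    # Case-insensitive substring matching, with no sense of context.
    lowered = text.lower()
    return any(phrase in lowered for phrase in DISTRESS_PHRASES)

# A genuine cry for help and an idle grumble about the Monday shift look
# identical to the scanner:
assert flags_tweet("I really can't go on like this")
assert flags_tweet("I hate myself for agreeing to the Monday shift")
assert not flags_tweet("Lovely walk this morning")
```

    Sarcasm, song lyrics and idle grumbling all look identical to a matcher like this.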

    There are some serious and pretty fundamental problems.  First, your followers don’t know that their tweets are being scanned and they are being profiled.  Second, the app might lead people to draw conclusions about a follower that either aren’t true or which – even if true – that follower didn’t want to express to you.  It can change the nature of the relationship between a tweeter and her followers without the follower’s consent or even knowledge.  I follow a few people I consider friends, but I also follow a lot of strangers.  I don’t think I’d appreciate being contacted by a stranger to ham-fistedly tell me not to do anything stupid.  There’s an opt-out option, but this itself is problematic. For one thing, you need to know about the service in order to opt out of it. For another, opting out requires you to give your details to The Samaritans. I don’t want them (or anyone who steals that data) to know that I don’t want people to know if I’m depressed!

    Worst of all is how Radar will be used in the hands of trolls and other bullies.  The app is telling them when their targets might be at a low ebb.  We know that this sort of information is like diamonds to trolls.  It’s a dream come true.  We know that they will leap on and try to exploit any perceived weakness, particularly emotional and psychological distress.

    This seems like an extraordinary oversight from an organisation that does this kind of thing for a living.  The Samaritans say the app took more than a year to develop (how?) and it’s very strange that nobody thought through the implications in all that time.  Fascinatingly, there’s a privacy statement on the Radar site.  It is concerned only with the privacy of the person using the app.  It doesn’t say whether the Samaritans will retain information about people their app flags as needing help, for example.  It actually boasts, as a benefit, that your followers won’t know you’ve signed them up for this service.

    The motivation behind Radar is compassionate and worthwhile, but the implementation is flawed from the outset.  In fact, it’s conceptually terrible from the ground up and I suspect it’s going to end up doing more harm than good.

    Monday, 3 November 2014

    Surveillance begins at home

    Sarah Jeong writes about the need for people – especially women – to protect themselves against surveillance by their partners and the complicity of law enforcement in the abuse of technology.  She says that privacy advocates rarely make this point and I think she’s right.  We’ve (rightly) learned a lot recently about how women are being relentlessly hounded, generally for the crime of expressing an opinion while female. But we haven’t learned enough about when this kind of abuse happens behind closed doors.

    NPR surveyed more than 70 shelters — not just in big coastal cities like New York and San Francisco, but also in smaller towns in the Midwest and the South.

    [They] found a trend: 85 percent of the shelters we surveyed say they’re working directly with victims whose abusers tracked them using GPS. Seventy-five percent say they’re working with victims whose abusers eavesdropped on their conversation remotely — using hidden mobile apps. And nearly half the shelters we surveyed have a policy against using Facebook on premises, because they are concerned a stalker can pinpoint location.

    She also talks about this piece in BetaBoston, which gives a chilling – yet hardly atypical – account of domestic abuse facilitated in part by technology:

    Sarah’s abuser gained access to every password she had. He monitored her bank accounts and used her phone to track her location and read her conversations. She endured four years of regular physical and emotional trauma enabled by meticulous digital surveillance and the existing support services, from shelters to police, were almost powerless to help her.

    Go there for the full story if you have a strong stomach. Jeong explains how the people behind Tor are working with care professionals to help protect victims.

    “Abuses with technology feel like you’re carrying the abuser in your pocket. It’s hard to turn off,” said Kelley Misata, a Tor spokesperson.

    Close to impossible if the victim doesn’t know how to protect themselves.  Most charities and other support systems (including the police) aren’t really geared up to teach victims how to protect themselves.  It’s heartening – but not surprising – that Tor is working in this area.

    Tor is not enough, of course, and neither does it pretend to be. Privacy and security require vigilance and the building of habits. They require an understanding of threats, risks and tradeoffs.

    “The question I always asked was how does someone end up in that situation?” her best friend said. “And the answer — from having witnessed it — is, gradually.”

    That gradual evolution is crucial to understanding abuse, Mednick said.

    Abuse works slowly: First abusers often forbid Facebook, then friends of the opposite sex, then friends altogether, then access to transportation, then privacy of any kind. Without noticing, a victim feels suddenly suffocated and intensely vulnerable.

    We all need to build and maintain the security triangle of prevention, detection and response. When we enter a relationship – any kind of relationship – we should know how to protect ourselves and have a strategy for revealing more about ourselves when we want to. We should understand the tradeoffs we’re making as we do this and what – if any – options we have for rolling back if we want to.

    Unfortunately, while there are various sites I won’t link to which tell abusers how to perform technological surveillance on their partners, there aren’t so many resources around to tell victims – or people entering into relationships – how best to protect themselves. 

    This article has convinced me to put some resources together to help with that.  I can write some stuff and put together some courses.  I can approach charities and agencies to see if they could use some help.  I can connect people who know about this sort of thing in other areas. Does any of this sound good? Does anyone have any links that might help?

    This is one of the many reasons it’s so important:

    To escape, Sarah took about a hundred ibuprofen in an attempt to end her life.

    Please do read the rest of that article to see how difficult it can be to hide from someone who means you harm. Notice in particular how legal systems and law enforcement agencies are not necessarily on the side of victims. When Sarah tried to obtain a restraining order against her abuser, he drove past the courthouse.

    “He knew to drive by a court that was completely in a different town,” said one staff member with detailed knowledge of Sarah’s case.

    The abuser knew where his victim would be due to a leaky court and police system and – I expect – technological surveillance by the abuser. What we can do to plug these holes we should. Urgently.

    But let’s stop saying “victims” and “abusers”. Let’s be explicit:

    Intimate partner violence does not only happen to women, but the hard statistics make it a women’s issue. Women make up 4 out of every 5 victims of intimate partner violence. And women are also disproportionately murdered by intimate partners. About a third of female homicide victims over the age of 12 are killed by an intimate partner, where about 3% of male homicide victims are killed by an intimate partner.

    I don’t bring this up as an obligatory footnote to a discussion about intimate partner violence. The gender skew directly affects how we understand remedies and solutions. It’s not enough to acknowledge that technology is used by abusers, and then to progress directly to “And that’s why police need to address this new menace!”

    That’s exactly right and Jeong explains why.

    Police officers as a body are overwhelmingly male. They are also more likely to commit intimate partner violence than the general population. Some sources say that police officers are four times more likely to commit domestic violence; others say twice the average rate. Combine this knowledge with the knowledge that technological surveillance is used against victims of intimate partner violence, and suddenly the law enforcement abuse and promotion of surveillance technologies begins to sound more sinister.

    And there’s other stuff, don’t not read it.

    And if you can help me bring together people and resources, or help me talk to networks, or help me to help other people to talk to better networks, or help me learn from wiser people, then do.

    Inside Anonymous: Cory Doctorow and Gabriella Coleman talk, London, Tuesday, £5

    http://boingboing.net/2014/11/02/london-tue-night-cory-and-bi.html#more-342696

    Any computer, anywhere

    The FBI is seeking permission from the US courts to hack, or distribute malware to, any computer anywhere in the world.  This seems to be targeted specifically at computers using tools like Tor to protect their users’ anonymity.

    Were the amendment to be granted by the regulatory committee, the FBI would have the green light to unleash its capabilities – known as “network investigative techniques” – on computers across America and beyond. The techniques involve clandestinely installing malicious software, or malware, onto a computer that in turn allows federal agents effectively to control the machine, downloading all its digital contents, switching its camera or microphone on or off, and even taking over other computers in its network.

    Civil liberties groups warn that the proposed rule change amounts to a power grab by the agency that would ride roughshod over strict limits to searches and seizures laid out under the fourth amendment of the US constitution, as well as violate first amendment privacy rights. They have protested that the FBI is seeking to transform its cyber capabilities with minimal public debate and with no congressional oversight.

    [Ed Pilkington in The Guardian]

    Sunday, 2 November 2014

    Australian snooping bill

    Australia is copying the UK’s worst habits by introducing a snooping bill to its parliament. At a cursory glance it looks similar to our snooping bill here in the UK: law enforcement agencies get access to 2 years of metadata without a warrant.  There’s a lot of dubious justification, as you might expect:

    The government says the laws could be used to target illicit downloading of movies or music, and make it easier to identify suspected paedophiles.

    The former seems a small gain for such an enormous invasion of privacy. Rather, a very small number of people will benefit at everyone’s expense.  In the latter case, I’m all for preventing children being harmed but it’s by no means clear that blanket surveillance will help.  That phrase “identify suspected paedophiles” is a tricksy one, isn’t it? Does it mean “ascertain the identity of users we already suspect of being paedophiles as we have evidence of their grooming, sharing child porn etc”? Or does it mean “mine the metadata to generate suspicion of paedophilia”?  Because the two are very different.  The latter case is very worrying.

    "Access to metadata plays a central role in almost every counter-terrorism, counter-espionage, cyber security, organised crime investigation," Communications Minister Malcolm Turnbull told parliament.

    Really?  If that’s true, they must already have access to the data they need, using existing procedures (warrants, court orders etc.). Why do they need everyone’s data unless they plan to mine it to generate suspicion?

    He said criminal investigations had been hampered by authorities' lack of access to metadata.

    I daresay they have.  That doesn’t mean we should necessarily spy on every citizen.  I’m sure criminal investigations have been hampered by authorities’ inability to torture suspects, but that doesn’t mean we should start.

    "Illegal downloads, piracy, cyber crimes, cyber security, all these matters - our ability to investigate them is absolutely pinned to our ability to retrieve and use metadata," the commissioner said.

    Even supposing that’s true, access to metadata can be achieved in ways other than the blanket retention of everyone’s.

    The Australian government is also introducing or has introduced a new law which allows prison sentences for people who blow the whistle on certain “special intelligence operations”.  This is clearly aimed at silencing journalists who might publish secret information that’s in the public interest such as the Snowden leaks.  It’s OK though, because the attorney-general will be able to veto prosecutions against journalists:

    "It's a very powerful, practical safeguard for a minister, who is a practising politician, to assume personal responsibility for authorising the prosecution of a journalist,'' he said.

    Surely a politician is the last person who should decide who is prosecuted.  I cannot imagine a larger conflict of interest. A story about corruption in the opposition party?  Oh, I don’t think that needs to be prosecuted…

    Friday, 31 October 2014

    The extent of this surveillance

    Cross-posted at lookatthestateofthat and evilwednesday. Seems to fit with both.

    Zoe Williams writes in The Guardian

    The first compensation award, of £425,000, has been made to Jacqui, one of the women impregnated in the mid-90s by a police officer pretending to be an activist. She said last year that it felt as though she had been raped by the state

    Jacqui says:

    “Did he report every contraction back to the police? What use was that for information purposes? That is a moment so intimate, and I shared it with a ghost.”

    She said that she felt as though she’d been raped by the state and I can see her point.  Presumably the police officer, Bob Lambert, reported with some regularity to his superiors who saw nothing wrong in beginning and maintaining this relationship through to and beyond childbirth.  As Williams says:

    The language doesn’t exist to describe this crime, and that consigns us to imperfect analogies: it is an invasion beyond privacy and beyond sex, into a person’s destiny, holding them hostage forever to the love of a child conceived as the byproduct of state reconnoitre.

    How would you feel if your partner – with whom you share a life and a child – turned out to be leading another life, too?  Not ‘just’ something relatively commonplace like an affair but a completely different life, such as having another family with someone else or having lied about their job?  It’s hard to imagine. But to know that the deception was sanctioned and maybe even encouraged by the state in order to catch some criminals who didn’t exist in the first place is a whole new level of unreality. It must be massively dehumanising; the feeling of being used – of being thought of as a tiny cog in a large and futile game – must be devastating. I don’t know what feelings, if any, Lambert had for Jacqui.  But she doesn’t either. All she knows is that the police didn’t. The state that sanctioned their activities didn’t.  She was unwittingly used: and used as part of a surveillance system aimed at the people and ideals she cared most about.The £425k compensation seems meagre at best.

    Williams points out something I hadn’t considered:

    The impact on Bob Lambert, the police officer, cannot be ignored. His life has been completely denatured by this duplicity. Surveillance, like torture, brutalises the agent as much as it violates the victim

    Well, perhaps, but I’m struggling to summon any sympathy. He didn’t have to form a romantic relationship with Jacqui. He sure as shit didn’t have to father her child. He didn’t have to form any kind of intimate relationship with her – sexual or otherwise – in order to do his job. And he didn’t have to do that job.  I feel safe in my assumption that either he thought his actions were justified or even correct, or that he didn’t care whether they were or not.  And he certainly got off lighter than Jacqui in any case.

    […] at some point, it must have been obvious that this woman was not a threat to the state. One day, using average human judgment, of a woman he knew inside out, Lambert must have known that Jacqui was not a terrorist but rather a person of radical views. The thing we will never know is how long after that penny had dropped he continued to spy on her. One year? Three? Five?

    I don’t know whether the ‘investigation’ was about Jacqui or her circle of friends and contacts.  But the point is important either way.  How much time, effort and money are the police prepared to spend in investigating a lead that’s leading nowhere? And how many lives are they prepared to ruin in the process?

    When, for that matter, did MI5 realise that Eric Hobsbawm had no intention of defecting to Russia, and was simply agitating for radical left possibilities within UK politics? When did it realise that Christopher Hill was not intending to restart the English civil war, with a mind to recreating a Leveller revolution three centuries later? These two men were academics and communists, and last week it emerged that they were trailed by security services for more than three decades. The extent of this surveillance is still considered too incendiary to be released fully into the public domain, with sections still redacted.

    Williams suggests two explanations. The first is that – to the police and state – the possession of radical views is tantamount to a crime in itself.  I think that’s almost true.  I think it’s a case of the means justifying the means: circular logic being let out to run riot.  Unlike youths in a town local to me: police are “clamping down” on large gangs of youths gathering in public parks on the grounds that – in their view – no good can come of it. It’s unfortunate for the police that the officer issuing threats against youths and their parents that cannot be legally enforced is called Inspector Button. Aaaaawwww. Anyway, large groups are bound to contain a bad apple and they’re all so close together! If we watch a large group long enough, a crime is certain to occur eventually and we can justify our intolerance of crowds! I’m not sure that the state (at least this state) thinks that activists are automatically evil, but that some of them are likely to be and that catching one justifies enormous taxpayer expense (that’s enormous expense, not necessarily an enormous taxpayer) and the devastation of innocent people’s lives.

    I agree more closely with Williams’ second explanation:

    Once you start spying on somebody, it is incredibly difficult to stop

    This seems about right. We humans love nothing more than to throw money after bad. It’s the basis of the Gambler’s Ruin. We’ve spent so much without results that someone – and it might be me – is going to get in trouble. So we show progress in ever finer detail but rarely have the guts to call it quits. I’ve done it in various roles as an academic, a software engineer, a project manager and a human being. But in addition to that, Williams suggests that the police and other authorities just really love spying on people and don’t want to stop. I think that’s true too. I mean both spying in general and spying on individuals.

    Once you’ve started, the piece of evidence that comprehensively proves innocence doesn’t exist. All that exists is absence, the lack of definitive proof of guilt. One more push might be all it takes.

    Yes. This is true regardless of whether authorities view dissent itself as guilt. As I said, some people think that the means justifies the means. The means exist in anticipation of an end but they don’t seem to rely on one. Hence surveillance in the wider context, too.

    Williams writes a lot of nonsense about Russell Brand, for some reason. He hasn’t been “monstered” as she suggests. He’s been told off in the papers because of his immature and ill-considered views, but has been lionised in about equal measure. He hasn’t been vanished or curtailed, he’s been granted podia at which to air his views regardless of never having earned it by, for instance, actually having something to say. Let’s not consider him someone who’s been demonised because of his off-centre beliefs. If anything, the opposite is true.

    But I liked some of the things Williams said that were not about Russell Brand.  Every time we allow our government to spy on us a little bit more… Well, you know the rest.

    The extent of this surveillance

    Cross-posted at lookatthestateofthat and evilwednesday. Seems to fit with both.

    Zoe Williams writes in The Guardian

    The first compensation award, of £425,000, has been made to Jacqui, one of the women impregnated in the mid-90s by a police officer pretending to be an activist. She said last year that it felt as though she had been raped by the state

    Jacqui says:

    “Did he report every contraction back to the police? What use was that for information purposes? That is a moment so intimate, and I shared it with a ghost.”

    She said that she felt as though she’d been raped by the state and I can see her point.  Presumably the police officer, Bob Lambert, reported with some regularity to his superiors who saw nothing wrong in beginning and maintaining this relationship through to and beyond childbirth.  As Williams says:

    The language doesn’t exist to describe this crime, and that consigns us to imperfect analogies: it is an invasion beyond privacy and beyond sex, into a person’s destiny, holding them hostage forever to the love of a child conceived as the byproduct of state reconnoitre.

    How would you feel if your partner – with whom you share a life and a child – turned out to be leading another life, too?  Not ‘just’ something relatively commonplace like an affair but a completely different life, such as having another family with someone else or having lied about their job?  It’s hard to imagine. But to know that the deception was sanctioned and maybe even encouraged by the state in order to catch some criminals who didn’t exist in the first place is a whole new level of unreality. It must be massively dehumanising; the feeling of being used – of being thought of as a tiny cog in a large and futile game – must be devastating. I don’t know what feelings, if any, Lambert had for Jacqui. But she doesn’t either. All she knows is that the police didn’t. The state that sanctioned their activities didn’t. She was unwittingly used, and used as part of a surveillance system aimed at the people and ideals she cared most about. The £425k compensation seems meagre at best.

    Williams points out something I hadn’t considered:

    The impact on Bob Lambert, the police officer, cannot be ignored. His life has been completely denatured by this duplicity. Surveillance, like torture, brutalises the agent as much as it violates the victim

    Well, perhaps, but I’m struggling to summon any sympathy. He didn’t have to form a romantic relationship with Jacqui. He sure as shit didn’t have to father her child. He didn’t have to form any kind of intimate relationship with her – sexual or otherwise – in order to do his job. And he didn’t have to do that job.  I feel safe in assuming that either he thought his actions were justified or even correct, or he didn’t care whether they were.  And he certainly got off lighter than Jacqui in any case.

    […] at some point, it must have been obvious that this woman was not a threat to the state. One day, using average human judgment, of a woman he knew inside out, Lambert must have known that Jacqui was not a terrorist but rather a person of radical views. The thing we will never know is how long after that penny had dropped he continued to spy on her. One year? Three? Five?

    I don’t know whether the ‘investigation’ was about Jacqui or her circle of friends and contacts.  But the point is important either way.  How much time, effort and money are the police prepared to spend in investigating a lead that’s leading nowhere? And how many lives are they prepared to ruin in the process?

    When, for that matter, did MI5 realise that Eric Hobsbawm had no intention of defecting to Russia, and was simply agitating for radical left possibilities within UK politics? When did it realise that Christopher Hill was not intending to restart the English civil war, with a mind to recreating a Leveller revolution three centuries later? These two men were academics and communists, and last week it emerged that they were trailed by security services for more than three decades. The extent of this surveillance is still considered too incendiary to be released fully into the public domain, with sections still redacted.

    Williams suggests two explanations. The first is that – to the police and state – the possession of radical views is tantamount to a crime in itself.  I think that’s almost true.  I think it’s a case of the means justifying the means: circular logic being let out to run riot.  Take the youths in a town near me: police are “clamping down” on large gangs of youths gathering in public parks on the grounds that – in their view – no good can come of it. It’s unfortunate for the police that the officer issuing legally unenforceable threats against youths and their parents is called Inspector Button. Aaaaawwww. Anyway, large groups are bound to contain a bad apple, and they’re all so close together! If we watch a large group long enough, a crime is certain to occur eventually and we can justify our intolerance of crowds! I’m not sure that the state (at least this state) thinks activists are automatically evil, but rather that some of them are likely to be and that catching one justifies enormous taxpayer expense (that’s enormous expense, not necessarily an enormous taxpayer) and the devastation of innocent people’s lives.

    I agree more closely with Williams’ second explanation:

    Once you start spying on somebody, it is incredibly difficult to stop

    This seems about right. We humans love nothing more than to throw good money after bad: it’s the sunk cost fallacy in action. We’ve spent so much without results that someone – and it might be me – is going to get in trouble. So we report progress in ever finer detail but rarely have the guts to call it quits. I’ve done it in various roles as an academic, a software engineer, a project manager and a human being. But in addition to that, Williams suggests that the police and other authorities just really love spying on people and don’t want to stop. I think that’s true too – of both spying in general and spying on individuals.

    Once you’ve started, the piece of evidence that comprehensively proves innocence doesn’t exist. All that exists is absence, the lack of definitive proof of guilt. One more push might be all it takes.

    Yes. This is true regardless of whether authorities view dissent itself as guilt. As I said, some people think that the means justifies the means. The means exist in anticipation of an end but they don’t seem to rely on one. Hence surveillance in the wider context, too.

    Williams writes a lot of nonsense about Russell Brand, for some reason. He hasn’t been “monstered” as she suggests. He’s been told off in the papers because of his immature and ill-considered views, but has been lionised in about equal measure. He hasn’t been vanished or curtailed, he’s been granted podia at which to air his views regardless of never having earned it by, for instance, actually having something to say. Let’s not consider him someone who’s been demonised because of his off-centre beliefs. If anything, the opposite is true.

    But I liked some of the things Williams said that were not about Russell Brand.  Every time we allow our government to spy on us a little bit more… Well, you know the rest.

    Wednesday, 29 October 2014

    A day without data

    The BBC’s Technology correspondent, Rory Cellan-Jones, on A day without data. It’s a somewhat contrived story about some of the ways we leave a digital footprint, but reasonably informative.

    Monday, 27 October 2014

    Knox is broken

    Update: I’m told that the UK government has accredited Knox as a security product. I haven’t had time to check whether that’s true, but it comes from a source that ought to know.

    Samsung's Knox security layer for Android generates weak encryption keys, stores passwords locally and gives users login hints in a fatal "security by obscurity" design "compromising the security of the product completely," a researcher has detailed.

    It says here

    The US government ordered lots of Samsung devices using Knox and the CEO said this “proves the unmatched security of Samsung Galaxy devices supported by the KNOX platform."

    Knox uses a PIN solely to facilitate the password hint, which is shown if you forget your password.  Both the PIN and the password hint are stored in plaintext on the device, and the hint consists of some of the letters of your password plus its length!
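To get a feel for how badly a hint like that damages security, here’s a rough sketch. The 8-character password and 94-character alphabet are illustrative numbers of my own, not figures from the Knox research:

```python
import math

def search_space_bits(length, alphabet_size):
    """Bits of entropy in a password of known length over a known alphabet."""
    return length * math.log2(alphabet_size)

# Hypothetical 8-character password over ~94 printable ASCII characters.
full = search_space_bits(8, 94)

# If the hint leaks the password's length and, say, three of its characters,
# only five unknown characters remain to be guessed.
leaked = search_space_bits(5, 94)

print(f"without hint: {full:.1f} bits, with hint: {leaked:.1f} bits")
```

Losing roughly twenty bits means the attacker’s brute-force job shrinks by a factor of about a million – before they even start exploiting the weak key generation.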

    See the (quite long) article for details. 

    Sunday, 26 October 2014

    The EFF of surveillance self-defence

    I’ll report back when I’ve looked at it.

    Update: the title should have read “on”, not “of”.  I’ve taken a look at it and recommend it. It’s nicely written and contains a lot of good information.

    It has sections:

    • Overviews: intro to threat modelling, choosing tools, creating strong passwords, keeping data safe and encryption
    • Tutorials: lots of stuff including encrypting devices, deleting data securely, use of various tools and technologies
    • Briefings: public key cryptography, how to do protests, VPNs, protecting yourself on social media and more.

    Life sentence for using a computer to damage the economy

    The Tories in the UK are proposing a computer crime bill which includes a life sentence for “[using] a computer in the commission of an offense that damages national security, human welfare, the economy or the environment.” That’s not very specific. The major concern is that governments will use the law to bully whistleblowers.

    Twitpic deletes your photographs

    Twitpic is going out of business and deleting everyone’s photos.

    Which crowdfunded privacy routers are worthy of your trust?

    http://boingboing.net/2014/10/24/which-crowdfunded-privacy-rout.html

    There are a few promising-looking ones.

    Tuesday, 21 October 2014

    Police ‘tackle’ group of people for being a group of people

    This happened in a town near me. Around 200 youths supposedly met in a park. The police didn’t like it one little bit.  Fortunately, it seems like all the police did was tweet about it, strongly implying that the group was up to no good.  There are plenty of places where they’d have been gassed.

    Sunday, 19 October 2014

    Call for teens to self-regulate net use

    By the BBC. It’s not really a call, though. It’s a report of some research the authors did. Why do journalists always insist on presenting research reports and opinions as “calls”?  Anyway, if the article is accurate (the BBC doesn’t link to the report and I don’t have time to track it down right now), the research doesn’t say anything startling:

    Their report came to three main conclusions:

    • Children who have positive offline relationships with their parents are more likely to navigate the web in a sensible way
    • Supportive and enabling parenting has a more positive impact than restricting or monitoring internet use
    • Teenagers left to self-regulate their internet and social media use are more likely to teach themselves new skills online and maintain positive online relationships

    In other words, blocking and monitoring is no substitute for good parenting.

    I’m all for this.  (Good) Parents are already used to negotiating with their children over bedtimes, what parties they can go to, how long they can stay, whether or how much they should drink…  A parent might not have a good appreciation of the dangers their children face on the Internet.  Perhaps this makes them bad parents, I’m not exactly qualified to judge. 

    But I think there’s a place for software that blocks and monitors children’s access to the internet: how else are they going to learn how to break it?  How else are they going to learn how to resist surveillance or even that they can resist surveillance?

    Theresa May defends mass data collection of citizens, says it’s not surveillance

    The UK Home Secretary Theresa May has defended the government’s mass collection of its citizens’ phone and internet traffic, according to the BBC.

    "If you are searching for the needle in the haystack, you have to have a haystack in the first place," she said.

    This is a disingenuous statement at best and at odds with what she said next:

    Mrs May argued that collecting and storing phone and internet records was not the same as "mass surveillance" because "most of the data will not be looked at at all, will not be touched".

    The government is either mining this data (to find the needles, they need to examine every bit of hay) or they’re specifically targeting people they have legitimate reason to suspect.  Which is it?  If it’s the latter, they don’t need to collect everyone else’s data along with that of the people they suspect.  The best they can say is that if they collect everyone’s data, they’ll have historic data of terrorists, which might conceivably help the investigation. Except that they probably won’t. I think terrorists are going to be pretty careful about their communications.

    May’s statements ignore some pretty big considerations:

    • How is the data going to be used?  Will the government only look at the data where there is an existing suspicion of wrongdoing or will they use the data to generate suspicion?  Am I automatically a suspect if someone calls me from a suspect’s phone?  Does that mean they get to examine all my traffic metadata? Does it mean that they have a legitimate reason to tap the contents of my communications?
    • How effective is the mass collection of data in foiling terrorist plots?  In fact, May doesn’t just leave this unanswered, she explicitly refuses to answer and rules out ever answering in the future.
    • What guarantee do we have that mission creep will not occur?  It’s surely inevitable even if such guarantees were in place.  The government had no qualms about secretly (and illegally) collecting this data in the first place.  Why should it be any more honest about how that data is being used and will be used in the future?

    She said:

    "I think there is - not a contract entered into - but an unwritten agreement between the individual and the state that the state is going to do everything they can to keep them safe and secure."

    Well, that just makes me feel less safe. The only reasons for such an agreement to remain unwritten are so that we, the citizens, don’t get to decide what’s too big a price to pay for ‘safety’; so that we don’t get to know what that ‘safety’ actually entails or how effective the measures have been; and so that the government can change the meaning of can in “everything they can” whenever it feels like it.

    She said commercial companies also collected large quantities of data to target advertising at consumers.

    Yes they do and many have abominable practices. But this – as May surely understands – is completely different. First, we get to choose whether we use those companies or not (at least in principle) and second, the agreement with those companies is not “unwritten”.  Sure, companies can and do change their T&Cs all the time and without warning. Sure, this can and does lead to infringements of privacy and our freedom to use the things we’ve bought as we wish, but we do have a choice and a legal system within which we can pursue complaints.  We can also limit the amount of information some companies collect about us.  We might use Google for email and Microsoft for instant messaging, O2 for work calls and EE for personal calls.

    This is not the same as non-consensual and until recently covert mass collection of all our data, to be used for reasons we’re not told about, apparently without judicial oversight.

    She said there was a clear difference between examining data - the time and location of phone calls, for example - and snooping on the contents of calls and emails.

    There is a difference, yes. But that’s a false distinction.  It doesn’t make one benign and the other dangerous.  Besides, if they use metadata to generate targets of suspicion, they can just go and get permission to tap those people’s comms.  May says she agrees to almost all requests so there is no practical difference.

    She said there was a need to educate the public about why bulk data collection was needed

    and then refused to say why it was needed, ruling out ever doing so in the future.

    But back to this point, in closing:

    Mrs May argued that collecting and storing phone and internet records was not the same as "mass surveillance" because "most of the data will not be looked at at all, will not be touched".

    Studies have shown (unsurprisingly) that people behave differently when they think they are being watched or – crucially – when they think they might be watched at any time.  When we change our behaviour because we think someone might be watching, we are accepting surveillance in dribs and drabs. 

    And we can’t back out. 

    South Korea considering re-building its ID system from scratch

    The BBC reports that since 2004, an estimated 80% of South Korea’s 50 million people have had their personal details stolen.  There are a number of problems: 

    • Identity numbers started to be issued in the 1960s and still follow the same pattern. The first few digits are the user's birth date, followed by either a one for male or two for female
    • Their usage across different sectors makes them master keys for hackers, say experts
    • If details are leaked, citizens are unable to change them
    • The government required net-users who wanted to deal with banks or shops online to use a Microsoft product, ActiveX, to provide a digital signature but critics say it was a simple password that could easily be duplicated

    So they’re thinking of rebuilding it from scratch.  An (uncited) expert told the BBC that it might take up to a decade.  I’d call that optimistic.
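To see why the structure of those numbers matters, here’s a back-of-the-envelope sketch. The 13-digit total length is an assumption of mine for illustration; the point, which holds whatever the exact format, is that once the birth-date and gender digits are known, very little of the number is actually secret:

```python
# Sketch: if an ID's leading digits encode the birth date (YYMMDD) and the
# next digit encodes gender, an attacker who knows both only has to guess
# the remaining digits. The 13-digit total length is an illustrative
# assumption, not the documented format.

ID_LENGTH = 13
KNOWN_PREFIX = 6 + 1  # YYMMDD + gender digit

unknown_digits = ID_LENGTH - KNOWN_PREFIX
candidates = 10 ** unknown_digits

print(f"IDs to brute-force per person: {candidates:,}")
```

A million candidates per person is nothing to a computer, and any check digit in the real scheme only shrinks the space further.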

    Saturday, 18 October 2014

    Nintendo will kill your Wii U

    Nintendo recently changed the EULA for the Wii U.  If you don’t agree to the new version, your console is bricked.  People who bought the console under the original T&Cs have no choice but to agree to the new ones if they want to continue using it. 

    When we buy objects, we expect to own them.  It’s perfectly reasonable to expect that we can use them however we like.  As the EFF says:

    He may have expected that, like users of the original Wii and other gaming consoles, he would have the option to refuse software or EULA updates and continue to use his device as he always had before. He might have to give up online access, or some new functionality, but that would be his choice. That’s a natural consumer expectation in the gaming context – but it didn’t apply this time.

    This is a worrying trend and is not limited to consoles.

    Last month, the New York Times reported that some auto loans are accompanied by "starter interrupter" devices that can shut down your car if you're a few days late with a payment or drive out of a designated area. People were suddenly prevented from driving their children to the doctor, stranded when they tried to escape domestic abuse, and in some cases had their cars deactivated while they were on the road. These extreme consequences came without judicial process, and often without notice.

    This is bad news for customers because it shifts the balance of power between suppliers and consumers in a direction unfavourable to consumers.  Suppliers can rewrite their contracts with consumers at any time and force their customers to accept the new terms.

    Passenger privacy in the NYC Taxi dataset

    A researcher shows that the anonymous dataset isn’t all that anonymous.
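A common failure mode in datasets like this is “anonymising” identifiers drawn from a small space by hashing them, which is trivially reversible by brute force – reportedly what happened with the taxi medallion numbers. A minimal sketch; the four-digit identifier format here is a stand-in for illustration, not the real medallion scheme:

```python
import hashlib

def md5_hex(s):
    """Unsalted MD5 of a string, as a hex digest."""
    return hashlib.md5(s.encode()).hexdigest()

# Precompute the hash of every possible identifier. A real medallion space
# is only a few million entries - minutes of work, not years.
rainbow = {md5_hex(f"{n:04d}"): f"{n:04d}" for n in range(10000)}

# Any "anonymised" value from the dataset maps straight back to its source.
assert rainbow[md5_hex("0042")] == "0042"
```

Hashing only protects secrets too large to enumerate; a licence-plate-sized namespace is not one of them.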

    How to enable two-step authentication on everything

    Two-step authentication requires that you enter some additional information after your password.  In most implementations, a service will send you a text message when you try to log in.  The message contains a code, which you then enter into the site.  This improves security in an obvious way: attackers will need your phone as well as your password.
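Text messages are only one delivery channel; many services also support authenticator apps, which typically generate codes locally using the TOTP scheme from RFC 6238. A minimal sketch in Python:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    counter = int(time.time() if at is None else at) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at time 59 the shared ASCII secret below
# yields the code 287082 (the last six digits of 94287082).
print(totp(b"12345678901234567890", at=59))  # -> "287082"
```

Because both sides derive the code from a shared secret and the clock, nothing needs to travel over the network at login time – one reason authenticator apps are generally considered more robust than SMS codes.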

    Gizmodo has an article about how to turn on two-step authentication for lots of sites including Apple, Google, Facebook, Twitter and more.