Tuesday, 31 March 2015

Europol chief says being secure makes us less safe

A European police chief says sophisticated online communications are the biggest problem for security agencies tackling terrorism.

I don’t know whether that’s true.  But if it is true, that doesn’t mean we should ban encryption.  There are other things to consider.

Hidden areas of the internet and encrypted communications make it harder to monitor terror suspects, warns Europol's Rob Wainwright.

I’m sure it does, but that’s not the issue, is it?  He’s not even saying that some terror attacks could have been avoided if it weren’t for encryption.  I’m fairly sure we’d have heard about that if it were the case.  There are other things terrorists can do to keep things secret and no overwhelming pressure to ban those.  And of course there are plenty of legitimate reasons to encrypt things, keeping them away from governments and criminals being a major one.

Tech firms should consider the impact sophisticated encryption software has on law enforcement, he said.

No they shouldn’t.  Their duty should be toward their users, not to the governments and law enforcement agencies who want to snoop on them.

A spokesman for TechUK, the UK's technology trade association, said: "With the right resources and cooperation between the security agencies and technology companies, alongside a clear legal framework for that cooperation, we can ensure both national security and economic security are upheld."

National but not personal security.  “Legal frameworks” are fluid things and tend to flow in one direction only.  It’s hard to imagine a government trying to make it harder for their law enforcement agencies to get at citizens’ data and traffic.  It’s frustrating how blithely supposed experts manage to gloss over this fact.  Wainwright does finally go on to state that encryption is a problem for anti-terrorism:

"[Encryption has] become perhaps the biggest problem for the police and the security service authorities in dealing with the threats from terrorism," he explained.

Banning or breaking encryption isn’t going to stop terrorists from using it unbroken.  It reminds me of the old visa waiver form that non-Americans had to fill in when travelling to the US.  There was a box to tick if you had ever committed genocide.  Then you had to sign it to say your declaration was true, as if lying on the form were a worse crime than genocide.

"It's changed the very nature of counter-terrorist work from one that has been traditionally reliant on having good monitoring capability of communications to one that essentially doesn't provide that anymore."

Mr Wainwright, whose organisation supports police forces in Europe, said terrorists were exploiting the "dark net", where users can go online anonymously, away from the gaze of police and security services.

But that doesn’t imply that banning or breaking encryption is the only or even the best way to solve that problem.  Maybe they should have to live with those restrictions.  Maybe the price of breaking encryption is too great, maybe it wouldn’t be effective anyway.  Maybe it wouldn’t have been such a problem anyway if the NSA, GCHQ and others hadn’t been spying on all of us.

Mr Wainwright really doesn’t like phones with data encryption that the manufacturers can’t break, or encrypted IM.

"There is a significant capability gap that has to change if we're serious about ensuring the internet isn't abused and effectively enhancing the terrorist threat.

Because governments and law enforcement agencies would never abuse the internet…

"We have to make sure we reach the right balance by ensuring the fundamental principles of privacy are upheld so there's a lot of work for legislators and tech firms to do."

Or maybe there’s no work to do because we already have the right balance.  Mr Wainwright doesn’t seem to consider this possibility.

Photo credit: Surveillance via photopin. License.

Monday, 30 March 2015

I still haven't found a Daily Mail-proof irony meter

The Daily Mail has a story about companies selling our details left, right and especially centre.  The Mail's investigation seems as lacking in journalistic integrity as the story is in journalistic standards, but we do know that lots of companies collect, store and sell lots of data about us.  This is bad.  The DM tells us it is bad.

Then it tells us this:

When the Mail contacted some of the 15,000 on the database, they were horrified to hear their details had been sold and said they had never consented to the information being used in such a way.

Yeah... I think I'll leave it there.


Social engineer your way out of prison

A man escaped from Wandsworth prison by using a smuggled mobile to email prison staff masquerading as a senior court clerk and issuing release instructions.
He set up a domain similar to the court service’s official one and sent an email from that domain. 

He was discovered missing three days later when his solicitor turned up to interview him and he wasn’t there.  He later handed himself in to police.

The prisoner had been convicted of various acts of fraud on the social engineering spectrum and was described as “ingenious” by both the prosecutor and the judge.
 
It isn't really all that ingenious, though.  It’s more that we’re all terribly vulnerable to this sort of attack.  We just don’t expect information coming through official-looking channels to be bogus.  We’re strange, hierarchical creatures, aren't we?

Friday, 27 March 2015

North East local ORG Group?

I’m planning on setting up a local ORG group in the North East of England.  There has been a North East group at some point in the past and, for all I know, it is still running.  But the latest post I can find about it dates from 2012.  Still, I’m happy to join up with those guys if they are still around, or to start a new group otherwise.

The Open Rights Group campaigns on exactly these kinds of issues.

Organised activism on these issues has never been more important.  We have governments trying to introduce broken encryption, nationwide firewalls and all manner of mass surveillance.  We have companies using surveillance as their business model without honesty or customer choice.  And we have a population understandably ill-prepared to take back our rights. 

I’d like to see the group focus on education and practical organised activism.  Inspirational talks are great and we’d have those, but I’d like to end each meeting with immediate plans for education and activism in some of the key areas and to keep track of progress and provide support with social media and regular virtual meetings.

It’s early days, but I’m expecting physical meetings to take place in Newcastle or Durham, since those are the most easily reachable places in the North East.  I have some contacts in several of the local universities, mostly in the fields of computer science, social science, law and medicine.  I (and many of those contacts) have industry ties which might also be useful. I’m in the process of contacting all these people to see if they’d like to get involved.

If you live in or around the North East and would like to join the group, let me know.  If you live elsewhere in the UK or the rest of the world and would like to join in, that’s great too.  All these issues are global and I’d like to learn from and contribute to other activism taking place around the world.

So if you’re interested in contributing or know anyone who might be, please let me know in the comments or by emailing me at rob@lookatthestateofthat.com.

How to be safe

I gave a talk a few weeks ago at a youth club.  The talk turned into a discussion session, much to my relief; I think that was more valuable than me talking at them for 20 minutes.  The talk/chat was about online safety and I didn’t want to say the usual stuff (passwords, stranger danger etc.) because I figured they’d have had talks like that before.  They had, and were pretty sensible (at least in principle).  So I talked instead about some of the wider issues of safety that they might not previously have considered.  For example, I stressed the fact that we’re responsible for other people’s privacy and safety as well as our own, telling a story about parents who spied on their child’s phone, discovered someone else’s secret and made that person’s situation worse.  I think some were surprised that their communications might not be as secret as they thought and that the safe spaces in which they’re chatting might not be safe after all, even when infiltrators are genuinely well-meaning.

We spoke about different kinds of safety and what modes of communication might be appropriate for each.  For example, it might be in a child’s interest for her parents to have access to her phone records when absolutely necessary.  A child might accept this but should understand that some conversations should not take place using that phone.  She should understand that certain activities might be out of the question while she’s carrying her phone… but that she’s putting herself in danger of a different kind if she doesn’t have it with her.  Then we spoke about some tools we can all use to help us keep safe.

It was a bit haphazard but I think I got a few useful messages across.  At one point, the supervisor asked for top tips about staying safe online.  I think she wanted me to talk about passwords and stranger danger but what I said instead was this:

To be safe online, we need to be three things:

  • We need to be proactive.  We’re too used to thinking of security as something that lives on our computers and not something we need to be an active participant in.  I pointed to some of the things we’d already talked about as examples of proactive safety.
  • We need to be educators. Since our safety depends on the actions of others, we need to evangelise, correct myths and give practical demonstrations to our peers, our family and sometimes those in authority.
  • We need to be activists. Much of our safety is in the hands of our government and the companies providing the services we use.  We need to show companies and governments that we want to take back some control.  I suspect that most of us will still pay for some services by allowing surveillance, providing we understand what data is being collected and what is being done with it.  The rest of us will probably be happy to pay for services if they are surveillance-free. But first, we need that option.  And we need to show government that breaking encryption and other unwise moves will make the internet less safe for everyone.

I think this was good advice, but your mileage may well vary.

Image credit: https://openclipart.org/detail/168575/safety-helmet

Thursday, 26 March 2015

Remote control

The Daily Mail thinks that airlines should be able to seize control of aircraft from the ground to avert disasters like the recent Germanwings tragedy.

Really, they can’t see a problem with that.  They tut at the aviation industry, saying that the technology exists but the industry refuses to use it.

‘Teenproof’ car problematic as hell

One of Chevrolet’s new cars has a ‘Teen Driver’ mode, which lets parents control and spy on how and where their children drive, as the BBC reports.

Parents who worry about handing over their car keys will be able to spy on their teenager's road skills and even set a speed limit soon.

They actually use the word “spy” like it’s a good thing.

The feature, available in the new 2016 Chevy Malibu, does things like mute the radio if the driver's not wearing a seat belt.

Perhaps it should be called Patronising Mode or Fucking Stupid Mode.

A key fob can also be used to set a speed limit between 40 and 75mph.

If they go over that, visual and audible warnings will be triggered to tell the driver to slow down.

I don’t have much of a problem with this from a privacy point of view, but Fucking Stupid Mode is definitely a good name for it.  If there’s anything likely to make me do something dangerous it’s someone – especially a computer – telling me to do something safe instead.  But you know it’s not going to end there.

The feature also allows parents to see a report of the total distance driven, maximum speed travelled, how many speed warnings were issued or if there were any driver road skids.

And here we are.  I’m not convinced that’s the best way to teach kids to drive safely.  Speaking only for myself, I’d definitely find a way to drive dangerously purely out of spite, one way or another.  I’ve never even particularly wanted to drive dangerously, but now I know what would make me want to.  But I am not as other people.  Parents: if you don’t trust your kids to drive your car properly, guess what you can do.

But again, we all know it won’t end there.  It’ll report where the teens took the car, as well.  Spying on children is a bad idea for reasons so obvious that even now I’m surprised to have to continually point them out, but in case the obvious reasons don’t convince you, there’s this: when I had recently passed my test I drove one friend to have an abortion and another to a place where he’d be safe from abuse.  If my parents had been able to track me, the implications for my friends would have been horrific.

But, of course, we also know it will not stop there.  The logical next step is for parents to have a kill switch if their kids take the car somewhere they don’t want it to be.  Which creates a whole set of new threats.

So where will it stop?  I can only say where it must stop: with children agreeing, if necessary, to data being collected as a condition of their driving the car, but with those same children having control over what data their parents can see. That’s a platform for negotiation. It’s also a platform that could help young people better understand the value of their privacy.

Amazingly, the BBC ends its article like this:

The new system has been criticised for not doing anything to stop drivers from using devices like mobile phones.

It’s going to take me a lifetime to even work out where to begin.

Monday, 23 March 2015

Biometric banking

This BBC article talks about various technologies for banking using biometrics.  I’m familiar only in principle with most, but I know a bit more about the Nymi solution and I think it’s a little undersold.

Nymi is a wrist band which authenticates its wearer by sampling her ECG.  When she takes it out of the box, she records her ECG by touching a panel on the band.  This data is encrypted and stored on a device such as her desktop machine or phone.  Then when she puts on the band in the morning, she sits in Bluetooth range of that device and touches the band again.  This takes another sample of her ECG and matches it against the stored one.  If they match, the band is considered ‘activated’.  If the band is removed or cut, it is no longer activated.

The activated band can then be used to set up relationships with programs running on other devices and thereafter used to authenticate with those devices. 
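
Since the lifecycle is easier to see in code than in prose, here’s a minimal sketch of the enrol/activate flow as I understand it.  Everything below – the names, the HMAC stand-in for ECG feature extraction, the exact-match test – is my own invention for illustration; real ECG matching is fuzzy rather than byte-for-byte, and this is emphatically not Nymi’s SDK.

```python
import hmac, hashlib

SECRET = b"stored-on-trusted-device"  # stands in for the encrypted template store

def template_of(ecg_sample: bytes) -> bytes:
    # Stand-in for real ECG feature extraction; real matching is fuzzy.
    return hmac.new(SECRET, ecg_sample, hashlib.sha256).digest()

class Band:
    def __init__(self):
        self.template = None
        self.activated = False

    def enrol(self, ecg_sample: bytes):
        # Out-of-the-box step: record the wearer's ECG template once.
        self.template = template_of(ecg_sample)

    def activate(self, ecg_sample: bytes, in_bluetooth_range: bool) -> bool:
        # Morning step: a fresh sample must match the stored template
        # while the band can talk to the trusted device.
        self.activated = (in_bluetooth_range
                          and self.template is not None
                          and hmac.compare_digest(template_of(ecg_sample),
                                                  self.template))
        return self.activated

    def removed(self):
        # Taking the band off (or cutting it) kills the activated state.
        self.activated = False

band = Band()
band.enrol(b"alice-ecg")
assert band.activate(b"alice-ecg", in_bluetooth_range=True)
band.removed()
assert not band.activated
```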

The most obvious use for this is to automatically log a user in when she sits down at her machine or unlock her phone when she picks it up, but there are more interesting possibilities, including banking.  With an activated band, there’s something the user has (the band) and something she is (her ECG).  Personally, I’d prefer systems that required something I know, as well.  The Nymi band can recognise gestures, but that doesn’t seem like a very good solution for banking, for obvious reasons.  I think a PIN would be fine; different PINs for different payment methods, preferably.

This isn’t a bad scheme.  Removing the band deactivates it so that nobody else can use it.  It can only be reactivated when the user is in range of (and logged into) the authenticating device.  There are some concerns, of course.  This is a new device (not available commercially yet; I have the developer version).  It has yet to be seen whether its security is up to scratch.  There are several potential vectors for attack and I’d like to see a better track record before I used the band for banking (certainly before using it without a password or PIN).

But I like the approach of the biometric sampling being secondary and never – in theory – out in the wild.  It seems a lot more difficult to steal and replicate my ECG without my knowledge than it is to replicate my fingerprints or – I assume – my iris. And if I ever find myself in a spy movie, at least nobody will cut off my finger or pluck out my eye to get at my stuff. They’d just have to force me to comply with their demands then kill me.  Wait… I think I just found a flaw…

Note: Nymi claims that the consumer version of the band will ship with Bitcoin payment and it has been doing banking trials with the Halifax.  I’ve no idea whether the first claim is true (Nymi has delivered much of what it promised but on a much slower timescale than it expected) and I’ve no idea how the Halifax trial went.

Wednesday, 18 March 2015

Safety and herd immunity

I gave a talk this week to a group of around 12-15 year olds on internet safety.  I didn’t want to give the obvious talk: stranger danger, not posting nude pictures of themselves and so on.  I figured they’d have heard those talks before.  So I spoke instead about some of the less obvious dangers and how we should be proactive about our safety, we should educate our peers and our parents and we should be internet safety activists.

They weren’t all interested, but one topic that got them talking was the idea that safety is a mutual concern.  I think they understood that idea better than most adults I’ve spoken to about it.

Most people find it quite surprising that one of the biggest vectors of privacy loss is our loved ones.  I don’t know why; it seems quite obvious.  We can be as careful as we like about not revealing personal information, only to have a friend innocently blurt out the details we’ve been trying to conceal.  Our friends don’t necessarily know what we’d like to keep secret, especially because, by their very nature, those things don’t tend to come up much in conversation.  On top of that, our friends usually don’t know all our other friends and might not know about the details and complexities of relationships.  This happened a lot in the early days of social media; people would post outrage in their friends’ spaces, not realising that their friends’ mothers were all following.

But it goes further than that.  I need to come up with new examples, but I really like this one, so sorry if you’re sick of it.  Amazon has a gift-wrapping service.  If you choose this option, Amazon will wrap what you bought and send it directly to a recipient with a note.  This is convenient, especially if you’ve left buying a gift to the last minute, but it also throws your loved ones under the bus.

By using the service, you’ve told Amazon that your friend exists, knows you well enough for you to buy them a gift, possibly when their birthday is, the kind of things you think they’d like, the things you looked at before deciding on that thing and so on.  If they are also an Amazon customer, you’ve added them to the social network Amazon has built about you, and you to theirs.  This data is of immense value to Amazon and it all happens without your friend’s consent.  They are paying the price for your convenience.

Of course, that’s just privacy.  There are lots of other aspects to online safety.  At the meeting we spoke a little about various forms of bullying and the fact that social networks make it easy for us to become the bullies, sometimes without even realising it.  In his recent book, So You’ve Been Publicly Shamed, Jon Ronson investigates people who have been mobbed on social media, sometimes for making comments that might have been poorly considered but without malicious intent.  Lives have been ruined by people piling on the shaming bandwagon.

It’s our responsibility to consider the safety of our friends and loved ones when we interact with them, but also the safety of people we disagree with in public.  There are often great differences in power and influence on social media and people can be silenced or badly hurt when an influential person disapproves.

These young people hadn’t really considered this, but once I pointed it out it seemed obvious to them and I think it might be the message they took home.

Being safe is a little bit like herd immunity.  It’s a balance between what’s good for the individual and what’s good for everyone.  When we don’t look out for each other’s safety, a kind of tragedy of the commons occurs: a few people benefit greatly (at least temporarily), while everyone else loses.

Center for Inquiry statement on Freedom of Expression on the Internet at the Human Rights Council

CFI’s Michael De Dora at the 28th UN Human Rights Council, 13/04/15.

He talks about how governments threaten freedom of expression by blocking sites and removing posts.  It’s good, it’s important, it’s to the point and it’s only a couple of minutes long.

Tuesday, 17 March 2015

Breastfeeding is OK, nipples and arses are not

Facebook has published its community standards, which is commendable if shockingly overdue.

We want people to feel safe when using Facebook.

It’s a mystery why they want people to feel safe rather than actually be safe, but it’s a start.  The community standards statement tells us what’s allowed on Facebook, what’s not allowed and what we’re allowed to complain about.

Here’s what it says about being safe:

  • We remove credible threats of physical harm to individuals. I’m assuming (hoping) they did that already. But there are three obvious concerns. First, they’re the ones who get to decide what’s ‘credible’ and don’t seem inclined to tell us much about their criteria.  Second, what about threats of doxxing or threats to otherwise harass? Third, they remove the threats but not the people making those threats?
  • We prohibit content that promotes or encourages suicide or any other type of self-injury, including self-mutilation and eating disorders.  Seems fair enough, although it’s not clear what they mean by “prohibit”.  Presumably they’ll remove it, which is fine providing that their definition of ‘promote’ is up to scratch.  It seems it might be: “People can, however, share information about self-injury and suicide that does not promote these things.”
  • We don’t allow any organizations that are engaged in the following to have a presence on Facebook (Terrorist activity, or Organized criminal activity).  Sounds about right.  They say they’ll remove content that supports such groups or their leaders or condones their violent bits.  It’s quite a short list to be made into bullet points, isn’t it?  I can’t help but wonder if some other types of hate were taken off the list.  Don’t organisations engaged in other types of violence – racist, homophobic, sexist – make the list?  Maybe that comes under:
  • We don’t tolerate bullying or harassment.  Providing it’s not in the public interest. Bullying celebrities seems to be OK, although you can’t harass or threaten them.
  • We prohibit the use of Facebook to facilitate or organize criminal activity that causes physical harm to people, businesses or animals, or financial damage to people or businesses. Other sorts of criminal activity are OK. That’s probably fair enough: there are laws in some places that I wouldn’t want to endorse. If I were writing this list, I’d probably do it in a similar way. I might add a few extra things to the list, though. “We do, however, allow people to debate or advocate for the legality of criminal activities, as well as address them in a humorous or satirical way.” So that’s a good sign.
  • We remove content that threatens or promotes sexual violence or exploitation. This includes the sexual exploitation of minors, and sexual assault.  This seems OK until they specifically mention promotion of various services from sex workers such as prostitution, “sexual massages” (whatever that means), escort services (which seems something of a blanket) and “filmed sexual content”.  Since Facebook obviously doesn’t classify these things as “criminal activity that causes physical harm etc.”, it would be interesting to know why they single this stuff out as automatically involving sexual violence or exploitation.
  • You can’t buy or sell “drugs and marijuana” at all, but alcohol and tobacco are for some reason neither, so it’s OK to buy or sell them if the relevant laws say it’s OK.  That doesn’t seem especially coherent.

Here’s what they say about encouraging respectful behaviour. Quite a lot of ‘respect’ is about not posting pictures of certain, arbitrarily-defined bits of people:

  • Nudity is always and automatically bad.
  • Genitals and “fully exposed buttocks” are for some reason especially bad.
  • Female breasts are OK providing they don’t show *gasp* nipples.
  • They’re totally cool with breastfeeding or showing breasts with post-mastectomy scarring, despite their horrible past record on this issue. I’m going to guess that the nipples rule supersedes this one.
  • You can’t describe explicit sex, for some reason.

The rest is about hate speech. Facebook is against content that “directly” attacks people based on race, ethnicity, origin, religion, sexual orientation, sex, gender, gender identity, disability or disease and:

  • Organizations and people dedicated to promoting hatred against these protected groups are not allowed a presence on Facebook. Great except… “protected groups”?  Really?  “As with all of our standards, we rely on our community to report this content to us.” Oh….kay… but it’s the first we’ve heard of it. Why suddenly mention it now?

Then there’s stuff saying that if anyone says this kind of thing, it’s totally their own fault and not Facebook’s.  They didn’t say that about their ban on carefully-defined illegal stuff.

On balance, I’m not impressed.

French government blocks websites willy-nilly

The French government have used new rules which allow them to block sites without court approval.  And by “blocked” I mean “ordered ISPs to block”.

The new powers apply to sites suspected of commissioning or advocating terrorism or distributing indecent images of children.

(source: http://www.bbc.co.uk/news/technology-31904542)

Thursday, 12 March 2015

CIA tries to spy on your iOS devices

Boing Boing writes:

Researchers working with the Central Intelligence Agency have conducted a multi-year, sustained effort to break the security of Apple’s iPhones and iPads, according to top-secret documents obtained by The Intercept.

The security forces must be particularly upset with Apple now that it’s committed to encryption it can’t itself break.

Here is the Intercept’s report.

Philip Hammond tells us to stop talking about Ed Snowden

Philip Hammond told an audience at the Royal United Services Institute that the debate about surveillance "cannot be allowed to run on forever."

He’s worried that we might distract spies from their spying if we talk about it too much.

The minister added that he, the prime minister and the home secretary are already "determined to draw a line under the debate" with legislation. This, he promised, will give the agencies the powers they need, and an oversight regime that appeases citizens.

Nothing like passing laws quickly to stop people complaining that their rights are being eroded.

Luckily, it turns out that our laws are totally fine and completely effective.  We just need more of them, not fewer.

"We are right to question the powers required by our agencies, and particularly by GCHQ, to monitor private communications in order to do their job. But we should not lose sight of the vital balancing act between the privacy we desire and the security we need," he said.

That’s the point of keeping everyone talking about it.  Shutting people up isn’t going to help us come to the right balance, especially since the Foreign Secretary has already made up his mind.

"From my position as foreign secretary, responsible for the oversight of GCHQ, I am quite clear that the ability to intercept ‘bulk communications data', to subject that metadata to electronic analysis and to seek to extract the tiny percentage of communications data that may be of any direct security interest, does not represent an enhancement of the agencies' powers," he said.

Well, no.  Because they’ve been doing it all along in secret (and illegally) anyway. That’s not what people are arguing about.  They’re arguing that these powers are far too great and are likely to become more so.  Keeping quiet about it is the last thing we should do.

Protest can work

A court in the Hague has struck down a law requiring ISPs to keep customer data for 12 months.  The law was found to violate data protection and privacy rights and was “more than strictly necessary to meet the claimed needs.”

Because activism!

Wikimedia sues NSA

The Wikimedia Foundation is taking legal action against the NSA and the US Department of Justice over its programme of mass surveillance.

"By tapping the backbone of the internet, the NSA is straining the backbone of democracy," said Lila Tretikov, executive director of the Wikimedia Foundation, in a blogpost announcing the legal action.

Targeting the backbone means the NSA casts a "vast net" and inevitably scoops up data unrelated to any target and will also include domestic communications, violating the rules governing what the NSA can spy on, said Ms Tretikov.

The Snowden papers revealed that Wikipedia had been explicitly targeted for surveillance.

"By violating our users' privacy, the NSA is threatening the intellectual freedom that is central to people's ability to create and understand knowledge," said Ms Tretikov.

The NSA and DoJ have yet to comment.

Banning Tor is unwise, infeasible

David Cameron doesn’t like Tor because it facilitates online anonymity, which he really doesn’t like.  Tor encrypts your browser traffic and bounces it around a network of volunteer-run relays, preventing anyone watching from analysing that traffic or learning your physical location.  Cameron doesn’t like this because he wants the government to be able to spy on everyone in the name of what he likes to call ‘safety’.  I’ve yet to understand how making everyone less safe is going to make them safer, but that’s just me.
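
As an aside, the layered encryption that makes this possible is a neat idea in its own right.  Here’s a toy illustration of onion routing: the sender wraps a message in one layer of encryption per relay, and each relay peels off exactly one layer, so no single relay sees both who you are and what you’re saying.  The stream cipher below is a throwaway toy of my own, not real cryptography, and Tor’s actual protocol is far more involved.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustrative only --
    # Tor uses properly vetted ciphers and key exchange.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_layer(data: bytes, key: bytes) -> bytes:
    # XOR with a keystream: applying it twice with the same key undoes it.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# One shared key per relay on the circuit (assumed already negotiated).
relay_keys = [b"entry-key", b"middle-key", b"exit-key"]

# The sender wraps the message innermost-layer-first...
message = b"GET http://example.com/"
onion = message
for key in reversed(relay_keys):
    onion = xor_layer(onion, key)

# ...and each relay strips exactly one layer as the message passes through.
# Only the exit relay sees the plaintext, and only the entry relay sees you.
for key in relay_keys:
    onion = xor_layer(onion, key)

assert onion == message
```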

The BBC reports that ‘parliamentary advisors’ have told him and MPs that banning Tor would be unwise and technically infeasible. They’re right.

The Parliamentary Office of Science and Technology (Post), which issues advice to MPs, said that there was "widespread agreement that banning online anonymity systems altogether is not seen as an acceptable policy option in the UK".

Unwise and infeasible, as they said.

Speaking in January, following attacks by gunmen in Paris and its surrounding areas, David Cameron said there should be no "means of communication" the security services could not read.

He said: "In extremis, it has been possible to read someone's letter, to listen to someone's call to mobile communications.

"The question remains, 'Are we going to allow a means of communications where it simply is not possible to do that?' My answer to that question is, 'No, we must not.'"

And there’s the problem.  It’s a political stance rather than a practical one.  Security depends to a large degree on understanding risks and deploying and evaluating effective counter-measures to those risks.  Cameron alludes to vague, scary-sounding risks which could easily be read to include absolutely any communication at all, and to an outright ban on encryption that cannot be broken by the government.  There are two major problems with this.  First, encryption that can be broken by government will sooner or later be broken by someone else.  There’s no such thing as a “golden key” (as Cameron likes to put it) that works only for the good guys and not the baddies.  This reduces our safety by laying our communications open to criminals and foreign governments.

Second, it is not a measured, effective or proportionate counter-measure to the vague security problem of ‘terror’.  We’d be giving up an enormous amount of personal freedom and granting our intelligence agencies great and no doubt extensible powers, which would be very difficult to retract later.  So we wouldn’t be safe from our own government, either.  Literally nobody believes that once a security agency has access to secret information, it won’t find new ways to use it.  Mass surveillance harms everyone and besides: it’s not remotely clear that it would foil more terrorist plots.

The advisors have advised.  Let’s see if Cameron is listening.

More on the spying report

The BBC writes about the parliamentary report by the Intelligence and Security Committee, which is due to be released later today.

The committee's report is expected to look at whether current legislation provides the necessary powers, what the privacy implications are and whether there is sufficient oversight and accountability.

I’m not very optimistic.  When the former committee chair was asked if there was any evidence that more spying powers were needed, he said the evidence was that they hadn’t caught enough terrorists.  This is exactly the sort of reasoning that seems likely to result in drastic, open-ended and gradually increasing new powers.  How many caught terrorists would be enough?

[The committee] heard evidence in public and in secret.

Also due to be published later is the annual report from the judge who oversees the interception of communications by spies and the police.

It will provide details on the number of times this has occurred, and any errors or misuse.

We’ll see.

Reporters Without Borders

The campaign group Reporters Without Borders is mirroring news sites that are banned in various countries:

  • Grani.ru, blocked in Russia
  • Fergananews.com, blocked in Kazakhstan, Uzbekistan and Turkmenistan
  • The Tibet Post, blocked in China
  • Dan Lam Bao, blocked in Vietnam
  • Mingjing News, blocked in China
  • Hablemos Press, blocked in Cuba
  • Gooya News, blocked in Iran
  • Gulf Centre for Human Rights, blocked in United Arab Emirates
  • Bahrain Mirror, blocked in Bahrain and Saudi Arabia

They are confident that the proxies won’t get blocked because they are running on Amazon’s cloud service (with mirrors on Google and Microsoft clouds) and they don’t expect countries to block those.  Whether they’re right remains to be seen.  Traffic between users and the proxies is encrypted.

This is important work and deserves to be supported.

Here go our friends again, costing us privacy…

Bruce Schneier reports on research that geotags Twitter users by mining their social graphs.

The method seems pretty successful:

Leave-many-out evaluation shows that our method is able to infer location for 101,846,236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.

Lots of tweets were accurately tagged to within 1km.
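
The intuition is simple even if the paper’s machinery isn’t: your friends’ known locations are strong evidence for yours.  Here’s my own cartoon of that idea – a crude neighbour-voting scheme, not the authors’ method:

```python
from statistics import median

# Users with known home coordinates "vote" for their friends' locations;
# users without coordinates adopt a robust centre (the coordinate-wise
# median) of their contacts. A cartoon of the idea, not the paper's method.

known = {
    "alice": (54.97, -1.61),   # Newcastle
    "bob":   (54.78, -1.58),   # Durham
    "carol": (51.51, -0.13),   # London
}
contacts = {
    "dave": ["alice", "bob"],        # dave mostly talks to the North East
    "erin": ["carol", "dave"],
}

def propagate(known, contacts, rounds=3):
    inferred = dict(known)
    for _ in range(rounds):                 # repeat so estimates spread
        for user, friends in contacts.items():
            points = [inferred[f] for f in friends if f in inferred]
            if points:
                inferred[user] = (median(p[0] for p in points),
                                  median(p[1] for p in points))
    return inferred

print(propagate(known, contacts)["dave"])   # lands between Newcastle and Durham
```

Crude as it is, something in this spirit is what geotagged over 80% of public tweets.  The only real defence is for your whole social circle to withhold location data – good luck with that.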

It’s getting harder not to reveal private information about our friends by accident.

Wednesday, 11 March 2015

Want to bet that the report doesn’t recommend more spying?

The parliamentary intelligence and security committee is due to report its findings today.  It’s mostly an exercise in justifying more powers to spy on us.  The former chair of the committee appeared on BBC’s Breakfast news programme where he said that the evidence that spies need more spying powers is that they haven’t caught more terrorists. 

Really. That’s genuinely what the man said.  It doesn’t bode well for the report.  Hopefully, however, we’ll learn more about what – if any – oversight there is and will be of the spies.

Tuesday, 10 March 2015

If Apple won’t spy on its users we should ban gov employees from using iPhones at work

Amitai Etzioni says:

Technology firms are implementing high end encryption that could derail efforts to track terrorists. The White House should push back against this trend.

The greatest new threat to American security is that thousands of Westerners will return from the Middle East and Africa with the training and intent to commit acts of terrorism.

(emphasis intact)

Is it?  Etzioni doesn’t make clear what he means by “security” or “threat”, let alone on what basis he reaches his conclusion.  He states it as a fact when it’s not a fact.  It’s not even a coherent statement.

He admits that some people travelling to the Middle East might be there to do good, humanitarian things.  Or might alternatively be journalists or business people.  So we should allow them to travel (thanks, Amitai) but should definitely “keep an eye on them, for a defined period of time”.  That involves, he says, tracking their phone and email communications:

to determine if these returning Westerners are keeping in touch with ISIS or al-Qaeda, consulting web pages that teach one how to make bombs, or forming local terrorist cells.

It seems unlikely that people would travel abroad for terrorist training and then look up how to be a terrorist on the web when they get home.  That’s a pretty piss-poor training course and they should probably ask for their money back.  But Etzioni thinks this is the greatest new threat to American security.  He thinks the government should be able to spy on everyone travelling to the Middle East for any reason and for an indeterminate length of time.

But they can’t! Because tech firms are evil!

A major obstacle in proceeding are the major Internet and telecommunication companies—the high tech giants, including Apple, Facebook, and Google—who adamantly object to the new security measures the government is seeking and to many already in place. In doing so, they are placing private profits ahead of the public interest.

That rather depends on what the public interest is.  If the public interest is spying on absolutely everyone in case they might one day turn out to be a baddie, then I agree that scrupulous companies are not acting in the public interest.  But I don’t think that’s what the public is interested in.

Etzioni claims that Ed Snowden’s work was detrimental to US corporations because revealing the extent to which the US spies on every single person ever might make people less likely to use American products.  Each loop of that argument is more circular than the last.

But this is what he suggests to combat the threat he seems so incapable of defining or quantifying:

If private corporations continue to put their profit motive and ideological beliefs above the needs of the public, I believe the Obama administration—if it can find its backbone—should respond with a market-based solution. It should announce that it will not do business with—not purchase products or services from, not have our diplomats overseas act on behalf of, and not allow federal employees access during work hours to—corporations that do not cooperate with the war against terrorism.

I love some of this language.  “IF YOU DO NOT COOPERATE WITH OUR WAR, THEN…” is my favourite.  But I also love the idea of government employees being banned from using their iPhones during work hours or from using Google services.  Or from using services that buy data from or sell data to Google, presumably.  What if a government employee is served an advertisement by one of Google’s many advertising subsidiaries?  Who gets the blame?  Who enforces the ban?  Do government employees have fewer rights than everyone else because of a policy of the current government?  Good fucking luck explaining why that’s constitutional.  It might be more effective to ban US companies from buying data that originates from companies that don’t comply with government requests for broken encryption.  You’re going to need even more luck for that, though.

But far be it from me to imply that Etzioni hasn’t thought things through.  He understands the problems:

I grant that it might be painful for government employees to do without Google and Apple for a few days, or even longer, while on the job. But it is a price well-worth paying to convince the Silicon Valley CEOs that a good American citizen balances profit and ideology with concerns with the public interest and the urgent need to head off a new wave of terrorist attacks of the kind evident in Europe.

Hilarious.

University violates rape victim’s medical privacy (legally)

Rebecca Watson talks about how a reasonable expectation of privacy turned out to be false.

https://www.patreon.com/creation?hid=1845255

The university forced a rape victim’s doctors to release her therapy records without her consent.  Rebecca explains that in the US, if a student receives medical treatment through the university and on campus, her medical records are considered academic records rather than medical ones and therefore don’t have the same privacy protection medical records from other providers enjoy. In this case, the doctors fought the university’s demands that they release the victim’s records, but eventually had to comply.

This is a violation both of privacy and of reasonable expectation of privacy.  It’s also a devious and pernicious little loophole since many students couldn’t afford medical care from providers outside the university, even if they knew about the loophole in the first place. 

I’m willing to bet that the doctors didn’t know about this loophole either.  Let’s hope that all campus doctors learn about this case so they can at least warn students when they sign up and again when they make an appointment.

Is cash king?

It’s a phrase I’ve been reading a lot lately.  There are various schemes in the UK to reduce the operating costs of council-run services by eliminating cash where possible.  For example, certain councils (most noisily Brighton City Council) are rolling out cashless (and cardless) parking.  There are already lots of these schemes around.  Motorists use their phones to pay for parking by using an app, texting or through an automated call.  This lowers a council’s costs because they no longer have to pay people to empty cash from parking meters or maintain the meters.  In some cases, licence plate recognition is used to enforce payment, leading to a further saving on traffic wardens.  In some cases, motorists who prefer to use cash or a card might be able to pay in a local shop.

There are several benefits to motorists, who no longer have to carry change, can be alerted when parking is about to expire and can top up the meter remotely.  But the approach has been criticised by some, who argue that not everyone has a mobile phone, that it’s too complicated to use phones for payment (older people are often cited as being incapable of this, rather wrongly, I think) and – somewhat dogmatically – that cash is king.

There are definitely some benefits to using cash, but like all things, there’s a balance to be struck between various competing factors such as convenience, security, anonymity and likelihood of fraud or theft.

Credit cards are a good example of this kind of trade-off.  They have arguably reduced security in exchange for convenience, speed, interoperability and ease of use.  With my credit or debit card, I can pay abroad in either local or home currency without needing to visit an ATM or struggle with unfamiliar notes and coins.  I can pay when I’m not present, by phone or on the web.  With my card or card details alone, a criminal cannot access my money.  And credit card companies insure transactions, protecting me to an extent against fraudulent ones.

But there are several parties involved in each card transaction, including the merchant, the acquirer, the card issuer and the card network.  This means at least four targets for attack with each transaction and, with security being only as good as the weakest link, this can be a concern.  A number of breaches in the security of companies in this chain have made headlines recently, and these are only the ones we know about.  With so many parties involved, we run the risk of losing control of the credentials we use to access our finances.  In addition, credit card transactions are very easily traceable and the companies in the transaction chain are certainly selling transaction metadata.  Government and law enforcement agencies routinely search transaction data to investigate crimes or – increasingly – to predict them.

Cash, on the other hand, is convenient in some situations but not in others.  Specifically, it’s convenient if the payer and merchant are both present, the payer is carrying sufficient cash and the merchant has sufficient change.  It is not usually convenient when purchasing high-value items, since cash takes up space (the greater the value, the greater the space) and there are usually limits on the amount that can be drawn daily from ATMs.  Larger amounts can be drawn from staffed bank branches, but these are increasingly few and far between and available only in banking hours.  Cash is also easy to steal, can be used by thieves with relative anonymity and can be forged.  Cash is relatively anonymous, but not as anonymous as people tend to think.  Notes have serial numbers and can be traced via those who record them (mostly banks).  The bank knows which notes it has issued to which customer, for example.  ATMs (and bank branches, of course) photograph users during transactions.  Cash is also inconvenient for many merchants, who carry risks in holding and banking it.

Digital currencies have a different set of trade-offs again.  Third parties are not needed and users can pay both reliably and provably.  They have some of the benefits and weaknesses of both cash and cards.  Like cash, they can offer some anonymity and avoid tracking or disclosure of transactions.  Like cash, there are no intermediaries to worry about, but neither are there guarantees of payment protection of the kind offered by cards.  Like cards, digital currencies are potentially very convenient, can be used regardless of location and without the payer being physically present.  But also like cards, technological components at the user or merchant end could be subject to attack.  Users may access their accounts via their phones, computers or other devices, which might have security problems of their own.  There are various ways to mitigate these risks.

In considering whether cash is king, we also have to look at our current infrastructure.  Lots of places are geared up for, and used to, dealing in cash.  A couple of decades ago, I holidayed in a fairly remote part of Scotland with friends.  Being used to living in cities, none of us brought much cash and we found ourselves crippled for a day or so because nowhere local accepted cards and there were no ATMs nearby.  These days everywhere takes cards too, but the infrastructure for processing card transactions was not made with user privacy in mind.  Schemes such as digital cash, which do have privacy as a key design issue, require a different kind of infrastructure and – crucially – don’t require a lot of the expensive infrastructure that banks and other organisations have been putting in place since the 60s.  A sudden move to digital cash would be a big deal and not much liked by the banks.  Presumably that’s one of the reasons that many apparently new forms of payment are based on the existing card payment and banking infrastructure.

So the banks don’t like it and government and law enforcement like it even less.

Cash is obviously not king. Like everything privacy and security related, it depends on circumstances and personal choice to the extent allowed by our various legacy infrastructures.  I think we all knew that.  My point here has been to describe some of the compromises we need to make when thinking about what sort of payment to use for a transaction and to show that current entrenched systems are designed to be difficult to avoid.  New ways of paying that are better for the payer and the merchant will have to overcome the inertia of those systems.

“I don't trust the motives behind Bill C-51”: sinister goings-on in Canada

Linda Leon writes an open letter questioning the motives of Bill C-51 in Canada.

It’s the usual sort of thing: otherwise meticulously written documents that somehow fail to define what constitutes ‘terrorists’ or ‘terrorism’; bewildering terms such as “kinetic powers” used to disguise and gloss over the formation of a secret police force; unsupported claims about the necessity and likely effectiveness of the new powers.

You know, the usual.

Sharing

This approach seems to use privacy information to recommend to users an audience for each Facebook post.  It’s claimed that it balances the benefits of posting against the risks of sharing.  I haven’t read the paper yet, but my guess is that the algorithm does things like examine the privacy settings of friends and exclude them from sensitive posts if those settings aren’t up to scratch.  This is not quite the ‘breakthrough’ the headline declares it to be, but it’s the sort of thing I’d like to see social network providers adopting.
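
If my guess is anywhere near right, the core might look something like this toy – which, to be clear, is pure speculation on my part and not the algorithm from the paper:

```python
from dataclasses import dataclass

# Toy audience recommender based purely on my guess above. Each friend
# gets a trust score derived from their own privacy hygiene; sensitive
# posts are recommended only to friends whose score is high enough.

@dataclass
class Friend:
    name: str
    profile_public: bool    # does their own profile leak to everyone?
    reshares_often: bool    # do they habitually pass content along?

def trust(friend: Friend) -> float:
    score = 1.0
    if friend.profile_public:
        score -= 0.5
    if friend.reshares_often:
        score -= 0.3
    return score

def recommend_audience(friends, sensitivity: float):
    # sensitivity in [0, 1]: 0 = public holiday snap, 1 = deeply private.
    return [f.name for f in friends if trust(f) >= sensitivity]

friends = [
    Friend("alice", profile_public=False, reshares_often=False),
    Friend("bob",   profile_public=True,  reshares_often=True),
]
print(recommend_audience(friends, sensitivity=0.6))  # ['alice']
```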

They probably won’t, though.  Users are not the customers of social networks and unless we pressure their operators to care more about user privacy it’s not in their interests.

http://www.siliconrepublic.com/innovation/item/41057-university-researchers/

Thursday, 5 March 2015

Not understanding threats

The BBC reckons that:

US intelligence agencies have placed cyber attacks from foreign governments and criminals at the top of their list of threats to the country.

How is this list ordered?  The most likely things to happen?  The things that are most dangerous if they do happen?  The threats most easily or effectively mitigated? Some metric that includes some or all of these? Something else?

Not important.  The news appears to be that there’s some sort of official list of threats and that cyber attacks have moved right to the top of that list from… wherever they were before.  It’s especially odd since the person apparently announcing this increasing threat said he “no longer believed the US faced cyber Armageddon”.

We’re really bad at understanding threats and shallow reporting like this doesn’t help.

Speaking of drones

Speaking of drones, the BBC has an inexplicable article on the five best ways to bring down a drone.

French authorities have been left mystified by two consecutive nights of illegal drone flights over central Paris.

It’s OK though, they’re blaming an oaf:

Environmental activists, terrorists, and pranksters have all been mentioned as possible suspects, but no-one has claimed responsibility.

Not terribly worthwhile speculation.

The difficult question now for the Paris authorities and in cities around the world is, how do you catch a drone?

No it isn’t.  The difficult and only worthwhile question is why the drone flights are illegal in the first place and whether they should reconsider the nature of boundaries rather than just making more of them.

The BBC has nevertheless taken it upon itself to look at what it says are the five best ways to catch a drone, whatever that could even possibly mean.  Here they are:

  1. Shoot it down
  2. Laser it (which is somehow different to shooting it down)
  3. Geo-fencing
  4. Use a giant butterfly net
  5. Jam it

Oddly, they have a sixth option, which for some reason doesn’t get a number:

Authorities with the means could also hack into the aircraft and seize its controls.

Thanks for that, BBC.  Nice to know my licence fee is going toward excellent articles like this.  Leaving drones the fuck alone unless they actually become a problem is an option they haven’t considered.

Lords want to register drones

The House of Lords is doing what it does again, this time urging the EU to create an online database of drone owners.  This, it says, is to prevent people flying drones into aircraft, although – as with virtually all such databases – it’s not clear how this would help.  When asked about this, the House of Lords EU committee said that the database “would help the authorities manage and keep track of drone traffic.”

Your guess is as good as mine.  The committee made some other predictable recommendations, including geo-fencing and kite marks to certify which drones are safe to fly.

All these recommendations are troubling.  I might be open to the idea of licensing drone pilots after a short (and cheap) course on safety, responsibility and the law, but this seems a rather different thing to registering drone owners.  The only reason for registering drone owners is to have a handy list of people to investigate if something happens that they don’t like.  Given that the majority of people doing bad things with drones are unlikely to bother to register them, the database is likely to be of limited use at best.

I can see the point of geo-fencing, to an extent.  We probably shouldn’t be flying drones around airports, for example.  But it’s dangerous territory.  Currently in the UK we’re forbidden to fly drones within 150m of an area where large crowds are gathered.  This is problematic because those are exactly the sorts of places we need drones to go; if police are mis-handling a crowd-control situation, for example, it’s something we need to know about and document.

Kite marking is also superficially beneficial, but again we have to be careful.  If it becomes illegal to fly drones without the kite mark, it’s tantamount to outlawing hobbyists from building their own drones.  Not only is this a ridiculous over-reaction to the perceived threat, but it would help enforce whatever geo-fencing or spyware the government wants to force on drones.

I agree that we need to consider the threats posed by drones but we also need to think about the threats of drone regulation.  In particular, I don’t want it to be illegal to film the situations where our police are under the most pressure and most likely to misbehave.