Wednesday, 29 April 2015

Don’t tell children how quickly to grow up

A school principal emails 2000+ parents of her students:

We all know that our world is now engulfed in technology. It can be a great tool but can also be problematic when not used properly. Instagram has become the favorite of many intermediate school students. The #1 item in the “Terms of Use” for Instagram clearly states that you must be 13 years old to use the service. There are very good reasons for this. Most importantly is that it is illegal for the service to collect information from anyone under 13 years old and the site does collect information from any account that is created. If they discover that someone is under 13, they will verify the age of the person who created the account and then delete it if they are under the age of 13. If your child has an Instagram account and has agreed to the terms and conditions of the service, we urge you to rethink that decision. Our children grow up so fast. Let them stay young as long as we can.
Unfortunately, inappropriate use has impacted our students socially because they often don’t have the maturity to disregard hurtful comments. Please help us ensure that our children are not involved in social media at the intermediate level. They will have plenty of time to face those issues. Fifth and sixth graders shouldn’t have more to worry about than necessary.
Our school counselors, [redacted], can help you and your child if you need guidance.

Thank you.

I agree with the parent who reported this letter.  I don’t believe that keeping children ‘young’ for as long as possible should necessarily be a parental goal, and it certainly shouldn’t be a school’s job, or a school principal’s.  Isn’t the main job of schools to prepare children for adult life?  We need to protect our children, of course, but covering them in bubble wrap isn’t always a great solution.  I gave a talk to a group of ~11 year olds recently and as part of the discussion I asked about comments they’d received on YouTube videos they’d uploaded.  They said, as if reading from a script, “we haven’t uploaded any videos because we’re not 13 yet.”  Not very convincing and a little too practiced.  Interestingly, several of them spoke about their experiences with Instagram: perhaps they didn’t know there was a similar age limit, or the reason for the age limit (it’s a legal requirement; I’ll write about that another day).

The parent writes:

In fact, taken at face value, I’m confident this parenting strategy would do more harm than good. It is a goal of mine to foster a child-like perspective which produces imaginative play and a sociable personality, both of which allow my kids to be kids and provide a good foundation for being a successful, functioning adult.

That seems right to me. A child-like, playful attitude is a wonderful thing to foster and if more people retained it in adulthood, perhaps the world would be a better place.  I’d like to think a playful attitude is a defining factor in geeks like me. But a child-like attitude doesn’t require bubble wrap. It requires a safe space, but there are better ways to provide one than insulation from reality.

Instagram isn’t forcing my son to grow up faster. In fact, it’s helping him maintain an open mind and a playful imagination. It’s also providing me a window in to his world, which I love. I’m his biggest fan.

Right again.  Growing up in what was universally agreed to be the middle of nowhere, my access to interesting people and thoughts was limited to books.  I didn’t see books as limiting, but I later learned that interesting people are even more interesting than interesting books. These days, kids have access to a really important resource: people recording themselves committing blithering acts to see what would happen. It’s inconceivable to me that children should be shielded from this wealth.

The email subsequently explains that children “…often don’t have the maturity to disregard hurtful comments.”

I’m 37 years old, and I’m not sure that I have the maturity to disregard hurtful comments!

Not disregard, no. I tend to attract quite a lot of nasty comments and they don’t really upset me, but they certainly get my attention.  I suspect that the ‘maturity’ to deal with such comments is better approached by exposure coupled with a safe space to discuss them than by denying they exist.

The email goes on to say, “They will have plenty of time to face those issues.”

Spoiler alert

They are already facing those issues at school.

And, whether you know it or not, they are already posting Instagrams and YouTube videos.  And before you know it, they’ll be having sex as often as they possibly can.  We know that not teaching kids about sex is a really bad idea. We know that demonstrating that they can ask us about sex and relationships and anything else is a good idea.

Kids want honest answers and we fail as adults when we lie to them.

The parent suggests an email better than the principal’s, which answers the same concern:

It has been called to our attention that many of our students are using Instagram, a popular photo-sharing app. We understand that keeping up with the technology your children use can be daunting, so here are a few ideas to help out.

1. Talk to your children about the apps they use and why. Ask to see them, let your children teach you, and try to see the world through their eyes and their apps.
2. Consider if and where your children might have unsupervised access to online services and ensure that you are comfortable with those situations.
3. Talk to our counselors about any concerns you might have. We are here to help!
Thank you!

I’d change one thing: I think point 2 should read “ensure that you are both comfortable”, but I get the impression that’s implied. Rather than trying to control our offspring’s race to maturity, we can let them set the pace and be there, attentive to what they discover.  Let them explore, let them learn what they’re looking at, and let us all learn from that. They’re going to do inadvisable shit anyway. If they don’t, we’re probably doing parenting wrong.

Haven’t (successful) parents always held the same attitude about sex and relationships and shit like that?  We know that abstinence-only sex education is no education at all. We know that porn can be useful in many respects, but that it’s not terribly helpful in teaching humans how to relate to the various objects of our sexual desires.  In some places we’ve graduated towards better sex education. Let’s hope better porn follows. And let’s hope that we can manage better safety education than stranger-danger.

We know that kids are more at risk from people they know than from slavering strangers. Technology can be a way out of danger as well as occasionally a way into it.  Let’s teach our young to know the difference. Let’s not teach them to be the idiots we were.

Thursday, 23 April 2015

What to do with all this data

What do you do with a lake of data? You go fishing.  I don’t know the first thing about fishing (I know a little about phishing) so I’m likely to embarrass myself with this metaphor but here goes anyway. You can catch a fish with a worm as bait. You can catch a bigger fish if you use the first fish for bait. Bollocks to that, though, why not make a lure that looks like the little fish so you don’t have to bother with the worm or the little fish?  It’s starting to sound a little like cheating to me. You might as well skip the dangling your pole part of the exercise altogether and just go straight to the fishmonger.

Let me see if I can bring this metaphor back on track.  Once a government has a bunch of data it can mine it (shit, I should have started with a mining analogy) to find behaviour that might incriminate existing suspects. That seems OK. But then it could use that suspect’s data to identify other people as targets for future surveillance.  And then it could write code that searches for people who’ve said similar things or have been in similar places to people it suspects of something on already dubious grounds.  And then it could set alerts that trigger whenever some data fulfils those criteria in the future.

So we’ve learned two things:

  1. I’m shit at metaphor.
  2. It’s child’s play to imagine how data can (and therefore will) be misused.
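The escalation described above can be sketched in a few lines. Everything here is invented for illustration — the record fields, the names, and the matching rules are all hypothetical — but it shows how trivially guilt-by-association falls out of bulk data once you start from a single suspect:

```python
# Toy illustration: start with one suspect, extract their "pattern of
# interest", then flag anyone whose metadata overlaps with it.

records = [
    {"person": "suspect", "cell_tower": "T1", "keyword": "meeting"},
    {"person": "alice",   "cell_tower": "T1", "keyword": "birthday"},
    {"person": "bob",     "cell_tower": "T9", "keyword": "meeting"},
    {"person": "carol",   "cell_tower": "T4", "keyword": "recipe"},
]

def pattern_of(person, records):
    """Collect the places and words associated with one person."""
    towers = {r["cell_tower"] for r in records if r["person"] == person}
    words = {r["keyword"] for r in records if r["person"] == person}
    return towers, words

def flag_similar(records, towers, words):
    """Flag anyone sharing a location OR a keyword with the pattern."""
    return sorted({
        r["person"] for r in records
        if r["person"] != "suspect"
        and (r["cell_tower"] in towers or r["keyword"] in words)
    })

towers, words = pattern_of("suspect", records)
print(flag_similar(records, towers, words))  # alice and bob are now "of interest"
```

Note that alice is flagged for being in the same place and bob for using the same word; neither has done anything. Run the flag over tomorrow’s data as it arrives and you have the standing alerts described above.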

Greed

There’s a particular kind of greed that comes with an embarrassment of riches.  We’re more likely to complain that there’s nothing on TV now that we have hundreds of channels than we were when we had only three.  Yeah, I’m that old and aware of the irony that my parents said exactly the same thing about my generation because they had only one channel.  In fact, my grandparents had the first TV in the village where I was born.  I’m surprised they weren’t burned as witches.  We also had a three digit phone number. Christ, how did I get so old?

Anyway, when we’re rich, we want more.  See my post earlier today about being slightly unsatisfied with the frankly amazing technology literally at my fingertips.  So it’s unsurprising when governments complain that parts of the information landscape are ‘dark’.  They can see huge amounts of stuff, so they’re frustrated when there’s something they can’t see or when they need to do something like fill in a form or ask a judge before they can see something. 

It’s easy to imagine how an intelligence or law-enforcement agency might become obsessed with shining lights into those few dusty corners they can’t see, regardless of whether there’s anything there worth looking at. 

An interesting question is this: what would an intelligence service without much money spend it on?  Would they favour mass surveillance technology or skilled analysts and agents?

Another interesting question is what would an impoverished intelligence service be jealous of?

I’m willing to bet mass surveillance would be a luxury at best if costs had to be justified against tangible goals.

More from Jim Killock

“I am no expert on relations between the companies and GCHQ, but it’s well known that these relations exist. Snowden, of course was a contractor, and you have companies like Lockheed Martin and the Detticas of this world essentially making money out of government contracts. Their business model is to sell technologies to governments who pay very well. The defence and security sectors - particularly in the UK - do lack proper oversight. Parliamentary oversight focuses exclusively on the legality of what GCHQ are doing.

Imagine if the Parliamentary Health Committee only ever asked, “Is the NHS breaking the law?” Imagine the lack of debate which would ensue: “Well, they are not breaking the law, so we don’t have to worry if people are dying, or if NHS money is being misspent or if companies are providing inappropriate services. They are not breaking the law, so why are you worried?”

An excellent point.  It reminds me of those people who spend more time arguing whether an act was technically rape using carefully-selected dictionary definitions and legal mumbo-jumbo than they do telling people not to do things that might be rape.

See what the ORG’s Director has to say about interesting stuff

Sweet, my day of Increasingly Erratic Privacy Blogging isn’t half way through yet and I was starting to worry about having enough to say.  Luckily, there’s this:

Do you expect the machine to solve the problems? In this wide-ranging interview with the Director of the Open Rights Group we discuss bulk collection, state bureaucracies, the pre-crime era and trust.

Right up my alley.

[Ken Macdonald QC, former Director of Public Prosecutions] stated that public trust in the organs of the state was going to be crucial, because from then on, “Finding out other people’s secrets is going to mean breaking everyday rules of morality.”

I’m not sure that’s quite right. I think those everyday rules of morality increasingly don’t apply.  I don’t think we understand our own responsibilities when it comes to our data.  We’re even less equipped to identify a culprit when something bad comes of the decisions we make in an evermore connected world.  We badly and urgently need to change what we think of as everyday rules of morality to reflect how the world is now.  This applies to us as citizens of the internet as well as to the bodies that administrate our lives.

Ken Macdonald:

Now, what the paper completely fails to address is how that precondition, that essential public trust, could possibly survive a system under which the security services were empowered by law to routinely trawl through the private communications data of vast numbers of citizens suspected of no crime, simply in order, as Sir David Omand puts it, ‘to identify patterns of interest for further investigation’. How would the public regard their security services in that world?

If you live…well, at last count anywhere… that question has been largely answered.  Our trust has been funnelled elsewhere.  We’re supposed to trust our governments to accurately strike a balance between some nebulous idea of security (against terrorists and criminals) and the giving up of freedom.  We’re supposed to trust them when they minimise or ignore the potential costs of terrifying security measures.  They are careful to make it difficult for us to understand the threats and especially the consequences of the claimed countermeasures.

Of course, such a world would change the relationship between the state and its citizens in the most fundamental and, I believe, dangerous ways. In all probability, it would tend to recast all of us as subservient and unworthy of autonomy. It would destroy accountability and it would destroy trust.

Well, let’s hope so.  I don’t see any of that happening, yet.  We’re not, as societies, challenging the decisions made by our governments on our behalf about the eradication of fundamental freedoms.  Plainly, this is by design; our governments could choose to educate us and involve us in those decisions if they wanted to.  That they invariably choose otherwise suggests they know that nobody in their right mind would agree to many of those decisions if they understood the consequences.

This is for one very simple reason: because to abolish the distinction between suspects and those suspected of nothing, to place them entirely in the same category in the eyes of the state, is a clear hallmark of authoritarianism.

Personally, I’d call it fascist.  It’s one thing to investigate a suspect by looking at the footprint they leave on the world.  It’s quite another to generate suspects that way.

Jim Killock responds:

It is much easier to oppose something when it hasn’t apparently happened: to anticipate the problems and say, “We don’t want this kind of power to exist”.

As soon as you’ve materialised that power, and that is what has happened under ‘bulk warrants, bulk collections’, it is much harder to say, “Well actually the billions of pounds that you have invested in this system, the integration with the NSA that you have done for strategic reasons – that must stop. I wish to oppose this, to dismantle it, and essentially wish you to turn your back on the investment you have put in.”

He says a number of other correct things, too.  You’d be crazy not to read it all.

I’m not going to phone my dead loved ones. I HATE my loved ones

You see, this is what I’m always saying. We’re so out of control of our data that we don’t stand a chance of understanding how the apparently trivial decisions we make (such as whether to install an app) might affect other people’s privacy, let alone our own.

I'm sure all the privacy issues have all been worked out, and the app is completely benign and totally worth trusting with your phone permissions.

You might need to recalibrate your sarcasm meter.  I keep meaning to get around to positing a law that says the more useful an idea sounds, the more horrifying the privacy issues will be.

Paperless office

I has one, for the most part.  I genuinely haven’t printed anything for years (although I have twice in the last five years or so  asked someone to print something for me).  Of course, I don’t have any employees, which makes things a lot easier.  Still, I’m confident in saying that every single piece of paper I have was generated by someone else and I keep this to a minimum with electronic billing and other correspondence where possible.

I make handwritten notes every day, but they are all on my phone or tablet.  Both are Galaxy Notes and the things I write are shared between the devices and available wherever else I want them.  Before tablets were widely available, I had a Tablet PC.  These were laptops with pens and they were actually pretty good.  I’m not sure why they didn’t take off like later tablets did.  I’m not a particular fan of Microsoft Office software, but OneNote was a brilliant way to take notes with a pen.  There’s nothing remotely like it available for tablets.

I have lots and lots of books but I haven’t considered buying a new one on paper since I got my first Kindle.  The ebook readers I had before the Kindle were fine, but the books weren’t available. These days, there’d have to be a very compelling reason indeed for me to buy a book on paper.

So I live the vast majority of my life without reading or writing on dead trees, but I still manage to find things to moan about.  They are all the more annoying because there are no real technical barriers to any of the things I want.  Here they are:

  1. My tablet PC had a very simple handwritten note-taking app that did one thing no other software I’ve seen does.  It had an infinite page.  When you came close to filling up the screen with handwriting, it scrolled up what you’d written and created new blank space underneath.  No creating new pages and then flicking between them, it was on one continuous page.  You could scroll through it with a gesture.  I want software for my Note that does the same thing.
  2. There were several great things about using OneNote with a pen on a tablet PC. First, you could organise your notes by section and by page.  A simple idea but done very well in OneNote and overcomplicated – where it exists at all – in other note-taking software.  Second, you could drag content in from virtually anywhere and annotate it. Third, you could easily designate bits of handwriting (and text and other content) as belonging to a category, then have a page that summarised all the bits marked as belonging to that category.  It was a great way to make lists.  For example, if you were taking notes in a meeting and some evil person gave you something to do, you could mark the note you made about it as belonging to your todo list and then later look at all the items on that list in one place.  I really miss being able to do that.
  3. I used a Livescribe pen for a while.  With Livescribe, you have special notebooks printed with a very subtle pattern and a special pen with a camera in it.  When you write in the notebook, the pen remembers what you wrote so you can transfer it to a PC and do handwriting recognition if you want.  That’s pretty nice but redundant now I have a tablet and phone that do this without the need for special notebooks. But the Livescribe pen also has a mic and it remembers the audio it records and what you were writing at the time.  So if you’re looking back through notes you made in a meeting and don’t understand what you wrote, you can just tap the note with the pen and it’ll play back the audio.  If my tablet could do that and had the ability to mark bits of handwriting as belonging to a category (as in point 2) I’d be very happy indeed.
  4. My Kindle is great for reading novels, slightly less so for reading text books and even less so for reading sundry documents such as academic papers.  PDFs don’t render very well unless they are single column in a fairly small font and without diagrams.  Hardly any of the documents I have to read are even slightly like that so I usually have to read them on my tablet when I’m travelling.  That seems like the sort of thing that could be fixed fairly easily.
  5. The DRM on my Kindle books really bothers me for the usual reasons.
  6. I need various cloud accounts to keep my various bits of content synched.  I want to be able to choose a provider or providers, not have ones forced on me by the software I happen to use.  If that means I end up paying for previously free software, that’s fine by me.

See? I’m not asking for much.  The fact that I’m probably the only person in the world who likes to work this way shouldn’t stop companies writing hideously expensive (to develop) software that does all that and selling it to me for next to nothing, surely?

The BBC site has a piece about paperless offices.  I think it’s asking the wrong question.  I don’t want to replace paper, I want technology that lets me work however I want.  That technology might even be – as with Livescribe – smarter paper.  I think the reason OneNote works so well is that it doesn’t push the filing cabinet metaphor too hard.  Each page just happens to be doubly-indexed with additional symbolic links where you want them.

The article points out that our relationship with paper is different to our relationship with other technologies. I think that’s certainly true but I tend to think of the difference in terms of security and privacy. Handwriting on a piece of paper is fundamentally different in a variety of ways to electronic handwriting in terms of how likely it is to be stolen, observed, intercepted, copied etc. with each having different advantages and disadvantages.  At the most basic level, if I write something on a piece of paper, it’s usually because I want other people to see it.  If I write it on my phone or tablet, it’s usually a note to myself.  There are bound to be loads of psychologists, social scientists and privacy/security people studying this, I should look it up.  It would be interesting to find out if there’s demand for an electronic page that behaves as much like a piece of paper as possible.

EDIT: OneNote is available for phones and tablets, but (at least the last time I looked at it) it doesn’t have anything like the functionality of OneNote on a Tablet PC all those years ago.

“Busting Google for sleazy e-commerce search results is like taking down Al Capone for tax-evasion.”

Writes Cory Doctorow.  Because, he rightly says, Google’s e-commerce sucks.  There’s a bigger problem.

Part of the European anti-trust action against Google alleges that it gave preference to its own e-commerce sites in search results shown to comparison shoppers, even when those weren’t necessarily the cheapest option.  That’s a shitty thing for anyone to do, let alone for a company that’s become a tacit backbone of the internet, arguably too big to fail. Too big, at any rate, for governments to impose restrictions on, say, data collection and use, which would cause them to fail.  Besides, the giants are too useful to governments because of their effortless collection of incredibly personal data.

As Cory points out, we didn’t really expect the internet to be anti-competitive because relatively little capital is required to set up an internet shop.  Somewhat ironically and rather worryingly, startup and operating costs reduce further when there are large players dominating the infrastructure scene.  Cloud computing revolutionised the way business is done, but only when it became cheap enough. This required a lot of expense over decades by companies that could afford it, so the things that make life easier for customer-facing businesses have tended to make it more difficult for new infrastructure providers to compete.  That kind of homogeneity can be bad for a variety of reasons:

As the internet giants grew, so did states’ interest in their business practices. YouTube was started by three people with a garage, a pile of hard-drives and an unhealthy interest in video. In the years that followed, YouTube has acquiesced to a mounting compliance regime – spending hundreds of millions on its Content ID system for automated copyright enforcement, filling buildings with expensive lawyers and specialists to police obscenity, libel and other potential sources of liability, working out how to comply with the complex legal requirements of different jurisdictions, from the Thai royal family’s insistence on the right to remove videos that criticised the monarchy to the UK government’s insistence on the right to police videos advocating anything it unilaterally characterises as violent Islamism.

All this infrastructure is an additional and possibly unassailable barrier to competition:

One thing is certain: three people in a garage with a pile of hard drives could not disrupt YouTube anymore.

Cory talks about having to spend >£700 on software, accountancy etc. in order to collect £18.76 in VAT to satisfy EU rules.  This kind of thing makes it virtually impossible for small, independent sellers of digital content to compete… unless they use Amazon.

This is bad news for privacy, too.  The business model of the internet giants is surveillance.  We don’t get to decide – or even to know – what data those giants collect about us or how they use it.  But more importantly, we don’t have a choice.  If there’s one thing we need to do for future generations, it’s to force the pervasive internet giants to treat our privacy as explicitly important and to give us better choices about how to pay for stuff.  It’ll be hard.  And we have to be wary of our data being effectively held to ransom by a premium being charged for private services.  But I think it would be achievable if everyone started to care enough.  If they cared enough about having choices, that is.  I’m not particularly interested in what they choose, as long as they can do it.

Wednesday, 22 April 2015

A reasonable warning and a terrible example

The FBI decided to detain an airline passenger for several hours because of a tweet.

Find myself on a 737/800, lets see Box-IFE-ICE-SATCOM, ? Shall we start playing with EICAS messages? "PASS OXYGEN ON" Anyone ? :)

He’s suggesting that he might be able to deploy the oxygen masks.

He tweeted this while he was on the plane and the FBI were waiting for him 2 hours later when the plane landed.  Either they were already watching him because he’s a security expert who has spoken about the security of plane networks in the past or they have software mining tweets for threats against airports.  Or – my personal favourite explanation – they have compromised the plane wifi themselves, with or without the airline’s permission.  Or someone reported him, I guess. I don’t much like any of the possibilities.

In response to criticism, the FBI has issued a warning to airlines.  It’s long overdue.  We have no idea how secure plane networks are or how isolated plane controls and safety critical systems are from networks passengers have access to.  The warning should be taken seriously and it’s kind of weird that it took an incident like this for it to come.

But reasonable though the warning is, the example the FBI have set is a terrible one.  The best way to improve security is to have experts think up ways to attack your networks.  Scaring off creative experts is one of the worst possible ways to improve or maintain security.  I’m not suggesting passengers hack planes in flight to see what they can do. I’d really, really rather they didn’t.  But the message the FBI is sending is that speculation and creative thinking about how plane networks might be compromised will not be tolerated.  That’s a terrible idea.

Google starts mobile network

C’mon, Google, don’t you know enough about everyone already?

It doesn’t bode well for competition, either.  Still:

Once you're connected, we help secure your data through encryption.

That’s just for their wifi hotspots, though.  They’re renting space on Sprint and T-Mobile for mobile comms.  And that “help” is a bit weird.

Your data is secured through encryption when we connect you to open Wi-Fi hotspots. It's like your data has a private tunnel to drive through. (http://fi.google.com/about/network/)

They don’t say whether Google itself (and therefore the many and various authorities) can decrypt that data.

Evil Wednesday Roundup

I keep forgetting about Evil Wednesday Roundup.  Even though I have my calendar open right in front of me at all times.  Here’s what’s been happening:

Monday, 20 April 2015

The post-Snowden privacy habits of Americans

Since I was just talking about Edward Snowden and the impact of his leaks, here’s a Pew survey of Americans’ privacy habits post-Snowden.

Here are the headlines:

Lots more interesting stuff in the report.

John Oliver’s interview with Ed Snowden

I finally got round to watching the interview, which you can find here: https://www.youtube.com/watch?v=XEVlyP4_11M

If you’re outside the US, you’ll have to use Tor or something to see it.

The interview seems poorly edited and not exactly relaxed.  I wouldn’t blame Snowden if he thought Oliver was making light of what he did and of his plight.  Perhaps that was it.  I felt at times that Oliver was trying to push a gag rather than making one out of what was actually said, as a more skilled and spontaneous interviewer might.  But it makes a good point: the public can’t understand the implications of the leaks because they can’t interpret them in the context of their lives.  So Oliver suggests a context he thinks people can relate to: the government having access to our dick pics.  Snowden seems to relax a bit as he explains how the government could use various NSA programmes to get hold of (pictures of) his junk.

I’ve come across this attitude quite often in privacy research.  We all tend to have a fairly similar sense of what we feel should be private, with significant cultural variations.  It’s part of what has been called the “ick factor”; the feeling that something is too personal to share or to have shared with us.  We can’t always articulate or even rationalise why we want to keep something secret, but it’s important to us anyway.  Most of us feel icky about airport porn scanners, even if they can’t see our genitals.  It feels invasive.  Other privacy violations don’t necessarily feel invasive, even when they are.  Presumably this is because they are more abstract and less closely related to everyday experience. 

I’ve found that when you have to construct hypotheticals to explain why it’s bad for a government to collect some piece of information about us, you’ve lost most of the audience.  Unfortunately, this doesn’t get us much closer to learning how to explain why we should all be outraged at the NSA and its many collaborators.

Monday, 13 April 2015

Human rights challenge for UK surveillance

Rights groups have asked the European Court of Human Rights to rule on the legality of the UK's large-scale surveillance regime.

Amnesty International, Liberty and Privacy International filed a legal complaint with the court [of Human Rights] today.

Last year, the Investigatory Powers Tribunal, which oversees the intelligence services, ruled (hardly unexpectedly) that mass surveillance of British citizens somehow doesn’t breach their human rights and is a legitimate way to gather intelligence.  Confusingly, it also ruled two months later that the surveillance is unlawful because the processes governing how data is gathered and shared are not sufficiently public.

"The IPT was clear in its December judgment that the legal regime is lawful, and that GCHQ does not seek to carry out mass surveillance," added the spokesperson [for GCHQ]. "The government will be vigorously defending this case at the European Court of Human Rights."

Fighting cybercrime in Africa

The BBC reports on cybercrime in Africa.  There’s lots of it:

Security expert Kaspersky says more than 49 million cyber-attacks took place on the continent in the first quarter of last year, with most occurring in Algeria, ahead of Egypt, South Africa and Kenya.

But cybercrime is actually most pervasive in South Africa, with security firm Norton saying 70% of South Africans have fallen victim to cybercrime, compared with 50% globally.

McAfee, another cybersecurity firm, reported that cybercrime cost South African companies more than $500m (£340m) last year.

So there’s work to be done.  They’re working on it:

But in June 2014, the African Union (AU) approved a convention on cybersecurity and data protection that could see many countries enact personal protection laws for the first time.

Interesting.  Privacy doesn’t usually get a look-in because, most commonly, the thing we’re supposed to be scared of is the nebulous threat of terrorism.  Perhaps the focus on cybercrime is responsible, or perhaps this group (or Africans in general) are more concerned about their privacy or more distrustful of their governments.

Fifteen of the 54 African Union member states need to ratify the proposal before it can be implemented and none has done so yet, but it’s early days.  I’m going to quote Drew Mitnick, junior policy counsel at the human rights organisation Access, solely because the idea of caring about people’s privacy is so refreshing:

"It is critical for the countries to adopt cybersecurity policies that better protect users while respecting their privacy and other human rights."

Yes. Yes it is.  Access has been tracking cybercrime laws in Kenya, Madagascar, Mauritania, Morocco, Tanzania, Tunisia and Uganda, and has criticised those laws as ineffective and/or as allowing governments to violate privacy, freedom of expression and freedom of assembly.

The AU proposal has itself been criticised for getting the balance wrong.  The Centre for Intellectual Property and Information Technology Law at Strathmore University, Kenya, for example, thinks the proposal gives too much power to judges and law enforcement and fails to take into account different perspectives:

"It was written by lawyers," he says. "Cybersecurity and cybercrime need a multi-sectoral approach - cybersecurity educators, researchers, NGOs [non-governmental organisations], vendors, ethical hackers were supposed to be involved so they could present a multi-dimensional framework instead of legal paper."

Can’t argue with that, but if the balance is indeed wrong, the heart seems to be in the right place.  Let’s hope it homes in on a good balance between protection and privacy.

Note: Some disappointing sexism and ageism from the BBC right at the top of the article:

[The 419 scam] involves gangs extorting money from the likes of great aunt Mabel by promising her riches, if she'll just send some cash and/or her bank details to a nice man in Nigeria.

Thanks, Tom Jackson, but it’s not only women and the elderly who are fooled by 419 and related scams.  And for that matter, it’s not extortion.

Friday, 10 April 2015

Why police in India can’t have nice things

Police in Lucknow, India are planning to use drones to hose down protesters with pepper spray.

"The results were brilliant. We have managed to work out how to use it to precisely target the mob in winds and congested areas," Yadav told AFP.

Can one precisely target a mob?

It’s just what we need: another level of abstraction between the police and the people they’re *ahem* policing.  Slightly shamefully, that was my second reaction. My first was that someone is bound to find a way to take control of these drones.  They probably couldn’t do more damage than the police, though.

Thursday, 9 April 2015

Well, good luck with that

Class action privacy lawsuit filed against Facebook in Austria.

“Basically we are asking Facebook to stop mass surveillance, to (have) a proper privacy policy that people can understand, but also to stop collecting data of people that are not even Facebook users,” said Schrems.

Beautifully put.  Unfortunately that’s asking Facebook and pretty much the entire internet to abandon its business plan.  Fuck it: bring it on.

Schrems is also fighting to stop the US security services from gaining access to his personal data held by Facebook and other US technology firms. His case, which has been crowdfunded, is currently being heard in the European Court of Justice in Luxembourg, Europe’s highest court.

I find myself liking Schrems.

Facebook declined to comment.


Wednesday, 8 April 2015

We’re the black box

The police are supposed to be citizens.  They’re supposed to protect and serve their fellow citizens. They’re not supposed to murder them in the expectation of getting away with it unless one of us happens to film it.  Police should (and in many cases do) carry their own black boxes. Many UK police have bodycams.  But they also have the right to turn them off.  The fact that they think of those cameras as protection for police rather than protection of the public speaks more than one volume.

This guy is right. Let’s have a vault for uploaded video of police officers. Let’s have an app that streams that footage to the vault in realtime. And let’s have celebrities promoting it without caveat, if we can.  Fuck it: let’s have an app that recognises police uniforms and automatically films and uploads when we point our phones at them.

Turnabout is fair play. If they film us for their own protection, we should film them for ours.

The Tories want mandatory age restrictions on porn sites

Culture Secretary Sajid Javid said that the Conservative party would ensure that porn can only be seen by over-18s.

I’m not sure I understand why.  The porn industry certainly has problems.  There are surely sex workers being exploited in one way or another, and the depictions of sex – and especially of women – are often severely problematic.  But these are not problems that relate only to a youthful audience.  I’m concerned that young people might be developing both unrealistic expectations of sex and unpleasant, damaging attitudes toward sexual partners or potential partners, again especially toward women.  But I’m not sure that age restrictions on porn sites will necessarily help.  It strikes me that a foundation in understanding sex, relationships and respect for other people, coupled with a critical examination of porn, might be the way to go here.

But the issues more pertinent to this blog are, of course: can it be done? And what privacy/security problems will such a measure inevitably introduce?

First, it’s pretty clear that it couldn’t be done. The Tories want a regulatory scheme that applies to sites both in the UK and abroad.  It’s hard to see why any other country would comply; doing so would require them to enact new legislation, policy and policing.  That sounds expensive, and the UK is hardly in a position to bribe or bully anyone into doing it.  No doubt they want to implement technological solutions too, but we already know that blacklists and whitelists don’t work: they’re very costly and generally not all that difficult to get around.

And second, they’d need to implement some way of establishing age, which is to say, identity.  Perhaps the most benign approach would be age verification by credit card, but there are problems.  First, porn sites must be trusted with a user’s credit card details.  Second, most credit cards are associated with an individual, so governments might compel porn companies to reveal details of their customers for flimsy and potentially dangerous reasons.

A national ID scheme might be introduced (indeed, one is already underway in the UK) whereby independent yet trustworthy parties verify the age of individuals on request.  The ‘independent’ part shouldn’t fool anyone, though: it certainly wouldn’t stop the government insisting on access to those parties’ customer records, including whose ages they verified and for which sites.

The same thing could be done without government involvement: sites could accept age verification from any of a number of trusted providers.  In theory, this could be carried out anonymously: we’d prove to the trusted party that we were old enough (using documents or by physically turning up at its offices), and it need record nothing about our identity once its internal checks were satisfied.  But this is likely to be inconvenient for users and unacceptable to government, since there’d be no transparency.  And it would probably be fairly easy to game in practice.
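The anonymous version can be sketched in a few lines.  This is my own toy illustration, not any real provider’s scheme: the provider checks your documents once, then issues a signed token containing only a random nonce, and a site verifies the signature without learning who you are.  (A real system would use public-key signatures so sites never hold the signing key; HMAC keeps the sketch self-contained.)

```python
import hmac
import hashlib
import secrets

# Hypothetical trusted provider's signing key.  In practice this would be
# a private key, with sites holding only the public half.
SIGNING_KEY = secrets.token_bytes(32)

def issue_token():
    """Called only after the provider has checked the user's documents.
    The token is a random nonce plus a signature -- it carries no identity."""
    nonce = secrets.token_hex(16)
    sig = hmac.new(SIGNING_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def site_accepts(token):
    """A site checks the signature; it learns nothing about the user."""
    try:
        nonce, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(SIGNING_KEY, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Note that nothing in the token ties it to a person, which is exactly why it preserves anonymity and exactly why it’s gameable: an 18-year-old can hand a valid token to a 15-year-old and no one is the wiser.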

So whichever way the Tories intend to go about it we know that (1) it won’t work, (2) it will fuck with our privacy and (3) the actual benefits are unclear.  This does not seem a very good basis on which to implement a policy.

Tuesday, 7 April 2015

Do you want to hear a creepy story?

Of course you do, that’s why you’re here.  The culprit is Facebook, as it so often is.

Parents can now set up an account for their offspring so they can tag them in photographs.  This is supposed to be the equivalent of a ‘baby book’ which is a thing I didn’t know existed in the first place: a book full of pictures of a child as it grows for the sole purpose of later embarrassment.

The idea is that this ‘establishes an identity’ for the child, which can then inherit the account at age 13.

What it means is that Facebook will know a vast amount about that child from (or even before) its birth, regardless of whether it chooses to use Facebook in 13 years’ time or not.  It’ll know all about the child’s parents and its parents’ network of friends and family.  It’ll know about the traditions, beliefs and economic circumstances in which that child was raised.  It’ll know whether its parents have separated or died, remarried and/or had children with others.  It’ll literally know where the bodies are buried.

It’s creepy, it’s dangerous and it makes me shudder.

John Oliver interviews Edward Snowden

That’s got to be good.

But not available in the UK.

Monday, 6 April 2015

Other people stamping all over our digital shadows

One of the biggest difficulties in protecting our privacy is that we have no control over how leaky our friends and acquaintances are.  This has always been true, and sales and marketing people have relied on it in one way or another throughout history.  These days, it’s getting ever harder not to leak personal information about other people.

Here’s an example:

A friend recently set up a LinkedIn account and invited me to join her network.  That’s fine.  I don’t have a LinkedIn account and I’m not interested in ever having one, but I’m not at all offended by the request.  Those networks are useful for some people and it’s nice that she thought to invite me.  However, the invitation email comes from LinkedIn, rather than from my friend.  Since I don’t have an account, she must have typed my email address into the invite form.  So LinkedIn have my email address.  Not only that, but they say in the email footer that they can and will use my address for purposes so vaguely worded that they mean, in practice, anything.

They do provide a link to ‘unsubscribe’ (as if I’d actively subscribed in the first place) but it’s depressing that they feel entitled to use it for their own purposes without my permission simply because someone else invited me to join their service.

I don’t blame my friend.  I very much doubt that LinkedIn made it clear what they would and wouldn’t do with the details of the people she invited.  And even if they did, she could certainly be forgiven for not understanding why I object to it.  For a variety of reasons, the fallout from our carelessness about other people’s details is rarely clear. It would be virtually impossible for me to connect some spam or other consequence with LinkedIn’s abuse of my email address.  My friend wouldn’t have a chance of doing so.

Companies like LinkedIn could do a lot more to remind us of the business they’re actually in.

Wednesday, 1 April 2015

Forget security-theatre, here’s some security penny dreadful

Former Newcastle striker Faustino Asprilla claims he ORDERED pilot to remain in cockpit during flight so other airman could not crash the jet (and gave him an empty bottle in case he needed a loo break)

Yeah, OK, it’s the Daily Mail, so the odds of it being true are considerably less than 50/50.  I guess the evidence will be there or not on Instagram for anyone who wants to look for it, but I’m not one of those.  Let’s go crazy and assume that the DM is telling something resembling the truth for once.

Sitting on an aircraft, Asprilla told his followers: “I went to the pilot's cabin. I strictly forbid the pilot to get out and urinate because if that other lunatic locks the door in, it will happen what happened in that other flight. Everyone knows.”

Personally, I’d be more worried about a pilot busting for a piss or spraying urine around the cockpit than I would be about one of them deliberately crashing the plane.  In fact, I’d be a lot more worried about passengers who feel entitled to dictate a pilot’s behaviour.

As far as anyone knows, cases like the Germanwings tragedy, in which a pilot deliberately crashed the plane after the other pilot left the cockpit, are vanishingly rare.

We’re so bad at understanding risk that I’m constantly amazed that any of us manage to get through a single day, myself included.

Amazon just got even more terrifying

Amazon is releasing a Dash-based button that sticks onto devices and automatically orders a product from Amazon when you press it.  So you might stick one on your washing machine to order detergent.  So Amazon now knows how often we wash our clothes.  It will know if we suddenly start doing twice as much washing or half as much.  It’ll be able to infer a lot of useful stuff from that: maybe it indicates you’ve started (or stopped) living with someone or had a baby.  Changes in lifestyle (especially expensive ones like weddings and children) are exactly the sorts of thing that the companies who buy data from Amazon want to know about.  Is it too crazy-paranoid of me to immediately wonder whether law enforcement agencies might be able to make use of this data?
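Just to make the inference point concrete, here’s a toy sketch (entirely my own, not anything Amazon has published) of the kind of signal button presses alone would give: compare the recent ordering rate for a product with the long-term rate.

```python
def lifestyle_change(press_times, window=30 * 24 * 3600):
    """press_times: sorted timestamps (in seconds) of orders for one product.
    Returns the ratio of the recent ordering rate to the overall rate:
    roughly 2.0 suggests twice as much washing lately, 0.5 half as much."""
    if len(press_times) < 2:
        return None
    span = press_times[-1] - press_times[0]
    overall_rate = len(press_times) / span
    cutoff = press_times[-1] - window
    recent = [t for t in press_times if t > cutoff]
    recent_rate = len(recent) / window
    return recent_rate / overall_rate
```

A crude heuristic like this, run over millions of households and hundreds of products, is exactly the sort of thing that flags weddings, babies and break-ups without anyone ever telling Amazon about them.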

Either way, this is sure to be more about generating data than it is about attracting more shoppers of household goods.

I have an instinctive attraction to the idea of a physical button in a physical place, although I’m struggling to think of applications that would be useful for me personally.  The examples in the BBC article don’t seem that great to me.  I get washing powder at the supermarket when I’m already there buying food, and although I don’t use makeup, my wife does.  She orders a lot of stuff from Amazon, but I don’t think she’s ever bought makeup online; she prefers a physical shop for that kind of thing.  She also uses about 20 different cosmetic products; would she have a button for each one?

I think a better example is printer ink, especially in an office.  Office staff will know that the ink is running out on one of the printers, but won’t necessarily know the printer’s model number or be confident about which ink to buy.  Pressing the button could save quite a bit of time and stress.  It’s also immediate, and staff don’t have to remember to order the ink when they get back to their desks.  But then what’s to stop half a dozen people ordering ink?  And you’re bound to get someone like me who sits next to the printer all day every day relentlessly pressing the button to see what happens (*).
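The half-a-dozen-people problem is just deduplication, and presumably Amazon would handle it server-side with something like a cooldown per product.  A minimal sketch (the product IDs, window length and function are all made up for illustration):

```python
import time

RECENT = {}            # product_id -> timestamp of last accepted press
COOLDOWN = 24 * 3600   # ignore repeat presses for the same product for a day

def press(product_id, now=None):
    """Return True if this press should place an order, or False if it's
    a duplicate within the cooldown window."""
    now = time.time() if now is None else now
    last = RECENT.get(product_id)
    if last is not None and now - last < COOLDOWN:
        return False
    RECENT[product_id] = now
    return True
```

That deals with the office full of helpful colleagues, though not with me pressing the button once a day just to see what happens.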

Of course, now I’m suddenly worried about printers that can order their own ink when they start running low.  That’s an insidious little form of DRM, isn’t it?

Amazon is also planning to market services through the button, so press the one on your boiler if you need a plumber or one on your computer to book someone to fix it.  To me, this idea is not as immediately appealing.  Services aren’t as much of a commodity as household or office products.  But I can see an office that has outsourced its IT having a button on its computers to book the soonest available technician, I guess.

None of these applications are attractive enough to overcome my cynicism about privacy, though.

(*) Fun fact: I worked for an early ecommerce startup.  There was a secret fake credit card number that the sales people could use to demonstrate the full ordering process to potential clients without actually charging the card or processing the order.  Due to a mixup, one of the sales people – who had a habit of using filing cabinets as the example purchase – was given the IT director’s actual personal credit card number.  One day, about a dozen filing cabinets suddenly turned up at our tiny office.