Thursday, 24 December 2015

Spying on children will make them feel less disenfranchised, apparently

The BBC says:
Schools in England must set online filters and monitor pupils' internet use under plans to protect them from radicalisation, education secretary Nicky Morgan said.
Because spying on people, coupled with having power over them, has proved such a great way to prevent harm throughout history.  If I were a kid today, do you imagine for a second that an underpaid, under-appreciated school network admin could stop me doing pretty much whatever I wanted?  I'm talking about a kid who used to spend hours a day in Currys learning to program on their demo-model BBC Micros and Spectrums.  For any teachers out there, by the way, I run a course in how kids can very probably get past your internet filters.  It's also useful for teachers who (for legitimate reasons) want to get past their school's internet filters.

Besides, what kind of 'monitoring' would be required to prevent all this radicalisation the government is so frightened of?  It's not as though kids will be visiting radicalisethefuckoutofme.com, or anything.  Clearly, there is some of this stuff going on, but it seems more likely to be happening in places where all the kids hang out.  Social media, educational sites, music sites....  Unless schools are keylogging, they aren't going to be able to stop kids being influenced by scary people on the internet.  I think that level of surveillance is too high a price to pay.
Mrs Morgan said: "As a parent, I've seen just what an important role the internet can play in children's education. But it can also bring risks, which is why we must do everything we can to help children stay safe online - at school and at home." 
The proposed measures include showing young people how to use the internet responsibly and making sure parents and teachers are able to keep youngsters safe from exploitation and radicalisation, she added.
Yep, bring that on.  Teaching young people to use the internet responsibly is exactly right.  But does Mrs Morgan not understand the definition of "responsibility"?  Responsibility is something you give people; it can't coexist with "making sure", and as usual it's the "making sure" part that's the problem.  Look at what she said earlier in the article:
Mrs Morgan said some pupils had been able to access information about so-called Islamic State at school.
The horror! Kids able to find out about what's going on in the world around them!  MAKE IT STOP.

I'm neither a parent nor a teacher but it strikes me that the more information kids have about what's really going on in Syria and elsewhere, the more likely they are to make good decisions.

Some filtering might be necessary, but it needs an escape hatch.  If a kid wants to access a particular blocked site for research or just general interest, there should be an easily-followed pathway that ends in a timely decision and its immediate implementation.  The decision should be based on an assessment of the risk of the site concerned and the responsibility and maturity of the child making the request, and perhaps on whether some sort of monitoring is appropriate.  Something like the sketch below.
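To be concrete about what I mean, here's a toy sketch in Python.  It's entirely my own invention - nothing the government has proposed - and the scores would come from actual humans, not an algorithm.  The point is only that the process terminates quickly in a decision somebody can act on:

    from enum import Enum, auto

    class Decision(Enum):
        ALLOW = auto()
        ALLOW_WITH_MONITORING = auto()
        DENY = auto()

    def decide_unblock_request(site_risk, pupil_maturity):
        """Decide an unblock request from two 0-10 scores: how risky
        the site is, and how responsible/mature the requesting pupil
        is.  Both scores are a human's judgement, not an algorithm's."""
        if site_risk <= 2:
            # Low-risk site: just unblock it, no fuss.
            return Decision.ALLOW
        if site_risk <= pupil_maturity:
            # Trust the kid, but some monitoring may be appropriate.
            return Decision.ALLOW_WITH_MONITORING
        # Too risky for this pupil right now; they can appeal to a human.
        return Decision.DENY

The details don't matter; what matters is that the request doesn't vanish into a committee for a term and a half.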

But surveillance as a knee-jerk reaction is almost always a bad idea.  It's more likely to expose vulnerabilities than to protect people from them.  Not all teachers or other school employees are entirely benevolent to children.  Suppose, for example, a child is being abused at home and wants to use the school's computers to find out what to do, connect with other victims, and so on.  Are schools going to monitor that activity?  There are times when they probably should, but obviously it must be done with the utmost care.  Schools shouldn't get to charge into a difficult situation, guns blazing.  If schools are going to monitor pupils' browsing activity, then the monitoring needs to be monitored - and what school employee has the time or energy for that?

Surveilling kids is not the answer to radicalisation.  Actual answers will involve properly understanding the risks and weighing them against a whole bunch of other things.  It will involve understanding the lengths kids will go to to get around restrictions.  It takes only one kid at a school being slightly smarter than the IT people, after all.  I'm pretty confident that I could visit any school in the UK and get around its filter without a great deal of effort or technical skill.  And I'm in no way suggesting that school network admins aren't good at their jobs.  It's just that it really isn't that hard to get around these things.
Their head teacher has said there is no evidence they were radicalised at school as pupils cannot access social media on the academy's computers.
I bet I could prove that statement wrong in under five minutes, and if I can, hordes of kids certainly can too.
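Here's one reason I'm so confident.  I don't know how any particular school's filter works, but a very common setup blocks sites at the DNS level: the school's resolver simply refuses to answer for blacklisted names.  If the network lets ordinary UDP out to the internet (another assumption, but a frequent one), a few lines of Python are enough to ask a different resolver instead:

    import socket
    import struct

    def dns_query(hostname, server="8.8.8.8", port=53, timeout=2):
        """Send a minimal DNS A-record query straight to `server`,
        ignoring whatever resolver the local network hands out."""
        # Header: query ID, flags (standard query, recursion desired),
        # one question, no answer/authority/additional records.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        # Question: the hostname as length-prefixed labels,
        # then QTYPE=A (1) and QCLASS=IN (1).
        labels = b"".join(
            bytes([len(part)]) + part.encode() for part in hostname.split(".")
        )
        question = labels + b"\x00" + struct.pack(">HH", 1, 1)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(header + question, (server, port))
            reply, _ = sock.recvfrom(512)
        # Crude parsing: assumes the reply ends with a single A record,
        # whose last four bytes are the IPv4 address.  Fine for a demo.
        return ".".join(str(byte) for byte in reply[-4:])

    # The school's resolver might return nothing, or the address of a
    # block page; a public resolver returns the real one.
    print(dns_query("example.com"))

And that's the laborious way.  In practice a kid installs a free proxy browser extension, or just uses their phone.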

But the main answer is in that word "responsibility" and the condition it depends on: trust.  It doesn't matter whether radicalisation or other forms of abuse take place on school computers or elsewhere.  The only way to deal with abuse - or potential abuse - of children is to treat them like humans.  Provide a safe space where they can talk about things without judgement or threat.  Our desire to protect children is not misplaced.  The way we go about it very often is.

Sunday, 6 December 2015

I've been saying for years that our poor security habits can hurt the ones we love more than they hurt us.  This is trivially true: viruses like to steal contact data, and attackers are generally after data that describes our social networks.  I'm more or less alone in my conviction that privacy is a thing we should do for other people, but here's a story that vindicates that view.

Police are spying on friends and relatives of prisoners.  They're not even storing those conversations securely.  People's words are being recorded because they happen to know someone who was convicted of a crime.

Saturday, 5 December 2015

This is troubling.  A woman is rightly worried about the security of her pacemaker.  Doctors blither on about how the devices are totes safe, honest.  Manufacturers refuse to release source code.  It's a nightmare.

There's no good reason for manufacturers not to publish their code.  There's no commercial advantage I can see in keeping it secret, unless they, too, are worried about its security and are relying on obscurity to hide the flaws.  By which I mean they are incompetent.
While nations spend hundreds of millions defending critical infrastructure from cyber-attacks, Marie wonders if the computer inside her is secure and bug-free - she still hasn't been able to find the answer.
It's not as though she has a choice about whether to have that device in her.  It's not as though she can easily pick and choose the manufacturer of the machine that keeps her alive.  You'd think she'd have a right to inspect the hardware and software of the device she has no choice but to wear under her skin.  It's not even as though the thing is doing anything secret or obscure (I hope).

We know that open sourcing is an excellent way to find bugs and security flaws.  If I had a pacemaker and access to its source code, damn fucking right I'd inspect it in minute detail.  If the companies that make these things aren't confident enough to publish their code and wiring diagrams, we should be very frightened indeed.
When Marie first had her pacemaker fitted she downloaded the manuals. She discovered it had not one, but two wireless interfaces.
One enables doctors to adjust the pacemaker's settings via a near-field link. Another, slightly longer-range, connection lets the device share data logs via the internet.
That last sentence is... unsettling.  What networks is this damn thing connecting to?  It shares data logs with whom? What data?  Why?
Hearts are now part of the Internet of Things, she realised.
This is an important point.  It's reasonable to ask what the pacemaker manufacturers are really selling.  Or the hospitals, for that matter.  Who gets this data and what do they do with it?  Nobody seems to know.
He believes hacking is a purely theoretical risk: "The only significant effort I've seen took a team of people two days, being within 20cm of the device, and cost around $30,000."
Yeah, that's bullshit.  Want to bet that I couldn't do it with a soldering iron and a few weeks of my time?  Want to bet that almost all of that money wasn't salary for the researchers?  A team of people for two days at security-research rates accounts for most of that $30,000 by itself.  And what the fuck is a "theoretical risk" anyway?  It's a risk or it's not.  If someone can hack a pacemaker, they will.

"The good news is that this model is no longer sold and the risks have been addressed," he told the BBC's PM programme.
Oh, that's good news, is it?  The hackable device has been replaced by ones that might also be hackable?  The fact that we don't know whether pacemakers are hackable or not is somehow good news?
In general security is better. It's not a completely solved problem but businesses have "learned quite a bit over the last seven or eight years in improving security engineering", he said.
Um.  Yeah, that's weird.  The guy is talking about security in general while being asked about a product that could not possibly be more specific.  The 'fact' that businesses in general have a better handle on security these days (and the scare quotes should tell you that I don't believe they have) says exactly nothing about the security of any particular device.
Marie Moe is careful not to overstate the risk of hacking - she fears programming mistakes more. 
Not long after having her pacemaker fitted, she was climbing the stairs of a London Underground station when she started to feel extremely tired. After lengthy investigations, Marie says, a problem was found with the machine used to alter the settings of her device.
I hope it wasn't Covent Garden.  I once had to walk up those stairs and there are a lot of them.  Marie is right.  She has no idea whether the device keeping her alive is any good, and there's not much she can do about it if it turns out it's shit.  And apparently it's not just the device itself that might have a problem, but also the machines that talk to it.  And that's before allowing for any human error from overworked and underpaid doctors....
"It's a computer running my heart so I really have to trust this computer and it's a little bit hard for me because I don't have any way of looking into the software of this device."
Marie would like to see more third-party testing. She's a member of I Am the Cavalry, a grassroots organisation that works on cybersecurity issues affecting public safety.
Worryingly, I wasn't previously aware of this organisation.  It sounds like something I should know more about.
The challenge, according to Kevin Fu, is to find a compromise between the commercial interests of manufacturers anxious to protect their intellectual property and the needs of researchers.
But that isn't the problem at all, is it? The problem is that devices people need to keep them alive might be hackable.  There's no intellectual property here and who in all of fuck are these "researchers"?  The 'compromise' Kevin Fu suggests doesn't even involve the patient, who you'd think might have some sort of interest in the whole business.  

And it's not a challenge.  Write good code, make good hardware, publish the details, learn from your mistakes.  Cheap knockoff pacemakers are not your competition and your intellectual property is worth exactly fuck all.
Andrew Grace says the devices are "transformative"; if you need one, he and Marie agree, you shouldn't be put off by colourful cyber-assassination tales in TV dramas. But that doesn't mean security isn't important.
Unbelievable.  Yeah, you shouldn't be put off installing a device that makes you not dead because of security concerns, but dismissing legitimate security concerns as fantasy is horrifying.
Andrew's colleague, cardiologist Simon Hansom believes security has to be a priority.
I'm glad that someone vaguely involved pays at least lip service to security, but I'm unimpressed.  The BBC can do better than this and I'll be contacting the journalist, Chris Vallance, in the hope that he'll follow the story up by interviewing some people who are a little more informed.