Monday, 24 April 2017

More on The Herod Effect

How do we know it isn't true?
The Herod Effect is a wishy-washy sort of term but none the worse for that.  Here's how I described it earlier:
The Herod Effect is a name I've decided to give to the situation where a lie is widely repeated as a truth because it makes such a great story that people want it to be true.
It's a bit wishy-washy because there are lots of reasons people might want a false rumour to be true.  I mentioned two of them in my last post.

First, a kind of sour grapes.  If someone is particularly rich or attractive or successful or whatever, many people feel a kind of schadenfreude when they're "taken down a peg or two". Gere was an impossibly handsome and successful man, living many people's dreams.  Presumably people felt that the idea of his being gay (he isn't) somehow devalued him (it wouldn't).

Second, a wish to have one's biases confirmed.  Gamergater types often feel that men somehow own all aspects of the video game space and that women are 'invading' when they participate or write about it.  They felt that the false story of Quinn's supposed sexual and unethical behaviour matched their feelings about how women are ruining video games.

There are significant differences here in the reasons people wanted to believe the different stories.  Perhaps the most significant difference is that Gamergaters often didn't seem to care whether the original story was true or not.  They repeated it as a weapon and a justification of doing even more horrible things.

Even so, I still think the concept of The Herod Effect is reasonably sound.  It's about the things viral lies have in common rather than an attempt to classify them.  I use it as an example of a particular kind of threat to privacy.  In a follow-up post, I'll discuss how The Herod Effect harms privacy in more detail, with some examples.  In this one, I'll describe some of the reasons people might have for believing and/or repeating rumours without evidence.

Other reasons may occur to you; feel free to comment.

Bad reasons to believe stuff

  1. We're scared or anxious.  A lot of false stories cater to our fears.  Vaccines cause autism.  Immigrants are all rapists.  Women want to take 'our' jobs.  When we're already worried about something, stories that confirm our fears are oddly comforting. They seem to confirm that we're right to be scared of that thing, justifying the investment we've put into our crackpot theories.  We tend to require less evidence for things that confirm our fears.  This is somewhat complicated by the fact that a lot of the things people are scared of are manufactured by the media in the first place. Presumably we want to believe in the existence of threats to satisfy some other bias (immigrants are terrorists, scientists are lying to us or whatever) and then want to believe in the things that confirm our newly manufactured biases.  Humans are weird.
  2. The rumours are surprising but not too surprising. We like stories that shock us but not so much that we have to actually expend effort in working out whether they're true.  Stories that are shocking but still fit with our biases are the most durable.  For example, we're accustomed to Trump saying idiotic things so it would be easy to accept yet another idiotic Trump quote as true - perhaps no matter how extreme it was - because at this point it's pretty much background.  Actually, that might be a bad example because there's literally nothing terrible that Trump wouldn't say. The stupidest, most ignorant, terrifying thing in the world might as well be attributed to Trump whether he said it or not.  Let's say Bush instead.  He said a lot of stupid things too but one in particular that springs to mind is when he said that the French have no word for "entrepreneur".  It wouldn't surprise me if he had said that and it would probably make me hate him a little bit more, but he didn't say it.
  3. We trust people who tell us things above people who actually know things. People like Alex Jones are either fantastically uneducated about every single aspect of the world or they prefer to promote lies because they see it as advancing their cause.  It's not always possible to tell the difference.  If we believe a story someone tells us because of some other bias, we're more likely to believe other things they say, even if they don't fully tick our bias boxes.  A person constantly spewing "interesting" "facts" is more credible than someone wedded to a highly specific conspiracy theory, even if the conclusion is the same.  This is different to believing someone because they have previously presented exemplary evidence for the things they say.  No evidence of previous careful research is required, we believe them because they shotgun "facts" at us.  Once one seems credible due to bias, others are likely to seem more credible too.
  4. We trust people who are experts in one field to have expertise in others. We see this a lot with physicists, unfortunately.  It might seem like a stereotype but examples abound.  There is a surprising number of physicists who think they know all about biology without ever having studied it, for example, and say very stupid things about it.  We have a tendency to believe experts when they say things we want to hear, even if their expertise is in something completely different.
  5. Conversely, we hate experts. The hatred and distrust of expertise is part of what put Trump in the White House. It is perfectly clear that people unqualified for a job aren't the best candidates, but Trump successfully argued the opposite.  "Joe the plumber" and "the man on the London omnibus" are other examples.  The common person on the street is certainly entitled to an opinion and to have their opinion heard, but we wouldn't want that person to fly the plane we're on or perform surgery.  But we do want non-experts to tell us what to think about things that require expertise. It's perverse.
  6. Repetition is very convincing. We tend to believe stories more when we hear them often, even if what we hear about them is debunking them. Climate change deniers' beliefs don't lessen when they consume stories debunking their silly arguments. Quite the reverse: mentioning climate change as either real or fake will tend to strengthen whichever belief you already happen to have.
  7. We are susceptible to what's already on our minds. Zeitgeist shifts and what terrified us a few decades ago might not be so convincing as what terrifies us now.  Some time ago people were frightened of witches and believed all sorts of rumours with tragic consequences.  Belief in witchcraft is much depleted (although by no means gone) but the susceptibility to believe claims without evidence is now based on the fear of terrorism, among other things.  The comparison between draconian measures against immigrants and actual witch hunts is apt.  It's strange that we find it so hard to see this.
  8. We tend to believe things that are easily explained. Vaccines cause autism.  X causes cancer. Obama is a Muslim.  These are the sorts of fake ideas that tend to perpetuate. However:
  9. Conspiracy theories are justifications. People like simple fake facts ("X ordered 9/11") but they really love complicated explanations of why that's true, regardless of how they fit with the facts.  Simple statements with complicated explanations are gold. The more complicated the justification, the better, especially if (as they inevitably do) they drag in other simple fake facts.  I think what's happening here is that the complexity of the conspiracy argument causes people to treat it as abstract rather than something that supposedly happened in the actual real world.  When we treat something as abstract it's easy to ignore inconvenient detail. That's what abstract thinking is.
  10. Numbers, even ludicrous numbers, are key. In my day, the pop star in question was Marc Almond. I've heard the same story about all sorts of other people, though; I won't provide a list. The rumour is that a given pop star was admitted to hospital and had a large quantity of sperm removed from his or her stomach.  There is always a precise quantity of sperm specified. Sometimes it's a gallon.  Sometimes it's ten gallons(!).  It's always a wildly improbable amount. Ten gallons. That's 80 pints. Clearly more than a human stomach could possibly contain. And I'm not going to work out how many ejaculations that would require.  But there's a number so people believe it.  If it was "some" sperm, nobody would.  Weird, bullshit specifics are somehow more convincing than actual evidence.
There are other bad reasons to believe things, but this post is already long enough and I'm not sure I've made my point yet.  It is that these bad reasons to believe things have more in common than not: they fuel The Herod Effect, and that effect is worth considering in itself because of the horrible consequences it can have for privacy and other things.  It's important to know why people believe stupid things but also important to recognise when it's happening even when we don't understand the specifics.

In my next post I want to tie all this stuff in with privacy violations.  The Herod Effect is a specific threat to privacy which is not often considered.  I'll explain why.

The Herod Effect

Herod, yesterday
The Herod Effect is a name I've decided to give to the situation where a lie is widely repeated as a truth because it makes such a great story that people want it to be true.

The Herod Effect is named after a joke I once made that quickly got out of hand.  My sister is a very Christian person and named her first two children after characters in the Bible.  When she had another baby I told my wife that she had decided to call it Herod.

I didn't think for a moment that she'd believe me but it turns out she did and told everyone she knew, who also believed it.  Thanks to social media the story spread across a surprisingly large fraction of the globe.  It is fortunate that when my sister found out (I've no idea how) she saw the funny side.

The Herod Effect is responsible for a lot of privacy violations.  False stories can be harmful in many ways, including undue scrutiny.  Everyone my age remembers the false story about Richard Gere and the gerbil (look it up on Snopes if you're young).  I expect people liked this story because it allowed them to feel superior in some way to an otherwise esteemed celebrity. Sour grapes. It's a story people want to believe because they are unthinking, horrible pricks.

I've no idea whether Gere's career was harmed by this story but he - and his sex life - certainly came under huge amounts of scrutiny.  It was suddenly fair game to pry into every aspect of his life because a rumour, buoyed by The Herod Effect, caused people at large to feel that another human's private life was somehow in the public interest.

There are plenty of examples of non-celebrities being harmed by the Herod Effect.  The one that comes most quickly to my mind is that of the games journalist Zoë Quinn and Gamergate.  False claims about her sex life and ethical practice led to years of misery and countless violations of her privacy including doxing, as well as countless threats of rape and murder.

It was a story that some people (specifically horrible, bigoted people) wanted to believe regardless of facts because it seemed to confirm their biases.  That the story was about the supposed sexual impropriety of a woman, pitched to people who already felt that women have no place in video games journalism made it irresistible to many.  Very little critical thinking took place in a community of people who tend (quite wrongly) to pride themselves and each other on their rationality.

So that's The Herod Effect.  We've all been guilty of it at one time or another.  And we shouldn't.  We should check our facts because the effects on privacy (and on well-being in general) can be horrific.

Monday, 27 March 2017

Wait, a politician was WRONG?

Surely not.  But Amber Rudd has a history of being wrong both on and off the internet and she's in no mood to break that streak.  Of wrongness.  I need to get better at metaphors.

Anyway, the UK Home Secretary, Amber Rudd, says that the ability of people to use encrypted communications is "unacceptable" because terrorism.

Let's not focus on the fact that the recent Westminster Terrorist was acting alone or that law enforcement intercepting his WhatsApp messages presumably wouldn't have prevented him from killing and injuring people. Instead, let's focus on the fact that he - and millions of other people - used a messaging service that happens to use encryption.  Plainly it is encryption that's at fault here, not bad people doing awful things.

Rudd either doesn't understand or pretends not to understand that tapping someone's phone is not at all the same as intercepting their encrypted messages.
"It used to be that people would steam-open envelopes or just listen in on phones when they wanted to find out what people were doing, legally, through warrantry," she said.
And that's fine.  Surveillance is sometimes necessary and by definition invades the privacy of the person being surveilled. Traditional surveillance such as this results in collateral damage, too, which is regrettable: innocent people who happen to call a terrorist are targets for further investigation, for example. But even I agree that some degree of surveillance is needed for general safety. People need to be followed. Rooms need to be bugged. Phones need to be tapped.  Knock yourself out.

What Rudd either doesn't know or pretends not to know is that tapping someone's phone, bugging their rooms or following them about are fundamentally different to intercepting their encrypted messages.

Here's why: tapping a suspect's phone doesn't automatically tap everyone else's phone.  Decrypting a suspect's messages pretty much does.

There's no way to provide a back door to encrypted messages that only law enforcement can use.  Criminals will almost immediately gain access to any such back door through either hacking or good old-fashioned extortion or bribery. If there are back doors, criminals will have the keys within a few hours at most.
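To see why a single mandated back door taps everyone at once, here's a deliberately toy sketch (the cipher, the key name and the messages are all invented for illustration; this is not real cryptography and no real messaging service works this way). The point it demonstrates is structural: if every message is encrypted under a scheme one escrow key can open, then stealing that one key once opens every user's traffic, which is nothing like tapping one suspect's phone.

```python
# Toy illustration (NOT real cryptography): one mandated escrow key
# opens every message, so one leak compromises every user at once.
import hashlib
from itertools import cycle

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Derive a repeating keystream from the key and XOR it with the data.
    # XOR is its own inverse, so the same function encrypts and decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

ESCROW_KEY = b"mandated-backdoor"  # hypothetical law-enforcement key

def send(message: str) -> bytes:
    # Every user's traffic is readable with the same escrow key.
    return keystream_xor(ESCROW_KEY, message.encode())

# Two unrelated, entirely innocent users:
alice_wire = send("meet at noon")
bob_wire = send("dentist on friday")

# An attacker who obtains the escrow key once (hack, bribe, leak)...
stolen = ESCROW_KEY

# ...reads everyone's messages, not just one suspect's:
print(keystream_xor(stolen, alice_wire).decode())  # meet at noon
print(keystream_xor(stolen, bob_wire).decode())    # dentist on friday
```

Contrast that with a phone tap: the warrant, the hardware and the effort all scale per suspect. The escrow key doesn't scale at all; it's one secret guarding everybody.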

But that's not even the greatest threat.  Missions will creep.  Introduce cryptographic back doors and governments will very soon be decrypting people's messages to assist in enforcing parking fines.  That's a small step from the mass use of communication data to mine for suspects.  We don't need to rely on literature to tell us how bad an idea that is, real-world examples abound. To pick one of the (relatively) less horrifying examples, the United States of the late 40s to mid 50s suffered from almost literal witch hunts targeting people deemed disloyal or subversive.  Needless to say, the definitions of those charges changed on the basis of convenience.  All it took for the government to take action against someone they didn't like was a noisy accusation from a member of the public that they were a communist.  The only possible aim of mass surveillance is to associate people with labels based on arbitrary criteria. History - in this case and many others - tells us why this is a bad idea.  Data mining can cheerfully construct any label about any person. It's a means of constructing evidence for an already decided conclusion.  We only have to look in detail at, say,  the arrest and conviction rates of white and black people across the US to understand that this is a really terrible idea.

Surveillance is OK as a general concept but already widely abused.  Mass surveillance is never OK.  It's about fitting people to crimes rather than crimes to people.  It's about removing freedom in the name of freedom.

Make no mistake: Amber Rudd's call for law enforcement agencies to have access to encrypted communications is a significant step toward mass surveillance.  Remember also that surveillance infrastructure and culture is not something you can easily take back.  The opposition party tutting at laws that increase surveillance will certainly use and expand them once they are in power.

The good news is that governments can't prevent people from encrypting things.  The bad news is that they can criminalise people who do.

Fortunately, we're getting better at hiding the fact that we're using encryption... Suddenly all those years of wearing a tinfoil hat are looking pretty fucking well-spent.

Saturday, 25 March 2017

Judges tell woman she can't have a divorce

A UK court has told a woman she can't have a divorce because, apparently, she should expect unhappiness.  In Britain the grounds for divorce are: adultery, desertion, unreasonable behaviour, five years of separation or the agreement of both parties.

This makes absolutely no sense.  If someone has decided that their marriage is over then surely it is and there's no sense at all in a state forcing it to continue.

What's worse is that the partner seeking the divorce might need to rely on the assets of the marriage in order to live.  Ruling, therefore, that unhappiness is not grounds for divorce does nothing but condemn the unhappy partner to continued misery and/or financial difficulty.

It's a fact, of course, that the law is the law and courts don't - and shouldn't have - the power to change laws in the progress of a particular case.  But then, look at the ruling:
It is plain from his judgement that Judge Tolson was unimpressed by the wife's petition. He variously described it as "hopeless", "anodyne", and "scraping the barrel". He said it "lacked beef because there was none". He said the allegations are "at best flimsy".
He said "In reality I find that the allegations of alleged unreasonable behaviour in this petition - all of them - are at best flimsy. I would not have found unreasonable behaviour on the wife's pleaded case. As it is, having heard both parties give evidence, I am satisfied that the wife has exaggerated the context and seriousness of the allegations to a significant extent.  They are all at most minor altercations of a kind to be expected in a marriage. Some are not even that."
Wow.  Bitch should know her place, I guess.  Tolson isn't done, though:
I will not overburden this judgement by setting out the pleaded allegations in full. This, the wife's best case, skillfully argued by leading counsel, proceeds by emphasising what he submits is her increased sensitivity to the husband's old-school controlling behaviour. [...] Having seen him, I hope the husband will forgive me for describing him as somewhat old-school. I can also find the wife to be more sensitive than most wives. It matters not.
He... wants the husband's forgiveness for calling him old-school and not the wife's forgiveness for condemning her to a life of misery?  He's arbitrarily decided that she's "more sensitive than most wives"?  What if she were? Wouldn't that mean that she'd be even more miserable?  Wouldn't the husband's behaviour be even more unreasonable if the wife were especially 'sensitive', whatever that is supposed to mean?

The law is entirely arse-about-tit.  Promises are only meaningful if they are not enforced by law, especially if those laws are plainly biased because, you know, old white rich male judges.  This case is an example of such a judge deciding what's reasonable in a relationship and what isn't.  His contempt for the wife in this case is evident.  His support for the husband is obvious.  He wants to punish the wife for complaining and reward the husband for being a cock.

Let's get rid of the idea that some fault is required to dissolve a marriage.  If one partner wants to end a marriage, let it be so.  Let marriage be the contract it so plainly is: a sharing of assets which need to be properly distributed when the marriage ends.  A set of shared responsibilities which have to be properly untangled when one or more partner decides to end the contract.

Thursday, 23 March 2017

Scott Adams endangers future generations with his idiocy

Dilbert creator Scott Adams likes to refer to himself as a “master persuader”, apparently on the grounds that he took a hypnotism class decades ago.  He is neither very persuasive nor masterful when he says things like this, though:
I Declare Mobile Phone Carriers to Be Enemies of the State
Here’s the basic problem.
Kids as young as eleven have smartphones. That situation won’t change.
A kid with a smartphone has access to any illegal drug in the world, as well as all the peer pressure in the world.
Pills are small, cheap, odorless, widely available, and nearly impossible for a parent to find in a bedroom search. When you have this situation, the next generation is lost.
Where to begin?  Apart from the one-sentence paragraph apparently being part of the master-persuader's toolkit, I guess.

But getting back on topic I could explain why paying for drugs with credit cards or Paypal is rather difficult (but not impossible).  I could explain that while other solutions involving proxies and/or cash are certainly possible, they are also risky to all parties.

But there’s no need at all for me to do that because Adams’ premise is absolute bullshit.

Kids can get access to any drug in the world anyway, regardless of whether they own a smartphone. There’s the internet, which Adams has forgotten can be used from devices other than phones.  There’s the dark/deep web. Then there’s the good old-fashioned ways people have been using to buy drugs for generations.  They’ve stood the test of time for good reasons; they’re adaptive and minimise risk.  If kids are buying drugs, they are vastly more likely to be dealing in person and in cash than using their phones and/or hypothetical credit cards.  Because kids – whatever else they might be – are not necessarily stupid.  Neither are drug dealers.

Smartphones are not automatic gateways to untraceable drug transactions.  In fact, they are about the worst possible method.

But why are these hypothetical kids buying drugs in the first place?  Peer pressure, according to Adams.  The problems with that argument are that not all kids succumb to peer pressure and that peer pressure isn’t always in favour of drugs.  In due course I’ll get onto Adams’ ‘argument’ that “good parenting isn’t enough”.  For now, I’ll say that his glib assessment that children – on a generational scale – will buy drugs just because they can is bullshit.  To be clear, remember what he wrote:
When you have this situation, the next generation is lost.
Kids of my generation and many before could buy drugs very easily indeed.  I (very) occasionally bought drugs in my youth.  Some of my peers sometimes did, some didn’t.  The availability of drugs is not a conveyor belt.  That’s why not every child smokes, drinks, snorts or shoots up.

In any case, the generation wasn’t “lost” in any sense.  We continue to contribute to society in all sorts of ways.  If ease of drug purchase was tantamount to lost generations then we’ve been lost since way before Adams was born.  And if we have been lost all this time, smartphones certainly – demonstrably - aren’t to blame.

Having proposed this already wrong premise, Adams explains how to solve the non-problem:
To address the problem, you would need the phone companies to allow parents full access to all messages on a kid’s phone. And this feature should be mandatory, not optional. Parents need to see all messages, and all photos, from all apps.
There are two major points here.  The first is that this would absolutely not prevent kids from buying drugs.  They just wouldn’t use their phones to do it.  The second is that such monitoring would likely cause many kids to take greater risks (irrespective of drug-buying or otherwise) than they otherwise would.

A couple of examples of how over-surveillance can create risk:
  • Young people who knew their phones were being monitored would not use them for sensitive communications.  What child would communicate secrets on an open channel when there are so many secret channels they could use instead?  Why wouldn’t they just organise illicit activities using other means?  Why would they take their phones with them when they went to places they’re not supposed to? When you rank the negligible probability that your kid will buy drugs in the most stupid and risky way possible higher than that of their getting into real and immediate danger because of unnecessary surveillance, then you have a shitload of justifying to do.
  • Monitoring a child’s communications also monitors those of their friends.  This has been shown to cause serious problems.  I know of two cases where a child described the abuse she was undergoing to her friend.  The friend’s mother snooped her phone, saw those messages and confronted the abused child’s parents.  This made the situation vastly worse.  No doubt the parents were well-meaning, but they weren’t equipped to deal with the situation properly.  And that’s without even thinking about the ethical considerations of snooping the communications of other people’s kids, let alone those of your own kids.
I don’t know what Adams means by “mandatory, not optional”.  I assume he doesn’t mean that every photo taken, every text sent etc. immediately pops up on the parents’ phones, but he could well be suggesting exactly that; I wouldn't be surprised.  It seems as though it would do a lot more harm than good.  Should kids think before they take a photo or send a message?  Sure.  Should they be terrified to do so in case they are arbitrarily punished for doing so after the fact?  I think not.  In one scenario, they’re being trusted enough to learn how to grow up.  In the other, they are being forced to adhere to arbitrary rules they might not understand or agree with and are not being allowed to develop a personality.  They’re in a Skinner box.  Let's not put kids in a Skinner box.

If Adams instead means that parents should be allowed access to their kids’ phone activities in case it becomes necessary, the situation isn’t much better.  In this scenario, it’s the parents who get to decide what needs to be seen and what doesn’t.  This might work OK in situations where there is deserved trust between parent and child but not all such relationships are like that.  Consider the abused child asking for help.  Consider the gay or trans child of fundamentally religious parents trying to understand themselves.

Legislating that parents always get to see all their children’s phone activities is going to put some children in extreme danger or limit their ability to get out of danger.  But Adams doesn't understand what parenting is.
I know what you are going to say. You’re going to say good parenting is all you need. But my observation is that no more than 20% of kids can be “parented” away from temptation. The other 80% are totally out of luck.
No you don’t, Scott. I’m not even sure you know what you're going to say, half the time.  I’m not about to say that good parenting is all anyone needs, but let’s deal first with your “observation” that “no more than 20% of kids can be “parented” away from temptation.”

I’ve been staring at that sentence wondering what it can possibly mean.  I mean, to begin with, citation fucking needed and then “parented away from temptation”?  I… I… no, I don’t even.  Look, Scott, children are going to be tempted by all sorts of shit.  They’re going to succumb to some of those temptations.  It’s not even necessarily wrong if they do.  It’s called learning.  It’s called growing up. People have to make mistakes and rather than hovering over our children to prevent mistakes ever happening, we have to be there to pick up the pieces if they do.  That’s parenting, Scott.  It's interactive. I guess you didn't think of that.

Functional relationships are largely about trust, which is a mutual thing.  Can you trust absolutely that your kid won’t try drugs?  Absolutely not and I’m not convinced that you should be able to.  But can you trust that your kids will come to you for help when they need it?  That’s almost entirely up to you and if you’re surveilling them, imposing arbitrary rules, forcing them to divulge their communications and whereabouts at all times then newsflash: they aren’t going to trust you. And with excellent reason.

You can’t “parent away” potential drug use (whatever that can even possibly mean) but parenting is about a whole lot more than that.  It terrifies me that Adams might one day be a parent when he obviously doesn’t understand that simple fact and is so clear about how other parents should be.
My observation is that smartphones have made half of all adults mentally ill. I mean that literally, not figuratively. The business model of phones is addiction, not value. And they addict you at the expense of the things humans need in their lives to be happy and healthy.
Adams isn’t qualified to diagnose mental illness.  He’s also apparently (despite being trained as an economist) unqualified to identify business models.  The business model of phones isn’t addiction and there’s no evidence to suggest that phone addiction exists.  The business model of phone companies is about fooling us into revealing as much information about ourselves as can be wrung out of us.  I agree that there’s a problem here and I’ve written about it often, but the effect cannot be characterised as a mental illness. And Adams very, very clearly doesn't understand the phone company business model. How embarrassing.
Today I declare the phone companies to be enemies of the state. They are ruining everything you love, and everything you care about. And they are doing it right in front of you.
I love my cat.  She is un-ruined by phones.  I love the views from my house. I love the part of the world I live in and how it changes over the seasons. I love my spouse.  I love my friends.  I love science. I love engineering.  I love solving problems. I love crosswords. I love reading. I love logic. Not necessarily in that order.

Phone companies aren’t ruining any of those things.

What’s left in this article?  Oh yes, the usual Adams embarrassing bullshit:
If this is not already obvious to you, it probably means you’re a smartphone addict. A normal person’s brain will spontaneously generate a protective illusion to support an addiction. If you see no problem with smartphones causing drug addiction in kids, or you think I am exaggerating, you’re probably in the illusion.
Yeah, this is part of what Adams thinks of as masterful persuasion.  Do I even need to pick it apart?  The false dichotomy hurts. If you're not part of the etc. Scream if you want to go faster, Adams fans.
I’m going to delete any comments that say good parenting is all you need. That opinion would not be worthy of this topic.
Make of that what you will.  I'll delete any comments that don't rhyme with "clickhead". They won't be worthy of my mighty yet randomly idiotic understanding of something or other I can't define.

Tuesday, 5 July 2016

Your supermarket is spying on you and other shocking news

It’s easy to forget that we already live in a dystopian future. The breadcrumbs of personal data we scatter around us wherever we go are already being collected, aggregated and analysed to an extent where harmful privacy breaches are practically inevitable.

We’re under surveillance everywhere we go, including during our weekly supermarket shop. Surveillance in supermarkets is nothing new. For decades they’ve offered store cards which collect detailed information about your purchasing habits in exchange for (frankly insulting) incentives. The data collected includes when and where you shop, what things you routinely buy, when you tend to buy extravagant things, how likely you are to take advantage of special offers and much more. It is used to come to surprisingly complex (and often accurate) conclusions about you, your family, your lifestyle and your family’s lifestyle. With schemes such as Nectar in the UK, even more data is collected and more valuable conclusions drawn, because data is collected from a wide variety of shops of different kinds, more so if you also have a Nectar credit card.

Many people feel that this information is a fair price for the incentives offered. Privacy activists like me disagree… but that isn’t the point I’m making here. The point I’m making is that surveillance via store card is the tip of the iceberg. There are plenty of other very creepy things supermarkets routinely do and more that will surely appear in the near future.

With online shopping, supermarkets are able to track the items you buy, the ones you look at but don’t buy and the ones you buy instead. In physical stores, this has not been possible until recently. Nowadays, supermarkets can and do track your movements around their physical stores. If you (like most people) leave your phone’s wifi turned on when you visit the supermarket, then the store can track your movements, even if you don’t connect to the store wifi. From this, it can build a very intimate picture of your shopping habits. For example, it can determine how long you stare at a shelf of near-identical brands of washing powder before deciding which to buy and can compare that with your past behaviour. How easily are you affected by the specific placement of certain items? Can you be manipulated into buying the one with the highest profit margin? It can note the things people always forget to buy as they walk around the store and have to go back for. Are there trends in this data that can manipulate customers into buying things they don’t really want? Could the supermarket put a shelf with the things everyone forgets right at the far end of the supermarket, but put only the brands with the highest profit margins on that shelf?
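To make the mechanism concrete, here is a minimal sketch of how timestamped sightings of a phone (collected by in-store access points) could be turned into the dwell-time data described above. The device IDs, zone names and timestamps are invented for illustration; real deployments are more elaborate and must cope with things like MAC address randomisation.

```python
from collections import defaultdict

def dwell_times(sightings):
    """Given (device_id, zone, timestamp_seconds) sightings in time order,
    return the seconds each device spent in each zone, attributing the gap
    between consecutive sightings to the zone of the earlier sighting."""
    totals = defaultdict(int)   # (device, zone) -> seconds
    last = {}                   # device -> (zone, timestamp)
    for device, zone, ts in sightings:
        if device in last:
            prev_zone, prev_ts = last[device]
            totals[(device, prev_zone)] += ts - prev_ts
        last[device] = (zone, ts)
    return dict(totals)

sightings = [
    ("aa:bb", "entrance", 0),
    ("aa:bb", "laundry", 30),
    ("aa:bb", "laundry", 90),   # lingering at the washing-powder shelf
    ("aa:bb", "checkout", 100),
]
print(dwell_times(sightings))
# → {('aa:bb', 'entrance'): 30, ('aa:bb', 'laundry'): 70}
```

Even this toy version shows how easily "how long did this customer stare at the washing powder?" falls out of passively collected signals.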

With data like this, already being collected by supermarkets, coupled with eventual buying choices and place/time data collected by use of a store card, supermarkets can build a very intimate picture of what their customers buy and how they make buying decisions. This is not data you’d want to fall into the wrong – or even the ‘right’ - hands. And it’s set to get worse. Trials are already being carried out on various means of monitoring shelves so that data can be collected about which items a customer picks up but doesn’t eventually buy and which items they most closely scrutinise (and then whether that’s the one they buy). There are also trials being carried out on automatic expression recognition via CCTV. What are you thinking when you look at a product or display? Pleased? Excited? Disgusted? Bored? Confused? Will supermarkets also start to use automatic facial recognition to track those of us who turn our wifi off and pay in cash? I don’t see why not.

Data like this is used to create profiles of customers, to optimise displays, shelving and pricing and to offer customer incentives such as sales and coupons. It’s up to individual customers to decide whether they think this is creepy and manipulative or genuinely useful, but it’s not up to customers whether they are tracked in the first place.

The data is very valuable, of course, and will not be used only to decide where to put the baked beans. It (or subsets of it) will be sold to other companies, which will aggregate it with the things they know about us. And it will be stolen by people who want to use it to steal our identities too. It will be used to draw possibly false conclusions about us, which might haunt us in the future. If you don’t look sufficiently concerned when putting high value items into your trolley, will you be considered a poor credit risk by a completely different company in the future? Will health insurance companies count how many doughnuts you bought and look at your waist size as calculated from CCTV footage over time to decide whether to pay for your heart attack? I’m being flippant, but I don’t think these are particularly unrealistic scenarios.

Store cards are all very well. I don’t have one but I don’t look down on anyone who does. For them, the trade-off between privacy and money-off coupons is worth it. I’d argue with them that the trade-off only seems worth it because they probably don’t understand how their data is being used and misused, but that’s OK too; taking the time and effort to understand these things is a cost many people don’t think is worth paying. It’s up to them. They’re helping – like parents refusing to vaccinate their children – to create an environment that’s more dangerous for everyone else, but I think we have a little way to go yet before most people really start to see the downside of this abandonment of privacy. I’m not saying I won’t gloat when they do, but I understand why it’s difficult to take privacy seriously when it comes at the expense of convenience.

But while store cards are opt-in, the other surveillance methods employed by supermarkets are not. If I have to turn off my phone wifi, pay with cash in unordered notes and wear a disguise to the supermarket, then I can’t honestly say I’ve been given a realistic opportunity to opt out.

But I can’t end on a negative note. I think there are some excellent uses for store cards and supermarket tracking. Here’s my suggestion:

Your store card is issued 100 points when you enter the supermarket, to be redeemed upon checkout. This number ticks down the longer you are in the store and ticks down even faster whenever you stand still. That way, perhaps everyone in the supermarket will finally get out of my fucking way.

Friday, 20 May 2016

Free Chelsea Manning

This is not an offer to get a free Chelsea Manning, it's a post about why she shouldn't be in prison:
The issue with Manning being found guilty of a criminal offense for violating her employer's terms of use is one of precedent: if Manning is sent to jail for violating the fine print in her employment agreement, then so can anyone else who breaks their own employer's terms and conditions. That means that most of us could be sent to jail for things we do every day.
This isn't hyperbole.

I once signed an employment contract that prohibited 'horseplay'.  And a flat rental agreement that was altered to allow a pet which read "providing that the pet in question is not a horse kept in the bath".

The latter was a joke by the estate agent, but the horseplay clause (not sure what's going on with all these horses) was completely serious.  It was a clause designed to be deliberately vague so they could sack people with impunity for arbitrary reasons.

Of course, this was a company which had a light sensor to turn off the security lights, positioned directly in the glare of a security light.

We also built a test rig which was so poorly earthed that it was vitally important that you went to the toilet before touching it.

More spying on kids

There are some very worrying online resources about how to spy on your kids.  Many of them assume that spying on your kids is the right thing to do.

Some, like this one, purport to ask whether it's OK to spy on your kids but also include statements about the alleged tactics kids use when they find out you're spying on them.  Almost as if the answer to the question is self-evidently "yes".

That particular link contains the following on when you shouldn't spy on your kids:
If you have a teenager who meets her responsibilities, comes home on curfew, is where she says she’ll be when she said she’d be there, is hanging out with the people with whom she said she would be hanging out, and you have no reason to be suspicious about anything, I suggest you stay out of her room. And I think you should tell her that, too. You can say something like, “I’m not going to interfere with your privacy, because you’re doing so well. I have no reason not to trust you.” That way, she knows she’s being rewarded for her behavior—your lack of interference in her personal space is a direct result of her actions.
Yeah, that's... creepy.  And nonsensical. Threatening to invade your kids' privacy if they don't behave exactly according to your standards isn't going to develop trust and is likely to foster risky behaviour.
So when you spy on your otherwise responsible child, the message you’re sending is, “I don’t trust you, even when you haven’t done anything wrong.”
Way to miss the point.  Kids are going to make mistakes.  The way to deal with that is to talk about it and then let it go, not to invade their privacy in a doomed attempt to prevent them making further mistakes.  Mistakes are how we learn.
To be honest, I don’t like talking about rights; the word is just too overused in our culture. But here’s the deal: I believe that whoever’s name is on the mortgage has a right to look anywhere in their house. In my opinion, that’s your right because you own the house. 
Yeah, the thing here is that kids are people and you don't own them.  It's not like they even have much of a choice about where to live.  After all the rhapsodising about how it's a parent's responsibility to keep their children safe, we get this:
Many parents will ask, “Why should I tell him I’m going to [search his room]? He’ll only hide it outside of the house.” But that’s not your problem as a parent.
Yeah, as long as presumably dangerous activity isn't happening under your own, mortgaged, roof, there's nothing to worry about.

Here is a (hilariously inept) instruction manual for how to spy on your kids. That's its actual title - "how to spy on your kids online". This isn't beating about the bush.  But to be fair, there's some good advice in that article, hidden amongst the bullshit.

This is sort of sweet:
And be warned: Kids can learn how to delete the history to cover their tracks, so ask questions if you discover that the history was cleared by someone other than you.
Yeah, if your kids aren't smarter than that, you probably have more problems than you think.  Especially if you're not smarter than that either.
With most issues of safety -- climbing a tree, riding a bike, crossing the street -- we progressively give kids more freedom. But in the digital world, new and different risks come up as they grow. Your instinct might be to back off as they approach the tween years, but that's when to get even more involved.
This is not a clear thinker.  There is a considerable difference in risk between a four-year-old climbing a tree and a 14-year-old climbing a tree. They're climbing different trees.  There are 'new and different risks [in tree climbing] as they grow'.  The last sentence is worrying on every level.

The article ends with a very telling 'decoding' of some common abbreviations.  The author seems at least as concerned with figurative use of the word "fuck" as about actual safety.  As I said, telling.

 Abbreviations and code words speed up instant messaging and texting, but they also mask what people are saying! Brace yourself. Here are some commonly used terms:
ADIH: Another day in hell
A/S/L: Age, sex, location
BTDT: Been there done that
CULTR: See you later
GTFO: Get the f-ck out (expression of surprise)
H8: Hate
ILY or 143 or <3: I love you
JK or J/K: Just kidding
KWIM: Know what I mean?
LLS: Laughing like sh-t
LMIRL: Let's meet in real life
LYLAS (B): Love you like a sister (brother)
NIFOC: Naked in front of computer
PAW or PIR or P911: Parents are watching or Parent in room (drop the subject)
POS: Parent over shoulder (can also mean "piece of sh-t," used as insult)
Pr0n: Intentional misspelling of "porn"
STFU: Shut the f-ck up (expression of surprise rather than reprimand)
TMI: Too much information
TTFN: Ta ta, for now (goodbye)
WTF: What the f-ck?

Kids are complicated

Kids are complicated. They need privacy.
In this article, Livingstone walks us through the daily routine of her research subjects -- the way networks ebb and flow through their face to face interactions, family time, homework and leisure. Her account sharply highlights danah boyd's finding from her indispensable book It's Complicated, that teens prize face-to-face time above computer and phone time, but it has to be time with their peers and away from adult supervision -- a rare commodity in the era of bubblewrap child-rearing.
I've come across a few real life reasons why kids need privacy.  In one case, a girl was being abused by her parents and confided in a friend. They spoke about it on their phones.  The parents of the friend snooped her phone, found messages about the abuse and confronted the abusers. This made things much worse for the girl and put her in even more danger.

I'm not suggesting that the abuse should have been kept secret.  Clearly the abusers needed to be stopped.  I'm saying that snooping on your own kid's phone can have dire consequences for other people.  Because kids are complicated.  In this case, the assumption of privacy was vital; the girl likely wouldn't have confided in anyone if she thought it would be intercepted.  If your kids know you're snooping on their phones, they won't use them to communicate about sensitive things.  If they know you're tracking their phones they won't take them when they go somewhere without your approval.

Spying on your kids is likely only to put them at greater risk.  Fostering a trusting environment is a lot more difficult but obviously superior.  Respect your kids.  Accept their need for privacy.