Thursday, 5 October 2017

Age verification is a disaster waiting to happen

The UK Prime Minister Theresa May is planning to force porn sites to adopt some means of verifying the age of visitors before serving them any porn.  Any porn site not implementing such measures will be placed on a blacklist and ISPs will be forced to block them. For those people terrified of children being exposed to pornography, nothing could be more reasonable.

Indeed, porn is a worrying thing for a variety of reasons. Are the performers in a position where they can give meaningful consent? Are they victims of trafficking or their own economic situation? How does the content portray sex and - particularly - women?  Is porn normalising terrible attitudes toward what is expected of women and how they ought to be treated?

These are all relevant concerns and ones we should consider when we access pornography. Evidence (and common sense) indicates that people are harmed in the making and consumption of porn and we should ask ourselves whether that price is too great. We should consider whether ethically-sourced porn is truly possible and sustainable.  If not, we should probably just stop accessing it altogether until it is.

So I agree that there are serious concerns about the porn industry and about children being brought up on a diet of misogynistic pornography. Of course, the antidote to this isn't age verification (which kids will get around without serious effort) but requires changing attitudes toward what sort of thing is acceptable in porn.  Probably better to allow kids (and adults) easy access to porn that treats and depicts women with fairness and respect than to hide it away. Prudishness is not the answer either.

But May has decided that age verification is the way ahead and that's what we'll have to deal with. There are several problems with this approach beginning with the fact that it is absolute bullshit.

First, it won't work. Does May seriously believe that children can't get around age verification? They've been doing it for years.  I was never once even asked to prove my age in all my years of underage drinking but I'm sure I could have come up with fake ID if I needed to. It's especially easy these days.

Second, it's a disaster waiting to happen.  Let's examine the two most common age verification methods.

The first is to use credit cards. Kids can't have credit cards, so anyone wielding one must be an adult, right? If May believes that, she hasn't seen the dozens of stories that turn up every year about kids who have racked up enormous credit card bills ordering stuff with their parents' credit cards. It's even easier on free-to-access sites such as YouPorn: since nothing is ever charged, the credit card statement will show no suspicious activity and parents won't even know their card has been used.
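And because nothing is charged, the strongest check a lazy implementation can run is that the number merely looks like a card number - the Luhn checksum that all real card numbers satisfy. A minimal sketch of that check (my assumption about what a careless site might do, not any real site's code):

def luhn_valid(card_number: str) -> bool:
    # Returns True if the digits pass the Luhn checksum. That proves only
    # that the number is well-formed; it says nothing about whose card it
    # is or how old the person typing it is.
    digits = [int(d) for d in card_number if d.isdigit()]
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("4111 1111 1111 1111"))  # True: a well-known test number passes

A borrowed or generated number sails through a check like this. A more serious implementation would at least attempt an authorisation against the card, and even that only proves the card exists, not who is holding it or how old they are.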

So children will easily circumvent this method of age verification. Besides, if their parents already have accounts with porn sites, their children will likely find a way into them without any need to prove who they are.

Using credit cards for age verification is a disaster waiting to happen. Porn sites will suddenly have huge lists of their customers' credit card numbers as well as their porn viewing habits. If you go to YouPorn today, you can do so anonymously. If you go there post-credit-card-age-verification, you are vulnerable to blackmail for doing something that pretty much everyone does from time to time. Even assuming that the porn site and its employees are eminently reputable, someone is bound to hack that site sooner rather than later. Our credit cards and porn-viewing records will be stolen and available to identity thieves and blackmailers the world over.

The second method is to use two-factor authentication using a phone.  When you log in, the site sends a code to your mobile phone and you type that into the site to gain access.  The mobile company knows how old you are so it can decide whether to serve you pornography.
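For concreteness, the flow is roughly this (a minimal sketch; the six-digit code, the five-minute expiry and the send_sms stub are all my assumptions, not any particular site's implementation):

import secrets
import time

CODE_TTL_SECONDS = 300          # assumed five-minute expiry window
pending = {}                    # phone number -> (code, expiry timestamp)

def send_sms(phone_number, text):
    # Stand-in for a real SMS gateway call.
    print(f"[SMS to {phone_number}] {text}")

def start_verification(phone_number):
    code = f"{secrets.randbelow(1_000_000):06d}"
    pending[phone_number] = (code, time.time() + CODE_TTL_SECONDS)
    send_sms(phone_number, f"Your access code is {code}")

def check_code(phone_number, submitted):
    code, expiry = pending.get(phone_number, (None, 0.0))
    return (code is not None
            and time.time() < expiry
            and secrets.compare_digest(code, submitted))

Note what passing the check actually establishes: that whoever typed the code was holding that phone at that moment. Everything else - that the contract holder is an adult and that the person holding the phone is the contract holder - is assumed.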

This won't work either. For one thing, parents get phones in their own names for their kids, since kids are not eligible for credit. For another, it's not very difficult to buy phones whose operators do not know how old the customer is.  For yet another, how difficult would it be for kids to wait around until a parent's phone is unattended?

If anything, using phone verification is an even bigger disaster waiting to happen.  Now the porn sites have your phone number instead of your credit card, which doesn't make the lives of blackmailers or identity thieves any more difficult. But now your phone company knows about your viewing habits too and it will also be hacked sooner rather than later.

And then there's a final problem. Who gets to decide which sites contain porn? Well, the government, of course. I worry about how transparent this process will be and how transparent it will remain over time.  I worry a lot about mission creep: what other identification demands will this or subsequent governments make down the line?

This, I suspect, is the agenda. It probably has less to do with porn than the government claims it does. It probably has more to do with their feverish desire to track absolutely everything we do online regardless of the societal cost. This seemingly reasonable proposal about porn is the thin end of one motherfucking wide wedge.

Friday, 29 September 2017

EC makes idiotic plans for dangerous and unworkable mass online censorship

Copyright. Because, disappointingly, I couldn't find any good images for 'copyshite'.
The European Commission has outlined plans for mandatory copyright filters on sites the public are allowed to post on. The filters will automatically decide whether some user-posted content - a video, say - infringes copyright and will automatically take down that content without input from any pesky, expensive humans.

This is the sort of thing Theresa May means when she talks about companies like Facebook using technology to take down 'extremist' content. But more on that later.

There are two problems with this idea. 

First, it is a clear violation of our human rights. These measures would constitute mass surveillance of all the content we post online and the automatic removal of anything a company or government doesn't want us to post.  This should worry you.  Governments and companies - however apparently benign - always want to censor complaints.  There's never been such a thing as a benign government, and companies tend to get less benign in direct proportion to the power they gain over their customers.  More worryingly still, governments are going to want access to the outputs of these filters. They're going to want to know who is posting what kind of content across multiple sites.  They'll want to know who's 'making trouble' by criticising party lines, and what better way to do that than by recognising what materials they've referenced in their works?  Companies will want access to the filters too, and the ones running the filters will be all too happy to sell that access.  For a lot of companies, this kind of data would be pure gold. It would help them to better identify people as sales targets and to take down fair and protected criticism of their content.

Second, it won't work.  We know this because YouTube.  YouTube has spent years and several fortunes trying to solve this problem and the results are notoriously terrible.
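To get a feel for why it's so hard, consider the crudest possible filter: fingerprint the rights holders' catalogue in chunks and flag any upload that shares a chunk. A toy sketch in Python (nothing like Content ID, which is vastly more sophisticated and still gets it wrong):

import hashlib

CHUNK = 16  # bytes per fingerprint; absurdly small, purely for illustration

def catalogue_fingerprints(work):
    # Non-overlapping chunks of the protected work.
    return {hashlib.sha256(work[i:i + CHUNK]).digest()
            for i in range(0, len(work) - CHUNK + 1, CHUNK)}

def flagged(upload, catalogue):
    # Slide a window over the upload so a copied chunk is found
    # wherever the quoted passage happens to start.
    return any(hashlib.sha256(upload[i:i + CHUNK]).digest() in catalogue
               for i in range(len(upload) - CHUNK + 1))

work = b"It was the best of times, it was the worst of times, " * 4
catalogue = catalogue_fingerprints(work)

# A review quoting a short excerpt - classic fair dealing - is flagged
# just as surely as a wholesale copy, because the filter sees bytes,
# not context.
review = b"Her essay opens by quoting 'It was the best of times' and moves on."
print(flagged(review, catalogue))  # True

Real systems match audio and video fingerprints rather than raw bytes, but the structural problem is the same: a match can only say 'this resembles that'; it can never say 'this is a quotation, a parody or a news report'.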

Julia Reda is a German MEP and member of the Pirate Party Germany.  She wrote an article on (mostly) this second point. Filters of this sort don't work, have never worked and likely will never work, even if we wanted them to, which we shouldn't.

Here are some highlights:
5. Memory holes in the Syrian Archive
Another kind of filter in use on YouTube, and endorsed by the Commission, is designed to remove “extremist material”. It tries to detect suspicious content like ISIS flags in videos. What it also found and removed, however: Tens of thousands of videos documenting atrocities in Syria – in effect, it silenced efforts to expose war crimes.
I'll also wave vaguely in the direction of my point above about companies ravenously buying up access to filters. What better way to identify and censor whistleblowers?
6. Political speech removed
My colleague Marietje Schaake uploaded footage of a debate on torture in the European Parliament to YouTube – only to have it taken down for supposed violation of their community guidelines, depriving citizens of the ability to watch their elected representatives debate policy. Google later blamed a malfunctioning spam filter.
It was sweet of them to blame an oaf, but really it was.... Ironically, though, I can't find the Simpsons clip I wanted to link to here to illustrate this. I wonder why.
7. Marginalised voices drowned out
Some kinds of filters are used not to take down content, but to classify whether it’s acceptable to advertisers and therefore eligible for monetisation, or suitable for minors.
Recently, queer YouTubers found their videos blocked from monetization or hidden in supposedly child-friendly ‘restricted mode’ en masse – suggesting that the filter has somehow arrived at the judgement that LGBT* topics are not worthy of being widely seen.
Read the full article. And share it. But I think it's useful to set out here the lessons Reda draws - lessons the EC has supposedly learned but obviously hasn't:
  • Lesson: That such a ridiculous error was made after years of investment into filtering technology shows: It’s extremely hard to get this technology right – if it is possible at all.
  • Lesson: Copyright exceptions and limitations are essential to ensure the human rights to freedom of expression and to take part in cultural life. They allow us to quote from works, to create parodies and to use copyrighted works in education. Filters can’t determine whether a use is covered by an exception – undermining our fundamental rights.
  • Lesson: Public domain content is at risk from filters designed only with copyrighted content in mind, and where humans are involved at no point in the process.
  • Lesson: Automatic filters give big business all the power. Individual users are considered guilty until proven innocent: While the takedown is automatic, the recovery of an illegitimate takedown requires a cumbersome fight.
  • Lesson: Filters can’t understand the context of a video sufficiently to determine whether it should be deleted.
  • Lesson: There are no safeguards in place to ensure that even the most obviously benign political speech isn’t caught up in automated filters
  • Lesson: Already marginalised communities may be among the most vulnerable to have their expression filtered away.
  • Lesson: Legitimate, licensed uploads regularly get caught in filters.
  • Lesson: Not even those wielding the filters have their algorithms under control.
Remember: all these lessons are illustrated with examples in Julia Reda's article.

Remember: lawmakers are recommending forcing this kind of clusterfuck on millions of people without understanding that these filters don't work or caring that they violate our most basic human rights.

Remember that Julia Reda and others (the EFF, for example) are fighting this and that you can support them.

Tuesday, 19 September 2017

W3C makes idiotic decision, EFF resigns

This is terrible news. The World Wide Web Consortium (W3C) is the organisation that controls open standards for the web. Its mission has always been to protect the interests of ordinary web users as well as those of the corporate members of the W3C.  Today it abandoned the principle of empowering web users in favour of bowing to the pressure of corporate members acting in their own interests. The web is set to become a worse place because of this.

As Cory Doctorow writes at Boing Boing:
In July, the Director of the World Wide Web Consortium overruled dozens of members' objections to publishing a DRM standard [for video] without a compromise to protect accessibility, security research, archiving, and competition.
As a direct result, the Electronic Frontier Foundation (EFF) has resigned from the consortium. This is also very bad news indeed. The EFF has consistently had our collective interests at heart since its formation in 1990. Without their presence at the table, who knows what further havoc the W3C will wreak.

DRM (Digital Rights Management) is some electronic means by which content publishers can protect their content from being pirated. This would be perfectly acceptable - no reasonable person wishes to deprive artists or software writers of their rightful income - if it weren't for the fact that the model is deeply flawed.  DRM usually means that we don't, in fact, own the movie or the music or the software we just bought. We own a license to use that content, a license which can be taken away from us by the company if they don't like what we're doing with it.

This means that companies can prevent us from lending music or books to other people. They can stop us moving content to different media so we can back it up or play it in different circumstances (on our phones, from a CD, etc.). It means they can summarily cut access to our content or stop our computers from working if they decide we've violated their terms and conditions. It means that they can stop us from using third-party ink in our printers because they want us to buy the more expensive kind from them. And it means they can artificially inflate the price of that ink in the knowledge that we'll have to pay.

In these days of the Internet of Shit Things, where all manner of devices in our lives - from light bulbs to cars - are internet connected, this is especially troubling. If manufacturers can turn off our cars if they think we're voiding the warranty or can tie us into mandatory deals with 'approved' insurance companies (who might also be able to turn off the car if they don't like the way we're driving or the places we're driving to) then we're pretty much fucked.

This is not an outlandish scenario. Some vehicle manufacturers already make it virtually impossible for a third party (or the owner) to service and fix their own vehicles because DRM prevents access to diagnostic tools.  This is made worse by bewilderingly terrible laws around the world which make it illegal to disrupt or bypass DRM.  The Digital Millennium Copyright Act in the US, for example, makes bypassing DRM a federal offense, and modifying the software on a John Deere tractor so that you can diagnose and fix a problem can land you a five-year prison sentence.

Not only do you not own your music and books, you don't own your car, tractor or many of the other things you think you own. They can be taken away from you on a whim and there's not a great deal you can do about it.

Adopting DRM as a web standard is especially troubling. It means that browsers get to decide how you can use content based on the arbitrary whim of the consortia providing that content. This has dire implications in a number of important areas:

  • Security researchers might not be permitted to break DRM to investigate whether products are safe. Users won't know whether the services they're using are secure and companies will likely be disincentivised from ensuring their products are safe.
  • Archiving of important content will be impossible (without breaking DRM, which is illegal in some countries and very likely to become so in many more very soon). Losing a great work of art to the selfish interests of a rights owner is tragic. Losing documentaries of atrocities and the true version of history is an abomination. That governments will seek to erase such records if they have the means is simply a given.
  • Accessibility will be compromised. For example, subtitling of videos is essential to those with hearing difficulties. 300 hours of video are uploaded to YouTube every minute. Automatic subtitling is the only way to cope with this volume and that would involve breaking DRM.
  • The big media and software companies reached their position through innovation: they provided new platforms that circumvented established companies who would otherwise have been their competitors. Adopting DRM as a web standard will prevent new companies from doing the same thing, stifling competition.
Read the article for more details, but I quote the EFF's letter of resignation from the W3C in full below. I have a feeling that the EFF won't mind.

The W3C's decision is due to pressure from its big consortium members. It has allowed the interests of a small number of rich people to run roughshod over the interests of billions of internet users. This is the exact opposite of the freedoms it was established to protect. It wouldn't surprise me if its founder and current director, Tim Berners-Lee, found himself being appointed to the board of various member consortia in the near future. The most disappointing part of all this is that the W3C doesn't seem to understand (or understands but doesn't care) that it holds the power. What are the powerful consortium members going to do? Build a new web?

The EFF's campaign against DRM as a web standard has been exemplary. It has suggested compromises which would have given the pro-DRM camp everything they want (or at least, everything they claim to want) while protecting the rights of users.

Cory Doctorow again:
EFF appealed the decision, the first-ever appeal in W3C history, which concluded last week with a deeply divided membership. 58.4% of the group voted to go on with publication, and the W3C did so today, an unprecedented move in a body that has always operated on consensus and compromise. In their public statements about the standard, the W3C executive repeatedly said that they didn't think the DRM advocates would be willing to compromise, and in the absence of such willingness, the exec have given them everything they demanded.
They "didn't think" the pro-DRM partners would be willing to compromise so instead of forcing them to compromise or taking the motion off the table entirely, they decided to screw us all.

Here's the EFF's resignation letter:

Dear Jeff, Tim, and colleagues, 
In 2013, EFF was disappointed to learn that the W3C had taken on the project of standardizing “Encrypted Media Extensions,” an API whose sole function was to provide a first-class role for DRM within the Web browser ecosystem. By doing so, the organization offered the use of its patent pool, its staff support, and its moral authority to the idea that browsers can and should be designed to cede control over key aspects from users to remote parties.
When it became clear, following our formal objection, that the W3C's largest corporate members and leadership were wedded to this project despite strong discontent from within the W3C membership and staff, their most important partners, and other supporters of the open Web, we proposed a compromise. We agreed to stand down regarding the EME standard, provided that the W3C extend its existing IPR policies to deter members from using DRM laws in connection with the EME (such as Section 1201 of the US Digital Millennium Copyright Act or European national implementations of Article 6 of the EUCD) except in combination with another cause of action.
This covenant would allow the W3C's large corporate members to enforce their copyrights. Indeed, it kept intact every legal right to which entertainment companies, DRM vendors, and their business partners can otherwise lay claim. The compromise merely restricted their ability to use the W3C's DRM to shut down legitimate activities, like research and modifications, that required circumvention of DRM. It would signal to the world that the W3C wanted to make a difference in how DRM was enforced: that it would use its authority to draw a line between the acceptability of DRM as an optional technology, as opposed to an excuse to undermine legitimate research and innovation.
More directly, such a covenant would have helped protect the key stakeholders, present and future, who both depend on the openness of the Web, and who actively work to protect its safety and universality. It would offer some legal clarity for those who bypass DRM to engage in security research to find defects that would endanger billions of web users; or who automate the creation of enhanced, accessible video for people with disabilities; or who archive the Web for posterity. It would help protect new market entrants intent on creating competitive, innovative products, unimagined by the vendors locking down web video.
Despite the support of W3C members from many sectors, the leadership of the W3C rejected this compromise. The W3C leadership countered with proposals — like the chartering of a nonbinding discussion group on the policy questions that was not scheduled to report in until long after the EME ship had sailed — that would have still left researchers, governments, archives, security experts unprotected.
The W3C is a body that ostensibly operates on consensus. Nevertheless, as the coalition in support of a DRM compromise grew and grew — and the large corporate members continued to reject any meaningful compromise — the W3C leadership persisted in treating EME as a topic that could be decided by one side of the debate.  In essence, a core of EME proponents was able to impose its will on the Consortium, over the wishes of a sizeable group of objectors — and every person who uses the web. The Director decided to personally override every single objection raised by the members, articulating several benefits that EME offered over the DRM that HTML5 had made impossible.
But those very benefits (such as improvements to accessibility and privacy) depend on the public being able to exercise rights they lose under DRM law — which meant that without the compromise the Director was overriding, none of those benefits could be realized, either. That rejection prompted the first appeal against the Director in W3C history. 
In our campaigning on this issue, we have spoken to many, many members' representatives who privately confided their belief that the EME was a terrible idea (generally they used stronger language) and their sincere desire that their employer wasn't on the wrong side of this issue. This is unsurprising. You have to search long and hard to find an independent technologist who believes that DRM is possible, let alone a good idea. Yet, somewhere along the way, the business values of those outside the web got important enough, and the values of technologists who built it got disposable enough, that even the wise elders who make our standards voted for something they know to be a fool's errand. 
We believe they will regret that choice. Today, the W3C bequeaths a legally unauditable attack-surface to browsers used by billions of people. They give media companies the power to sue or intimidate away those who might re-purpose video for people with disabilities. They side against the archivists who are scrambling to preserve the public record of our era. The W3C process has been abused by companies that made their fortunes by upsetting the established order, and now, thanks to EME, they’ll be able to ensure no one ever subjects them to the same innovative pressures.
So we'll keep fighting to keep the web free and open. We'll keep suing the US government to overturn the laws that make DRM so toxic, and we'll keep bringing that fight to the world's legislatures that are being misled by the US Trade Representative to instigate local equivalents to America's legal mistakes.
We will renew our work to battle the media companies that fail to adapt videos for accessibility purposes, even though the W3C squandered the perfect moment to exact a promise to protect those who are doing that work for them.
We will defend those who are put in harm's way for blowing the whistle on defects in EME implementations. 
It is a tragedy that we will be doing that without our friends at the W3C, and with the world believing that the pioneers and creators of the web no longer care about these matters. 
Effective today, EFF is resigning from the W3C. 
Thank you,
Cory Doctorow
Advisory Committee Representative to the W3C for the Electronic Frontier Foundation


Tuesday, 12 September 2017

Guy misses point about privacy, at least doesn't claim to be cyborg

A rail company replaces tickets with a chip in your hand. The conductor scans the chip with a phone (presumably it's an NFC chip). The phone identifies the customer from the ID number stored on the chip then looks in the database to find out whether the customer has paid for the journey.

It means you can't lose your ticket, I guess, but it leaves a trail which is hackable by criminals and governments and is presumably subject to court orders from law enforcement agencies. It's not for me, but I won't judge people who find the convenience worth all the leaked privacy. And I have to admit that this kind of thing is still pretty cool, even though it's generally a bad idea.

There's an almost criminal lack of detail in this short video (which was filmed for the Travel Show of all things), but there's enough to get the general idea. The WARNING WILL ROBINSON part comes where the presenter asks the guy from the company about privacy. He says it's OK because the chip doesn't transmit and that it only contains a customer ID so nobody "outside the company" will be able to find out anything about you.

Yeah, that's all bollocks. Of course the chip transmits, otherwise how could the phone read it? What he means is that the chip doesn't have a power source so it only transmits when an NFC reader (presumably such as the one on my phone) is in close proximity and powers up the chip itself.  Surely there's no way to scan someone's chip without their knowing, right?  Yeah, and pockets never get picked.

But that's not the main issue. The company guy is arguing that the ID number is not very useful by itself because all the information about the customer is in a database somewhere else, which is assumed to be secure.

It isn't. Someone will get at it sooner or later. Let's look at how this system might work:

Let's give the company the benefit of the doubt and say that the conductor's phone contains only a list of IDs of the customers who have paid for a particular journey. That would be the most secure option and the only thing the conductor needs to know. Live access by the phone to the company's database, for example, would be a very bad idea for lots of reasons. In the interest of brevity, I won't cover them here (ask in the comments if you're interested).
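Even in that best case, the check on the phone is nothing more than a set-membership test against opaque chip IDs - something like this sketch, in which every ID is made up:

# Chip UIDs that have paid for this journey; nothing personal on the device.
paid_ids_for_journey = {
    "04:a2:19:6f:2c:80:99",
    "04:77:0b:12:aa:41:03",
}

def ticket_valid(scanned_chip_id):
    return scanned_chip_id in paid_ids_for_journey

print(ticket_valid("04:a2:19:6f:2c:80:99"))  # True: travel allowed
print(ticket_valid("04:de:ad:be:ef:00:01"))  # False: no ticket found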

But that list will need to be updated before and during the journey. This could happen via wifi or mobile networks. So perhaps the train has a server which is constantly updated over a mobile connection and the phone talks to the server by wifi. Or maybe the phone itself is updated directly over the mobile network. Whichever scenario is used, these connections are all attractive points of attack, and the chances of there being a way to attack them are not particularly slim.

Once someone has got into that system, there's a good chance that mischief will be possible. Even if the network interface strictly implements the rule that customers are referred to only by their ID numbers, what's the betting that the company that supplies the kit and software won't be able to resist bundling extra features? What if the customer wants a physical receipt to claim expenses? Will the conductor be able to print one? In which case the transaction and customer data will need to travel over the air at some point, where it is vulnerable. We can think of many other scenarios where the customer's data would not be 'safely' behind a firewall.

And that's before someone has managed to get hold of the conductor's phone, in which case all security bets are off.

But all of these potential attacks are a needlessly elaborate way to get access to sensitive personal and travel information. It's all right there in a server somewhere, ripe for stealing. Hackers, unfriendly governments (domestic and foreign) and law enforcement agencies would be able to get access to all the customer information and the journeys they'd made by hacking the company's servers or getting a court order. Ill-intentioned people within the company might also be able to get this data. It's not as though that sort of thing doesn't happen all the time.

Why would this matter? Here are a few scenarios:

  1. This is data you don't want to get into the hands of stalkers or abusive exes. I know of at least one case where an abusive ex-husband got hold of his ex-wife's location via her lease car's built-in tracking system (which was there to disable it in case she stopped making payments, itself a security threat). Or what if the conductor took a shine to a customer and abused the system to find out where they lived, what other journeys they make and so on?
  2. Blackmailers (who might be criminals, governments etc) could search for unusual journeys by brute force and might, with a tiny amount of additional detective work, find targets.  They could also determine to some degree which customers might have been traveling together.
  3. Law enforcement agencies might search the database to find targets for further inquiry, putting everyone who traveled on a particular route under suspicion. Once someone is a target of suspicion, they might be subject to additional scrutiny and when mining data from several sources, it's hard not to find a fit, even if it's not for the crime under investigation.
  4. People will be able to tell whether you actually traveled on that train. You may buy a paper ticket for someone else. You might send an e-ticket to their phone. A ticket is a transferable document, even if it isn't physical. But a chip-ticket is not. I might buy a ticket for someone fleeing violence, for example, so that person could not be traced by their persecutors. But not if it's a chip-ticket.

I don't mind the fact that many people will find this a convenient way to pay although I wouldn't encourage it (especially if you're up to something). What bothers me is the blasé attitude of the guy from the company who is lying to the BBC when he says there isn't a privacy issue. He goes on to spout the usual nonsense about unlocking your car and house and so on. Yeah it can be done, but yeah it's a stupid way to do it.

Friday, 8 September 2017

The shop where you pay with privacy

Kaspersky Lab has opened a pop-up store in London. It doesn't accept money, though; you pay with private data.  It's a publicity stunt, of course, but a good one.  People have been 'buying' stuff too, even though Kaspersky usually gives this stuff away for free.

The prices of the items vary:
[...]to acquire the mug, you had to hand over three photos, or screenshots of your WhatsApp, SMS and email conversations, to Kaspersky. To buy the t-shirt, it had to be the last three photos on your Camera Roll — so you couldn't be selective — or the last three messages on your phone. The original print, finally, forced you to hand over your phone. A member of staff would then poke around and select five photos or three screenshots. 
What's interesting is how punters are reacting:
There was a mixture of excitement and nervousness in the store. Some people were caught off guard and immediately started rummaging through their phones, checking photos and messages for anything that might cause embarrassment. [...] "But [a member of staff said] when you're actually asked to exchange this private information and walk away with something that does have monetary value, people are like, 'Whoa! What is actually on my telephone? What are the messages that I've sent?' It's a little bit scary."
The reporter, Nick Summers, had originally intended to go for the original print, handing over his phone for staff to poke around in. He panicked, though, and went for a mug instead (choosing three pictures to disclose). I don't blame him: I am abnormally well aware of what's on my phone because I'm paranoid about that sort of thing, but I'd certainly hesitate before handing anyone my phone.

What I like about this is that the customers are obviously willing participants, playing along because they want to experience the panic of willfully exposing private data.  They're learning about privacy through role play, which is a great idea.

Wednesday, 6 September 2017

How to trust?

Trust is an enormously important part of security. How and why we trust things is the basis for both security in principle and security in practice. Ultimately, we have to trust our password managers, for example, and - more importantly - the services we sign up for. I know that Amazon is putting my data to dubious and potentially harmful purposes. I trust that my VPN provider is not.

That trust might well be misguided. It's based partly on reputation and partly on a vague belief that businesses probably won't harm their long-term future to make a fast buck. This is not a safe assumption, even in the VPN business. Due to the mistrust most citizens of most countries currently have for their government, VPN companies have sprung up all over the place. Quite a lot of them are fraudulent, either not actually providing a proper VPN service or sucking up and exploiting your data willy-nilly.

We live in an age where the US president has shown us that he can break the rules everyone thought were there to limit the power of presidents.... and nobody will do a thing about it.

We live in an age where trusted retail pharmacists such as Boots in the UK sell quack medicines alongside actual medicines with no way to tell the two apart.

So trust is important, all the more so because it can be so easily subverted when people just don't play by the rules we expect them to play by.  We need to establish better bases for trust and one of those is reputation. We look at the reviews. We're all pretty savvy, right? We look at the negative reviews first and use them to establish a kind of context for the positive reviews. If the negative reviews are all about late delivery or the occasional breakage or that the buyer clearly ordered the wrong thing in the first place, we are more comfortable accepting the positive reviews.

We do that because glowing reviews are easy to manufacture and bad ones are more difficult to fake.  And we do it because we tend to think we can recognise fake reviews of either type.  We're probably kidding ourselves.

Cory Doctorow writes:
One of the reasons that online review sites still have some utility is that "crowdturfing" attacks (in which reviewers are paid to write convincing fake reviews to artificially raise or lower a business or product's ranking) are expensive to do well, and cheap attacks are pretty easy to spot and nuke. 
But in a new paper, a group of University of Chicago computer scientists show that they were able to train a Recurrent Neural Network (RNN) to write fake reviews that test subjects could not distinguish from real reviews -- and moreover, subjects were likely to rate these as "helpful" reviews.
The bad news is that neural nets can write convincing reviews. The good news is that other neural nets are even better at spotting fake reviews. The worse news is that they need our metadata to do this, the most obvious being our social graphs. Presumably anonymous reviews will do little to train these neural nets to recognise fake reviews, further reducing any basis we have for trust.

Those of us who work to put metadata back into our own hands might be inadvertently harming everyone by making trust ever harder to come by.

Tuesday, 5 September 2017

Fridge Porn

I've never bought into the idea that the fridge is somehow a family hub and therefore should be part of the Internet of Shit. I don't even understand why families need a hub. Everyone in every family I know has at least one internet-connected device and is contactable at pretty much any time. Why should family members have to come home and stare at the fridge to find out that they should have bought milk while they were out?

And for that matter, what's wrong with bits of paper and magnets?

But that aside, here's a story of the inevitable.  Remember back in the 80s when electronics stores had things we called 'home computers' that we could play with?  Who didn't go in there and write something like:
10 PRINT "FUCK OFF "
20 GOTO 10
on every BBC Micro they had?

Nobody, that's who.

Here is the modern day equivalent. Someone drew dick pics on sticky notes on a fridge in a Home Depot shop and left porn streaming in a browser. Because of course they did, who wouldn't?
This was discovered when the organization’s head toured a visitor through the office, and wanted to show off a streaming feature on the Samsung fridge.
Fantastic.


Tuesday, 15 August 2017

Surveillance self-defense

A good set of resources from EFF about the basics of anti-surveillance protection.

Read them! Send them to your friends and family!  Security and privacy are a joint enterprise.

Friday, 11 August 2017

You can hack gene sequencers by hiding malware in DNA

This is seriously cool.

Today at the Usenix Security conference, a group of University of Washington researchers will present a paper showing how they wrote a piece of malware that attacks common gene-sequencing devices and encoded it into a strand of DNA: gene sequencers that read the malware are corrupted by it, giving control to the attackers.
I sometimes forget that we're living in the 21st century. 

Thursday, 10 August 2017

Amber Rudd breaks the irony meter

The UK Home Secretary, Amber Rudd, is no fan of encryption. She's said that 'real' people don't need encryption as an argument against secure communications apps such as Whisper Systems' private messaging service Signal, which uses end-to-end (e2e) encryption, meaning that not even the operators themselves can intercept their users' messages.

It's ironic, then, that she has fallen victim to a prank which would not have been possible if she - presumably a real person - had used encryption.

The now-notorious email prankster known as Sinon Reborn set up an email address in the name of Theresa May's communications chief, Robbie Gibb.  Reborn emailed Rudd's parliamentary email address and she replied from a private address.
“I managed to speak to a home secretary with relative ease on her personal email address,” Reborn told the Guardian. “I replied again saying: ‘Don’t you think you should be more aware of cyber security if you are home secretary?’ and I never got a reply from that.”
This ought to be embarrassing for any cabinet member. I'm sure there are numerous guidelines and memos on this, especially as Reborn has pulled the same trick on other high profile figures:
The same hoaxer has tricked the son of the US president, Eric Trump, the next US ambassador to Russia, Jon Huntsman Jr, and the former White House communications chief Anthony Scaramucci, sparking an investigation in Washington into cyber-security. He has also duped the governor of the Bank of England, Mark Carney, and Barclays boss Jes Staley by setting up fake email accounts.
It's especially embarrassing for the minister who is supposed to be in charge of cyber-security.

And even more so given her strong anti-encryption stance. If MPs and government staff used encryption, then Rudd could have verified that the email was really from Gibb.
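Here, for illustration, is roughly what that verification looks like with public-key signatures, sketched with the PyNaCl library. Nothing suggests government email actually works this way - which is rather the point:

from nacl.signing import SigningKey
from nacl.exceptions import BadSignatureError

# Gibb would hold the signing key; everyone else holds only the public
# verify key, distributed through some trusted internal directory.
gibb_signing_key = SigningKey.generate()
gibb_verify_key = gibb_signing_key.verify_key

genuine = gibb_signing_key.sign(b"Fancy a chat about the reshuffle?")
gibb_verify_key.verify(genuine)  # passes: really signed by Gibb's key

# A prankster with a look-alike address has no access to the signing key,
# so whatever signature they attach will fail verification.
try:
    gibb_verify_key.verify(b"Fancy a chat about the reshuffle?", bytes(64))
except BadSignatureError:
    print("Forgery detected")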
A Home Office source confirmed that the exchange had taken place, but said Rudd does not use her personal email address to discuss government business. “As the email exchange shows, she rapidly established that this was a hoax and had only exchanged pleasantries up to that point.”
That, of course, is not the point. It was still a security breach and a national embarrassment. That it happened to the minister who is supposed to lead us through an age of rising cyber-crime is also terrifying.

Thursday, 3 August 2017

Not this again

The BBC reports that someone has put a chip in his body to unlock his car. It is not clear why, although his evident, undeserved smugness is likely reason enough for him. It's also unclear that there's even a very credible security advantage, since hacking car locks has so far proved easier than stealing people's keys.


But I'm biased. It reminds me too much of the pointless Kevin Warwick, who has for decades been claiming to be a cyborg because he had an RFID chip in his arm. Having an RFID chip 1mm outside your skin in a badge doesn't make you a cyborg but having one 1mm on the other side does, apparently. The distinction without a difference has certainly earned him a lot of stupefyingly dull and stupid column inches over the years.

I've nothing in principle against using implants for authentication and I've no doubt it'll happen in the near future. It'll be convenient, but it won't pay to underestimate the security concerns, or the practical ones, for that matter.

It seems a nice idea, for example, to use an implant for 2FA alongside a physical artifact such as car keys, but then how do you lend your car to someone else or even allow them to unlock it to get stuff out? Perhaps taking care of your keys like, you know, an adult might be a superior solution all round.

We already know that RFID chips in passports etc can be skimmed from a distance. At least we can put our passports in RFID-proof wallets. It's a little less convenient to wear lead gloves.  And besides, how do we deactivate authentication when we know someone has skimmed our implant? How do we upgrade?

The problem is one of poor analogy. Authentication shouldn't be thought of as a key; it should be thought of as (some) proof of who we are. After that, infrastructure needs to decide what we're allowed to do in a given situation.
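Here's a sketch of that distinction, with every name and rule invented for illustration: the first function answers 'who is this?', the second answers 'what may they do right now?', and the two decisions stay separate.

def authenticate(credential):
    # Who is presenting this credential? (Implant, key fob, password - it
    # doesn't matter; the output is an identity or nothing.)
    known_credentials = {"chip-04a2196f": "alice", "chip-04770b12": "bob"}
    return known_credentials.get(credential)

def authorise(identity, action, context):
    # Given an identity, what is it allowed to do in this situation?
    if action == "unlock_car":
        # Alice owns the car; Bob may open it only while she has lent it out.
        return identity == "alice" or (identity == "bob" and context.get("loaned_to_bob", False))
    return False

who = authenticate("chip-04770b12")
print(who and authorise(who, "unlock_car", {"loaned_to_bob": True}))   # True
print(who and authorise(who, "unlock_car", {"loaned_to_bob": False}))  # False

Lending your car then becomes a change to the authorisation rules, not a matter of handing over - or surgically extracting - the thing that authenticates you.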

There are lots of smart people working out how that infrastructure might work, but slitting yourself open and installing an RFID chip is not approaching smart. People are working on how we might delegate authentication in complicated ways and how identity certifiers and authentication services could collaborate without creating a vast security minefield. There is already a fucktonne or so of literature on this subject.

But what's reported is some idiot injecting a chip into himself as though the future has already happened.

Broken encryption not required for policing encryption-using terrorists

Three people who planned terrorist attacks have been caught, tried, convicted and jailed for life. According to the BBC, they called themselves the Three Musketeers "when exchanging encrypted messages". *GASP* - would-be terrorists using encryption!!!!!!!


But they were caught anyway: government-broken encryption was not required; conventional policing techniques sufficed.

Cory Doctorow's history of the rhetoric of the backdoor wars

Cory Doctorow writes at Boing Boing about the sort of rhetoric the UK Home Secretary Amber Rudd used last week to justify her proposed ban on workable encryption.

It's pretty much spot on:
Here's a brief history of the rhetoric of the backdoor wars:
* "No one wants crypto, you can tell because none of the platforms are deploying it. If crypto was something normal people cared about, you'd see it in everyone's products. You crypto advocates are weird and out-of-step." (Clipper Chip - San Bernardino)
* "Companies are all using crypto. They are being irresponsible. Sure, everyone wants crypto and adding it to a product helps you sell it, but that's just profiteering while reducing our common security." (San Bernardino - This week)
* "Companies are all using crypto. But no one wants it. The fact that every major platform has rolled out working, end-to-end cryptography tells us nothing about the preferences of their customers. They're wasting their shareholders' money on working security that no one wants, while reducing our common security." (Last week - ??)
Next: some company will cave to Rudd and lose all their business to a competitor with working crypto. Then Rudd will say:
* "Sure, everyone wants working crypto, but you can't always get what you want. Look at Sellout.com, plc: they caved to our demands to eliminate security and got destroyed in the market. We must defend the good corporate stewardship of Sellout.com, plc by punishing their competitors for not joining them in the race to the bottom."

Tuesday, 1 August 2017

'Real' people don't need encryption

Unfortunately, our Home Secretary here in the UK is the increasingly deranged Amber Rudd. Amber Rudd wants to break encryption in the name of security fascism.

She seems to be channelling the Australian Prime Minister, Malcolm Turnbull, who recently said:
The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.
Here’s Rudd’s version:
I know some will argue that it’s impossible to have both – that if a system is end-to-end encrypted then it’s impossible ever to access the communication. That might be true in theory. But the reality is different.
Unfortunately, the source is behind a paywall if that’s the sort of thing that slows you down.

She goes on to say that “real” people don’t use encryption:
Real people often prefer ease of use and a multitude of features to perfect, unbreakable security. So this is not about asking the companies to break encryption or create so called “back doors”. Who uses WhatsApp because it is end-to-end encrypted, rather than because it is an incredibly user-friendly and cheap way of staying in touch with friends and family? Companies are constantly making trade-offs between security and “usability”, and it is here where our experts believe opportunities may lie.
I’m not sure what “opportunities” she means or why usability is scare-quoted, but there are lots of us who use certain channels because they are e2e encrypted rather than because of how nice they look. We have legitimate reasons for keeping secrets, not least of which are the things Amber Rudd says.
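For anyone unsure what e2e actually buys you, here's a minimal sketch using the PyNaCl library. It's illustrative only and nothing like Signal's real protocol, which is far more elaborate, but the structure is the point: the service in the middle relays ciphertext it cannot read, so there is nothing for it to hand over.

from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; only public keys
# ever leave it.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts to Bob's public key.
ciphertext = Box(alice_private, bob_private.public_key).encrypt(b"meet at six, usual place")

# The operator relays `ciphertext` and sees only random-looking bytes;
# without one of the private keys there is nothing useful to intercept.

# Bob decrypts with his private key and Alice's public key.
print(Box(bob_private, alice_private.public_key).decrypt(ciphertext))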

Want to see something even scarier from the same article?
So, there are options. But they rely on mature conversations between the tech companies and Government – and they must be confidential.
Let that sink in. Let. It. Sink. In. We won't be privy to the details of whether or how our conversations are to be laid bare to all and sundry. It'll be done and it'll be done in secret.

She finishes this way:
The key point is that this is not about compromising wider security. It is about working together so we can find a way for our intelligence services, in very specific circumstances, to get more information on what serious criminals and terrorists are doing online.
It might not be about compromising wider security but that’s what it will do. She obviously knows that or she wouldn’t be fielding those objections. She’s lying. She's obviously lying.

What not to do while anonymous

Ineffective security can be worse than no security at all. Being lulled into a false sense of security can cause us to engage in risky behaviours. This is true of anonymous browsing technologies such as Tor.  As the Tor Project site takes pains to tell us, Tor is by no means a panacea. We need to avoid certain behaviours to remain anonymous online even if we're using anonymisation technology.

Hiding our IP address and encrypting our traffic is not enough to remain anonymous. As the Tor Project puts it:
Also, to protect your anonymity, be smart. Don't provide your name or other revealing information in web forms. Be aware that, like all anonymizing networks that are fast enough for web browsing, Tor does not provide protection against end-to-end timing attacks: If your attacker can watch the traffic coming out of your computer, and also the traffic arriving at your chosen destination, he can use statistical analysis to discover that they are part of the same circuit.
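The plumbing is the easy part, and even it is easy to get wrong. Here's a minimal sketch of sending a request through Tor's local SOCKS proxy, assuming a Tor client is listening on the default port 9050 and that requests is installed with SOCKS support (pip install requests[socks]). The socks5h scheme matters: it resolves DNS through Tor instead of leaking your lookups to your ISP.

import requests

TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",   # socks5h: DNS is resolved via Tor too
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether your request arrived over Tor.
resp = requests.get("https://check.torproject.org/api/ip",
                    proxies=TOR_PROXY, timeout=30)
print(resp.json())  # something like {"IsTor": true, "IP": "..."}

But transport is not where most people come unstuck; behaviour is.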
Whonix is more specific on its Do Not page. Note: you should definitely check out Whonix if you are interested in online anonymity.

Here's their index of things not to do while trying to be anonymous.  All excellent advice, as you'd expect.
Things NOT to Do

    Visit your Own Website when Anonymous
    Login to Social Networks Accounts and Think you are Anonymous
    Never Login to Accounts Used without Tor
    Do not Login to Banking or Online Payment Accounts
    Do not Switch Between Tor and Open Wi-Fi
    Prevent Tor over Tor Scenarios
    Do not Send Sensitive Data without End-to-end Encryption
    Do not Disclose Identifying Data Online
    Do Use Bridges if Tor is Deemed Dangerous or Suspicious in your Location
    Do not Maintain Long-term Identities
    Do not Use Different Online Identities at the Same Time
    Do not Login to Twitter, Facebook, Google etc. Longer than Necessary
    Do not Mix Anonymity Modes
        Mode 1: Anonymous User; Any Recipient
        Mode 2: User Knows Recipient; Both Use Tor
        Mode 3: User Non-anonymous and Using Tor; Any Recipient
        Mode 4: User Non-anonymous; Any Recipient
        Conclusion
        License
    Do not Change Settings if the Consequences are Unknown
    Do not Use Clearnet and Tor at the Same Time
    Do not Connect to a Server Anonymously and Non-anonymously at the Same Time
    Do not Confuse Anonymity with Pseudonymity
    Do not Spread your Own Link First
    Do not Open Random Files or Links
    Do not Use (Mobile) Phone Verification
This is just the index. Visit the page to see why these are all bad ideas.

There are things you can do to help projects like this.

You can donate to Tor and/or Whonix. You can run a Tor relay. You can campaign and advocate for privacy and you can harangue your government representatives. You can support the Open Rights Group and the Electronic Frontier Foundation. And you can educate your loved (or hated) ones.

Monday, 31 July 2017

How we screw friends, families and strangers by being careless

If you know me, you'll know that my eyes are constantly aching from rolling at the phrase "if you've nothing to hide, you've nothing to fear."  One of the guiding principles and main motive force of privacy work is that most people believe this.  It's the fact that most people believe it that makes the accidental or deliberate shedding of personal data so valuable.

I won't go into the reasons the phrase is wrong here, largely because I have learned through experience that if I start, I'm unlikely ever to stop.  But I will go into some of the reasons why it's difficult to convince people that their privacy matters and then give some suggestions about how to do it.

It's easy enough to understand why people use that damnable phrase. I don't actually blame anyone for believing it, although my aching eye muscles could do with a break for at least a few minutes every day.  The scale and malevolence of how we're all being screwed by the people who have our data is - deliberately - largely hidden.  We're not told who our data will be sold to or what it will be used for. Fine print tells us that it "might" (or the even more insidious "may") be 'shared' with 'partners' but without any indication of exactly what data is being shared with whom or why.  We have a tendency to think that if people aren't telling us things in capital letters then the things are probably not very important.

It's also hard to connect the consequences of sharing data with any negative outcome. It's unlikely that we'll ever connect an instance of identity theft with the box we ticked on a website nine years ago, for example. Plus, of course, we often give away data to get cool stuff (10th sub free! apps that anticipate our needs etc) and we don't want to give that up, especially since we don't always understand why giving away that data might be bad. Nor should we, necessarily. The benefits might indeed outweigh the harm for some people in some cases. The problem is that we're not equipped to make that decision, because of the deliberate machinations of the companies who make money from our data and the complexity of the landscape.

So people like me need to come up with convincing examples. How can ticking this particular box harm you in the future?  This is hard, not because examples don't exist but because they have to cover that distance in time and place between the ticking of the box and the stealing of the identity. They have to show that it's in the aggregation of data over a period of years that the greatest danger lies. We humans are not very good at internalising knowledge of that sort or at practising the regimen needed to do anything about it.

The examples I've had the most success with tend to be ones that show how poor privacy habits can screw our friends and family. I find this confusing - I hate my friends and family - but it seems to work for lots of people.

It's a very important point and one I harp on enough to contribute to the eye-rolling muscle strain of my friends and family (good - I told you I hate them): privacy is a group exercise. It would be good if we tried not to inadvertently screw each other the whole time through our own carelessness.

There are ways we can screw the people in our own networks through complacency and other ways we can screw complete strangers. Please don't take this as a manual for screwing people, by the way; treat it instead as a way to be mindful of how our actions can harm others, whether we mean to or not.

1. Harming friends
I frequently talk about the Amazon gift service because it is such a perfect example. You're an Amazon customer, you buy someone a gift to be sent directly to them. You've just given away an enormous amount of information about that other, innocent (well, not if they're one of my friends or family) person. Their address, their possible birth date or other significant date, the sort of things they like (or that you think they like) and so on. If Amazon already knows their address it can start building a social network of their friends and family and make inferences about them too. 
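
To see how little it takes, here's a toy sketch in Python (using networkx; every name, item and date is invented) of how a retailer could turn gift orders into a social graph:

    # Toy sketch: turning gift orders into a social graph.
    # All names, items and dates below are invented for illustration.
    import networkx as nx

    # Each gift order links a buyer to a recipient, with a hint about the
    # recipient's tastes and a date that might be a birthday.
    gift_orders = [
        ("alice", "bob",   "whisky tumblers",  "2017-03-14"),
        ("alice", "carol", "gardening gloves", "2017-06-02"),
        ("dave",  "carol", "gardening book",   "2017-06-01"),
    ]

    graph = nx.Graph()
    for buyer, recipient, item, date in gift_orders:
        graph.add_edge(buyer, recipient, item=item, date=date)

    # Carol never opened an account, yet the retailer now knows two of her
    # friends, one of her hobbies and a likely significant date in early June.
    print(sorted(graph.neighbors("carol")))   # ['alice', 'dave']

That's three rows of data and half a dozen lines of code; a real retailer has millions of rows and rather better statisticians.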

Why is this harmful rather than delightful? 

For one thing, your friends never asked you to hand Amazon their data. There might be all sorts of reasons they don't want that to happen. Even if (perhaps especially if) you don't know what reasons they might have for not wanting Amazon to have this data, you should at least ask them first and not press the issue if they say no. They might have things to fear regardless of whether they have anything to hide. And they might have things to hide.

Second, harm may come from a variety of sources, malignant, benign or indifferent.  Couples or families might be hurt if one member receives targeted adverts based on a gift. It's not hard to imagine how trouble might be caused if one member of a couple received the gift of a sex toy in the post. It might also be problematic if the adverts someone was served while browsing were informed by a gift, wanted or otherwise. Inducting someone into a social network operated in secret by people who wish to sell us things is not a kind thing to do.

Third, the companies who buy this data collate lists of people they deem 'vulnerable', by which they mean vulnerable to being sold things they don't want or need. The information you shed about them contributes to aggressive targeting and other borderline con-artistry as well as out-and-out conning by less scrupulous firms.

Fourth, this data will certainly be stolen at some point. Hackers will use this data to do bad things to our friends. They'll steal their identity, which is very much easier if they know trivial facts about people such as where they shop and eat. They'll create digests of information about certain types of people and sell them to bad guys who specialise in screwing that type of person. For example, helpful gifts might indicate that the recipient is elderly. An unscrupulous company might (rightly or wrongly) conclude that the elderly person is especially vulnerable and target them for scams that match the gifts they've received.

Fifth, spam. You're putting people on lists that are sold to spammers - email, real world, knocking on our doors - I don't think anyone wants that.

2. Screwing strangers
There's a very real sense in which customers are becoming less customers and more sheep to be shorn, bags of organs to be harvested. Our gleeful introduction of others into this practice completes the analogy.  We're all the Judas Goat for faceless corporations, dragging our friends into dangers they didn't sign up to.

But it's worse even than that because those companies are also screwing their own employees based on our privacy choices.

Here's one of the most obvious examples: you know when you visit a restaurant and they ask you to rate the service on a card or - increasingly - on a touch screen? What on Earth do you think that's for other than to generate an excuse to deprive servers of their tips? A simple scale of dissatisfaction isn't going to help the restaurant improve its business, is it? With the card-based version, companies might be angling to seem caring about customers (while still changing nothing and punishing servers) but with the computerised version, we can be sure that servers will be screwed more. What kind of servers generate the most dissatisfaction?  Can companies find ways to incorporate these results into their existing racist or sexist hiring and firing policies? Can they generate brand new racist or sexist policies?

Well of course they fucking can. And will.

But look also at the wider picture. Much of restaurant technology is aimed at either getting people back out through the door as quickly as possible or selling them more stuff. To achieve this they (especially chains) do all kinds of worrying stuff. They greet you by name. They remind you of what you ordered last time you visited (even if it's a different location). 

The servers and lower to middle management are easy to punish if this does not go according to plan and customers sit around enjoying their meals instead of hurrying and/or ordering stuff they didn't want.

By gleefully shedding data we turn ourselves into sheep to be shorn. But we turn other people into sheep, too. And we turn the former farmers into serfs, serving at the whim of their owners, chasing goals unrelated to their jobs and facing punishments that have nothing to do with how well they do them.

That's the harm. Don't make me roll my eyes.

A terrible idea

http://www.bbc.co.uk/news/av/technology-40676084/how-facial-recognition-could-replace-train-tickets

The URL says most of what you need to know.

Wednesday, 26 July 2017

Tuesday, 25 July 2017

Some solid advice

https://media.boingboing.net/wp-content/uploads/2017/07/upsstore_100729948_medium.jpg

Caesars Palace in Las Vegas is holding this year's Defcon, a conference about hacking and security.  There are good reasons to believe that scoundrels will be attempting to hack everything in sight and even better reasons to believe they have the skills to pull it off.

For this reason, the UPS business centre in the hotel has decided only to accept print jobs that come as an email attachment, not on a USB stick or via a link. This is a reasonable precaution and probably the best compromise they can make while still doing business. Email attachments aren't at all safe either, of course, but people will need to print stuff, I guess. In general, reducing the number of attack vectors is worthwhile but at a conference like this it might just goad people into getting creative...

Cory Doctorow reports at Boing Boing (from where I borrowed the photo for this post), also noting that Andy Thompson (aka @R41nM4kr) has offered a list of security essentials for attendees.  They are pretty sensible. I follow an almost identical list of rules whenever I am forced to leave the house.

Here's the part of Thompson's list concerned with internet access and connectivity:
  1. Unless absolutely necessary for a job function, disable WiFi.
  2. Disable Bluetooth on your computer and phone.
  3. Disable NFC connectivity on your phone and computer.
  4. If WiFi is absolutely required, ONLY use your own provided WiFi. I use a Jetpack/MiFi and connect ONLY to that device.
  5. Always use a VPN as soon as you obtain WiFi access.
  6. Do NOT plug any network cable into the laptop.
  7. Do not plug any USB storage devices (hard drives, sticks, network adapters, Raspberry Pi’s, etc) into the laptop or phone. 
The importance of not connecting to public WiFi unless you really need to and then only doing so over a VPN cannot be overstated. I'd love to know more about the psychology behind our willingness to connect to random networks just because they happen to be there. We generally have no idea whether they are secure, whether they have been compromised or whether the operators have malicious intent. We don't even know if the network is legit: we tend to assume that if there's a WiFi signal with the same name as the venue, then it's operated by that venue.

It's frighteningly easy to intercept traffic on unencrypted wireless networks. It's almost as easy to write scripts to scan for things that might be passwords flying about the place.  So if you do need to use public or commercial WiFi, be sure to use a VPN.
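
If that sounds overblown, here's a minimal sketch in Python using scapy of the sort of script I mean. The interface name is an assumption, it needs root privileges, and you should obviously only run it on a network you're authorised to monitor:

    # Minimal sketch: watch unencrypted HTTP traffic for credential-looking
    # fields. The interface name is an assumption; requires root, and should
    # only ever be run on networks you are authorised to monitor.
    from scapy.all import Raw, sniff

    def spot_credentials(packet):
        if packet.haslayer(Raw):
            payload = bytes(packet[Raw].load)
            if b"password=" in payload or b"passwd=" in payload:
                print("Possible credential in cleartext:", payload[:80])

    # Plain HTTP only (port 80): anything sent over it is readable by anyone
    # on the same unencrypted network.
    sniff(iface="wlan0", filter="tcp port 80", prn=spot_credentials, store=False)

A VPN (or, better, sites using HTTPS everywhere) turns that payload into noise, which is the whole point.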

I use my phone as a mobile hotspot with a VPN rather than use other people's WiFi.  I only make an exception when there's no mobile signal. Something tells me this won't be a problem in Vegas.

My list, if I happen to be leaving the country (especially for the US), has some additions:
  1. Log out of social media, email and messaging accounts on your laptop and phone. Remove any cookies that store passwords.
  2. Use a hardware token (I use a Yubikey Neo) to protect access to your password manager (you're using a password manager, right?)
  3. Send the hardware token in your checked luggage, don't carry it with you.
That way, nobody can force you to reveal your passwords. Of course, they might refuse you entry to the country and it will be quite inconvenient when your luggage is inevitably lost, but if these prices seem like they are worth paying, go for it. Also, you'll feel kind of like a spy.

Monday, 24 July 2017

Age verification

The UK government is threatening to implement age verification on porn sites because won't someone think of the children. This means that porn site users will have to prove they are 18 before they can feast upon the porn within.

I have to admit, I have some concerns about porn which can be summarised as:
  1. Lots of performers (especially women) are hurt by the porn industry. There are questions about whether consent is really possible when one's income relies on saying yes. Sex work is not necessarily just another job and there are certainly porn companies that take advantage of performers and their plight, if they have one. I have nothing at all against consensual performance and am entirely in favour of sex workers being allowed to work without criticism or harassment. But we usually have no way of telling what pressures the performers face and therefore what consent, if any, they are really capable of giving. I think we - as consumers of porn - need to be very careful.
  2. The messages children are likely to glean from porn are not positive. They could be, I reckon. Hangups about sex and sexuality from previous generations and religious nonsense are terrible things and being positive and cool and non-judgmental about sex and sexuality is surely good. But it's clear that the vast majority of porn doesn't encapsulate great messages about agency and consent and equality. If a child's introduction to sex is mainstream porn, it seems likely that they'll have fucked-up ideas about how to treat other people, especially women. I would rather they learn sex-positive lessons from places other than porn.
The second item is most germane to the government's goal of age verification of porn sites but there are some problems. I'll stick with two:
  1. Literally everyone on the planet knows it won't work. It's the equivalent of - in the 70s and 80s - putting porn on the top shelves of newsagents, supposedly out of reach of children's short arms. It's like children buying booze and tobacco through very easy means, such as asking someone older. Refusing to sell young people cigarette papers probably won't prove an insurmountable barrier to their smoking a bit of whatever takes their fancy.
  2. Putting age verification in the hands of the people who sell the porn is open season for blackmail.
And this is the thing. Make up your own mind about porn: it's not illegal to make it (for the most part) or consume it (usually) in the UK. But if we have to register our consumption of porn, we're at the mercy of laws that will certainly change for the worse. 

Being a registered porn consumer will automatically put you in the frame for sex crimes, for example, regardless of any other suspicion. The register of casual porn users will become a list of automatic suspects. 

And porn companies, who have our credit card details, would be in an excellent position to threaten us, fake our browsing or chat behaviour or otherwise fuck us over.
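
To make the linkage concrete, here's a toy sketch of the kind of record an age-verification-by-credit-card scheme would pile up. Every value below is invented, but if cards are the proof of age, some row like this has to exist somewhere:

    # Toy sketch: the record an age-verification-by-credit-card scheme creates.
    # All values are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class VerificationRecord:
        card_number: str   # proves age, but also identifies the person and their bank
        name: str          # as printed on the card
        site: str          # which site asked for the verification
        verified_at: str   # when, i.e. when they were browsing

    record = VerificationRecord(
        card_number="4929 0000 0000 0000",
        name="J. Bloggs",
        site="example-adult-site.invalid",
        verified_at="2017-07-24T22:41:00",
    )

    # One breach of a table of these hands an attacker identity, payment
    # details and viewing history in a single row.
    print(record)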

And of course that's all before worrying about how the whole registration and access business might work, which is nightmarish in itself just from an engineering perspective.

TL;DR: It's complicated. Age verification won't protect anyone and it'll certainly expose people who haven't done anything wrong to undue and improper scrutiny.

And above all, it won't protect the people who need the most protection: the performers.