[liberationtech] Fwd: Haystack
jyoull at alum.mit.edu
Thu Aug 19 14:47:33 PDT 2010
I regret that I can't take the time now to write a proper rebuttal, but due to other constraints I have to answer this now or never, and never feels wrong.
I assume that ALL "users" of security technology are unqualified to evaluate the merits, and particularly the long-term risks, of using such technology. There are stacks of CHI and other papers discussing this; some take an orthogonal approach and you have to read it into them. This has nothing to do with looking down at people in other places. I "trust" that my word processor is not sending my poetry to the FBI; that's about all I can do. At some point - and it comes pretty early when we're talking about software on a computer network - we can't do any more checking and have to "trust". The declaration that users are responsible for their own well-being forgets that they can't check out big claims, because even the creators of the software can't really verify some of their own claims.
I just reviewed a conference paper that purported - yet again - to "fix" web site security through an enhanced browser display. That's a TRIVIAL problem compared to keeping noisy people safe from their angry governments, and we _do not_ even know how to do that correctly, despite 15+ years of trying.
Developers are not good at predicting the long-term implications of using their software. I've seen no disclosures yet that rise to the level of sober severity and disclaimer in which every one of these technologies should be wrapped.
I know why that is - because people at risk might not use them. As a developer, you don't want to discourage people from using your stuff.
Talking to the wrong person on the street carries a tiny fraction of the risk of talking to the wrong person online - in terms of detection after the fact (and forever), playback, "evidence against you," and the at-risk person's ability to perceive the real or possible risks of the action, or to have any sense of who's watching now or who's reviewing the records later. I might vary my walking routes and meeting habits to avoid pattern analysis, but online, much of what might feel like "avoidance" is actually disclosure - encrypting e-mails? Uh oh...
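To make the "encrypting e-mails? Uh oh..." point concrete: an observer who cannot read ciphertext can still flag it, because encrypted payloads look like high-entropy random bytes while ordinary messages do not. A toy sketch (my own illustration, not from the original post) of the kind of keyless classifier a censor could run:

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte: near 8.0 for ciphertext-like
    random data, much lower for natural-language text."""
    if not data:
        return 0.0
    n = len(data)
    counts = Counter(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Ordinary text reuses a small alphabet, so its per-byte entropy is low.
plaintext = b"meet me at the usual place at noon. bring the documents. " * 20

# Stand-in for encrypted traffic: well-encrypted bytes are
# statistically indistinguishable from random noise.
ciphertext_like = os.urandom(len(plaintext))

# No keys needed: a simple entropy threshold separates the two,
# which is exactly why using encryption is itself a disclosure.
print(f"plaintext:  {shannon_entropy(plaintext):.2f} bits/byte")
print(f"ciphertext: {shannon_entropy(ciphertext_like):.2f} bits/byte")
```

The numbers here are a caricature (real classifiers use protocol fingerprints, timing, and endpoints as well), but the asymmetry is the point: the act of hiding content is itself observable.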
If people don't use this software because it's too risky to use, then maybe that is the right outcome. But I've never seen enough info from publishers about why people should "not" use a piece of software - only lots of talk about why they should.
I've seen others try to fault these trusting, non-technical (or even somewhat technical) users for their "decision" to use the software, with the same claim that they are "accepting" some risks. I say that is absolutely unfair and a cheap dodge by developers who like publicity for "human rights software" but who really have not taken ownership of the consequences for others who may use the software as it was intended to be used.
As civilian users of software or any other complex machine, would we ever look at a piece of software (or any other machinery) that seems to do something we want and say, "hmm... this software is endorsed by several organizations, and it's written up in the Wall Street Journal... maybe it's completely unsuited for the task. I'll avoid it"?
No. We don't do that.
The further risk is that people who believe they are "more safe" because of this software may take greater risks than they would if they were not using any software at all. That risk is very much wrapped up in our inability to estimate or compare abstract levels of risk and reward (that's why gambling works). The tragic downside is that when there is a failure, the user is in a much worse spot than they would have been when they were still afraid of being caught, because the illusion of safety has possibly induced them to send more messages, or reveal more damning information, than they might have otherwise.
On Aug 19, 2010, at 12:42 PM, Gabe Gossett wrote:
> “How many of the people known to have been arrested or silenced were using, or thought they were using, some kind of 'safe' technology to subvert both technological blockades and national laws? Until we know that, should we be prescribing these cures to patients we've never met and can't watch over?”
> At the risk of going into a similar debate that took place on this listserv within the last year . . .
> Is there any way to know how many people have been arrested or silenced when using a “safe” technology? Not really. No doubt it has happened many times. But I don’t see why that would mean these technologies shouldn’t be developed and distributed by Westerners in safe societies with access to the means to do so. There is a long history of cat and mouse government information blockade circumvention that predates computers. In every instance that circumvention information circuit involved unknown degrees of risk.
> As long as the developers are honest about the capabilities of their applications, and the users have as good an understanding of the risks as is possible, I don’t see a problem. I’m speaking on a theoretical level here, not about the implementation of any one technology. Haystack may have inflated claims about its capabilities and lacks clarity on what they are offering (if anything at all), and that is wrong. But, Haystack aside, if we waited until we knew for certain whether a technology was entirely safe from government prying eyes or not we would just do nothing. If any circumvention technology developer is going around claiming that they have developed an entirely safe technology, that is wrong. I have a problem, though, with implicitly assuming that users in repressive countries are too naïve to weigh the risks of trying to get around government barriers. I see that implication in the statement above, though perhaps that was not intentional.
> I think that there is generally a good point in that statement, but it only goes so far. Any user of these technologies is probably already putting themselves at risk with their government. Just having a face to face conversation with the wrong person, after all, will get you in trouble. So if a safe Westerner thinks they can develop something that might give people in these countries an edge against a government, then by all means let them do it and feel good about it.
> From: liberationtech-bounces at lists.stanford.edu [mailto:liberationtech-bounces at lists.stanford.edu] On Behalf Of Jim Youll
> Sent: Thursday, August 19, 2010 10:32 AM
> To: Mahmood Enayat
> Cc: Liberation Technologies
> Subject: Re: [liberationtech] Fwd: Haystack
> On Aug 19, 2010, at 6:42 AM, Mahmood Enayat wrote:
> The big players in circumvention solutions, which have received less attention, are all available here: www.sesawe.net. Why is Haystack not available online like them?
> Cat and mouse can be played, yes.
> But this technology is looking more and more like merely a way for privileged, warm, well-fed, free, safe Westerners to feel good about themselves while putting already at-risk populations at even greater risk of trouble.
> Laws, guns, and prisons trump technological finesse. Period. This is not negotiable.
> Keep in mind that US companies providing equipment to Internet providers are also providing access and monitoring capabilities in that equipment... at full OC3 speeds...
> How many of the people known to have been arrested or silenced were using, or thought they were using, some kind of 'safe' technology to subvert both technological blockades and national laws? Until we know that, should we be prescribing these cures to patients we've never met and can't watch over?
> "...But Chinese surfers often use proxy servers - websites abroad that let surfers reach blocked sites - to evade the Great Red Firewall. Such techniques are routinely posted online or exchanged in chat rooms. But China's 45 million internet users face considerable penalties if they are found looking at banned sites. According to human rights activists, dozens of people have been arrested for their online activities on subversion charges."
> - http://news.bbc.co.uk/2/hi/technology/2234154.stm
> ... Those attempting to access these banned sites are automatically reported to the Public Security Bureau. Internet police in cities such as Xi'an and Chongqing can reportedly trace the activities of the users without their knowledge and monitor their online activities by various technical means."
> - http://www.amnestyusa.org/document.php?id=ENGUSA20060201001
> "...Around 30 journalists were known to be in prison and at least 50 individuals were in prison for posting their views on the internet. People were often punished simply for accessing banned websites"
> - http://www.amnesty.org/en/region/china/report-2008
> "... The ministry of public security said 5,394 people had been arrested and that over 9,000 websites had been deleted for having pornographic content. The ministry did not say how many people had subsequently been put on trial. The authorities released the figures with a warning that its policing of the internet would intensify in 2010 in order to preserve 'state security'. China maintains strict censorship of the internet in order to make sure that unhealthy content, including criticism of the Communist Party, does not reach a wide audience."
> - http://www.telegraph.co.uk/news/worldnews/asia/china/6921568/China-arrests-5000-for-internet-pornography-offences.html