[liberationtech] The worrisome trend toward liability in networking technology
jyork at cyber.law.harvard.edu
Mon Nov 7 14:58:00 PST 2011
I'll take a whack at this. To start off, I'll say that you raise some
important concerns that I share, particularly what happens when/if
regulations are enacted in the US and EU and governments instead turn to
Chinese (or other) companies for solutions. Definitely a point worth
discussing.
That said, I'll also say--mostly for the sake of other
readers--that you are missing a huge nuance in the law here. I've
elaborated below, but essentially you seem to think that more regulation
exists than currently does...poking a huge hole in most of your arguments.
See my responses inline.
On Mon, Nov 7, 2011 at 1:18 PM, Daniel Colascione
<dan.colascione at gmail.com> wrote:
> I've read with interest the recent discussion on this list of the
> ramifications of the use of western filtering technology by oppressive
> regimes. While the problem warrants serious discussion, I feel that
> the direction of remedies is veering dangerously and I'd like to adopt
> a bit of a contrary position on the nature of an appropriate response:
> We're all outraged by the use by oppressive regimes of western
> filtering and network monitoring technology. But we should pause
> before acting on this outrage: the situation is more subtle than might
> be immediately apparent. Oppressive regimes don't censor for a
> profit: the filtering and monitoring are existential and ideological
> imperatives. While these regimes (like all rational actors) seek to
> minimize costs, they would continue to censor even at greatly
> increased prices. Of course, there is some price at which these
> regimes would discontinue censorship: but, as the failure of
> conventional economic sanctions to discourage other behavior has
> shown, oppressive regimes are nowhere near being inconvenienced enough
> to change their behavior.
> If the efforts discussed here were successful and oppressive regimes
> were somehow cut off from western filtering technology, we wouldn't
> see an end to Internet censorship: Instead, we would see regimes
> invest in domestically-developed solutions, which are actually less
> expensive than one might think: anyone can create professional-quality
> software for free, and hardware can be manufactured with a little
> capital investment. Iran has already made headway in this area. All
> the sanctions, codes of conduct, lawsuits, and so on we've been
> discussing here amount to an increase in the cost of censorship, and
> oppressive regimes simply aren't cost-bound at this point. Now, all
> things being equal, the world becomes a better place as censorship
> becomes more expensive. If all things were equal, I would support the
> ideas being raised on this list.
I think this is an incredibly relevant point, and one that I make
frequently when speaking publicly; for example, Chinese company Huawei was
recently spotted shopping its wares at an expo in Tunis...so it's not
simply the American and European solutions we need to worry about.
Thus, any discussion about limiting the export of technologies from these
countries should be coupled with the realistic notion that doing so may not
stop most regimes.
That said, I think there are two separate reasons to continue fighting
this: one is moral (e.g., the US stands for X and thus should not do Y),
but the other is economic: As a taxpayer, I am contributing to the State
Department's "net freedom" agenda. And yet, that agenda--which, among
other things, heavily funds circumvention technology--is not only viewed
globally as being hypocritical due to the exports made by Cisco et al, but
is also genuinely throwing money down the toilet: The American government
is funding tools to circumvent content blocked by tools made by American
companies. That point isn't missed by most of the folks I work with
outside the US.
> But not all things are equal: the measures under discussion have
> serious externalities that warrant discussion, and they could affect
> the entire technology industry.
> 1) How do we decide what technology is subject to regulation? Network
> technology is quintessentially dual-use: anyone who has administered a
> network knows that the same features that allow us to block outbound
> attacks, accelerate the web via transparent caching, scan our servers
> for vulnerabilities, detect cross-site scripting attacks, and scan our
> email for spam can also be used for malicious ends. Cluster bombs and
> Saturday night specials have no legitimate purpose, but proxies,
> firewalls, and logging tools have many.
This is definitely an important question as well. EFF has been
very specific in its approach, focusing on companies producing
*surveillance* equipment. We're not advocating a simple ban on such tools,
either, but rather asking companies to approach their sale of such tools
in a manner that holds the companies accountable (to the public, to
shareholders) and limits the ability of the buyer to use them for certain
ends.
> The only reasonable way to draw a line between filtering and general
> network technology is to use intent, which is a notoriously vague
> standard. If marketing censorship technology were to become attached
> to serious liability issues, vendors would develop software with
> precisely the same features and market them with "wink, wink, nudge,
> nudge" references to benign network management.
Again, this is why we're advocating for standards, rather than governmental
regulation. We've all seen the text of GOFA, right?
> 2) Where does responsibility end? Brett Solomon suggests that we
> attach liability to technology, but the concept isn't entirely clear.
> Consider Blue Coat: the exact mechanism by which Syria acquired Blue
> Coat's technology remains unclear. If Blue Coat knowingly sold this
> technology to Syria, it ought to be prosecuted under applicable
> laws. But if it turns out that Syria indirectly obtained Blue Coat's
> technology through the gray market or outright piracy, I feel it'd be
> difficult to attach legal or moral blame to the company.
Let's be clear here: *Beyond Syria* (and a few other countries), *there are
no laws*. If Blue Coat knowingly sells to Bahrain for the purpose of
surveillance, there is not a single law under which to prosecute them.
Selling to Syria is only prosecutable because a) there have been previous
sanctions regulating those sales and b) Syria is officially recognized as a
repressive regime.
And yet, Bahrain continues to use *the exact same tools* for *the exact
same thing*, while both the government and the companies operate with
total impunity.
As to your latter point, sure...if Syria obtained Blue Coat tech through
the gray market, there's little to be done legally. As it appears at the
moment, the tech was sold from Blue Coat to the UAE for use in Iraq...which
is questionable anyway! Why should Blue Coat sell to Iraq with impunity?!
Oh right, because the US is on friendly terms with the puppet regime that
we instituted there.
> Let's suppose we do hold Blue Coat liable for Syria's use of its
> products because it provided automated software updates or other
> routine and free support: to avoid future liability, companies would
> be forced to consult blacklists and prevent certain IP address ranges
> from downloading updates, manuals, and other materials. (Or they could
> be required to go through a more robust process to verify the
> nationality of anyone requesting support.)
Again, we're only theoretically holding Blue Coat liable because we're
talking about Syria.
> These restrictions would be wholly ineffective --- any marginally
> competent network administrator could obtain support via third parties
> and proxy servers, but every networking firm would need to bear the
> expense of implementing these restrictions. Would we propose that
> companies use a more robust process to verify the nationalities of
> those requesting support?
> After the failure of these measures, what would be next? Would we
> compel networking vendors to include "kill switches" that could be
> used to remotely disable network technology if it's found in
> an unapproved context? Should devices refuse to function without
> continuous approval of the original manufacturer?
> Never mind the financial cost of these measures: instead, imagine the
> potential for misuse. What happens when bad actors inevitably
> commandeer these safeguards to cause mischief? What happens when
> governments force companies to cut off support for (and perhaps
> disable outright) network equipment in a rival nation?
> 3) What about open source software: do we subject it to the same
> regulations, effectively destroying the tradition of openness that
> fosters its development? Or do we exempt open source software from
> regulation, allowing oppressive regimes to access filtering technology
> almost as easily as before, but penalizing companies that try to
> innovate with new networking technology they'd like to keep
> proprietary? Should downloading Squid or Nessus warrant a background
> check? The result of such a scheme would be a retroactive,
> self-inflicted defeat in the crypto wars of the 1990s.
This is where I suggest you read about the Berman Amendment. (Sorry
for the google-d link, it's a PDF.) These are not new questions.
> When we ask for regulation of filtering technology, we're asking for
> the very dystopia we're all trying to avoid. Brett Solomon's proposal
> for "human rights by design" is in fact "oppression by
> design". Because software cannot judge the moral orientation of the
> bytes it processes, the only mechanism with which we can enforce
> human rights is authoritative third party control, which can
> be used to do more evil than it can ever do good.
Again, we're not asking for the regulation of straight-up filtering
technology, nor are we all necessarily asking for regulation. Perhaps more
nuance needs to be used in this discussion, I'll grant you, but I think
most folks here have differentiated between surveillance, dual-use, and
filtering tech. Websense makes straight-up filtering tech, for example,
while Cisco and Blue Coat are obviously producers of dual-use tech. Not
sure, actually, if there are any great examples of straight-up surveillance
tech.
> Look: it makes sense to enforce simple, clear measures to disrupt
> censorship and surveillance: we already effectively prohibit direct
> transfer of technology to oppressive regimes, as anyone who has dealt
> with OFAC can attest. But the increasingly strenuous measures proposed
> here, e.g. kill switches and licensing, come with diminishing returns
> and serious costs to our budgets and liberties, and any "victory"
> would be Pyrrhic.
*we already effectively prohibit direct transfer of technology to
oppressive regimes*
No, we do not. We prohibit it to five countries (Syria, Sudan, Iran, Cuba,
and North Korea) while Bahrain, Saudi Arabia, and our "frenemies" operate with
total impunity. If you're cool with the way the State Department defines
"repressive," fine, but I'm most certainly not.
> Most participants in this discussion have analogized filtering
> software to weapons technology, but the better analogy is drug
> prohibition: both drugs and filtering technology can be produced
> without massive capital investment; both can be produced clandestinely
> (albeit at higher cost relative to commercial production); both are in
> high demand; both are easily moved on black and gray markets; both
> often have perfectly legitimate uses impossible to distinguish from
> illicit ones; both may very well have important adverse effects on
> society. Most importantly, in both cases, attempts to crack down on
> trade simply moves the trade underground while simultaneously causing
> collateral damage to society as a whole.
This is a fair point, but again, you still have the moral and economic
arguments to deal with. Why should my taxes go toward solving a
problem that companies from my country helped create? Why should the United
States--which proclaims net freedom--allow its companies to aid in
repression? Just questions.
> I may attract some flames for the sentiment, but I believe that
> outrage over the use of western filtering technology is a red
> herring. Our resources are better spent on 1) circumventing of the
> filtering that will inevitably be put in place, and 2) activism that
> will reduce the demand for censorship and surveillance in the first
> place.
Really? You think circumvention is where the funding should go? I think
we need to think pretty hard about that statement...First off, there's
evidence that very few people are using it in the first place (see the
Berkman Center's 2010 report, which puts usage at around 3%). Even if
Berkman's report is way off,
that's still a serious issue. Second, you've got loads of incredibly
dishonest and unsafe circumvention technology (ahem, Daniel), including
several of the major providers and some of those funded by the USG, with
little accountability from most. And even when you do have great tools
(e.g., Tor), activists often fail to use them for various reasons (most
often heard of Tor? "It's too slow"). For better or worse, that's just
the way it is. Sure, we should think about solutions, but I'm not thrilled
with the state of circumvention funding as it exists...the squeakiest
toolmakers tend to get the grease.
As to your second point, sure, I doubt you'll hear any arguments on that
point.
jilliancyork.com | @jilliancyork | tel: +1-857-891-4244