[liberationtech] dumb question

Mike Perry mikeperry at fscked.org
Wed Sep 15 08:42:02 PDT 2010


I feel I have to interject here, because while I am decidedly in the
Tor camp, I come from a background that has also caused me to pretty
strongly reject academia, mostly because I believe that the majority
of what computer-related academia produces is not, in fact, science.

This distinction may also seem over-nuanced, but I feel it is just as
important to make, because I believe that Tor actually *is* the middle
ground that you are seeking...

Thus spake Daniel Colascione (dan.colascione at gmail.com):

> Hi, Adam. I appreciate your nuanced post. You are right: there is plenty
> of room for other tools in the gap between Tor and a bare proxy. Every
> solution involves certain trade-offs, and not everyone prefers the
> trade-offs made by Tor.

Perhaps I am biased, but I believe that the tradeoffs made by Tor are
actually practical and rooted in engineering, not academia. This means
that in fact they are not tradeoffs at all, but just more engineering
problems that remain to be solved. 

In fact, I find it extremely frustrating and somewhat myopic when
people point at Tor and proclaim "Well, that obviously won't work for
performance, therefore one-hop solutions are the only alternative!",
when these people have no experience deploying, balancing, and scaling
a distributed system with anywhere near Tor's number of users or
relays.

Yes, we know Tor's primary problem is latency. However, I believe this
problem has arisen primarily from an expedient design choice: a
reliable, stream-based link protocol that also carries reliable,
stream-based transport. This was done because it was easy to build,
but it means every circuit is multiplexed over a single in-order link
between relays, so one stalled or bulk-heavy stream can delay cells
for every other stream sharing that link.
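
To make the queuing effect concrete, here is a deliberately simplified
toy model (my own illustration, not Tor's protocol): a small
interactive stream whose cells are enqueued behind a bulk transfer on
one shared in-order link must wait for every bulk cell ahead of it,
whereas with independent per-stream channels it finishes immediately.

```python
# Toy model of head-of-line blocking on a shared in-order link.
# Assumption: one cell is delivered per unit of time; stream names
# "bulk" and "web" are purely illustrative.

def finish_time_shared(link_queue, stream):
    """Delivery time of `stream`'s last cell on a shared FIFO link."""
    return max(t for t, s in enumerate(link_queue, start=1) if s == stream)

# 50 bulk cells are enqueued just before 2 interactive cells.
shared = ["bulk"] * 50 + ["web"] * 2
print(finish_time_shared(shared, "web"))  # 52: web waits behind bulk

# With an independent per-stream (datagram-style) channel, the web
# stream's 2 cells would finish at t=2 instead.
per_stream = {"bulk": 50, "web": 2}
print(per_stream["web"])  # 2
```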

We can do a number of things to improve on this. For example:

 - We can switch to end-to-end reliable flow control and a datagram
   transport.
 - We can write a SPDY exit proxy/HTTP prefetcher to reduce
   application-layer round trips over Tor.
 - We can parallelize and shortcut the TCP handshake process to cut
   round trips.
 - We can prioritize short-lived streams over bulk transfers.
 - We can improve path selection to minimize latency metrics.
 - We can experiment with cheap, unreliable, but distributed bulk
   transport and portable IP space at large ISPs...
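
As a rough illustration of one of these ideas, prioritizing
short-lived streams over bulk transfers can be done by charging each
circuit an exponentially weighted moving average (EWMA) of its recent
cell counts and always servicing the least-active circuit first. The
sketch below is my own toy version, not Tor's scheduler; the decay
constant and circuit names are invented for illustration.

```python
# Toy EWMA-based circuit prioritization (not Tor's actual code).
# Circuits that recently sent many cells (bulk downloads) accumulate a
# high EWMA and yield to quieter, interactive circuits.
from dataclasses import dataclass, field
from collections import deque

DECAY = 0.9  # per-tick multiplier; lower means faster forgetting

@dataclass
class Circuit:
    name: str
    ewma: float = 0.0
    queue: deque = field(default_factory=deque)

def tick(circuits):
    # Age every circuit's activity estimate once per scheduling tick.
    for c in circuits:
        c.ewma *= DECAY

def schedule(circuits):
    # Among circuits with queued cells, service the least-active one.
    ready = [c for c in circuits if c.queue]
    if not ready:
        return None
    best = min(ready, key=lambda c: c.ewma)
    cell = best.queue.popleft()
    best.ewma += 1.0  # charge the circuit for the cell it just sent
    return best.name, cell

bulk = Circuit("bulk", queue=deque(f"b{i}" for i in range(100)))
web = Circuit("web", queue=deque(["w0", "w1"]))
bulk.ewma = 50.0  # this circuit has been sending heavily
sent = []
for _ in range(4):
    tick([bulk, web])
    sent.append(schedule([bulk, web])[0])
print(sent)  # ['web', 'web', 'bulk', 'bulk']
```

The quiet interactive circuit drains completely before the bulk
transfer resumes, even though the bulk circuit has far more queued.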

We are in fact trying all of these approaches, and more. We have lots
of engineering left to do on our network that should bring it more in
line with tolerable performance thresholds.


Now, the reason I prefaced this rant with my comments about academia
is that there is actually a whole different set of tradeoffs that the
academic community believes Tor has made. They believe that we have
made tradeoffs in favor of deployability, and that the reality is that
a well-funded (possibly global, but maybe not) adversary can still
identify who is visiting which sites with very high reliability.

Personally, I believe that most academics need to do a better job at
proving these claims:
http://lcamtuf.blogspot.com/2010/06/https-is-not-very-good-privacy-tool.html#c6562786287106207707
https://conspicuouschatter.wordpress.com/2008/09/30/the-base-rate-fallacy-and-the-traffic-analysis-of-tor/

However, many academics will say that regardless of the methodology,
there are still adversaries out there that we are vulnerable to:
http://archives.seul.org/or/talk/Aug-2010/msg00239.html

Now this is where my main conflict with the academics arises. I firmly
believe that actual science, not just academia (and certainly not SF
startup culture) is required to find solutions to the problems of
*both* censorship resistance and anonymity.  

In fact, I believe that fingerprinting of obfuscated data and evasion of
a global passive adversary are really the same problem:
http://archives.seul.org/or/talk/Aug-2010/msg00225.html

I am also not convinced that most research produced by computer
science academics is reproducible at all, given the lack of source
code, result traces, and standardized testing datasets, and I fear
this deficit will prevent us from properly evaluating defenses in the
future.

This very same deficit of information is also what made me
shy away from Haystack. Lots of hype, but no actual reproducible
or verifiable results.

> By the way, I noticed certain misunderstandings in your post. I hope
> you don't mind my correcting them.
> 
> > The key is not to confuse the two. Haystack is such a disaster
> > because it purported to be an anonymity tool but really was just a
> > semi-functional circumvention tool.
> 
> I assume you're talking about the broken test program that was
> released.  If so, I urge you not to perpetuate this myth. Haystack
> was never completed. The program that was released was never meant
> to be Haystack.

If you believe that you have a truly undetectable link layer, the Tor
folks (and everyone else?) would love to have it published somewhere.

I was at first very hopeful when I read Haystack's FAQ about combining
its use with Tor, because if the claims were true, this would be
great. However, I'm a skeptic, and without a published, reviewed
design, I'm not going to believe this is in fact the case.

Of course, publishing might make it slightly less valuable from the
point of view of a startup that is selling circumvention. However, I
do promise you that any revenue model for a censorship circumvention
system that does not also involve anonymization will ultimately end up
inherently evil ;)


-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs

