
[liberationtech] How to make the Internet secure

Phillip Hallam-Baker phill at hallambaker.com
Fri Feb 3 07:27:52 PST 2017


On Fri, Feb 3, 2017 at 9:04 AM, Rich Kulawiec <rsk at gsp.org> wrote:

>
> Many of these kinds of proposals are wonderfully thought out BUT
> they presume underlying network infrastructure that (a) exists
> (b) has sufficient performance (c) is not heavily censored/blocked
> (d) is not heavily monitored/surveilled.
>

Those are not the problems I am working on. I am always very clear about
the fact that a security solution for people in the (currently) free world
is not going to be suitable for use by activists in the non-free world.

The point of the Mesh is to address the needs of the 70% of users who
live in places where strong encryption can be used with minimal risk, and to
make the use of strong multi-layer security the default.



> The problem with that, as everyone here probably knows all too well,
> is that those assumptions are often not true.  In some places, they're
> less and less true by the day.
>

Making encryption the default for email prevents attacks like those
performed by Putin/Wikileaks, which is a good start.

Making encryption the default for email in North America and Europe means
that there is a much larger volume of encrypted traffic for the activist
traffic to hide in.

> So even if all connections are encrypted, that won't suffice: the
> endpoints won't be able to connect.


That is an acceptable outcome in some situations and not in others.


For these kinds of proposals to be viable under adverse circumstances --
> which are the circumstances where they NEED to be viable -- they need to
> be able to work over highly intermittent/slow connections (e.g., ad hoc
> connections that don't traverse any ISP's broadband infrastructure)
> and over sneakernet.
>
> *Especially* over sneakernet: I think that's an absolute must-have.
>
> For example:
>
>         1. Mail queue written onto a USB stick or memory card or similar.
>         2. It's smuggled into a location.
>         3. It's plugged into a system which delivers the queue locally
>                 and/or via wifi and/or bluetooth.
>         4. Same system collects traffic going the other way, then
>                 2 and 1 are reversed.
>
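The queue-on-a-stick flow above can be sketched in a few lines. This is a
hypothetical illustration only -- the directory layout, function names, and
hash-based duplicate detection are my own assumptions, not any real MTA's
spool format:

```python
# Hypothetical sketch of the sneakernet queue steps above: export the spool
# onto removable media (step 1), then deliver it on the far side (step 3),
# skipping messages already delivered on an earlier run.
import hashlib
import os
import shutil

def export_queue(spool_dir: str, stick_dir: str) -> int:
    """Step 1: copy every queued message file onto the removable medium."""
    os.makedirs(stick_dir, exist_ok=True)
    count = 0
    for name in os.listdir(spool_dir):
        shutil.copy(os.path.join(spool_dir, name),
                    os.path.join(stick_dir, name))
        count += 1
    return count

def import_queue(stick_dir: str, local_dir: str, seen: set) -> int:
    """Step 3: deliver the queue locally, deduplicating by content hash."""
    os.makedirs(local_dir, exist_ok=True)
    delivered = 0
    for name in os.listdir(stick_dir):
        path = os.path.join(stick_dir, name)
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        if digest in seen:  # already delivered on an earlier run
            continue
        seen.add(digest)
        shutil.copy(path, os.path.join(local_dir, name))
        delivered += 1
    return delivered
```

Running the same import twice delivers nothing the second time, which is the
duplicate-detection property the return trip (steps 4, 2, 1 reversed) relies on.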

Yes, that is a great architecture for some applications to solve some
problems. It is not an architecture for solving the problem the Mesh is
designed to address.


> (As a side note, let me note that there's also good reason to think
> about the Usenet flooding model, because it makes traffic analysis
> difficult: everything is delivered to everyone.  I wrote a proposal
> a few years back to leverage Usenet news plus encryption to create a
> highly asynchronous, indirect communications method that would suffice
> to get email and news in/out of countries with NO external connectivity.
> Plus: almost all the software already exists, it's well-understood and
> mature, it scales indefinitely, works over intermittent and slow links,
> works over sneakernet.  Minus: arguably inefficient, somewhat vulnerable
> to DoS, requires rethinking duplicate detection problem.)
>

That is not the problem I am trying to solve. But you could use the
Mesh/Remesh tool I have built as a basis for solving it.

First, consider the fact that you would need some form of ingress control
to stop the authorities from simply DoSing the network by flooding it. So
expect the content to be limited to short text messages plus longer data
objects that come from trusted sources. That is a solvable problem, though:
distribute some form of currency and use it to ration bandwidth.
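One way to read "distribute some form of currency" is a per-sender byte
credit that is spent as objects are accepted. Everything below is an
illustrative assumption -- the class name, the short-message threshold, and
the trust set are inventions for the sketch, not part of the Mesh:

```python
# Hypothetical sketch of rationing ingress bandwidth with a per-sender
# credit balance ("currency"): short text messages always pass, while
# longer data objects require both trust and sufficient credit.
class IngressRationer:
    def __init__(self, short_message_limit: int = 512):
        self.balances = {}              # sender -> remaining byte credit
        self.trusted = set()            # sources allowed to send large objects
        self.short_limit = short_message_limit

    def grant(self, sender: str, tokens: int) -> None:
        """Distribute currency to a sender."""
        self.balances[sender] = self.balances.get(sender, 0) + tokens

    def accept(self, sender: str, size: int) -> bool:
        """Decide whether to admit an object of `size` bytes from `sender`."""
        if size <= self.short_limit:    # short text messages always pass
            return True
        if sender not in self.trusted:  # large objects need a trusted source
            return False
        if self.balances.get(sender, 0) < size:
            return False                # out of currency: flooding is rationed
        self.balances[sender] -= size   # spend the currency
        return True
```

The point of the design is that a flooder exhausts its own credit rather than
the network's capacity.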

The tool I use a lot is proxy re-encryption ('recryption'), which is
three-key cryptography.
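The key-splitting idea behind three-key cryptography can be illustrated with
textbook ElGamal over a toy group. This is strictly a sketch of the concept,
not the Mesh's actual recryption scheme: the parameters are insecure and the
function names are my own:

```python
# Toy sketch of proxy re-encryption by key splitting: the decryption key x
# is split as x = x1 + x2 (mod p-1); a service holds x1, the recipient
# holds x2, and neither can decrypt alone. Textbook ElGamal over a small
# prime field -- NOT secure parameters, illustration only.
import secrets

p = 2**61 - 1        # small Mersenne prime, far too small for real use
g = 3                # group element used as the base in this sketch
q = p - 1            # exponent arithmetic is mod p - 1

def keygen():
    x = secrets.randbelow(q)
    x1 = secrets.randbelow(q)        # service's key share
    x2 = (x - x1) % q                # recipient's key share
    return pow(g, x, p), x1, x2      # public key, share 1, share 2

def encrypt(y, m):
    """ElGamal encryption of group element m under public key y."""
    k = secrets.randbelow(q)
    return pow(g, k, p), (m * pow(y, k, p)) % p

def partial_decrypt(c1, x1):
    """The service removes its half of the key without seeing m."""
    return pow(c1, x1, p)

def finish_decrypt(c1, c2, s1, x2):
    """The recipient removes the other half and recovers m."""
    s2 = pow(c1, x2, p)
    return (c2 * pow(s1 * s2, -1, p)) % p   # s1*s2 == y^k, so this is m
```

The useful property is that the service's partial decryption reveals nothing
on its own, so decryption authority can be delegated and revoked by managing
the x1 share.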


I am, by the way -- and this is a nod to your preamble about
> incrementalists and absolutists -- firmly in the middle.  There are a
> number of things that we're doing (and have been doing) that we need
> to stop doing immediately, and some things we don't do that we need to
> start doing yesterday.  That's the absolutist part.  The incrementalist
> part says that we should not be tinkering with machinery that has proven
> itself to be very robust in the face of determined attacks *unless*
> the replacement parts we have in mind can demonstrate that they'll be
> just as resilient.  For instance, I'm highly dubious about JSON email,
> which I think is far more likely to introduce gratuitous complexity,
> creeping featurism, and horrible code bloat -- and thus massive security
> and privacy issues -- than anything else.
>

JSON is a lot simpler than any of the alternatives we have used to date.
Right now we use a different packaging format for each messaging modality:
RFC 822 and MIME for mail, ASN.1 for S/MIME and X.509/PKIX; Jabber uses
RFC 822 plus XML, plus other stuff, ...

Security is layered in differently in each case, always as an afterthought
and always as an optional extra that barely functions and is poorly
supported.

One of the big problems with SMTP is that the infrastructure has ossified,
making it impossible to eliminate bad design decisions. Mailing lists are a
hack that lacks a security mode. There is no concept of a social network,
so anyone can send a message of any length, with possibly malicious
content, to anyone else without introduction.

Any security infrastructure that does not support the existing Internet
application protocols is not worth considering. Signal is growing fast at
the moment, but there is no reason to believe that it will ever become
larger than all the earlier niche secure email systems that grew to a few
million users and then stalled. But there are some security problems that
cannot be solved within the current protocol frameworks.

The Mathematical Mesh is very closely modeled on the deployment strategy
developed by my former CERN/W3C colleagues Dan Connolly and Tim Berners-Lee
for the World Wide Web. When I first started working on the Web, less than
5% of Web content was on HTTP servers; most was on FTP, some on Gopher,
some on NNTP. The Web took off because it provided two things: a compelling
user experience for native content and a means to use the legacy content as
a bootstrap.