[liberationtech] Internet blackout

Rich Kulawiec rsk at gsp.org
Fri Jun 14 04:49:39 PDT 2013


On Thu, Jun 13, 2013 at 04:27:17PM -0700, Seth David Schoen wrote:
> These properties are really awesome.  One thing that I'm concerned
> about is that classic Usenet doesn't really do authenticity.  It
> was easy for people to spoof articles, although there would be
> _some_ genuine path information back to the point where the spoofed
> article originated.  It seems like if we're talking about using
> Usenet in an extremely hostile environment, spoofing and forgery
> are pretty significant threats (including classic problems like
> spoofed control messages! but also cases of nodes modifying
> message content).  

I completely agree with you: I share that concern.  I think a *possible*
fix for it -- or perhaps "fix" is too strong a term, let me call it
an "approach" -- is to remove the Path: header (among others) and use
the article body's checksum as a unique identifier.  Thus node A,
instead of telling node B "I have article 123456, do you want it?",
would say instead, "I have an article with checksum 0x83FDE1, do you
want it?" -- slightly complicating propagation, but not unduly so.
I think this can be used to strip out all origination information:
when A presents B with articles, B will not be able to discern
which originated on A and which are merely being passed on by A.
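A minimal sketch of that content-addressed exchange (all names here are hypothetical, and SHA-256 stands in for whatever checksum the network would actually settle on):

```python
import hashlib

def article_id(body: bytes) -> str:
    """Identify an article solely by a checksum of its body,
    so the identifier carries no origination information."""
    return hashlib.sha256(body).hexdigest()

class Node:
    def __init__(self):
        self.articles = {}  # checksum -> article body

    def store(self, body: bytes) -> str:
        cid = article_id(body)
        self.articles[cid] = body
        return cid

    def offer(self) -> set:
        """'I have articles with these checksums, do you want them?'"""
        return set(self.articles)

    def want(self, offered: set) -> set:
        """Ask only for checksums we do not already hold."""
        return offered - set(self.articles)

def exchange(a: "Node", b: "Node") -> None:
    """One direction of propagation: A offers, B pulls what it lacks.
    B cannot tell which articles originated on A and which A merely relays."""
    for cid in b.want(a.offer()):
        b.articles[cid] = a.articles[cid]
```

Since the identifier is derived from the body, a node that tampers with an article in transit changes its checksum, which also gives downstream nodes a cheap integrity check for free.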

Encrypting everything should stop article spoofing.  (Although it
doesn't stop article flooding, and an adversary could try to overwhelm
the network by injecting large amounts of traffic.  Deprecating the
Path: header actually makes this easier for an attacker.)  The use of
encryption also means that private messages can be sent from user U1
to user U2 -- yes, they'll be present on every node (eventually) but
only user U2 will be able to decrypt them using her private key.

(In other words, the way U2 discovers which messages are directed to
her is that she attempts to decrypt them *all*.  When it works: that
one was for her.  Provided an adversary does not have U2's private key,
the adversary can't figure out which ones are addressed to her.  Or who
they're from.  Or where they originated. [1])
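The trial-decryption loop can be sketched like this. For a self-contained illustration I use a shared symmetric key and a toy XOR cipher with an HMAC tag standing in for real authenticated public-key encryption (something like a NaCl sealed box is what you'd actually want); the point is only the shape of the loop, where a valid tag is the sole signal that a message was for you:

```python
import hashlib
import hmac
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Stretch a hash into a keystream. Toy construction, NOT real crypto."""
    stream = hashlib.sha256(key + nonce).digest()
    while len(stream) < length:
        stream += hashlib.sha256(stream).digest()
    return stream[:length]

def seal(key: bytes, plaintext: bytes) -> bytes:
    """Toy 'encryption' for illustration only: XOR with a derived
    keystream, plus an HMAC tag that only the right key can verify."""
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ s for p, s in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag

def try_open(key: bytes, blob: bytes):
    """Trial decryption: U2 runs this over *every* article.  Returns the
    plaintext if the blob was sealed to her key, else None."""
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None  # not ours -- indistinguishable from random noise
    return bytes(c ^ s for c, s in zip(ct, _keystream(key, nonce, len(ct))))
```

Note the cost this implies: every recipient pays one decryption attempt per article on the network, which is exactly the inefficiency that buys the resistance to traffic analysis.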

Your mention of spoofed control messages is spot-on: that's another
problem with this.  I've been thinking that perhaps the approach to
that is to consider only allowing certain control messages: for example,
article cancellation probably shouldn't be supported.  (I briefly thought
about encrypted article cancellation but then realized that it would
only work on one node: that belonging to U2 in the example above.
Not very useful!)  I rather suspect, though, that my analysis of this
is incomplete and that the best way to figure out how to deal with
control messages might be to set up a testbed network and have someone
play the role of an adversary.

Clearly, the Usenet model is very efficient for one-to-many, but
inefficient for many-to-one and one-to-one.  However, that same
inefficiency is what gives it the ability to survive major node loss
and link disruption and still work.  It's also what makes it resistant
to traffic analysis: when everyone says everything to everyone else,
it's much harder to discern who's really talking to who.

Speaking of survivability, this recent work:

	Guaranteed delivery -- in ad-hoc networks
	http://web.mit.edu/newsoffice/2013/ad-hoc-networks-0109.html

has direct applicability here.  Haeupler's algorithm shows that to
guarantee delivery to a network of N nodes, delivery to log2(N) nodes
will suffice.
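The logarithmic flavor of that result can be seen even in a toy push-gossip simulation (this illustrates the general principle of epidemic spread, not Haeupler's actual algorithm):

```python
import random

def push_gossip_rounds(n: int, seed: int = 1) -> int:
    """Naive push gossip: each round, every informed node forwards the
    message to one uniformly random node.  Returns the number of rounds
    until all n nodes are informed -- roughly O(log n) in practice."""
    rng = random.Random(seed)
    informed = {0}  # node 0 originates the message
    rounds = 0
    while len(informed) < n:
        informed |= {rng.randrange(n) for _ in range(len(informed))}
        rounds += 1
    return rounds
```

With a thousand-odd nodes, the message reaches everyone in a couple of dozen rounds rather than a thousand, which is what makes flooding protocols survive heavy node loss without becoming hopelessly slow.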

What all this does *not* give us is a real-time communications medium.
But I'm not at all sure that's desirable.  Over the past few years,
I've slowly formed the hypothesis that the closer to real-time
network communications are, the more susceptible they are to
(adversarial) analysis.  I can't rigorously defend that -- like I said,
it's just a hypothesis -- but if it's correct, then it would be a good
idea, when and where possible, to make communications NON-real-time.
(Thus it might be a good idea for nodes participating in this
kind of network to randomize the time intervals for outbound
transmissions, in order to avoid generating a flurry of network
activity that can be readily associated with an external event,
a location, or a person.)
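A sketch of that jittered scheduling (the 10-minute-to-2-hour window is an arbitrary choice for illustration): messages queue locally and go out only at randomized instants, so transmission times can't be correlated with the events that produced the messages.

```python
import random

def next_send_delay(rng: random.Random,
                    min_s: float = 600.0,
                    max_s: float = 7200.0) -> float:
    """Random delay before the next outbound batch.  The bounds here
    (10 min to 2 h) are placeholders, not a recommendation."""
    return rng.uniform(min_s, max_s)

def schedule(rng: random.Random, start: float, batches: int) -> list:
    """Absolute send times for a series of outbound batches; nothing
    leaves the node between these jittered instants."""
    times, t = [], start
    for _ in range(batches):
        t += next_send_delay(rng)
        times.append(t)
    return times
```

A refinement worth testing on such a testbed: send at the jittered times *whether or not* there is real traffic, padding with dummy articles, so even the presence of a transmission reveals nothing.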

One of the other nice features of a Usenet-like architecture is that
it works beautifully with sneakernet data transmission.  A micro SD
card or a USB stick can hold a *lot* of data, and they're easily
concealed, traded, or dropboxed.  It's not at all unreasonable to
conceive of a scheme where daily reports of events inside Elbonia
are transmitted by physically carrying them to a location outside
Elbonian-controlled network space and injecting them back into
the network.  Or vice-versa.

I'm not saying this is "the" answer.  I'm not even sure it's "an"
answer.  But I think it might be the foundation for one.  Now if
I could just find the funding to work on it for 6-12 months I'd
be all set. ;-)

---rsk

[1] I suspect that an adversary in possession of a large number of
nodes might be able to make some progress on this.  I also suspect
that someone much better at graph theory than me will need to tackle
that problem if we're to have something better than my best guess.


