
[liberationtech] CJDNS hype

Ralph Holz holz at
Sun Jul 14 13:15:57 PDT 2013


> Not necessary. If you (in some abstract overlay network) connect to
> multiple other nodes and Sybil attacks (so fake identities) in the
> network are impossible, then there is a low probability that all of
> those nodes would be controlled by the adversary. If you send a
> request to all of them, and they repeat the same thing ... This
> example might be impractical, but it is here just to show that it
> might not be necessary to know your peers.

Ah, I see. I was thinking more along the lines of "how do I make sure I
am not accidentally speaking to the censor himself?", since that would
be as harmful as not being able to speak at all.
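To make the redundant-query idea from the quote concrete, here is a rough
sketch. All names (query_with_redundancy, query_fn, the peer labels) are
mine for illustration, not from CJDNS or any real overlay: ask several
independently chosen peers the same question and accept only a majority
answer, so a minority of adversarial peers cannot forge the result.

```python
from collections import Counter

def query_with_redundancy(peers, key, query_fn, quorum=None):
    """Ask several independent peers the same question and accept the
    majority answer; return None if no answer reaches the quorum.
    query_fn(peer, key) stands in for whatever lookup the overlay
    actually provides."""
    answers = [query_fn(p, key) for p in peers]
    value, count = Counter(answers).most_common(1)[0]
    if quorum is None:
        quorum = len(peers) // 2 + 1  # simple majority
    return value if count >= quorum else None

# Toy demo: four honest peers and one lying one.
peers = ["a", "b", "c", "d", "e"]
def query_fn(peer, key):
    return "forged-record" if peer == "e" else "real-record"

print(query_with_redundancy(peers, "some-key", query_fn))  # real-record
```

This only helps, of course, if the peers really are independent, which is
exactly why Sybil identities have to be expensive or impossible.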

BTW, how do you propose to make Sybil nodes "impossible"? I don't
believe this is achievable without network access control, which brings
you back to the trust question, as you need a party in control of it.
While it's possible to decentralise that to some degree, previous work
in that direction was not very fruitful (performance degradation,
unrealistic requirements on the network). There is a reason no P2P
network uses decentralised access control: it's either centralised, or
it does not exist.

BTW, some work on limiting or detecting Sybil nodes does exist. Some
people propose crypto puzzles, but those are bad for weaker devices.
Others say you should avoid random IDs and instead bind the overlay ID
cryptographically to the underlay ID (= hash of IP+port). That does not
work well with NAT. Others attempt to detect Sybils, e.g. SybilGuard.
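The ID-binding idea above can be sketched in a few lines. This is a
generic illustration of the technique, not the scheme from any
particular paper; the function names are mine:

```python
import hashlib

def overlay_id(ip: str, port: int) -> bytes:
    """Derive the overlay ID from the underlay address (= hash of IP+port),
    so a node cannot freely choose where it sits in the ID space."""
    return hashlib.sha1(f"{ip}:{port}".encode()).digest()

def verify_peer(claimed_id: bytes, ip: str, port: int) -> bool:
    """Reject a peer whose claimed overlay ID doesn't match the address
    we actually see it at. One IP+port then yields exactly one valid ID."""
    return claimed_id == overlay_id(ip, port)

nid = overlay_id("203.0.113.7", 4000)
print(verify_peer(nid, "203.0.113.7", 4000))  # True
print(verify_peer(nid, "203.0.113.7", 4001))  # False
```

The NAT problem is visible in the second check: if a NAT remaps the
source port, the binding the peer computed for itself no longer matches
what its neighbours observe, and honest nodes get rejected.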

> So yes, while we currently don't know how to do such a network without
> being sure to whom you are talking, I am wondering if there is some
> proof that we will never be able to know how to do that? So is there
> some inherent property which would as a consequence show that we have
> to trust somebody ultimately? (Maybe we have to trust them just
> partially, or just for short periods of time, or maybe with some
> probability we can get "good enough" performance.)

I am afraid I am not sure I understand what you mean. But generally
speaking, it is very hard to quantify 'trust'. There is a host of
literature on that, with trust metrics etc.; I don't know much about
it, except that I don't see it used anywhere.

>> Only if the route is predictable and not in some way randomised. E.g. in
>> Kad every step through the routing protocol gives you a choice of nodes
>> to query next. The attacker would need to make sure he occupies all of
>> those hot spots. Add some random walk during the initial routing phase,
>> and costs for the attacker rise a lot more.
> Here, you are talking about an attacker targeting a particular route.
> I have more in mind an attacker whose goal is just to disrupt the
> network enough that people give up using it altogether. And if I
> understand correctly, I just have to spawn many, many IDs and some of
> the routes (because of the Kademlia distance) will hit those IDs
> sooner or later. Anything that hits me, I drop.

Ah, I see. But the same argument would still apply for Kad: there are so
many other options (routes) to take that the network still stands a good
chance of transporting your payload.
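The "choice of nodes at every step" point from the earlier quote can be
sketched like this. A toy model, not Kad's actual code; node IDs are
small integers and the names are mine:

```python
import random

def xor_distance(a: int, b: int) -> int:
    """Kademlia-style distance: XOR of the two node IDs."""
    return a ^ b

def next_hop(candidates, target, fanout=3):
    """Instead of always forwarding to the single closest node (a
    predictable hot spot an attacker can occupy), pick at random among
    the `fanout` closest candidates, so the attacker must control
    several positions at every routing step."""
    closest = sorted(candidates, key=lambda n: xor_distance(n, target))[:fanout]
    return random.choice(closest)

nodes = [0b0001, 0b0010, 0b0111, 0b1000, 0b1110]
print(next_hop(nodes, target=0b0000))
```

Each lookup step then has `fanout` equally plausible continuations, which
is the "random walk" ingredient that raises the attacker's cost.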

Although, if I were the censor, I'd go and censor the underlay: do DPI
at the IP level and throttle (not block) by dropping 80% of all TCP
segments - just enough to keep TCP retransmitting all the time. Tor has
been attacked (blocked, not throttled) that way on several occasions. I
expect Jake and Roger will tell us more about that when they give their
next talk. :)
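A quick back-of-envelope for why an 80% drop rate is so devastating: if
each segment is dropped independently with probability p, the number of
transmissions per delivered segment is geometric with mean 1/(1-p).

```python
def expected_transmissions(loss_rate: float) -> float:
    """Mean number of (re)transmissions per delivered segment when each
    attempt is dropped independently with probability `loss_rate`
    (geometric distribution). Back-of-envelope only: real TCP also
    backs off its retransmission timers and shrinks its window, so
    goodput collapses far worse than this factor suggests."""
    return 1.0 / (1.0 - loss_rate)

print(round(expected_transmissions(0.8), 6))  # 5.0
```

So every byte costs the sender five sends on average, before even
counting TCP's exponential backoff.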


Ralph Holz
I8 - Network Architectures and Services
Technische Universität München
Phone +
PGP: A805 D19C E23E 6BBB E0C4  86DC 520E 0C83 69B0 03EF

More information about the liberationtech mailing list