[liberationtech] Crypho

Mike Perry mikeperry at
Sat Jun 8 03:15:07 PDT 2013


> On Tue, Mar 26, 2013 at 09:24:13AM +0100, Yiorgis Gozadinos wrote:
> > 
> > Assuming there is a point of reference for the js code (some published instance of the code that others can audit and verify does not leak), the question becomes: "Is the js I am running in my browser the same js that everybody else is running?"
> > Like you said, it comes down to the trust one can place in the verifier.
> > A first step could be, for instance, a browser extension that compares a hash of the js against one held by a trusted authority. The simplest version would compare a hash of the running code with a hash of the code in a repo.
> > Another (better) idea would be for browser vendors (say, Mozilla) to take up the task and act as the trusted authority and built-in verifier: developers would sign their code and the browser would verify it.
> > Finally, I would like to think there is a way for users to broadcast some property of the js they received, say a color derived from its hash. Then, when I see blue while everyone else is seeing pink, I know something is fishy. There might even be a way to do that in a decentralised fashion, without having to trust a central authority.
> Dear Yiorgis:
> I think this is a promising avenue for investigation. The problem is
> that people like you, authors of user-facing apps, know the problem
> you want to solve, but you can't solve it without help from someone else,
> namely the authors of web browsers.
> With help from the web browser, this problem would be at least partly solvable.
> There is no reason this problem is harder to solve for apps
> written in Javascript and executed by a web browser than for apps written in a
> language like C# and executed by an operating system like Windows.
> Perhaps the next step is to explain concisely to the makers of web browsers
> what we want.
> Ben Laurie has published a related idea:

Now this is interesting. I had not seen that link before.

I wonder how the above 2012 Ben Laurie would get along with this
slightly more vintage 2011 Ben Laurie, who discounts not only the
hashtree concept, but any attempt to secure it with computation as well:

The problem is, 2012 Ben Laurie's system is obviously quite easy to
censor and manipulate if the adversary has any sort of active traffic
capability: it can show custom extensions of the hash chain (i.e.,
malware) to targeted individuals.

2011 Ben Laurie's "Efficient Distributed Currency", on the other hand,
suggests a Tor-like multiparty signing protocol to avoid these issues:

But if we assume the worst, the 2011-model Ben Laurie is vulnerable to
an adversary such as the NSA that might compromise his datacenter
computers (or keys) behind his back.

However, 2012 Ben Laurie could detect such an NSA compromise if it
were reasonably hard to add new, fake entries to the hash tree, if
clients kept history, and if he had multiple authenticated network
perspectives on the hash tree (i.e., notaries).
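That multi-perspective check can be sketched very roughly: accept a hash-tree root only if enough independent notaries report the same value. The notary responses, quorum threshold, and function names here are all hypothetical illustrations, not any deployed protocol:

```python
# Sketch: accept a hash-tree root only when a quorum of independent
# notaries agrees with the value we observed. All data is hypothetical.
from collections import Counter

def accept_root(observed_root: str, notary_roots: list, quorum: int) -> bool:
    """Return True only if at least `quorum` notaries saw the same root we did."""
    counts = Counter(notary_roots)
    return counts.get(observed_root, 0) >= quorum

# Three of four notaries report the root we observed; one reports
# a different value (possibly a targeted fake shown only to us).
notaries = ["ab12", "ab12", "ab12", "ff00"]
print(accept_root("ab12", notaries, quorum=3))  # True
print(accept_root("ff00", notaries, quorum=3))  # False
```

A targeted attack has to fool a quorum of perspectives simultaneously, not just the single connection between one client and one server.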

Can't both Ben Lauries just get along? ;)

To bring us back to Earth:

The core problem with the website-as-an-app JS model is not only that
*every* JS code download from the server is authenticated solely by the
abysmal CA trust root, but also that insecure/malicious versions of the
software can easily be targeted *specifically* at your account by
the webserver (or by the CA mafia) at any time, without informing you in
any way.
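Contrast that with a client that pins a digest of the code it trusts: any targeted swap of the delivered JS becomes detectable, independent of the CA root. A minimal sketch (the pinned digest, sample code, and function name are placeholders for illustration):

```python
# Sketch: a client pins the SHA-256 of the code it trusts at first
# install and rejects any later download that does not match.
import hashlib

def verify_pinned(js_code: bytes, pinned_sha256: str) -> bool:
    """Reject the download unless it matches the digest pinned earlier."""
    return hashlib.sha256(js_code).hexdigest() == pinned_sha256

code = b"console.log('hello');"          # code trusted at first install
pin = hashlib.sha256(code).hexdigest()   # digest pinned by the client

print(verify_pinned(code, pin))                     # True
print(verify_pinned(b"console.log('evil');", pin))  # False
```

This only moves the problem, of course: everything then hinges on how the *first* download was verified, which is exactly the trust-on-first-use question below.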

But the really scary situation we now face is that many of us have
accounts on app stores capable of delivering updates *right now* with
the same kind of targeting capability. In fact, in my opinion, every
app store that exists today is just as unsafe for delivering crypto
software as the website-based solutions are :/.

I think I still agree that the takeaway is that it's better to create
situations where you only have to do a heavyweight "double Ben Laurie"
PKI+notary+hashtree+PoW all-in-one check *once*, upon initial download,
to establish a trust root with the software provider themselves, rather
than repeatedly trusting an intermediary app store, webserver, and/or
CA trust root.

Once that initial strong check is done (and you've either run the
malware or you haven't), then the software can update using its own
strong signature authentication. In the case of paid/proprietary
software, proof of purchase from the client should be based upon
blind-signatures/ZKPs instead of unique account credentials.

But like, really nobody in the world is doing any of this, are they?

Mike Perry

More information about the liberationtech mailing list