[liberationtech] New Yorker debuts Aaron Swartz's 'Strongbox.'

Fabio Pietrosanti (naif) lists at infosecurity.ch
Thu May 16 15:05:16 PDT 2013


I like this topic of discussion!

On 5/16/13 4:45 PM, Eleanor Saitta wrote:
> Well, in this case, the system was designed to receive leaked
> documents, fairly specifically; I think that's probably a reasonable
> term here.
My personal feeling is that the "deaddrop approach" works if and only if
all of the following conditions apply:
- submission traffic is very low
- the technical team is dedicated (paid) to do that job (incl.
maintenance and support)
- the journalists are dedicated (paid) to do that job (and work mainly
from an office, not on the move)
- no cooperation among journalists is required (if data leaves the
system for cooperation, the threat model breaks)

I know that mine is going to be an "unpopular position", but by trying
to help some whistleblowing initiatives (some of them in
not-so-democratic countries) I learned that journalists and activist
organizations engaged in that kind of "action" (anonymous
whistleblowing) do not have many resources (money, time, skills) and
need to work with something that's "practical" and "efficient" (as well
as secure).

I like the deaddrop uber-paranoid approach. I'm just convinced that it's
overkill, designed in a way that excessively sacrifices usability &
efficiency, and thus not suitable for the many users that we'd love to
see starting up their own anonymous whistleblowing initiatives.

It is very important, in my own view, to let an ecosystem of initiatives
start with little or no effort, because it's better to have 10,000
diverse, distributed whistleblowing sites rather than a few big and
complicated ones.

Most people need an airplane, a secure airplane, but not a space shuttle.
Running a space shuttle comes with so many constraints that people will
never start traveling a lot!


>> If I had to take action on DeadDrop, I would simplify it as
>> follows: - Make everything work with only 1 server
> Why do you think that less compartmentalization will result in a more
> secure system, if that system is likely to be under active attack by
> corporate and nation state security forces?
IMHO we need to be very practical in evaluating those kinds of risks.

That kind of "enemy" (corporate or nation-state security) would attack
the organization and the people, not the server (placed in an "unknown
location" behind a Tor Hidden Service).

And "if" that "enemy" did attack the servers, it would reasonably do so
only weeks or months after the "incriminating submissions" were made,
after the "information has already been leaked and published".

With a proper "Data Retention Policy" in place (i.e. automatically
deleting all data older than X days/weeks), you shrink the "effective
exposure window", because the "online data stored on the server related
to that leak" has already been destroyed (at the time of attack).
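To give an idea, such a retention policy can be as simple as a scheduled
sweep over the submission spool. The function name, the 15-day window and
the use of shred below are illustrative assumptions of mine, not anything
DeadDrop actually ships:

```shell
# Hypothetical retention sweep, meant to be run from cron.
# purge_old_submissions DIR DAYS: securely delete files older than DAYS days.
purge_old_submissions() {
    dir="$1"
    days="$2"
    # -mtime +N matches files last modified more than N*24 hours ago;
    # shred -u overwrites the file before unlinking (note: overwriting is
    # not reliable on journaling/copy-on-write filesystems or SSDs).
    find "$dir" -type f -mtime "+$days" -exec shred -u {} \;
}
```

A daily cron entry invoking it (e.g. `purge_old_submissions
/var/deaddrop/submissions 15`) would then enforce the exposure window
without any manual intervention.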

Regarding compartmentalization, that should be done through proper
system/filesystem/network sandboxing, for efficiency's sake, using
modern mechanisms like SELinux/AppArmor/iptables.
Even the US NSA abandoned most "physical compartmentalization" practices
in favor of "logical compartmentalization" (see the NSA Mobility Package
or NSA Trusted Systems as examples).
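As a concrete sketch of that logical compartmentalization on a single
box: assuming Tor runs as Debian's default "debian-tor" user (an
assumption of mine, not a DeadDrop requirement), iptables can confine
every other process to loopback so only the Tor daemon ever reaches the
network:

```shell
# Sketch: only the Tor daemon may open outbound connections;
# every other process is restricted to loopback.
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner debian-tor -j ACCEPT
iptables -A OUTPUT -j REJECT
```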

Given the considerations above, I think that realistically 1 server,
with the right software and practical/secure procedures for use and
maintenance, is enough.
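For example, exposing a locally bound submission web application as a
Tor Hidden Service on that single server takes only two torrc lines (the
directory and ports below are illustrative assumptions):

```
# /etc/tor/torrc sketch: publish a locally bound web app as a hidden service.
HiddenServiceDir /var/lib/tor/submission_service/
HiddenServicePort 80 127.0.0.1:8080
```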

>> A journalist (or a group of journalists) needs to work on received
>> material "online" and not "offline", because they need to search
>> databases, browse Google and apply investigative techniques to
>> investigate the topic. And to do it in an efficient way, because
>> time is always a scarce resource.
> There is a difference between "reading leaked documents" and doing
> investigation.  It's perfectly reasonable to have another laptop right
> next to the viewing workstation, where story notes go, searches are
> run, less confidential background material is looked at, etc.

In that scenario, if the "journalist's workstation is compromised", then
the "scope of his investigation is compromised" too, regardless of the
fact that the "secure viewing workstation is secure".
If "national security forces" are listening in on the "journalist's
workstation", they know what's going on.

Additionally, if the journalist "finds something", sooner or later he
will need to share it by bringing it to the "journalist workstation",
breaking the security model.

In theory the "offline secure viewing workstation" is cool, but in
practice I really feel that the "normal workflow of work" will break the
security model.

>> Additionally they need, for efficiency purposes, to "collaborate" on
>> the received material, and to do so there are excellent platforms for
>> sharing it like http://www.DocumentCloud.org or a DMS (document
>> management system) like Alfresco (www.alfresco.com/) that can help
>> with extracting text, applying semantic analysis, and collaborating
>> on documents.
> This depends on the kind of documents you're talking about, and the
> kind of story.  If you've been given a dump of millions of documents
> that need to be analyzed in the manner you're talking about, sure.
If you receive just a bunch of 50 .docx, 25 .xls, 10 .ppt and 1 .pst
files, you need the flexibility to operate within your own environment,
with your workflow, with your applications.
Those are not millions of documents, but the trade-off between
"operational efficiency" and the "quantity of data to be analyzed" may
be subjective.
> Not all leaks look like that; many don't.  In a case like this, it
> might be a reasonable decision to, having looked at a document dump,
> move it to a non-airgapped machine where it can be accessed in a
> collaborative way.
When end-users find a security measure unsuitable for them, they will
just "bypass the security procedure".

I think that journalists will need to use the tools that they normally
work with.
Security tools should be part of their day-by-day use and experience,
not an exception on a dedicated notebook.

When you "split" the working environment between "secure" and
"insecure", the "secure" one will slowly be abandoned.
I agree that's bad practice, but it's human nature, and it will happen.

With this knowledge in mind, IMHO, we should work towards "integrated
systems".

> However, one might well not want to bring over
> potentially incriminating records of messages with a source into that
> environment, and one might wish to ensure that unnecessary metadata
> had been removed from documents first, again to protect sources.
This is a controversial topic, because the "metadata" may be one of the
few sources of information that let the journalist make the appropriate
correlations to verify that the data is good.
From an investigative journalism point of view, metadata should not be
scrubbed by default.
Metadata may be an extremely valuable piece of information, and it
should really be up to the journalist to evaluate whether it should be
scrubbed or not.

Documents received anonymously can be faked or even manufactured, so any
bit of information that's useful to evaluate the "reputation" of that
"piece of information" should be preserved until it's in the "hands of
the analyst" (who is a trusted person), who will choose whether to
delete it or not.

>
>> So I really think it's unrealistic to handle dozens or hundreds of
>> submissions per month by copying received data offline, then
>> decrypting and analyzing it offline on a different workstation.
> What do you base your assumptions of submission rate and workload on?
An anti-corruption initiative I spoke with got more than 2000
submissions in one year.

A media outlet starting a whistleblowing initiative could expect to
receive as many submissions as its advertising of the initiative
attracts, with spikes when a scoop gets out.

Additionally, we should consider that when a whistleblower makes a
submission, he expects to get feedback "quickly".
He is nervous, risking a lot, doesn't know whether what he is doing is
right or wrong, and he cannot just wait for the journalist to get back
to the office and check his "secure workstation" to download the
submission 3 days later.

>
>> IMHO, in a realistic workflow, the journalist first quickly
>> "evaluates" the data received, identifying whether it's spam or ham,
>> defines how securely he should handle that data, and then applies
>> the "appropriate operational security procedures" depending on the
>> data received.
> If you do this on a non-airgapped machine that's been compromised and
> you figure out that what you've been handed is serious, it's a bit
> late, no?  Operational security isn't magic sauce you can spread
> around afterwards.
I totally agree.
But for the reasons explained above, I find the dedicated-machine
approach ineffective in a real-world scenario.

>> - Too Many Servers. Looking at
>> https://raw.github.com/deaddrop/DeadDropDocs/master/Deployment.jpg
>> we see that there are 4 servers, 1 switch, and several pieces of
>> dedicated hardware for operational security (external encrypted hard
>> drives), with a quite complex installation procedure:
>> https://github.com/deaddrop/DeadDropDocs/blob/master/README.md .
>>
>> This increases the cost and effort required to start a
>> whistleblowing initiative, in terms of the hardware, software,
>> services and skill set required.
> ...because this is what's needed, in this architecture.
I fully understand the architectural considerations; I just find the
result excessively expensive for the effort it requires.
>   You're
> talking about analyzing hundreds of submissions a month
> collaboratively and using large scale document analysis systems, and
> you're worried about buying a few boxes and hiring a sysadmin?
No, in my mind there are no zillions of documents, and no large-scale
document analysis systems.

A Document Management System is used in the editorial information system
of any magazine, so it may also be needed to cooperate on the documents
on which investigative journalism activities are carried out.

Anyhow, in my own view a sysadmin should not even be required to start a
whistleblowing initiative.

Lowering the entrance barrier while still keeping a very high security
level is the target I'd like to see reached.

I hope to see a whistleblowing platform run entirely on an old MacBook,
placed in the toilet of an apartment, connected to the local WiFi with a
cheap USB 3G key as backup, installed in VirtualBox by a "power user"
with no sysadmin skills.

>
>> - Too Much Customized Software. Looking at the installation
>> procedure, there are several customized procedures and pieces of
>> software, such as a "hardened grsecurity" Linux kernel, requiring
>> manual maintenance of security updates for each kernel release, and
>> manual setup of a Certification Authority (with OpenSSL), requiring
>> manual handling and management of certificates via the command line.
> Well, if folks start shipping properly hardened distributions (and
> there are some arguments for moving over to tails, for this reason),
> then this'd be a bit less work.  Again, just because it's hard doesn't
> mean it's not necessary.
We may enter into a philosophical discussion on security here, opening
questions like:
"Which is more secure: a system with grsecurity's custom kernel, which
requires high skills and a painful upgrade procedure, or a system
without grsecurity where you can schedule automatic upgrades?"

I'm of the school that it's better to have scheduled automatic upgrades
rather than a custom-built grsecurity kernel, but discussing it may
require at least 1 liter of beer for each of us.
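For the record, the scheduled-automatic-upgrade side of that trade-off
is only a few lines on Debian/Ubuntu (the package and file names below
are Debian's, used here just as an example of how low the bar is):

```shell
# Debian/Ubuntu sketch (run as root): unattended security upgrades
# instead of a hand-maintained custom kernel.
apt-get install -y unattended-upgrades
cat > /etc/apt/apt.conf.d/20auto-upgrades <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
```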

Are you coming to OHM2013 http://ohm2013.org in the Netherlands this
summer? It would be a good occasion for the aforementioned liter of
beer :-)

>
>> I just find it overkill for general use.
> What's "general" use?
General uses are, for example:
- A citizen media outlet
- An independent media outlet in a closed society
- An investigative journalism group
- A political activism action/campaign

Those kinds of users have very low resources and represent, IMHO, the
foundation of the future of digital whistleblowing that we should
foster. I hope to see thousands of whistleblowing initiatives up and
running in a few years, for each context, in each country, for each
sector.

When civil society is empowered that way, I think we will have achieved
our goal of seeing a more transparent world.

-- 
Fabio Pietrosanti (naif)
HERMES - Center for Transparency and Digital Human Rights
http://logioshermes.org - https://globaleaks.org - http://tor2web.org



