
[liberationtech] Boston event: How nonprofits can use Facebook to broadcast their impact??? (Feb 27th)

Rich Kulawiec rsk at gsp.org
Sat Feb 25 12:15:15 PST 2017


This has a relatively short part and a really long part.  The short part has
some responses to comments.  I have elided the attribution of those comments
because I'm responding to multiple people, and I'm not trying to pick
on anyone in particular: I pretty much think you're all wrong. ;)

The long part explains some of the fundamentals of abuse control.
It's focused on responsibility and enumerates some basic tactics/strategies.
If you run an Internet service or Internet-connected servers, you're
expected to know this stuff and to practice it: it's part of ethical,
responsible operations 101, AND it's a good first step toward not being
the subject of the security-breach-of-the-day.  It's direct and forceful
and snarky: consider it a wake-up call.  But keep in mind that if I wasn't
trying to help, I wouldn't have bothered to write it.

I'm also going to bash Facebook (and Twitter) some more: they've
earned it, by being excellent examples of worst practices in action.

It took a while to write; it'll take a while to read.  Make coffee.
You're gonna need it.


> blaming Facebook for this seems, to me, no different than blaming
> the manufacturer of the computer used to broadcast the crime...

That's a false equivalence.  I don't expect anyone to do anything about
abuse that's not happening via their own operation: they probably can't.
(Well, except report it.)  I expect everyone to act immediately on
abuse that IS happening via their own operation.

So let's review: they broadcast a gang rape.  And a torture/assault.

UNACCEPTABLE.

I don't care what else they broadcast, before, during, or after.  It could
be 57 channels of snuggly kittens and happy puppies around the clock
for 10 years.  This can *never* happen.  Not even once.

But incidents like this -- see article -- have happened repeatedly.
And they'll keep happening, because it's profitable for Facebook to
ensure that they do.  It's not an accident.

	"It's not a mistake. They don't make mistakes. They don't do random."

Facebook calculates what will make money, and that's what it does --
regardless of how much damage is done or to whom.  That's what it's
always done.  That's why it exists.  It has no other purpose.


> First, I'm not sure Mark Zuckerberg is a sociopath
> and certainly I don't have any evidence to claim so.

I've studied the scum of the Internet for decades: you know, spammers,
stalkers, phishers, etc.  (This has gotten somewhat easier thanks to
horizontal and vertical integration in the scum ecosystem: these days,
they're quite often the same people.)  It's ugly, kinda like studying
raw sewage, but someone has to do it.  I stand by my assessment.
He's absolutely disgusting filth -- which is why it's unsurprising
that Facebook is the mortal enemy of everything this list stands for.


> [...] are you suggesting Facebook should be permanently blocked?

Yes, that's exactly what I'm recommending.  I did it years ago.  This:

	whois -h whois.radb.net -- '-i origin AS32934' | egrep "^route"

will give you Facebook's current ranges.  Firewall permanently and move on.
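If you want to turn that route list into actual firewall rules, a small
filter helps.  This is a sketch under my own assumptions: the nft rule
format shown is one plausible choice (adapt it to whatever firewall you
actually run), it handles only the IPv4 "route:" lines, and you should
eyeball the output before applying any of it.

```python
import re

def routes_to_rules(whois_output):
    """Turn RADb 'route:' lines into candidate firewall commands.
    IPv4 only; review the output before running any of it."""
    rules = []
    for line in whois_output.splitlines():
        m = re.match(r"^route:\s+(\S+)", line)
        if m:
            # Hypothetical nftables rule; adjust table/chain to your setup.
            rules.append("nft add rule inet filter input ip saddr %s drop"
                         % m.group(1))
    return rules

sample = "route:      157.240.0.0/16\norigin:     AS32934"
print(routes_to_rules(sample))
# → ['nft add rule inet filter input ip saddr 157.240.0.0/16 drop']
```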


> I did, and saw this "Facebook Live also caught the aftermath of an
> incident in which a police officer shot and killed a man in St Paul,
> Minnesota in July 2016."

I saw that video too, and I'm not on *any* social media.  There are
plenty of ways to document incidents like this and spread that
documentation around without resorting to Facebook.



Now for the long part.  (Did you make coffee?  Good.)

This is a teachable moment, so I'm going to take advantage of it and
momentarily act like a patient -- albeit snarky and forceful -- teacher.

(There's a reason I chose the words "patient teacher".  See quote below.
The snarky part?  Well...it's what I do.  See Colonel Jack O'Neill
for a reference implementation.)

And I'm going to use the rhetorical "you" in what follows, so consider
this directed to everyone and to no one simultaneously.  I needed a
pronoun, that's the one I picked.

The subject of today's lesson is: abuse.

If abuse comes from/through your servers on your network at your operation
on your watch, then *you own it*.  You don't get to say "well, gosh,
someone else...".  No.  NO.  That's not how it works.  You don't get
a pass.  It's your responsibility, and if you're not diligent enough,
smart enough, professional enough to handle that responsibility...then
unplug your systems from the net, shut them down, power them off, and
go home, because you're not good enough to be here.  Not even close.

As you may have heard elsewhere:

	"With great power comes great responsibility."

Anyone running any piece of the 'net wields tremendous power: something
that I don't think a lot of newer people realize, because they've always
had it.  Your dinky 1U server in a dark corner of a colo somewhere
could have more reach and influence than the New York Times of 1987...
or 2017.  Which is flat-out amazing if you stop to think about it.
You have this power because of all the work done before you got here,
work on the ARPAnet and CSnet and Usenet and everything else, nearly
all of it done by people you've never heard of.  But you must use
that power responsibly.

I realize that some of you are rather new to the 'net and thus new to
this concept of responsibility that's commensurate with power...but that's
how it works because that's how it HAS to work.  Consider the converse,
and if you're at all thoughtful, it'll take you just a couple of minutes
to realize that the inevitable end result of that is a dysfunctional
or completely nonfunctional network: "tragedy of the commons" and all that.

If you think about it for another few minutes, then it should dawn on you
that many of the problems we face have *exactly* this root cause: e.g.,
spam (in its myriad forms).  It doesn't magically fall out of the sky.
It comes from systems run by people on networks run by people: negligent,
incompetent, and/or malicious people.  Same for phishing.  Same for
DDoS attacks.  Same for a lot of things.

If your thoughts don't follow this course, then you're probably inadequate
for the task of running an Internet-connected operation, because you
just don't get it.  If you choose, in spite of that, to arrogantly and
blindly go ahead anyway, because you think you're somehow entitled to do so,
then you're going to do far more damage than good.  Far more damage
than you know.

And no, you do not get a pass because you're trying to save the world.
You *especially* don't get a pass -- because you should know better.

	"Current Peeve: The mindset that the Internet is some sort of
	school for novice sysadmins and that everyone *not* doing
	stupid dangerous things should act like patient teachers with
	the ones who are."
		--- Bill Cole

(Now you know why I said "patient teacher".)


This isn't the Internet of 1987, when nearly every abuse incident was
accidental and likely the result of a grad student doing something
boneheaded at 3 AM.  This is the Internet of 2017, where the very first
two questions that *anyone* contemplating the deployment of *any* online
service of *any* type should ask are:

1. How can this service be abused?
2. How can we stop that before it happens?

If you can't do that, or don't know that you should, then you have
absolutely no business deploying a service.  The Internet is not your
playground and it is not only selfish, but dangerous for you to deploy
things that you can't competently manage.

Amateur night is over.  Sorry if you missed it -- it was fun, and in many
ways it's sad that it's done.  But it's been done for 30 years, and we
can collectively no longer afford the kind of mistakes that we once made.

I've made my share.  I've watched others make theirs.  I've had to clean
up a lot of ugly messes.  Why do you think I'm so adamant?  And if
you don't like my tone in this message, consider this: If I wasn't
interested in helping you, I would simply remain silent and watch you
make the same obvious, well-known mistakes.  Which you WILL do.  Which
some of you ARE doing.

That would be much easier for me than writing this.  Also I could be
drinking bourbon right now instead of coffee.  But y'know, I'd like to
see you do *better* than we did, not worse.  Which you will not do if
you ignorantly and arrogantly insist on cluelessly repeating mistakes that
we all knew were mistakes decades ago.  We already suffered the pain.
There's no reason for you to retrace our erroneous footsteps.

Also: the stakes are a hell of a lot higher now.  The kind of screwups
that resulted in some annoyance and chagrin and a few reboots back then
can get people hurt or killed today.

If you have any kind of functional conscience whatsoever, then that
should give you serious pause.


Most of abuse control just requires knowing what your technology is
doing, and there are some very well-known ways to do that.

	"How can you call yourself the chief technology officer if you
	don't know what your technology is doing?"
		--- Marcus Ranum

Now let's talk about one of those well-known ways.  Let me direct your
attention to section 2 of RFC 2142 (https://www.ietf.org/rfc/rfc2142.txt),
where it says:

	[...] if an Internet service provider's domain name is
	COMPANY.COM, then the <ABUSE at COMPANY.COM> address must be
	valid and supported [...]

(Incidentally, if you don't know what an RFC is, then you need to learn.
Yesterday.  You should also read the rest of RFC 2142.)

In other words, if you run example.com, then abuse at example.com must be
valid and supported.  Note: MUST.  This is not optional behavior on your
part.  It's mandatory.  RFC 2142 dates from 20 years ago, and the abuse@
role address was a de facto best practice for many years before that.
So this is something that you should have learned before lunch on your
first day of your first class in Internet Operations 101.

If your domain(s) doesn't have this in place, then drop what you're doing
and go put it in place right now.  Make sure it goes to the person(s)
empowered to handle abuse reports and make sure that they're paying
attention to what shows up.  Make sure they have the tools necessary to
handle it (I strongly recommend mutt, see http://mutt.org, as a mail
client for abuse handlers, as it's about as impervious to attack as
you're going to find) and make sure that they have the authority to do
whatever it takes to solve the problem(s).
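On a stock sendmail/postfix box, standing up the address itself is one
line in /etc/aliases (the names after the colon are placeholders for
your actual abuse handlers, not a recipe):

	# /etc/aliases -- route the RFC 2142 mandatory role address
	abuse: jane, ops-oncall@example.com

Then run newaliases.  There: now you have no excuse.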

I'll pause to give you time to take care of that.  Yes, I really do mean
RIGHT NOW, not five minutes from now.  Because you should have done it
a long time ago.

Note that this isn't just for our benefit: it's for yours.  That address
is not just for reporting abuse BY your operation, it's for reporting
abuse OF your operation.  So whether we're doing your job for you,
by reporting abuse that you're responsible for but didn't catch,
or whether we're doing you a big favor by giving you a heads-up about a
problem you don't know you have, you should be reading what shows up
there very carefully.  And promptly.

Or you can just stick your fingers in your ears and studiously ignore
complaints about all of those problems until we collectively get tired of
you and your operation and reach for firewall rules and blacklist entries
in order to permanently remove you from our view of the Internet.  (Do a
search for: AGIS spambone.  That happened 20 years ago. There are *still*
firewall rules and blacklist entries in some operations for that.
It's scorched earth.)

Note: you can, and in many cases, should, provide other means to report
abuse.  If you are allegedly smart enough to create an Internet service,
then you should be smart enough to figure out what those other means should
be and how to integrate them.   But you must provide this one, at least.

The bigger you are, the more abuse you will attract, so make sure that
abuse-handling resources are proportional.  Joe's Donuts in Dubuque can
probably get by with one person checking the abuse mailbox once a day.
A top-100 Alexa site needs a 24x7 multilingual abuse team.

And the "promptly" part is important.  Anyone who has worked in or
studied the problem of abuse control knows that if you're slow to react,
word gets around.  You'll be perceived as a soft target, and thus will
attract more abuse.  This is one of the ways that operations become "abuse
magnets", whose problems sometimes increase exponentially.  Conversely,
if you act early and decisively, the opposite reputation will propagate,
and many/most abusers will be discouraged from even trying -- because they
know there's a high probability their efforts will be promptly quashed.

You can make your job easy, or you can make your job hard.

Here's an example of an operation that made its job hard by failing to
pay attention to the "prompt" part and thus became an abuse magnet:

	Massive networks of fake accounts found on Twitter
	http://www.bbc.com/news/technology-38724082

Why is their job hard now?  Because they didn't crush this out of
existence when it was tiny.  Thus nearly all of this problem is their
own fault.  They didn't pay attention.  They didn't act promptly.
They had no idea what their technology was doing.

Pointed question: do you think this is all of the fake and/or automated
and/or spamming accounts on Twitter?

Speaking of making your job easy: the best way to minimize the resources
required to handle this is to design abuse out at the whiteboard stage.
If you skip over that, because it's not as fun as implementing your pet
idea, or because you're naive enough to think it can't happen to you,
or because you didn't do your homework, then no whining when you find
yourself needing a staff of 50 to keep up.  Or when you can't keep up
and you find yourself getting blacklisted/firewalled by operations who
are defending themselves from the abuse you never saw coming.

	"We were so concerned with getting out that we never stopped to
	consider what we might be letting in, until it was too late."

Free clue #1 on that whiteboard thing: do a search for "abuse magnet"
and make a list of all the things that have been identified as such.
Be sure that you're not doing any of those things.  Actually, be sure
that you're not doing anything that's in the same zip code with any of
those things.  Because if you blunder ahead and do one of these anyway,
then one day it's going to be abused by someone who doesn't like you.

And since the collective patience has worn paper-thin, some of the people
targeted by that abuse are not going to do you the very generous favor
of filing an abuse report: they're just going to silently block, firewall,
and/or null-route your operation, which solves THEIR problem.  Your
ensuing problems are of little-or-no concern to them, nor should they be.

Free clue #2 on the whiteboard thing:

(a) Find someone who has long experience dealing with abusers, show them
your design, and give them some time to think about it.

(b) They will give you a list of all the ways that your wonderful concept
is going to be promptly shredded as soon as someone takes an interest.

(c) Study the list and revise your design so that none of those things
are possible, or at least are very difficult and very expensive.

(d) Go to step (a) until the list size in (b) is zero.  If you really
want to be diligent, find a second reviewer and repeat the process.

This won't cover everything -- there's always something nobody has seen
or thought of before -- but it will at least cover all the things that
you should know about because you *can* know about them.  It's not
great to be the first one to suffer an entirely novel form of abuse,
but that's far better than suffering one that we knew all about in 2005,
one that you could have stopped cold with an hour's work.

Free clue #3: make sure you have a per-service/per-server kill switch.
Test it.  Test it regularly.  Because 3:32 AM is not the time to discover
that you don't actually know how to shut down something that is
spewing abuse as fast as it can shovel packets.  Make sure that
the design/deployment process for anything new includes this.  You have
to be ready and able and willing to pull your own plug if that's
what it takes.
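Here's a minimal sketch of such a kill switch, assuming a flag-file
convention (the /etc/killswitch path and the service name are
hypothetical, not any standard):

```python
from pathlib import Path

KILL_DIR = Path("/etc/killswitch")  # hypothetical location

def service_enabled(name, kill_dir=KILL_DIR):
    """A service keeps running only while its kill file is absent:
    'touch /etc/killswitch/live-video' stops it; 'rm' re-enables it."""
    return not (kill_dir / name).exists()

# In the service's main loop, check on every iteration:
#   if not service_enabled("live-video"):
#       drain_connections_and_exit()
```

Testing it regularly is then as simple as touching the file in staging
and confirming the service actually drains and stops.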

Free clue #4: Consider this: many of you think you're building
weapons -- to fight for justice, freedom, rights, etc.  And maybe
you are.  Good for you.  But you're also building targets.  Big targets.
Big vulnerable targets.

So keep in mind: if you can't even fend off abusers, what possible
chance do you think you have against a determined adversary?  The first
serious enemy you have will take you apart -- if they can stop derisively
laughing at you long enough to bother.  Susceptibility to abusers is a
surface indicator of high vulnerability to attackers.

Don't make the fundamental mistake of presuming you're not a target.
You are.  Whether it's someone who opposes your goals, or whether it's
an accident, or whether it's someone trying to leverage your resources to
attack a third party, or whether it's just someone doing it because
they can, you really are a target.

	"Because some men aren't looking for anything logical, like money.
	They can't be bought, bullied, reasoned, or negotiated with.
	Some men just want to watch the world burn."

You WILL face these people.  You probably already are: you may just not
know it yet.  Some of them are very smart, some are relentless, some are
well-resourced, and some are full-blown batshit crazy.  That's not
a good combination to face unless you're prepared.


So what should you do when something shows up in your abuse mailbox?
The one you have now, even if you didn't have it when you started
reading this message?

1. Give it a once-over check.  Make sure it's about your operation,
your domain, your network, etc.  Sometimes people who are getting
their asses kicked by abuse make typos under duress.  So rule that
possibility out first.  And cut them a break, they're not having
a good day.

Note that you are not only responsible for any abuse your operation
emits, but for any abuse your operation facilitates.  For example,
if someone's phishing via another mail server but is using a dropbox
email address that you host: you're not responsible for the phish,
but you are definitely responsible for the dropbox.

2. Acknowledge receipt and confirm that you're investigating. (Or, if step 1
revealed that it's not about you, acknowledge receipt and redirect.)

3. Investigate and try to find a way to stop the abuse NOW.  If necessary,
shut down part or all of your operation: it's clearly unethical for you
to allow it to continue to run if you know it's emitting abuse.  So if
that's the only available option, then do it.  Obviously in instances like
this, you're going to have to figure out what the root cause is and fix it
before you can turn the service/operation back on.  But your priorities
should be (a) stop it from getting worse (b) stop it from staying the
same (c) fix it.

4. Preserve and accumulate evidence.  Stash copies of everything you
find, and keep them permanently.  Not only are you going to need them
for subsequent steps (see below) but if the abuse is distributed over
multiple services, which it often is, you're going to need this to
assist your colleagues elsewhere.

5. When you're sure you've fixed it, write back to the person who filed
the abuse report and (a) thank them for doing your job for you FOR FREE
(b) explain how you stopped the abuse and (c) apologize for the abuse.
Remember: you own it.  So you're the one that owes an apology for it.
That costs you nothing and it demonstrates that you understand your
responsibilities.

6. You're not done.  You *missed* this.  It got by you.  Why?  How?
What could you have done to give yourself a fighting chance of detecting
it before someone else did?  Fix it.  Test your fix.  Test it again.

7. You're still not done.  Review previous cases of abuse.  Is this part of
a chronic or systemic pattern?  If so, then you have a larger, deeper
problem to solve.  Get to work.  This may be indicative of a fundamental
design or implementation flaw, and it may require substantial changes.
Sometimes, and this doesn't happen often, it may be necessary to discontinue
a service entirely because there is no good way to operate it without
facilitating/generating abuse.
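Step 1's "is this actually about us?" check is easy to make mechanical.
A sketch, with documentation-range netblocks standing in for your own
allocations:

```python
import ipaddress
import re

# Placeholders: substitute the netblocks you actually announce.
OUR_NETS = [ipaddress.ip_network(n)
            for n in ("192.0.2.0/24", "203.0.113.0/24")]

def mentions_our_network(report_text):
    """True if any IPv4 address in the report falls inside our netblocks."""
    for match in re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", report_text):
        try:
            addr = ipaddress.ip_address(match)
        except ValueError:
            continue  # e.g. "999.1.1.1" looks like an address but isn't
        if any(addr in net for net in OUR_NETS):
            return True
    return False

print(mentions_our_network("spam from 192.0.2.77 hit us at 09:14"))  # → True
print(mentions_our_network("spam from 198.51.100.5 hit us"))         # → False
```

If it returns False, that's your cue for the "acknowledge and redirect"
branch of step 2 -- possibly a typo made under duress.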


And finally: abuse control is not the end.  Once you've dealt with this
in at least a minimally acceptable manner, you should be thinking to
yourself: "Huh.  I didn't know about designing against abuse and having an
abuse@ address.  I wonder what other fundamental things I should know?"
And then you should go find out.  Or you can have at least one person
on your team who has been around long enough to know.

Listen to them, especially when they begin a sentence with "That's not
a good idea..." because they are almost certainly far more right than
you know and are probably about to stop you from walking off a cliff
that you don't even know exists.  (Disclosure: I do this too, and
I've been around for a while.  But I don't know everything, so I seek
seasoned advice all the time.  It really helps.  Even when they tell
me things I don't want to hear.  ESPECIALLY when they tell me things
I don't want to hear.)


Now back to some examples of abuse control failure.

Facebook has a market capitalization of 378 *billion* dollars.
With resources like that, they could have stopped this immediately.
If necessary: by shutting the entire service down right that minute.
(Don't tell me it can't be done.  I've done it.)  But they shouldn't
have faced that problem: this incident should never have happened.
It should have been designed out at the whiteboard stage.  It's really
not that hard, especially if you have a huge bucket of money to throw
at the problem and a large staff chock-full of allegedly smart people.

So I invite you to do that very exercise.  Presume you have a tiny
budget of $50M and modest staff of 50.  Spend one hour and come up with a
design that makes it far more difficult for someone to do this, ensures
it won't last long if they do, and includes measures to avoid a repeat.
Your design must work at Facebook scale, of course.

(Yes, "tiny".  With $378B, $50M is pocket change.)

You should, if you have any operational expertise at all, find this
a rather simple problem to solve.

And that, in turn, should lead you immediately to the question: "if it's
this easy, why isn't it solved?"   And that should lead to the answer
that I gave at the beginning of this message: because they don't want
it solved.  It's not a mistake.  It's not random.  It's part of the design.


Now let's talk about Twitter some more.  It is abundantly clear to even the
casual observer that Donald Trump has repeatedly violated Twitter's ToS.
He's a serial abuser.  Yet he still has an account.  Why?  Because
Jack Dorsey is a spineless coward and fears the backlash.  And well,
because it makes rather a lot of money for Twitter, and Twitter values
corporate and personal profit over professional responsibility and
basic human decency.

Don't be like that.  If, despite all of your design and implementation,
all your due diligence, everything you've done, somebody *still* finds a
way to channel abuse through your service, then make it stop right now.
Do whatever it takes.  Throw them out and keep them out.  Show some
backbone.  Be prepared to fire your biggest customer.  (Don't tell me
it can't be done.  I've done that too.)

Then sit down with your best people, and figure out how to make it
stop -- not just today, but forever -- and execute the plan with ruthless
efficiency.  Even if that plan begins with "as a triage measure, shut
the whole thing down and unplug the network".

And please write it up so that all the rest of us can learn from your
experience, and incorporate the lesson into our operations.

Oh, and the answer to my pointed question about Twitter, above?

(Recapping: "Pointed question: do you think this is all of the fake
and/or automated and/or spamming accounts on Twitter?")

No, it's not:

	As a conservative Twitter user sleeps, his account is hard at work
	https://www.washingtonpost.com/business/economy/as-a-conservative-twitter-user-sleeps-his-account-is-hard-at-work/2017/02/05/18d5a532-df31-11e6-918c-99ede3c8cafa_story.html

Do you think the continued existence of this account and MANY more
like it is an accident?

Of course it's not.  This account's abuse methodology is trivial to
detect using dirt-simple first-order metrics like tweets/unit time
and (identical tweets)/(total tweets).  If those were in place, this
account would have been killed and the spammer dirtbag behind it blacklisted
for life a long time ago.  But it's still there, and so are all of the
others like it.
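Both of those first-order metrics fit in a dozen lines.  A sketch (the
thresholds here are my illustrative assumptions, not anything Twitter
publishes):

```python
def looks_automated(timestamps, texts, max_per_hour=60, max_dup_ratio=0.5):
    """First-order spam heuristics: posting rate (tweets/hour) and
    duplicate ratio (identical tweets / total tweets).
    Thresholds are illustrative, not calibrated."""
    if len(timestamps) < 2:
        return False
    span_hours = (max(timestamps) - min(timestamps)) / 3600 or 1 / 3600
    rate = len(timestamps) / span_hours
    dup_ratio = 1 - len(set(texts)) / len(texts)
    return rate > max_per_hour or dup_ratio > max_dup_ratio

# 100 identical posts in ten minutes trips both metrics:
ts = [i * 6 for i in range(100)]               # one post every 6 seconds
print(looks_automated(ts, ["buy now"] * 100))  # → True
```

Real detection needs far more than this, of course.  But an account
posting identical text hundreds of times an hour shouldn't survive even
this filter -- and yet it does.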

Which is why Twitter's pronouncement last week about how they're
going to make things oh-so-much better is notable not for what it says,
but for what it doesn't say:

	"Today, we're announcing three changes: stopping the creation
	of new abusive accounts, bringing forward safer search results,
	and collapsing potentially abusive or low-quality Tweets."

Notice what's missing?

Nowhere does it say that abusers will be kicked off and banned for life.
No, Twitter doesn't want to do that: it would hurt profits.  So they're
going to continue facilitating as much abuse as they can manage as long
as the cash register keeps ringing -- by tapdancing around the edges of
the problem and refusing to tackle the root cause.

I'm not the only seasoned observer who's noticed this.  Lauren Weinstein
wrote (on his "nnsquad" mailing list, in a comment to an article about
Twitter's infrastructure):

	"If they filtered out all of their spam, racist, and other trash
	postings they could serve the bulk of the remaining useful Tweets
	these days from a pair of Raspberry Pi boards, with cycles to spare."

He's exaggerating for effect of course, but he's not far from the truth.
Twitter's infrastructure is probably at least two orders of magnitude
larger than it needs to be because of the abuse that they don't
have the guts to stop.  Keep this in mind the next time one of the
clueless newbie engineers who works there brags about how amazing
their scale is: it's that big because they didn't have the intelligence,
the will, and the courage to avoid the *need* to have that much stuff.


If you're not willing to stop abuse when circumstances call for it,
then you're simply not good enough.  You don't have the sense of
great responsibility that's required for those who wield great power.
You're just a parasite, leeching off the hard work and good will of
others, but not willing to do your part.  You and your operation are a
menace to the entire rest of the Internet and everyone on it.

So don't be like that.  Be better.  A LOT better.  It's just not that
hard, provided you pay attention and keep your priorities straight.
(Your top priority is always: don't be a hazard to the billions
of people, systems, and networks that constitute the entire rest of
the Internet.  Write it down.  Tape it to the top of your screen.)

And do note, in re the need to be a ruthless bastard/bastardette about
dealing with abusers: I don't like the all-or-nothing nature of this
situation.  I don't like the fact that (with precious few exceptions)
there are no other viable options.  Not even a little bit.  In a
better world, it would only be necessary to say "Hey -- not cool: don't
do that" and that would suffice to make the problem go away.  30 years
ago, we more-or-less had that world on the 'net, and that kind of thing
actually worked most of the time.

That was then.  This is now.  It doesn't work.  It's not going to work.

So as much as I don't like the heavy-handed approach, I've seen enough to
realize, sadly, that it's nearly always necessary these days.

And if that isn't your personality type, then find someone who is and
have them handle it.  Because if your operation can't stand up to mere
abusers, then you have no chance to withstand real enemies who will
attack you in more concerted fashion.  They'll go right through you, and
whatever work you did, whatever good you could have done for the world,
will be quickly rendered irrelevant.  Like I said above, susceptibility 
to abusers is a surface indicator of high vulnerability to attackers.

Worse, whatever data and metadata you have, including data and metadata
about your supporters, friends, and allies, will end up in their hands.
All that you will have accomplished is accumulating a stockpile of
valuable intelligence that is highly useful to your enemies and handing
it over to them without much of a fight.  And in that circumstance,
it would have been better -- in the long run -- if you had done nothing.

So don't let that happen.  If necessary, you must be ready and willing
to burn it down before you let it fall into the wrong hands.  But a much
better course of action would be to never let it get so far that drastic
measures are called for.  Don't go there and you won't have to cope with
being there.  Do all the stuff I said above and you'll take some first
steps toward greatly improving your survival chances.

Maybe you *will* do some good in the world, and congrats to you if you do.

Because we could use it.  Doubly so in dark times like these.

Good luck.

Class dismissed.

---rsk


