
Facebook thought the Brooklyn LIRR crash was a big deal. Here’s why that matters

Facebook uses data to decide who your friends are, what you're interested in and when you're in an emergency. That should give us pause.

The response to Wednesday’s Long Island Rail Road crash at Atlantic Terminal was just what you would have expected for what looked like a major event in New York City.

A helicopter beat the air. More than a dozen emergency vehicles shut down Flatbush Avenue. But it quickly became clear that, fortunately, the situation didn’t match the magnitude of the response. There were 104 minor injuries according to the FDNY, but no fatalities.

One police officer ticked off the uniformed services on site to another officer: FDNY, ESU, MTA... Soon he was on his second hand. The robust response was more than enough to care for the walking injured, some of whom sat on a ledge outside the terminal with bemused or dazed expressions and ice packs held to noses and knees.

Trains were only briefly delayed.

But as the city got back to normal, the flurry of concern lingered in an unlikely place: online, where Facebook’s Safety Check feature had been activated, allowing users to mark themselves “safe.”

How did the LIRR crash become a Facebook Safety Check?

Some Facebook users were surprised that what turned out to be a small incident got attention from the social media company. The reason appears to be a policy change made over the summer, part of Facebook’s dilemma over how much to be a top-down arbiter. That struggle, which also extends to the company’s fake news problem, is increasingly bleeding over into the real world.

Early versions of Safety Check date to 2011, when the company experimented with a feature that let users reassure friends and loved ones that they were free from harm during a nearby natural disaster. The company decided when to turn on the feature.

The effort expanded, notably during the Paris terror attacks in November 2015, when tourists, travelers and locals alike appreciated the chance to tell friends near and far that they were unharmed. But in the aftermath, critics chastised Facebook for not turning on the feature for similar attacks in Beirut a day earlier.

In June, Facebook changed the rules so that user behavior largely determines when the function activates. A third-party security firm “aggregates information directly from police departments, weather services, and the like,” said Facebook spokesman Stephen Rodi in an email. Once the firm has certified an event, the feature is activated if a certain number of people are talking about it on Facebook. Rodi says the necessary threshold varies by place and incident type.

Once the threshold is crossed, users in the area who are also talking about the incident get a prompt asking whether they’re safe. They can then respond, share the alert, or ask after others.
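Facebook hasn’t published its implementation, but the two-gate logic described above — a certified event plus enough local chatter — can be illustrated with a minimal sketch. Every name, number, and data structure below is a hypothetical assumption for illustration, not Facebook’s actual code.

```python
# Hypothetical sketch of the Safety Check activation flow described above.
# All names and thresholds are assumptions; Facebook has not published
# its real implementation.
from dataclasses import dataclass

@dataclass
class Incident:
    title: str              # supplied by the third-party security firm
    certified: bool         # verified against police/weather sources
    mention_threshold: int  # "varies by place and incident type"

def should_activate(incident: Incident, local_mentions: int) -> bool:
    """Activate only for certified incidents enough nearby users discuss."""
    return incident.certified and local_mentions >= incident.mention_threshold

def users_to_prompt(in_area: set, discussing: set) -> set:
    """Only users both in the area and talking about it get the prompt."""
    return in_area & discussing

crash = Incident("The Train Accident in Brooklyn, New York",
                 certified=True, mention_threshold=500)
if should_activate(crash, local_mentions=12_000):
    for user in sorted(users_to_prompt({"ana", "ben", "cal"},
                                       {"ben", "cal", "dee"})):
        print(f"Prompting {user}: are you safe?")
```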

That’s how news of the not-so-serious train crash spread on Facebook Wednesday morning. The title of the incident, in this case the somewhat surreal “The Train Accident in Brooklyn, New York,” comes from the third-party firm.

It didn’t take long for Facebook users to start joking about the alert. The incident didn’t exactly rise to the level of other events of serious scale, or even to the level of the Hoboken train crash or the Chelsea bombing, both of which triggered Safety Checks in September.

Facebook did not provide a full list of Safety Check activations in NYC. But its activation protocols mirror the new experiments to combat fake news announced in the wake of the presidential election.

Letting Facebook decide

In response to the proliferation of bogus articles on the platform (the Pope endorsed Trump, Obama outlawed the Pledge of Allegiance), Facebook announced it had begun testing a system that lets users mark links as potentially fake.

Potentially fraudulent articles would be sent to neutral third-party fact-checking organizations like Snopes or PolitiFact. If those outside reviewers judged an article false, it would be labeled “disputed” on Facebook. Facebook also moved to cut fake-news sites off from its advertising network.
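As with Safety Check, Facebook hasn’t detailed the mechanics, but the flagging pipeline it described can be sketched as a simple state machine. The states, threshold, and function names below are illustrative assumptions.

```python
# Hypothetical sketch of the dispute-flagging pipeline described above.
# The threshold and state names are assumptions, not Facebook's
# published design.
from enum import Enum, auto
from typing import Optional

class Status(Enum):
    NORMAL = auto()        # ordinary link
    UNDER_REVIEW = auto()  # enough user flags; sent to fact-checkers
    DISPUTED = auto()      # fact-checkers judged it false

FLAG_THRESHOLD = 100  # assumed; no real number has been published

def update_status(status: Status, user_flags: int,
                  verdict: Optional[bool]) -> Status:
    """Advance an article through the review pipeline."""
    if status is Status.NORMAL and user_flags >= FLAG_THRESHOLD:
        return Status.UNDER_REVIEW
    if status is Status.UNDER_REVIEW and verdict is False:
        return Status.DISPUTED  # shown with a warning label
    return status

# Usage: an article racks up flags, then fact-checkers return "false."
s = update_status(Status.NORMAL, user_flags=240, verdict=None)  # UNDER_REVIEW
s = update_status(s, user_flags=240, verdict=False)             # DISPUTED
print(s)
```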

The step was an acknowledgment that the rowdy blue-framed community Mark Zuckerberg has built for us sometimes needs a referee. It’s a place where lies and exaggerations spread the way rumors tend to. Third parties are asked to do the best they can. But the limited power given to those third parties, in both fake news and Safety Check, shows Facebook’s wariness of wading too far into the arena itself.

In some ways, it’s too late.

Facebook wants to be so deeply entwined in our lives that we use it not just to post updates but to share live video, send mobile messages, proclaim our politics, identify our closest ring of friends, and judge whether or not we are safe during national emergencies and train bumps.

When we do all those things, of course, there is no third party to blame, just ourselves, or whoever asked us to share and be marked safe.