A program called E-Responder, launched by the NYC Citizens Crime Commission last year, helps workers identify risky posts on social media before they go viral, and has already reported successes in violence intervention. The group’s president says it’s worth a look by other cities, and social media providers like Facebook.
The video of Robert Godwin Sr.’s murder, posted to Facebook on Easter Sunday, was a brutal and shocking online event, but it was merely the latest in a growing number of violent and deadly acts on Facebook and other social media networks.
Social media providers are struggling to deal with this troubling trend. They have responded to widely publicized violent events with promises of new internal processes and commitments to add more human “monitors” to catch violent posts before they go viral.
But if the goal of all that is to actually prevent online violence, Facebook and others will continue to fail miserably.
Facebook, for instance, mostly relies on its users to report violent posts—a policy that allowed the video of Mr. Godwin’s death to stay online for two hours before it was removed. By then, of course, it had already spread around the world.
Here’s what Facebook said about its failure to take the video down sooner: “We disabled the suspect’s account within 23 minutes of receiving the first report about the murder video. We … are in touch with law enforcement in emergencies when there are direct threats to physical safety.”
This largely hands-off approach means that Facebook will frequently be late to address serious violence.
Facebook’s excuse for not catching violent posts immediately is, essentially, that it cannot do so alone. Even with “reviewers” monitoring the network for dangerous activity (and a pledge to add 3,000 more), the company says it is logistically impossible to track more than one billion users in real time. Or so it has implied.
But that’s not the case. Advances in search capability and artificial intelligence now make it possible to detect and block violent posts almost immediately.
Facebook simply needs to put that technology to work to create a safer experience for users.
Artificial intelligence techniques such as deep learning can automatically analyze human text and images and determine whether the content is risky or violent. Google has already developed this kind of technology. Facebook can build its own deep learning models, tailored specifically to flag violent posts and provide instant feedback.
Early-warning systems built on deep learning models that recognize violent content in images, text and faces can sharpen trained employees’ situational awareness and help secure our communities.
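The early-warning workflow described above can be sketched in a few lines of Python. The scoring function here is a crude keyword stand-in for a real trained deep-learning classifier, and the function names and threshold are illustrative assumptions, not any platform’s actual pipeline:

```python
# Sketch of an early-warning pipeline: a model scores each post, and
# high-risk posts are routed straight to trained human reviewers
# instead of waiting for other users to report them.

RISK_THRESHOLD = 0.5  # illustrative cutoff, tuned in a real system

def model_score(text: str) -> float:
    """Placeholder for a trained deep-learning classifier.

    Returns a risk score in [0, 1]. A keyword count stands in here
    for the model's output; it is not a real detection method.
    """
    risky_terms = {"gun", "shoot", "kill", "murder"}
    words = (w.strip(".,!?").lower() for w in text.split())
    hits = sum(1 for w in words if w in risky_terms)
    return min(1.0, hits / 2)

def triage(post: str) -> str:
    """Route a post: queue it for human review if risky, else publish."""
    if model_score(post) >= RISK_THRESHOLD:
        return "queued_for_review"  # a trained reviewer sees it immediately
    return "published"

print(triage("gonna shoot him tonight"))  # queued_for_review
print(triage("great game last night"))    # published
```

The point of the design is the pairing: the model never acts alone, it only decides which posts a trained human sees first, so the two-hour gap between posting and takedown shrinks to the reviewer’s response time.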
Indeed, pairing this level of technology with Facebook’s team of “reviewers” could have saved Mr. Godwin’s life.
Facebook could also make it easier for its community to access critical services. It already does this for suicide prevention; it should expand that outreach to other types of prevention services, such as referrals to more national hotlines and the ability to chat with an online crisis counselor.
Facebook should also be using technology and training to identify a range of troubling behavior on its own platform, bypassing the need for untrained users to report others’ behavior.
There are efficient, effective ways to keep the virtual from turning violent. We are already doing it at the New York City Citizens Crime Commission with the launch of E-Responder, a program that trains people to identify dangerous posts on social media, help those at risk, and de-escalate conflicts online before they turn into real-world violence.
E-Responder launched last year, and the Citizens Crime Commission has since trained dozens of anti-violence workers across New York City. Ninety-seven percent of interventions performed by E-Responders resulted in positive outcomes, and E-Responders were also significantly more likely to identify risky posts.
We now plan to expand the program outside of New York City. With knowledge and training, we believe tragedies that start or end online can be avoided.
Facebook and other social media platforms should take note. They have a responsibility to lead the way and protect their community members and the wider world. Because, with their help, we will not just have the ability to identify and block dangerous posts—we will have the ability to stop violent behavior.
In fact, if we all do more, social media can give us the clues and the tools we need to interrupt and effectively address the behavior that foreshadows tragedy.
Richard Aborn is President of the Citizens Crime Commission of New York City. He welcomes readers’ comments.