Reader Be Warned

Nov 14, 2017  |  By Isaac Maltzer

Growing scrutiny of tech companies, and pressure on them to better monitor the activity taking place on their platforms, has become one of the central hot-button issues of the day. The 2016 Presidential election and alleged activity from foreign agents brought the conversation to the main stage, calling into question the responsibility big tech has in controlling the spread of false information. Inflammatory content naturally generates more attention, and algorithms working behind the scenes of services like Facebook’s Newsfeed favor such content over material with a lesser draw. In hindsight, it’s easy to see how things got out of hand. But before any issues arose, this setup appeared to be common sense. In fact, it was (and in many cases still is) favored by most users. Now we’re seeing what happens when people understand the rules behind distribution and manipulate them for personal gain. The argument for more regulation is hard to deny.


Propaganda has always existed. There are countless examples of mass communication being used by a small number of people to influence large groups, but older channels were much easier to recognize as promotion than the material circulating in our current environment. Whether it was a voice-over disclosure at the end of a commercial stating which campaign paid for the ad, or a handout featuring artwork glorifying a particular candidate, there was usually a distinct and/or mandated clue as to the intent behind the message. All that has changed. Support for various people and causes can be championed by anyone, and the tools provided by Facebook and Google make distribution fairly cheap. On top of all this, the content can be made to look virtually identical to a verified news source. And in intensely divided times, some content lends itself to sharing. People become less interested in finding the facts, so long as they have a headline that complements their beliefs, no matter the source. This is the dangerous teetering point at which we now find ourselves.


The solution many are calling for is for Facebook and Google to more aggressively moderate how their services are used and to intercept or remove deceptive content. While that may seem logical, it puts these tech companies in an understandably difficult position. They certainly have a role to play in the solution, but are they really the best ones to determine what we can or can’t say on their channels? They are historically experts in technology, and are now being thrust into territory better suited to a political scientist or expert interrogator. And even an interrogator would have the benefit of meeting their subject face to face. The internet is notoriously faceless. How can they look at a single piece of online content and understand its full context? Any decision they make also carries the weight of their own future policy shifts and of the policy of whichever country the content is served in. For some posts, this would be easier than for others. For instance, the 2016 U.S. Presidential election gave rise to an influx of hoax sites that rolled out bombastic headlines to drive mass sharing and subsequent ad revenue. A quick scan of those sites under a critical eye would lead many to recognize their lack of credibility. That is content Facebook or Google could more easily identify as false and remove. But Facebook is currently facing a somewhat similar issue overseas, in a form that is much more difficult to resolve and that sheds light on the dangerous effects of an incorrect intervention.


One of the key players in the Rohingya crisis in Myanmar is the Buddhist monk Ashin Wirathu. Wirathu has used all-too-familiar tactics to stoke fear and hatred toward the Muslim Rohingya ethnic group and has played a large role in the violence against them and their subsequent migration. Facebook is one of his primary tools for disseminating information. And in Myanmar, internet connectivity is much more limited: for many people there, Facebook is the only internet service they have access to, and it is often mistaken for “the internet” itself. For obvious reasons, the spread of false information on Facebook carries higher stakes in Myanmar than in the United States. A common post format from Wirathu is a disturbing image or video of carnage, along with a message attributing the damage to the Rohingya. The situation for Facebook is unprecedented. Without an office in Myanmar, it is nearly impossible for the company to gather context around these posts and determine their authenticity. If true, the posts present important information followers need for their own safety. If false, they spread poisonous misinformation. Either path risks setting a damaging precedent. Just because something is disturbing to look at doesn’t mean people should live in blissful ignorance of its existence, but they shouldn’t be exposed to it under manipulative or malicious pretenses either. And even if Wirathu’s posts are blocked or his page removed for good, he has been quoted saying he will simply create a new page. It is in these murky waters that we start to understand the depth of the issue.


Coming back to the United States, it is important to discuss who should control what is posted. It’s easy to say the responsibility lies with Facebook or Google, but any policy developed by a tech firm will undoubtedly have its shortcomings. No matter what solution comes from tech, there will always be misinformation that creeps into our feeds. The freedom of speech that we hold so sacred all but assures that. The true responsibility lies with us.


Step one is being able to identify whether a message has been promoted or shared organically. An important aspect of advertising in the U.S. is disclosure of the sponsor: across all formats, paid messages must be identified as such. Facebook and Twitter have also recently come forward with new policies specifically related to disclosure around political advertising. When it comes to advertisements on social media platforms, people are generally aware of their intent and should continue to take ads with a grain of salt. The real shift needs to happen in what YOU share. The dissemination of false information is most damaging when it comes organically from a trusted source. It’s all too easy to be swept up in an emotionally charged article, but before sharing it solely because it aligns with your personal sentiments, make sure the page or article passes a few tests. First, check the source. If you’ve never heard of it, snoop around the website. Look at other articles to see if you can discern a trend in viewpoints; a site may even disclose its slant in a mission statement on the homepage. If comments are enabled on article pages, look through the discussions to see if you can detect a bias among the people commenting and begin to put together an audience profile. And most importantly, check the facts. If the entire weight of the article rides on a few statistics, check their sources. If the core message is not statistical but built around a news story, research the topic to see if it has been picked up by a reputable outlet. If you can’t find the story or statistical backing anywhere else, it’s probably not authentic and is potentially damaging to share.


With big tech companies developing an ever-growing arsenal of communication tools, it is important for all of us to make sure our personal habits reflect the habits we want everyone to employ. Unfortunately, there is no “simple” solution. The end result will likely be a combination of tech intervention, legislation, and trial and error. But while there are many forces at play attempting to rectify the situation, we cannot discount our own personal responsibility. While our differences will always exist, they can be better used as a tool of education than of division. With a greater effort on all our parts, we can take this timely matter into our own hands and help shape an outcome that brings the truth back to light.
