Twitter flags Republican leader’s video as ‘manipulated’ for altering disabled activist’s words

Twitter flagged an inflammatory video by House Republican Whip Steve Scalise on Sunday for altering footage of a conversation between progressive activist Ady Barkan and Joe Biden. The video is now labeled as “manipulated media” in Scalise’s tweet, though it remains online.

The inflammatory video pulls in out-of-context quotes from a number of Democrats and activists, but appears to have crossed a line by altering Barkan’s words from a portion of the conversation about policing reform. Barkan, who has ALS, speaks with an assistive eye-tracking device.

“These are not my words. I have lost my ability to speak, but not my agency or my thoughts,” Barkan tweeted in response, adding “…You owe the entire disability community an apology.”

In the video excerpt, taken from a longer conversation about policing and social services, Barkan appears to say “Do we agree that we can redirect some of the funding for police?” In reality, Barkan interrupted Biden during the conversation to ask “Do we agree that we can redirect some of the funding?”

In the video, Barkan’s altered sentence is followed by a dramatic black background stamped with the words “No police. Mob rule. Total chaos. Coming to a town near you?” Those ominous warnings are followed by a logo for Scalise’s reelection campaign.

The addition of the two words, falsely rendered in Barkan’s voice, doesn’t significantly change the meaning of his question, but the edit still crossed a line. A Twitter spokesperson confirmed that the tweet violated the company’s policy on “synthetic and manipulated media,” though they did not specify which part of the video broke the rules.

The synthetic and manipulated media policy states that Twitter “may label Tweets containing synthetic and manipulated media to help people understand their authenticity and to provide additional context.” In the policy, Twitter explains specifically that “new video frames, overdubbed audio” and other edits count as deceptive and significant manipulation.

from Social – TechCrunch https://techcrunch.com/2020/08/30/steve-scalise-twitter-video-ady-barkan/

Facebook partially documents its content recommendation system

Algorithmic recommendation systems on social media sites like YouTube, Facebook and Twitter have shouldered much of the blame for the spread of misinformation, propaganda, hate speech, conspiracy theories and other harmful content. Facebook, in particular, has come under fire in recent days for allowing QAnon conspiracy groups to thrive on its platform and for helping militia groups scale membership. Today, Facebook is attempting to combat claims that its recommendation systems are in any way at fault for how people are exposed to troubling, objectionable, dangerous, misleading and untruthful content.

The company has, for the first time, made public how its content recommendation guidelines work.

In new documentation available in Facebook’s Help Center and Instagram’s Help Center, the company details how Facebook and Instagram’s algorithms work to filter out content, accounts, Pages, Groups and Events from its recommendations.

Currently, Facebook’s Suggestions may appear as Pages You May Like, “Suggested For You” posts in News Feed, People You May Know, or Groups You Should Join. Instagram’s suggestions are found within Instagram Explore, Accounts You May Like, and IGTV Discover.

The company says Facebook’s existing guidelines have been in place since 2016 under a strategy it refers to as “remove, reduce, and inform.” This strategy focuses on removing content that violates Facebook’s Community Standards, reducing the spread of problematic content that does not violate its standards, and informing people with additional information so they can choose what to click, read or share, Facebook explains.

The Recommendation Guidelines typically fall under Facebook’s efforts in the “reduce” area, and are designed to maintain a higher standard than Facebook’s Community Standards, because they push users to follow new accounts, groups, Pages and the like.
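
To make that tiering concrete, here is a minimal sketch of the three-way decision in Python. The tier names come from Facebook’s stated strategy; everything else (the Post type, the boolean flags, the ordering of checks) is an assumption for illustration, not Facebook’s actual implementation.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    REMOVE = auto()  # violates Community Standards: taken down entirely
    REDUCE = auto()  # allowed, but suppressed in recommendations and ranking
    INFORM = auto()  # allowed, surfaced with added context (e.g., labels)
    ALLOW = auto()

@dataclass
class Post:
    # Boolean stand-ins for the real classifiers, which Facebook doesn't describe.
    violates_standards: bool = False
    is_borderline: bool = False
    needs_context: bool = False

def enforcement_tier(post: Post) -> Action:
    # Checks run strictest-first: removal trumps reduction, which trumps labeling.
    if post.violates_standards:
        return Action.REMOVE
    if post.is_borderline:
        return Action.REDUCE
    if post.needs_context:
        return Action.INFORM
    return Action.ALLOW
```

The Recommendation Guidelines sit in that middle tier: content can remain on the platform while being held out of the surfaces that would actively grow its audience.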

Facebook, in the new documentation, details five key categories that are not eligible for recommendations. Instagram’s guidelines are similar. However, the documentation offers no deep insight into how Facebook actually chooses what to recommend to a given user. That’s a key piece to understanding recommendation technology, and one Facebook intentionally left out.

One obvious category of content that may not be eligible for recommendation includes anything that would impede Facebook’s “ability to foster a safe community,” such as content focused on self-harm, suicide or eating disorders, depictions of violence, sexually explicit material, regulated goods like tobacco or drugs, and content shared by non-recommendable accounts or entities.

Facebook also claims not to recommend sensitive or low-quality content, content users frequently say they dislike, and content associated with low-quality publishing. These categories include clickbait, deceptive business models, payday loans, products making exaggerated health claims or offering “miracle cures,” content promoting cosmetic procedures, contests and giveaways, engagement bait, unoriginal content stolen from another source, content from websites that get a disproportionate number of clicks from Facebook versus the rest of the web, and news that doesn’t include transparent information about its authorship or staff.

In addition, Facebook claims it won’t recommend fake or misleading content, such as posts making claims found false by independent fact-checkers, vaccine-related misinformation, and content promoting the use of fraudulent documents.

It says it will also “try” not to recommend accounts or entities that have recently violated Community Standards, shared content Facebook tries not to recommend, posted vaccine-related misinformation, purchased “Likes,” been banned from running ads, posted false information, or are associated with movements tied to violence.
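
As a rough illustration of how a “reduce”-style eligibility filter over those categories could look, consider the sketch below. The category labels paraphrase the documented guidelines, but the tagging scheme and matching logic are hypothetical; notably, the upstream classifiers that would assign those tags are exactly the part Facebook’s documentation leaves out.

```python
# Hypothetical: category labels paraphrase the documented guidelines; the
# tagging scheme and matching logic are invented for illustration.
INELIGIBLE_TAGS = {
    "self_harm", "regulated_goods", "sexually_explicit",    # safety
    "clickbait", "engagement_bait", "miracle_cures",        # low quality
    "fact_checked_false", "vaccine_misinformation",         # misleading
    "violence_linked_movement",                             # risky entities
}

def eligible_for_recommendation(candidate: dict) -> bool:
    # A candidate survives only if none of its tags fall in an ineligible
    # category. How tags get assigned upstream is the undocumented hard part.
    return not (set(candidate.get("tags", ())) & INELIGIBLE_TAGS)

candidates = [
    {"id": 1, "tags": {"clickbait"}},
    {"id": 2, "tags": set()},
]
recommendable = [c for c in candidates if eligible_for_recommendation(c)]
# Only candidate 2 remains eligible for suggestion surfaces.
```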

That last claim, about movements tied to violence, of course follows recent news that a Kenosha militia Facebook Event remained on the platform after being flagged 455 times, and had been cleared by four moderators as non-violating content. The associated Page had issued a “call to arms” and hosted comments from people asking what types of weapons to bring. Ultimately, two people were killed and a third was injured at protests in Kenosha, Wisconsin, when a 17-year-old armed with an AR-15-style rifle broke curfew, crossed state lines, and shot at protesters.

Given Facebook’s track record, it’s worth asking how well the company can abide by its own stated guidelines. Plenty of people have found their way to what should be ineligible content, like conspiracy theories, dangerous health advice, COVID-19 misinformation and more, by clicking through on suggestions when the guidelines failed. QAnon, it’s been reported, grew through Facebook recommendations.

It’s also worth noting that there are many gray areas guidelines like these fail to cover.

Militia groups and conspiracy theories are only a couple of examples. Amid the pandemic, U.S. users who disagree with government guidelines on business closures can easily find themselves pointed toward various “reopen” groups, where members don’t just discuss politics but openly brag about not wearing masks in public, even when required to do so at their workplace. They offer tips on how to get away with not wearing masks and celebrate their successes with selfies. These groups may not technically break rules by their description alone, but they encourage behavior that constitutes a threat to public health.

Meanwhile, even if Facebook doesn’t directly recommend a group, a quick search for a topic will direct you to what would otherwise be ineligible content within Facebook’s recommendation system.

For instance, a quick search for the word “vaccines” currently suggests a number of groups focused on vaccine injuries, alternative cures and general anti-vax content; these even outnumber pro-vaccine groups. At a time when the world’s scientists are trying to develop protection against the novel coronavirus in the form of a vaccine, handing anti-vaxxers a massive public forum to spread their ideas is just one example of how Facebook is enabling the spread of ideas that may ultimately become a global public health threat.

The more complicated question, however, is where Facebook draws the line between policing users who have these discussions and fostering an environment that supports free speech. With few government regulations in place, Facebook ultimately gets to make this decision for itself.

Recommendations are only one part of Facebook’s overall engagement system, and one that’s often blamed for directing users to harmful content. But much of the harmful content users find comes from the groups and Pages that show up at the top of Facebook search results when users turn to Facebook for general information on a topic. Facebook’s search engine favors engagement and activity, such as how many members a group has or how often users post, rather than how closely its content aligns with accepted truths or medical guidelines.
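
To see why that matters, here is a deliberately simplified, hypothetical ranking function that scores groups purely on engagement signals, with no term for accuracy. The weights and fields are invented, and nothing here is Facebook’s actual algorithm; the point is only that any ranker built this way will surface the largest, most active group regardless of content quality.

```python
def engagement_score(group: dict) -> float:
    # Invented weights; note there is no signal for accuracy or authority.
    return 0.7 * group["member_count"] + 0.3 * group["posts_per_day"]

groups = [
    {"name": "large, very active group",  "member_count": 50_000, "posts_per_day": 120},
    {"name": "small, authoritative page", "member_count": 8_000,  "posts_per_day": 5},
]

# Engagement-only ranking puts the big, busy group first, no matter what it posts.
results = sorted(groups, key=engagement_score, reverse=True)
```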

Facebook’s search algorithms, meanwhile, aren’t documented in nearly as much detail.


from Social – TechCrunch https://techcrunch.com/2020/08/31/facebook-partially-documents-its-content-recommendation-system/

TikTok’s rivals in India struggle to cash in on its ban

For years, India has served as the largest open battleground for Silicon Valley and Chinese firms searching for their next billion users.

With more than 400 million WhatsApp users, India is already the largest market for the Facebook-owned service. The social juggernaut’s big blue app also reaches more than 300 million users in the country.

Google is estimated to reach just as many users in India, with YouTube closely rivaling WhatsApp for the most popular smartphone app in the country.

Several major giants from China (which shut its doors to most foreign firms a decade ago), like Alibaba and Tencent, also count India as their largest overseas market. At its peak, Alibaba’s UC Web gave Google’s Chrome a run for its money. And then there is TikTok, which also identified India as its biggest market outside of China.

Though the aggressive arrival of foreign firms in India helped accelerate the growth of the local ecosystem, their capital and expertise also created a level of competition that made it too challenging for most Indian firms to claim a slice of their home market.

New Delhi’s June 30 ban on 59 Chinese apps, imposed on the basis of cybersecurity concerns, has changed much of this.

Indian apps that rarely made an appearance in the top 20 have now flooded the charts. But are these skyrocketing download figures translating into sustained users?

An industry executive leaked figures for downloads, monthly active users, weekly active users and daily active users from one of the top mobile insight firms. In this Extra Crunch report, we take a look at the changes New Delhi’s ban has brought to the world’s second-largest smartphone market.

TikTok copycats

Scores of startups in India, including news aggregator DailyHunt, on-demand video streamer MX Player and advertising giant InMobi Group, have launched their short-video format apps in recent months.

from Social – TechCrunch https://techcrunch.com/2020/08/28/tiktoks-rivals-in-india-struggle-to-cash-in-on-its-ban/