Twitter cuts off API access to follow/unfollow spam dealers

Notification spam ruins social networks, diluting the real human interaction. Desperate to gain an audience, users pay services to rapidly follow and unfollow tons of people in hopes that some will follow them back. The services can either automate this process or provide tools for users to generate this spam themselves. Earlier this month, a TechCrunch investigation found over two dozen follow-spam companies were paying Instagram to run ads for them. Instagram banned all the services in response and vowed to hunt down similar ones more aggressively.

ManageFlitter’s spammy follow/unfollow tools

Today, Twitter is stepping up its fight against notification spammers. Three of these services — ManageFlitter, Statusbrew and Crowdfire — stopped working earlier today, as spotted by social media consultant Matt Navarra.

TechCrunch inquired with Twitter about whether it had enforced its policy against those companies. A spokesperson provided this comment: “We have suspended these three apps for having repeatedly violated our API rules related to aggressive following & follow churn. As a part of our commitment to building a healthy service, we remain focused on rapidly curbing spam and abuse originating from use of Twitter’s APIs.” These apps will cease to function since they’ll no longer be able to programmatically interact with Twitter to follow or unfollow people or take other actions.

Twitter’s policies specify that “Aggressive following (Accounts who follow or unfollow Twitter accounts in a bulk, aggressive, or indiscriminate manner) is a violation of the Twitter Rules.” This is to prevent a ‘tragedy of the commons’ situation. These services and their customers exploit Twitter’s platform, worsening the experience of everyone else to grow these customers’ follower counts. We dug into these three apps and found they each promoted features designed to help their customers spam Twitter users.

ManageFlitter‘s site promotes how “Following relevant people on Twitter is a great way to gain new followers. Find people who are interested in similar topics, follow them and often they will follow you back.” For $12 to $49 per month, customers can use this feature shown in the GIF above to rapidly follow others, while another feature lets them check back a few days later and rapidly unfollow everyone who didn’t follow them back. 

Crowdfire had already gotten in trouble with Twitter for offering a prohibited auto-DM feature and tools specifically for generating follow notifications. Yet it only changed its functionality to dip just beneath the rate limits Twitter imposes. It seems it preferred charging users up to $75 per month to abuse the Twitter ecosystem rather than accept that what it was doing was wrong.

StatusBrew details how “Many a time when you follow users, they do not follow back . . . thereby, you might want to disconnect with such users after let’s say 7 days. Under ‘Cleanup Suggestion’ we give you a reverse sorted list of the people who’re Not Following Back”. It charges $25 to $416 per month for these spam tools. After losing its API access today, StatusBrew posted a confusing half-mea culpa, half-“it was our customers’ fault” blog post announcing it will shut down its follow/unfollow features.

Twitter tells TechCrunch it will allow these companies to “apply for a new developer account and register a new, compliant app,” but the existing apps will remain suspended. I think they deserve an additional time-out period. But still, this is a good step toward Twitter protecting the health of conversation on its platform from greedy spam services. I’d urge the company to also work to prevent companies and sketchy individuals from selling fake followers or follow/unfollow spam via Twitter ads or tweets.

When you can’t trust that someone who follows you is real, the notifications become meaningless distractions, faith in finding real connection sinks, and we become skeptical of the whole app. It’s the users that lose, so it’s the platforms’ responsibility to play referee.

from Social – TechCrunch
via Superb Summers

Facebook just removed a new wave of suspicious activity linked to Iran

Facebook just announced its latest round of “coordinated inauthentic behavior,” this time out of Iran. The company took down 262 Pages, 356 accounts, three Facebook groups and 162 Instagram accounts that exhibited “malicious-looking indicators” and patterns that identified the activity as potentially state-sponsored or otherwise deceptive and coordinated.

As Facebook Head of Cybersecurity Policy Nathaniel Gleicher noted in a press call, Facebook coordinated closely with Twitter to discover these accounts, and by collaborating early and often the company “[was] able to use that to build up our own investigation.” Today, Twitter published a postmortem on its efforts to combat misinformation during the US midterm election last year.

Example of the content removed

As the Newsroom post details, the activity affected a broad swath of areas around the globe:

“There were multiple sets of activity, each localized for a specific country or region, including Afghanistan, Albania, Algeria, Bahrain, Egypt, France, Germany, India, Indonesia, Iran, Iraq, Israel, Libya, Mexico, Morocco, Pakistan, Qatar, Saudi Arabia, Serbia, South Africa, Spain, Sudan, Syria, Tunisia, US, and Yemen. The Page administrators and account owners typically represented themselves as locals, often using fake accounts, and posted news stories on current events… on topics like Israel-Palestine relations and the conflicts in Syria and Yemen, including the role of the US, Saudi Arabia, and Russia.”

Today’s takedown is the result of an internal investigation linking the newly discovered activity to other content out of Iran late last year. Remarkably, the activity Facebook flagged today dates back to 2010.

The Iranian activity was not focused on creating real world events, as we’ve seen in other cases. In many cases, the content “repurposed” reporting from Iranian state media and spread ideas that could benefit Iran’s positions on various geopolitical issues. Still, Facebook declined to link the newly identified activity to Iran’s government directly.

“Whenever we make an announcement like this we’re really careful,” Gleicher said. “We’re not in a position to directly assert who the actor is in this case, we’re asserting what we can prove.”


Facebook users who quit the social network for a month feel happier

New research out of Stanford and New York University took a look at what happens when people step back from Facebook for a month.

Through Facebook, the research team recruited 2,488 people who averaged an hour of Facebook use each day. After assessing their “willingness to accept” the idea of deactivating their account for a month, the study assigned eligible participants to an experimental category that would deactivate their accounts or a control group that would not.

Over the course of the month-long experiment, researchers monitored compliance by checking participants’ profiles. The participants self-reported a rotating set of well-being measures in real time, including happiness, what emotion a participant felt over the last 10 minutes and a measure of loneliness.

As the researchers report, leaving Facebook correlated with improvements on well-being measures. They found that the group tasked with quitting Facebook ended up spending less time on other social networks too, instead devoting more time to offline activities like spending time with friends and family (good) and watching television (maybe not so good). Overall the group reported that it spent less time consuming news in general.

The group that quit Facebook also reported less time spent on the social network after the study-imposed hiatus was up, suggesting that the break might have given them new insight into their own habits.

“Reduced post-experiment use aligns with our finding that deactivation improved subjective well-being, and it is also consistent with the hypotheses that Facebook is habit forming… or that people learned that they enjoy life without Facebook more than they had anticipated,” the paper’s authors wrote.

There are a few things to be aware of with the research. The paper notes that subjects were told they would “keep [their] access to Facebook Messenger.” Though the potential impact of letting participants remain on Messenger isn’t mentioned again, it sounds like they were still freely using one of the platform’s main functions, though perhaps one with fewer potential negative effects on mood and behavior.

Unlike some recent research, this study was conducted by economics researchers. That’s not unusual for social psych-esque stuff like this, but it does inform aspects of the method, the measures used and the perspective.

Most important for a bit more context, the research was conducted in the run-up to the 2016 U.S. presidential election. That fact is likely to have informed participants’ attitudes around social media, both before and after the election.

While the participants reported that they were less informed about current events, they also showed evidence of being less politically polarized, “consistent with the concern that social media have played some role in the recent rise of polarization in the US.”

In an era of ubiquitous threats to quit the world’s biggest social network, the fact remains that we mostly have no idea what our online habits are doing to our brains and behavior. Given that, we also don’t know what happens when we step back from social media environments like Facebook and give our brains a reprieve. With its robust sample size and fairly thorough methodology, this study provides us a useful glimpse into those effects. For more insight into the research, you can read the full paper here.
