Instagram launches “Data Download” tool to let you leave

Two weeks ago TechCrunch called on Instagram to build an equivalent to Facebook’s “Download Your Information” feature so that users who wanted to leave for another photo-sharing network could take their content with them. The next day Instagram announced the tool was coming, and now TechCrunch has spotted it rolling out to users. Instagram’s “Data Download” feature can be accessed here or through the app’s privacy settings. It lets users export their photos, videos, Stories, profile info, comments, and messages, though it can take from a few hours to a few days for a download to be ready.

An Instagram spokesperson now confirms to TechCrunch that “the Data Download tool is currently accessible to everyone on the web, but access via iOS and Android is still rolling out.” We’ll have more details on exactly what’s inside once our download is ready.

The tool’s launch is necessary for Instagram to comply with the data portability rule in the European Union’s GDPR privacy law, which goes into effect on May 25th. But it’s also a reasonable concession. Instagram has become the dominant image-sharing social network, with over 800 million users. It shouldn’t need to lock up users’ data in order to keep them around.

Instagram hasn’t been afraid to attack competitors and fight dirty. Most famously, it copied Snapchat’s Stories in August 2016; Instagram’s version now has over 300 million daily users, eclipsing the original. Instagram also cut off GIF-making app Phhhoto from its Find Friends feature, then swiftly cloned Phhhoto’s core feature to launch Instagram Boomerang. Within a few years, Phhhoto had shut down its app.

If Instagram is going to ruthlessly clone and box out its competitors, it should at least let users choose which app they want to use. That choice is tough to make if all your photos and videos are trapped inside another app. The tool could create a more level playing field for competition among photo apps.

It could also deter users from turning to sketchy third-party apps to scrape all their Instagram content. Since those apps typically require you to log in with your Instagram credentials, they put users at risk of being hacked or having their images used elsewhere without their consent. Considering Facebook launched its DYI tool in 2010, six years after the site launched, the fact that it took Instagram eight years from launch to build this means it’s long overdue.

But with such a strong network effect and Instagram’s willingness to clone any popular potential rival, it may still take a miracle, or a massive shift to a new computing platform, for any app to dethrone it.

from Social – TechCrunch https://techcrunch.com/2018/04/24/instagram-export/
via Superb Summers

Facebook reveals 25 pages of takedown rules for hate speech and more

Facebook had never before made public the guidelines its moderators use to decide whether to remove violence, spam, harassment, self-harm, terrorism, intellectual property theft, and hate speech from the social network. The company hoped to avoid making it easy to game these rules, but that worry has been overridden by the public’s constant calls for clarity and protests over its decisions. Today Facebook published 25 pages of detailed criteria and examples for what is and isn’t allowed.

Facebook is effectively shifting criticism to the underlying policy rather than to individual enforcement mistakes, like when it took down posts of the newsworthy “Napalm Girl” historical photo because it contains child nudity, before eventually restoring them. Some groups will surely find points to take issue with, but Facebook has made some significant improvements. Most notably, it no longer denies minorities protection from hate speech when an unprotected characteristic like “children” is appended to a protected characteristic like “black.”

Nothing is technically changing about Facebook’s policies. But previously, only leaks, like a copy of an internal rulebook obtained by the Guardian, had given the outside world a look at when Facebook actually enforces those policies. The rules will be translated into over 40 languages for the public. Facebook currently has 7,500 content reviewers, up 40 percent from a year ago.

Facebook also plans to expand its content removal appeals process. It already lets users request a review of a decision to remove their profile, Page, or Group. Now Facebook will notify users when their nudity, sexual activity, hate speech, or graphic violence content is removed and let them hit a button to “Request Review,” which will usually happen within 24 hours. Finally, Facebook will hold Facebook Forums: Community Standards events in Germany, France, the UK, India, Singapore, and the US to give its biggest communities a closer look at how the social network’s policy works.

Fixing the “white people are protected, black children aren’t” policy

Facebook’s VP of Global Product Management Monika Bickert, who has been coordinating the release of the guidelines since September, told reporters at Facebook’s Menlo Park HQ last week that “There’s been a lot of research about how when institutions put their policies out there, people change their behavior, and that’s a good thing.” She admits there’s still the concern that terrorists or hate groups will get better at developing “workarounds” to evade Facebook’s moderators, “but the benefits of being more open about what’s happening behind the scenes outweighs that.”

Content moderator jobs at social media companies, including Facebook, have been described as hellish in many exposés about what it’s like to fight the spread of child porn, beheading videos, and racism for hours a day. Bickert says Facebook’s moderators are trained to deal with this and have access to counseling and 24/7 resources, including some on-site. They can request not to look at certain kinds of content they’re sensitive to. But Bickert didn’t say whether Facebook imposes a limit on how much offensive content moderators see per day, the way YouTube recently implemented a four-hour limit.

A controversial slide depicting Facebook’s now-defunct policy that disqualified subsets of protected groups from hate speech shielding. Image via ProPublica

The most useful clarification in the newly revealed guidelines explains how Facebook has ditched its poorly received policy that deemed “white people” protected from hate speech, but not “black children.” That rule, which left subsets of protected groups exposed to hate speech, was blasted in a ProPublica piece in June 2017, though Facebook said at the time that it no longer applied the policy.

Now Bickert says “Black children — that would be protected. White men — that would also be protected. We consider it an attack if it’s against a person, but you can criticize an organization, a religion . . . If someone says ‘this country is evil’, that’s something that we allow. Saying ‘members of this religion are evil’ is not.” She explains that Facebook is becoming more aware of the context around who is being victimized. However, Bickert notes that if someone says “‘I’m going to kill you if you don’t come to my party’, if it’s not a credible threat we don’t want to be removing it.” 

Do community standards = editorial voice?

Being upfront about its policies might give Facebook more to point to when it’s criticized for failing to prevent abuse on its platform. Activist groups say Facebook has allowed fake news and hate speech to run rampant and lead to violence in many developing countries where it hasn’t had enough native-speaking moderators. The Sri Lankan government temporarily blocked Facebook in hopes of halting calls for violence, and those on the ground say Zuckerberg overstated Facebook’s improvements on the problem in Myanmar, which has led to hate crimes against the Rohingya people.

Revealing the guidelines could at least cut down on confusion about whether hateful content is allowed on Facebook. It isn’t. But the guidelines also raise the question of whether the value system they codify means the social network has an editorial voice that would define it as a media company. That could mean the loss of legal immunity for what its users post. Bickert stuck to a rehearsed line: “We are not creating content and we’re not curating content.” Still, some could certainly say all of Facebook’s content filters amount to a curatorial layer.

But whether Facebook is a media company or a tech company, it’s a highly profitable company. It needs to spend some more of the billions it earns each quarter applying the policies evenly and forcefully around the world.

from Social – TechCrunch https://techcrunch.com/2018/04/24/facebook-content-rules/
via Superb Summers

Facebook’s new authorization process for political ads goes live in the US

Earlier this month — and before Facebook CEO Mark Zuckerberg testified before Congress — the company announced a series of changes to how it would handle political advertisements running on its platform in the future. It had said that people who wanted to buy a political ad — including ads about political “issues” — would have to reveal their identities and location and be verified before the ads could run. Information about the advertiser would also display to Facebook users.

Today, Facebook is announcing the authorization process for U.S. political ads is live.

Facebook had first said in October that political advertisers would have to verify their identity and location for election-related ads. But in April, it expanded that requirement to include any “issue ads” — meaning those on political topics being debated across the country, not just those tied to an election.

Facebook said it would work with third parties to identify the issues. These ads would then be labeled as “Political Ads,” and display the “paid for by” information to end users.

According to today’s announcement, Facebook will now begin to verify the identity and the residential mailing address of advertisers who want to run political ads. Those advertisers will also have to disclose who’s paying for the ads as part of this authorization process.

This verification process is currently only open in the U.S. and will require Page admins and ad account admins to submit their government-issued ID to Facebook, along with their residential mailing address.

The government ID can be either a U.S. passport or a U.S. driver’s license, a FAQ explains. Facebook will also ask for the last four digits of admins’ Social Security Number. The photo ID will then be approved or denied in a matter of minutes, though anyone declined because of the quality of the uploaded images can try again.

The address, however, will be verified by mailing a letter with a unique access code that only the admin’s Facebook account can use. The letter may take up to 10 days to arrive, Facebook notes.

Along with the verification portion, Page admins will also have to fill in who paid for the ad in the “disclaimer” section. This has to include the name(s) of the organization(s) or person(s) who funded it.

This information will also be reviewed prior to approval, but it seems Facebook isn’t going to fact-check the field.

Instead, the company simply says: “We’ll review each disclaimer to make sure it adheres to our advertising policies. You can edit your disclaimers at any time, but after each edit, your disclaimer will need to be reviewed again, so it won’t be immediately available to use.”

The FAQ later states that disclaimers must comply with “any applicable law,” but again says that Facebook only reviews them against its ad policies.

“It’s your responsibility as the advertiser to independently assess and ensure that your ads are in compliance with all applicable election and advertising laws and regulations,” the documentation reads.

Along with the launch of the new authorization procedures, Facebook has released a Blueprint training course to guide advertisers through the steps required, and has published an FAQ to answer advertisers’ questions.

Of course, these procedures will only net the more scrupulous advertisers willing to play by the rules. That’s why Facebook has said it plans to use AI technology to help sniff out advertisers who should have submitted to verification but did not. The company is also asking people to report suspicious ads using the “Report Ad” button.

Facebook has been under heavy scrutiny because of how its platform was corrupted by Russian trolls on a mission to sway the 2016 election. The Justice Department charged 13 Russians and three companies with election interference earlier this year, and Facebook has removed hundreds of accounts associated with disinformation campaigns.

While tougher rules around ads may help, they alone won’t solve the problem.

It’s likely that those determined to skirt the rules will find their own workarounds. Plus, ads are only one of many avenues for those who want to use Facebook for propaganda and misinformation. On other fronts, Facebook is dealing with fake news, including everything from biased stories to outright lies intended to influence public opinion. And of course there’s the Cambridge Analytica scandal, which led to intense questioning of Facebook’s data privacy practices in the wake of revelations that millions of Facebook users had their information improperly accessed.

Facebook says the political ads authorization process is gradually rolling out, so it may not be available to all advertisers at this time. Currently, users can only set up and manage authorizations from a desktop computer from the Authorizations tab in a Facebook Page’s Settings.

from Social – TechCrunch https://techcrunch.com/2018/04/23/facebooks-new-authorization-process-for-political-ads-goes-live-in-the-u-s/
via Superb Summers