Social


16.12.18 | Twitter Fixes Bug That Gives Unauthorized Access to Direct Messages | Social | Bleepingcomputer
15.12.18 | Facebook Photo API Bug Exposed Pics of Up to 6.8 Million Users | Social | Bleepingcomputer
14.12.18 | Facebook Paid Out $1.1 Million in Bug Bounties in 2018 | Social | Securityweek
14.12.18 | Facebook Photo API Bug Exposed Pics of Up to 6.8 Million Users | Social | Bleepingcomputer
13.12.18 | Rhode Island Sues Alphabet Over Google+ Security Incidents | Social | Securityweek
12.12.18 | Facebook Fined $11.3M for Privacy Violations | Social | Threatpost
11.12.18 | New Bug Prompts Earlier End to Google+ Social Network | Social | Securityweek
11.12.18 | Bug in Google+ API Puts at Risk Privacy of over 52 Million Users | Social | Bleepingcomputer
10.12.18 | Google Accelerates Google+ Shutdown After New Bug Discovered | Social | Threatpost
7.12.18 | Facebook Defends Data Policies On Heels of Incriminating Internal Docs | Social | Threatpost
6.12.18 | Zuckerberg Defends Facebook in New Data Breach Controversy | Social | Securityweek
6.12.18 | Facebook Emails Show How it Sought to Leverage User Data | Social | Securityweek
30.11.18 | Facebook Mulled Charging for Access to User Data | Social | Securityweek
28.11.18 | British MP: Facebook was aware about Russian activity at least since 2014 | Social | Securityaffairs
27.11.18 | Facebook Knew About Russian Activity in 2014: British MP | Social | Securityweek
26.11.18 | Spotify Phishers Hijack Music Fans’ Accounts | Social | Threatpost
26.11.18 | Very trivial Spotify phishing campaign uncovered by experts | Social | Securityaffairs
25.11.18 | Facebook appeals UK fine in Cambridge Analytica privacy Scandal | Social | Securityaffairs
24.11.18 | Spotify Phishers Hijack Music Fans’ Accounts | Social | Threatpost
22.11.18 | Facebook increases rewards for its bug bounty program and facilitate bug submission | Social | PBWCZ.CZ
22.11.18 | Get paid up to $40,000 for finding ways to hack Facebook or Instagram accounts | Social | Thehackernews
21.11.18 | Real Identity of Hacker Who Sold LinkedIn, Dropbox Databases Revealed | Social | Thehackernews
20.11.18 | Instagram Accidentally Exposed Some Users' Passwords In Plaintext | Social | Threatpost
19.11.18 | Instagram glitch exposed some user passwords | Social | PBWCZ.CZ
17.11.18 | Scammers Use Facebook Sharer Page to Push Tech Support Scams | Social | Bleepingcomputer
14.11.18 | Facebook flaw could have exposed private info of users and their friends | Social | PBWCZ.CZ
7.11.18 | Facebook Blocks 115 Accounts on Eve of US Election | Social | PBWCZ.CZ
5.11.18 | Crooks offered for sale private messages for 81k Facebook accounts | Social | PBWCZ.CZ
5.11.18 | Twitter deletes over 10,000 accounts that aim to influence U.S. voting | Social | PBWCZ.CZ
27.10.18 | UK Regulator Hits Facebook With Maximum Fine | Social | PBWCZ.CZ
26.10.18 | UK ICO fines Facebook with maximum for Cambridge Analytica scandal | Social | PBWCZ.CZ
24.10.18 | For the first time Japanese commission ordered Facebook to improve security | Social | PBWCZ.CZ
23.10.18 | Japan Orders Facebook to Improve Data Protection | Social | PBWCZ.CZ
19.10.18 | Facebook Launches 'War Room' to Combat Manipulation | Social | PBWCZ.CZ
14.10.18 | Facebook Says Hackers Accessed Data of 29 Million Users | Social | PBWCZ.CZ
14.10.18 | Industry Reactions to Google+ Security Incident: Feedback Friday | Social | PBWCZ.CZ
13.10.18 | Facebook Data Breach Update: attackers accessed data of 29 Million users | Social | PBWCZ.CZ
12.10.18 | Facebook Purges 251 Accounts to Thwart Deception | Social | PBWCZ.CZ
9.10.18 | Google was aware of a flaw that exposed over 500,000 of Google Plus users, but did not disclose it | Social | PBWCZ.CZ
9.10.18 | Google Says Social Network Bug Exposed Private Data | Social | PBWCZ.CZ
4.10.18 | Facebook Says No Apps Were Accessed in Recent Hack | Social | PBWCZ.CZ
2.10.18 | The Scandals Bedevilling Facebook | Social | PBWCZ.CZ
2.10.18 | New Twitter Rules Target Fake Accounts, Hackers | Social | PBWCZ.CZ
2.10.18 | Industry Reactions to Facebook Hack | Social | PBWCZ.CZ
1.10.18 | Several Bugs Exploited in Massive Facebook Hack | Social | PBWCZ.CZ
29.9.18 | Facebook hacked – 50 Million Users’ Data exposed in the security breach | Social | PBWCZ.CZ
29.9.18 | Facebook: User shadow data, including phone numbers may be used by advertisers | Social | PBWCZ.CZ
28.9.18 | Facebook Admits Phone Numbers May be Used to Target Ads | Social | PBWCZ.CZ
24.9.18 | Bug Exposed Direct Messages of Millions of Twitter Users | Social | PBWCZ.CZ
22.9.18 | Facebook Building a 'War Room' to Battle Election Meddling | Social | PBWCZ.CZ
22.9.18 | Facebook Boosts Protections for Political Candidates | Social | PBWCZ.CZ
18.9.18 | Facebook Offers Rewards for Access Token Exposure Flaws | Social | PBWCZ.CZ

Elon Musk BITCOIN Twitter scam, a simple and profitable fraud for crooks
12.11.18 securityaffairs
Cryptocurrency  Social

Crooks are exploiting the popularity of Elon Musk and a series of hacked verified Twitter accounts to implement a new fraud scheme.
Crooks are exploiting the popularity of Elon Musk and a series of hacked verified Twitter accounts (e.g. UK retailer Matalan, US publisher Pantheon Books, and official government accounts such as the Ministry of Transportation of Colombia and the National Disaster Management Authority of India) in a scam scheme that is as simple as it is effective.

₿iht Coign BSc (Hons) (@abztrdr) tweeted on November 5, 2018:

"Come on @twitter @TwitterSupport ?? This is a blatant scam which is being promoted by Twitter and by other potentially hacked or impersonating VERIFIED accounts.

tweet: https://web.archive.org/save/https://twitter.com/PantheonBooks/status/1059433629795926024

cc: @elonmusk @Cointelegraph @coindesk @ADCuthbertson @verified @BillyBambrough"
The accounts were hijacked to impersonate Elon Musk: once compromised, scammers changed the accounts’ names and profile pictures to those of the popular entrepreneur and started using them to share tweets calling on people to send him cryptocurrency.

The hijacked accounts informed Twitter users of an alleged new initiative by Musk: the biggest crypto giveaway ever, of 10,000 bitcoins.

“I’m giving 10 000 Bitcoin (BTC) to all community! I left the post of director of Tesla, thank you all for your support,” states the hacked account of Pantheon Books.


With this scheme, crooks have already earned over 28 bitcoins, or approximately $180,000; in just a single day, the scammers received 392 transactions to the bitcoin address 1KAGE12gtYVfizicQSDQmnPHYfA29bu8Da.
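Figures like these can be sanity-checked against a public block explorer, since the scam address is known. Below is a minimal sketch in Python under the assumption that the Blockchain.com "rawaddr" endpoint is used (any explorer API would do; this query is illustrative, not part of the original report):

```python
# Sanity-check the reported totals for the scam address against a public
# block explorer. Assumes the Blockchain.com "rawaddr" endpoint; amounts
# are returned in satoshis (1 BTC = 100,000,000 satoshis).
import requests

SCAM_ADDRESS = "1KAGE12gtYVfizicQSDQmnPHYfA29bu8Da"

resp = requests.get(f"https://blockchain.info/rawaddr/{SCAM_ADDRESS}", timeout=30)
resp.raise_for_status()
data = resp.json()

print("Transactions received:", data["n_tx"])
print("Total received (BTC): ", data["total_received"] / 1e8)
```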

In order to improve the visibility of the tweets, scammers promoted a series of giveaway sites through Twitter advertising (i.e. musk[.]plus, musk[.]fund, and spacex[.]plus), which instruct visitors to send between 0.1 and 3 BTC to a specific address in order to get back 1 to 30 times as much in bitcoins.

[Screenshot of one of the scam sites:]

“To verify your address, send from 0.1 to 3 BTC to the address below and get from 1 to 30 BTC back!

BONUS: Addresses with 0.30 BTC or more sent, gets additional +200% back!

Payment Address
You can send BTC to the following address.

1KAGE12gtYVfizicQSDQmnPHYfA29bu8Da

Waiting for your payment…

As soon as we receive your transaction, the outgoing transaction will be processed to your address.”

Dozens of people sent the minimum 0.1 bitcoins, but some naive users sent anywhere from 0.5 up to 0.9995 bitcoins (roughly $6,000).

Twitter does not comment on individual accounts, but shared the following statement:

“Impersonating another individual to deceive users is a clear violation of the Twitter Rules. Twitter has also substantially improved how we tackle cryptocurrency scams on the platform. In recent weeks, user impressions have fallen by a multiple of 10 as we continue to invest in more proactive tools to detect spammy and malicious activity. This is a significant improvement on previous action rates.”


How to Check What Facebook Hackers Accessed in Your Account
18.10.18 securityweek
Social

Could hackers have been able to see the last person you cyberstalked, or that party photo you were tagged in? According to Facebook, the unfortunate answer is "yes."

On Friday, the social network said fewer users were affected in a security breach it disclosed two weeks ago than originally estimated — nearly 30 million, down from 50 million. In additional good news, the company said hackers weren't able to access more sensitive information like your password or financial information. And third-party apps weren't affected.

Still, for users already uneasy about the privacy and security of their Facebook accounts after a year of tumult, the details that hackers did gain access to — gender, relationship status, hometown and other info — might be even more unsettling.

Facebook has been quick to let users check exactly what was accessed. But beyond learning what information the attackers accessed, there's relatively little that users can do — beyond, that is, watching out for suspicious emails or texts. Facebook says the problem has been fixed.

The company set up a website that its 2 billion global users can use to check if their accounts have been accessed, and if so, exactly what information was stolen. It will also provide guidance on how to spot and deal with suspicious emails or texts. Facebook will also send messages directly to those people affected by the hack.

On that page, following some preliminary information about the investigation, the question "Is my Facebook account impacted by this security issue?" appears midway down. It will also provide information specific to your account if you're logged into Facebook.

Facebook said the hackers accessed names, email addresses or phone numbers from these accounts. For 14 million of them, hackers got even more data — basically anything viewable on your account that any of your friends could see, and more. It's a pretty extensive list: user name, gender, locale or language, relationship status, religion, hometown, self-reported current city, birthdate, device types used to access Facebook, education, work, the last 10 places you checked into or were tagged in, your website, people or Pages you follow and your 15 most recent searches.

An additional 1 million accounts were affected, but hackers didn't get any information from them.

The company isn't giving a breakdown of where these users are, but says the breach was "fairly broad." It plans to send messages to people whose accounts were hacked.

Facebook said the FBI is investigating, but that the agency asked the company not to discuss who may be behind the attack. The company said it hasn't ruled out the possibility of smaller-scale attacks that used the same vulnerability.

The company said it has fixed the bugs and logged out affected users to reset those digital keys.

Facebook Vice President Guy Rosen said in a Friday call with reporters that the company hasn't ruled out the possibility that other parties might have launched other, smaller scale efforts to exploit the same vulnerability before it was disabled.

Patrick Moorhead, founder of Moor Insights & Strategy, said the breach appeared similar to identity theft breaches that have occurred at companies including Yahoo and Target in 2013.

"Those personal details could be very easily be used for identity theft to sign up for credit cards, get a loan, get your banking password, etc.," he said. "Facebook should provide all those customers free credit monitoring to make sure the damage is minimized."

Thomas Rid, a professor at the Johns Hopkins University, also said the evidence, particularly the size of the breach, seems to point to a criminal motive rather than a sophisticated state operation, which usually targets fewer people.

"This doesn't sound very targeted at all," he said. "Usually when you're looking at a sophisticated government operation, then a couple of thousand people hacked is a lot, but they usually know who they're going after."


Facebook Chief Says Internet Firms in 'Arms Race' for Democracy
5.9.18 securityweek 
Social

Facebook chief Mark Zuckerberg said late Tuesday that the leading social network and other internet firms are in an arms race to defend democracy.

Zuckerberg's Washington Post op-ed came on the eve of hearings during which lawmakers are expected to grill top executives from Facebook and Twitter.

Google's potential participation is unclear.

The hearings come with online firms facing intense scrutiny for allowing the propagation of misinformation and hate speech, and amid allegations of political bias from the president and his allies.

"Companies such as Facebook face sophisticated, well-funded adversaries who are getting smarter over time, too," Zuckerberg said in an op-ed piece outlining progress being made on the front by the leading social network.

"It's an arms race, and it will take the combined forces of the US private and public sectors to protect America's democracy from outside interference."

After days of vitriol from President Donald Trump, big Silicon Valley firms face lawmakers with a chance to burnish their image -- or face a fresh bashing.

Twitter chief executive Jack Dorsey and Facebook chief operating officer Sheryl Sandberg were set to appear at a Senate Intelligence Committee hearing on Wednesday.

Lawmakers were seeking a top executive from Google or its parent Alphabet, but it remained unclear if the search giant would be represented.

Sources familiar with the matter said Google offered chief legal officer Kent Walker, who the company said is most knowledgeable on foreign interference, but that senators had asked for the participation of CEO Sundar Pichai or Alphabet CEO Larry Page.

Dorsey testifies later in the day at a hearing of the House Energy and Commerce Committee on online "transparency and accountability."

The tech giants are likely to face a cool reception at best from members of Congress, said Roslyn Layton, an American Enterprise Institute visiting scholar specializing in telecom and internet issues.

"The Democrats are upset about the spread of misinformation in the 2016 election, and the Republicans over the perception of bias," Layton said.

"They are equally angry, but for different reasons."

Kathleen Hall Jamieson, a University of Pennsylvania professor and author of an upcoming book on Russia's role in election hacking, said the hearings could give the companies a platform to explain how they operate.

"Hearings are an opportunity as well as a liability," she said.

"These companies have put in place fixes (on foreign manipulation) but they have done it incrementally, and they have not communicated that to a national audience."


Twitter to Verify Those Behind Hot-button US Issue Ads
4.9.18 securityweek 
Social

Twitter on Thursday started requiring those behind hot-button issue ads in the US to be vetted as part of the effort by the social network to thwart stealth campaigns aimed at influencing politics.

The tightened ad policy included requiring photos and valid contact information, and prohibited state-owned media or national authorities from buying political ads to be shown on Twitter outside their home countries.

Those placing these Twitter ads will need to be "certified" by the company and meet certain guidelines, and the ads will be labeled as political "issue" messages.

"The intention of this policy is to provide the public with greater transparency into ads that seek to influence people's stance on issues that may influence election outcomes," Twitter executives Del Harvey and Bruce Falck said in a blog post.

The new ad policy came as major technology firms including Facebook, Google and Twitter battle against misinformation campaigns by foreign agents.

Facebook, Twitter, Google and Microsoft recently blocked accounts from Russian and Iranian entities which the companies said were propagating misinformation aimed at disrupting the November US elections.

The new ad policy at Twitter applies to paid messages that identify political candidates or advocate regarding legislative issues of national importance.

Examples of issue topics provided by Twitter included abortion, civil rights, climate change, guns, healthcare, immigration, national security, social security, taxes and trade.

The policy did not apply to news agencies reporting on candidates or issues, rather than advocating outcomes, according to Harvey and Falck.

Silicon Valley executives are set to take part in a September 5 Senate hearing about foreign efforts to use social media platforms to influence elections.


Instagram Introduces New Account Safety Features
30.8.18 securityweek
Social

Instagram this week announced new features to boost account security and provide users with increased visibility into accounts with a large number of followers.

Instagram will soon provide users with the ability to evaluate the authenticity of an account that reaches large audiences. The information will be accessible through an “About This Account” option in the Profile menu, Mike Krieger, Co-Founder & CTO, explains in a blog post.

Information displayed will include the date the account joined Instagram, the country it is located in, accounts with shared followers, username changes in the last year, and details on the ads the account might be running.

The feature appears as a reaction to numerous misinformation campaigns that have been exposed over the past few months, some supposedly originating from Russia or Iran.

“Our community has told us that it’s important to them to have a deeper understanding of accounts that reach many people on Instagram, particularly when those accounts are sharing information related to current events, political or social causes, for example,” Krieger notes.

Starting next month, the social platform will allow people with accounts that reach large audiences to review the information about their accounts. Soon after, the “About This Account” feature will become available to the global community.

Additionally, Instagram is allowing accounts that reach large audiences and meet specific criteria to request verification through a form within the Instagram app. The social platform will review the requests “to confirm the authenticity, uniqueness, completeness and notability of each account,” Krieger says.

The verification request form is available by accessing the menu icon in the Profile section, selecting Settings, and then “Request Verification.” Users requesting verification will need to provide the account username, their full name, and a copy of their legal or business identification.

Instagram will review all requests but might decline verification for some accounts. The verification will be performed free of charge and users won’t be contacted to confirm verification.

Soon, the platform will also include support for third-party authenticator apps for those who choose to use such tools to log into their Instagram accounts.

To take advantage of the feature, users would need to access the profile section, tap the menu icon, go to “Settings,” and then select “Authentication App” in the “Two-Factor Authentication” section. If an authentication app is already installed, Instagram will automatically send a login code to it. Users will need to enter the code on Instagram to enable two-factor authentication.
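Instagram's exact implementation is not described here, but third-party authenticator apps of this kind typically generate time-based one-time passwords (TOTP, RFC 6238) from a secret shared during setup. The following is a minimal sketch of that general mechanism using the pyotp library; the secret and flow shown are illustrative assumptions, not Instagram's actual code:

```python
# Illustration of the TOTP mechanism behind third-party authenticator apps.
# The shared secret would normally be provisioned to the app during setup
# (e.g. via a QR code or setup key); here it is generated locally as a demo.
import pyotp

secret = pyotp.random_base32()          # shared between service and app
totp = pyotp.TOTP(secret)

code_shown_in_app = totp.now()          # 6-digit code the app displays
print("Authenticator code:", code_shown_in_app)

# What the service does when the user types the code at login:
print("Code accepted:", totp.verify(code_shown_in_app))
```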

According to Krieger, support for third-party authenticator apps is already rolling out to users and should reach all of them in the coming weeks.


Twitter Suspends Accounts Engaged in Manipulation
29.8.18 securityweek
Social

Twitter this week announced the suspension of a total of 770 accounts for “engaging in coordinated manipulation.”

The suspensions were performed in two waves. One last week, when the social networking platform purged 284 accounts, many of which supposedly originated from Iran, and another this week, when 486 more accounts were kicked for the same reason.

“As with prior investigations, we are committed to engaging with other companies and relevant law enforcement entities. Our goal is to assist investigations into these activities and where possible, we will provide the public with transparency and context on our efforts,” Twitter noted last week.

The micro-blogging platform took action on the accounts after FireEye published a report detailing a large campaign conducted out of Iran focused on influencing the opinions of people in the United States and other countries around the world.

Active since at least 2017, the campaign focuses on anti-Israel, anti-Saudi, and pro-Palestine topics, but also included the distribution of stories regarding U.S. policies favorable to Iran, such as the Joint Comprehensive Plan of Action nuclear deal.

The report triggered reactions from large Internet companies, including Facebook and Google. The former removed 652 pages, groups, and accounts suspected of being tied to Russia and Iran, while the latter blocked 39 YouTube channels and disabled six Blogger and 13 Google+ accounts.

“Since our initial suspensions last Tuesday, we have continued our investigation, further building our understanding of these networks. In addition, we suspended an additional 486 accounts for violating the policies outlined last week. This brings the total suspended to 770,” Twitter said on Tuesday.

The social platform also revealed that fewer than 100 of the 770 suspended accounts claimed to be located in the United States, and many were sharing divisive social commentary. These accounts, however, had thousands of followers, on average.

“We identified one advertiser from the newly suspended set that ran $30 in ads in 2017. Those ads did not target the U.S. and the billing address was located outside of Iran. We remain engaged with law enforcement and our peer companies on this issue,” Twitter also said.

In June, Twitter announced a new process designed to improve the detection of spam accounts and bots and also revealed updates to its sign-up process to make it more difficult to register spam accounts. In early August, Duo Security announced a new tool capable of detecting large Twitter botnets.


Telegram Says to Cooperate in Terror Probes, Except in Russia
29.8.18 securityweek
Social  BigBrothers

The Telegram encrypted messenger app said Tuesday it would cooperate with investigators in terror probes when ordered by courts, except in Russia, where it is locked in an ongoing battle with authorities.

The company founded by Russian Pavel Durov has refused to provide authorities in the country with a way to read its communications and was banned by a Moscow court in April as a result.

But in its updated privacy settings, Telegram said it would disclose its users' data to "the relevant authorities" elsewhere if it receives a court order to do so, although not in Russia.

"If Telegram receives a court order that confirms you're a terror suspect, we may disclose your IP address and phone number to the relevant authorities," Telegram's new privacy settings said.

"So far, this has never happened. When it does, we will include it in a semiannual transparency report," the app added.

Durov said the new privacy terms were adopted to "comply with new European laws on protecting private data."

But Durov assured his Russian users that Telegram would continue to withhold their data from security services.

"In Russia, Telegram is asked to disclose not the phone numbers or IP addresses of terrorists based on a court decision, but access to the messages of all users," he wrote on his Telegram channel.

He added that since Telegram is illegal in Russia, "we do not consider the request of Russian secret services and our confidentiality policy does not affect the situation in Russia."

Durov has long said he would reject any attempt by the country's security services to gain backdoor access to the app.

Telegram lets people exchange messages, stickers, photos and videos in groups of up to 5,000 people. It has attracted more than 200 million users since its launch by Durov and his brother Nikolai in 2013.

Russia has acted to curb internet freedoms as social media has become the main way to organise demonstrations.

Authorities stepped up the heat on popular websites after Vladimir Putin started his third Kremlin term in 2012, ostensibly to fight terrorism, but analysts say the real motive was to muzzle Kremlin critics.

According to the independent rights group Agora, 43 people were given prison terms for internet posts in Russia in 2017.

Tech companies have had difficulty balancing the privacy of users against law enforcement, with encryption of communications adding a layer of complexity to cooperating with authorities.

One of Telegram's rival apps, Facebook-owned WhatsApp, says it complies with authorities in accordance with "applicable law".


Facebook Pulls Security App From Apple Store Over Privacy

28.8.18 securityweek Social

Facebook has pulled one of its own products from Apple's app store because it didn't want to stop tracking what people were doing on their iPhones. Facebook also banned a quiz app from its social network for possible privacy intrusions on about 4 million users.

The twin developments come as Facebook is under intense scrutiny over privacy following the Cambridge Analytica scandal earlier this year. Allegations that the political consultancy used personal information harvested from 87 million Facebook accounts have dented Facebook's reputation.

Since the scandal broke, Facebook has investigated thousands of apps and suspended more than 400 of them over data-sharing concerns.

The social media company said late Wednesday that it took action against the myPersonality quiz app, saying that its creators refused an inspection. But even as Facebook did that, it found its own Onavo Protect security app at odds with Apple's tighter rules for applications.

Onavo Protect is a virtual-private network service aimed at helping users secure their personal information over public Wi-Fi networks. The app also alerts users when other apps use too much data.

Since acquiring Onavo in 2013, Facebook has used it to track what apps people were using on phones. This surveillance helped Facebook detect trendy services, tipping off the company to startups it might want to buy and areas it might want to work on for upcoming features.

Facebook said in a statement that it has "always been clear when people download Onavo about the information that is collected and how it is used."

But Onavo fell out of compliance with Apple's app-store guidelines after they were tightened two months ago to protect the reservoir of personal information that people keep on their iPhones and iPads.

Apple's revised guidelines require apps to get users' express consent before recording and logging their activity on a device. According to Apple, the new rules also "made it explicitly clear that apps should not collect information about which other apps are installed on a user's device for the purposes of analytics or advertising/marketing."

Facebook will still be able to deploy Onavo on devices powered by Google's Android software.

Onavo's ouster from Apple's app store widens the rift between two of the world's most popular companies.

Apple CEO Tim Cook has been outspoken in his belief that Facebook does a shoddy job of protecting its 2.2 billion users' privacy — something that he has framed as "a fundamental human right."

Cook sharpened his criticism following the Cambridge Analytica scandal. He emphasized that Apple would never be caught in the same situation as Facebook because it doesn't collect information about its customers to sell advertising. Facebook CEO Mark Zuckerberg fired back in a separate interview and called Cook's remarks "extremely glib." Zuckerberg implied that Apple caters primarily to rich people with a line of products that includes the $1,000 iPhone X.

Late Wednesday, Facebook said it moved to ban the myPersonality app after it found user information was shared with researchers and companies "with only limited protections in place." The company said it would notify the app's users that their data may have been misused.

It said myPersonality was "mainly active" prior to 2012. Though Facebook has tightened its rules since then, it is only now reviewing those older apps following the Cambridge Analytica scandal.

The app was created in 2007 by researcher David Stillwell and allowed users to take a personality questionnaire and get feedback on the results.

"There was no misuse of personal data," Stillwell said in a statement, adding that "this ban appears to be purely cosmetic." Stillwell said users gave their consent and the app's data was fully anonymized before it was used for academic research. He also rejected Facebook's assertion that he refused to submit to an audit.


Facebook Suspends Hundreds of Apps Over Data Concerns
23.8.18 securityweek
Social

Facebook on Wednesday said it has suspended more than 400 of the thousands of applications it has investigated to determine whether people's personal information was being improperly shared.

Applications were suspended "due to concerns around the developers who built them or how the information people chose to share with the app may have been used," vice president of product partnerships Ime Archibong said in a blog post.

Apps put on hold at the social network were being scrutinized more closely, according to Archibong.

The app investigation, launched by Facebook in March, stemmed from the Cambridge Analytica data privacy scandal.

Facebook admitted that up to 87 million users may have had their data hijacked by Cambridge Analytica, which was working for Donald Trump's 2016 presidential campaign.

Archibong said that the myPersonality app was banned by the social network for not agreeing to an audit and "because it's clear that they shared information with researchers as well as companies with only limited protections in place."

Facebook planned to notify the approximately four million members of the social network who shared information with myPersonality, which was active mostly prior to 2012, according to Archibong.

Facebook has modified app data sharing policies since the Cambridge Analytica scandal.

"We will continue to investigate apps and make the changes needed to our platform to ensure that we are doing all we can to protect people’s information," Archibong said.

Britain's data regulator said last month that it will fine Facebook half a million pounds for failing to protect user data, as part of its investigation into whether personal information was misused ahead of the Brexit referendum.

The Information Commissioner's Office began investigating the social media giant earlier this year due to the Cambridge Analytica data mishandling.

Cambridge Analytica has denied the accusations and has filed for bankruptcy in the United States and Britain.

Silicon Valley-based Facebook last month acknowledged it faces multiple inquiries from regulators about the Cambridge Analytica user data scandal.

Facebook chief Mark Zuckerberg apologized to the European Parliament in May and said the social media giant is taking steps to prevent such a breach from happening again.

Zuckerberg was grilled about the breach in US Congress in April.


Microsoft Rolls Out End-to-End Encryption in Skype
22.8.18 securityweek
Social

Skype users on the latest version of the messaging application can now take full advantage of end-to-end encryption in their conversations, Microsoft says.

Rolled out under the name Private Conversations, the feature was initially introduced for a small number of Skype users in January this year as a preview, and is now available in the latest version of Skype on Windows, Mac, Linux, iOS and Android (6.0+). It started arriving on desktops a couple of weeks ago.

Private Conversations, which takes advantage of the industry standard Signal Protocol by Open Whisper Systems, secures text chat messages and audio calls, along with any files the conversation partners share over Skype (including photo, audio, and video files).

Skype has long used TLS (Transport Layer Security) and AES (Advanced Encryption Standard) to encrypt messages in transit, but the addition of end-to-end encryption adds an extra layer of privacy.

Now, not only are the conversation channels secured, but the transmitted messages are also kept encrypted while on Microsoft's servers, meaning they are accessible only to those engaged in the conversation.

Private Conversations, however, can only be accessed on one device at a time, the software giant reveals.

To take advantage of the feature, users simply need to tap or click on New Chat and then select Private Conversation. Next, they need to select the contacts they want to start the private conversation with, and these will receive a notification asking them to accept the invitation.

Once a contact accepts the invitation, the private conversation is available on the devices the invitation was sent from/accepted on.

One can also start a private conversation with a contact they are already chatting with.

Users can also delete private conversations, meaning that all of the content will be erased from the device. They can then pick up the conversation again, without having to send a new invitation.


Facebook Stops Misinformation Campaigns Tied to Iran, Russia
22.8.18 securityweek
Social

Facebook said Tuesday it stopped stealth misinformation campaigns from Iran and Russia, shutting down accounts as part of its battle against fake news ahead of elections in the United States and elsewhere.

Facebook removed more than 650 pages, groups and accounts identified as "networks of accounts misleading people about what they were doing," according to chief executive Mark Zuckerberg.

While the investigation was ongoing, and US law enforcement had been notified, content from some of the pages was traced back to Iran, while other pages were linked to groups previously tied to Russian intelligence operations, the social network said.

"We believe they were parts of two sets of campaigns," Zuckerberg said.

The accounts, some of them at Facebook-owned Instagram, were presented as being independent news or civil society groups but were actually working in coordinated efforts, social network firm executives said in a briefing with reporters.

Content posted by accounts targeted Facebook users in Britain, Latin America, the Middle East and the US, according to head of cybersecurity policy Nathaniel Gleicher.

He said that posts by the involved accounts were still being scrutinized and their goals were unclear at this point.

The Facebook investigation was prompted by a tip from cybersecurity firm FireEye regarding a collection of "Liberty Front Press" pages at the social network and other online services.

Facebook linked the pages to Iranian state media through publicly available website registration information, computer addresses and information about account administrators, according to Gleicher.

Among the accounts was one from "Quest 4 Truth" claiming to be an independent Iranian media organization. It was linked to Press TV, an English-language news network affiliated with Iranian state media, Gleicher said.

The first "Liberty Front Press" accounts found were at Facebook were created in 2013 and posted primarily political content focused on the Middle East along with Britain, Latin America and the US.

- Russian military tie -

Facebook also removed a set of pages and accounts linked to sources the US government previously identified as Russian military services, according to Gleicher.

"While these are some of the same bad actors we removed for cybersecurity attacks before the 2016 US election, this more recent activity focused on politics in Syria and Ukraine," Gleicher said.

The accounts were associated with Inside Syria Media Center, which the Atlantic Council and other organizations have identified as covertly spreading pro-Russian and pro-Assad content.

US Senator Richard Burr, a Republican who chairs the select committee on intelligence, said that the halted campaigns further prove that "the goal of these foreign social media campaigns is to sow discord" and "that Russia is not the only hostile foreign actor developing this capability."

Facebook chief operating officer Sheryl Sandberg is among Silicon Valley executives set to take part in a September 5 Senate hearing about foreign efforts to use social media platforms to influence elections.

"We get that 18 is a very important election year, not just in the US," Zuckerberg responded when asked about the upcoming hearing.

"So this is really serious. This is a top priority for the company."

In July, Facebook shut down more than 30 fake pages and accounts involved in what appeared to be a "coordinated" attempt to sway public opinion on political issues ahead of November midterm elections, but did not identify the source.

It said the "bad actor" accounts on the world's biggest social network and its photo-sharing site Instagram could not be tied to Russia, which used the platform to spread disinformation ahead of the 2016 US presidential election.


Facebook Announces 2018 Internet Defense Prize Winners
17.8.18 securityweek
Social

Facebook this week announced the winners of its 2018 Internet Defense Prize. Three teams earned a total of $200,000 this year for innovative defensive security and privacy research.

In previous years, Facebook awarded only one team a prize of $100,000 as part of the initiative. In 2016, the winning team presented research focusing on post-quantum security for TLS, and last year’s winners demonstrated a novel technique for detecting credential spear-phishing attacks in enterprise environments.


Facebook says this year’s submissions were of very high quality, so the social media giant decided to reward three teams instead of just one.

The first prize, $100,000 as in the previous years, was won by a team from imec-DistriNet at Belgian university KU Leuven. Their paper, titled “Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies,” describes methods that browsers can employ to prevent cross-site attacks and third-party tracking via cookies.

It’s worth mentioning that a different team of researchers from KU Leuven has been credited for discovering the recently disclosed Foreshadow speculative execution vulnerabilities affecting Intel processors.

The second-place team, from Brigham Young University, earned $60,000 for a paper titled “The Secure Socket API: TLS as an Operating System Service.” The research focuses on a prototype implementation that makes it easier for app developers to use cryptography.

“We believe safe-by-default libraries and frameworks are an important foundation for more secure software,” Facebook said.

The third-place team, a group from the Chinese University of Hong Kong and Sangfor Technologies, earned $40,000 for “Vetting Single Sign-On SDK Implementations via Symbolic Reasoning.”

“This work takes a critical look at the implementation of single sign-on code. Single sign-on provides a partial solution to the internet’s over-reliance on passwords. This code is widely used, and ensuring its safety has direct implications for user safety online,” Facebook explained.

Last week, Facebook announced that it had awarded a total of more than $800,000 as part of its Secure the Internet Grants, which the company unveiled in January. Facebook has prepared a total of $1 million for original defensive research, offering grants of up to $100,000 per proposal.


Social Mapper – Correlate social media profiles with facial recognition
10.8.18 securityaffairs
Social

Trustwave developed Social Mapper, an open-source tool that uses facial recognition to correlate social media profiles across different social networks.
Security experts at Trustwave have released Social Mapper, a new open-source tool that uses facial recognition technology to find a person of interest across social media platforms.

The tool was developed to gather intelligence from social networks during penetration tests and is aimed at facilitating social engineering attacks.

Social Mapper facial recognition tool automatically searches for targets across eight social media platforms, including Facebook, Instagram, Twitter, LinkedIn, Google+, VKontakte (The Russian Facebook), and Chinese Weibo and Douban.

An individual can be searched for by providing a name and a picture, and the tool allows analysts to conduct the analysis “on a mass scale with hundreds or thousands of individuals” at once.

“Performing intelligence gathering is a time-consuming process, it typically starts by attempting to find a person’s online presence on a variety of social media sites. While this is an easy task for a few, it can become incredibly tedious when done at scale.” Trustwave states in a blog post.

“Introducing Social Mapper an open source intelligence tool that uses facial recognition to correlate social media profiles across a number of different sites on a large scale. Trustwave, which provides ethical hacking services, has successfully used the tool in a number of penetration tests and red teaming engagements on behalf of clients.”


Social Mapper searches for specific profiles in three stages:

Stage 1—The tool creates a list of targets based on the input you give it. The list can be provided via links in a CSV file, images in a folder or via people registered to a company on LinkedIn.

Stage 2—Once the targets are processed, the second stage of Social Mapper kicks in, automatically searching social media sites for the targets online.

This stage can be time-consuming: the search could take over 15 hours for lists of 1,000 people and use a significant amount of bandwidth. For this reason, experts recommend running the tool overnight on a machine with a good internet connection.

Stage 3—Social Mapper generates a variety of output, including a CSV file with links to the profile pages of the target list and a visual HTML report.
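Trustwave's internals are not reproduced in this article, but the core face-matching step of stage 2 can be illustrated with the open-source face_recognition library. The following is a minimal sketch under that assumption; the file names are hypothetical placeholders:

```python
# Minimal illustration of the face-matching step a tool like Social Mapper
# automates: compare one reference photo against scraped profile pictures.
# Requires the open-source `face_recognition` library; file paths are
# hypothetical placeholders.
import face_recognition

reference = face_recognition.load_image_file("target.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

candidate_photos = ["profile_a.jpg", "profile_b.jpg", "profile_c.jpg"]

for path in candidate_photos:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if not encodings:
        print(f"{path}: no face detected")
        continue
    # compare_faces returns one boolean per known encoding; a lower tolerance
    # means a stricter match (0.6 is the library default).
    is_match = face_recognition.compare_faces(
        [reference_encoding], encodings[0], tolerance=0.6
    )[0]
    print(f"{path}: {'match' if is_match else 'no match'}")
```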

Of course, this intelligence-gathering tool could be abused by attackers to collect information for use in highly sophisticated spear-phishing campaigns.

Experts from Trustwave warn of potential abuses of Social Mapper that are limited “only by your imagination.” Attackers can use the results obtained with the tool to:

Create fake social media profiles to ‘friend’ the targets and send them links to credential capturing landing pages or downloadable malware. Recent statistics show social media users are more than twice as likely to click on links and open documents compared to those delivered via email.
Trick users into disclosing their emails and phone numbers with vouchers and offers to make the pivot into phishing, vishing or smishing.
Create custom phishing campaigns for each social media site, knowing that the target has an account. Make these more realistic by including their profile picture in the email. Capture the passwords for password reuse.
View target photos looking for employee access card badges and familiarise yourself with building interiors.
If you want to start using the tool you can find it for free on GitHub.

Trustwave researcher Jacob Wilkin will present Social Mapper at the Black Hat USA conference today.


Researchers find vulnerabilities in WhatsApp that allow attackers to spread Fake News via group chats
9.8.18 securityaffairs
Social

WhatsApp has been found vulnerable to multiple security flaws that could allow malicious users to spread fake news through group chats.
WhatsApp, the most popular messaging application in the world, has been found vulnerable to multiple security flaws that could allow malicious users to intercept and modify the content of messages sent in both private and group conversations.

Researchers at security firm Check Point have discovered several vulnerabilities in the popular instant messaging app WhatsApp; the flaws take advantage of a bug in the security protocols to modify the content of messages.

An attacker could exploit the flaws “to intercept and manipulate messages sent by those in a group or private conversation” as well as “create and spread misinformation”.

The issues affect the way the WhatsApp mobile application communicates with WhatsApp Web and decrypts messages encoded with the protobuf2 protocol.

The flaws allow hackers to abuse the ‘quote’ feature in a WhatsApp group conversation to change the identity of the sender, alter the content of a member’s reply to a group chat, or send private messages to one of the group members disguised as a group message.

Experts pointed out that the flaws could not be exploited to access the content of end-to-end encrypted messages, and that in order to exploit them, the attackers must already be part of the group conversations.

“Check Point researchers have discovered a vulnerability in WhatsApp that allows a threat actor to intercept and manipulate messages sent by those in a group or private conversation.” reads the blog post published by the experts.

“The vulnerability so far allows for three possible attacks:

Changing a reply from someone to put words into their mouth that they did not say.
Quoting a message in a reply to a group conversation to make it appear as if it came from a person who is not even part of the group.
Sending a message to a member of a group that pretends to be a group message but is in fact only sent to this member. However, the member’s response will be sent to the entire group.”
The experts demonstrated the exploitation of the flaws by changing a WhatsApp chat entry sent by one member of a group.

Below is a video PoC of the attack, showing how to modify WhatsApp chats and implementing the three different attacks.

The Check Point research team (Dikla Barda, Roman Zaikin, and Oded Vanunu) developed a custom extension for the popular tool Burp Suite, dubbed WhatsApp Protocol Decryption Burp Tool, to intercept and modify encrypted messages in WhatsApp Web.

“By decrypting the WhatsApp communication, we were able to see all the parameters that are actually sent between the mobile version of WhatsApp and the Web version. This allowed us to then be able to manipulate them and start looking for security issues,” the experts state.

The extension is available on GitHub; it requires the attacker to provide their private and public keys.

“The keys can be obtained from the key generation phase from WhatsApp Web before the QR code is generated,” continues the report published by the experts.

“After we take these keys we need to take the “secret” parameter which is sent by the mobile phone to WhatsApp Web while the user scans the QR code.”
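WhatsApp's message schema is not published in the article, but protobuf2 payloads use the standard protobuf wire format, so once the traffic has been decrypted with the captured keys it can be walked field by field; this is the kind of decoding the Burp extension automates. The following is a minimal, schema-less sketch of that wire-format parsing (the sample payload and field numbers are purely illustrative, not real WhatsApp traffic):

```python
# Schema-less walk of a protobuf wire-format payload: each field starts with
# a varint key of (field_number << 3 | wire_type), followed by a value whose
# layout depends on the wire type. Field meanings in real WhatsApp traffic
# are not shown here; the sample buffer below is purely illustrative.

def read_varint(buf, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result, shift = 0, 0
    while True:
        byte = buf[pos]
        pos += 1
        result |= (byte & 0x7F) << shift
        if not byte & 0x80:
            return result, pos
        shift += 7

def parse_fields(buf):
    """Return a list of (field_number, wire_type, raw_value) tuples."""
    pos, fields = 0, []
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field_number, wire_type = key >> 3, key & 0x07
        if wire_type == 0:                      # varint
            value, pos = read_varint(buf, pos)
        elif wire_type == 1:                    # fixed 64-bit
            value, pos = buf[pos:pos + 8], pos + 8
        elif wire_type == 2:                    # length-delimited (bytes, strings, nested messages)
            length, pos = read_varint(buf, pos)
            value, pos = buf[pos:pos + length], pos + length
        elif wire_type == 5:                    # fixed 32-bit
            value, pos = buf[pos:pos + 4], pos + 4
        else:
            raise ValueError(f"unsupported wire type {wire_type}")
        fields.append((field_number, wire_type, value))
    return fields

# Toy payload: field 1 = varint 150, field 2 = bytes b"hi".
sample = b"\x08\x96\x01\x12\x02hi"
print(parse_fields(sample))   # [(1, 0, 150), (2, 2, b'hi')]
```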

Experts demonstrated that using their extension an attacker can:

Change the content of a group member’s reply.
Change the identity of a sender in a group chat. The attack works even if the attacker is not a member of the group. “Use the ‘quote’ feature in a group conversation to change the identity of the sender, even if that person is not a member of the group.”
Send a private message in a group that is disguised as a group message; when the recipient replies, however, the members of the group will see it.

The experts reported the flaws to WhatsApp, but the company explained that end-to-end encryption is not broken by the attacks.

“We carefully reviewed this issue and it’s the equivalent of altering an email to make it look like something a person never wrote.” WhatsApp said in a statement.

“This claim has nothing to do with the security of end-to-end encryption, which ensures only the sender and recipient can read messages sent on WhatsApp.”

“These are known design trade-offs that have been previously raised in public, including by Signal in a 2014 blog post, and we do not intend to make any change to WhatsApp at this time,” WhatsApp security team replied to the researchers.

Check Point experts argue that the flaws could be abused to spread fake news and misinformation; for this reason, they believe it is essential to fix the flaws as soon as possible and to put limits on forwarded messages.


Snapchat source Code leaked after an iOS update exposed it
9.8.18 securityaffairs
Social

Hackers leaked the Snapchat source code on GitHub, after they attempted to contact the company for a reward.
Hackers gained access to the source code of the frontend of Snapchat instant messaging app for iOS and leaked it on GitHub.

A GitHub account associated with a person named Khaled Alshehri, who claimed to be from Pakistan and goes online with the handle i5xx, created the GitHub repository titled Source-Snapchat.

After being notified, Snap Inc. confirmed the authenticity of the source code and asked GitHub to remove it by filing a DMCA (Digital Millennium Copyright Act) request.

“Please provide a detailed description of the original copyrighted work that has allegedly been infringed. If possible, include a URL to where it is posted online.”

“SNAPCHAT SOURCE CODE. IT WAS LEAKED AND A USER HAS PUT IT IN THIS GITHUB REPO. THERE IS NO URL TO POINT TO BECAUSE SNAP INC. DOESN’T PUBLISH IT PUBLICLY.” reads the reply of the company to a question included in the DMCA request.


According to Snapchat, the source code was leaked after an iOS update made in May exposed a “small amount” of the app’s source code. The problem was solved, and Snap Inc. assured that the data leak has no impact on Snapchat users.

The hackers who leaked the source code are threatening to release new parts of the leaked code until Snap Inc. replies; they are likely blackmailing the company.


Two members of the group who leaked the Snapchat source code have been posting messages written in Arabic and English on Twitter.

The two hackers are allegedly based in Pakistan and France; they were expecting a bug bounty reward from the company, without success.

At the time of writing, two other forks containing the source code were still present on GitHub; it seems that the code was published just after the iOS update.

Snapchat currently runs an official bug bounty program through HackerOne and has already paid several rewards for critical vulnerabilities in its app.


Facebook Asks Big Banks to Share Customer Details
7.8.18 securityweek 
Social

Facebook has asked major US banks to share customer data to allow it to develop new services on the social network's Messenger texting platform, a banking source told AFP on Monday.

Facebook had discussions with JPMorgan Chase, Citibank, and Wells Fargo several months ago, said the source, who asked to remain anonymous.

The Silicon Valley-based social network also contacted US Bancorp, according to the Wall Street Journal, which first reported the news.

Facebook, which has faced intense criticism for sharing user data with many app developers, was interested in information including bank card transactions, checking account balances, and where purchases were made, according to the source.

Facebook confirmed the effort in a statement to AFP, but said it was not asking for transaction data.

"Like many online companies with commerce businesses, we partner with banks and credit card companies to offer services like customer chat or account management," Facebook said.

The goal was to create new ways for Messenger to be woven into, and facilitate, interactions between banks and customers, according to the reports. The smartphone texting service boasts 1.3 billion users.

"The idea is that messaging with a bank can be better than waiting on hold over the phone -- and it's completely opt-in," the statement said.

Citigroup declined to comment regarding any possible discussions with Facebook about Messenger.

"While we regularly have conversations about potential partnerships, safeguarding the security and privacy of our customers' data and providing customer choice are paramount in everything we do," Citigroup told AFP by email.

JPMorgan Chase spokeswoman Patricia Wexler directed AFP to a statement given to the Wall Street Journal saying, "We don't share our customers' off-platform transaction data with these platforms and have had to say 'No' to some things as a result."

Wells Fargo declined to address the news.

Privacy worries

Messenger can be used by businesses to help people keep track of account information such as balances, receipts, or shipping dates, according to the social network.

"We're not using this information beyond enabling these types of experiences -- not for advertising or anything else," Facebook explained in its statement.

"A critical part of these partnerships is keeping people's information safe and secure."

But word that Facebook is fishing for financial information comes amid concerns that it has not vigilantly guarded private information.

Facebook acknowledged last month that it was facing multiple inquiries from US and British regulators about a scandal involving the now bankrupt British consultancy Cambridge Analytica.

In Facebook's worst ever public relations disaster, it admitted that up to 87 million users may have had their data hijacked by Cambridge Analytica, which was working for US President Donald Trump's 2016 election campaign.

Facebook CEO Mark Zuckerberg announced in May he was rolling out privacy controls demanded by European regulators to Facebook users worldwide because "everyone cares about privacy."

The social network is now looking at cooler growth following a years-long breakneck pace.

Shares in Facebook plummeted last week, wiping out some $100 billion, after the firm missed quarterly revenue forecasts and warned growth would be far weaker than previously estimated.

Shares in the social network have regained some ground, and rose 4.4 percent to close at $185.69 on Monday.


Facebook reported and blocked attempts to influence the campaign ahead of the midterm US elections
2.8.18 securityweek 
Social

Facebook removed 32 Facebook and Instagram accounts and pages that were involved in a coordinated operation aimed at influencing the midterm US elections.
Facebook has removed 32 Facebook and Instagram accounts and pages that were involved in a coordinated operation aimed at influencing the forthcoming midterm US elections.


Facebook is shutting down content and accounts “engaged in coordinated inauthentic behavior.”

At this time there is no evidence that confirms the involvement of Russia, but intelligence experts suspect that Russian APT groups were behind the operation.

Facebook founder Mark Zuckerberg announced the company’s response to the recently disclosed abuses.

“One of my top priorities for 2018 is to prevent misuse of Facebook,” Zuckerberg said on his own Facebook page.

“We build services to bring people closer together and I want to ensure we’re doing everything we can to prevent anyone from misusing them to drive us apart.”

According to Facebook, “some of the activity is consistent” with Tactics, Techniques and Procedures (TTPs) associated with the Internet Research Agency that is known as the Russian troll farm that was behind the misinformation campaign aimed at the 2016 Presidential election.

“But we don’t believe the evidence is strong enough at this time to make public attribution to the IRA,” Facebook chief security officer Alex Stamos explained to reporters.

Facebook revealed that some 290,000 users followed at least one of the blocked pages.

“Resisters” enlisted support from real followers for an August protest in Washington against the far-right “Unite the Right” group.

According to Facebook, the fake pages were created more than a year ago; in some cases they were used to promote real-world events, two of which have already taken place.

Just after the announcement, the US government remarked that it would not tolerate any interference from foreign states.

“The president has made it clear that his administration will not tolerate foreign interference into our electoral process from any nation-state or other malicious actors,” deputy press secretary Hogan Gidley told reporters.

The investigation is still ongoing, but the social media giant decided to disclose early findings to shut down the orchestrated misinformation campaign.

Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, explained that the threat actors used VPNs and internet phone services to protect their anonymity.

“In total, more than 290,000 accounts followed at least one of these Pages, the earliest of which was created in March 2017. The latest was created in May 18.
The most followed Facebook Pages were “Aztlan Warriors,” “Black Elevation,” “Mindful Being,” and “Resisters.” The remaining Pages had between zero and ten followers, and the Instagram accounts had zero followers.
There were more than 9,500 organic posts created by these accounts on Facebook and one piece of content on Instagram.
They ran about 150 ads for approximately $11,000 on Facebook and Instagram, paid for in US and Canadian dollars. The first ad was created in April 2017, and the last was created in June 2018.
The Pages created about 30 events since May 2017. About half had fewer than 100 accounts interested in attending. The largest had approximately 4,700 accounts interested in attending, and 1,400 users said that they would attend.” said Gleicher.
Facebook announced it would start notifying users who were following the blocked accounts, as well as users who said they would attend events created by one of the suspended accounts or pages.

Facebook reported its findings to US law enforcement agencies, Congress, and other tech companies.

“Today’s disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity,” declared the Senate Intelligence Committee’s top Democrat Mark Warner.


Facebook Uncovers Political Influence Campaign Ahead of Midterms
1.8.18 securityweek 
Social 

Facebook said Tuesday it shut down 32 fake pages and accounts involved in an apparent "coordinated" effort to stoke hot-button issues ahead of November midterm US elections, but could not identify the source although Russia is suspected of involvement.

It said the "bad actor" accounts on the world's biggest social network and its photo-sharing site Instagram could not be tied directly to Russian actors, who American officials say used the platform to spread disinformation ahead of the 2016 US presidential election.

The US intelligence community has concluded that Russia sought to sway the vote in Donald Trump's favor, and Facebook was a primary tool in that effort, using targeted ads to escalate political tensions and push divisive online content.

With the 2018 mid-terms barely three months away, Facebook founder Mark Zuckerberg announced his company's crackdown.

"One of my top priorities for 18 is to prevent misuse of Facebook," Zuckerberg said on his own Facebook page.

"We build services to bring people closer together and I want to ensure we're doing everything we can to prevent anyone from misusing them to drive us apart."

Trump, now president, has repeatedly downplayed Kremlin efforts to interfere in US democracy.

Two weeks ago, he caused an international firestorm when he stood alongside Russian President Vladimir Putin and cast doubt on assertions that Russia tried to sabotage the vote.

But after Facebook's announcement, the White House stressed Trump opposed all efforts at election interference.

"The president has made it clear that his administration will not tolerate foreign interference into our electoral process from any nation state or other malicious actors," deputy press secretary Hogan Gidley told reporters.

Facebook said "some of the activity is consistent" with that of the Saint Petersburg-based Internet Research Agency -- the Russian troll farm that managed many false Facebook accounts used to influence the 2016 vote.

"But we don't believe the evidence is strong enough at this time to make public attribution to the IRA," Facebook chief security officer Alex Stamps said during a conference call with reporters.

Special Counsel Robert Mueller is heading a sprawling investigation into possible collusion with Russia by Trump's campaign to tip the vote toward the real estate tycoon.

Mueller has indicted the Russian group and 12 Russian hackers connected to the organization.

Facebook said it is shutting down 32 pages and accounts "engaged in coordinated inauthentic behavior," even though it may never be known for certain who was behind the operation.

The tech giant's investigation is at an early stage, but was revealed now because one of the pages being covertly operated was orchestrating a counter-protest to a white nationalism rally in Washington.

The coordinators of a deadly white-supremacist event in Charlottesville last year reportedly have been given a permit to hold a rally near the White House on August 12, the anniversary of the 2017 gathering.

Facebook said it will notify members of the social network who expressed interest in attending the counter-protest.

- US 'not doing' enough -

Facebook has briefed US law enforcement agencies, Congress and other tech companies about its findings.

"Today's disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity," said the Senate Intelligence Committee's top Democrat Mark Warner.

The panel's chairman, Republican Senator Richard Burr, said he was glad to see Facebook take a "much-needed step toward limiting the use of their platform by foreign influence campaigns."

"The goal of these operations is to sow discord, distrust and division," he added. "The Russians want a weak America."

US lawmakers have introduced multiple bills aimed at boosting election security.

While top Senate Democrat Chuck Schumer applauded Facebook's action, he said the Trump administration itself "is not doing close to enough" to protect elections.

Some of the most-followed pages that were shut down included "Resisters" and "Aztlan Warriors."

Facebook said some 290,000 users followed at least one of the pages.

"Resisters" enlisted support from real followers for an August protest in Washington against the far-right "Unite the Right" group.

Inauthentic pages dating back more than a year organized an array of real world events, all but two of which have taken place, according to Facebook.

The news comes just days after Facebook suffered the worst single-day evaporation of market value for any company, after missing revenue forecasts for the second quarter and offering soft growth projections.

Zuckerberg's firm says the slowdown will come in part due to its new approach to privacy and security, which helped experts uncover these so-called "bad actors."


Twitter removed more than 143,000 apps from the messaging service
28.7.18 securityaffairs
Social

On Tuesday, Twitter announced it had removed more than 143,000 apps from the messaging service since April in a new crackdown initiative.
Earlier this week, Twitter announced it had removed more than 143,000 apps from the messaging service since April, in a new crackdown initiative aimed at “malicious” activity from automated accounts.

Jack Dorsey (@jack), March 1, 2018: “We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.”
The social media giant is restricting access to its application programming interfaces (APIs), which allow developers to automate interactions with the platform (e.g., posting tweets).

Spam and abuse are significant problems for the platform: every day an impressive number of bots is used to influence sentiment on specific topics or to spread misinformation and racist content.

“We’re committed to providing access to our platform to developers whose products and services make Twitter a better place,” said Twitter senior product management director Rob Johnson.

“However, recognizing the challenges facing Twitter and the public — from spam and malicious automation to surveillance and invasions of privacy — we’re taking additional steps to ensure that our developer platform works in service of the overall health of conversation on Twitter.”

Twitter says the apps “violated our policies,” although it did not say how, nor did it share details about the revoked apps.

“We do not tolerate the use of our APIs to produce spam, manipulate conversations, or invade the privacy of people using Twitter,” he added.

“We’re continuing to invest in building out improved tools and processes to help us stop malicious apps faster and more efficiently.”

Cleaning up Twitter is a hard task. Since Tuesday, Twitter has deployed a new application process for developers that intend to use the platform's APIs.

Twitter is going to ask them for details of how they will use the service.

“Beginning today, anyone who wants access to Twitter’s APIs should apply for a developer account using the new developer portal at developer.twitter.com. Once your application has been approved, you’ll be able to create new apps and manage existing apps on developer.twitter.com. Existing apps can also still be managed on apps.twitter.com.” Johnson added.

“We’re committed to supporting all developers who want to build high-quality, policy-compliant experiences using our developer platform and APIs, while reducing the impact of bad actors on our service.”

That said, many legitimate applications use Twitter APIs to automate processes, including emergency alerts.

Twitter also announced the introduction of new default app-level rate limits for common POST endpoints to fight the spamming through the platform.

“Alongside changes to the developer account application process, we’re introducing new default app-level rate limits for common POST endpoints, as well as a new process for developers to obtain high volume posting privileges. These changes will help cut down on the ability of bad actors to create spam on Twitter via our APIs, while continuing to provide the opportunity to build and grow an app or business to meaningful scale.” concludes Twitter.
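For app developers, the practical consequence of these app-level limits is that automated posting code has to back off when Twitter says so. The snippet below is a minimal sketch, not an official Twitter sample: the v1.1 statuses/update endpoint and the x-rate-limit-reset response header are assumed from Twitter's public API documentation, and OAuth 1.0a request signing is omitted (the authHeader parameter is a placeholder for a pre-signed header).

```typescript
// Minimal sketch of a rate-limit-aware posting call against the Twitter API.
// Assumptions: the v1.1 statuses/update endpoint and the x-rate-limit-reset
// header are as publicly documented by Twitter; OAuth signing is handled
// elsewhere and passed in as a ready-made Authorization header.
async function postTweet(status: string, authHeader: string): Promise<void> {
  const res = await fetch("https://api.twitter.com/1.1/statuses/update.json", {
    method: "POST",
    headers: {
      Authorization: authHeader, // pre-signed OAuth 1.0a header (placeholder)
      "Content-Type": "application/x-www-form-urlencoded",
    },
    body: new URLSearchParams({ status }),
  });

  if (res.status === 429) {
    // App-level rate limit hit: wait until the window resets, then retry.
    const resetEpoch = Number(res.headers.get("x-rate-limit-reset") ?? "0");
    const waitMs = Math.max(0, resetEpoch * 1000 - Date.now());
    await new Promise((resolve) => setTimeout(resolve, waitMs));
    return postTweet(status, authHeader);
  }

  if (!res.ok) {
    throw new Error(`Tweet failed: ${res.status} ${await res.text()}`);
  }
}
```

A well-behaved bot would also track its own request count against the published limit rather than relying on 429 responses alone.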


Twitter Curbs Access for 143,000 Apps in New Crackdown
26.7.18 securityweek
Social

Twitter said Tuesday it had removed more than 143,000 apps from the messaging service since April in a fresh crackdown on "malicious" activity from automated accounts.

The San Francisco-based social network said it was tightening access to its application programming interfaces (APIs), which allow developers to make automated Twitter posts.

"We're committed to providing access to our platform to developers whose products and services make Twitter a better place," said Twitter senior product management director Rob Johnson.

"However, recognizing the challenges facing Twitter and the public -- from spam and malicious automation to surveillance and invasions of privacy -- we're taking additional steps to ensure that our developer platform works in service of the overall health of conversation on Twitter."

Johnson offered no details on the revoked apps, but Twitter has been under pressure over automated accounts or "bots" which spread misinformation or falsely amplify a person or political cause.

"We do not tolerate the use of our APIs to produce spam, manipulate conversations, or invade the privacy of people using Twitter," he said.

"We're continuing to invest in building out improved tools and processes to help us stop malicious apps faster and more efficiently."

As of Tuesday, any developer seeking access to create a Twitter app will have to go through a new application process, providing details of how they will use the service.

"We're committed to supporting all developers who want to build high-quality, policy-compliant experiences using our developer platform and APIs, while reducing the impact of bad actors on our service," Johnson said.

Automated accounts are not always malicious -- some are designed to tweet out emergency alerts, art exhibits or the release of a Netflix program -- but "bots" have been blamed for spreading hoaxes and misinformation in a bid to manipulate public opinion.


Facebook faces £500,000 fine in the U.K. over Cambridge Analytica scandal

19.7.18 securityaffairs Social

Facebook has been fined £500,000 ($664,000) in the U.K. for its conduct in the Cambridge Analytica privacy scandal.
Facebook has been fined £500,000 in the U.K., the maximum fine allowed by the UK’s Data Protection Act 1998, for failing to protect users’ personal information.

Political consultancy firm Cambridge Analytica improperly collected data of 87 million Facebook users and misused it.

“Today’s progress report gives details of some of the organisations and individuals under investigation, as well as enforcement actions so far.

This includes the ICO’s intention to fine Facebook a maximum £500,000 for two breaches of the Data Protection Act 1998.” reads the announcement published by the UK Information Commissioner’s Office.

“Facebook, with Cambridge Analytica, has been the focus of the investigation since February when evidence emerged that an app had been used to harvest the data of 50 million Facebook users across the world. This is now estimated at 87 million.

The ICO’s investigation concluded that Facebook contravened the law by failing to safeguard people’s information. It also found that the company failed to be transparent about how people’s data was harvested by others.”

This is the first possible financial punishment that Facebook is facing for the Cambridge Analytica scandal.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” reads ICO’s report.

Obviously, the financial penalty is negligible compared to the revenues of the social network giant, but it is a strong message to all companies that they must properly manage users’ personal information in compliance with the new General Data Protection Regulation (GDPR).

What would have happened if the regulation had already been in force at the time of disclosure?

According to the GDPR, the penalties allowed under the new privacy regulation are much greater: fines can reach up to 4% of global turnover, which in Facebook's case is estimated at $1.9 billion.

“Facebook has failed to provide the kind of protections they are required to under the Data Protection Act.” Elizabeth Denham, the UK’s Information Commissioner said. “People cannot have control over their own data if they don’t know or understand how it is being used. That’s why greater and genuine transparency about the use of data analytics is vital.”

Facebook still has a chance to respond to the ICO’s Notice of Intent before a final decision on the fine is made.

“In line with our approach, we have served Facebook with a Notice setting out the detail of our areas of concern and invited their representations on these and any action we propose,” concludes the ICO update on the investigation published today by Information Commissioner Elizabeth Denham.

“Their representations are due later this month, and we have taken no final view on the merits of the case at this time. We will consider carefully any representations Facebook may wish to make before finalising our views.”


Facebook Faces Australia Data Breach Compensation Claim
18.7.18 securityweek 
Social

Facebook could face a hefty compensation bill in Australia after a leading litigation funder lodged a complaint with the country's privacy regulator over users' personal data shared with a British political consultancy.

The social networking giant admitted in April the data of up to 87 million people worldwide -- including more than 300,000 in Australia -- was harvested by Cambridge Analytica.

Under Australian law, all organisations must take "reasonable steps" to ensure personal information is held securely, and IMF Bentham has teamed up with a major law firm to lodge a complaint with the Office of the Australian Information Commissioner (OAIC).

The OAIC launched an investigation into the alleged breaches in April and, depending on its outcome, a class action could follow.

IMF said in a statement late Tuesday it was seeking "compensation for Facebook users arising from Facebook's alleged breaches of the Australian Privacy Principles contained in the Privacy Act 1988".

"The alleged breaches surround the circumstances in which a third party, Cambridge Analytica, gained unauthorised access to users' profiles and information.

"The complaint seeks financial recompense for the unauthorised access to, and use of, their personal data."

In its statement, IMF Bentham said it appeared Facebook learned of the breach in late 2015, but failed to tell users about it until this year.

IMF investment manager Nathan Landis told The Australian newspaper most awards for privacy breaches ranged between Aus$1,000 and Aus$10,000 (US$750-US$7,500).

This implies a potential compensation bill of between Aus$300 million and Aus$3 billion.

Facebook did not directly comment on the IMF Bentham action but a spokesperson told AFP Wednesday: "We are fully cooperating with the investigation currently underway by the Australian Privacy Commissioner.

"We will review any additional evidence that is made available when the UK Office of the Information Commissioner releases their report."


Britain to Fine Facebook Over Data Breach
18.7.18 securityweek  Incindent 
Social

Britain's data regulator said Wednesday it will fine Facebook half a million pounds for failing to protect user data, as part of its investigation into whether personal information was misused ahead of the Brexit referendum.

The Information Commissioner's Office (ICO) began investigating the social media giant earlier this year, when evidence emerged that an app had been used to harvest the data of tens of millions of Facebook users worldwide.

In the worst ever public relations disaster for the social media giant, Facebook admitted that up to 87 million users may have had their data hijacked by British consultancy firm Cambridge Analytica, which was working for US President Donald Trump's 2016 campaign.

Cambridge Analytica, which also had meetings with the Leave.EU campaign ahead of Britain's EU referendum in 2016, denies the accusations and has filed for bankruptcy in the United States and Britain.

"In 2014 and 2015, the Facebook platform allowed an app... that ended up harvesting 87 million profiles of users around the world that was then used by Cambridge Analytica in the 2016 presidential campaign and in the referendum," Elizabeth Denham, the information commissioner, told BBC radio.

Wednesday's ICO report said: "The ICO's investigation concluded that Facebook contravened the law by failing to safeguard people's information."

Without detailing how the information may have been used, it said the company had "failed to be transparent about how people's data was harvested by others".

The ICO added that it plans to issue Facebook with the maximum available fine for breaches of the Data Protection Act -- an equivalent of $660,000 or 566,000 euros.

Because of the timing of the breaches, the ICO said it was unable to impose penalties that have since been introduced by the European General Data Protection Regulation, which would cap fines at 4.0 percent of Facebook's global turnover.

In Facebook's case this would amount to around $1.6 billion (1.4 billion euros).

"In the new regime, they would face a much higher fine," Denham said.

- 'Doing the right thing' -

"We are at a crossroads. Trust and confidence in the integrity of our democratic processes risk being disrupted because the average voter has little idea of what is going on behind the scenes," Denham said.

"New technologies that use data analytics to micro-target people give campaign groups the ability to connect with individual voters. But this cannot be at the expense of transparency, fairness and compliance with the law."

In May, Facebook chief Mark Zuckerberg apologised to the European Parliament for the "harm" caused.

EU Justice Commissioner Vera Jourova welcomed the ICO report.

"It shows the scale of the problem and that we are doing the right thing with our new data protection rules," she said.

"Everyone from social media firms, political parties and data brokers seem to be taking advantage of new technologies and micro-targeting techniques with very limited transparency and responsibility towards voters," she said.

"We must change this fast as no-one should win elections using illegally obtained data," she said, adding: "We will now assess what can we do at the EU level to make political advertising more transparent and our elections more secure."

- Hefty compensation bill -

The EU in May launched strict new data-protection laws allowing regulators to fine companies up to 20 million euros ($24 million) or four percent of annual global turnover.

But the ICO said because of the timing of the incidents involved in its inquiry, the penalties were limited to those available under previous legislation.

The next phase of the ICO's work is expected to be concluded by the end of October.

Erin Egan, chief privacy officer at Facebook, said: "We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries. We're reviewing the report and will respond to the ICO soon."

The British fine comes as Facebook faces a potential hefty compensation bill in Australia, where litigation funder IMF Bentham said it had lodged a complaint with regulators over the Cambridge Analytica breach -- thought to affect some 300,000 users in Australia.

IMF investment manager Nathan Landis told The Australian newspaper most awards for privacy breaches ranged between Aus$1,000 and Aus$10,000 (US$750-$7,500).

This implies a potential compensation bill of between Aus$300 million and Aus$3 billion.


Vietnam Activists Flock to 'Safe' Social Media After Cyber Crackdown
6.7.18 securityweek
Social

Tens of thousands of Vietnamese social media users are flocking to a self-professed free speech platform to avoid tough internet controls in a new cybersecurity law, activists told AFP.

The draconian law requires internet companies to scrub critical content and hand over user data if Vietnam's Communist government demands it.

The bill, which is due to take effect from January 1, sparked outcry from activists, who say it is a chokehold on free speech in a country where there is no independent press and where Facebook is a crucial lifeline for bloggers.

The world's leading social media site has 53 million users in Vietnam, a country of 93 million.

Many activists are now turning to Minds, a US-based open-source platform, fearing Facebook could be complying with the new rules.

"We want to keep our independent voice and we also want to make a point to Facebook that we're not going to accept any censorship," Tran Vi, editor of activist site The Vietnamese, which is blocked in Vietnam, told AFP from Taiwan.

Some activists say they migrated to Minds after content removal and abuse from pro-government Facebook users.

Two editors' Facebook accounts were temporarily blocked and The Vietnamese Facebook page can no longer use the "instant article" tool to post stories.

Nguyen Chi Tuyen, an activist better known by his online handle Anh Chi, says he has moved to Minds as a secure alternative, though he will continue using Facebook and Twitter.

"It's more anonymous and a secretive platform," he said of Minds.

He has previously had to hand over personal details to Facebook to verify his identity and now fears that information could be used against him.

- 'Scary' law -

About 100,000 new active users have registered in Vietnam in less than a week, many posting on politics and current affairs, Minds founder and CEO Bill Ottman told AFP.

"This new cybersecurity law is scaring a lot of people for good reason," he said from Connecticut.

"It's certainly scary to think that you could not only be censored but have your private conversations given to a government that you don't know what they're going to use that for."

The surge of new users from Vietnam now accounts for nearly 10 percent of Minds' total user base of about 1.1 million.

Users are not required to register with personal data and all chats are encrypted.

Vietnam's government last year announced a 10,000-strong cybersecurity army tasked with monitoring incendiary material online.

In its unabashed defence of the new law, Vietnam has said it is aimed at protecting the regime and avoiding a "colour revolution", but refused to comment to AFP on Thursday.

Facebook told AFP it is reviewing the law and says it considers government requests to take down information in line with its Community Standards -- and pushes back when possible.

Google declined to comment on the new law when asked by AFP, but their latest Transparency report showed that it had received 67 separate requests from the Vietnamese government to remove more than 6,500 items since 2009, the majority since early last year.

Most were taken down, though Google does not provide precise data on content removal compliance.

Ottman says countries like Vietnam are fighting a losing battle trying to control online expression.

"It's like burning books, it just causes more attention to be brought to those issues and it further radicalises those users because they're so upset that they're getting censored," he said.


Facebook Responding to US Regulators in Data Breach Probe

5.7.18 securityweek  Social

Facebook acknowledged Tuesday it was facing multiple inquiries from US and British regulators about the major Cambridge Analytica user data scandal.

The leading social network offered no details but its admission confirmed reports of a widening investigation into the misuse of private data by Facebook and its partners.

"We are cooperating with officials in the US, UK and beyond," a Facebook spokesman said in response to an AFP query.

"We've provided public testimony, answered questions, and pledged to continue our assistance as their work continues."

The Washington Post reported that the Securities and Exchange Commission, Federal Trade Commission and FBI as well as the Justice Department are looking into the massive breach of users' personal data and how the company handled it.

Facebook shares closed the shortened Nasdaq trading day down 2.35 percent to $192.73, heading into an Independence Day holiday with investors mulling what effect the investigations may have on the California-based internet giant.

Facebook has admitted that up to 87 million users may have had their data hijacked by British consultancy Cambridge Analytica, which worked for US President Donald Trump during his 2016 campaign.

Facebook chief Mark Zuckerberg apologized to the European Parliament in May and said the social media giant is taking steps to prevent such a breach from happening again.

Zuckerberg said at a hearing in Brussels that it became clear in the last two years that Facebook executives didn't do enough to prevent the platform "from being used for harm."

Zuckerberg was grilled about the breach in US Congress in April.

It remains unclear what if any penalties Facebook may face from the latest requests but the tech giant is legally bound to comply with a 2011 consent decree with the FTC on protecting private user data.

Any SEC inquiry could look at whether Facebook adequately disclosed key information to investors.


The Social network giant Facebook confirms it shared data with 61 tech firms after 2015
3.7.18 securityaffairs
Social

On Friday, Facebook provided a 748-page long report to Congress that confirms the social network shared data with at least 61 tech firms after 2015.
This is the worst period in the history of the social network: Facebook has now admitted to having shared users’ data with 61 tech firms.

The problem is that Facebook allowed tech companies and app developers to access its users’ data even after announcing, in 2015, that it had restricted third-party access to that data.

Immediately after the Cambridge Analytica privacy scandal that affected 87 million users, Facebook attempted to ease media pressure by stating that it had already restricted third-party access to its users’ data in May 2015.

On Friday, Facebook provided a 748-page long report to Congress that confirms the practice of sharing data with 61 tech firms after 2015.

The company also granted a “one-time” six-month extension to the companies to come into compliance with Facebook’s new privacy policy.

“In April 2014, we announced that we would more tightly restrict our platform APIs to prevent abuse. At that time, we made clear that existing apps would have a year to transition—at which point they would be forced (1) to migrate to the more restricted API and (2) be subject to Facebook’s new review and approval protocols.” reads the report.

“The vast majority of companies were required to make the changes by May 2015; a small number of companies (fewer than 100) were given a one-time extension of less than six months beyond May 2015 to come into compliance.”

In addition, the company admitted that a very small number of companies (fewer than 10) have had access to limited friends’ data as a result of API access that they received in the context of a beta test.

The social media firm also shared a list containing 52 companies that it has authorized to build versions of Facebook or Facebook features for their devices and products.

The list includes Acer, Amazon, Apple, Blackberry, Microsoft, Motorola/Lenovo, Samsung, Sony, Spotify, and the Chinese companies Huawei and Alibaba.

“The partnerships—which we call “integration partnerships”—began before iOS and Android had become the predominant ways people around the world accessed the internet on their mobile phones.” explained Facebook.

“We engaged companies to build integrations for a variety of devices, operating systems, and other products where we and our partners wanted to offer people a way to receive Facebook or Facebook experiences,” the document reads. “These integrations were built by our partners, for our users, but approved by Facebook.”

The social network confirmed it has already ended 38 of these 52 partnerships; an additional seven will be discontinued by the end of July, and another one by the end of this October. The company will continue its partnerships with Tobii (an accessibility app that enables people with ALS to access Facebook), Amazon and Apple, and will maintain more limited integrations with Mozilla, Alibaba and Opera.

“Three partnerships will continue: (1) Tobii, an accessibility app that enables people with ALS to access Facebook; (2) Amazon; and (3) Apple, with whom we have agreements that extend beyond October 2018. We also will continue partnerships with Mozilla, Alibaba and Opera— which enable people to receive notifications about Facebook in their web browsers—but their integrations will not have access to friends’ data.” added the company.

Privacy advocates and security experts have questioned the way the social network managed users’ data, especially after 2015.

Just a few days ago, I reported the news that a popular third-party quiz app named NameTests was found exposing data of up to 120 million Facebook users.


Facebook is notifying 800,000 users affected by a blocking bug
3.7.18 securityaffairs
Social

Yesterday the social network giant Facebook started notifying 800,000 users affected by a blocking bug. The company has already fixed it.
When a Facebook user blocks someone, the blocked user can no longer interact with them: they cannot see the blocker's posts, start conversations with them on Messenger, or add them as a friend.

Facebook discovered a bug in its platform that allowed blocked users to interact with the accounts that had blocked them. As a result, blocked users were able to see some of the content posted by the individuals who had blocked them, and may also have been able to contact them via Messenger.

The issue was introduced on May 29, and the social network giant addressed it on June 5.

“Starting today we are notifying over 800,000 users about a bug in Facebook and Messenger that unblocked some people they had blocked. The bug was active between May 29 and June 5 — and while someone who was unblocked could not see content shared with friends, they could have seen things posted to a wider audience. For example pictures shared with friends of friends. ” wrote Facebook Chief Privacy Officer Erin Egan.

According to Egan, a blocked user could not see content shared only with friends, but may have been shown content shared with “friends of friends.”

Egan clarified that blocking also automatically unfriends users if they were previously friends.

Below are the details shared by Egan on this specific bug:

It did not reinstate any friend connections that had been severed;
83% of people affected by the bug had only one person they had blocked temporarily unblocked; and
Someone who was unblocked might have been able to contact people on Messenger who had blocked them.
Facebook has fixed the bug and all affected blocks have been reinstated; the company is sending a notification to the affected accounts encouraging them to check their blocked lists.


Facebook Notifies 800,000 Users of Blocking Bug
3.7.18 securityweek 
Social

Facebook on Monday started notifying 800,000 users affected by a bug that resulted in blocked individuals getting temporarily unblocked. The social media giant also detailed some new API restrictions designed to better protect user information.

When you block someone on Facebook, you prevent them from seeing your posts, starting conversations on Messenger, or adding you as a friend. However, a Facebook and Messenger bug introduced on May 29 and addressed on June 5 led to users being able to see some of the content posted by individuals who had blocked them.

According to Facebook Chief Privacy Officer Erin Egan, blocked users could not see content shared only with friends, but they may have been shown content shared with “friends of friends.” The blockee may have also been able to contact the blocker via Messenger.

Egan clarified that friend connections were not reinstated as a result of the bug and 83 percent of impacted users had only one blocked person temporarily unblocked. Affected users will see a notification in their account.
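To make the blocking semantics concrete, here is a purely illustrative sketch; it is in no way Facebook's actual implementation. It shows a visibility check that consults the author's block list before applying the audience rule, and the reported behavior is consistent with a regression in which that block check is skipped for wider audiences such as "friends of friends".

```typescript
// Illustrative only: how a post-visibility check can combine an audience rule
// with a block list. All types and logic here are hypothetical.
type Audience = "friends" | "friends_of_friends" | "public";

interface Post {
  authorId: string;
  audience: Audience;
}

interface User {
  id: string;
  friends: Set<string>;
  blocked: Set<string>; // accounts this user has blocked
}

function canView(viewer: User, author: User, post: Post): boolean {
  // The block check must come before any audience rule; the bug behaved as if
  // this line were bypassed for content shared beyond direct friends.
  if (author.blocked.has(viewer.id)) return false;

  if (post.audience === "public") return true;
  if (post.audience === "friends") return author.friends.has(viewer.id);

  // "friends_of_friends": the viewer is a friend, or shares a friend with the author.
  return (
    author.friends.has(viewer.id) ||
    Array.from(author.friends).some((friendId) => viewer.friends.has(friendId))
  );
}
```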

New API restrictions and changes

Facebook also announced on Monday additional measures taken following the Cambridge Analytica incident, in which personal data on tens of millions of users was improperly shared with the British political consultancy through an app.

The social media giant previously shared some information on the steps taken to better protect elections and user data, and it has now announced new changes affecting application developers.

Developers have been informed that several APIs have been or will be deprecated, including the Graph API Explorer App, Profile Expression Kit, Trending API, the Signal tool, Trending Topics, Hashtag Voting, Topic Search, Topic Insights, Topic Feed, and Public Figure. The Trending and Topic APIs are part of the Media Solutions toolkit.

Some APIs will be deprecated, in some cases due to low usage, while others will be restricted.

Developers will once again be allowed to search for Facebook pages via the Pages API, but they will need Page Public Content Access permissions, which can only be obtained via the app review process.
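As a rough illustration of what that change means for developers, the sketch below calls the Graph API page-search endpoint with an app access token. The endpoint path, API version and field names are assumptions based on Facebook's Graph API documentation of that period; without the reviewed Page Public Content Access feature, the call would simply be rejected.

```typescript
// Hedged sketch: searching public Pages via the Graph API after the 2018 changes.
// The /pages/search path, version tag and fields are assumptions; the access
// token must belong to an app approved for Page Public Content Access.
async function searchPages(query: string, accessToken: string): Promise<unknown> {
  const url = new URL("https://graph.facebook.com/v3.1/pages/search");
  url.searchParams.set("q", query);
  url.searchParams.set("fields", "id,name,link");
  url.searchParams.set("access_token", accessToken);

  const res = await fetch(url.toString());
  if (!res.ok) {
    // Apps without the reviewed permission receive an OAuth/permission error here.
    throw new Error(`Graph API error: ${res.status} ${await res.text()}`);
  }
  return res.json();
}
```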

As for marketing tools, Facebook announced that the Marketing API can only be used by reviewed apps, and that it’s introducing new app review permissions for the Live Video and Lead Ads Retrieval APIs.


Facebook App Exposed Data of 120 Million Users
2.7.18 securityweek 
Social

A recently addressed privacy bug on Nametests.com resulted in the data of over 120 million users who took personality quizzes on Facebook being publicly exposed.

Patched as part of Facebook’s Data Abuse Bounty Program, the vulnerability resided in Nametests.com serving users’ data to any third-party that requested it, something that shouldn’t normally happen.

Facebook launched its Data Abuse Bounty Program in April as part of its efforts to improve user privacy following the Cambridge Analytica scandal. The company also updated its terms on privacy and data sharing, and admitted to tracking people over the Internet, even those who are not Facebook users.

The issue in Nametests.com was reported by Inti De Ceukelaire, who discovered that, when loading a personality test, the website would fetch all of his personal information from http://nametests.com/appconfig_user and display it on the page.

Websites shouldn’t normally be able to access this information, as web browsers' same-origin policy prevents such cross-site reads. The data requested from Nametests.com, however, was wrapped in JavaScript, meaning that it could be loaded and read by other websites.

“Since NameTests displayed their user’s personal data in JavaScript file, virtually any website could access it when they would request it,” the researcher explains.

To verify that this was indeed happening, he set up a website that connected to Nametests.com and would fetch information about the visitor. The access token provided by Nametests.com could also be used to gain access to the visitor’s posts, photos and friends, depending on the permissions granted.

“It would only take one visit to our website to gain access to someone’s personal information for up to two months,” De Ceukelaire says.
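The underlying mechanism is the classic JSONP-style leak: a plain JSON response cannot be read across origins, but a response that is itself executable JavaScript runs inside whatever page includes it. The sketch below is illustrative only; the endpoint URL and the window.userData shape are assumptions, not NameTests' actual payload.

```typescript
// Illustrative sketch of a JSONP-style data leak on an attacker-controlled page.
// Assumes the vulnerable endpoint responds with JavaScript that assigns the
// visitor's profile to a global such as `window.userData` (hypothetical shape).
function readLeakedProfile(endpoint: string): void {
  const script = document.createElement("script");
  script.src = endpoint; // the browser sends the victim's cookies with this request
  script.onload = () => {
    const leaked = (window as any).userData;
    if (leaked) {
      // Any third-party page embedding the script can now read the data.
      console.log("Leaked profile:", leaked);
    }
  };
  document.body.appendChild(script);
}

// Usage on the attacker's page (hypothetical endpoint standing in for the real one):
readLeakedProfile("https://quiz-site.example/appconfig_user");
```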

Another issue the researcher discovered was that the user information would continue to be exposed even after they deleted the application. With no log out functionality available, users would have had to manually delete the cookies on their devices to prevent their data from being leaked.

The bug was reported to Facebook’s Data Abuse program on April 22 and a fix was rolled out by June 25, when the researcher noticed that third-parties could no longer access visitors’ personal information as before.

The vulnerability could “have affected Facebook information people shared with nametests.com. To be on the safe side, we revoked the access tokens for everyone on Facebook who has signed up to use this app. So people will need to re-authorize the app in order to continue using it,” Facebook said.

The social platform also donated $8,000 (they apparently doubled the $4,000 bounty because the researcher chose to donate it to charity) to the Freedom of the Press foundation.

“I also got a response from NameTests. The public relations team claims that, according to the data and knowledge they have, they found no evidence of abuse by a third party. They also state that they have implemented additional tests to find such bugs and avoid them in the future,” the researcher notes.


Facebook Quiz app NameTests left 120 Million users’ data exposed online
30.6.18 securityaffairs
Social

Experts discovered a third-party quiz app, called NameTests, that was found exposing data of up to 120 million Facebook users.
A bug on Nametests.com exposed data of over 120 million users who took personality quizzes on Facebook; the good news is that the flaw was addressed as part of Facebook’s Data Abuse Bounty Program launched in April.

The issue resided in Nametests.com sharing users’ data with any third party that requested it.

The flaw was reported by the researcher Inti De Ceukelaire, who explained that when loading a personality test, the website displays personal information loaded from http://nametests.com/appconfig_user.

The data loaded from Nametests.com was wrapped in JavaScript, which means that it could be read by other websites.

“In a normal situation, other websites would not be able to access this information. Web browsers have mechanisms in place to prevent that from happening.” the researcher wrote in a blog post.

“Since NameTests displayed their user’s personal data in JavaScript file, virtually any website could access it when they would request it,”

The expert set up a website that fetched data about the visitor from the Nametests.com website. In turn, Nametests.com provided an access token that could also be used to gain access to the visitor’s posts, photos and friends, depending on the permissions granted.

“NameTests would also provide a secret key called an access token, which, depending on the permissions granted, could be used to gain access to a visitor’s posts, photos and friends. It would only take one visit to our website to gain access to someone’s personal information for up to two months.” De Ceukelaire added.

The expert also published a video PoC showing how NameTests revealed a visitor’s identity even after the app was deleted: the user information remained accessible through the website, and users would have had to manually delete the cookies on their devices to prevent their data from being leaked.

The issue was reported to Facebook’s Data Abuse program on April 22, and a fix was rolled out on June 25.

According to Facebook, the bug could “have affected Facebook information people shared with nametests.com”; in response to the incident, the tech giant revoked the access tokens for everyone on Facebook who had signed up to use the app.

“It was reported by Inti De Ceukelaire and we worked with the app’s developer — Social Sweethearts — to address the website vulnerability he identified which could have affected Facebook information people shared with nametests.com.” reads a post published by Facebook.

” To be on the safe side, we revoked the access tokens for everyone on Facebook who has signed up to use this app. So people will need to re-authorize the app in order to continue using it.”
Facebook awarded the expert $8,000 instead of the $4,000 bounty because he chose to donate it to charity.

“I also got a response from NameTests. The public relations team claims that, according to the data and knowledge they have, they found no evidence of abuse by a third party. They also state that they have implemented additional tests to find such bugs and avoid them in the future,” the researcher concluded.


Twitter shared details about its strategy for fighting spam and bots
29.6.18 securityaffairs 
Social 

Twitter provided some details on new security processes aimed at preventing malicious automation and spam.
The tech giant also shared data on the success obtained with the introduction of the new security measures.
Social media platforms are a privileged tool for psyops and malicious campaigns; for this reason, Twitter rolled out new features to detect and prevent abuse.

Threat actors make extensive use of bots to spread propaganda and malicious links, and social media platforms are investing significant effort in mitigating these threats.

Twitter claims that in May it challenged more than 9.9 million potentially automated accounts used for malicious activity every week, a significant increase from 6.4 million in December 2017.
The social media platform said that the security measures have drastically reduced spam reports received from users, from 25,000 daily reports in March to 17,000 in May.
The company is removing 214% more spam accounts compared to 2017. Twitter suspended over 142,000 apps in the first quarter of 2018, most of them shut down within a week or even within hours of being registered.

Twitter introduced measures to evaluate account metrics in near-real time.

The platform is able to recognize bot activity by detecting synchronized operations conducted by multiple accounts.

Twitter announced it will remove follower and engagement counts from accounts flagged as suspicious that have been put into a read-only state until they pass a challenge, such as confirming a phone number.

“So, if we put an account into a read-only state (where the account can’t engage with others or Tweet) because our systems have detected it behaving suspiciously, we now remove it from follower figures and engagement counts until it passes a challenge, like confirming a phone number.” reads the blog post published by Twitter.

“We also display a warning on read-only accounts and prevent new accounts from following them to help prevent inadvertent exposure to potentially malicious content,”
The company also introduced measures to audit existing accounts and control the creation of new ones.

Twitter is increasing checks on the sign-up process to make it more difficult to register spam accounts, for example by requiring more interaction from the user, such as confirming an email address.

“As part of this audit, we’re imminently taking action to challenge a large number of suspected spam accounts that we caught as part of an investigation into misuse of an old part of the signup flow,” continues the post. “These accounts are primarily follow spammers, who in many cases appear to have automatically or bulk followed verified or other high-profile accounts suggested to new accounts during our signup flow.”

The company is also investing in behavioral detection: its engineers are working on measures that, once suspicious activity is detected, challenge the account owner with actions that require their interaction.


Twitter Unveils New Processes for Fighting Spam, Bots
29.6.18 securityweek 
Social

Twitter this week shared some details on new processes designed to prevent malicious automation and spam, along with data on the positive impact of the measures implemented in the past period.

Spam and bots are highly problematic on Twitter, but the social media giant says it has rolled out some new systems that have helped its fight against these issues. The company claims that last month it challenged more than 9.9 million potentially spammy or automated accounts every week, up from 6.4 million in December last year.

Twitter says it now removes 214% more spam accounts compared to 2017. It also claims that recent changes have led to a significant drop in spam reports received from users, from 25,000 daily reports in March to 17,000 in May.

The company also reported suspending over 142,000 apps in the first quarter of 2018, more than half of which were shut down within a week or even within hours after being registered.

One measure implemented recently by Twitter involves updating account metrics in near-real time. Spam accounts and bots often follow other accounts in bulk and this type of behavior should quickly be caught by Twitter’s systems. However, the company has now also decided to remove follower and engagement counts from suspicious accounts that have been put into a read-only state until they pass a challenge, such as confirming a phone number.

“We also display a warning on read-only accounts and prevent new accounts from following them to help prevent inadvertent exposure to potentially malicious content,” Twitter’s Yoel Roth and Del Harvey said in a blog post.

The company has also made some changes to its sign-up process to make it more difficult to register spam accounts. This includes requiring new accounts to confirm an email address or phone number.

Existing accounts are also being audited to ensure that they weren’t created using automation.

“As part of this audit, we’re imminently taking action to challenge a large number of suspected spam accounts that we caught as part of an investigation into misuse of an old part of the signup flow,” Roth and Harvey explained. “These accounts are primarily follow spammers, who in many cases appear to have automatically or bulk followed verified or other high-profile accounts suggested to new accounts during our signup flow.”

Finally, Twitter says it has expanded its malicious behavior detection systems with tests that can involve solving a reCAPTCHA or responding to a password reset request. Complex cases are passed on to Twitter employees for review.

Twitter also announced this week that users can configure a USB security key as part of the two-factor authentication (2FA) process.

On June 21, Twitter revealed that it entered an agreement to acquire Smyte, which specializes in safety, spam and security issues. By acquiring the company, the social media giant hopes to “improve the health of conversation on Twitter.”


Facebook Claims 99% of Extremist Content Removed Without Users' Help
15.6.18 securityweek
Social

Facebook claims growing success in fight against extremist content

At this week's International Homeland Security Forum (IHSF) hosted in Jerusalem by Israel’s minister of public security, Gilad Erdan, Facebook claimed growing success in its battle to remove extremist content from the network.

Dr. Erin Marie Saltman, Facebook counterterrorism policy lead for EMEA, said, "On Terrorism content, 99% of terrorist content from ISIS and al-Qaida we take down ourselves, without a single user flagging it to us. In the first quarter of 2018 we took down 1.9 million pieces of this type of terrorist content."

This was achieved by a combination of Facebook staff and machine learning algorithms. "Focusing our machine learning tools on the most egregious terrorist content we are able to speak to scale and speed of efforts much more openly. But human review and operations is also always needed."

However, the implication that Facebook is winning the war against extremism is countered by a report ('Spiders of the Caliphate: Mapping the Islamic State's Global Support Network on Facebook', PDF) published in May 2018 by the Counter Extremism Project (CEP).

CEP was launched in 2014 by former U.S. government officials, including former Homeland Security adviser Frances Townsend, former Connecticut Senator Joseph Lieberman, and Mark Wallace, a former U.S. Ambassador to the United Nations.

Its report mapped 1,000 Facebook profiles explicitly supporting IS between October 2017 and March 2018. Using the open source network analysis and visualization program, Gephi, it found that visible 'friends' expanded the 1,000 nodes with 5,347 edges. Facebook's friending mechanism is particularly criticized as a means by which IS accounts find new targets to recruit.

The report actually refers to the 99% claim, implying that Saltman's claim is not a new development superseding the findings of CEP: "Given IS's ongoing presence on the platform, it is clear that Facebook's current content moderation systems are inadequate, contrary to the company's public statements. Facebook has said that they remove 99% of IS and Al Qaeda content using automated systems..."

In fact, CEP fears that Facebook relies too heavily on its algorithms for finding and removing terrorist content. "This reliance on automated systems means IS supporters' profiles often go unremoved by Facebook and can remain on the platform for extended periods of time." It gives the example of a video from the IS Amaq news agency that was posted in September 2016 and remained available when the report was written in April 2018.

"The video depicts combat footage from the Battle of Mosul and shows how IS produced a variety of weapon systems including car bombs and rocket launchers," notes the report.

Another example describes an ISIS supporter friending a non-Muslim and then gradually radicalizing him during the six-month period. "ID 551 played a clear role in radicalizing ID 548 and recruiting him as an IS supporter," says the report. "Facebook was the platform that facilitated the process, and it also functioned as an IS news source for him. Furthermore, given his connections with existing IS networks on Facebook, the moment that ID 548 wishes to become more than an online supporter he has the necessary contacts available to him. These are individuals who can assist with traveling to fight for the group or staging an attack in America. This case provides a detailed insight into the scope to which IS has taken advantage of Facebook's half-measures to combating extremism online."

This is not a simple problem. Taking down suspect terrorist content that is posted and used legitimately is a direct infringement of U.S. users' First Amendment rights. Dr Saltman described this issue at the IHSF conference. "We see," she said, "that pieces of terrorist content and imagery are used by legitimate voices as well; activists and civil society voices who share the content to condemn it, mainstream media using imagery to discuss it within news segments; so, we need specialized operations teams with local language knowledge to understand the nuance of how some of this content is shared."

To help avoid freedom of speech issues, Facebook has made its enforcement process more transparent. "I am pleased to say," said Saltman, "that just last month we made the choice to proactively be more transparent about our policies, releasing much more information about how we define our global policies through our Comprehensive Community Standards. These standards cover everything from keeping safe online to how we define dangerous organizations and terrorism."

At the same time, appealing removal decisions is made easier and adjudicated by a human. This can be problematic. According to a January 2018 report in the Telegraph, an IS supporter in the UK who shared large amounts of IS propaganda had his account reactivated nine times by Facebook after he complained to the moderators that Facebook was stifling his free speech.

Where clearly illegal material is visible, Facebook cooperates proactively with law enforcement. Waheba Issa Dais, a Wisconsin 45-year-old mother of two, is in federal custody after being charged on Wednesday this week with providing 'material support or resources to a foreign terrorist organization.'

The Milwaukee Journal Sentinel reports, "The investigation appears to have started in January after Facebook security told the FBI that there was a 'Wisconsin-based user posting detailed instructions on how to make explosive vest bombs in support of ISIS,' the affidavit states. The person behind the Facebook posts, who the FBI said they determined was Dais, 'also appeared to be engaged in detailed question and answer sessions discussing substances used to make bombs'."

Ricin is mentioned. It would be easy enough for a word like 'ricin' to activate an alert. It is the less obvious extremist content for which machine-learning algorithms are needed. But machine learning is still a technology with great promise and only partial delivery. "The real message is that Facebook has made it more difficult for ISIS and Al-Qaida to use their platform for recruiting," Ron Gula, president and co-founder of Gula Tech Adventures, told SecurityWeek.

"Machine learning is great at recognizing patterns. Unfortunately, if the terrorists change their content and recruiting methods, they may still be able to leverage Facebook. This type of detection could turn into a cat and mouse game where terror organizations continuously change their tactics, causing Facebook to constantly have to update their rules and intelligence about what should be filtered."

The extremists won't make it easy. "They have become very good at putting a reasonable 'face' on much of their online recruiting material," explains John Dickson, Principal at the Denim Group. "Once they have someone interested is when they fully expose their intent. Given this situation, I’m not sure how [the algorithms] don’t create a ton of false positives and start taking down legitimate Islamic content."

Nearly every security activity creates false positives. "I suspect this will be no different," he continued. "Machine learning or more specifically supervised learning likely will help aid security analysts attempting to distinguish between legitimate jihadist recruiting material and generic Islamic content. But it will still need a human to make the final decisions – and that human is likely to be biased by the American attitude towards freedom of speech."
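
To make the contrast concrete, the following is a minimal sketch of the two approaches discussed above -- a crude keyword alert and a supervised text classifier whose scores would go to a human reviewer. It assumes Python with scikit-learn, and the watch-list terms and training examples are entirely hypothetical; it is not Facebook's system, which also relies on image matching, graph signals and large human review teams.

# Minimal, hypothetical sketch of keyword alerting plus supervised text
# classification; illustrative only, not Facebook's actual moderation system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Obvious terms can trip an alert directly, as with 'ricin' above.
WATCHLIST = {"ricin", "explosive vest"}

def keyword_alert(post: str) -> bool:
    text = post.lower()
    return any(term in text for term in WATCHLIST)

print(keyword_alert("detailed instructions on how to make ricin"))  # True

# Hypothetical labeled examples: 1 = extremist propaganda, 0 = legitimate use
# such as news reporting or condemnation. Real training data would be large,
# multilingual and carefully curated.
posts = [
    "join the fighters and travel to the battlefield",
    "new propaganda video praising the attack",
    "news segment analyzing the group's propaganda video",
    "activists condemn the latest recruitment campaign",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(posts, labels)

# The classifier only outputs a score; a human reviewer makes the final call.
for post in ["documentary examines the recruitment campaign",
             "brothers, travel and join the fighters"]:
    print(post, "->", round(clf.predict_proba([post])[0][1], 2))

Even at this toy scale the model can only weigh word patterns, so distinguishing endorsement from condemnation of the same material is exactly where the human analyst Dickson describes still has to make the final decision.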

In the final analysis, Facebook is caught between competing demands: a very successful business model built on making 'friending' and posting easy; the First Amendment protecting free speech; and moral and legal demands to find and exclude disguised extremist needles hidden in a very large haystack of 2.2 billion monthly active Facebook users.


Facebook Admits Privacy Settings 'Bug' Affecting 14 Million Users
8.6.18 securityweek
Social

Facebook acknowledged Thursday a software glitch that changed the settings of some 14 million users, potentially making some posts public even if they were intended to be private.

The news marked the latest in a series of privacy embarrassments for the world's biggest social network, which has faced a firestorm over the hijacking of personal data on tens of millions of users and more recently for disclosures on data-sharing deals with smartphone makers.

Erin Egan, Facebook's chief privacy officer, said in a statement that the company recently "found a bug that automatically suggested posting publicly when some people were creating their Facebook posts."

Facebook said this affected users posting between May 18 and May 27 as it was implementing a new way to share some items such as photos.

That left the default or suggested method of sharing as public instead of only for specific users or friends.

Facebook said it corrected the problem on May 22 but was unable to change all the posts, so is now notifying affected users.

"Starting today we are letting everyone affected know and asking them to review any posts they made during that time," Egan said.

"To be clear, this bug did not impact anything people had posted before -- and they could still choose their audience just as they always have. We'd like to apologize for this mistake."

Facebook confirmed earlier this week that China-based Huawei -- which has been banned by the US military and is a lightning rod for cyberespionage concerns -- was among device makers authorized to see user data in agreements that had been in place for years.

Facebook has claimed the agreements with some 60 device makers dating from a decade ago were designed to help the social media giant get more services into the mobile ecosystem.

Nonetheless, lawmakers expressed outrage that Chinese firms were given access to user data at a time when officials were trying to block their access to the US market over national security concerns.

The revelations come weeks after chief executive Mark Zuckerberg was grilled in Congress about the hijacking of personal data on some 87 million Facebook users by Cambridge Analytica, a consultancy working on Donald Trump's 2016 presidential campaign.


Facebook confirms privacy settings glitch in a new feature exposed private posts of 14 Million users

8.6.18 securityaffairs Social

Facebook admitted that a bug in its platform changed the settings of some 14 million users, potentially exposing their private posts to the public.
This comes during one of the worst periods in the history of the social network giant, which is still dealing with the fallout of the Cambridge Analytica privacy scandal that affected at least 87 million users.

“We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time,” said Erin Egan, Facebook’s chief privacy officer.

“To be clear, this bug did not impact anything people had posted before—and they could still choose their audience just as they always have. We’d like to apologize for this mistake.”

According to Facebook, the glitch affected users who published posts between May 18 and May 27, a period during which the company was rolling out a new feature for sharing content such as images and videos.

Something evidently went wrong, and the suggested audience for new posts defaulted to public rather than to the user's previous, private choice.

The social network giant confirmed it corrected the bug on May 22, but it was unable to change the visibility of all the affected posts.

The company is now notifying affected users and apologizing for the technical issue.

This is only the latest embarrassing incident to involve Facebook in recent weeks. In April, Princeton researchers reported that Facebook's "Login With Facebook" authentication feature could be exploited to collect user information that was supposed to be private.

Earlier this week, Facebook confirmed that its APIs granted more than 60 device makers, including Amazon, Apple, Microsoft, BlackBerry, and Samsung, access to its users' data so that they could implement Facebook messaging functions.

The Chinese vendor Huawei was one of the device makers authorized to use the API. In May, the Pentagon ordered retail outlets on US military bases to stop selling Huawei and ZTE products because of the unacceptable security risk they pose.

Facebook highlighted that the agreements were signed ten years ago and that it has operated the API in a way intended to prevent any abuse.


Facebook Deals With Chinese Firm Draw Ire From U.S. Lawmakers
7.6.18 securityweek
Social

Facebook drew fresh criticism from US lawmakers following revelations that it allowed Chinese smartphone makers, including one deemed a national security threat, access to user data.

The world's largest social network confirmed late Tuesday that China-based Huawei -- which has been banned by the US military and is a lightning rod for cyberespionage concerns -- was among device makers authorized to see user data.

Facebook has claimed the agreements with some 60 device makers dating from a decade ago were designed to help the social media giant get more services into the mobile ecosystem.

Nonetheless, lawmakers expressed outrage that Chinese firms were given access to user data at a time when officials were trying to block their access to the US market over national security concerns.

Senator Ed Markey said Facebook's chief executive has some more explaining to do following these revelations.

"Mark Zuckerberg needs to return to Congress and testify why @facebook shared Americans' private information with questionable Chinese companies," the Massachusetts Democrat said on Twitter.

"Our privacy and national security cannot be the cost of doing business."

Other lawmakers zeroed in on the concerns about Huawei's ties to the Chinese government, even though the company has denied the allegations.

"This could be a very big problem," tweeted Senator Marco Rubio, a Florida Republican.

"If @Facebook granted Huawei special access to social data of Americans this might as well have given it directly to the government of #China."

Representative Debbie Dingell called the latest news on Huawei "outrageous" and urged a new congressional probe.

"Why does Huawei, a company that our intelligence community said is a national security threat, have access to our personal information?" said Dingell, a Michigan Democrat, on Twitter.

"With over 184 million daily Facebook users in US & Canada, the potential impact on our privacy & national security is huge."

'Approved experiences'

Facebook, which has been blocked in China since 2009, also had data-access deals with Chinese companies Lenovo, OPPO and TCL, according to the company, which had similar arrangements with dozens of other device makers.

Huawei, which has claimed national security fears are unfounded, said in an emailed statement that its access was the same as that of other device makers.

"Like all leading smartphone providers, Huawei worked with Facebook to make Facebook's service more convenient for users. Huawei has never collected or stored any Facebook user data."

The revelations come weeks after Zuckerberg was grilled in Congress about the hijacking of personal data on some 87 million Facebook users by Cambridge Analytica, a consultancy working on Donald Trump's 2016 campaign.

Facebook said its contracts with phone makers placed tight limits on what could be done with data, and "approved experiences" were reviewed by engineers and managers before being deployed, according to the social network.

Any data obtained by Huawei "was stored on the device, not on Huawei's servers," according to Facebook mobile partnerships chief Francisco Varela.

Facebook said it does not know of any privacy abuse by cellphone makers who years ago were able to gain access to personal data on users and their friends.

It has argued the data-sharing with smartphone makers was different from the leak of data to Cambridge Analytica, which obtained private user data from a personality quiz designed by an academic researcher who violated Facebook's rules.

Facebook is winding up the interface arrangements with device makers as the company's smartphone apps now dominate the service. The integration partnership with Huawei will terminate by the end of this week, according to the social network.

The news comes following US sanctions on another Chinese smartphone maker, ZTE -- which was not on the Facebook list -- for violating export restrictions to Iran.

The ZTE sanctions limiting access to US components could bankrupt the manufacturer, but Trump has said he is willing to help rescue the firm, despite objections from US lawmakers.


Germany's Continental Bans WhatsApp From Work Phones
6.6.18 securityweek
Social

German car parts supplier Continental on Tuesday said it was banning the use of WhatsApp and Snapchat on work-issued mobile phones "with immediate effect" because of data protection concerns.

The company said such social media apps had "deficiencies" that made it difficult to comply with tough new EU data protection legislation, especially their insistence on having access to a user's contact list.

"Continental is prohibiting its employees from using social media apps like WhatsApp and Snapchat in its global company network, effective immediately," the firm said in a statement.

Some 36,000 employees would be affected by the move, a Continental spokesman told AFP.

The company, one of the world's leading makers of car parts, has over 240,000 staff globally.

A key principle of the European Union's new general data protection regulation (GDPR), which came into force on May 25, is that individuals must explicitly grant permission for their data to be used.

But Continental said that by demanding full access to address books, WhatsApp for example had shifted the burden onto the user, essentially expecting them to contact everyone in their phone to let them know their data was being shared.

"We think it is unacceptable to transfer to users the responsibility of complying with data protection laws," said Continental's CEO Elmar Degenhart.

The Hanover-based firm said it stood ready to reverse its decision once the service providers "change the basic settings to ensure that their apps comply with data-protection regulations by default".

The issue of how personal information is used and shared online was given fresh urgency after Facebook earlier this year admitted to a massive privacy breach that allowed a political consultancy linked to US President Donald Trump's 2016 campaign to harvest the data of up to 87 million users.


Facebook Says Chinese Phone Makers Got Access to Data
6.6.18 securityweek
Social

Facebook on Tuesday confirmed that a Chinese phone maker deemed a national security threat by the US was among companies given access to data on users.

Huawei was able to access Facebook data to get the leading social network's applications to perform on smartphones, according to the California-based company.

"Facebook along with many other US tech companies have worked with them and other Chinese manufacturers to integrate their services onto these phones," Facebook mobile partnerships leader Francisco Varela said in a released statement.

"Given the interest from Congress, we wanted to make clear that all the information from these integrations with Huawei was stored on the device, not on Huawei's servers."

Facebook also had data access deals with Lenovo, OPPO and TCL of China, according to Varela.

"Facebook's integrations with Huawei, Lenovo, OPPO and TCL were controlled from the get go," Varela said.

Huawei has long disputed any links to the Chinese government, while noting that its infrastructure and computing products are used in 170 countries.

"Concerns about Huawei aren't new," US Senator Mark Warner, vice chairman of the senate select committee on intelligence, said Tuesday in a released statement.

"I look forward to learning more about how Facebook ensured that information about their users was not sent to Chinese servers."

Facebook said that it does not know of any privacy abuse by cellphone makers who years ago were able to gain access to personal data on users and their friends.

Before now-ubiquitous apps standardized the social media experience on smartphones, some 60 device makers like Amazon, Apple, Blackberry, HTC, Microsoft and Samsung worked with Facebook to adapt interfaces for the Facebook website to their own phones, the company said.

Facebook said it is winding up the interface arrangements with device makers as the company's smartphone apps dominate the service. The integration partnership with Huawei will terminate by the end of this week, according to the social network.

The social media leader said it "disagreed" with the conclusions of a New York Times report that found that the device makers could access information on Facebook users' friends without their explicit consent.

Facebook enabled device makers to interface with it at a time when it was building its service and they were developing new smartphone and social media technology.

But the report raised concerns that massive databases on users and their friends -- including personal data and photographs -- could be in the hands of device makers.