- Security -




Purging Long-Forgotten Online Accounts: Worth the Trouble?
14.10.2018 securityweek
Security
The internet is riddled with long-forgotten accounts on social media, dating apps and various shopping sites used once or twice. Sure, you should delete all those unused logins and passwords. And eat your vegetables. And go to the gym.

But is it even possible to delete your zombie online footprints — or worth your time to do so?

Earlier this month, a little-used social network notified its few users that it will soon shut down. No, not Google Plus; that came five days later, following the disclosure of a bug that exposed data on a half-million people. The earlier shutdown involved Path, created by a former Facebook employee in 2010 as an alternative to Facebook. Then there's Ello sending you monthly emails to remind you that this plucky but little-known social network still exists somehow.

It might not seem like a big deal to have these accounts linger. But with hacking in the news constantly, including a breach affecting 50 million Facebook accounts, you might not want all that data sitting around.

You might not have a choice if it's a service you use regularly. But for those you no longer use, consider a purge. Plus, it might feel good to get your online life in order, the way organizing a closet does.

Take dating apps such as Tinder, which tend to linger long after you've found a steady partner or given up on finding one. You might have deleted Tinder from your phone, but the ghost of your Tinder account is still out there — just not getting any matches, as Tinder shows only "active" users to potential mates.

Or consider Yahoo. Long after many people stopped using it, Yahoo in 2016 suffered the biggest publicly disclosed hack in history, exposing the names, email addresses, birth dates and other information from 3 billion active and dormant accounts. This sort of information is a goldmine for malicious actors looking to steal identities and gain access to financial accounts.

Trouble is, cleaning up your digital past isn't easy.

For one, finding all the old accounts can be a pain. For some of us, it might not even be possible to recall every dating site and every would-be Twitter that never was, not to mention shopping or event ticketing sites you bought one thing from and forgot about.

Then, you'll have to figure out which of your many email accounts you used to log in to a service, then recover passwords and answer annoying security questions — assuming you even remember what your favorite movie or fruit was at the time. Only then might you discover that you can't even delete your account. Yahoo, for instance, didn't allow users to delete accounts or change personally identifying information they shared, such as their birthday, until pressured to do so after the breach.

Even without these hurdles, real life gets in the way. There are probably good reasons you still haven't organized your closet, either.

Perhaps a better approach is to focus on the most sensitive accounts. It might not matter that a news site still has your login if you never gave it a credit card or other personal details (of course, if you reused your bank password there, you might be at risk).

Rich Mogull, CEO of data security firm Securosis, said people should think about what information they had provided to services they no longer use and whether that information could be damaging should private posts and messages inadvertently become public.

Dating sites, in particular, can be a trove of potentially damaging information. Once you're in a relationship, delete those accounts.

It's wise to set aside a time each year — maybe after you do your taxes or right after the holidays — to manage old accounts, said Theresa Payton, who runs the security consulting company Fortalice Solutions and served under President George W. Bush as White House chief information officer.

For starters, visit haveibeenpwned.com. This popular tool lets you enter your email addresses and check whether they have been compromised in a data breach. Ideally, the breached company should have notified you already, but that's not guaranteed. Change passwords and close accounts you don't need.
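
For readers who want to script this check across many addresses, the site also exposes an API. The sketch below is a minimal illustration, assuming the current v3 breached-account endpoint (which requires an API key); the key and the email address are placeholders.

```python
# Minimal sketch: ask Have I Been Pwned which known breaches include an address.
# Assumes the v3 API; the API key and email address below are placeholders.
import requests

API_KEY = "YOUR_HIBP_API_KEY"    # placeholder
EMAIL = "someone@example.com"    # placeholder

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{EMAIL}",
    headers={"hibp-api-key": API_KEY, "user-agent": "account-cleanup-script"},
)

if resp.status_code == 404:
    print("No known breaches for this address.")
else:
    resp.raise_for_status()
    for breach in resp.json():
        print("Found in breach:", breach["Name"])
```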

You might also check justdeleteme.xyz, which Payton said could help navigate the "complexities of saying goodbye." The site has a list of common and obscure services. Looking through it might remind you of some of the services you've used back in the day. Click on a service for details on how to delete your account.

You might discover that some services simply won't let you go. That could be an oversight from a startup prioritizing other features over a deletion tool. Or, it could be intentional to keep users coming back. There's not much you can do beyond deleting as many posts, photos and other personal data as you can.

What to do with accounts of people who have died is a whole other story. That said, the prospect of the Grim Reaper — and what sorts of information about you may be exposed after you shed this mortal coil — might just be the motivation you need to clean up your online trail.


Mozilla Delays Distrust of Symantec Certificates
12.10.2018 securityweek
Security

Mozilla this week announced that the distrust of older Symantec certificates, initially planned for Firefox 63, will be delayed.

Following a long series of problems regarding the wrongful issuance of certificates by the Certification Authority (CA) run by Symantec, one of the oldest and largest CAs, browser makers decided to remove trust in all Symantec-issued certificates before the end of this year.

Both Google and Mozilla said they would gradually remove trust in all TLS/SSL certificates issued by Symantec. Google, which removed trust in certificates that Symantec issued before June 1, 2016, with the release of Chrome 66 in April, wants to remove trust in all Symantec certificates in Chrome 70.

Mozilla was aiming to make a similar move in October 2018, with the release of Firefox 63, but now says it has decided to delay the distrust plans. The browser currently only warns users when they encounter a website that uses a Symantec-issued certificate.

According to the browser maker, it took this decision after learning that well over 1% of the top 1,000,000 websites still use Symantec certificates, meaning that impact on users would be much greater than initially anticipated.

Last year, Symantec sold its CA business to DigiCert, which immediately started issuing new certificates to replace those issued by Symantec. In March, DigiCert said it had replaced most of the Symantec-issued certificates and that less than 1% of the top 1 million websites hadn’t made the switch yet.

As it turns out, many popular sites are still using Symantec certificates, apparently unaware of the planned distrust. Others, Mozilla says, are likely waiting until Chrome 70 arrives on October 23 to finally replace their Symantec certificates.
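
Site operators who are unsure whether they are affected can simply look at who issued the certificate their server presents. The snippet below is an illustrative sketch, not anything from Mozilla or DigiCert: it reads the issuer of a host's certificate using Python's standard library, and the hostname and the list of legacy Symantec brand names are assumptions chosen for demonstration.

```python
# Illustrative check: does a site still serve a certificate issued under one of
# the legacy Symantec brands? The hostname and brand list are examples only.
import socket
import ssl

HOST = "example.com"  # placeholder hostname
LEGACY_BRANDS = ("Symantec", "VeriSign", "GeoTrust", "Thawte", "RapidSSL")

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

issuer = dict(item for field in cert["issuer"] for item in field)
org = issuer.get("organizationName", "")
print("Issuer organization:", org)
if any(brand.lower() in org.lower() for brand in LEGACY_BRANDS):
    print("The certificate appears to come from a legacy Symantec brand; plan a replacement.")
```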

“Unfortunately, because so many sites have not yet taken action, moving this change from Firefox 63 Nightly into Beta would impact a significant number of our users. It is unfortunate that so many website operators have waited to update their certificates, especially given that DigiCert is providing replacements for free,” Mozilla’s Wayne Thayer notes.

He says that Mozilla is well aware of the additional risk caused by a delay in the implementation of the distrust plan, but also points out that the delay is in the best interest of Firefox users, given the current situation.

The distrust, however, is still planned for later this year, once more sites have replaced their Symantec TLS certificates. Firefox 63 Nightly already distrusts Symantec-issued certificates, but the change won’t be implemented in Firefox 63 Beta; it is now scheduled for Firefox 64 Beta instead.

“We continue to strongly encourage website operators to replace Symantec TLS certificates immediately. Doing so improves the security of their websites and allows the 10’s of thousands of Firefox Nightly users to access them,” Thayer concludes.


KnowBe4 Brings Artificial Intelligence to Security Awareness Training
10.10.2018 securityweek
Security

It seems that you cannot have a new security product without a machine learning component. It makes sense: machine learning recognizes patterns and returns probabilities, and risk (cyber security is all about risk) is likewise a matter of patterns and probabilities. Binary security is beginning to look a bit old.

Now machine learning has entered security awareness training. Security awareness training firm KnowBe4 has added a Virtual Risk Officer (VRO), a Virtual Risk Score (VRS), and Advanced Reporting (AR) features to its security awareness training and simulated phishing platform.

"We've integrated a deep learning neural network that evaluates how risk changes over time within an organization," explains Stu Sjouwerman, CEO of KnowBe4, "which helps cybersecurity professionals measure how their security awareness program performs."

Traditional simulated phishing tells organizations which of their employees are deceived by a simulated phish, and which ones recognize it. On its own, it gives no real measure of the probability of an employee falling for a future -- perhaps malicious -- phish.

This is the purpose of the VRO and the VRS. The VRO helps the security team to identify risk at the user, group or organizational level. This makes future awareness training plans more relevant. The VRS highlights which groups are particularly vulnerable to social engineering attacks -- again allowing the security team to more finely focus its training.

Machine learning works by analyzing data and detecting patterns that would normally be missed by human analysts. KnowBe4's approach is to draw the raw data from five categories. These are breach history (has the user been exposed in a prior breach made publicly known); extent of training; the state of their 'phish-prone percentage' (which is a KnowBe4 measure of the user's fail points); the level of risk for their operational group (for example, working in finance would be a high risk level); and a booster feature that allows the security team to adjust for known risk factors.

Sjouwerman told SecurityWeek how this works. "Each user will have a Personal Risk Score. The risk score for an organization's groups and an organization is a calculation based on the Personal Risk Scores of all of the members of that group or organization."

That personal risk score, he continued, "is calculated by several different factors including how likely the user is to be targeted with a phishing or social engineering attack, how they will react to these types of events, and how severe the consequences would be if they fell for an attack."

For example, the Personal Risk Score of employees in an Accounting Department will be higher than those of employees in the Graphic Design Department, because an Accounting Department has access to sensitive financial data. "Similarly," he added, "a CEO or CFO will have a higher risk score than a Marketing Director, because the C-level executives may have access to classified or proprietary information about the organization."
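
KnowBe4 has not published the internals of its model (the score is produced by a deep learning network), but the aggregation Sjouwerman describes can be pictured with a simple weighted scheme. The sketch below is purely hypothetical: the factor names, weights and scaling are invented for illustration and are not KnowBe4's actual algorithm.

```python
# Hypothetical illustration of a personal risk score rolled up into a group score.
# Factor names, weights and scaling are invented; KnowBe4's real model is not public.
from statistics import mean

WEIGHTS = {
    "breach_history": 0.20,    # exposed in a publicly known breach?
    "training_gap": 0.20,      # how little awareness training has been completed
    "phish_prone_pct": 0.30,   # simulated phishing fail rate
    "group_risk": 0.20,        # e.g. accounting or finance roles score higher
    "booster": 0.10,           # admin-applied adjustment for known risk factors
}

def personal_risk_score(factors):
    """Combine per-user factors (each normalized to 0..1) into a 0..100 score."""
    return 100 * sum(WEIGHTS[name] * value for name, value in factors.items())

def group_risk_score(members):
    """Derive a group's score from the personal scores of its members."""
    return mean(personal_risk_score(m) for m in members)

accounting = [
    {"breach_history": 1.0, "training_gap": 0.4, "phish_prone_pct": 0.3,
     "group_risk": 0.9, "booster": 0.2},
    {"breach_history": 0.0, "training_gap": 0.7, "phish_prone_pct": 0.5,
     "group_risk": 0.9, "booster": 0.2},
]
print(f"Accounting group risk score: {group_risk_score(accounting):.1f}")
```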

The effect of KnowBe4's neural network is to bring together all of these different factors into a single metric: a virtual risk score based on more than just the user's phishing and training performance. The process is rounded off by KnowBe4's new Advanced Reporting feature. This, says the company, gives access to more than 60 built-in reports with insights that provide a holistic view of the entire organization over time. Each report is available immediately and gives visibility into the organization's security awareness performance based on trainings taken and simulated phishing data.

"Before AR and VRO," explains Sjouwerman, "the admin could see Phish-prone percentage and training but could not correlate those two items. AR allows the correlation and VRO takes that to the next level by also incorporating additional data such as user exposure and role within the organization."

Clearwater, FL-based KnowBe4 was founded by Stu Sjouwerman in 2010. It raised $30 million in Series B financing led by Goldman Sachs Growth Equity (GS Growth) in October 2017; bringing the total funding to date to $44 million.


How Secure Are Bitcoin Wallets, Really?
9.10.2018 securityaffairs
Security

Purchasers of Bitcoin wallets usually have one priority topping their lists: security. What’s the truth about the security of these wallets?
When buying conventional wallets for coins and paper money, people often prioritize characteristics like the size, color, shape, and number of compartments.

However, purchasers of Bitcoin wallets — the software programs that facilitate storing someone’s cryptocurrency-related wealth — usually have one priority topping their lists: security.

So, the companies behind those wallets wisely emphasize why their products are more secure than what competitors offer and why that’s the case. But, beyond the marketing language, what’s the truth about the security of these wallets?

Guessing an Individual Bitcoin Wallet Key Is Tremendously Unlikely, Crypto Expert Says
People appreciate comparisons when thinking about the likelihood something might happen. Brian Liotti of the website Crypto Aquarium had that in mind when he carried out research and found the probability of guessing a Bitcoin key for one wallet is as likely as winning the Powerball nine times in a row.
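
A rough back-of-the-envelope calculation makes the comparison plausible. A Bitcoin private key is effectively a 256-bit number, and the advertised odds of a single Powerball jackpot are roughly 1 in 292 million; the arithmetic below (an illustration, not Liotti's own figures) shows the two probabilities land in the same ballpark.

```python
# Back-of-the-envelope comparison (illustrative only).
key_space = 2 ** 256             # possible Bitcoin private keys (~1.16e77)
powerball_odds = 292_201_338     # advertised odds of a single jackpot win

nine_wins = powerball_odds ** 9  # odds of winning nine consecutive jackpots
print(f"Guessing one specific key: 1 in {key_space:.2e}")
print(f"Nine straight Powerball jackpots: 1 in {nine_wins:.2e}")
```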

So, that’s undoubtedly comforting to people who raise their eyebrows at the prospect of using a digital method to store their cryptocurrency investments.

A Wallet Owner Gets Locked out for Months
There’s also the detailed account of Mark Frauenfelder, who owned a Trezor wallet and couldn’t access it for several traumatizing months after misplacing the PIN and recovery words for the software. His tale of woe shows that a hacker couldn’t simply contact a Bitcoin wallet manufacturer, masquerade as a wallet owner and talk their way into access.

A Teenager Hacked a Tamper-Proof Wallet
Ledger, a French company that sells Bitcoin wallets, found itself receiving unwanted publicity when a British teenager disclosed a proof of concept that allowed him to break into the Ledger Nano S, a wallet the company had advertised as unhackable. The hack focuses on the device’s microcontrollers.

One of them stores the wallet’s private key and the other acts as a proxy. The proxy microcontroller is reportedly so insecure it cannot differentiate between authentic firmware and that which a cybercriminal creates.

This case study, as well as others associated with less-than-locked-down Bitcoin wallets, emphasizes how people should not get too comfortable after buying a Bitcoin wallet, even one considered as being among the best of the best. The same goes for storing other types of money: Following best practices is always the ideal approach.

If a person owns collector coins, it’s essential to learn how to protect them from potential sources of damage — such as temperature extremes, acids and humidity. Although they exist in the cyber-realm, Bitcoins need safeguards of their own concerning hackers, especially as even the most high-tech options show they need improvement.

Alleged Break-Ins to McAfee’s Wallet
The Bitfi Bitcoin wallet, backed by cybersecurity executive John McAfee, offered a $250,000 bounty to anyone who could successfully hack it. And, in August 2018, a security research firm called OverSoft NL claimed success. The company behind the wallet then issued a second bounty in an attempt to find the weaknesses.

People in the cybersecurity sector expressed their frustrations about the reward, since participants have to abide by the company’s rules. In other words, if cybersecurity experts hacked the wallet in a way the company didn’t specify, they would not win the reward.

But, hacks carried out by malicious players never seem to follow such parameters. Often, they involve unusual methods that exploit vulnerabilities the manufacturer never fathomed. Other people said they had hacked the wallet before OverSoft NL, but not per the company’s rules.

Even representatives from the cybersecurity firm expressed doubts that they’d actually receive the money, believing the bounty to be nothing more than a marketing ploy. The bounty program has since been discontinued, with the company promising to launch another soon.

The Marketing Language Could Tempt Hackers
Whenever something in the tech industry gets presented as impossible to infiltrate, both ethical and malicious hackers frequently see it as a challenge to prove otherwise.

As John McAfee spoke of his wallet on Twitter, the tone could easily come across as overconfident and cocky: “For all you naysayers who claim that ‘nothing is unhackable’ & who don’t believe that my Bitfi wallet is truly the world’s first unhackable device, a $100,000 bounty goes to anyone who can hack it…” And indeed, hackers got to work and accepted the challenge.

Cryptocurrency Wallet Owners Cannot Be Too Careful
Although we’ve seen here that research shows Bitcoin wallet keys are extraordinarily hard to guess, and that one wallet owner couldn’t even regain access to his own funds after losing his PIN, case studies show hacks are still possible.

People should always perform adequate research about security measures built into individual wallets but also use them intelligently by following good cyber security habits and never assuming a wallet couldn’t get hacked.


Windows 10 October 2018 Update could cause CCleaner to stop working
7.10.2018 securityaffairs
Security

Users are reporting problems with the CCleaner software that appears to be partially broken after the installation of Windows 10 October 2018 Update
Many Windows users are reporting problems after the installation of Windows 10 October 2018 Update, a few days ago a Reddit user discovered the Task Manager tool was showing inaccurate CPU usage after the upgrade.

Other users discovered that some files on their machines were deleted after the Windows 10 October 2018 Update was installed.

Now users are reporting problems with the CCleaner software that appears to be partially broken after the installation of Windows 10 October 2018 Update (version 1809).

Some users claim that certain features stopped working after upgrading their operating system. Several reported that CCleaner failed to clean recent files and documents in File Explorer.

According to the member crizal of the official Piriform forum, CCleaner 5.47.6716 no longer cleans the following:

Recent files/documents in File Explorer
Reliability History
Windows Event Logs (CCleaner shows they’re cleaned but they’re still there)
Registry Cleaner keeps finding the same Application Paths Issue after every reboot (System32\DriverStore\FileRepository)
CCleaner must force close Edge browser prior to every cleaning, even if the browser has been closed (not that big a deal to me)
Piriform plans to fix the issue soon.

“Thank you for reporting. We are aiming to fix this for the next release. Keep your eyes on the Beta Releases forum as we may publish it there first to get the fix out more quickly,” said a forum moderator.


Hackers Earn $150,000 in Marine Corps Bug Bounty Program
5.10.2018 securityweek
Security

The U.S. Department of Defense’s sixth public bug bounty program, Hack the Marine Corps, has concluded, and white hat hackers who took part in the challenge earned more than $150,000.

Hack the Marine Corps was hosted by the HackerOne bug bounty platform and it ran for 20 days. Over 100 experts were invited to test the security of the Marine Corps’ public websites and services and they discovered nearly 150 unique vulnerabilities.

Of the total number of flaws, roughly half were reported during a live hacking event that took place at the DEF CON conference in August. More than $80,000 was awarded for the security holes discovered during the event.

“Hack the Marine Corps was an incredibly valuable experience,” said Major General Matthew Glavy, Commander of the U.S. Marine Corps Forces Cyberspace Command. “When you bring together this level of talent from the ethical hacker community and our Marines we can accomplish a great deal. What we learn from this program assists the Marine Corps in improving our warfighting platform. Our cyber team of Marines demonstrated tremendous efficiency and discipline, and the hacker community provided critical, diverse perspectives. The tremendous effort from all of the talented men and women who participated in the program makes us more combat ready and minimizes future vulnerabilities.”

The Pentagon and HackerOne have been organizing bug bounty programs since 2016, including Hack the Pentagon, Hack the Army, Hack the Air Force, and Hack the Defense Travel System.

The ethical hackers who took part in these challenges discovered more than 5,000 vulnerabilities in government systems, for which they earned over $500,000.


Wickr Announces General Availability of Anti-Censorship Tool
5.10.2018 securityweek
Security

As the balkanization of the internet continues, traveling businessmen are left with concerns over the integrity of their communications from some regions of the globe. Increasing censorship, blocking and other restrictions in many world regions have left internet users unprotected because secure communications are banned.

In some countries such as Saudi Arabia and UAE, says Wickr, enterprise deployments may be difficult because of the national Telco's monopoly over networks. They restrict various end points and UDP, so all traffic goes through them for monetization or tracking purposes. As a result, some customers have to deploy outside of their region (such as India), to avoid having UDP packets get rate-limited and their tools rendered unusable.

To help solve this problem, Wickr has announced the general availability of its secure open access protocol to circumvent censorship for all Wickr Me and Wickr Pro (via admin console) users. It combines unrestricted access and end-to-end encrypted collaboration features in a single app, no matter where users are located.

The enterprise version of the tool was announced in August 2018, with the promise that it would be rolled out to other versions of Wickr, including the free version, in the future. That roll out is confirmed today. The tool comes from Wickr's collaboration with Psiphon. Psiphon describes it as a circumvention tool that utilizes VPN, SSH and HTTP Proxy technology to provide uncensored access to Internet content.

The Psiphon technology uses SSH as its core protocol. This prevents deep packet inspection by ISPs. On top of this, Psiphon has added an obfuscated-openssh layer that transforms the SSH handshake into a random stream, and adds random padding to the handshake.

When a Wickr client starts with Psiphon enabled, the client attempts to connect to up to 10 different servers simultaneously, and uses the first to be fully established. This minimizes user wait time if any of the servers are blocking certain protocols, are blocked by their address, or already running at full capacity and rejecting new connections.
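
The "try several candidate servers at once and keep whichever establishes first" behavior is easy to picture in code. The snippet below is a generic illustration of that racing pattern, not Psiphon's implementation; the server addresses are placeholders.

```python
# Generic illustration of racing several servers and keeping the first connection
# that fully establishes. Not Psiphon code; the addresses are placeholders.
import asyncio

CANDIDATE_SERVERS = [(f"server{i}.example.net", 443) for i in range(1, 11)]

async def try_connect(host, port):
    reader, writer = await asyncio.open_connection(host, port)
    return host, port, reader, writer

async def first_established(servers, timeout=10):
    tasks = [asyncio.ensure_future(try_connect(h, p)) for h, p in servers]
    try:
        for fut in asyncio.as_completed(tasks, timeout=timeout):
            try:
                return await fut   # first fully established connection wins
            except OSError:
                continue           # that server was blocked or full; keep waiting
        raise ConnectionError("no candidate server could be reached")
    finally:
        for task in tasks:
            if not task.done():
                task.cancel()      # drop slower or unreachable candidates

# Usage: host, port, reader, writer = asyncio.run(first_established(CANDIDATE_SERVERS))
```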

That last point -- avoiding servers that are already at capacity -- is important. It means that the Wickr/Psiphon product has value beyond just foreign travel. Domestic mobile workers often use low capacity public wifi with limited security. Wickr's encryption can secure the content, while Psiphon ensures minimal delay in the communications.

It is important to note that the Wickr/Psiphon tool is a communication optimization, security and anti-censorship tool -- it is not an anti-law enforcement tool. "Wickr provides full transparency to both law enforcement and our users on the type of metadata that is collected through our products, as well as any data requests we receive," Joel Wallenstrom, CEO and President at Wickr, told SecurityWeek. "The data we capture is very limited in scope to protect user privacy but done in a way that also supports law enforcement."

ISPs, however, remain the weak link in any secret communication. "As to ISPs," continued Wallenstrom, "they are in the business of monetizing user data and were given the green light to do so last year." They can legally collect and sell the data they collect -- but their storage of collected data presents a further risk.

"The risk to users of exposure could be very high and breaches over the years have pretty much confirmed this," he continued. "Short of stopping customer data collection and monetization altogether, ISPs should be transparent about what information they take and ensure proper safeguards are in place. In turn, users can limit their exposure by using privacy tools such as a VPN that masks browsing data from ISPs and encrypted messengers that protect sensitive communications from getting caught in a data sweep."

Psiphon was started more than 10 years ago at Citizen Lab, one of the world's top research hubs dedicated to building anti-surveillance tools. Psiphon has been responsible for keeping Telegram accessible during protests in Iran, WhatsApp accessible in Brazil, and other services available in similar situations. "There are probably 30 to 40 countries in the world where governments, ISPs and security agencies are all colluding together to control the local population and economy," Michael Hull, president of Psiphon Inc, told SecurityWeek. "This is the problem that Psiphon was founded to solve."

San Francisco-based Wickr was founded in 2011 by Chris Howell, Kara Coppa, Nico Sell, and Robert Statica. It has raised a total of $73 million in venture funding.


Chronicle Unveils VirusTotal Enterprise
28.9.2018 securityweek
Security

Chronicle on Thursday announced VirusTotal Enterprise, a new platform that combines existing VirusTotal capabilities with expanded functionality and new features to help organizations protect their networks.

Chronicle is a subsidiary of Google's parent company, Alphabet Inc. VirusTotal became part of Chronicle in January 2018.

According to the cybersecurity firm, VirusTotal Enterprise allows users to search for known and unknown malware, and to analyze relationships between malware samples. These tasks can be automated using the company’s API.
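
Much of that automation goes through VirusTotal's public REST API. As a simple illustration (not specific to the Enterprise tier), the sketch below looks up the scan results for a file hash via the v3 API; the API key is a placeholder and the hash is the well-known EICAR test file.

```python
# Minimal sketch: look up a file hash in VirusTotal via the public v3 API.
# The API key is a placeholder; the hash is the EICAR test file's MD5.
import requests

API_KEY = "YOUR_VT_API_KEY"
FILE_HASH = "44d88612fea8a8f36de82e1278abb02f"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{FILE_HASH}",
    headers={"x-apikey": API_KEY},
)
resp.raise_for_status()

stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
print("Flagged malicious by", stats.get("malicious", 0), "of", sum(stats.values()), "engines")
```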

Chronicle told SecurityWeek that pricing for VirusTotal Enterprise starts at $10,000 per year and goes up depending on usage.

With VirusTotal Enterprise, the existing VirusTotal malware intelligence service is extended with new capabilities provided by Private Graph, an improved version of the Graph visualization tool.

Private Graph allows security teams to enhance malware relationship graphs with information from their own assets, including machines, departments and users. And unlike regular graphs, private graphs cannot be seen by users of the public VirusTotal service.

Chronicle says private graphs allow teams to collaborate securely in incident investigations, and they automatically extract node commonalities to identify indicators of compromise (IoC).

The malware search features are also more advanced in VirusTotal Enterprise. Chronicle promises that searches are 100 times faster, more powerful, and more accurate due to additional search parameters. For instance, users can extract a fake app’s icon and identify all malware samples that use the same icon file.

All of the features and capabilities provided by VirusTotal Enterprise are accessible from a single and unified interface. Existing two-factor authentication can be used to protect Enterprise accounts, and new API management helps control corporate access.

“We continue to leverage the power of Google infrastructure to expand the search and analysis capabilities of VirusTotal,” Chronicle said in a blog post. “As part of Chronicle, we also continue to add features to make VirusTotal more useful for enterprise security analysts. VirusTotal Enterprise will give those analysts new ability to search more data, faster, and to visualize it in new ways.”

The company says the features in VirusTotal Enterprise will become available to new and existing customers in the coming weeks.


Microsoft Boosts Azure Security With Array of New Tools
25.9.2018 securityweek
Security

At its Ignite conference this week, Microsoft announced improved security features for Azure with the addition of Microsoft Authenticator, Azure Firewall, and several other tools to the cloud computing platform.

After announcing Azure Active Directory (AD) Password Protection in June to combat bad passwords, Microsoft is now bringing password-less logins to Azure AD connected apps with the addition of support for Microsoft Authenticator.

The tool, Microsoft claims, can replace passwords with “a more secure multi-factor sign in that combines your phone and your fingerprint, face, or PIN.” In addition to reducing risks, this approach also offers a better user experience by eliminating passwords.

To better protect networked resources in Azure, Microsoft is making ExpressRoute Global Reach and Azure Virtual WAN generally available, adding them to built-in services such as network security groups, Web Application Firewall (WAF), Virtual Private Network, and DDoS protection.

Microsoft also announced ExpressRoute support in preview for Virtual WAN, for seamless transit across VPN, SDWAN and ExpressRoute circuits connected to Virtual WAN.

Azure Firewall also becomes generally available, allowing organizations to enforce their network security policies while also taking advantage of the cloud. Additionally, there’s Azure Virtual Network TAP, which delivers “tap” capabilities for virtual networks, allowing for the continuous mirroring of traffic from a virtual network to a packet collector with a Virtual Network terminal access point (TAP).

“The mirrored traffic is a deep copy of the inbound and outbound VM network traffic and can be streamed to a destination IP endpoint, a 3rd party security appliance or an internal load balancer, in the same virtual network or peered virtual network,” Microsoft explains.

To protect data not only when in transit or being stored, but also while it’s in use, Microsoft is enabling confidential computing for its cloud platform, to protect “the confidentiality and integrity of customer data and code while it’s processed in the public cloud through the use of Trusted execution environments (TEEs).”

Backed by the latest generation of the Intel Xeon processors with Intel SGX, a new family of virtual machines in Azure (DC series) is now accessible to all Azure customers, allowing them to build, run, and test SGX based applications and leverage confidential computing.

The Redmond-based software giant also plans on open-sourcing a new SDK “to provide a consistent API surface and enclaving abstraction, supporting portability across enclave technologies and flexibility in architecture across all platforms from cloud to edge,” and which will get support for Intel SGX technology and ARM TrustZone soon afterward.

Customers can now leverage Azure Security Center to customize their SQL Information Protection policy, in addition to being able to discover, classify, label, and protect sensitive data in Azure SQL Database using the capabilities in Azure SQL.

The Security Center continuously assesses the security state of workloads across Azure, other clouds, and on-premises, and can also identify vulnerabilities and provide customers with actionable recommendations. Starting this week, new capabilities will arrive in Security Center, such as Secure Score, which delivers a dynamic report card for one’s security posture and which now covers all of Microsoft 365.

Microsoft also announced Microsoft Threat Protection this week, which combines detection, investigation, and remediation across endpoints, email, documents, identity, and infrastructure in the Microsoft 365 admin console.

The company is also expanding its threat protection capabilities “to include detecting threats on Linux, Azure Storage, and Azure Postgress SQL and providing endpoint detection and response capabilities for Windows Server by integrating with Windows Defender ATP.”

Building on the Information Protection solutions launched last year, Microsoft is now rolling out the Security & Compliance center to deliver a single, integrated approach to creating data sensitivity and data retention labels.

“We are also previewing labeling capabilities that are built right into Office apps across all major platforms, and extending labeling and protection capabilities to include PDF documents. The Microsoft Information Protection SDK, now generally available, enables other software creators to enhance and build their own applications that understand, apply, and act on Microsoft’s sensitivity labels,” Microsoft says.

Microsoft says it is also working with tech companies, policymakers, and institutions on strategies to protect the midterm elections. In June, the Windows maker launched the Defending Democracy program to “protect political campaigns from hacking, increase security of the electoral process, defend against disinformation, and bring greater transparency to political advertising online” and plans on expanding it globally.

“Part of this program is the AccountGuard initiative that provides state-of-the-art cybersecurity protection at no extra cost to all candidates and campaign offices at the federal, state, and local level, as well as think tanks and political organizations. We’ve had strong interest in AccountGuard and in the first month onboarded more than 30 organizations,” the software company notes.

The tech giant also plans on launching a new key management solution, Azure Dedicated hardware security module (HSM), to provide customers with full administrative and cryptographic control over the HSMs that process their encryption keys. Furthermore, Microsoft plans to improve the existing processes for the instances when a customer asks it to access their computer resources to resolve an issue.


Symantec Completes Internal Accounting Investigation
25.9.2018 securityweek  
Security

Symantec announced on Monday that it has completed its internal accounting audit, and while some issues have been uncovered, only one customer transaction has an impact on financial statements.

Symantec stock dropped from nearly $30 to just under $20 after the company announced the investigation on May 10. It recovered slightly a few days later after more details were made public, but again dove under $20 after the firm revealed plans to cut as much as 8% of its workforce, representing roughly 1,000 employees.

Shares went up approximately 4 percent after the firm announced the completion of the audit.

The investigation was launched after a former employee raised concerns about “the Company’s public disclosures including commentary on historical financial results, its reporting of certain Non-GAAP measures including those that could impact executive compensation programs, certain forward-looking statements, stock trading plans and retaliation.”

The investigation, conducted with the help of a forensic accounting firm and independent legal counsel, identified issues related to the review, approval and tracking of transition and transformation expenses. It also found “certain behavior inconsistent with the Company’s Code of Conduct and related policies.”

The audit uncovered a customer transaction for which $13 million was erroneously recognized as revenue in the fourth quarter of FY 2018. The company has determined that $12 million of that amount should be deferred and the financial results for the fourth quarter of FY 2018 and the first quarter of FY 2019 be revised.

Symantec says it’s taking steps to address issues uncovered by the investigation and its board of directors has adopted recommendations made by the audit committee. This includes appointing a separate chief accounting officer and a separate chief compliance officer, and clarifying and enhancing the code of conduct.

No employment actions have been recommended as a result of the investigation, the cybersecurity firm said.

The audit has prevented Symantec from filing its annual report for the previous fiscal year (Form 10-K) and the report for the first fiscal quarter that ended on June 29 (Form 10-Q) with the Securities and Exchange Commission (SEC). The company is working to complete the preparation of the forms and hopes to have the annual report ready within a month.

The SEC has launched its own investigation into the matter after being contacted by Symantec.


Cloudflare Launches Security Service for Tor Users
24.9.2018 securityweek
Security

Cloudflare on Thursday announced a new service to provide Tor users with improved security and performance, while also aiming at reducing malicious network traffic.

The service is being launched in collaboration with the Tor Project and is set to become available for all those using Tor Browser 8.0. Because the idea and mechanics behind this service are not specific to Cloudflare, anyone can reuse them on their own site, the company says.

The idea behind the new service, the website protection provider says, is that while the Tor Browser does mitigate the issue of privacy on the web, it does not filter malicious traffic, and it actually hides its source. To tackle this, many sites use CAPTCHA challenges, which make it more expensive for bots to operate on the Tor network, but those challenges are displayed to real users as well.

Cloudflare’s newly announced service aims at eliminating this problem and ensures that Tor users visiting Cloudflare websites won’t have to face a CAPTCHA. The feature also “enables more fine-grained rate-limiting to prevent malicious traffic,” the company says.

“From an onion service’s point of view each individual Tor connection, or circuit, has a unique but ephemeral number associated to it, while from a normal server’s point of view all Tor requests made via one exit node share the same IP address,” Cloudflare’s Mahrud Sayrafi explains.

The circuit number allows onion services to distinguish individual circuits and terminate those that behave maliciously.

The idea behind the Cloudflare Onion Service, the site protection company explains, is to have domain names first resolve to an .onion address, with the browser then asking for a valid certificate to establish an encrypted connection with the host.

“As long as the certificate is valid, the .onion address itself need not be manually entered by a user or even be memorable. Indeed, the fact that the certificate was valid indicates that the .onion address was correct,” Sayrafi points out.

This approach, Cloudflare claims, only requires that the certificate presented by the onion service be valid for the original hostname, meaning that even a free certificate for a domain can be used instead of an expensive EV certificate.

“The Cloudflare Onion Service presents the exact same certificate that we would have used for direct requests to our servers, so you could audit this service using Certificate Transparency (which includes Nimbus, our certificate transparency log), to reveal any potential cheating,” Sayrafi says.

Because the service works without running entry, relay, or exit nodes, the only requests that Cloudflare would see as a result of this feature are those already headed to them. No new traffic is introduced and the company “does not gain any more information about what people do on the internet,” Sayrafi explains.

Cloudflare has made the Onion Routing service available to all of its customers and has enabled it by default for Free and Pro plans. The option can be accessed in the Crypto tab of the Cloudflare dashboard. The company recommends the use of Tor Browser 8.0 to take full advantage of the feature.


New Tool Helps G Suite Admins Uncover Security Threats
20.9.2018 securityweek
Security

Google on Tuesday announced the general availability of a tool that helps G Suite customers identify security issues within their domains, and take action.

Referred to as Investigation tool, the feature was made available as part of an Early Adopter Program in July, and is now accessible to all G Suite Enterprise and Enterprise for Education editions.

Building on existing capabilities in the security center, Google says the tool will provide admins and security analysts with the ability to identify, triage, and remediate security threats.

The investigation tool includes advanced search capabilities, to easily identify security issues within a domain, and can be used to triage threats regardless of whether they are targeting users, devices, or data.

More importantly, the utility provides admins with the option to take bulk actions on any of the discovered issues, to limit the propagation and impact of threats.

Based on the feedback received from those participating in the Early Adopter Program, Google has already improved the investigation tool with a series of new features.

Thus, the search giant says it has enhanced security to reduce insider risk by offering the option to require a second admin to verify large actions in the investigation tool.

Customers can now also take advantage of more fine-grained visibility while investigating incidents, the company says. Email header analysis is available to show attributes and the delivery path of a message, along with visibility into Team Drive settings and the option to change access permissions directly from the utility.

The investigation tool also includes a simplified interface, featuring user auto-complete. Email addresses and names from the organization, for example, will be auto-completed as an admin types in parameters.

“The investigation tool, with its simple UI, makes it easier for admins to identify threats without having to worry about analyzing logs which can be time-consuming and require complex scripting,” Google said.


Secureworks Launches New Security Maturity Model
15.9.2018 securityweek Security

Secureworks has launched the Secureworks Security Maturity Model. It is released, announces Secureworks, in response to "research which shows that more than one-third of US organizations (37%) face security risks that exceed their overall security maturity. Within that group, 10% face a significant deficiency when it comes to protecting themselves from the threats in their environment."

Secureworks is offering a complimentary evaluation (an online process supported by a security expert) to help organizations benchmark their own security maturity. The model incorporates elements of well-known frameworks, such as those from the National Institute of Standards and Technology (NIST) and ISO 27001/02, with insight from Secureworks' global threat intelligence. It comprises four levels: guarded, informed, integrated and resilient.

Further information, and a route map for attaining security maturity, can be found in a white paper titled '5 Critical Steps to a More Mature Security Posture' (PDF). This paper suffers from one major drawback: security leaders who have achieved the title or function of CISO in a major organization will already know and understand everything contained in the paper.

It does, however, lay out the necessary steps for achieving greater maturity that would be useful for security officers that are either new to their function, or are employed by small organizations.

But there remains what is possibly a fundamental flaw. The very first step for the CISO is to "Agree on business needs, objectives and tolerance". The paper provides no solution on how that agreement can be reached; but agreement is the very basis of aligning security efforts with business priorities -- and is possibly the biggest difficulty faced by CISOs.

The problem is that defining risk is a business problem. Setting risk tolerance levels is ultimately a CEO function. The CISO function is to mitigate risk up to the tolerance level. The CISO's difficulty is getting accurate and timely information from the business -- with adequate budget -- in order to mitigate the risk. How to achieve this is possibly the biggest weakness for any maturity model, and is not resolved in the Secureworks white paper.

The paper gives an example: "The CIO determines that the business need is to 'introduce controls to reduce the risk of lost or stolen PII which subsequently reduces the chance of a data breach occurring and hence breaching government regulation.' This is more than just saying 'stop the organization being hacked' as it provides the need, the requirement and the consequences of not acting."

But the instruction comes downward. If the CIO doesn't give that instruction, the CISO isn't aware of the requirement -- unless he or she proactively ensures that he or she is independently aware of the need by fully understanding the business beforehand. This is one of security's biggest problems -- how to fully engage with business leadership so that the business side understands what security can and is doing, and that security understands what business needs (which can still be overridden at Board-level when setting risk tolerance levels).

A real-life example could potentially be seen in any large hypothetical tech giant that collects and keeps personal European data. There have been European laws requiring safe storage of personal data for decades. The regulatory sanctions on breach of those laws -- before GDPR -- were minor. A CISO could assume, this is the law, I must comply. The business leaders could override this and covertly say we can accept the risk and ultimately pay any fines out of petty cash. It is not for a CISO to make such decisions on risk tolerance; but the CISO must necessarily understand the business thinking.

There is no easy solution to this without the CISO getting the CEO on board, and the CEO giving the CISO authority to demand that business leaders engage fully with the security team. The extent of the problem was highlighted in a recent survey by Varonis. Nearly all security teams (96%) believe that their security planning is aligned with business risk, but far fewer (73%) of business leaders agree. Similarly, while 94% of the security teams believe that business acts on what they say, only 76% of the business leaders agreed.

There is no doubt that some organizations have solved this problem by having a business-enlightened CISO and a security-enlightened CEO. In such circumstances, the organization will probably already have achieved a high security maturity score. Going through the Secureworks security maturity model will still be worthwhile: the graphs and details will provide verification of existing practices and may highlight anything still missing.

Where the relationship between business and security does not yet exist, it will need to be solved before the model becomes useful.

It should be said however, that the process towards more mature security as outlined by Secureworks provides a valuable checklist of security processes. The irony is that the same paper warns, "Emerging, high profile issues like ransomware often trigger a reactive posture where the emphasis is on reviewing a checklist of specific 'known' threats and risks. In fact, being resilient to a breach is dependent on an integrated set of solutions and controls, instrumented for visibility across the whole environment, and made effective by people who follow the right policy, process and procedures to manage them." Conforming to checklists does not provide security.

Secureworks was founded in 1998 by Michael Pearson and Joan Wilbanks. It was acquired by Dell and became Dell Secureworks in 2011. It left Dell and became a public company (majority owned by Dell) in 2016.


OpenSSL 1.1.1 Released With TLS 1.3, Security Improvements
12.9.2018 securityweek Security

The OpenSSL Project on Tuesday announced the release of OpenSSL 1.1.1, the new Long Term Support (LTS) version of the cryptographic software library.

According to the organization, the most important new feature in OpenSSL 1.1.1 is TLS 1.3, which the Internet Engineering Task Force (IETF) published last month as RFC 8446.

Since OpenSSL 1.1.1 is API and ABI compatible with OpenSSL 1.1.0, most applications that work with the older version can take advantage of the benefits provided by TLS 1.3 simply by updating to the newer version.

TLS 1.3 has numerous benefits, but the ones highlighted by the OpenSSL Project are improved connection times, the ability of clients to immediately start sending encrypted data to servers, and improved security due to the removal of outdated cryptographic algorithms.
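
One practical consequence is that software sitting on top of an updated OpenSSL can begin negotiating TLS 1.3 with little or no code change. The sketch below is one way to verify this, assuming Python 3.7 or later linked against OpenSSL 1.1.1; the hostname is only an example.

```python
# Check whether the local OpenSSL build supports TLS 1.3 and whether a given
# server negotiates it. Assumes Python 3.7+ linked against OpenSSL 1.1.1;
# the hostname is only an example.
import socket
import ssl

print("Linked OpenSSL:", ssl.OPENSSL_VERSION)
print("TLS 1.3 available:", ssl.HAS_TLSv1_3)

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # allow TLS 1.2 as a fallback

host = "www.openssl.org"  # example host
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("Negotiated protocol:", tls.version())  # "TLSv1.3" if supported end to end
```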

Other noteworthy changes in OpenSSL 1.1.1 include a complete rewrite of the random number generator, support for several new cryptographic algorithms, security improvements designed to mitigate side-channel attacks, support for the Maximum Fragment Length TLS extension, and a new STORE module that implements a uniform and URI-based reader of stores that contain certificates, keys, CRLs and other objects.

The new crypto algorithms include SHA3, SHA512/224 and SHA512/256, EdDSA, X448, multi-prime RSA, SM2, SM3, SM4, SipHash and ARIA.

“OpenSSL 1.1.1 has been a huge team effort with nearly 5000 commits having been made from over 200 individual contributors since the release of OpenSSL 1.1.0,” OpenSSL developer Matt Caswell wrote in a blog post. “These statistics just illustrate the amazing vitality and diversity of the OpenSSL community. The contributions didn’t just come in the form of commits though. There has been a great deal of interest in this new version so thanks needs to be extended to the large number of users who have downloaded the beta releases to test them out and report bugs.”

Since OpenSSL 1.1.1 is the new LTS release, it will receive support for at least five years. The 1.1.0 release will receive support for one year starting today, and the 1.0.2 branch, which until now was the LTS release, will receive full support until the end of 2018 and then only security updates until the end of next year.


Google Launches Alert Center for G Suite
10.9.2018 securityweek Security

Google is making it easier for G Suite administrators to access notifications, alerts, and actions by bringing them all together in a single place with the launch of a new alert center.

Currently available in Beta, the alert center provides admins with a comprehensive view of essential notifications and allows them to easily take action to better serve and protect their organizations, Google says. The new feature was designed to deliver insights that help admins better assess an organization’s exposure to security issues at the domain and user levels.

“In addition, G Suite Enterprise edition domains can use the G Suite security center for integrated remediation of issues surfaced by alerts,” the Internet company explains.

The alert center will bring together notifications on security threats and monitoring, as well as critical system alerts.

As part of the Beta launch, the center includes three types of alerts: Google Operations (details on G Suite security and privacy issues that Google is investigating), Gmail phishing and spam (spikes in user-reported phishing), and mobile device management (information on devices that are exhibiting suspicious behavior or have been compromised).

The Beta program has been launched for all G Suite customers.

Additionally, Google is making it easier for users to set phones and tablets as company-owned devices. Starting on September 19, all users who add their G Suite account to a new Android device before adding their personal account will be asked to set up the device as their own or as company-owned.

“If you have advanced mobile device management but don’t register your company-owned devices in the Admin console, your users must choose to set up their devices as company-owned,” Google explained.

At the moment, the choice is only displayed to users if their organizations have Device Owner mode enabled. Starting September 19, that option will disappear from the Admin console and a new screen will be displayed to them on new (and recently factory-reset) devices running Android 6.0 or higher.

Also on September 19, users with company-owned Android devices and work profiles will be allowed to install any app from the managed Google Play store by default. Organizations, however, can restrict app availability to whitelisted apps.


Microsoft to Charge for Windows 7 Security Updates
8.9.2018 securityweek Security

Microsoft this week revealed plans to offer paid Windows 7 Extended Security Updates (ESU) for three years after traditional support for the operating system officially ends.

Released in 2009, Windows 7 currently powers around 39% of all machines running Microsoft’s Windows platform, but is slowly losing ground to Windows 10 (currently found on over 48% of Windows systems).

Microsoft stopped selling Windows 7 in 2014 (some variants are still available to OEMs) and ended mainstream support for the operating system in early 2015. The company plans to end extended support for Windows 7 on January 14, 2020.

Past that date, organizations will have to pay in order to continue taking advantage of support for the platform.

Paid Windows 7 Extended Security Updates (ESU), Microsoft now says, will be available through January 2023. The tech company will sell the Windows 7 ESU on a per-device basis and plans on increasing the price for it each year.

“Windows 7 ESUs will be available to all Windows 7 Professional and Windows 7 Enterprise customers in Volume Licensing, with a discount to customers with Windows software assurance, Windows 10 Enterprise or Windows 10 Education subscriptions,” Microsoft says.

The software giant also revealed that it will continue to provide support for Office 365 ProPlus on devices with active Windows 7 Extended Security Updates (ESU) through January 2023. This means that all those buying the Windows 7 ESU will be able to continue running Office 365 ProPlus.

January 2023, which is the end-of-support date for Windows 8.1, also represents the end-of-support date for Office 365 ProPlus on that platform version, Microsoft now reveals. Windows Server 2016, on the other hand, will offer support for Office 365 ProPlus until October 2025.

Currently, Microsoft is relying on a semi-annual schedule for Windows 10 and Office 365 ProPlus updates, targeting September and March, and the company will continue using this Windows 10 update cycle.

To make sure customers have enough time to plan for updates within their environments, however, Microsoft is making changes to the support life of Windows 10 updates.

Thus, currently supported feature updates of Windows 10 Enterprise and Education editions (versions 1607, 1703, 1709, and 1803) will be supported for 30 months from their original release date. As for future feature updates, those targeted for a September release will be supported for 30 months, while those targeted for a March release will be supported for 18 months.

According to Microsoft, all feature releases of Windows 10 Home, Windows 10 Pro, and Office 365 ProPlus will continue to be supported for 18 months, regardless of whether targeted for release in March or September.


Fighting Alert Fatigue With Security Orchestration, Automation and Response
7.9.2018 securityweek Security

New research confirms and quantifies two known challenges for security operations teams: they don't have enough staff and would benefit from automated tools.

Demisto's State of SOAR (security orchestration, automation and response) Report, 2018 (PDF) was researched via the ViB community of more than 1.2 million IT practitioners and decision makers. A total of 262 security professionals from 245 companies in a wide range of industry sectors and sizes, mostly in the U.S., took part in the survey. The results show that the two primary and related challenges for SOC and IR staff are not enough time (80.39% of respondents) and too few staff (78.76%) to handle the workload.

“We’ve seen plenty of research that highlights the unending growth in security alerts, a widening cyber security skills gap, and the ensuing fatigue that is heaped upon understaffed security teams," explains Rishi Bhargava, Co-founder of Demisto. "That’s why we conducted this study which allowed us to dig deeper into these issues, their manifestations, as well as possible solutions. Our results produced captivating insights into the state of SOAR in businesses of all sizes.”

"The pattern that stands out starkly from these results," notes the report, "is that the security skills gap continues to be a challenge." The finer detail of these results, however, is less expected: retaining staff is not much easier than finding them (60.1% against 75.2%). Sixty-seven percent of security staff move on to new companies in less than four years, with 26.4% leaving within two years.

This is primarily down to money. Nearly 65% of those who leave their current employment do so because they can earn more elsewhere. Furthermore, asked what is important to infosec employees, 71.26% replied 'a higher salary'. The often lauded 'company culture' ranked only fifth in importance at 49.43%.

The implication is that smaller companies with smaller budgets hire newcomers, train them and provide the experience that is attractive to larger companies who simply poach experienced security staff with more money. This in turn means that it is the smaller business that is most affected by the overall security skills gap.

It's worth noting, however, that moving on to greener pastures is not the only cause of failing to retain existing staff. As many as 27.2% of security employees leave because of overwork and fatigue. This echoes a comment from Jerome Segura at Malwarebytes: "There's a lot of burnout in infosec. It's tough, but that's the reality. If you're in infosec, you're on call 24/7."

According to the report's respondents, their primary concerns -- or pain points -- are that they currently receive too many alerts (cited by 46.4% of respondents; an issue that will be aggravated by staff shortages) and too many false positives within those alerts (cited by 69% of respondents; an issue that is technology based).

Affecting both of these (but not specifically cited as a pain point) is the number of different tools used by the security team. More than three-quarters of the respondents have to learn how to use more than four different security tools for effective security operations and incident response. "With the number of tools constantly on the rise, high training times and attrition rates truly spell out the gravity of the human capital challenge facing the industry today."

Bhargava explains further: “Security deployment has become fractured with innumerous specialized tools, making it increasingly difficult for security teams to manage alerts across disparate systems and locations, particularly considering the talent shortage present in security today,” he said. “There is a great opportunity for SOAR tools to help unify these products and processes, using automated response to reduce alert fatigue and direct analyst resources to the alerts which are most likely to cause harm.”

It is Demisto's premise -- it is itself a SOAR vendor -- that SOAR technology can help alleviate these difficulties. "An important goal of our study was to find and validate linkages between high incident loads, high response teams, and the desire for automation." First, the report quantifies the individual workload: more than 12,000 alerts are reviewed each week, and each alert takes more than four days to resolve.

There are simply too many alerts for the security team to handle manually. It is, says the report, "clear that there’s a vicious cycle in effect. Alert volume leads to increased MTTR [mean time to respond] which in turn leads to even more alert volume." Automation as a solution is already in use, with more than half of the respondents automating or seeing the benefit in automating much of the incident response workload.

"Proactively," says the report, "security operations and threat hunting ranked high on the ‘automation candidates’ list, highlighting security teams’ desire for automation to assist them in identifying incipient threats and vulnerabilities. Reactively, incident response, tracking IR metrics, and case management were felt as good candidates for partial or full automation."

For now, SOAR is still an emergent technology. "A sign of SOAR’s emergent nature is highlighted by around 20% of our responders being unsure about where to include SOAR in their budgets," admits the report. "A growing acknowledgement of SOAR in security budgets will come with increased awareness and continued verifiable benefits in existing SOAR deployments."

Demisto believes, however, that SOAR has the potential to improve proactive threat hunting, standardize incident processes, improve investigations, accelerate and scale incident response, simplify security operations and maintenance, and generally fight the alert fatigue that comes with too few staff responding to too many alerts.

Cupertino, Calif.-based Demisto raised $20 million in a Series B funding round in February 2017, bringing the total raised to $26 million. In May 2018, Gartner included Demisto in its report on 'Cool Vendors in Security Operations and Vulnerability Management'.


Mozilla Appoints New Policy, Security Chief
6.9.2018 securityweek Security

Mozilla on Tuesday announced that Alan Davidson has been named the organization’s new Vice President of Global Policy, Trust and Security.

According to Mozilla Chief Operating Officer Denelle Dixon, Davidson will work with her on scaling and reinforcing the organization’s “policy, trust and security capabilities and impact.”

His responsibilities will also include leading Mozilla’s public policy work on promoting an open and “healthy” Internet, and supervising a security and trust team whose focus is on promoting “innovative privacy and security features.”

“For over 15 years, Mozilla has been a driving force for a free and open Internet, building open source products with industry-leading privacy and security features. I am thrilled to be joining an organization so committed to putting the user first, and to making technology a force for good in people’s lives,” said Davidson.

Prior to joining Mozilla, Davidson worked for the U.S. Department of Commerce, the New America think tank, and Google. At Google, he helped launch the tech giant’s Washington D.C. office and led the company’s public policy and government relations efforts in the Americas.

“Alan is not new to Mozilla,” Dixon said. “He was a Mozilla Fellow for a year in 2017-2018. During his tenure with us, Alan worked on advancing policies and practices to support the nascent field of public interest technologists — the next generation of leaders with expertise in technology and public policy who we need to guide our society through coming challenges such as encryption, autonomous vehicles, blockchain, cybersecurity, and more.”

Mozilla last week laid out plans to add various anti-tracking features to Firefox in an effort to protect users and help them choose what information they share with the websites they visit.

The new features include a mechanism designed to block trackers that slow down page loads, stripping cookies and blocking storage access from third-party tracking content, and blocking trackers that fingerprint users and sites that silently mine cryptocurrencies. Some of these new features are already present in Firefox Nightly and are expected to become available in the stable release of the web browser in the near future.


Uber Announces Ramped Up Passenger Security
6.9.2018 securityweek Security

Uber chief Dara Khosrowshahi said on Wednesday the smartphone-summoned ride service is reinforcing safeguards for passengers and their personal information.

Features to be added to the app in the coming months include "Ride Check," which uses location tracking already built into the service to detect when cars have stopped unexpectedly.

If a crash is suspected, the driver and passenger will receive a prompt on their phones to order a courtesy ride or use the in-app emergency call button introduced earlier this year.

"This technology can also flag trip irregularities beyond crashes that might, in some rare cases, indicate an increased safety risk," Khosrowshahi said in a blog post.

"For example, if there is a long, unexpected stop during a trip, both the rider and the driver will receive a Ride Check notification to ask if everything is OK."

Uber, which operates in 65 countries, has disrupted transport in many locations despite regulatory hurdles and resistance from taxi operators.

The company has been touting a safety-first message amid plans for an initial public offering of shares late next year.

Khosrowshahi said the service will begin leaving pick-up and drop-off addresses out of drivers' trip history logs, showing only general areas to avoid creating databases of sensitive locations such as home addresses.

The service already lets drivers and passengers waiting to be picked up communicate through the app without revealing their phone numbers. People can also request pick-ups at intersections instead of specific street addresses.

"Uber has a responsibility to help keep people safe, and it's one we take seriously," Khosrowshahi said.

"We want you to have peace of mind every time you use Uber, and hope these features make it clear that we've got your back."


Arjen Kamphuis, the Dutch associate of Julian Assange, went missing in Norway

4.9.2018 securityaffairs Security

Julian Assange associate and author of “Information Security for Journalists” Arjen Kamphuis has disappeared; the Norwegian police are working on the case.
Media agencies worldwide are reporting the strange disappearance of Arjen Kamphuis, the Julian Assange associate. The news was confirmed by WikiLeaks on Sunday; the man has been missing since August 20, when he left his hotel in the Norwegian town of Bodø.

WikiLeaks (@wikileaks), September 1, 2018: “.@JulianAssange associate and author of "Information Security for Journalists" @ArjenKamphuis has disappeared according to friends (@ncilla) and colleagues. Last seen in Bodø, #Norway, 11 days ago on August 20.”
According to WikiLeaks, Kamphuis had bought a ticket for a flight departing on August 22 from Trondheim, which is far from Bodø.

His friends believe he disappeared either in Bodø, Trondheim or on the way between the two.

WikiLeaks (@wikileaks), September 2, 2018: “Update on the strange disappearance of @ArjenKamphuis. Arjen left his hotel in Bodø on August 20. He had a ticket flying out of Trondheim on August 22. The train between the two takes ~10 hours, suggesting that he disappeared within hours in Bodø, Trondheim or on the train.”
A website set up to gather information on the missing person says: “He is 47 years old, 1.78 meters tall and has a normal posture. He was usually dressed in black and carrying his black backpack. He is an avid hiker,” the German website dw.com reported.
At the time of writing, there have been two unconfirmed sightings, one in Alesund, Norway, and the other in Ribe, Denmark.

The Norwegian authorities started an investigation into the case on Sunday.

“We have started an investigation,” police spokesman Tommy Bech told the news agency AFP, adding that the police “would not speculate about what may have happened to him.”

Ancilla (@ncilla), September 3, 2018: “Hi everyone, small update about #FindArjen; The Norwegian police is working hard on the case now. We are keeping all options open, and hoping he will soon be found🤞”
According to the Norwegian tabloid Verdens Gang, the Norwegian authorities cannot access location data collected by Kamphuis’s mobile phone until he is officially reported missing in the Netherlands.


Oath Pays Over $1 Million in Bug Bounties
24.8.2018 securityweek Security

As part of its unified bug bounty program, online publishing giant Oath has paid over $1 million in rewards for verified bugs, the company announced this week.

In April, Oath paid more than $400,000 in bug bounties during a one-day HackerOne event in San Francisco, where 40 white hat hackers were invited to find bugs in the company’s portfolio of brands and online services, including Tumblr, Yahoo, Verizon Digital Media Services and AOL.

The event also represented an opportunity for the company to formally introduce its unified bug bounty program, which brought together the programs that were previously divided across AOL, Yahoo, Tumblr and Verizon Digital Media Services (VDMS).

Only two months later, the program had already surpassed $1 million in payouts for verified bugs, the media and tech company says.

“This scale represents a significant decrease in risk and a considerable reduction of our attack surface. Every bug found and closed is a bug that cannot be exploited by our adversaries,” Oath CISO Chris Nims now says.

Nims also points out that, following feedback received from participants, the company made a series of changes to its program policy. The company is now willing to hand out rewards for more types of vulnerabilities, although SQLi, RCE and XXE/XMLi flaws are still a priority.

Oath also published the payout table to increase the transparency of the program.

Additionally, the media giant has added EdgeCast to the bug bounty program, by opening the VDMS-EdgeCast-Partners and VDMS-EdgeCast-Customers private programs, which were previously operated separately, to the unified program.

Oath also plans on defining a structured scope for the suite of brands included in the unified bug bounty program. The purpose of this is to “help separate and define bugs for different assets.”

“The security landscape changes constantly, and we hope these updates to the bug bounty program will keep both Paranoids and security researchers alike more adept to detect threats before they cause damage to our community,” Nims concludes.


Code of App Security Tool Posted to GitHub
21.8.2018 securityweek Security

The code of DexGuard, software designed to secure Android applications and software development kits (SDKs), was removed from GitHub last week after being illegally posted on the platform.

The tool is developed by Guardsquare, a company that specializes in hardening Android and iOS applications against both on-device and off-device attacks, and is designed to protect Android applications and SDKs against reverse engineering and hacking.

The DexGuard software is built on top of ProGuard, a popular optimizer for Java and Android that Guardsquare distributes under the terms of the GNU General Public License (GPL), version 2. Unlike ProGuard, however, DexGuard is being distributed under a commercial license.

In the DMCA takedown notice published on GitHub, Guardsquare reveals that the DexGuard code posted on the Microsoft-owned code platform was illegally obtained from one of their customers.

“The listed folders (see below) contain an older version of our commercial obfuscation software (DexGuard) for Android applications. The folder is part of a larger code base that was stolen from one of our former customers,” the notice reads.

The leaked code was quickly removed from the open-source hosting platform, but it did not take long for it to appear on other repositories as well. In fact, Guardsquare said it discovered nearly 200 forks of the infringing repository and demanded that all of them be taken down.

HackedTeam, the account that first published the stolen code, also maintains repositories of the leaked RCSAndroid (Remote Control System Android) malware suite.

The spyware was attributed several years ago to the Italy-based Hacking Team, a company engaged in the development and distribution of surveillance technology to governments worldwide. Earlier this year, Intezer discovered a new backdoor based on the RCS surveillance tool.


ESET Launches New Enterprise Security Tools
17.8.2018 securityweek Security

ESET on Thursday announced the general availability of a new line of enterprise security solutions that include endpoint detection and response (EDR), forensic investigation, threat monitoring, sandbox, and management tools.

The new EDR tool is ESET Enterprise Inspector, which provides real-time data from the cybersecurity firm’s endpoint security platform. The product is fully customizable and ESET claims it offers “vastly more visibility for complete prevention, detection and response against all types of cyber threats.”

The new enterprise solutions also include ESET Threat Hunting, an on-demand forensic investigation tool that provides details on alarms and events, and ESET Threat Monitoring, which constantly monitors all Enterprise Inspector data for threats.

Enterprise Inspector is also complemented by ESET Dynamic Threat Defense, a cloud sandbox designed for a quick analysis of potential threats.

ESET also announced the availability of Security Management Center, a successor of Remote Administrator that provides network visibility, security management and reporting capabilities from a central console.

“We understand that global enterprises require cybersecurity solutions tailored specifically for their business as we have cooperated with a number of them to create our all-new suite of security solutions,” said Juraj Malcho, Chief Technology Officer at ESET. “We believe that any enterprise should be able to manage and customize their security solutions with ease, and we are proud that our new lineup reduces complexity and integrates seamlessly into their network.”

The new solutions were first demoed at the RSA Conference in May and they are now available to organizations in the United States, Canada, Czech Republic, Slovakia and the Netherlands.


New G Suite Alerts Provide Visibility Into Suspicious User Activity
9.8.2018 securityweek Security

After bringing alerts on state-sponsored attacks to G Suite last week, Google is now also providing administrators with increased visibility into user behavior to help identify suspicious activity.

Courtesy of newly introduced reports, G Suite administrators can keep an eye on account actions that seem suspicious and can also choose to receive alerts when critical actions are performed.

Admins can set alerts for password changes, and can also receive warnings when users enable or disable two-step verification or when they change account recovery information such as phone number, security questions, and recovery email.

By providing admins with visibility into these actions, Google aims to make it easier to identify suspicious account behavior and detect when user accounts may have been compromised.

Should an admin notice that a user has changed both the password and the password recovery info, which could be a sign that the account has been hijacked, they can leverage the reports to track time and IP address and determine if the change indeed seems suspicious.

Based on the findings, the G Suite administrator can then take the appropriate action to mitigate the issue and restore the user account, such as resetting the password and disabling 2-step verification.

Admins can also use the new reports to gain visibility into an organization's security initiatives, such as monitoring a domain-wide initiative to increase the adoption of two-step verification.

Access to these reports is available in Admin console > Reports > Audit > Users Accounts.
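For admins who prefer to pull the same user-account audit events programmatically, the Admin SDK Reports API exposes them under the 'user_accounts' application. The snippet below is a minimal sketch rather than Google's documented sample; it assumes a service account with domain-wide delegation and the reports audit read-only scope, and the credentials file and admin address are placeholders.

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.reports.audit.readonly"]

# Placeholder credentials file and delegated admin account.
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES, subject="admin@example.com")
reports = build("admin", "reports_v1", credentials=creds)

# 'user_accounts' covers account-security events such as password changes
# and 2-step verification being enabled or disabled.
response = reports.activities().list(
    userKey="all", applicationName="user_accounts", maxResults=50).execute()

for activity in response.get("items", []):
    when = activity["id"]["time"]
    actor = activity.get("actor", {}).get("email", "unknown")
    ip = activity.get("ipAddress", "n/a")
    for event in activity.get("events", []):
        print(when, actor, ip, event.get("name"))
```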

The new capabilities are set to gradually roll out to all G Suite editions and should become available to all customers within the next two weeks.

“G Suite admins have an important role in protecting their users’ accounts and ensuring their organization’s security. To succeed, they need visibility into user account actions. That’s why we’re adding reports in the G Suite Admin console that surface more information on user account activity,” Google notes.


NERC Names Bill Lawrence as VP, Chief Security Officer
8.8.2018 securityweek Security

North American Electric Reliability Corporation (NERC) on Tuesday announced that Bill Lawrence has been named vice president and chief security officer (CSO), and will officially step into the lead security role on August 16, 2018.

In his new role, Lawrence will be tasked with heading NERC's security programs executed through the Electricity Information Sharing and Analysis Center (E-ISAC), where he currently serves as senior director. He will also be responsible for directing security risk assessments and mitigation initiatives to protect critical electricity infrastructure across North America, the regulatory authority said.

As VP and CSO, Lawrence will also lead coordination efforts with government agencies and stakeholders on cyber and physical security matters, including analysis, response and sharing of critical sector information, NERC said.

Lawrence joined NERC in July 2012 and has directed the development of NERC’s grid security exercise, GridEx.

A not-for-profit international regulatory authority formed to reduce risks to the reliability and security of the grid, NERC's jurisdiction includes owners and operators that serve more than 334 million people.

Lawrence is a graduate of the U.S. Naval Academy with a bachelor’s degree in Computer Science, and flew F-14 Tomcats and F/A-18F Super Hornets for the U.S. Navy prior to joining NERC. He holds a master’s degree in International Relations from Auburn Montgomery and a master’s degree in Military Operational Art and Science from the Air Command and Staff College.

NERC is subject to oversight by the Federal Energy Regulatory Commission (FERC) and governmental authorities in Canada.


Enterprises: Someone on Your Security Team is Likely a Grey Hat Hacker
8.8.2018 securityweek Security

Companies Should Not Dismiss a Bit of Grey Hatting by Staff as Just a Form of Letting Off Steam

The cost of cybercrime is normally described as direct costs: the cost of remediation, forensic support, legal costs and compliance fines, etcetera. A new survey has sought to take a slightly different approach, looking at the organizational costs associated with cybercriminal activity.

Sponsored by Malwarebytes, Osterman Research surveyed 900 security professionals during May and June 2018 across five countries: the United States (200), UK (175), Germany (175), Australia (175), and Singapore (175). All respondents were employed either managing or working on cybersecurity issues in an organization of between 200 and 10,000 employees.

The survey (PDF) relates staff salaries, security budgets and remediation costs, and concludes that the average firm employing 2,500 staff in the U.S. can expect to spend more than $2 million per year for cybersecurity-related costs. The amount is lower in the other surveyed countries, but still close to, or above, $1 million per year. Interestingly, the survey took the unusual step of examining whether there is any correlation between the number of grey hats employed by a firm and the overall cost of cybersecurity.

The basic findings are much as we would expect, and have been confirmed by numerous other research surveys: most companies have been breached; phishing is the most common attack vector; mid-market companies are attacked more frequently than small companies and as frequently as large companies; and attacks occur with alarming frequency.

The most surprising revelation from this survey is the number of grey hats working within organizations, and black hats that have been employed by organizations. Grey hats are defined as computer security experts who may sometimes violate laws or typical ethical standards, but do not have the full malicious intent associated with a full-time black hat hacker.

Overall, the 900 respondents believe that 4.6% of their colleagues are grey hats -- or, as the report puts it, full-time security professionals who are black hats on the side. This varies by country: 3.4% in Germany, Australia and Singapore, 5.1% in the U.S., and as much as 7.9% in the UK.

Motivations provided by the respondents include black hat activity being more lucrative (63%), the challenge (50%), retaliation against an employer (40%), philosophical (39%), and, well, it's not really wrong, is it (34%)?

The extent of the income differential between a white hat employee and a black hat hacker is confirmed in a separate report from Bromium, published in April 2018: "High-earning cybercriminals can make $166,000+ per month; Middle-earners can make $75,000+ per month; Low-earners can make $3,500+ per month."

According to the Malwarebytes survey, the highest average starting salary for security professionals (in the U.S.) is $65,578 or just $5,464 per month (compared to $75,000 for middle-earning black hats). The difference is far greater in the UK, where the average starting salary for security professionals is less than $3,000 per month.

"It's interesting," Jerome Segura, lead malware intelligence analyst at Malwarebytes told SecurityWeek: "that despite the skills shortage, when companies hire new security staff, they generally don't pay them very much. There's kind of a contrast here, where companies and governments claim it's difficult to find the right people -- but when they do hire people they don't always pay them accordingly."

There appears to be an inevitable conclusion when correlating figures between the U.S. and the U.K. Not only do the U.S. companies pay their security staff much more than UK companies, they also have a considerably higher security budget ($1,573,197 in the U.S. compared to $350,157 in the UK). Can it be simply coincidence that the UK then has a higher percentage of grey hats within their companies, and that the cost of remediation is proportionately higher (14.7% of the security budget in the U.S., and 17.0% in the UK)?

It makes sense that remediation would take up a higher percentage of a small budget -- and it is tempting to think that the higher rewards of black-hattery would be attractive to poorly paid British staff. The U.S. government believes it has found an example in Marcus Hutchins, the British researcher who found and triggered the 'kill-switch' in WannaCry. That was pure white hat behavior -- but Hutchins was later arrested in the U.S. and accused of involvement in making and distributing the Kronos banking malware.

"Hutchins has many who support him," commented Segura, "and many who don't. But given the surprising number of employed white hats who are considered by their peers to be grey hats, it will be interesting to see how this turns out."

Segura accepts that comparatively low pay in the industry could be a partial cause for the surprisingly high number of grey hats working in infosec. He points out that the highest percentage of grey hats appear to work for mid-size companies that cannot afford the highest salaries, and which predominate in the UK. But he does not believe that finance is the only motivating factor. "There is a tricky line in the security profession," he told SecurityWeek. "Some people are pure hackers in the original non-malevolent sense, and they like to poke around to understand things better -- even if it is strictly speaking illegal. It also helps the job -- by peeking behind the curtain you get a better understanding of how the criminals operate and you can better defend against them."

But there's more. "Don't forget the social issues," he added. "Techies can be socially awkward and have difficulty in fitting into a corporate structure. The nerd in his bedroom is a bit of a cliche, but there is some truth to it. Working in a business corporate environment is not for everybody. And in infosec there is a lot of pressure. You can't fit the work into 9-to-5, five days a week -- so people work up to 80 hours or more per week without getting recompensed for it. That's a lot of mental pressure -- there's a lot of burnout in infosec. It's tough, but that's the reality. If you're in infosec, you're on call 24/7."

It would be wrong for companies to dismiss a bit of grey hatting by staff as just a form of letting off steam -- that could prove disastrous. But at the same time, the onus is on the employer to find the solution. Companies probably cannot compete with black hats financially -- but they should be as inclusive and supportive as possible toward staff facing the pressures of working in infosec.


New Law May Force Small Businesses to Reveal Data Practices
8.8.2018 securityweek Security

NEW YORK (AP) — A Rhode Island software company that sells primarily to businesses is nonetheless making sure it complies with a strict California law about consumers' privacy.

AVTECH Software is preparing for what some say is the wave of the future: laws requiring businesses to be upfront with customers about how they use personal information. California has already passed a law requiring businesses to disclose what they do with people's personal information and giving consumers more control over how their data is used — even the right to have it deleted from companies' computers.

Privacy rights have gotten more attention since news earlier this year that the data firm Cambridge Analytica improperly accessed Facebook user information. New regulations also took effect in Europe.

For AVTECH, which makes software to control building environmental issues, preparing now makes sense not only to lay the groundwork for future expansion, but to reassure customers increasingly uneasy about what happens to their personal information.

"People will look at who they're dealing with and who they're making purchases from," says Russell Benoit, marketing manager for the Warren, Rhode Island-based company.

Aware that California was likely to enact a data law, AVTECH began reviewing how it handles customer information last year. Although most of the company's customers are businesses, it expects it will increase its sales to consumers.

While it may yet face legal challenges, the California Consumer Privacy Act is set to take effect Jan. 1, 2020. It covers companies that conduct business in California and that fit one of three categories: Those with revenue above $25 million; those that collect or receive the personal information of 50,000 or more California consumers, households or electronic devices; and those who get at least half their revenue from selling personal information.

Although many small businesses may be exempt, those subject to the law will have to ensure their systems and websites can comply with consumer inquiries and requests. That may mean an added cost of thousands of dollars for small companies that don't have in-house technology staffers and need software and consulting help.

Under California's law, consumers have the right to know what personal information companies collect from them, why it's collected and who the businesses share, transfer or sell it to. That information includes names, addresses, email addresses, browsing histories, purchasing histories, professional or employment information, educational records and information about travel from GPS apps and programs. Companies must give consumers at least two ways to find out their information, including a toll-free phone number and an online form, and companies must also give consumers a copy of the information they've collected.

Consumers also have the right to have their information deleted from companies' computer systems, and to opt out of having the information sold or shared.

The law was modeled on the European Union's General Data Protection Regulation, which took effect May 25. The California Legislature passed its law to prevent a more stringent proposed law from being placed on the November election ballot.

Frank Samson hopes the California law will help prevent what he sees as troubling marketing tactics by some in his industry, taking care of senior citizens. When people inquire about senior care companies online, it's sometimes on sites run by brokers rather than care providers themselves.

"It may be in the fine print, or it may not be: We're going to be taking your info and sending it out to a bunch of people," says Samson, founder of Petaluma, California-based Senior Care Authority.

That steers many would-be clients to just a handful of companies, he says, and can mean seniors and families get bombarded with calls while dealing with stressful situations.

But many unknowns remain about the California law. The state attorney general's office must write regulations to accompany several provisions. There are inconsistencies between different sections of the law, and the Legislature would need to correct them, says Mark Brennan, an attorney with Hogan Lovells in Washington, D.C., who specializes in technology and consumer protection laws. Questions about the law might need to be litigated, including whether California can force businesses based in other states to comply, Brennan says. There are similar questions about the European GDPR.

In the meantime, small business owners who want to start figuring out if they're likely to be subject to the California law and GDPR can talk to attorneys and technology consultants who deal with privacy rights. Brennan suggests companies contact professional and industry organizations that are gathering information about the laws and how to comply.

Some small businesses may benefit, such as those developing software tied to the law. Among other things, the software is designed to allow companies and customers to see what information has been gathered, who has access to it and who it has been shared with.

The software, expected to stay free for consumers, could cost companies into the thousands of dollars a year depending on their size, says Andy Sambandam, CEO of Clarip, one of the software makers. But, he says, "over time, the price is going to come down."

And other states are expected to adopt similar laws.

"This is the direction the country is going in," says Campbell Hutcheson, chief compliance officer with Datto, an information technology firm.


Mozilla to Researchers: Stay Away From User Data and We Won’t Sue
6.8.2018 securityweek  Security

Security researchers looking to find bugs in Firefox should not worry about Mozilla suing them, the Internet organization says. That is, of course, as long as they don’t mess with user data.

Mozilla, which has had a security bug bounty program for over a decade, is unhappy with how legal issues are interfering with the bug hunting process and has decided to change its bug bounty program policies to address that.

Because legal protections afforded to those participating in bounty programs failed to evolve, security researchers are often at risk, and the organization is determined to offer a safe harbor to those researchers seeking bugs in its web browser.

According to the Internet organization, bug bounty participants could end up punished for their activities under the Computer Fraud and Abuse Act (CFAA), the anti-hacking law that criminalizes unauthorized access to computer systems.

“We often hear of researchers who are concerned that companies or governments may take legal actions against them for their legitimate security research. […] The policy changes we are making today are intended to create greater clarity for our own bounty program and to remove this legal risk for researchers participating in good faith,” Mozilla says.

For that, the browser maker is making two changes to its policy. On the one hand, the organization has clarified what is in scope for its bug bounty program; on the other, it has reassured researchers it won’t take legal action against them if they don’t break the rules.

Now, Mozilla makes it clear that participants in its bug bounty program “should not access, modify, delete, or store our users’ data.” The organization also says that it “will not threaten or bring any legal action against anyone who makes a good faith effort to comply with our bug bounty program.”

Basically, the browser maker says it won’t sue researchers under any law (the DMCA and CFAA included) or under its applicable Terms of Service and Acceptable Use Policy for their research performed as part of the bug bounty program.

“We consider that security research to be ‘authorized’ under the CFAA,” Mozilla says.

These changes, which are available in full in the General Eligibility and Safe Harbor sections of organization’s main bounty page, should help researchers know what to expect from Mozilla.


Do Businesses Know When They’re Using Unethical Data?
5.8.2018 securityweek Security

Data breaches are costly for the businesses that experience them; the stolen data fuels black markets and is sometimes offered to companies as legitimate data.
Data breaches are extraordinarily costly for businesses that experience them, both in terms of reputational damage and the money spent to repair the issues associated with those fiascos. And, on the consumer side of things, the scary thing is hackers don’t just steal data for notoriety. They do it to profit, typically by selling the snatched details online.

But, then, are other businesses aware of times when the data they just bought might have been stolen instead of legally obtained?

People Can Access Most of the Relevant Black Market Sites on Standard Browsers
There was a time when venturing into the world of the online black market typically meant downloading encryption software that hid the identity of users. However, most black market transactions happen on the “open” web, so it’s possible to access the respective sites via browsers like Firefox and Chrome without downloading special software first.

That means business representatives aren’t safe from coming across stolen data even if they only browse the internet normally. However, the kind of information advertised on the open web should be enough to raise eyebrows by itself. It often contains credit card information or sensitive medical details — not merely names, email addresses or phone numbers.

Companies can reduce the chances of unknowingly benefiting from stolen data by not proceeding with purchases if they contain private, not readily obtainable details.

Illegitimate Sellers Avoid Giving Payment Details
Even when people seek to profit by peddling stolen data, their desire to make money typically isn’t stronger than their need to remain anonymous. Most criminals who deal with data from illegal sources don’t reveal their names even when seeking payment. They’ll often request money through means that allow keeping their identities secret, such as Bitcoin.

Less Information, More Suspicion
If companies encounter data sellers that stay very secretive about how they get their data and whether it is in compliance with data protection and sharing standards, those are red flags.

However, even when data providers do list information about how they obtain data, it’s a good idea to validate the data on your own. For example, if you get calling data from a third-party provider, you should always check it against current Do Not Call lists.
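As a trivial illustration of that kind of check, the sketch below filters a purchased contact list against a Do Not Call extract. The file names and formats are hypothetical; real registry data comes from the relevant regulator in its own format.

```python
# Illustrative only: remove purchased phone records that appear on a DNC list.
def load_numbers(path):
    with open(path) as fh:
        return {line.strip() for line in fh if line.strip()}

purchased = load_numbers("purchased_leads.txt")   # numbers bought from a data provider
dnc = load_numbers("dnc_list.txt")                # current Do Not Call registry extract

callable_numbers = purchased - dnc
print(f"{len(purchased) - len(callable_numbers)} of {len(purchased)} records removed")
```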

Dark Web Monitoring Services Exist
As mentioned above, stolen data frequently works its way through the open web rather than the dark web. However, it’s still advisable for companies to utilize monitoring services that search the dark web for stolen data. The market for such information is lucrative, and some clients pay as much as $150,000 annually for such screening measures. If businesses provide data that comes up as originating from the dark web, that’s a strong indicator that it came from unethical sources.


Do Legitimate Companies Create the Demand for Stolen Data?
It’s difficult to quantify how many reputable companies might be purchasing stolen data. If they do it knowingly, such a practice breaks the law. And, even if it happens without their knowledge, that’s still a poor reflection on those responsible. It means they didn’t carefully check data sources and sellers before going through with a purchase.

Unfortunately, analysts believe it happens frequently. After data breaches occur, some of the affected companies discover their data being sold online and buy it back. When hackers realize even those who initially had the data seized will pay for it, they realize there’s a demand for their criminal actions.

After suffering data breaches, some companies even ask their own employees to find stolen data and buy it back.

Most use intermediary parties, though representatives at major companies, including PayPal, acknowledge that this process of compensating hackers for the data they took occurs regularly. They say it’s part of the various actions that happen to protect customers — or to prevent them from knowing breaches happened at all.

If companies can find and recover their stolen data quickly enough, customers might never realize hackers had their details. That’s especially likely, since affected parties often don’t hear about breaches until months after companies do, giving those entities ample time to locate data and offer hackers a price for it.

Plus, it’s important to remember that companies pay tens of thousands of dollars to recover their data after ransomware attacks, too.

Should Businesses Bear the Blame?
When companies buy data that’s new to them, they should engage in the preventative measures above to verify its sources and check that it’s not stolen. Also, although businesses justify buying compromised data back from hackers, they have to remember that by doing so, they are stimulating demand — and that makes them partially to blame.

Instead of spending money to retrieve data that hackers take, those dollars would be better spent cracking down on the vulnerabilities that allow breaches to happen so frequently.


Mozilla Reinforces Commitment to Distrust Symantec Certificates
1.8.2018 securityweek Security 

Mozilla this week reaffirmed its commitment to distrust all Symantec certificates starting in late October 2018, when Firefox 63 is set to be released to the stable channel.

The browser maker had decided to remove trust in TLS/SSL certificates issued by the Certification Authority (CA) run by Symantec after a series of problems emerged regarding the wrongful issuance of such certificates.

Despite being one of the oldest and largest CAs, Symantec sold its certificate business to DigiCert after Internet companies, including Google and Mozilla, revealed plans to gradually remove trust in its certificates. Those plans remained in place even after DigiCert said it would not repeat Symantec’s mistakes.

The first step Mozilla took was to warn site owners about Symantec certificates issued before June 1, 2016, and encourage them to replace their TLS certificates.

Starting with Firefox 60, users see a warning when the browser encounters websites using certificates issued before June 1, 2016 that chain up to a Symantec root certificate.

According to Mozilla, less than 0.15% of websites were impacted by this change when Firefox 60 arrived in May. Most site owners were receptive and replaced their old certificates.

“The next phase of the consensus plan is to distrust any TLS certificate that chains up to a Symantec root, regardless of when it was issued […]. This change is scheduled for Firefox 63,” Mozilla’s Wayne Thayer notes in a blog post.

That browser release is currently planned for October 23, 2018 (it will arrive in Beta on September 5).

At the moment, around 3.5% of the top 1 million websites are still using Symantec certificates that will be impacted by the change. While the number is high, it represents a 20% improvement over the past two months, and Mozilla is confident that site owners will take action in due time.
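Operators who are unsure whether their own sites are affected can do a rough first-pass check by looking at the issuer and issuance date of the certificate a server presents. The sketch below inspects only the leaf certificate, not the full chain, so it is a heuristic rather than a definitive test; the hostname is a placeholder, and Symantec's CA brands included VeriSign, Thawte, GeoTrust and RapidSSL.

```python
import ssl
from cryptography import x509
from cryptography.hazmat.backends import default_backend

def leaf_certificate_info(hostname, port=443):
    # Fetch the PEM-encoded leaf certificate the server presents.
    pem = ssl.get_server_certificate((hostname, port))
    cert = x509.load_pem_x509_certificate(pem.encode(), default_backend())
    # Return the issuer DN and the notBefore date (relevant to the June 1, 2016 cutoff).
    return cert.issuer.rfc4514_string(), cert.not_valid_before

if __name__ == "__main__":
    issuer, issued = leaf_certificate_info("example.com")  # placeholder hostname
    print("Issuer:", issuer)
    print("Issued:", issued)
```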

“We strongly encourage website operators to replace any remaining Symantec TLS certificates immediately to avoid impacting their users as these certificates become distrusted in Firefox Nightly and Beta over the next few months,” Thayer concludes.

Google too is on track to distrust all Symantec certificates on October 23, 2018, when Chrome 70 is expected to land in the stable channel. Released in April, Chrome 66 has already removed trust in certificates issued by Symantec's legacy PKI before June 1, 2016.


State of Email Security: What Can Stop Email Threats?
30.7.2018 securityweek Security

Neither Current Technology Nor Security Awareness Training Will Stop Email Threats

A survey of 295 professionals -- mostly but not entirely IT professionals -- has found that 85% of respondents see email threats bypass email security controls and make it into the inbox; 40% see weekly threats; and 20% have to take significant remediation action on a weekly basis.

Email security firm GreatHorn wanted to examine the state of email security today, nearly fifty years after email was first developed. Its findings (PDF) will not surprise security professionals. Breach analyses regularly conclude that more than 90% of all breaches start with an email attack. Indeed, the GreatHorn research shows that the majority (54.4%) of corporate security leaders (that is, those who hold the CISO role) consider email security to be a top 3 security priority.

What is surprising is not that email security is failing (almost half -- 46.1% -- of all respondents said they were less than 'satisfied' with their current email security solution), but the discrepancy in threat perception between the security professional respondents (comprising 61% of the sample) and the non-security respondents (the laypeople, comprising 39% of the sample).

"Sixty-six percent of all the people we interviewed said the only threat they saw in their inbox was spam," GreatHorn's CEO and co-founder Kevin O'Brien told SecurityWeek. "I suspect there is a little bit of a confluence of different things in this figure, and that when they say 'spam', they don't only mean unsolicited marketing emails. Nevertheless, it is a dismissal of the severity of the risk that email poses."

This figure changes dramatically when asked of the security professionals among the respondents. "When you narrow the interview stats to security professionals, less than 16% said that spam was the main threat they faced," he continued. "So, you have 85% of all security teams saying that there is a wide range of different kinds of threats that come in every single day via email -- but to the lay user, the only thing that ever goes wrong is that you get some email you don't want."

O'Brien also quoted statistics from Gartner email specialist Neil Wynne: "The email open rate for the average white-collar professional within the bounds of their work email is 100%," said O'Brien. "Whether or not you take any action in response to it, you will open the email."

It is true that you can open a malicious email and take no action whatsoever and you will remain safe. But that clearly doesn't happen. GreatHorn's figures show that 20% of the security professional respondents are forced into direct remediation from email threats (such as suspending compromised accounts, running PowerShell scripts, resetting compromised third-party accounts, etc.).

The implication, at a simplistic level, is that the average non-security member of staff is highly likely to open all emails; is not likely to expect anything other than spam (31% of the laypeople respondents said they never saw any email threats other than spam); and clearly -- from empirical proof -- will too often click on a malicious link or open a weaponized attachment.

Asked if a further implication from these figures is that security awareness training is failing, O'Brien said, "Yes." There are qualifications to this response, because phishing training companies' built-in metrics clearly demonstrate an improvement in the click-through rates for users trained with their systems. Reductions in successful phishing from a 30% success rate to just 10% are not uncommon.

But, said O'Brien, "Verizon has reported that one in 25 people click on any given phishing attack." This suggests that for every 100 members of staff targeted by a phishing email, four will become victims -- and only one is necessary for a breach to occur.

The difficulty is the nature of modern email attacks. Many involve some form of impersonation, including BEC attacks, business spoofing attacks, and pure social engineering attacks from a colleague whose credentials have been acquired by the attacker. "You cannot train people to have awareness of an email threat when information about that threat is not visible to the user. There is very little functional way to train a user to differentiate between an email from a colleague and an email from someone who has stolen the colleague's credentials. So, we have a security awareness market that has used marketing to say that email security is an awareness problem, a people problem, and that you can train your way out of it. You cannot."

He added, "The reason that security awareness training companies are successful is because awareness training represents a tick in a compliance box that clears a company of gross negligence in the event they suffer a data breach." So, despite the fact it isn't really effective, you still need to do it.

GreatHorn's own view of the problem is that the solution must come from not just technology, nor simply people, but from using technology against the social engineering aspect of the threat -- that is, the content as well as the mechanics of the email.
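As a simple illustration of the kind of signal that is machine-visible but invisible to a busy user, the sketch below (not GreatHorn's product) flags two common impersonation tells: a From display name that itself looks like a different email address, and a Reply-To domain that does not match the From domain.

```python
from email import message_from_bytes, policy
from email.utils import parseaddr

def impersonation_signals(raw_message: bytes):
    msg = message_from_bytes(raw_message, policy=policy.default)
    display, addr = parseaddr(msg.get("From", ""))
    _, reply_to = parseaddr(msg.get("Reply-To", ""))
    signals = []
    # Display name that is itself an email address, but not the sending address.
    if display and "@" in display and display.lower() != addr.lower():
        signals.append("display name looks like a different email address")
    # Replies silently routed to a different domain than the apparent sender.
    if reply_to and reply_to.split("@")[-1].lower() != addr.split("@")[-1].lower():
        signals.append("Reply-To domain differs from From domain")
    return signals
```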

Belmont, Mass-based GreatHorn announced a $6.3 million Series A funding round led by Techstars Venture Capital Fund and .406 Ventures in June 2017. It brings machine-learning technology to the continuing threat and problem of targeted spear phishing and the related BEC threat -- the latter of which, according to the FBI in May 2016, is responsible for losses "now totaling over $3 billion."


Google Announces New Security Tools for Cloud Customers
28.7.2018 securityweek Security

Google on Wednesday took the wraps off a broad range of tools to help cloud customers secure access to resources and better protect data and applications.

To improve security and deliver flexible access to business applications on user devices, Google has introduced context-aware access, which brings elements from BeyondCorp to Google Cloud.

With context-aware access, Google explains that organizations can “define and enforce granular access to GCP APIs, resources, G Suite, and third-party SaaS apps based on a user’s identity, location, and the context of their request.” This should increase security posture and decrease complexity for users, allowing them to log in from anywhere and any device.

The new capabilities are now available for select VPC Service Controls customers and should soon become available for those using Cloud Identity and Access Management (IAM), Cloud Identity-Aware Proxy (IAP), and Cloud Identity.

For increased protection against credential theft, Google announced Titan Security Key, “a FIDO security key that includes firmware developed by Google to verify its integrity.” Meant to protect users from the potentially damaging consequences of credential theft, Titan Security Keys are now available to Google Cloud customers and will soon arrive in Google Store.

Also revealed on Wednesday, Shielded VMs were designed to ensure that virtual machines haven’t been tampered with and allow users to monitor and react to any changes in the VM baseline or its current runtime state. Shielded VMs can be easily deployed on Google Cloud Platform.

According to Google, organizations running containerized workloads should also ensure that only trusted containers are deployed on Google Kubernetes Engine. For that, the Internet giant announced Binary Authorization, which allows for the enforcing of signature validation when deploying container images.

Coming soon to beta, the tool allows for integration with existing CI/CD pipelines “to ensure images are properly built and tested prior to deployment” and can also be combined with Container Registry Vulnerability Scanning to detect vulnerable packages in Ubuntu, Debian and Alpine images before deployment.

Google also announced the beta availability of geo-based access control for Cloud Armor, a distributed denial of service (DDoS) and application defense service. The new capability allows organizations to control access to their services based on the geographic location of the client.

Cloud Armor can also be used for “whitelisting or blocking traffic based on IP addresses, deploying pre-built rules for SQL injection and cross-site scripting, and controlling traffic based on Layer 3-Layer 7 parameters of your choice.”

Cloud HSM, a managed cloud-hosted hardware security module (HSM) service coming soon in beta, allows customers to host encryption keys and perform cryptographic operations in FIPS 140-2 Level 3 certified HSMs, and to easily protect sensitive workloads without having to manage an HSM cluster.

Courtesy of tight integration with Cloud Key Management Service (KMS), Cloud HSM makes it “simple to create and use keys that are generated and protected in hardware and use it with customer-managed encryption keys (CMEK) integrated services such as BigQuery, Google Compute Engine, Google Cloud Storage and DataProc,” Google says.

Earlier this year, the search company launched Asylo, an open source framework and software development kit (SDK) meant to “protect the confidentiality and integrity of applications and data in a confidential computing environment.”

With Access Transparency, Google logs the activity of its own Google Cloud Platform administrators when they access customer content. GCP’s Cloud Audit Logs offer no visibility into actions taken when the cloud provider’s Support or Engineering teams are engaged, so Access Transparency captures “near real-time logs of manual, targeted accesses by either support or engineering.”

Google also announced the investigation tool for G Suite customers, to help identify and act upon security issues within a domain. With this tool, admins can “conduct organization-wide searches across multiple data sources to see which files are being shared externally” and then perform bulk actions on limiting files access.

Google is also making it easier to move G Suite reporting and audit data from the Admin console to Google BigQuery. Furthermore, there are five new container security partner tools in Cloud Security Command Center to help users gain more insight into risks for containers running on Google Kubernetes Engine.

To meet customer requirements on where their data is stored, Google announced data regions for G Suite, a tool that allows G Suite Business and Enterprise customers “to designate the region in which primary data for select G Suite apps is stored when at rest—globally, in the U.S., or in Europe.”

To these, Google adds the Password Alert policy for Chrome Browser, which allows IT admins to “prevent their employees from reusing their corporate password on sites outside of the company’s control, helping guard against account compromise.”


Chrome Now Marks HTTP Sites as "Not Secure"
26.7.2018 securityweek Security

The latest version of Google's Chrome web browser (Chrome 68) represents another step the search giant is making toward a more secure web: the browser now marks HTTP sites as “Not Secure.”

The change comes three and a half years after the Chrome Security Team launched the proposal to mark all HTTP sites as affirmatively non-secure, so as to make it clearer for users that HTTP provides no data security.

When websites are loaded over HTTP, the connection is not encrypted, meaning not only that attackers on the network can access the transmitted information, but also that they can modify the contents of sites before they are served to the user.

HTTPS, on the other hand, encrypts the connection, meaning that eavesdroppers can’t access the transmitted data and that users’ information remains private.

Google, which has long been advocating the adoption of HTTPS across the web, is only marking HTTP pages with a gray warning in Chrome. Later this year, however, the browser will display a red “Not Secure” alert for HTTP pages that require users to enter data.

The goal, however, is to incentivize site owners to adopt HTTPS. For that, Google is also planning on removing the (green) “Secure” wording and HTTPS scheme from Chrome in September 2018.

This means that the browser will no longer display positive security indicators, but will warn on insecure connections. Starting May 1, Chrome is also warning when encountering certificates that are not compliant with the Chromium Certificate Transparency (CT) Policy.

“To ensure that the Not Secure warning is not displayed for your pages in Chrome 68, we recommend migrating your site to HTTPS,” Google tells website admins.
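
A site owner who wants to spot-check the migration could use a small helper like the one below, which verifies that a page answers over HTTPS and that the plain-HTTP URL redirects there; the domain is a placeholder and this is only a rough sketch, not Google's tooling.

# Minimal sketch: confirm a page is served over HTTPS and that the plain-HTTP
# URL redirects to it. The domain below is a placeholder.
import requests

def check_https_migration(host: str, path: str = "/") -> None:
    https_resp = requests.get(f"https://{host}{path}", timeout=10)
    print(f"HTTPS status: {https_resp.status_code}")

    # Follow redirects from the HTTP URL and see where the request ends up.
    http_resp = requests.get(f"http://{host}{path}", timeout=10, allow_redirects=True)
    final_url = http_resp.url
    if final_url.startswith("https://"):
        print(f"HTTP redirects to HTTPS: {final_url}")
    else:
        print(f"WARNING: still served over plain HTTP: {final_url}")

check_https_migration("example.com")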

According to Google’s Transparency Report, HTTPS usage has increased considerably worldwide, across all platforms: over 75% of pages are served over an encrypted connection on Chrome OS, macOS, Android, and Windows. The same applies to 66% of pages served to Linux users.

To help site admins move to HTTPS, the Internet giant has published a migration guide that includes recommendations and which also addresses common migration concerns such as SEO, ad revenue and performance impact.

In addition to marking HTTP sites as Not Secure, Chrome 68 includes patches for a total of 42 vulnerabilities, 29 of which were reported by external researchers: 5 High severity flaws, 19 Medium risk bugs, and 5 Low severity issues.

The 5 High risk issues include a stack buffer overflow in Skia, a heap buffer overflow in WebGL, a use after free in WebRTC, a heap buffer overflow in WebRTC, and a type confusion in WebRTC.

The remaining flaws included use after free, same origin policy bypass, heap buffer overflow, URL spoof, CORS bypass, permissions bypass, type confusion, integer overflow, local user privilege escalation, cross origin information leak, UI spoof, local file information leak, request privilege escalation, and cross origin information leak.


Expert discovered it was possible to delete all projects in the Microsoft Translator Hub
22.7.2018 securityaffairs  Security

Microsoft has addressed a serious vulnerability in the Microsoft Translator Hub that could have been exploited to delete any or all of the projects hosted by the service.

The Microsoft Translator Hub “empowers businesses and communities to build, train, and deploy customized automatic language translation systems.”

The vulnerability was discovered by security researcher Haider Mahmood while searching for bugs in the Translator Hub. He found that it was possible to remove a project by manipulating the “projectid” parameter in an HTTP request.

“POST request with no content and parameter in the URL (its kinda weird isn’t it?) the “projectid” parameter in the above request is the ID of the individual project in the database, which in this case is “12839“, by observing the above HTTP request, a simple delete project query could be something like:-” wrote the expert in a blog post.

The expert also discovered a Cross-Site Request Forgery (CSRF) vulnerability that could be used by an attacker to impersonate a legitimate, logged-in user and perform actions on their behalf.

An attacker who knows the ProjectID of a logged-in user just needs to trick the victim into clicking a specially crafted URL that performs the delete action on their behalf. In another attack scenario, the attacker embeds the same URL in a web page; once the victim visits the page, the project is removed.

“Wait a minute, if you take a look at the Request, first thing to notice is there is no CSRF protection. This is prone to CSRF attack.” continues the expert. “In simple words, CSRF vulnerability allows attacker to impersonate legit logged in user, performing actions on their behalf. Consider this:-

Legit user is logged in.
Attacker includes the URL in a page. (img tag, iframe, lots of possibilities here) “http://hub.microsofttranslator.com/Projects/RemoveProject?projectId=12839”
Victim visits the page, above request will be sent from their browser.
Requirement is that one should know the ProjectID number of logged in victim.
As it has no CSRF projection like antiCSRF tokens it results in the removal of the project.
Even if it has Anti-CSRF projection, here are ways to bypass CSRF Token protections.”
Further analysis allowed the expert to discover the worst aspect of the story.

Mahmood discovered an Insecure Direct Object Reference (IDOR) vulnerability, which could be exploited by an attacker to set any ProjectID in the HTTP request used to remove a project.

Theoretically, an attacker can delete all projects in Microsoft Translator Hub by iterating through project IDs starting from 0 to 13000.

“The project whose projectID I used in the HTTP request got deleted. Technically this vulnerability is called Indirect Object Reference. now if I just loop through the values starting from 0 to 13000 (last project), I’m able to delete all projects from the database.” continues the expert. “The vulnerability could have been avoided using simple checks, either the project that the user requested is owned by the same user, associating the project owner with the project is another way, but its Microsoft so….”
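
The missing server-side control the researcher points to can be illustrated with a short sketch; the data structures and function names below are hypothetical and only show the contrast between trusting a client-supplied projectId blindly and verifying ownership first.

# Minimal sketch of the ownership check whose absence made the bug possible.
# The data structures and function names are hypothetical.
projects = {
    12839: {"owner": "alice"},
    12840: {"owner": "bob"},
}

def remove_project_insecure(project_id: int, requesting_user: str) -> bool:
    # Vulnerable pattern (IDOR): the client-supplied ID is trusted blindly,
    # so any authenticated user can delete any project by iterating IDs.
    return projects.pop(project_id, None) is not None

def remove_project_secure(project_id: int, requesting_user: str) -> bool:
    # Fixed pattern: the project must exist and belong to the caller.
    project = projects.get(project_id)
    if project is None or project["owner"] != requesting_user:
        return False
    del projects[project_id]
    return True

print(remove_project_secure(12840, "alice"))   # False: alice cannot delete bob's project
print(remove_project_secure(12839, "alice"))   # True: alice deletes her own project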


Mahmood reported the flaw to Microsoft in late February 2018, and the company addressed it in a couple of weeks.


‘IT system issue’ caused cancellation of British Airways flights at Heathrow
20.7.2018 securityaffairs Security

British Airways canceled flights at Heathrow due to an ‘IT system issue,’ the incident occurred on Wednesday and affected thousands of passengers.
The problem had severe repercussions on the air traffic, many passengers also had their flights delayed.

“On one of the busiest days of the summer, British Airways cancelled dozens of flights to and from Heathrow, affecting at least 7,000 passengers

Problems began for BA when the control tower was closed for around 35 minutes on Wednesday afternoon when a fire alarm was triggered. Landings and take-offs were stopped,” reported the British newspaper The Independent.

“Then an IT issue emerged which caused further disruption for BA and other airlines. Hundreds of flights were delayed, and some evening outbound departures were canceled. Around 3,000 British Airways passengers were stranded overnight abroad.”
The IT problem affected 7,000 passengers and more than 3,000 were forced to spend the night abroad attempting to fly back to London.

Officially, the problem originated with the IT supplier Amadeus, which caused the disruption to flights; British Airways published a statement on its Twitter account. Reportedly, British Airways passengers stranded at the airport were advised to ‘look for overnight accommodation or seek alternative travel arrangements’.

It seems that the IT problems also affected the company’s online check-in service.


“We are aware that British Airways is currently experiencing an issue which is impacting their ability to provide boarding passes to some passengers. We will be working with the airline to support their efforts to resolve the issue as quickly as possible.” stated a spokesperson for Heathrow.

The problems began a few hours after a fire alarm at Heathrow’s air traffic control tower was triggered, causing delays for several airlines. According to the airport, this event is not related to the British Airways issue, while the airline glitch has “impacted operation of the airfield for a short while”.

“The vast majority of customers affected by the supplier system issue and the temporary closure of Heathrow airport’s air traffic control tower are now on route to their destinations.”

“The supplier, Amadeus, resolved their system issue last night, and our schedule is now operating as normal,” said a spokesperson for British Airways.

“We have apologised to our customers for disruption to their travel plans.”

British Airways had experienced a similar problem with its IT systems in May 2017.


Microsoft Offers $100,000 in New Identity Bug Bounty Program
19.7.2018 securityweek Security

Microsoft on Tuesday announced the launch of a new bug bounty program that offers researchers the opportunity to earn up to $100,000 for discovering serious vulnerabilities in the company’s various identity services.

White hat hackers can earn a monetary reward ranging between $500 and $100,000 if they find flaws that impact Microsoft Identity services, flaws that can be leveraged to hijack Microsoft and Azure Active Directory accounts, vulnerabilities affecting the OpenID or OAuth 2.0 standards, or weaknesses that affect the Microsoft Authenticator applications for iOS and Android.

The list of domains covered by the new bug bounty program includes login.windows.net, login.microsoftonline.com, login.live.com, account.live.com, account.windowsazure.com, account.activedirectory.windowsazure.com, credential.activedirectory.windowsazure.com, portal.office.com and passwordreset.microsoftonline.com.

The top reward can be earned for a high quality submission describing ways to bypass multi-factor authentication, or design vulnerabilities in the authentication standards used by Microsoft. OpenID and OAuth implementation flaws can earn hackers up to $75,000.

The smallest rewards are offered for XSS (up to $10,000), authorization issues ($8,000), and sensitive data exposure ($5,000).

“A high-quality report provides the information necessary for an engineer to quickly reproduce, understand, and fix the issue. This typically includes a concise write up containing any required background information, a description of the bug, and a proof of concept. We recognize that some issues are extremely difficult to reproduce and understand, and this will be considered when adjudicating the quality of a submission,” Microsoft wrote on a page dedicated to its new bug bounty program.

The tech giant currently runs several bug bounty programs that offer hundreds of thousands of dollars for a single vulnerability report. This includes the speculative execution side-channel program, which offers up to $250,000 and which the company launched following the disclosure of Meltdown and Spectre; the Hyper-V program, which also offers up to $250,000; the mitigation bypass bounty, with rewards of up to $100,000 for novel exploitation techniques against Windows protections; and the Bounty for Defense, which offers an additional $100,000 for defenses to the mitigation bypass techniques.


Support for Python Packages Added to GitHub Security Alerts
18.7.2018 securityweek  Security

GitHub announced on Thursday that developers will be warned if the Python packages used by their applications are affected by known vulnerabilities.

The code hosting service last year introduced a new feature, the Dependency Graph, that lists the libraries used by a project. It later extended it with a capability designed to alert developers when one of the software libraries used by their project has a known security hole.

The Dependency Graph and security alerts initially worked only for Ruby and JavaScript packages, but, as promised when the features were launched, GitHub has now also added support for Python packages.

“We’ve chosen to launch the new platform offering with a few recent vulnerabilities,” GitHub said in a blog post. “Over the coming weeks, we will be adding more historical Python vulnerabilities to our database.”

The security alerts feature is powered by information collected from the National Vulnerability Database (NVD) and other sources. When a new flaw is disclosed, GitHub identifies all repositories that use the affected version and informs their owners.
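
Conceptually, the matching step amounts to comparing pinned dependencies against advisory version ranges, roughly as in the sketch below; the advisory entries and package names are invented for illustration, and this is not GitHub's actual implementation.

# Conceptual sketch: compare pinned dependencies from a requirements.txt against
# a small advisory list. The advisory data below is invented for illustration.
from packaging.version import Version

# package -> list of (first_vulnerable, first_fixed) version ranges
ADVISORIES = {
    "examplelib": [("0.0", "1.4.2")],   # hypothetical: fixed in 1.4.2
}

def parse_requirements(text: str):
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            name, version = line.split("==", 1)
            yield name.lower(), version

def vulnerable_packages(requirements_text: str):
    for name, version in parse_requirements(requirements_text):
        for low, fixed in ADVISORIES.get(name, []):
            if Version(low) <= Version(version) < Version(fixed):
                yield name, version, fixed

reqs = "examplelib==1.3.0\nrequests==2.19.1\n"
for name, version, fixed in vulnerable_packages(reqs):
    print(f"{name} {version} has a known vulnerability; upgrade to >= {fixed}")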

The security alerts are enabled by default for public repositories, but the owners of private repositories will have to manually enable the feature.

When a vulnerable library is detected, a “Known security vulnerability” alert will be displayed next to it in the Dependency Graph. Administrators can also configure email alerts, web notifications, and warnings via the user interface, and they can configure who should see the alerts.

GitHub reported in March that the introduction of the security alerts led to a significant decrease in the number of vulnerable libraries on the platform.

When the feature was launched, GitHub’s initial scan revealed over 4 million vulnerabilities across more than 500,000 repositories. Roughly two weeks after the first notifications were sent out, over 450,000 of the flaws were addressed by updating the impacted library or removing it altogether.


Intel Pays $100,000 Bounty for New Spectre Variants
18.7.2018 securityweek  Security

Researchers have discovered new variations of the Spectre attack and they received $100,000 from Intel through the company’s bug bounty program.

The new flaws are variations of Spectre Variant 1 (CVE-2017-5753) and they are tracked as Spectre 1.1 (CVE-2018-3693) and Spectre 1.2.

The more serious of these issues is Spectre 1.1, which has been described as a bounds check bypass store (BCBS) issue.

“[Spectre1.1 is] a new Spectre-v1 variant that leverages speculative stores to create speculative buffer overflows,” researchers Vladimir Kiriansky of MIT and Carl Waldspurger of Carl Waldspurger Consulting explained in a paper.


“Much like classic buffer overflows, speculative out-of-bounds stores can modify data and code pointers. Data-value attacks can bypass some Spectre-v1 mitigations, either directly or by redirecting control flow. Control-flow attacks enable arbitrary speculative code execution, which can bypass fence instructions and all other software mitigations for previous speculative-execution attacks. It is easy to construct return-oriented-programming (ROP) gadgets that can be used to build alternative attack payloads,” they added.

Spectre 1.2 impacts CPUs that fail to enforce read/write protections, allowing an attacker to overwrite read-only data and code pointers in an effort to breach sandboxes, the experts said.

Both Intel and ARM have published whitepapers describing the new vulnerabilities. AMD has yet to make any comments regarding Spectre 1.1 and Spectre 1.2.

Microsoft also updated its Spectre/Meltdown advisories on Tuesday to include information on CVE-2018-3693.

“We are not currently aware of any instances of BCBS in our software, but we are continuing to research this vulnerability class and will work with industry partners to release mitigations as required,” the company said.

Oracle is also assessing the impact of these vulnerabilities on its products and has promised to provide technical mitigations.

“Note that many industry experts anticipate that a number of new variants of exploits leveraging these known flaws in modern processor designs will continue to be disclosed for the foreseeable future,” noted Eric Maurice, Director of Security Assurance at Oracle. “These issues are likely to primarily impact operating systems and virtualization platforms, and may require software update, microcode update, or both. Fortunately, the conditions of exploitation for these issues remain similar: malicious exploitation requires the attackers to first obtain the privileges required to install and execute malicious code against the targeted systems.”

Just as the researchers published their paper, Intel made a $100,000 payment to Kiriansky via the company’s HackerOne bug bounty program. The experts did reveal in their paper that the research was partially sponsored by Intel.

Following the disclosure of the Spectre and Meltdown vulnerabilities in January, Intel announced a bug bounty program for side-channel exploits with rewards of up to $250,000 for issues similar to Meltdown and Spectre. The reward for flaws classified as “high severity” can be as high as $100,000.


Intel pays a $100K bug bounty for the new CPU Spectre 1.1 flaw
12.7.2018 securityaffairs Security

A team of researchers has discovered a new variant of the famous Spectre attack (Spectre 1.1), and Intel has paid a $100,000 bug bounty as part of its bug bounty program.
Intel has paid out a $100,000 bug bounty for new vulnerabilities that are related to the first variant of the Spectre attack (CVE-2017-5753), for this reason, they have been tracked as Spectre 1.1 (CVE-2018-3693) and Spectre 1.2.

Intel credited Kiriansky and Waldspurger for reporting the vulnerabilities and paid out $100,000 to Kiriansky via the bug bounty program on HackerOne.

In early 2018, researchers from Google Project Zero disclosed details of both Spectre Variants 1 and 2 (CVE-2017-5753 and CVE-2017-5715) and Meltdown (CVE-2017-5754).

Both attacks leverage the “speculative execution” technique used by most modern CPUs to optimize performance.

The team of experts composed of Vladimir Kiriansky of MIT and Carl Waldspurger of Carl Waldspurger Consulting discovered two new variants of Spectre Variant 1.

Back to the present, the Spectre 1.1 issue is a bounds-check bypass store flaw that could be exploited by attackers to trigger speculative buffer overflows and execute arbitrary code on the vulnerable processor.

This code could potentially be exploited to exfiltrate sensitive data from the CPU memory, including passwords and cryptographic keys.

“We introduce Spectre1.1, a new Spectre-v1 variant that leverages speculative stores to create speculative buffer overflows. Much like classic buffer overflows, speculative out-of-bounds stores can modify data and code pointers. Data-value attacks can bypass some Spectre-v1 mitigations, either directly or by redirecting control flow.” reads the research paper.

“Control-flow attacks enable arbitrary speculative code execution, which can bypass fence instructions and all other software mitigations for previous speculative-execution attacks.”

The second sub-variant discovered by the experts, called Spectre 1.2, is a read-only protection bypass.

It depends on lazy PTE enforcement, the same mechanism exploited by the original Meltdown attack.

Also in this case, the issue could be exploited by an attacker to bypass the Read/Write PTE flags and overwrite read-only data, code metadata, and code pointers in order to escape sandboxes.
“Spectre3.0, aka Meltdown [39], relies on lazy enforcement of User/Supervisor protection flags for page-table entries (PTEs). The same mechanism can also be used to bypass the Read/Write PTE flags. We introduce Spectre1.2, a minor variant of Spectre-v1 which depends on lazy PTE enforcement, similar to Spectre-v3. In a Spectre1.2 attack, speculative stores are allowed to overwrite read-only data, code pointers, and code metadata, including vtables, GOT/IAT, and control-flow mitigation metadata. As a result, sandboxing that depends on hardware enforcement of read-only memory is rendered ineffective,” continues the research paper.

ARM confirmed that the Spectre 1.1 flaw also affects its processors, but did not specify which ARM CPUs are impacted.

Major tech firms, including Microsoft, Red Hat and Oracle, have released security advisories, confirming that they are investigating the issues and the potential effects of the new Spectre variants.

“Microsoft is aware of a new publicly disclosed class of vulnerabilities referred to as “speculative execution side-channel attacks” that affect many modern processors and operating systems including Intel, AMD, and ARM. Note: this issue will affect other systems such as Android, Chrome, iOS, MacOS, so we advise customers to seek out guidance from those vendors.” reads the advisory published by Microsoft.

“An attacker who successfully exploited these vulnerabilities may be able to read privileged data across trust boundaries. In shared resource environments (such as exists in some cloud services configurations), these vulnerabilities could allow one virtual machine to improperly access information from another.”


Mozilla Announces Root Store Policy Update
3.7.2018 securityweek  Security

Mozilla announced on Monday that its Root Store Policy for Certificate Authorities (CAs) has been updated to version 2.6.

The Root Store Policy governs CAs trusted by Firefox, Thunderbird and other Mozilla-related software. The latest version of the policy, discussed by the Mozilla community over a period of several months, went into effect on July 1.

The new Root Store Policy includes nearly two dozen changes and some of the more important ones have been summarized in a blog post by Wayne Thayer, CA Program Manager at Mozilla.

Version 2.6 of the Root Store Policy requires CAs to clearly disclose email address validation methods in their certificate policy (CP) and certification practice statement (CPS). The CP/CPS must also clearly specify IP address validation methods, which have now been banned in specific circumstances.

CAs need to periodically obtain certain audits for their root and intermediate certificates in order to remain in the root store. Mozilla now requires auditors to provide reports written in English.

The new policy also states that starting with January 1, 2019, CAs will be required to create separate intermediate certificates for S/MIME and SSL certificates.

“Newly issued Intermediate certificates will need to be restricted with an EKU extension that doesn’t contain anyPolicy, or both serverAuth and emailProtection. Intermediate certificates issued prior to 2019 that do not comply with this requirement may continue to be used to issue new end-entity certificates,” Thayer explained.
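
Operators who want to check their own intermediates against this requirement could use a minimal sketch like the one below, built on the Python cryptography library; it is not Mozilla's tooling, and the compliance messages are simplified for illustration.

# Minimal sketch (not Mozilla's tooling): inspect an intermediate certificate's
# Extended Key Usage to see whether it separates TLS (serverAuth) from S/MIME
# (emailProtection), as required for intermediates issued from 2019 onward.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def eku_profile(pem_bytes: bytes) -> str:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        return "no EKU extension present"
    usages = set(eku)
    has_server = ExtendedKeyUsageOID.SERVER_AUTH in usages
    has_email = ExtendedKeyUsageOID.EMAIL_PROTECTION in usages
    if has_server and has_email:
        return "combined serverAuth + emailProtection (not allowed for new intermediates)"
    if has_server:
        return "TLS-only intermediate"
    if has_email:
        return "S/MIME-only intermediate"
    return "neither serverAuth nor emailProtection"

# usage: print(eku_profile(open("intermediate.pem", "rb").read()))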

Another new requirement is that root certificates must have complied with the Mozilla Root Store Policy from the moment they were created.

“This effectively means that roots in existence prior to 2014 that did not receive BR audits after 2013 are not eligible for inclusion in Mozilla’s program. Roots with documented BR violations may also be excluded from Mozilla’s root store under this policy,” Thayer said.

Mozilla takes digital certificate management very seriously. Last year it announced taking action against Chinese certificate authority WoSign and its subsidiary StartCom as a result of over a dozen incidents. It also targeted Symantec after the company and its partners were involved in several incidents involving misissued TLS certificates, and later raised concerns over DigiCert’s acquisition of Symantec’s CA business.


Hidden Tunnels: A Favored Tactic to Evade Strong Access Controls
22.6.2018 securityweek  Security

Use of Hidden Tunnels to Exfiltrate Data Far More Widespread in Financial Services Than Any Other Industry Sector

Financial services have perhaps the largest cyber security budgets and are the best protected companies in the private sector. Since cyber criminals generally have little difficulty in obtaining a quick return on their effort, it would be unsurprising to find that financial services are less overtly targeted by average hackers than other, easier targets. At the same time, the data held by finserv is so attractive to criminals that it remains an attractive target for more sophisticated hackers.

Both premises are confirmed in a report (PDF) published this week by Vectra. From August 2017 through January 2018, Vectra's AI-based Cognito cyberattack-detection and threat-hunting platform monitored network traffic and collected metadata from more than 4.5 million devices and workloads from customer cloud, data center and enterprise environments.

An analysis of this data showed that financial services displayed fewer criminal C&C communication behaviors than the overall industry average. This could be caused by the efficiency of large finserv budgets (Bank of America spends $600 million annually, with no upper limit, while JPMorgan Chase spends $500 million annually) warding off basic criminal activity.

Even the much smaller Equifax has a budget of $85 million. But Equifax, with its massive 2017 loss of 145.5 million social security numbers, around 17.6 million drivers' license numbers, 20.3 million phone numbers, and 1.8 million email addresses, demonstrates that finserv is a target for, and can be successfully breached by, the more advanced hackers.

Vectra analyzed the Equifax breach and then compared the attack methodology to what its Cognito platform was finding in other financial services companies -- and it discovered the same breach methodology in other financial services firms. This is the use of hidden tunnels to hide the C&C servers and disguise the exfiltration of data.

Vectra's new analysis shows that the criminal use of hidden tunnels is far more widespread in financial services than in any other industry sector. Across all industries Vectra found 11 hidden exfiltration tunnels disguised as encrypted web traffic (HTTPS) for every 10,000 devices. In finserv, this number jumped to 23. Hidden HTTP tunnels jumped from seven per 10,000 devices to 16 in financial services.

Chris Morales, head of security analytics at Vectra, commented, "What stands out the most is the presence of hidden tunnels, which attackers use to evade strong access controls, firewalls and intrusion detection systems. The same hidden tunnels enable attackers to sneak out of networks, undetected, with stolen data."

"Hidden tunnels are difficult to detect," explains the report, "because communications are concealed within multiple connections that use normal, commonly-allowed protocols. For example, communications can be embedded as text in HTTP-GET requests, as well as in headers, cookies and other fields. The requests and responses are hidden among messages within the allowed protocol."
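
As a rough illustration of how defenders hunt for this behavior, the toy heuristic below flags requests whose header or cookie values are unusually long or high-entropy, a common sign of data being smuggled inside otherwise-allowed protocols; the thresholds are arbitrary and this is not Vectra's detection logic.

# Toy heuristic (not Vectra's detection logic): flag HTTP requests whose header
# or cookie values are unusually long or high-entropy. Thresholds are arbitrary.
import math
from collections import Counter

def shannon_entropy(value: str) -> float:
    if not value:
        return 0.0
    counts = Counter(value)
    total = len(value)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def suspicious_headers(headers: dict, max_len: int = 512, max_entropy: float = 4.5):
    for name, value in headers.items():
        if len(value) > max_len or shannon_entropy(value) > max_entropy:
            yield name, value

request_headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    # An abnormally long cookie standing in for data hidden inside an allowed field.
    "Cookie": "session=" + "QUJDREVGR0hJSktMTU5PUFFSU1RVVldYWVo" * 30,
}
for name, value in suspicious_headers(request_headers):
    print(f"possible tunnel: header '{name}' ({len(value)} bytes)")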

These hidden tunnels need to be protected at all times, says Will LaSala, director of security solutions and security evangelist at OneSpan. "Many app developers put holes through firewalls to make services easier to access from their apps, but these same holes can be exploited by hackers. Using the proper development tools, app developers can properly encrypt and shape the data being passed through these holes."

One of the problems is that developers are rushed to implement a new feature to maintain or gain customers, "and this," he adds, "often leads to situations where a hidden tunnel is created and not secured."

Once a hidden tunnel is established by an attacker, it is almost impossible to detect with traditional security tools. There is no signature to match, and purpose-built C&C servers are unlikely to show up on reputation lists. Furthermore, because the traffic using a hidden tunnel is ostensibly legitimate, there is no clear anomaly for anomaly detection systems to flag.

What Vectra's analysis shows is that while there may be fewer overt attacks against financial services, the industry is a prime target for advanced hackers willing and able to invest in more covert attacks.

San Francisco, Calif-based Vectra Networks closed a $36 million Series D funding round in February 2018, bringing the total amount raised to date by the company to $123 million.


Microsoft Combats Bad Passwords With New Azure Tools
22.6.2018 securityweek Security 

Microsoft this week announced the public preview of new Azure tools designed to help its customers eliminate easily guessable passwords from their environments.

Following a flurry of data breaches in recent years, it has become clear that many users continue to protect their accounts with weak passwords that are easy to guess or brute force. Many people also tend to reuse the same password across multiple services.

Attackers continually use leaked passwords in live attacks, Verizon’s 2017 Data Breach Investigations Report (DBIR) revealed, and Microsoft banned commonly used passwords in Azure AD a couple of years ago.

Now, the company is taking the fight against bad passwords to a new level, with the help of Azure AD Password Protection and Smart Lockout, which were just released in public preview. These tools should significantly lower the risk of compromise through password spray attacks, Alex Simons, Director of Program Management, Microsoft Identity Division, says.

The new Azure AD Password Protection allows admins to prevent users from securing accounts in Azure AD and Windows Server Active Directory with weak passwords. For that, Microsoft uses a list of 500 most used passwords and over 1 million character substitution variations for them.

Management of Azure AD Password Protection is available in the Azure Active Directory portal for Azure AD and on-premises Windows Server Active Directory, and admins will also be able to specify additional passwords to block.

To ensure users don’t use passwords that meet a complexity requirement but are easily guessable, or fall into predictable patterns if required to change their passwords frequently, organizations should apply a banned password system when passwords are changed, Microsoft says.

“Today’s public preview gives you both the ability to do this in the cloud and on-premises—wherever your users change their passwords—and unprecedented configurability. All this functionality is powered by Azure AD, which regularly updates the database of banned passwords by learning from billions of authentications and analysis of leaked credentials across the web,” Simons notes.
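
A minimal sketch of how a banned-password check with character-substitution normalization can work is shown below; the substitution map and word list are illustrative only and are not Microsoft's algorithm or its curated list of roughly 500 passwords.

# Minimal sketch of a banned-password check with substitution normalization.
# The substitution map and banned list are illustrative, not Microsoft's.
SUBSTITUTIONS = str.maketrans({"@": "a", "0": "o", "1": "l", "3": "e", "$": "s", "!": "i"})
BANNED = {"password", "letmein", "qwerty", "iloveyou", "football"}

def normalize(candidate: str) -> str:
    return candidate.lower().translate(SUBSTITUTIONS)

def is_banned(candidate: str) -> bool:
    lowered = candidate.lower()
    stripped = lowered.rstrip("0123456789")   # drop appended digits/years first
    return normalize(lowered) in BANNED or normalize(stripped) in BANNED

print(is_banned("P@ssw0rd2018"))                  # True: normalizes to "password"
print(is_banned("correct horse battery staple"))  # False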

With Smart Lockout, Microsoft wants to lock out bad actors trying to guess users’ passwords. Leveraging cloud intelligence, it can recognize sign-ins from valid users and attempts from attackers and other unknown sources. Thus, users can remain productive while attackers are locked out.

Designed as an always-on feature, Smart Lockout is available for all Azure AD customers. While its default settings offer both security and usability, organizations can customize those settings with the right values for their environment.

By default, all Azure AD password set and reset operations for Azure AD Premium users are configured to use Azure AD password protection, Simons says. To configure their own settings, admins should access Authentication Methods under Azure Active Directory > Security.

Available options include setting a smart lockout threshold (number of failures until the first lockout) and duration (how long the lockout period lasts), choosing banned password strings, and extending the banned password protection to Windows Server Active Directory.
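
The two tunables map onto very simple counter logic, sketched below; Azure AD's real implementation also applies cloud intelligence to distinguish attackers from legitimate users, which this toy model omits, and the parameter values are arbitrary.

# Toy illustration of the two tunable parameters: failure threshold and lockout
# duration. This only models the counters, not Azure AD's full logic.
import time

LOCKOUT_THRESHOLD = 10   # failures until the first lockout
LOCKOUT_DURATION = 60    # seconds the lockout lasts

failures = {}
locked_until = {}

def attempt_login(user, password_ok):
    now = time.time()
    if locked_until.get(user, 0) > now:
        return "locked out"
    if password_ok:
        failures[user] = 0
        return "success"
    failures[user] = failures.get(user, 0) + 1
    if failures[user] >= LOCKOUT_THRESHOLD:
        locked_until[user] = now + LOCKOUT_DURATION
        failures[user] = 0
        return "locked out"
    return "wrong password"

for _ in range(10):
    attempt_login("alice", password_ok=False)
print(attempt_login("alice", password_ok=True))   # "locked out" until the window expires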

Organizations can also download and install the Azure AD password protection proxy and domain controller agents in their on-premises environment (both support silent installation), meaning that they can use Azure AD password protection across Azure AD and on-premises.


6 Security Flaws in Smart Speakers You Need to Know About
22.6.2018 securityaffairs Security

Connectivity and functionality may offer us convenience, but as with any new connected technology, smart speakers also come with security concerns.
How would you feel about having a device in your home that’s always listening to what’s going on, standing ready to record, process and store any information it receives? That might be a somewhat alarmist way of putting it, but it’s essentially what smart home speakers do.

Smart speakers offer audio playback but also feature internet connectivity and often a digital assistant, which dramatically expands their functionality.

With today’s smart speakers, you can search the internet, control other home automation devices, shop online, send text messages, schedule alarms and more.

This connectivity and functionality may offer us convenience, but as with any new connected technology, these speakers also come with security concerns. Any time you add a node to your network, you open yourself up to more potential vulnerabilities. Since smart home tech is still relatively new, it’s also bound to have bugs.

Although smart home companies work to fix these flaws as quickly as possible and want to ensure their devices are secure, there’s still always a chance you’ll run into security issues. Here are six potential risks you should be aware of.

Unexpected Activation
Although smart speakers are always listening since their microphones are continuously on, they don’t record or process anything they hear unless they detect their activation phrase first. For Google Home, this phrase is “OK, Google.” For an Amazon speaker, say “Alexa.”

There are several problems. The first is that the technology isn’t perfect yet, and it’s entirely possible that the device will mishear another phrase as its wake-up phrase. For example, an Oregon couple recently discovered that their Amazon Echo speaker had been recording them without their knowledge. Amazon blamed the mistake on the device mishearing something in a background conversation as “Alexa.”

Misheard Cues
Unfortunately, these misunderstandings can extend beyond just activation. After the Oregon family’s Echo recorded their conversation, it sent the recording to a random person on their contact list. They only knew about the incident because the person who received the recording contacted them and told them.

Amazon offered the same explanation for this part of the event. According to the company, the speaker misheard the background conversation as a whole string of commands, resulting in sending the discussion to the couple’s acquaintance. This situation suggests that these speakers’ listening skills might not be as advanced as they need to be to function properly.

Unwanted Interaction From Ads
Smart speakers may misunderstand cues and unexpectedly wake up, but people could also purposefully wake them without your permission. Once they do so, they could potentially gain access to some of your information.

Burger King demonstrated this vulnerability when it ran an ad that purposely activated Google Home speakers and prompted them to read off a description of the Whopper burger. Google reacted quickly and prevented the devices from responding. Burger King fired back by altering the ad so that it triggered the speakers again.

While this prank might be relatively harmless, people could also potentially activate your speaker without your permission, even by yelling through your front door or an open window. Because of this vulnerability, you should avoid using a smart speaker for things like unlocking your front door. You can also change the wake word and set up pins for specific features.


Hacks and Malware
Thus far, most of the reports of problems with smart speakers have revolved around unauthorized access or faulty functionality. The devices are also vulnerable to malicious hacks.

Security experts in the tech realm have already discovered various susceptibilities, enabling companies to fix them. Hackers may at some point, however, find some of these vulnerabilities first. If they do, they may be able to access sensitive personal information.

To protect yourself from becoming a victim of a hacking incident, use hardware and software only from companies you trust. Also, use secure passwords, and change them often wherever you can.

Voice Hacks
Using smart speakers could also increase your vulnerability to voice hacks, a subset of identity theft in which someone obtains an audio recording of your voice and uses it to access your information. Once they have this recording, they use it to trick authentication systems into thinking they’re you. This hack is a potential way to get around smart speakers’ voice recognition capabilities.

Smart home speakers provide a potential goldmine of audio recordings that someone could use for voice hacking. If a bad actor manages to hack into the speaker or the cloud service where your recordings are stored, they could use them to break into various accounts of yours.

Storage of Your Data
The fact that some cloud service is storing these recordings may make users uncomfortable. These recordings may be used to personalize your experience, improve the smart assistant’s effectiveness, serve you ads or do a range of other things.

Luckily, you can delete these recordings if you’d like through your account settings. In addition, advocates have called for more transparency about how these companies use customer data.

Be Smart About Using Smart Speakers

All smart technology comes with security risks. That doesn’t necessarily mean we shouldn’t use it, but it does mean we should be careful about how we use it and take appropriate security measures.

If you choose to get a smart speaker, take the time to set up your security settings, and allow access only to people and companies you trust.


Chronicle launches VirusTotal Monitor to reduce false positives
21.6.2018 securityaffairs  Security

Alphabet owned cybersecurity firm Chronicle announced the launch of a new VirusTotal service that promises to reduce false positives.
VirusTotal Monitor service allows developers to upload their application files to a private cloud store where they are scanned every day using anti-malware solutions from antivirus vendors in VirusTotal.

Every time an engine flags a file as malicious, VirusTotal notifies both the antivirus vendor and the developer.

Of course, files analyzed by the VirusTotal Monitor service will remain private and are not shared by the company with third parties.

The service implements a Google Drive-like interface that allows developers to upload their files, along with a dashboard displaying the scan results. Both developers and AV companies can access the dashboard, and the service also provides APIs so that developers and antivirus vendors can integrate Monitor with their own tools.

“Enter VirusTotal Monitor. VirusTotal already runs a multi-antivirus service that aggregates the verdicts of over 70 antivirus engines to give users a second opinion about the maliciousness of the files that they check.” reads the announcement published by VirusTotal.

“For antivirus vendors this is a big win, as they can now have context about a file: who is the company behind it? when was it released? in which software suites is it found? What are the main file names with which it is distributed? For software developers it is an equally big win, as they can upload their creations to Monitor at pre-publish stage, to ensure a release without issues.”


VirusTotal pointed out that the Monitor service is not a free pass to get any file whitelisted.

“Sometimes vendors will indeed decide to keep detections for certain software, however, by having contextual information about the author behind a given file, they can prioritize work and take better decisions, hopefully leading to a world with less false positives,” continues the announcement.

“The idea is to have a collection of known source software, then each antivirus can decide what kind of trust-based relationship they have with each software publisher.”

Are you interested in this service? Now you can request a trial period for VirusTotal Monitor.


New VirusTotal Service Aims to Reduce False Positives
20.6.2018 securityweek Security

VirusTotal, which recently became part of Alphabet’s new cybersecurity company Chronicle, announced on Tuesday the launch of a new service designed to help software developers and security vendors reduce the number of false positive detections.

VirusTotal Monitor is a premium service that allows software developers to upload their application files to a private cloud store where they are scanned every day by the products of the more than 70 antivirus vendors in VirusTotal.

If a file is flagged as malicious, both the developer and the antivirus vendor are automatically notified.

Developers can upload their files using an interface similar to Google Drive, and both developers and AV companies are provided a dashboard where they can view results. In addition to the web interface, both parties can leverage APIs to integrate Monitor with their own tools.
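
As a point of reference, looking up how many engines flag a file hash against VirusTotal's public multi-AV service takes only a few lines using the documented v2 file-report endpoint; the Monitor-specific API is separate and not shown here, and the API key and hash below are placeholders.

# Minimal sketch using VirusTotal's public v2 file-report endpoint to see how
# many engines flag a given hash. API key and hash are placeholders; the
# Monitor-specific API is separate and not shown.
import requests

API_KEY = "<your VirusTotal API key>"
FILE_HASH = "<sha256 of the file to look up>"

resp = requests.get(
    "https://www.virustotal.com/vtapi/v2/file/report",
    params={"apikey": API_KEY, "resource": FILE_HASH},
    timeout=30,
)
report = resp.json()
if report.get("response_code") == 1:
    print(f"{report['positives']}/{report['total']} engines flag this file")
else:
    print("file not found in VirusTotal")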


“For antivirus vendors this is a big win, as they can now have context about a file: who is the company behind it? when was it released? in which software suites is it found? What are the main file names with which it is distributed?” explained VirusTotal’s Emiliano Martinez. “For software developers it is an equally big win, as they can upload their creations to Monitor at pre-publish stage, to ensure a release without issues.”

VirusTotal highlighted that the uploaded files will not be shared with third parties, except for the antivirus vendors, which will get access to the files their products detect.

While it may seem that Monitor opens a door to abuse, VirusTotal pointed out that the new service is “not a free pass to get any file whitelisted.”

“Sometimes vendors will indeed decide to keep detections for certain software, however, by having contextual information about the author behind a given file, they can prioritize work and take better decisions, hopefully leading to a world with less false positives,” Martinez said. “The idea is to have a collection of known source software, then each antivirus can decide what kind of trust-based relationship they have with each software publisher.”

VirusTotal Monitor has been in pre-release testing and is now accepting its first users. Developers can request a trial period.


Google Won't Use Artificial Intelligence for Weapons
8.6.2018 securityweek Security

Google announced Thursday it would not use artificial intelligence for weapons or to "cause or directly facilitate injury to people," as it unveiled a set of principles for these technologies.

Chief executive Sundar Pichai, in a blog post outlining the company's artificial intelligence policies, noted that even though Google won't use AI for weapons, "we will continue our work with governments and the military in many other areas" including cybersecurity, training, and search and rescue.

The news comes with Google facing pressure from employees and others over a contract with the US military, which the California tech giant said last week would not be renewed.

Pichai set out seven principles for Google's application of artificial intelligence, or advanced computing that can simulate intelligent human behavior.

He said Google is using AI "to help people tackle urgent problems" such as prediction of wildfires, helping farmers, diagnosing disease or preventing blindness.

"We recognize that such powerful technology raises equally powerful questions about its use," Pichai said in the blog.

"How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."

The chief executive said Google's AI programs would be designed for applications that are "socially beneficial" and "avoid creating or reinforcing unfair bias."

He said the principles also called for AI applications to be "built and tested for safety," to be "accountable to people" and to "incorporate privacy design principles."

Google will avoid the use of any technologies "that cause or are likely to cause overall harm," Pichai wrote.

That means steering clear of "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people" and systems "that gather or use information for surveillance violating internationally accepted norms."

The move comes amid growing concerns that automated or robotic systems could be misused and spin out of control, leading to chaos.

Several technology firms have already agreed to the general principles of using artificial intelligence for good, but Google appeared to offer a more precise set of standards.

The company, which is already a member of the Partnership on Artificial Intelligence including dozens of tech firms committed to AI principles, had faced criticism for the contract with the Pentagon on Project Maven, which uses machine learning and engineering talent to distinguish people and objects in drone videos.

Faced with a petition signed by thousands of employees and criticism outside the company, Google indicated the $10 million contract would not be renewed, according to media reports.


FIFA public Wi-Fi guide: which host cities have the most secure networks?
8.6.2018 Kaspersky Security
We all know how easy it is for users to connect to open Wi-Fi networks in public places. Well, it is equally straightforward for criminals to position themselves near poorly protected access points – where they can intercept network traffic and compromise user data.

A lack of essential traffic encryption for Wi-Fi networks where official and global activities are taking place – such as at locations around the forthcoming FIFA World Cup 2018 – offers especially fertile ground for criminals.

With this in mind, can football fans feel digitally safe in host cities? How does the situation with Wi-Fi access differ from town to town? To answer these questions, we have analyzed existing reliable and unreliable access points in 11 FIFA World Cup host cities – Saransk, Samara, Nizhny Novgorod, Kazan, Volgograd, Moscow, Ekaterinburg, Sochi, Rostov, Kaliningrad, and Saint Petersburg.

The research is based on telemetry from software that aims to secure users’ Wi-Fi connections and switch on a VPN when needed. Statistics were generated from users who voluntarily agreed to have their data collected. For the research, we only evaluated the security of public Wi-Fi spots. Even with relatively few public Wi-Fi spots in small towns, we still obtained a sufficient base for analysis – almost 32,000 Wi-Fi hotspots. While checking encryption and authentication algorithms, we counted the number of WPA2 and open networks, as well as their share among all the access points.
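
The tallying step described above amounts to counting access points by security type and computing each type's share, roughly as in the sketch below; the scan results are invented for illustration and this is not Kaspersky's analysis code.

# Sketch of the tallying step: count access points by security type and compute
# each type's share. The scan results below are invented for illustration.
from collections import Counter

scan_results = [
    {"ssid": "CoffeeShop_Free", "security": "OPEN"},
    {"ssid": "Hotel_Guest",     "security": "WPA2"},
    {"ssid": "Stadium_WiFi",    "security": "WPA2"},
    {"ssid": "OldRouter",       "security": "WEP"},
    {"ssid": "Airport_Public",  "security": "OPEN"},
]

counts = Counter(ap["security"] for ap in scan_results)
total = sum(counts.values())
for security, count in counts.most_common():
    print(f"{security}: {count} ({100 * count / total:.1f}%)")

unreliable = counts["OPEN"] + counts["WEP"]
print(f"unreliable networks: {100 * unreliable / total:.1f}%")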

Security of Wireless Networks in FIFA World Cup host cities
Using the methodology described above, we have evaluated the security of Wi-Fi access points in 11 FIFA World Cup 2018 host cities.

Encryption types used in public Wi-Fi hotspots in FIFA World Cup host cities

Over a fifth (22.4%) of Wi-Fi hotspots in FIFA World Cup 2018 host cities use unreliable networks. This means that criminals simply need to be located near an access point to grab the traffic and get their hands on user data.

Around three quarters of all access points use encryption based on the Wi-Fi Protected Access (WPA/WPA2) protocol family, which is considered to be one of the most secure. The level of protection mostly depends on the settings, such as the strength of the password set by the hotspot owner. A complex encryption key can take years to crack.

It should also be noted that even reliable networks, like WPA2, cannot automatically be considered totally secure. They are still susceptible to brute-force, dictionary, and key reinstallation attacks, for which a large number of tutorials and open source tools are available online. Traffic on WPA-protected public access points can also be intercepted by attacking the handshake between the access point and the device at the beginning of the session.


The safest city (in terms of public Wi-Fi) turned out to be Saransk, with 72% of access points secured by WPA/WPA2 protocol encryption.

The top-three cities with the highest proportion of unsecured connections are Saint Petersburg (48% of Wi-Fi access points are unsecured), Kaliningrad (47%) and Rostov (44%).

Again, the relative nature of the results should be noted. Even a WPA2 connection in a cafe cannot be considered secure if the password is visible to everyone. Nevertheless, we believe that the methodology used represents the Wi-Fi hotspot security situation in the host cities with a fair degree of accuracy.

The results of this research show that the security of Wi-Fi connections in FIFA World Cup host cities varies. We therefore recommend that users follow some key safety rules.

Recommendations for Users
If you are going to visit any of the FIFA World Cup 2018 host cities and use open Wi-Fi networks while you are there, remember to follow these simple rules to help protect your personal data:

Whenever possible, connect via a Virtual Private Network (VPN). With a VPN, encrypted traffic is transmitted over a protected tunnel, meaning that criminals won’t be able to read your data, even if they gain access to it. For example, the Kaspersky Secure Connection VPN solution can switch on automatically when a connection is not safe.
Do not trust networks that are not password-protected, or have easy-to-guess or easy-to-find passwords.
Even if a network requests a strong password, you should remain vigilant. Fraudsters can find out the network password at a coffee shop, for example, and then create a fake connection using the same password. This allows them to easily steal personal user data. You should only trust network names and passwords given to you by the employees of an establishment.
To maximize your protection, turn off your Wi-Fi connection whenever you are not using it. This will also save your battery life. We recommend you also disable automatic connections to existing Wi-Fi networks.
If you are not 100% sure that the wireless network you are using is secure, but you still need to connect to the Internet, try to limit yourself to basic user actions such as searching for information. You should refrain from entering your login details for social networks or mail services, and definitely do not perform any online banking operations or enter your bank card details anywhere. This will avoid situations where your sensitive data or passwords are intercepted and then used for malicious purposes later on.
To avoid becoming a cybercriminal target, you should enable the “Always use a secure connection” (HTTPS) option in your device settings. Enabling this option is recommended when visiting any websites you think may lack the necessary protection.
One example of a dedicated solution is the Secure Connection tool included in the latest versions of Kaspersky Internet Security and Kaspersky Total Security. This module protects users who are connected to Wi-Fi networks by providing them with a secure encrypted connection channel. Secure Connection can be launched manually or, depending on the settings, activated automatically when connecting to public Wi-Fi networks, when navigating to online banking and payment systems or online stores, and when communicating online (via mail services, social networks, etc.).


ALTR Emerges From Stealth With Blockchain-Based Data Security Solution
7.6.2018 securityweek Security

Austin, Texas-based ALTR emerged from stealth mode on Wednesday with a blockchain-based data security platform and $15 million in funding.

ALTR announced the immediate availability of its product, which has been in development for nearly four years while the company operated in stealth mode.

Originally designed to serve as the public transactions ledger for the Bitcoin cryptocurrency, blockchain is a distributed database consisting of blocks that are linked and secured using cryptography. Companies have been increasingly using blockchain for purposes other than cryptocurrency transactions, including for identity verification and securing data and devices.

ALTR’s platform uses blockchain technology for secure data access and storage. Built on what the company calls ALTRchain, the solution allows organizations to monitor, access and store highly sensitive information.


The ALTR platform is designed to sit between data and applications, and it can be deployed without making any changes to existing software or hardware infrastructure. It offers support for all major database systems, including from Oracle, Microsoft and others.

The platform has three main components: ALTR Monitor, ALTR Govern, and ALTR Protect. ALTR Monitor provides intelligence on data access activities, creating an audit trail of blockchain-based log files.
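
ALTR has not published implementation details here, but the tamper-evidence property that blockchain-based log files provide can be illustrated with a generic hash-chained audit trail; the sketch below is not ALTR's code, and the event fields are invented.

# Generic sketch of a tamper-evident, hash-chained audit trail, the property
# blockchain-based log files provide. This is not ALTR's implementation.
import hashlib
import json
import time

def append_entry(chain, event):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"timestamp": time.time(), "event": event, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify_chain(chain):
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

audit_log = []
append_entry(audit_log, {"user": "analyst1", "action": "read", "record": "patient-42"})
append_entry(audit_log, {"user": "dba", "action": "export", "record": "billing-table"})
print(verify_chain(audit_log))            # True
audit_log[0]["event"]["action"] = "none"  # tampering with any entry breaks the chain
print(verify_chain(audit_log))            # False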

ALTR Govern is designed for controlling how users access business applications. Organizations can create and apply rule-based locks and access thresholds in an effort to prevent breaches.

ALTR Protect is designed to protect data at rest. It decentralizes sensitive data and stores it across a private blockchain in an effort to protect it against unauthorized access in case any single node has been compromised.

The company also announced that it has opened access to its proprietary blockchain technology by making available its ChainAPI, which allows developers to add ALTRchain to their applications.

ALTR has raised $15 million in funding from private and institutional sources in the cybersecurity, financial services and IT sectors. The money will be used to extend the reach of the company’s platform and launch additional products based on ALTRchain.

ALTR told SecurityWeek that its platform has already been deployed at a healthcare organization, a mid-sized service provider that caters to both Fortune 1000 companies and government agencies, and a couple of firms in the financial services sector.


Thousands of organizations leak sensitive data via misconfigured Google Groups
6.6.2018 securityaffairs Security

Security experts reported that a widespread Google Groups misconfiguration exposes sensitive information.
Administrators of organizations using Google Groups and G Suite must review their configuration to avoid the leakage of internal information.

Security researchers from Kenna Security have recently discovered that 31 percent of the 9,600 organizations analyzed are leaking sensitive e-mail information.

The list of affected entities also includes Fortune 500 companies, hospitals, universities and colleges, newspapers and television stations, and even US government agencies.

“Organizations utilizing G Suite are provided access to the Google Groups product, a web forum directly integrated with an organization’s mailing lists. Administrators may configure a Google Groups interface when creating a mailing list.” reads the blog post published by Kenna Security.

“Due to complexity in terminology and organization-wide vs group-specific permissions, it’s possible for list administrators to inadvertently expose email list contents. In practice, this affects a significant number of organizations”

The discovery is not new: back in 2017, experts found misconfigurations of G Suite that could lead to data leakage.

Unfortunately, since the first advisory published by experts at RedLock, many installs have continued to leak data. According to Kenna Security, the main reason is that Google Groups uses complex terminology and organization-wide vs group-specific permissions.


When a G Suite admin creates a Groups mailing list for specific recipients, a web interface for the list is also configured, available to users at https://groups.google.com.

Google Group privacy settings for individuals can be adjusted on both a domain and a per-group basis. In affected organizations, the Groups visibility setting is available by searching “Groups Visibility” after logging into https://admin.google.com and it is configured to “Public on the Internet”.

To discover if an organization is affected, administrators can browse to the configuration page by logging into G Suite as an administrator and typing “Settings for Groups for Business” or simply using this direct link.

“In almost all cases – unless you’re explicitly using the Google Groups web interface – this should be set to “Private”.” continues the post.

“If publicly accessible, you may access your organization’s public listing at the following link: https://groups.google.com/a/[DOMAIN]/forum/#!forumsearch/”

Administrators have to set the Google Group to private to protect internal information such as customer reviews, payable invoices, password recovery/reset e-mails, and more.

It is important to highlight that Google does not consider these configuration issues a vulnerability. Experts recommend that administrators read the Google Groups documentation and set the sharing option “Outside this domain – access to groups” to “Private”.
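For administrators who want to audit this across an entire domain rather than group by group, the check can be scripted against the Admin SDK Directory API and the Groups Settings API. The sketch below is only a rough illustration: the domain, credential file and admin address are hypothetical, and it assumes a service account with domain-wide delegation and the listed read-only scopes has already been configured.

from google.oauth2 import service_account
from googleapiclient.discovery import build

# Hypothetical service-account credentials with domain-wide delegation.
SCOPES = [
    "https://www.googleapis.com/auth/admin.directory.group.readonly",
    "https://www.googleapis.com/auth/apps.groups.settings",
]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
).with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=creds)
settings = build("groupssettings", "v1", credentials=creds)

# List every group in the domain and flag those readable by anyone on the internet.
page_token = None
while True:
    resp = directory.groups().list(domain="example.com", pageToken=page_token).execute()
    for group in resp.get("groups", []):
        cfg = settings.groups().get(groupUniqueId=group["email"]).execute()
        if cfg.get("whoCanViewGroup") == "ANYONE_CAN_VIEW":
            print("Publicly readable group:", group["email"])
    page_token = resp.get("nextPageToken")
    if not page_token:
        break

Any group flagged by a script like this is readable by anyone on the internet and, unless the public web interface is intentional, should be switched back to a private setting.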


HTTP Parameter Pollution Leads to reCAPTCHA Bypass
31.5.2018 securityweek Security

Earlier this year, a security researcher discovered that it was possible to bypass Google’s reCAPTCHA via HTTP parameter pollution.

The issue, application and cloud security expert Andres Riancho says, can be exploited when a web application crafts the request to /recaptcha/api/siteverify in an insecure way. Exploitation allows an attacker to bypass the protection every time.

When a web application using reCAPTCHA challenges the user, “Google provides an image set and uses JavaScript code to show them in the browser,” the researcher notes.

After solving the challenge, the user clicks verify, which triggers an HTTP request to the web application, which in turn verifies the user’s response with a request to Google’s reCAPTCHA API.

The application authenticates itself and sends a {reCAPTCHA-generated-hash} to the API to query the response. If the user solved the challenge correctly, the API sends an "OK" that the web application receives, processes, and most likely grants the user access to the requested resource.
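For reference, that server-side verification step typically looks something like the minimal sketch below, written here with Python's requests library; the secret value is a placeholder for the application's real reCAPTCHA secret key.

import requests

RECAPTCHA_SECRET = "YOUR-SECRET-KEY"  # placeholder for the application's secret key

def verify_recaptcha(user_response_token: str) -> bool:
    # Ask Google's siteverify endpoint whether the user solved the challenge.
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": user_response_token},
        timeout=10,
    )
    return resp.json().get("success", False)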

Riancho discovered that an HTTP parameter pollution flaw in the web application could be used to bypass reCAPTCHA (the need for such a flaw in the target application, however, reduces the severity of the vulnerability).

“HTTP parameter pollution is almost everywhere: client-side and server-side, and the associated risk depends greatly on the context. In some specific cases it could lead to huge data breach, but in most cases it is a low risk finding,” Riancho explains.

He notes that a request containing the same parameter twice received the same response as a request containing it once: the reCAPTCHA API would always use the first secret parameter in the request and ignore the second, an issue the researcher was able to exploit.

Additionally, Google provides web developers with a hard-coded site key and secret key for disabling reCAPTCHA verification in staging environments during testing, and the bypass leverages this functionality as well.

“If the application was vulnerable to HTTP parameter pollution AND the URL was constructed by appending the response parameter before the secret then an attacker was able to bypass the reCAPTCHA verification,” the researcher notes.

Two requirements must be met for the vulnerability to be exploitable: the web application needs to have an HTTP parameter pollution flaw in the creation of the reCAPTCHA URL, and it must build that URL with the response parameter first, followed by the secret. Overall, only around 3% of reCAPTCHA implementations were estimated to be vulnerable.
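To make the flaw concrete, consider a hypothetical application that builds the siteverify request as a URL by string concatenation, placing the user-supplied response token before its own secret. An attacker who submits a "token" with an extra secret parameter appended, set to the test secret key Google publishes for staging environments, produces a request in which the injected secret comes first, and before Google's fix that was the one the API honored. A rough sketch of the vulnerable pattern (the test-key value is left as a placeholder):

import requests

# Hypothetical vulnerable pattern: the verification URL is built by concatenation,
# with the attacker-controlled response token placed before the application's secret.
def vulnerable_verify(user_response_token: str, app_secret: str) -> bool:
    url = ("https://www.google.com/recaptcha/api/siteverify"
           "?response=" + user_response_token     # user input, not encoded
           + "&secret=" + app_secret)             # appended after the response
    return requests.get(url, timeout=10).json().get("success", False)

# The attacker's "token" injects a second secret parameter set to Google's
# documented test key (published in the reCAPTCHA FAQ), which always verifies.
GOOGLE_TEST_SECRET = "<documented-test-secret-key>"  # placeholder
malicious_token = "anything&secret=" + GOOGLE_TEST_SECRET

# Before the fix, siteverify used the first 'secret' it saw (the test key) and
# ignored the application's real secret, so vulnerable_verify() returned True.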

Riancho points out that Google addressed the issue in the REST API by returning an error when the HTTP request to /recaptcha/api/siteverify contains two parameters with the same name.

“Fixing it this way they are protecting the applications which are vulnerable to the HTTP Parameter Pollution and the reCAPTCHA bypass, without requiring them to apply any patches,” the researcher notes.

The issue was reported to Google on January 29, and a patch was released on March 25. The search giant paid the researcher $500 for the discovery.


Chinese researchers from Tencent discovered exploitable flaws in several BMW models
23.5.2018 securityaffairs Security

A team of security researchers from Chinese firm Tencent has discovered 14 security vulnerabilities in several BMW models.
Researchers from the Tencent Keen Security Lab have discovered 14 vulnerabilities affecting several BMW models, including BMW i Series, BMW X Series, BMW 3 Series, BMW 5 Series, and BMW 7 Series.

The team of experts conducted a year-long study between January 2017 and February 2018. They reported the issues to BMW and after the company started rolling out security patches the researchers published technical details for the flaws.

“we systematically performed an in-depth and comprehensive analysis of the hardware and software on Head Unit, Telematics Control Unit and Central Gateway Module of multiple BMW vehicles.” reads the report published by Tencent Keen Security Lab.

“Through mainly focusing on the various external attack surfaces of these units, we discovered that a remote targeted attack on multiple Internet-Connected BMW vehicles in a wide range of areas is feasible, via a set of remote attack surfaces (including GSM Communication, BMW Remote Service, BMW ConnectedDrive Service, UDS Remote Diagnosis, NGTP protocol, and Bluetooth protocol).”

According to the experts, the vulnerabilities affect cars produced since 2012. The white hat hackers focused their tests on the infotainment and telematics systems of the vehicles.

Eight of the vulnerabilities impact the infotainment system, four issues affect the telematics control unit (TCU), and two the central gateway module.

The TCU provides telephony services and accident assistance services, and implements remote control of the doors and climate. The central gateway receives diagnostic messages from the TCU and the head unit and forwards them to other Electronic Control Units (ECUs) on different CAN buses.

The experts discovered that an attacker could exploit the flaws, or chain some of them, to execute arbitrary code and take complete control of the affected component.

The experts demonstrated that a local attacker could hack BMW vehicles via a USB stick; in another attack scenario, they illustrated a remote hack through a software-defined radio.

Remote attacks can be conducted via Bluetooth or via cellular networks, but remotely hacking a BMW car is very complex to carry out because the attacker would need to compromise a local GSM mobile network.

“Our research findings have proved that it is feasible to gain local and remote access to infotainment, T-Box components and UDS communication above certain speed of selected BMW vehicle modules and been able to gain control of the CAN buses with the execution of arbitrary, unauthorized diagnostic requests of BMW in-car systems remotely,” the researchers state.
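The "unauthorized diagnostic requests" referenced here are Unified Diagnostic Services (UDS, ISO 14229) messages carried over the vehicle's CAN buses. Purely as a hypothetical illustration of what such a frame looks like, and not something taken from the Keen Lab report (the arbitration ID below is the generic OBD-II functional address, while real BMW ECUs use model-specific IDs), a DiagnosticSessionControl request could be sent with the python-can library as follows:

import can

# Hypothetical illustration: open a SocketCAN interface and send a UDS
# DiagnosticSessionControl request (service 0x10, extended session 0x03),
# framed as an ISO-TP single frame (first byte = payload length).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

msg = can.Message(
    arbitration_id=0x7DF,   # generic OBD-II functional request ID, not a BMW-specific ID
    data=[0x02, 0x10, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00],  # length, SID, sub-function, padding
    is_extended_id=False,
)
bus.send(msg)
print("Sent UDS DiagnosticSessionControl request")
bus.shutdown()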

BMW issued security updates to its backend systems and rolled out over-the-air patches for the TCU. The company also developed firmware updates that will be made available to customers at dealerships.

Neither BMW nor Keen Lab has revealed the list of affected models.

BMW named Keen Lab the first winner of the BMW Group Digitalization and IT Research Award.

In July 2017, the same team of security researchers from Tencent demonstrated how to remotely hack a Tesla vehicle.


Utimaco to Acquire Atalla Hardware Security Module Business From Micro Focus
21.5.2018 securityweek  Security

Aachen, Germany-based firm Utimaco will acquire the Atalla hardware security module (HSM) and enterprise secure key manager (ESKM) lines from UK-based Micro Focus.

The deal, announced on Friday, is expected to complete by September 2018, subject to regulatory approval; financial details of the transaction were not disclosed.

Both Utimaco and Atalla have been in the HSM business for around thirty years. Utimaco, the world's second largest supplier, has focused on general purpose HSMs sold via OEMs and the channel. Atalla has particular strengths in the financial services market, with access to top brand banking and financial services players, especially in the USA, UK and Asia.

"Both Utimaco and Atalla are pioneers in hardware security modules, the combination of which leads to an unrivalled wealth of experience and know-how," said Malte Pollmann, Utimaco’s CEO. "The acquisition of Atalla will mark a key milestone in the further implementation of our growth strategy. It is complementary in terms of product portfolio and regional footprint as well as the vertical markets we are addressing."

“As two of the leading pioneers in the hardware security modules business, Atalla and Utimaco are a perfect match, operating in complementary markets with aligned strengths that will help drive better alignment for customers and position Atalla for future growth,” said John Delk, general manager of security for Micro Focus.

Utimaco says it will maintain the existing Atalla team and further invest at Atalla's Sunnyvale, CA, location.

HSMs are specially hardened devices used to house and protect digital keys and signatures. Atalla's HSM is a payments hardware security module that protects sensitive data and the associated cryptographic keys used for non-cash retail payment transactions and cardholder authentication.

The ESKM line provides a centralized key management hardware-based solution for unifying and automating an organization’s encryption key controls by creating, protecting, serving, and auditing access to encryption keys.
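Key managers of this kind are typically accessed over the OASIS KMIP protocol, which ESKM is documented to support, so client applications create and retrieve keys on the appliance rather than handling raw key material locally. The minimal sketch below uses the open-source PyKMIP client against a generic KMIP endpoint; the hostname, port and certificate paths are hypothetical and are not taken from Atalla or Utimaco documentation.

from kmip.pie.client import ProxyKmipClient
from kmip.core import enums

# Hypothetical KMIP-compliant key manager endpoint and client credentials.
client = ProxyKmipClient(
    hostname="keymanager.example.com",
    port=5696,                # standard KMIP port
    cert="client-cert.pem",
    key="client-key.pem",
    ca="keymanager-ca.pem",
)

with client:
    # Create a 256-bit AES key on the key manager; only a UID is returned to the client.
    key_uid = client.create(enums.CryptographicAlgorithm.AES, 256)
    # An authorized application can later retrieve the key by UID.
    key = client.get(key_uid)
    print("Created key with UID:", key_uid)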

Micro Focus acquired Atalla after HPE CEO Meg Whitman announced, in September 2016, that HPE's software business, which included Atalla, would be spun out and merged with Micro Focus.

Utimaco was acquired by Sophos in 2009. One year later, Sophos sold a majority interest to Apax Partners, and this was followed by a management buyout in 2013. Today, Utimaco's primary investors are EQT, PINOVA Capital and BIP Investment Partners S.A.