An internet portal focused on computer security, hacking, anonymity, computer networks, programming, encryption, exploits, and Linux and BSD systems. It runs a number of interesting services and supports enthusiasts in interesting projects.


Thousands of Google Calendars Possibly Leaking Private Information Online

The Hacker News - 17 Září, 2019 - 22:03
"Warning — Making your calendar public will make all events visible to the world, including via Google search. Are you sure?" Remember this security warning? No? If you have ever shared your Google Calendar, perhaps inadvertently, with someone, and it should no longer be publicly accessible, you should immediately go back to your Google settings and check whether you are exposing all your events.
Kategorie: Hacking & Security

AMD Radeon Graphics Cards Open VMware Workstations to Attack

Threatpost - 17 Září, 2019 - 19:03
Bug impacts VMware Workstation 15 running 64-bit versions of Windows 10 as the guest VM.
Kategorie: Hacking & Security

How Google adopted BeyondCorp: Part 3 (tiered access)

Google Security Blog - 17 Září, 2019 - 18:48
Posted by Daniel Ladenheim, Software Engineer, and Hunter King, Security Engineer 


This is the third post in a series of four in which we revisit various BeyondCorp topics and share lessons learned along the internal implementation path at Google.

The first post in this series focused on providing necessary context for how Google adopted BeyondCorp, Google’s implementation of the zero trust security model. The second post focused on managing devices - how we decide whether or not a device should be trusted and why that distinction is necessary. This post introduces the concept of tiered access, its importance, how we implemented it, and how we addressed associated troubleshooting challenges.

High level architecture for BeyondCorp
What is Tiered Access?
In a traditional client certificate system, certificates are only given to trusted devices. Google used this approach initially as it dramatically simplified device trust. With such a system, any device with a valid certificate can be trusted. At predefined intervals, clients prove they can be trusted and a new certificate is issued. It’s typically a lightweight process and many off-the-shelf products exist to implement flows that adhere to this principle.

However, there are a number of challenges with this setup:
  • Not all devices need the same level of security hardening (e.g. non-standard issue devices, older platforms required for testing, BYOD, etc.).
  • These systems don’t easily allow for nuanced access based on shifting security posture.
  • These systems tend to evaluate a device based on a single set of criteria, regardless of whether devices require access to highly sensitive data (e.g. corporate financials) or far less sensitive data (e.g. a dashboard displayed in a public space).
The next challenge introduced by traditional systems is the inherent requirement that a device must meet your security requirements before it can get a certificate. This sounds reasonable on paper, but it unfortunately means that existing certificate infrastructure can't be used to aid device provisioning. This implies you must have additional infrastructure to bootstrap a device into a trusted state.
The most significant challenge is the large amount of time between trust evaluations. If you only install a new certificate once a year, it might take an entire year before you are able to recertify a device. Therefore, any new requirement you wish to add to the fleet may take up to a year before it is fully in effect. On the other hand, if you require certificates to be installed monthly or daily, you place a significant burden on your users and/or support staff, as they are forced to go through the certificate issuance process far more often, which can be time-consuming and frustrating. Additionally, if a device is found to be out of compliance with security policy, the only option is to remove all access by revoking the certificate, rather than degrading access, which can create a frustrating all-or-nothing situation for the user.

Tiered access attempts to address all these challenges, which is why we decided to adopt it. In this new model, certificates are simply used to provide the device’s identity, instead of acting as proof of trust. Trust decisions are then made by a separate system which can be modified without interfering with the certificate issuance process or validity. Moving the trust evaluation out-of-band from the certificate issuance allows us to circumvent the challenges identified above in the traditional system. Below are three ways in which tiered access helps address these concerns.
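The separation of identity from trust described here can be sketched as follows. This is an illustrative sketch, not Google's implementation; the tier names, device IDs, and the in-memory "trust database" are all assumptions:

```python
# Certificates answer only "which device is this?"; a separate,
# independently updatable system answers "how much do we trust it?".

# Current tier per device, maintained out-of-band by a trust evaluator
# (illustrative data).
TRUST_DB = {
    "laptop-1234": "highly_privileged",
    "byod-5678": "basic",
}

TIERS = ["untrusted", "basic", "highly_privileged"]

def device_id_from_certificate(cert: dict) -> str:
    # The certificate proves identity only; it carries no trust claim.
    return cert["subject"]

def authorize(cert: dict, required_tier: str) -> bool:
    device_id = device_id_from_certificate(cert)
    # Unknown devices default to untrusted; trust can change at any time
    # without reissuing the certificate.
    current = TRUST_DB.get(device_id, "untrusted")
    return TIERS.index(current) >= TIERS.index(required_tier)

print(authorize({"subject": "laptop-1234"}, "basic"))     # True
print(authorize({"subject": "unknown-9999"}, "basic"))    # False
```

Revoking trust is then a database update rather than a certificate revocation, which is what makes the re-evaluation frequency independent of certificate lifetime.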

Different access levels for different security states

By separating trust from identity, we can define an unlimited number of trust levels, if we so desire. At any point in time, we can define a new trust level, or adjust existing trust level requirements, and reevaluate a device's compliance. This is the heart of the tiered access system. It gives us the flexibility to define different device trust criteria for low-sensitivity applications than for highly trusted applications.

Solving the bootstrapping challenge

Multiple trust states enable us to use the system to initiate an OS installation. We can now allow access to bootstrapping (configuration and patch management) services based solely on whether we own the device. This enables provisioning to occur from untrusted networks allowing us to replace the traditional IP-based checks.

Configurable frequency of trust evaluations

The frequency of device trust evaluation is independent from certificate issuance in a tiered access setup. This means you can evaluate trust as often as you feel necessary. Changes to trust definitions can be immediately reflected across the entire fleet. Changes to device posture can similarly immediately impact trust.

We should note that the system’s ability to quickly remove trust from devices can be a double edged sword. If there are bugs in the trust definitions or evaluations themselves, this can also quickly remove trust from ‘good’ devices. You must have the ability to adequately test policy changes to mitigate the blast radius from these types of bugs, and ideally canary changes to subsets of the fleet for a baking period. Constant monitoring is also critical. A bug in your trust evaluation system could cause it to start mis-evaluating trust. It’s wise to add alarms if the system starts dropping (or raising) the trust of too many machines at once. The troubleshooting section below provides additional techniques to help minimize the impact of misconfigured trust logic.
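The alarm idea above can be sketched in a few lines; the 5% threshold and the tier names are invented for illustration:

```python
TIERS = ["untrusted", "basic", "highly_privileged"]

def trust_drop_alarm(previous: dict, current: dict,
                     threshold: float = 0.05) -> bool:
    """Fire if the fraction of devices whose tier dropped since the last
    evaluation exceeds `threshold` -- a symptom of a buggy trust rule."""
    dropped = sum(
        1 for dev, tier in current.items()
        if TIERS.index(tier) < TIERS.index(previous.get(dev, "untrusted"))
    )
    return dropped / max(len(current), 1) > threshold

prev = {"a": "basic", "b": "basic", "c": "highly_privileged"}
# A bad rule push demotes two of three devices at once:
curr = {"a": "untrusted", "b": "untrusted", "c": "highly_privileged"}
print(trust_drop_alarm(prev, curr))  # True -> page the on-call
```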

How did we define access tiers?
The basic concept of tiers is relatively straightforward: access to data increases as device security hardening increases. These tiers are useful for coarse-grained access control of client devices, which we have found to be sufficient in most cases. At Google, we allow the user to choose the device tier, which lets them weigh access needs against security requirements and policy. If a user needs access to more corporate data, they may have to accept more device configuration restrictions. If a user wants more control over their device and fewer restrictions, and doesn't need access to higher-risk resources, they can choose a tier with less access to corporate data. For more information about the measurable properties of a trusted platform, see our paper about Maintaining a Healthy Fleet.

We knew this model would work in principle, but we didn’t know how many access tiers we should define. As described above, the old model only had two tiers: Trusted and Untrusted. We knew we wanted more than that to enable trust build up at the very least, but we didn’t know the ideal number. More tiers allow access control lists to be specified with greater fidelity at the cost of confusion for service owners, security engineers, and the wider employee base alike.

At Google, we initially supported four distinct tiers ranging from Untrusted to Highly-Privileged Access. The extremes are easy to understand: Untrusted devices should only access data that is already public while Highly-Privileged Access devices have greater privilege internally. The middle two tiers allowed system owners to design their systems with the tiered access model in mind. Certain sensitive actions required a Highly-Privileged Access device while less sensitive portions of the system could be reached with less trusted devices. This degraded access model sounded great to us security wonks. Unfortunately, employees were unable to determine what tier they should choose to ensure they could access all the systems they needed. In the end, we determined that the extra middle tier led to intense confusion without much benefit.

In our current model, the vast majority of devices fit in one of three distinct tiers: Untrusted, Basic Access, and Highly-Privileged Access. In this model, system owners are required to choose the more trusted path if their system is more sensitive. This requirement does limit the finesse of the system but greatly reduces employee confusion and was key to a successful adoption.

In addition to tiers, our system is able to provide additional context to access gateways and underlying applications and services. This additional information is useful to provide finer grained, device-based access control. Imposing additional device restrictions on highly sensitive systems, in addition to checking the coarse grain tier, is a reasonable way to balance security vs user expectations. Because highly sensitive systems are only used by a smaller subset of the employee population, based on role and need, these additional restrictions typically aren’t a source of user confusion. With that in mind, please note that this article only covers device-based controls and does not address fine-grained controls based on a user’s identity.

At the other end of the spectrum, we have OS installation/remediation services. These systems are required in order to support bootstrapping a device which by design does not yet adhere to the Basic Access tier. As described earlier, we use our certificates as a device identity, not trust validation. In the OS installation case, no reported data exists, but we can make access decisions based on the inventory data associated with that device identity. This allows us to ensure our OS and security agents are only installed on devices we own and expect to be in use. Once the OS and security agents are up and running, we can use them to lock down the device and prove it is in a state worthy of more trust.

How did we create rules to implement the tiers?

Device-based data is the heart of BeyondCorp and tiered access. We evaluate trust tiers using data about each device at Google to determine its security integrity and tier level. To obtain this data, we built an inventory pipeline which aggregates data from various sources of authority within our enterprise to obtain a holistic, comprehensive view of a device's security posture. For example, we gather prescribed company asset inventory in one service and observed data reported by agents on the devices in other services. All of this data is used to determine which tier a device belongs in, and trust tiers are reevaluated every time corporate data is changed or new data is reported.

Trust level evaluations are made via "rules", written by security and systems engineers. For example, for a device to have basic access, we have a rule that checks that it is running an approved operating system build and version. For that same device to have highly-privileged access, it would need to pass several additional rules, such as checking the device is encrypted and contains the latest security patches. Rules exist in a hierarchical structure, so several rules can combine to create a tier. Requirements for tiers across device platforms can be different, so there is a separate hierarchy for each. Security engineers work closely with systems engineers to determine the necessary information to protect devices, such as determining thresholds for required minimum version and security patch frequency.
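The rule hierarchy might look something like this sketch; the specific rules, thresholds, and tier names are invented, not Google's actual policy:

```python
# Rules for one (hypothetical) platform. A device lands in the highest
# tier whose rules, plus all lower tiers' rules, it passes.
BASIC_RULES = [
    lambda d: d["os"] in {"approved-linux", "approved-macos"},
    lambda d: d["os_version"] >= (1, 4),
]
HIGHLY_PRIVILEGED_RULES = [
    lambda d: d["disk_encrypted"],
    lambda d: d["patch_level"] >= 201909,  # minimum patch, as YYYYMM
]

def evaluate_tier(device: dict) -> str:
    if not all(rule(device) for rule in BASIC_RULES):
        return "untrusted"
    if not all(rule(device) for rule in HIGHLY_PRIVILEGED_RULES):
        return "basic"
    return "highly_privileged"

device = {"os": "approved-linux", "os_version": (1, 6),
          "disk_encrypted": True, "patch_level": 201909}
print(evaluate_tier(device))  # highly_privileged
```

The separate hierarchy per platform that the post describes would simply be one such rule set per operating system.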

Rule Enforcement and User Experience

To create a good user experience, rules are created and monitored before being enforced. For example, before requiring all users to upgrade their Chrome browser, we monitor how many users would drop trust if that rule were enforced. Dashboards track rule impact on Googlers over 30-day periods. This enables security and systems teams to evaluate the impact of rule changes before they affect end users.

To further protect the employee experience, we have measures called grace periods and exceptions. Grace periods provide windows of a predefined duration in which devices can violate rules but still maintain trust and access, providing a fallback in case of unexpected consequences. Furthermore, grace periods can be applied quickly and easily across the fleet for disaster recovery purposes. The other mechanism is exceptions. Exceptions allow rule authors to create rules for the majority while enabling security engineers to make nuanced decisions around individual riskier processes. For example, if we have a team of Android developers specializing in user experience on an older Android version, they may be granted an exception to the minimum version rule.
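A sketch of how a single rule check could honor both mechanisms; the rule, the device fields, and the time handling are assumptions for illustration:

```python
import time

def rule_passes(device: dict, rule, grace_until: float = 0.0,
                exceptions: frozenset = frozenset(),
                now: float = None) -> bool:
    """Tolerate a violation while a fleet-wide grace period is in effect
    or the device holds an explicit exception."""
    now = time.time() if now is None else now
    if rule(device):
        return True
    if device["id"] in exceptions:          # per-device exception
        return True
    if now < grace_until:                   # fleet-wide grace window
        return True
    return False

min_chrome = lambda d: d["chrome_version"] >= 77
old_device = {"id": "android-ux-team-01", "chrome_version": 74}

# Violation, but the Android UX team holds an exception:
print(rule_passes(old_device, min_chrome,
                  exceptions=frozenset({"android-ux-team-01"})))   # True
# Same violation elsewhere, inside a 24-hour grace window:
print(rule_passes({"id": "x", "chrome_version": 74}, min_chrome,
                  grace_until=time.time() + 24 * 3600))            # True
```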

How did we simplify troubleshooting?

Troubleshooting access issues proves challenging in a system where many pieces of data interact to create trust. We tackle this issue in two ways. First, we have a system to provide succinct and actionable explanations to end users on how to resolve problems on their own. Second, we have the capability to notify users when their devices have lost trust or are about to lose trust. The combination of these efforts improves the user experience of the tiered access solution and reduces toil for those supporting it.

We are able to provide self-service feedback to users by closely integrating the creation of rule policy with resolution steps for that policy. In other words, security engineers who write rule policies are also responsible for attaching steps on how to resolve the issue. To further aid users, the rule evaluation system provides details about the specific pieces of data causing the failure. All this information is fed into a centralized system that generates user-friendly explanations, guiding users to self-diagnose and fix problems without the need for IT support. Likewise, a support technician may not be allowed to see certain pieces of PII about a user when helping fix the device. These cases are rare but necessary to protect the parties involved in these scenarios. Having one centralized debugging system helps deal with all these nuances, enabling us to provide detailed and safe explanations to end users in accordance with their needs.
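Coupling each rule with its remediation text, as described, could look like this minimal sketch (rule names and remediation strings are invented):

```python
# Each rule is authored together with the steps to resolve a failure,
# so evaluation output is directly actionable for the end user.
RULES = {
    "min_chrome_version": {
        "check": lambda d: d["chrome_version"] >= 77,
        "remediation": "Update Chrome and restart the browser.",
    },
    "disk_encrypted": {
        "check": lambda d: d["disk_encrypted"],
        "remediation": "Enable full-disk encryption in system settings.",
    },
}

def explain_failures(device: dict) -> list:
    return [spec["remediation"]
            for spec in RULES.values()
            if not spec["check"](device)]

device = {"chrome_version": 74, "disk_encrypted": True}
print(explain_failures(device))
# ['Update Chrome and restart the browser.']
```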

Remediation steps are communicated to users in several ways. Before a device loses trust, notification pop-ups explain to the user that a loss of access is imminent. These pop-ups contain directions to the remediation system so the user can self-diagnose and fix the problem. This avoids user pain by offering solutions before the problem impacts the user. These advance notifications work in conjunction with the aforementioned grace periods, as we provide a window in which users can fix their devices. If the issue is not fixed and the device goes out of compliance, there is still a clear path forward. For example, when a user attempts to access a resource for which they do not have permission, a link appears on the access-denied page directing them to the relevant remediation steps. This provides fast, clear feedback on how to fix the device and reduces toil for the IT support teams.

Next time

In the next and final post in this series, we will discuss how we migrated services to be protected by the BeyondCorp architecture at Google.

In the meantime, if you want to learn more, you can check out the BeyondCorp research papers. In addition, getting started with BeyondCorp is now easier using zero trust solutions from Google Cloud (context-aware access) and other enterprise providers.
Thank you to the editors of the BeyondCorp blog post series, Puneet Goel (Product Manager), Lior Tishbi (Program Manager), and Justin McWilliams (Engineering Manager).

Kategorie: Hacking & Security

Cisco Extends Patch for IPv6 DoS Vulnerability

Threatpost - 17 Září, 2019 - 17:24
The bug was first found in 2016.
Kategorie: Hacking & Security

Google Calendar Settings Gaffes Exposes Users’ Meetings, Company Details

Threatpost - 17 Září, 2019 - 17:20
A configuration setting in Google Calendars does not sufficiently warn users that it makes their calendars public to all, a researcher argues.
Kategorie: Hacking & Security

SSCP versus CCSP: Cloud security or systems security?

InfoSec Institute Resources - 17 Září, 2019 - 16:42

Introduction The SSCP (Systems Security Certified Practitioner) and CCSP (Certified Cloud Security Professional) certifications focus on systems security and cloud security, respectively. Both certifications are vendor-neutral and are offered by the same vendor — the International Information System Security Certification Consortium, or (ISC)². No matter how small or large an organization is, it needs to […]

The post SSCP versus CCSP: Cloud security or systems security? appeared first on Infosec Resources.

SSCP versus CCSP: Cloud security or systems security? was first posted on September 17, 2019 at 9:42 am.
©2017 "InfoSec Resources". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement. Please contact me at
Kategorie: Hacking & Security

Most spam is aimed at Germany and Russia - bezpečnost - 17 September 2019 - 15:25
Thousands of unsolicited e-mails are sent every second. Spam is most often sent from China and ends up in the inboxes of German and Russian users. This follows from a survey by the antivirus company Kaspersky covering the situation on the internet in the second quarter of this year.
Kategorie: Hacking & Security

Ethical hacking: What are exploits?

InfoSec Institute Resources - 17 Září, 2019 - 15:01

Introduction The very soul of ethical hacking consists of searching for vulnerabilities and weaknesses within an organization’s system, using methods and tools that attackers would use (with permission, of course). Taking this path will lead you to exploits — kind of like a twisted pot of gold at the end of the rainbow. This article […]

The post Ethical hacking: What are exploits? appeared first on Infosec Resources.

Kategorie: Hacking & Security

Top 10 network recon tools

InfoSec Institute Resources - 17 Září, 2019 - 15:00

Introduction: The need for recon Reconnaissance is an important first stage in any ethical hacking attempt. Before it’s possible to exploit a vulnerability in the target system, it’s necessary to find it. By performing reconnaissance on the target, an ethical hacker can learn about the details of the target network and identify potential attack vectors. […]

The post Top 10 network recon tools appeared first on Infosec Resources.

Kategorie: Hacking & Security

LastPass Fixes Bug That Leaks Credentials

Threatpost - 17 Září, 2019 - 14:18
The company has patched a vulnerability that could allow malicious sites unauthorized access to usernames and passwords.
Kategorie: Hacking & Security

Robocalls now flooding US phones with 200m calls per day

Sophos Naked Security - 17 Září, 2019 - 13:24
According to a new report, nearly 30% of all US calls placed in the first half of 2019 were garbage, as in, nuisance, scam or fraud calls.

Former hacker warns against password reuse

Sophos Naked Security - 17 Září, 2019 - 13:07
Kyle Milliken is back from jail, and he has some advice for you: Do. Not. Reuse. Your. Passwords.

US Treasury targets North Korean hacking groups

Sophos Naked Security - 17 Září, 2019 - 12:49
The US has formally sanctioned the Lazarus Group and offshoots Bluenoroff and Andariel, which are allegedly acting on behalf of the DPRK.

Scams are getting ever more sophisticated; attackers are targeting Instagram users - bezpečnost - 17 September 2019 - 12:05
Researchers from the SophosLabs cybersecurity team have warned of a new phishing attack targeting users of the social network Instagram. They also pointed out that the techniques of computer pirates are becoming ever more sophisticated, so even users who would previously have spotted the tricks of cyber attackers can easily be fooled.
Kategorie: Hacking & Security

Assessing the impact of protection from web miners

Kaspersky Securelist - 17 Září, 2019 - 12:00

Brief summary:

We present the results of evaluating the positive economic and environmental impact of blocking web miners with Kaspersky products. The total power saving can be calculated with known accuracy using the formula <w>·N, where <w> is the average value of the increase in power consumption of the user device during web mining, and N is the number of blocked attempts according to Kaspersky Security Network (KSN) data for 2018. This figure is equal to 18.8±11.8 gigawatts (GW), which is twice the average power consumption rate of all Bitcoin miners in that same year. To assess the amount of saved energy based on this power consumption rate, this number is multiplied by the average time that user devices spend on web mining, that is, according to the formula <w>·N·t, where t is the average time that web miners would have been working had they not been blocked by our products. Since this value cannot be obtained from Kaspersky data, we used information from open sources provided by third-party researchers, according to which the estimated amount of electricity saved by users of our products ranges from 240 to 1,670 megawatt hours (MWh). Using the average prices for individual consumers, this amount of electricity could cost up to 200 thousand dollars for residents in North America or up to 250 thousand euros for residents in Europe.

So what’s our contribution to the fight against excess energy consumption?

Cryptocurrency mining is an energy-intensive business. According to some estimates, Bitcoin miners consume the same amount of energy as the Czech Republic, a country of more than 10 million people (around 67 terawatt hours per year). At the same time, as we already noted, they do this with multiple redundancy — but only as long as this is economically justified. But what about users that are forced to mine against their will — that is, systems affected by web miners (websites mining cryptocurrency)? Since this most often happens illicitly, such sites are detected by security solutions as malicious and blocked.

In 2018, Kaspersky products blocked 470 million attempts to download scripts and attempts to connect to mining resources on the computers and devices participating in Kaspersky Security Network. Is it possible to assess the economic (and environmental) impact of this undoubtedly positive activity? To answer this question, we had to tackle several issues.

1. How much does power consumption increase when the system is mining cryptocurrency?

We couldn't find any open-source data on this matter, since most researchers are interested in, so to speak, the integral power consumption of cryptocurrency mining systems, that is, how much a particular hardware setup consumes in total and how much it costs to cover the electricity bill. Data on the most common mining systems can be found on specialized sites, which help to make an informed decision on economic viability and, accordingly, whether to mine or not to mine. We were interested in what portion of a system's total energy consumption relates specifically to web mining that happens without the user's consent.

To get an answer, we used a measuring bench previously set up to study surges in mobile energy consumption during USB charging and data exchange.

Using the computers of 18 volunteers (a big thank-you to them once again), we were able to experimentally determine the rise in the power consumption of 21 different devices when mining Monero on CoinHive (the most common cryptocurrency mining service). In brief, here’s what we managed to figure out:

  • Is there a dependence on the type of processor? Definitely.
  • Is there a dependence on the amount and type of memory? Definitely not.

This is clear from this image showing the increase in CPU load when mining begins:

As can be seen, the amount of memory used does not change and does not depend on the processor load.

  • Is there a dependence on Internet connection speed? We did not check this; in all experiments the connection speed was more or less the same.
  • Is there a dependence on the browser? Definitely not.
  • Is there a dependence on the type of operating system? Probably not.

It’s worth explaining here that we lack sufficient data to draw a definite conclusion regarding the operating system. We saw slightly different results for the same hardware running under different operating systems (namely, Mac OS and Windows); the difference falls within the boundaries of statistical error, and there are too few points for a reliable conclusion.

For comparison, the processor load under Mac OS looked something like this:

The situation is identical to the CPU load under Windows, not exceeding 10–12% in idle mode, and 100% during web mining.

The graph showing the dependence of the measured increase in energy consumption on the processors’ nominal TDP (thermally dissipated power), taken from the manual, looks something like this:

The red line shows the result of a linear approximation in the form ax + b, where a = 1.013±0.017 and b = –0.237±0.044 (determined by the least squares method, taking into account the measurement error at each point), as well as the range of values predicted by the model with 95% probability. There are slightly more outliers on this graph where energy consumption exceeds TDP than where it falls below TDP. Overall, however, for the purposes of further approximation, it is sufficient to use TDP as an estimate of the increase in energy consumption in web mining mode.
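A sketch of such a fit; the measurement data here is synthetic (the real 21-device data set is not published), so the fitted coefficients only roughly mimic the reported a ≈ 1.013, b ≈ –0.237:

```python
import numpy as np

# Synthetic (TDP, measured power increase) pairs standing in for the
# 21 measured devices; both columns in watts.
tdp = np.array([15.0, 25.0, 35.0, 45.0, 55.0, 65.0, 85.0])
measured = np.array([14.5, 25.8, 35.1, 44.0, 56.2, 65.5, 84.1])

# Ordinary least squares line measured = a*tdp + b (the article's fit
# additionally weighted each point by its measurement error).
a, b = np.polyfit(tdp, measured, deg=1)
print(f"a = {a:.3f}, b = {b:.3f}")
```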

2. But processors in the devices that block web miners have different TDPs. How do we evaluate the contribution of each of them?

To analyze the distribution pattern of processors by TDP, we used a random sample containing about 1% of the total number of devices participating in KSN. In this sample, we managed to identify 2,497 CPU types. Reference data on 1,550 types of processors were pulled automatically by playing with regular expressions and scraping open sources, the most useful of which was PassMark CPU Benchmark. Information about the remaining 947 types of processors had to be added manually.

The weighted average TDP from this data was calculated as

<w> = Σi fi·TDPi,  where fi = ni / Σj nj,

fi is the frequency of the i-th type of processor, and ni is the number of processors of the i-th type in the distribution of CPUs by TDP. However, the frequency distribution of CPU TDPs is far from normal, so we have to use a coarse estimate covering all TDP values from 15 to 65 W, that is, <w> = 40±25 W.
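The weighted average is just a frequency-weighted sum; a toy distribution makes it concrete (the counts are invented; the real sample covered 2,497 CPU types):

```python
# Invented counts of devices per CPU TDP (watts).
counts_by_tdp = {15: 300, 35: 400, 45: 200, 65: 100}

total = sum(counts_by_tdp.values())
# <w> = sum_i f_i * TDP_i  with  f_i = n_i / total
weighted_avg = sum(tdp * n / total for tdp, n in counts_by_tdp.items())
print(f"<w> = {weighted_avg:.1f} W")  # <w> = 34.0 W
```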

3. How to calculate the average working time of a web miner?

This is perhaps the most difficult question, since Kaspersky products are there to block web miners. A study by our colleagues from the Foundation for Research and Technology — Hellas (FORTH), a research center in Greece, gives an estimate of the average working time of web miners at 5.3 minutes. And a joint report by colleagues from the University of California, Santa Barbara, the University of Amsterdam, and the University of Utrecht estimates the average time that people spend on websites where web mining activity has been detected as approximately one minute.

At the time of writing, the web analytics service SimilarWeb had calculated the average time of a visit to a CoinHive mirror to be 46 seconds. As such, time is the most volatile parameter in our energy consumption formula Wtotal = <w>·N·t, where N is the number of detections and t is the time the web miner would have been in operation had it not been blocked by our product. Substituting the corresponding values, we obtain an estimate of Wtotal of 240 to 1,670 megawatt hours (MWh). While several orders of magnitude less than Bitcoin's total energy consumption of 67 terawatt hours, this is still a serious amount of energy, comparable to the annual energy consumption of a city with a population of several hundred thousand.
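Plugging the article's own numbers into Wtotal = <w>·N·t reproduces the stated 240–1,670 MWh range:

```python
w_avg = 40          # W, average extra power draw per mining device (<w>)
n_blocked = 470e6   # blocked web mining attempts in 2018 (N)

# t ranges from ~46 s (SimilarWeb average visit) to 5.3 min (FORTH estimate).
for t_seconds in (46, 5.3 * 60):
    joules = w_avg * n_blocked * t_seconds   # W_total = <w>·N·t
    mwh = joules / 3.6e9                     # 1 MWh = 3.6e9 J
    print(f"t = {t_seconds:5.0f} s -> {mwh:,.0f} MWh")
# t =    46 s -> 240 MWh
# t =   318 s -> 1,661 MWh
```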

Incidentally, the cost of the maximum amount of power that the miners we blocked could have consumed (1.67 GWh) varies around the world. If this amount of power were consumed entirely in Europe, European consumers would have to fork out €250,000, while for US residents the figure would be $200,000. The cheapest electricity of all would be available to residents of China and India, where negligence in the face of web miners would cost “only” $133,000. And in Japan, where electricity is the most expensive, that would cost half a million dollars.

As for the ecological impact, based on the IEA (International Energy Agency) global average carbon emission value of 475 kg/MWh, we can assume that we have prevented the release of 115 to 800 tons of CO2 into the atmosphere.

As you can see, the confidence intervals for our energy estimates are quite wide. This is because we had to use a time duration that we could not estimate directly. If we remove it from the equation, we get the "detection power" <w>·N, the combined power consumption rate of all blocked attempts. For the 470 million web mining attempts detected (and blocked) in 2018, this value is 18.8±11.8 gigawatts (GW). To use Bitcoin as a reference point, we can divide the known amount of energy Bitcoin miners consumed that same year by time, which gives roughly 7.65 GW, that is, half as much! Remember that the total energy consumed by Bitcoin miners was compared with the electricity used by the residents of the Czech Republic? Looking at the IEA's statistics for OECD countries, we found that 18.8 GW is comparable to the power consumption rate (the amount of energy consumed per unit of time) of a country such as Poland, which has almost three times as many residents as the Czech Republic. We can also compare this with the infamous Chernobyl nuclear plant, whose four reactors produced around 4 GW of power in total before the accident. In other words, in one year Kaspersky products saved as much power as the output of four Chernobyl plants, or double the power consumption rate of all Bitcoin miners worldwide.
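Both headline power figures check out with simple arithmetic:

```python
# "Detection power" of all blocked miners: <w>·N
detection_power_gw = 40 * 470e6 / 1e9        # = 18.8 GW

# Bitcoin's cited 67 TWh/year, expressed as an average power draw:
btc_avg_power_gw = 67e3 / (365 * 24)         # TWh/yr -> GW (8760 h/yr)

print(f"blocked web miners: {detection_power_gw:.1f} GW")
print(f"Bitcoin, 2018:      {btc_avg_power_gw:.2f} GW")  # ~7.65 GW
```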


The ongoing fight against web mining has been quite successful, in both its legal and technological aspects. However, as long as a profit can be made, cybercriminals will find ways to exploit the CPUs of unsuspecting victims. For instance, we had no trouble finding the above-mentioned CoinHive mirror, and it is likely that the owners of CoinHive and other web mining sites will continue their assaults on unsuspecting users in the future, on systems without information security solutions, of course.

The most effective constraint, in our view, is the situation on the cryptocurrency market as a whole: web miners will continue to exist as a threat as long as it remains possible to convert cryptoassets mined this way into fiat currency.

Which means that in order to make our foreseeable future greener, literally, security products would have to continue working for the good of their owners — and the entire world.

  1. We have been able to experimentally measure the dependence of the increase in power consumption on nominal TDP for 21 types of processors, which amounts to 0.8% of the total number of processor types in a random sample of 1% of the CPUs in Kaspersky Security Network (2,497 processor types). Given that, and also considering that the list of CPU types evolves with time, we plan to study this matter in more detail going forward.
  2. The estimates above were made under the assumption that the frequency distribution of TDPs for the CPUs that blocked web mining attempts was similar to the frequency distribution of TDPs determined from a random sample of approximately 1% of the total number of devices participating in Kaspersky Security Network. This assumption may be incorrect, but there is no technical way to check it, because Kaspersky Security Network operates only with depersonalized statistics, so we cannot match data on processor types with data on detections.

125 New Flaws Found in Routers and NAS Devices from Popular Brands

The Hacker News - 17 Září, 2019 - 11:58
The world of connected consumer electronics, IoT, and smart devices is growing faster than ever, with tens of billions of connected devices streaming and sharing data wirelessly over the Internet, but how secure is it? As we connect everything from coffee makers to front-door locks and cars to the Internet, we're creating more potential—and possibly more dangerous—ways for hackers to wreak havoc.
Kategorie: Hacking & Security

Talking to machines: Lisp and the origins of AI - 17 Září, 2019 - 11:44
This article explores the invention of Lisp and the rise of thinking computers powered by open-source software:
Kategorie: Hacking & Security

Russia reportedly breached encrypted FBI comms in 2010 - 17 Září, 2019 - 11:36
Are you aware that Russia reportedly breached FBI communications starting in 2010? The Obama administration seized two US compounds in response. Learn more:
Kategorie: Hacking & Security

Teen music hacker arrested in UK for stealing bands’ unreleased music

Sophos Naked Security - 17 Září, 2019 - 11:36
"If he's guilty, he'll face the music." Heh.

WhatsApp 'Delete for Everyone' Doesn't Delete Media Files Sent to iPhone Users

The Hacker News - 17 Září, 2019 - 11:17
Mistakenly sent a picture to someone via WhatsApp that you shouldn't have? Well, we've all been there, but what's more unfortunate is that the 'Delete for Everyone' feature WhatsApp introduced two years ago contains an unpatched privacy bug, leaving its users with a false sense of privacy. WhatsApp and its rival Telegram messenger offer "Delete for Everyone," a potentially life-saving feature
Kategorie: Hacking & Security