How to Get Going with CTEM When You Don't Know Where to Start
Cloudflare Thwarts Largest-Ever 3.8 Tbps DDoS Attack Targeting Global Sectors
WordPress LiteSpeed Cache Plugin Security Flaw Exposes Sites to XSS Attacks
Recently patched CUPS flaw can be used to amplify DDoS attacks
‘Pig butchering’ trading apps found on Google Play, App Store
The EU wants to know more about social media algorithms
Under the Digital Services Act (DSA), the European Commission has requested information from YouTube, Snapchat, and TikTok about which parameters their algorithms use to recommend social media content to users.
The Commission then wants to evaluate the extent to which these algorithms can amplify risks linked to, for example, democratic elections, mental health and children’s well-being. The authority also wants to look at how the platforms work to reduce the potential impact their recommendation systems have on the spread of illegal content, such as the promotion of drugs and incitement against ethnic groups.
The social media companies have until Nov. 15 to provide the requested information.
Dutch Police: ‘State actor’ likely behind recent data breach
What’s new for Apple Intelligence?
Most Apple watchers may have noticed that the company’s iPhone 16 marketing really does put Apple Intelligence front and center, even though its home-baked breed of AaI (Artificial [Apple] Intelligence) isn’t available quite yet.
All the same, the system, which we explain in great depth here, is on the way. And in the run up to its arrival, we’re learning more about it, and when and how it will be introduced. As we wait on data about the extent to which Apple Intelligence boosts future iPhone sales, read on to learn when Apple Intelligence will come to your nation, what schedule the various tools are shipping on, and other recently revealed details concerning Apple’s hugely hyped service.
When is Apple Intelligence coming?
Apple will introduce the first of its Apple Intelligence services with the release of iOS 18.1. More tools and services will be made available later this year and across 2025, when the company will likely introduce brand new and unannounced features. You will need an iPhone 16 series device, an iPhone 15 Pro series device, or an iPad or Mac with an M1 chip or later to run the system.
What schedule are service releases on?
A Bloomberg report tells us when to expect Apple Intelligence features to appear:
iOS 18.1: Due in mid-October, this first set of features will include various Writing tools, phone call recording and transcription, a smart focus mode and Memories movies. Apple tells us the feature list includes:
- Writing Tools.
- Clean Up in Photos.
- Create a Memory movie in Photos.
- Natural language search in Photos.
- Notification summaries.
- Reduce Interruptions Focus.
- Intelligent Breakthrough and Silencing in Focus.
- Priority messages in Mail.
- Smart Reply in Mail and Messages.
- Summaries in Mail and Messages.
- And Siri enhancements, including product knowledge, more resilient request handling, a new look and feel, a more natural voice, the ability to type to Siri, and more.
In December, we should see Apple make Genmoji and Image Playground services available.
iOS 18.4: This is when Siri will be overhauled to become more contextually aware and capable of providing more personally relevant responses. This release is thought to be coming in March and will be preceded by a more minor update (iOS 18.3).
Where will Apple Intelligence be available?
Bad news, good news. The good news is that US iPhone owners will get to use Apple Intelligence as soon as iOS 18.1 ships. The other good news is that any user anywhere willing to set their device language to US English should also be able to run the services; if you want to keep your iPhone running your language, you’ll have to wait a little while.
Apple has promised to introduce localized English support for the following countries in December: Australia, Canada, New Zealand, South Africa, and the United Kingdom.
Throughout 2025, the company has promised to introduce Apple Intelligence support for English (India), English (Singapore), French, German, Italian, Japanese, Korean, Portuguese, Spanish, and Vietnamese. The company also promised support for “other” languages, but hasn’t announced which ones. For the moment, at least, Apple Intelligence will not be available in the EU.
How much storage does the system need?
An Apple document confirms that Apple Intelligence requires 4GB of available iPhone storage to download, install, and use. The company hasn’t disclosed how much space is required on iPads or Macs, but it seems reasonable to expect it’s close to the same. Apple also warns that the amount of required storage could increase as new features are introduced.
What else to know
Apple now sees AI as a hugely important component of its business moving forward. That means the service will work on all future iPads, Macs, and iPhones (including the iPhone SE). It also means the company is plotting a path to support the service on visionOS devices and the HomePod, and to deploy it in future products, including an intelligent home automation and management system it apparently plans, along with the introduction (at last) of a “HomeOS.” There’s more information here.
Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.
Microsoft and DOJ disrupt Russian FSB hackers' attack infrastructure
Over 4,000 Adobe Commerce, Magento shops hacked in CosmicSting attacks
Effective Fuzzing: A Dav1d Case Study
Guest post by Nick Galloway, Senior Security Engineer, 20% time on Project Zero
Late in 2023, while working on a 20% project with Project Zero, I found an integer overflow in the dav1d AV1 video decoder. That integer overflow leads to an out-of-bounds write to memory. Dav1d 1.4.0 patched this, and it was assigned CVE-2024-1580. After the disclosure, I received some questions about how this issue was discovered, since dav1d is already being fuzzed by at least oss-fuzz. This blog post explains what happened. It’s a useful case study in how to construct fuzzers to exercise as much code as possible. But first, some background...
Background
Dav1d
Dav1d is a highly-optimized AV1 decoder. AV1 is a royalty-free video coding format developed by the Alliance for Open Media, and achieves improved data compression compared to older formats. AV1 is widely supported by web browsers, and a significant parsing vulnerability in AV1 decoders could be used as part of an attack to gain remote code execution. In the right context, where AV1 is parsed in a received message, this could allow a 0-click exploit. Testing some popular messaging clients by sending AV1 videos and AVIF images (which use the AV1 codec) yielded the following results:
- AVIF images are displayed in iMessage
- AVIF images are NOT displayed in Android Messages when sent as an MMS
- AVIF images are displayed in Google Chat
- AV1 videos are not immediately displayed in Google Chat, but can be downloaded by the receiver and eventually can be played after being downscaled
Dav1d is written primarily in C and notably has different code paths for different architectures. There are x86, x86-64, ppc, riscv, arm32, and arm64 code paths in the repository, most of these containing optimized assembly. As noted in the project roadmap, support for some of these is ongoing work, but at least ARMv7, ARMv8, and x86-64 have been thoroughly tested in the field. Given that this is a library written in C and assembly, and given dav1d’s ubiquitous support in web browsers, one would expect it to already have excellent fuzzing coverage from multiple sources.
The integer overflow
The full details, including two proof-of-concepts that can be used to reproduce the vulnerability, are available from the Project Zero bug tracker. The short explanation is that when multiple decoding threads are used, a signed 32-bit integer overflow can occur when calculating the values to put in the tile start offset array. In the excerpt below, the addition overflows:
f->frame_thread.tile_start_off[tile_idx++] = row_off + b_diff *
f->frame_hdr->tiling.col_start_sb[tile_col] * f->sb_step * 4;
These overflowed values in tile_start_off are then passed to setup_tile():
setup_tile(&f->ts[j], f, data, tile_sz, tile_row, tile_col++,
c->n_fc > 1 ? f->frame_thread.tile_start_off[j] : 0);
The tile_start_off parameter to setup_tile() is from f->frame_thread.tile_start_off[j] above, and is used to calculate values for several pointers. (Note that pal_idx, cbi, and cf are pointers in the frame_thread struct, as can be seen in internal.h.)
static void setup_tile(Dav1dTileState *const ts,
const Dav1dFrameContext *const f,
const uint8_t *const data, const size_t sz,
const int tile_row, const int tile_col,
const int tile_start_off)
...
ts->frame_thread[p].pal_idx = f->frame_thread.pal_idx ?
&f->frame_thread.pal_idx[(size_t)tile_start_off * size_mul[1] / 8] :
NULL;
ts->frame_thread[p].cbi = f->frame_thread.cbi ?
&f->frame_thread.cbi[(size_t)tile_start_off * size_mul[0] / 64] :
NULL;
ts->frame_thread[p].cf = f->frame_thread.cf ?
(uint8_t*)f->frame_thread.cf +
(((size_t)tile_start_off * size_mul[0]) >> !f->seq_hdr->hbd) :
NULL;
Those pointers are later written to, resulting in an out of bounds write to memory. Two test cases are provided with the bug, the first of which (poc1.obu) will result in an address which is outside the valid range of addresses, and so might not be exploitable. The other test case (poc2.obu) enables high bit depth mode and so has higher memory requirements, but results in pointers that are within the normal range of addresses, and so is more likely to be useful in an exploit.
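To make the arithmetic concrete, here is a minimal standalone sketch of the overflow. The values are made up purely to illustrate the order of magnitude a very large frame can produce (the real values come from the frame header and tiling configuration), and row_off is omitted for brevity:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical values, chosen only to show the magnitude involved for a
     * very large frame; they are not taken from a real dav1d frame header. */
    int32_t b_diff       = 100000;  /* 4x4 blocks between consecutive tile rows */
    int32_t col_start_sb = 390;     /* superblock column where the tile starts  */
    int32_t sb_step      = 16;

    /* Same shape as the dav1d expression: every operand is a 32-bit int, so the
     * product is evaluated in 32-bit signed arithmetic. 100000 * 390 * 16 * 4 is
     * roughly 2.5 billion, which exceeds INT32_MAX. Signed overflow is undefined
     * behavior; in practice it typically wraps to a negative value, and it is
     * exactly what UBSan flags. */
    int32_t offset = b_diff * col_start_sb * sb_step * 4;

    /* Widening to 64 bits first gives the intended value. */
    int64_t intended = (int64_t)b_diff * col_start_sb * sb_step * 4;

    printf("32-bit result: %d\n", (int)offset);
    printf("64-bit result: %lld\n", (long long)intended);
    return 0;
}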
Fuzzing Space Definition
A fuzzer’s success is typically measured by “coverage”, where the fuzz target's execution is traced to examine which lines of assembly code have been covered. When I talk about the “fuzzing space”, I specifically mean space in the sense of a mathematical space, where the set of lines of code that are executed by a given set of test cases is something we would like to maximize. In other words, a good fuzzer will execute as many lines of code as possible with the smallest possible set of test cases. To fully define the space we would also consider the fuzzing engine that generates test cases, the initial seed corpus, and the various configurations and architectures supported by the code to be fuzzed.
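Stated a bit more formally (this notation is mine, not the post’s): if $T$ is the set of test cases and $\mathrm{cov}(t)$ is the set of lines executed by test case $t$, the goal is roughly to

maximize $\left|\bigcup_{t \in T} \mathrm{cov}(t)\right|$ while keeping $|T|$ small,

with the caveat that the reachable space also depends on the fuzzing engine, the seed corpus, and the build configuration and architecture.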
Modified Dav1d Fuzzer
The dav1d fuzzer used by oss-fuzz at the time I was looking at dav1d is visible on GitHub. This contains build instructions and a Dockerfile for oss-fuzz to run it at scale. The fuzzer implementation is in the dav1d source repository. The meson.build file shows a couple of configurations, one for building dav1d_fuzzer and the other for building dav1d_fuzzer_mt, which additionally defines DAV1D_MT_FUZZING.
The fuzzing code is written in C, found in dav1d_fuzzer.c. The fuzzer implements LLVMFuzzerTestOneInput, the standard entry point for libFuzzer. The first thing the fuzzer does is the usual variable declarations of any C function, including a Dav1dSettings struct initialized to all zeroes. A bit later, the fuzzer uses a function to initialize the settings struct with defaults:
dav1d_default_settings(&settings);
#ifdef DAV1D_MT_FUZZING
settings.max_frame_delay = settings.n_threads = 4;
#elif defined(DAV1D_ALLOC_FAIL)
settings.max_frame_delay = max_frame_delay;
settings.n_threads = n_threads;
dav1d_setup_alloc_fail(seed, probability);
#else
settings.max_frame_delay = settings.n_threads = 1;
#endif
#if defined(DAV1D_FUZZ_MAX_SIZE)
settings.frame_size_limit = DAV1D_FUZZ_MAX_SIZE;
#endif
It’s good that the fuzzer will create one or four threads, depending on the fuzzer configuration, but if vulnerabilities exist only when there are three threads, or 19 threads, these will not be detected by any use of this fuzzer. That said, since the code paths for the threaded option mostly seem to differ based only on whether the number of threads is 1 or some other number, that seems unlikely.
There are some other configuration items that users of the dav1d library might configure differently. As one example, output_invisible_frames is always zero in this fuzzer. If a vulnerability existed only when this was nonzero, the fuzzer would not catch it in either the single-threaded or multithreaded fuzzing configuration.
Another example of untested coverage in the dav1d fuzzer, and the most interesting one for me because it led to the discovery of the integer overflow vulnerability, is the usage of DAV1D_FUZZ_MAX_SIZE.
#define DAV1D_FUZZ_MAX_SIZE 4096 * 4096
…
#if defined(DAV1D_FUZZ_MAX_SIZE)
settings.frame_size_limit = DAV1D_FUZZ_MAX_SIZE;
#endif
This maximum frame size limit does not exist in all configurations used by dav1d users, and although 32-bit platforms have an internally applied limit, there is no limit by default on 64-bit platforms. Removing this line (and using the multithreaded fuzzer with UBSan enabled) was enough to trigger the integer overflow. To allow the fuzzer to explore more of the configuration space, I added support for the fuzzer to also fuzz many of the configuration settings that can be passed to dav1d. The code for this "configuration fuzzing" is shown in the excerpt below. It contains a few range restrictions only to avoid triggering asserts. This might be an area to explore in the future, in case any of these asserts are absent from production systems in which the configuration can be influenced by an attacker.
struct SettingsFuzz {
int n_threads;
int max_frame_delay;
int apply_grain;
int operating_point;
int all_layers;
unsigned frame_size_limit;
int strict_std_compliance;
int output_invisible_frames;
int inloop_filters;
int decode_frame_type;
};
Dav1dSettings newSettings(struct SettingsFuzz sf) {
Dav1dSettings settings = {0};
dav1d_default_settings(&settings);
// Some of these trigger an assert if they're out of range
if (sf.n_threads < 0 || sf.n_threads > DAV1D_MAX_THREADS) {
sf.n_threads = DAV1D_MAX_THREADS / 2;
}
settings.n_threads = sf.n_threads;
if (sf.max_frame_delay < 0 || sf.max_frame_delay > DAV1D_MAX_FRAME_DELAY) {
sf.max_frame_delay = DAV1D_MAX_FRAME_DELAY;
}
settings.max_frame_delay = sf.max_frame_delay;
settings.apply_grain = sf.apply_grain;
if (sf.operating_point < 0 || sf.operating_point > 31) {
sf.operating_point = 0;
}
settings.operating_point = sf.operating_point;
settings.all_layers = sf.all_layers;
settings.frame_size_limit = sf.frame_size_limit;
settings.strict_std_compliance = sf.strict_std_compliance;
settings.output_invisible_frames = sf.output_invisible_frames;
settings.inloop_filters = (enum Dav1dInloopFilterType)sf.inloop_filters;
if (sf.decode_frame_type < 0 ||
sf.decode_frame_type > (int)DAV1D_DECODEFRAMETYPE_KEY) {
sf.decode_frame_type = 0;
}
settings.decode_frame_type = (enum Dav1dDecodeFrameType)sf.decode_frame_type;
return settings;
}
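For context, a minimal sketch of how such a settings struct could be consumed from the head of the fuzz input inside the standard libFuzzer entry point is shown below. This illustrates the general "configuration fuzzing" technique rather than the actual patch; it assumes the SettingsFuzz struct and newSettings() from the excerpt above, and elides the OBU-feeding and dav1d open/send/get/close logic that the existing dav1d_fuzzer.c already contains.

#include <stdint.h>
#include <string.h>
#include <dav1d/dav1d.h>

/* Sketch only: SettingsFuzz and newSettings() are from the excerpt above. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
    struct SettingsFuzz sf;
    if (size < sizeof sf)
        return 0;

    /* Let the fuzzer choose the library configuration from the first bytes of
     * the input; out-of-range fields are clamped inside newSettings(). */
    memcpy(&sf, data, sizeof sf);
    data += sizeof sf;
    size -= sizeof sf;

    Dav1dSettings settings = newSettings(sf);

    /* ... feed the remaining bytes to the decoder as OBU data, exactly as the
     * existing fuzzer does with its fixed settings ... */
    (void)settings;
    return 0;
}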
Instead of placing limits in the fuzzer, such limits are better placed in the code itself. Taking the maximum frame size as an example: on 32-bit systems dav1d restricts frame sizes to 8192*8192, regardless of the frame_size_limit configuration (see the code excerpt below). On 64-bit systems there is no such limit, so very large frames, such as 50,000x50,000, are possible.
/* On 32-bit systems extremely large frame sizes can cause overflows in
* dav1d_decode_frame() malloc size calculations. Prevent that from occuring
* by enforcing a maximum frame size limit, chosen to roughly correspond to
* the largest size possible to decode without exhausting virtual memory. */
if (sizeof(size_t) < 8 && s->frame_size_limit - 1 >= 8192 * 8192) {
c->frame_size_limit = 8192 * 8192;
if (s->frame_size_limit)
dav1d_log(c, "Frame size limit reduced from %u to %u.\n",
s->frame_size_limit, c->frame_size_limit);
}
There’s an understandable desire to avoid reporting errors when a fuzzer triggers huge allocations. When possible, this part of the fuzzing space could be explored with a shard configured to run with a much larger amount of memory than is usually available.
I also tested a number of other avenues that did not lead to discovering vulnerabilities. One example is fuzzing on ARM, which I had expected might result in a vulnerability due to it not being covered by OSS-Fuzz. Despite this not uncovering anything, I still believe that it’s worthwhile to run fuzz tests on other architectures when possible, especially when the target has different code paths and optimized assembly for different architectures, as is the case with dav1d.
Conclusion
The ultimate lesson I took away from this is that a fruitful area to look for vulnerabilities is artificial limits within a fuzzer. By setting a relatively small frame_size_limit, the dav1d fuzzer missed the integer overflow. There is a good reason for this limit, which is that oss-fuzz only supports 2.5GB of RAM. This highlights a tradeoff for fuzzers. By limiting the amount of RAM we can hope to increase coverage overall by fitting more fuzzers within the machines we have. Unfortunately this means limited coverage for the part of the fuzzing space that requires more memory.
Until memory safe parsers are available and widely used, memory corruption issues will continue to present a serious threat to users. For now, perhaps we can create fuzzers that are configured to occasionally explore parts of the fuzzing space that require more memory.
P.S.: OSS-Fuzz Bughunters Reward Program
Finally, I would like to mention that as of writing there is a Google bughunters reward program for improving fuzzing coverage in critical OSS projects. See the bughunters site for more details.
Google Adds New Pixel Security Features to Block 2G Exploits and Baseband Attacks
Pixel's Proactive Approach to Security: Addressing Vulnerabilities in Cellular Modems
Pixel phones have earned a well-deserved reputation for being security-conscious. In this blog, we'll take a peek under the hood to see how Pixel mitigates common exploits on cellular basebands.
Smartphones have become an integral part of our lives, but few of us think about the complex software that powers them, especially the cellular baseband – the processor on the device responsible for handling all cellular communication (such as LTE, 4G, and 5G). Most smartphones use cellular baseband processors with tight performance constraints, making security hardening difficult. Security researchers have increasingly exploited this attack vector and routinely demonstrated the possibility of exploiting basebands used in popular smartphones.
The good news is that Pixel has been deploying security hardening mitigations in our basebands for years, and Pixel 9 represents the most hardened baseband we've shipped yet. Below, we’ll dive into why this is so important, how specifically we’ve improved security, and what this means for our users.
The Cellular Baseband
The cellular baseband within a smartphone is responsible for managing the device's connectivity to cellular networks. This function inherently involves processing external inputs, which may originate from untrusted sources. For instance, malicious actors can employ false base stations to inject fabricated or manipulated network packets. In certain protocols like IMS (IP Multimedia Subsystem), this can be executed remotely from any global location using an IMS client.
The firmware within the cellular baseband, similar to any software, is susceptible to bugs and errors. In the context of the baseband, these software vulnerabilities pose a significant concern due to the heightened exposure of this component within the device's attack surface. There is ample evidence demonstrating the exploitation of software bugs in modem basebands to achieve remote code execution, highlighting the critical risk associated with such vulnerabilities.
The State of Baseband Security
Baseband security has emerged as a prominent area of research, with demonstrations of software bug exploitation featuring in numerous security conferences. Many of these conferences now also incorporate training sessions dedicated to baseband firmware emulation, analysis, and exploitation techniques.
Recent reports by security researchers have noted that most basebands lack exploit mitigations commonly deployed elsewhere and considered best practices in software development. Mature software hardening techniques that are commonplace in the Android operating system, for example, are often absent from the cellular firmware of many popular smartphones.
There are clear indications that exploit vendors and cyber-espionage firms abuse these vulnerabilities to breach the privacy of individuals without their consent. For example, 0-day exploits in the cellular baseband are being used to deploy the Predator malware in smartphones. Additionally, exploit marketplaces explicitly list baseband exploits, often with relatively low payouts, suggesting a potential abundance of such vulnerabilities. These vulnerabilities allow attackers to gain unauthorized access to a device, execute arbitrary code, escalate privileges, or extract sensitive information.
Recognizing these industry trends, Android and Pixel have proactively updated their Vulnerability Rewards Program in recent years, placing a greater emphasis on identifying and addressing exploitable bugs in connectivity firmware.
Building a Fortress: Proactive Defenses in the Pixel Modem
In response to the rising threat of baseband security attacks, Pixel has incrementally incorporated many of the following proactive defenses over the years, with the Pixel 9 phones (Pixel 9, Pixel 9 Pro, Pixel 9 Pro XL and Pixel 9 Pro Fold) showcasing the latest features:
- Bounds Sanitizer: Buffer overflows occur when a bug in code allows attackers to cram too much data into a space, causing it to spill over and potentially corrupt other data or execute malicious code. Bounds Sanitizer automatically adds checks around a specific subset of memory accesses to ensure that code does not access memory outside of designated areas, preventing memory corruption.
- Integer Overflow Sanitizer: Numbers matter, and when they get too large an “overflow” can cause them to be incorrectly interpreted as smaller values. The reverse can happen as well: a number can overflow in the negative direction and be incorrectly interpreted as a larger value. These overflows can be exploited by attackers to cause unexpected behavior. Integer Overflow Sanitizer adds checks around these calculations to eliminate the risk of memory corruption from this class of vulnerabilities.
- Stack Canaries: Stack canaries are like tripwires set up to ensure code executes in the expected order. If a hacker tries to exploit a vulnerability in the stack to change the flow of execution without being mindful of the canary, the canary "trips," alerting the system to a potential attack.
- Control Flow Integrity (CFI): Similar to stack canaries, CFI makes sure code execution is constrained along a limited number of paths. If an attacker tries to deviate from the allowed set of execution paths, CFI causes the modem to restart rather than take the unallowed execution path.
- Auto-Initialize Stack Variables: When memory is designated for use, it’s not normally initialized in C/C++, as it is expected the developer will correctly set up the allocated region. When a developer fails to handle this correctly, the uninitialized values can leak sensitive data or be manipulated by attackers to gain code execution. Pixel phones automatically initialize stack variables to zero, preventing this class of vulnerabilities for stack data.
We also leverage a number of bug detection tools, such as address sanitizer, during our testing process. This helps us identify software bugs and patch them prior to shipping devices to our users.
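As a rough point of reference, similar mitigations can be enabled for ordinary userspace C code with Clang. The post does not name the toolchain options used for the modem firmware, so the flags below are only an analogy, not the Pixel build configuration; the demo program deliberately contains the two bug classes the sanitizers catch:

/* Illustrative only. Roughly analogous Clang options for userspace code:
 *
 *   clang -O2 harden_demo.c -o harden_demo \
 *       -fsanitize=bounds,signed-integer-overflow \   (bounds / integer overflow checks)
 *       -fstack-protector-strong \                    (stack canaries)
 *       -ftrivial-auto-var-init=zero                  (auto-initialized stack variables)
 *
 * Clang's CFI additionally requires LTO: -flto -fvisibility=hidden -fsanitize=cfi
 */
#include <stdio.h>

int main(int argc, char **argv) {
    (void)argv;

    int buf[4];
    int idx = argc + 9;          /* always past the end of buf at runtime          */
    buf[idx] = 1;                /* trapped by -fsanitize=bounds                    */

    int big = 0x7fffffff;
    printf("%d\n", big + argc);  /* trapped by -fsanitize=signed-integer-overflow   */
    return 0;
}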
The Pixel Advantage: Combining Protections for Maximum Security
Security hardening is difficult and our work is never done, but when these security measures are combined, they significantly increase Pixel 9’s resilience to baseband attacks.
Pixel's proactive approach to security demonstrates a commitment to protecting its users across the entire software stack. Hardening the cellular baseband against remote attacks is just one example of how Pixel is constantly working to stay ahead of the curve when it comes to security.
Special thanks to our colleagues who supported our cellular baseband hardening efforts: Dominik Maier, Shawn Yang, Sami Tolvanen, Pirama Arumuga Nainar, Stephen Hines, Kevin Deus, Xuan Xing, Eugene Rodionov, Stephan Somogyi, Wes Johnson, Suraj Harjani, Morgan Shen, Valery Wu, Clint Chen, Cheng-Yi He, Estefany Torres, Hungyen Weng, Jerry Hung, Sherif Hanna
OpenAI continues to burn money
Although OpenAI’s revenues are increasing significantly, the generative AI (genAI) pioneer remains dependent on financial injections, according to Reuters.
The maker of ChatGPT generated revenue of $300 million in September alone, sources said — an increase of 1700% compared to the beginning of 2023. And the company expects revenue to jump to $11.6 billion next year.
Nevertheless, OpenAI expects to lose around $5 billion this year despite sales of $3.7 billion.
Expenses can only be partially traced
Various factors are responsible for the high losses, reports The New York Times. One of the biggest increases in operating costs this year has been energy consumption, driven by the enormous upswing in usage since the launch of ChatGPT at the end of 2022. The company sells subscriptions for various tools, and the startup grants licenses to numerous companies for the use of large language models (LLMs) from its GPT family.
Employee salaries and office rent also have a financial impact.
AI needs more money
In order to cover existing debts and further increase growth, the genAI company has for some time been aiming for another round of financing, which should also help manage energy costs.
The latest financing round — led by Thrive Capital, a US venture capital firm that plans to invest $1 billion — brought in $6.6 billion and pushed the company’s valuation to $157 billion. At the same time, OpenAI is warning investors away from rivals like Anthropic, xAI and Safe Superintelligence (SSI), a startup launched by OpenAI co-founder Ilya Sutskever.
Microsoft on board, Apple shies away
Microsoft, which like Thrive has previously invested several billion dollars in OpenAI, also wants to participate in this round. But Apple, which was also interested in investing, has since dropped out, according to Reuters.
One reason for Apple’s change of heart could be internal turmoil caused by the board’s plans to transform OpenAI into a for-profit company. Following the announcement of those plans, there were a number of key departures at OpenAI, most notably that of CTO Mira Murati.
In the near term, the growth of OpenAI is likely to continue; according to analysts’ calculations, the company has now achieved a market share of 30%.
Fraudsters imprisoned for scamming Apple out of 6,000 iPhones
Cloudflare blocks largest recorded DDoS attack peaking at 3.8Tbps
Evaluating Mitigations & Vulnerabilities in Chrome
The Chrome Security Team is constantly striving to make it safer to browse the web. We invest in mechanisms to make classes of security bugs impossible, mitigations that make it more difficult to exploit a security bug, and sandboxing to reduce the capability exposed by an isolated security issue. When choosing where to invest it is helpful to consider how bad actors find and exploit vulnerabilities. In this post we discuss several axes along which to evaluate the potential harm to users from exploits, and how they apply to the Chrome browser.
Historically the Chrome Security Team has made major investments and driven the web to be safer. We pioneered browser sandboxing, site isolation and the migration to an encrypted web. Today we’re investing in Rust for memory safety, hardening our existing C++ code-base, and improving detection with GWP-ASan and lightweight use-after-free (UAF) detection. Considerations of user-harm and attack utility shape our vulnerability severity guidelines and payouts for bugs reported through our Vulnerability Rewards Program. In the longer term the Chrome Security Team advocates for operating system improvements like less-capable lightweight processes, less-privileged GPU and NPU containers, improved application isolation, and support for hardware-based isolation, memory safety and flow control enforcement.
When contemplating a particular security change it is easy to fall into a trap of security nihilism. It is tempting to reject changes that do not make exploitation impossible but only make it more difficult. However, the scale we are operating at can still make incremental improvements worthwhile. Over time, and over the population that uses Chrome and browsers based on Chromium, these improvements add up and impose real costs on attackers.
Threat Model for Code Execution
Our primary security goal is to make it safe to click on links, so people can feel confident browsing to pages they haven’t visited before. This document focuses on vulnerabilities and exploits that can lead to code execution, but the approach can be applied when mitigating other risks.
Attackers usually have some ultimate goal that can be achieved by executing their code outside of Chrome’s sandboxed or restricted processes. Attackers seek information or capabilities that we do not intend to be available to websites or extensions in the sandboxed renderer process. This might include executing code as the user or with system privileges, reading the memory of other processes, accessing credentials or opening local files. In this post we focus on attackers that start with JavaScript or the ability to send packets to Chrome and end up with something useful. We restrict discussion to memory-safety issues as they are a focus of current hardening efforts.
User Harm ⇔ Attacker Utility
Chrome Security can scalably reduce risks to users by reducing attackers’ freedom of movement. Anything that makes some class of attackers’ ultimate goals more difficult, or (better) impossible, has value. People using Chrome have multiple, diverse adversaries. We should avoid thinking only about a single adversary, or a specific targeted user, the most advanced-persistent attackers or the most sophisticated people using the web. Chrome’s security protects a spectrum of people from a spectrum of attackers and risks. Focusing on a single bug, vector, attacker or user ignores the scale at which both Chrome and its attackers are operating. Reducing risks or increasing costs for even a fraction of threat scenarios helps someone, somewhere, be safer when using the web.
There are still better exploits for attackers and we should recognise and prioritize efforts that meaningfully prevent or fractionally reduce the availability or utility of the best bugs and escalation mechanisms.
Good Bugs and Bad Bugs
All bugs are bad bugs but some bugs are more amenable to exploitation. High value bugs and escalation mechanisms for attackers have some or all of the following attributes:
Reliable
An exploit that sometimes crashes, or that when launched only sometimes allows for exploitation, is less useful than one that can be mechanically triggered in all cases. Crashes might lead to detection by the target or by defenders that collect the crashes. Attackers might not always have more than one chance to launch their attacks. Bugs that only surface when different threads must do things in a certain order require more use of resources or time to trigger. If attackers are willing to risk detection by causing a crash they can retry their attacks as Chrome uses a multi-process architecture for cross-domain iframes. Conversely, bugs that only occur when the main browser process shuts down are more difficult to trigger as attackers get a single attempt per session.
Low-interaction
Chrome exists so that people can visit websites and click on links so we take that as our baseline for minimal interaction. Exploits that only work if a user performs an action, even if that action might be expected, are more risky for an attacker. This is because the code expressing the bug must be resident on a system for longer, the exploit likely has a lower yield as the action won’t always happen, and the bug is less silent as the user might become suspicious if they seem to be performing actions they are not used to performing.
Ubiquitous
A bug that exists on several platforms and can be exploited the same way everywhere will be more useful than one which is only exploitable on one platform or needs to be ported to several platforms. Bugs that manifest on limited hardware types, or in fewer configurations, are only useful if the attacker has targets using them. Every bug an attacker has to integrate into their exploitation flow requires some ongoing maintenance and testing, so the fewer bugs needed the better. For Chrome some bugs only manifest on Linux, while others are present on all of our platforms. Chrome is one of the most ubiquitous software products today, but some of its libraries are even more widely used, so attackers may invest extra effort in finding and exploiting bugs in third party code that Chrome uses. Bugs that require a user to install an extension or rely on particular hardware configurations are less useful than ones reachable from any web page.
Fast
Attacks that require more than a few seconds to set up or execute are less likely to succeed and more likely to be caught. It is more difficult to test and develop a reliable exploit using a slow bug as the compile-test-debug cycle will be stretched.
Scriptable
Bugs that require an exploit to perform grooming or state manipulation to succeed are more valuable if their environment can be scripted. The closer the scripting is to the bug, the easier it is to control the context in which the bug will be triggered. Bugs deep in a codec, or a race in a thread the attacker does not control, are more difficult to script. Scriptable bugs are more easily integrated into an exploitation flow, while bugs that are not scriptable might only be useful if they can be integrated with a related weird machine. Bugs that are adjacent to a scripting engine like JavaScript are easier to trigger - making some bugs in third party libraries more serious in Chrome than they might be in other contexts. Bugs in a tightly coupled API like WebGPU are easy to script. Chrome extensions can manipulate Chrome’s internal state and user-interface (for example, they can open, close and rearrange tabs), making some user-interaction scriptable.
Easy to Test
Attackers need long-term confidence in their exploits, and will want to test them against changing versions of Chrome and the operating system running Chrome. Bugs that can be automatically reproduced in a test environment can be tested easily. Bugs that can only be triggered with user interaction, or after complex network calls, or that require interaction with third-party services are harder to test. They need a complex test environment, or a patched version of Chrome that mimics the environment in a way that triggers the bug. Maintaining this sort of system takes time and resources, making such bugs less attractive. Note that being scriptable relates to the environment of the bug. Scriptable environments lend themselves to easier testing.
Silent
Bugs that cause side effects that can be detected are less useful than those which operate without alerting a user, modifying system state, emitting events, or causing repeatable and detectable network traffic. Side effects include metrics, crashes or slowdowns, pop ups & prompts, system logs and artifacts like downloaded files. Side effects might not alert a specific target of an attack as it happens but might lead to later identification of targeted systems. A bug that several groups know about could be detected without the attacker’s knowledge, even if it seems to succeed.
Long-lived
Attackers will prefer bugs that are not likely to be fixed or found by others. Analyzing and integrating a bug into an exploitation suite likely involves significant up-front work, and attackers will prefer bugs that are likely to last a long time. Many attackers sell exploits as a subscription service, and their economic model might be disrupted if they need to find bugs at a higher rate. Bugs recently introduced into a product, or that might be found with widely known fuzzing techniques, are likely to be found (and possibly fixed) faster.
Targeted
Attackers will try to protect their exploits from discovery and will prefer bugs that can be triggered only when they are confident they will only be exposed to chosen targets. It is relatively easy to fingerprint a web user using cookies, network knowledge and features of the web platform. Removing classes of delivery mechanisms (e.g. no unencrypted HTTP) can make it more difficult to target every exploit.
Easy to escalate
Modern browsers do have several mitigations that make it more difficult to exploit some bugs or bug classes. Attackers usually must take the primitives offered by a bug, then control them to achieve a sub-goal like executing arbitrary system calls. Some bugs won’t chain well to a follow-on stage, or might need significant integration effort or tooling to allow a follow-on stage to proceed. The utility of some bugs is related to how well they couple with later escalation or lateral movement mechanisms. Some bugs by themselves are not useful — but can be combined with other bugs to make them reliable or feasible. Many info leaks fit into this category. A stable read-what-where primitive or a way to probe which memory is allocated makes an arbitrary write easier to execute. If a particular escalation technique crops up often in exploit chains or examples it is worth seeing if it can be remediated.
Easy to find
This may be counter-intuitive but a bug that is easy to find can be useful until Chrome finds and fixes it and potential targets update. Chrome’s source code is publicly available and attackers can look for recent security or stability fixes and exploit them until the fixes are rolled out (N-days). Fuzzing finds the shallow bugs but does not hit those with even simple state requirements that are still amenable to manual discovery. An attacker may choose to specialize in finding bugs in a particular area that does not otherwise receive much security attention. Finally attackers might introduce the bug themselves in a library (a supply-chain attack).
Difficult to find
Some bugs might be easy to find for an attacker because they created the bug, or difficult to find because they are in an under-studied area of the code base, or behind state that is difficult to fuzz. This makes the bug, once found, more valuable as it is likely to be long-lived as other actors will be less likely to find it. Attackers willing to reverse engineer and target closed-source components of Chrome may have access to vulnerabilities that the wider security community are unlikely to discover.
Attacker Goals & Economics
Some attackers have a business model, others have a budget. Coarsely we worry about attackers that want to make money, and attackers that want to spy on people. Bugs and escalation mechanisms are useful to either group if they are well suited to their way of working. We can evaluate mitigations against different attackers’ differing economic models. An unsophisticated actor targeting unsophisticated users might use a widely delivered unreliable attack with a low yield (e.g. encouraging people to run a malicious download). They only need to win a small fraction of the time. Other groups may do limited bug discovery but instead take short-lived, already-fixed bugs and integrate them into exploit kits. Some attackers could be modeled as having an infinite budget but they will still choose the cheapest most reliable mechanism to achieve their goals. The deprecation of Flash and the subsequent move to exploiting v8 perhaps best illustrates this.
When deploying mitigations or removing attack-surface we are ultimately trying to hinder adversaries from achieving their goals. Some attackers might make different decisions if the economics of their operations are changed by reducing the yield of the bugs that enable their activities. Some actors may be willing to devote substantial resources to maintaining a capability to target people using the web - and we can only speculate about their response to changes we introduce. For these sophisticated attackers, removing whole classes of vulnerabilities or escalation mechanisms will be more effective.
Avoid linear thinking
We perceive successful exploits as chains — linear steps that start with a bug, proceed through various escalation stages, and achieve an attacker’s immediate goal of code execution or data access outside the sandboxed renderer process. We even ask for such chains through our Vulnerability Rewards Programme. For example, a JS type confusion allows for an out of bounds read/write in the v8 sandbox, a v8 sandbox escape bug allows read/write in the renderer, overwriting a JIT write/execute region allows for arbitrary code execution, and calls to system or browser APIs lead to a browser sandbox escape. The attacker starts with the ability to serve JavaScript to a Chrome user, and ends up with unconstrained code execution on the user’s device, presumably to later use this to meet their higher-level goals. Even useful models of layered defense tend to focus on limited paths that trigger an incident (like the single arrow often drawn piercing slices of swiss-cheese).
In reality the terrain presented to the universe of attackers is a complex web of latent possibilities, some known to some, and many yet to be discovered. This is more than ‘attackers think in graphs’, as we must acknowledge that a defensive intervention can succeed even if it does not prevent every attacker from reaching every possible person they wish to exploit.
Conclusion
It is tempting to reject a mitigation or removal of attack surface on the basis that attackers can simply find another way to achieve their goals. However this mindset presumes the most sophisticated attackers and their most desired targets. Our frame of analysis should be wider. We must recognize that many attackers have limited capability and expertise. Some may graft N-days onto red team tools. Some may have an expert or an exploit pipeline that performs well on a small subset of the Chrome codebase, but need training or more resources to obtain useful bugs if their current domain is taken away. Some will sell exploit kits that need rewriting if an escalation mechanism is removed. Previously reliable exploits might become less reliable, or take longer. Making life more difficult for attackers helps protect people using Chrome.
Although we argue that we should not “give up” on mitigations for escalation paths, it is still clearly more important to implement mitigations that make it impossible or difficult to trigger wide classes of initial vulnerabilities, or bypass a significant fraction of mitigations. Reported attacks always start with an initial vulnerability so it is tempting to invest all of our effort there, but this neglects beneficial interventions later in the attack mesh. Reductions in attacker utility translate to increases in attacker costs and reduction in aggregate risk.
A mitigation or bug-reduction mechanism that affects any of the axes of utility outlined above has some value to some of the people using Chrome.
Resources
- Project Zero: What is a "good" memory corruption vulnerability?
- An Introduction to Exploit Reliability & What is a "good" Linux Kernel bug? (Isosceles)
- Zero Day Markets with Mark Dowd (Security Cryptography Whatever podcast)
- Escaping the Sandbox (Chrome and Adobe Pdf Reader) on Windows, Zer0Con 2024, Zhiniang Peng, R4nger, Q4n
- Exploring Memory Safety in Critical Open Source Projects (CISA.gov)