RSS Aggregator
Red Hat released an Important krb5 security update for Red Hat Enterprise Linux 8 this week, addressing two vulnerabilities tracked as CVE-2026-40355 and CVE-2026-40356. On paper, it looks like another Linux package advisory.
Linux distros are rolling out patches for a new high-severity kernel privilege escalation vulnerability (known as Fragnesia and tracked as CVE-2026-46300) that allows attackers to run malicious code as root. [...]
Details have emerged about a new variant of the recent Dirty Frag Linux local privilege escalation (LPE) vulnerability that allows local attackers to gain root access, making it the third such bug to be identified in the kernel within a span of two weeks.
Codenamed Fragnesia, the security vulnerability is tracked as CVE-2026-46300 (CVSS score: 7.8) and is rooted in the Linux kernel's XFRM subsystem.
As digital tools become more central to its operations, Southwest Airlines is increasingly turning to AI and automation to prevent endpoint issues from affecting the sprawling airline.
The new tools allow the company’s IT team to take a more strategic, rather than reactive, approach to operations, said Derek Whisenhunt, head of end user computing at Southwest Airlines.
“Bottom line is we now focus our team’s time on proactive and preventative work and increasing the digital employee experience and not waiting for issues to arise before focusing on them,” said Whisenhunt.
Southwest has been steadily digitizing frontline workflows for the past decade, replacing paper-based operational processes with mobile devices and cloud applications for its maintenance, flight operations, and gate services workers — and even cabin crews.
The Dallas-based company has largely digitized operations for its 72,000 staffers — two-thirds of whom are in frontline roles — replacing the printed manuals used by pilots and ground operations teams with mobile devices, for instance.
At the same time, the switch to digital tools has placed even greater demands on IT: the Southwest end user computing team supports around 50,000 employee smartphones and tablets, 20,000 laptops, and 15,000 PCs.
Problems with end user devices can be costly to the business. With short turn-around times for Southwest’s 800 Boeing 737 aircraft, hardware or software failures on employee devices can quickly affect airline customers.
“You’ve seen it, or you’ve experienced this,” said Whisenhunt. “If you go up to a customer service or a gate agent and you can see the line start to extend — or the customers start to get frustrated and the agent’s on the phone with somebody — that’s either a ticket issue or it’s a system issue.
“To me, it’s very personal, because we’re impacting the employees’ experience, we’re impacting our customers’ experience,” he said. “In just that one scenario, we’re drastically impacting our ability to turn aircraft.”
Using remote actions to prevent IT issues
To monitor and manage its fleet of end-user devices, Southwest deployed a digital employee experience (DEX) application from Nexthink several years ago. DEX software is designed to monitor and improve how employees interact with workplace technology, including device performance, application reliability, and IT support interactions.
In recent years, Southwest has become more advanced in its use of DEX software, said Whisenhunt. Within its 14-strong endpoint management team, Southwest now has a “full-blown DEX operations team,” as well as a DEX engineering team of an additional 12 employees that is “forward-looking, deploying new products” and managing automations.
In addition to gathering insights into the performance of devices, Southwest now uses DEX to actively remediate problems. Automation plays a key role here, with Southwest using “remote actions” to automate simple fixes, such as cleaning cache files that had caused Microsoft Teams to crash for users.
The volume of remote actions deployed by the airline has grown significantly in recent years. In 2024, the company conducted 1.1 billion remote actions, equivalent to roughly 13,000 hours saved for employees dealing with IT problems. In 2025, the remote action figure rose to 2.1 billion — with 23,000 hours saved, he said.
“That’s how important a remote action is.… It’s in that preventative world, where we’re addressing an issue before you even know it.”
Automated remote actions have also helped Southwest avoid hardware upgrades, said Whisenhunt.
The airline has around 8,000 back-office PCs, with as many as 20 employees logging in to each one. Because full Microsoft 365 profiles are downloaded when a user logs in, the PC hard drives fill up and cause performance issues. Remote actions were used to delete user profiles for employees who hadn’t logged in for a week or more – averting the planned purchase of 1-terabyte hard drives to deal with the demands, said Whisenhunt.
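That remote action boils down to a disk-hygiene job: find profile folders that have not been touched in a while and remove them. As a loose illustration only, assuming a hypothetical profile directory, idle threshold, and exclusion list (Southwest's actual Nexthink remote action is not public), the core logic might look like this in Python:

```python
import shutil
import time
from pathlib import Path

# Hypothetical values for illustration; the real remote action, its
# threshold, and the profile location are not public.
PROFILES_DIR = Path(r"C:\Users")
MAX_IDLE_DAYS = 7
PROTECTED = {"Public", "Default", "Administrator"}

def stale_profiles(root: Path, max_idle_days: int):
    """Yield profile folders whose last modification is older than the threshold."""
    cutoff = time.time() - max_idle_days * 86400
    for profile in root.iterdir():
        if profile.is_dir() and profile.name not in PROTECTED:
            if profile.stat().st_mtime < cutoff:
                yield profile

def reclaim_disk_space() -> None:
    """Delete every stale profile folder to free up disk space."""
    for profile in stale_profiles(PROFILES_DIR, MAX_IDLE_DAYS):
        print(f"Removing stale profile: {profile}")
        shutil.rmtree(profile, ignore_errors=True)

if __name__ == "__main__":
    reclaim_disk_space()
```

A real Windows profile removal would also need to clean up per-user registry state, which this sketch deliberately omits.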
Remote actions can also be combined into automated workflows using ‘if/and’ statements to perform more complex actions.
Over the last month or so, Southwest has automated approximately 5.8 million remote actions “across a range of endpoint health, security, and lifecycle workflows,” said Whisenhunt, the majority of which center on disk space management, with 13 remote actions executed roughly 3 million times to “proactively reclaim disk space.”
The team was able to address a 20% failure rate for its Microsoft SCCM client — used for software and security updates on employee devices — by chaining together several remote actions to check the health of the client, restart the service, and, if needed, repair or reinstall the client software.
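That chain is essentially an escalating decision tree: probe the client, try the cheapest fix, re-probe, and only escalate if it is still unhealthy. A minimal, vendor-neutral Python sketch of the pattern follows; all helpers are hypothetical placeholders, not Nexthink or SCCM APIs:

```python
# Sketch of the check -> restart -> repair -> reinstall chain described
# above. Every helper is a stand-in; the real remote actions would shell
# out to the management client rather than flip a simulated flag.

_simulated_health = False  # pretend the client starts out broken

def client_is_healthy() -> bool:
    """Placeholder health probe; a real action would query the client itself."""
    return _simulated_health

def restart_client_service() -> bool:
    """Placeholder restart; pretend it fails so the chain escalates."""
    return False

def repair_client() -> bool:
    """Placeholder repair; pretend it succeeds and restores health."""
    global _simulated_health
    _simulated_health = True
    return True

def reinstall_client() -> bool:
    """Placeholder full reinstall, the last resort in the chain."""
    global _simulated_health
    _simulated_health = True
    return True

def remediate() -> str:
    """Run escalating steps until the client reports healthy again."""
    if client_is_healthy():
        return "healthy, nothing to do"
    for step in (restart_client_service, repair_client, reinstall_client):
        if step() and client_is_healthy():
            return f"fixed by {step.__name__}"
    return "escalate to a human"

if __name__ == "__main__":
    print(remediate())  # -> "fixed by repair_client" in this simulation
```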
The DEX platform also integrates with ServiceNow to enable automated ticket generation when users run into technical problems.
“For example, if we see your system had three blue screens of death in 24 hours, a ticket is automatically generated,” he said. That works around employees who would rather put up with the inconvenience than file a trouble ticket.
“A lot of people don’t even call the service desk, they’re like, ‘Whatever – reboot, just deal with it. I don’t have time for this.’”
Using AI to boost productivity and empower workers
In addition to workflow automation, Whisenhunt said AI tools could help boost productivity. Nexthink’s Workspace — an LLM-based conversational assistant — lets staff quickly find information about problems affecting their devices, and can provide guidance around what tasks to prioritize.
That’s helped the end user computing team access relevant data faster, he said, “while allowing our analysts and our engineers to focus on what’s more important.”
The team uses Workspace daily, he said, to monitor device health, application performance, security posture, and lifecycle signals. It’s also used to trigger remote actions to correct issues “often before the employee is aware there’s a problem,” said Whisenhunt.
“This has shifted the team from a ticket‑driven, reactive support model to a proactive operations model where we can detect degradation, validate remediation outcomes, and continuously improve stability at scale,” he said. The result has been a reduction in service desk volumes, “faster time‑to‑resolution when issues do surface, improved endpoint reliability, and meaningful recovery of engineering capacity previously spent on repetitive fixes.”
Southwest plans to roll out Nexthink’s Spark — an AI tool designed to tackle user problems by diagnosing them and suggesting simple fixes before the user needs to contact IT. A pilot rollout is in the works, said Whisenhunt, starting with the IT team.
“By combining real‑time context from the endpoint with IT‑approved automation and guided remediation, Spark allows users to resolve many issues themselves, in the moment, without opening a ticket or waiting for human intervention,” he said.
Beyond the potential productivity boost, Whisenhunt is taking steps to mitigate possible AI downsides. “As with any AI‑driven capability in an enterprise IT environment, we do have healthy concerns around reliability, oversight, and ensuring the right balance between automation and control,” he said.
“We are treating trust as something that must be earned over time through strong governance, clear guardrails, and continuous validation of outcomes rather than assuming it from day one.”
PWNED

Welcome once again to PWNED, the column where we help you prepare for security success by studying others’ embarrassing failures. Today’s terrible tale involves individuals trying to do right by a company executive by letting their guard down, never a smart move. Have a story about someone leaving a gaping hole in their network? Share it with us at [email protected]. Anonymity is available upon request.

Our sad story comes from Brandon Dixon, who currently serves as CTO and co-founder of AI security firm Ent. In a prior life, however, Dixon was a penetration tester for hire, and he saw some things that made all my remaining hairs stand on end just hearing about them.

During one pentesting assignment, Dixon tried to find out how easy it would be to steal someone’s account using social engineering. The answer: barely an inconvenience. Dixon telephoned IT security and pretended that he was the head of security who had lost his password. When they asked him challenge questions, he said he had forgotten the answers to those also. Then he gave them the password he wanted to use over the phone and they did a reset for him. After that, he was able to get into the network and do whatever he wanted there.

There’s so much that’s obviously wrong here that it’s hard to know where to begin with our lesson-taking. The IT support agents should not have taken Dixon’s word that he was the security manager, especially after he failed the challenge questions, and should have denied his request to reset the password. They were probably thinking “this guy is an executive and we don’t want to piss him off” rather than “we have procedures that everyone must follow.”

The other problem here is that the IT department entered Dixon’s suggested password for him over the phone. First of all, the IT department should have sent a password reset to the real employee’s email or phone number. Second of all, it’s piss-poor security for anyone to know a user’s password other than the user themselves. And I say this as someone who used to work for a company where, if you had a problem, the IT support people would ask for your password via chat.

Dixon also shared another story about social engineering from a time when he consulted for a pharmaceutical company. Members of the competition would call sales and marketing reps, pretend they were coworkers, and then extract information about upcoming drugs. This would allow competitors to know what was coming and how to respond to it.

To help solve the problem, Dixon instituted a system where real employees had to give a secret password at the beginning of a conversation. “I built a system called 'Chal-Resp,' short for 'challenge-response,' that generated word pairings so a user could validate they were speaking with an actual employee,” he told The Register. “The caller would need to say the word and the end-user would need to respond with the proper challenge; only employees had access.”

What both of Dixon’s stories have in common is the proof that humans are eager to please and be helpful. But suspicion is the whole root of infosec, so it behooves us all to be a little less helpful to strangers in the workplace. ®
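Dixon's "Chal-Resp" is a shared-secret challenge-response scheme: each challenge word maps to exactly one expected response, and the pairings are distributed only to real employees. A minimal Python sketch of the idea follows; the word lists and the distribution mechanism are hypothetical, since Dixon has not published the implementation:

```python
import secrets

# Hypothetical word lists for illustration; the real pairings and how
# they were handed out to employees are not described in Dixon's account.
CHALLENGES = ["cobalt", "meridian", "lantern", "orchard"]
RESPONSES = ["anvil", "harbor", "willow", "granite"]

def generate_pairings() -> dict[str, str]:
    """Randomly pair each challenge word with a response word."""
    shuffled = RESPONSES[:]
    secrets.SystemRandom().shuffle(shuffled)
    return dict(zip(CHALLENGES, shuffled))

def verify(pairings: dict[str, str], challenge: str, response: str) -> bool:
    """Check the caller's challenge word against the expected response."""
    return pairings.get(challenge) == response

if __name__ == "__main__":
    pairings = generate_pairings()          # distributed only to real employees
    challenge = secrets.choice(CHALLENGES)  # the caller opens with this word
    print(verify(pairings, challenge, pairings[challenge]))  # legitimate employee -> True
    print(verify(pairings, challenge, "mangoes"))            # impostor guessing -> False
```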
The heads of seven key companies warn that Europe is losing competitiveness in chips, artificial intelligence, and defense • Seven companies with a market value of over a trillion euros criticize stifling regulation that blocks innovation before it has a chance to develop • European companies face market fragmentation and complex ...
The UK AI Security Institute (AISI) has found that frontier models are quickly becoming more efficient when asked to do some cybersecurity work. AISI measures this with its "time window benchmark for cybersecurity," which estimates how much work an AI can do compared to a human. The benchmark yields findings such as this: given a budget of 2.5 million tokens, Claude Sonnet 4.5 can, about 80 percent of the time, do what a human cybersecurity expert can do in 16 minutes. AISI has found the human-comparable task time – 16 minutes in this instance – is growing, fast. If tokens flowed freely instead of being arbitrarily capped, AI models might do better still.

In February 2026, AISI internally reduced the expected task-time doubling period from 8 to 4.7 months, based on progress made since late 2024. With the release of Anthropic Mythos Preview and OpenAI GPT-5.5, AISI has once again had to compress its projected doubling period.

"In February 2026, we estimated that frontier models' 80 percent-reliability cyber time horizon had doubled every 4.7 months since reasoning models emerged in late 2024, given a 2.5M token limit," the AISI said in a post on Wednesday. "This was around half our November 2025 doubling time estimate, which was 8 months for both 50 percent and 80 percent reliability. Claude Mythos Preview and GPT-5.5 have since significantly outperformed this trend."

The recalculated doubling time estimate, given what Mythos Preview and GPT-5.5 can do, is even shorter than 4.7 months. AISI does not cite a specific value, but the organization points to similar time horizon estimates based on measurements of a broader skillset, software engineering, made by non-profit AI research house METR. "Their results imply a consistent doubling time of 4.2 months on software tasks since late 2024," AISI said, noting that with the latest Mythos Preview checkpoint (model update), it's closer to 4 months.

Note that the time window benchmark is not a broad assessment of capabilities – AISI is not saying frontier models are becoming twice as capable by all measures. It's a narrow assessment based on the time it takes people to accomplish security tasks.

Citing a different metric, AISI says the latest Mythos Preview checkpoint solved a 32-step simulated corporate network attack called "The Last Ones" in six of 10 attempts and managed to complete a previously unsolved challenge, a seven-step industrial control system attack called "Cooling Tower," in three of 10 attempts. As a point of comparison, when Opus 4.6 was evaluated in February 2026, it completed a maximum of 22 of 32 steps for The Last Ones. That model managed to reach milestone 6, which involves reverse-engineering a Windows service binary to access encrypted credentials, escalating privileges via token impersonation, and recovering a cryptographic key to access a command-and-control management service.

"Frontier AI's autonomous cyber and software capability is advancing quickly: the length of cyber tasks that frontier models can complete autonomously has doubled on the order of months, not years," AISI concludes. "What this evidence does not tell us is how the pace of progress will evolve, when AI will reach any particular capability threshold, or how these capabilities will translate against defended, real-world systems."

The curl project offers one data point with regard to the real-world implications of the latest frontier models: Mythos managed to find just one confirmed vulnerability in its codebase. But watch this space. ®
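The doubling-time figures are simple exponential extrapolation, and a few lines of arithmetic show how sensitive the projections are to the assumed doubling period. This is a rough illustration only, not AISI's methodology or code; the 12-month horizon and 8-hour target below are arbitrary choices, and only the 16-minute starting point comes from the article:

```python
from math import log2

def horizon(start_minutes: float, months_elapsed: float, doubling_months: float) -> float:
    """Task-time horizon after exponential growth with a fixed doubling period."""
    return start_minutes * 2 ** (months_elapsed / doubling_months)

def months_to_reach(start_minutes: float, target_minutes: float, doubling_months: float) -> float:
    """How many months until the horizon reaches a target task length."""
    return doubling_months * log2(target_minutes / start_minutes)

if __name__ == "__main__":
    # Start from the article's 16-minute, 80-percent-reliability example.
    for doubling in (8.0, 4.7, 4.0):
        print(f"doubling every {doubling} months -> "
              f"{horizon(16, 12, doubling):.0f} min horizon after a year, "
              f"{months_to_reach(16, 8 * 60, doubling):.1f} months to an 8-hour task")
```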
Cybersecurity researchers have disclosed multiple security vulnerabilities impacting NGINX Plus and NGINX Open Source, including a critical flaw that remained undetected for 18 years.
The vulnerability, discovered by depthfirst, is a heap buffer overflow issue impacting ngx_http_rewrite_module (CVE-2026-42945, CVSS v4 score: 9.2) that could allow an attacker to achieve remote code execution or cause a denial-of-service (DoS) condition.
A giant data center secretly drew off nearly 114 million liters of water • The company later paid the outstanding amount but escaped any fine • The city of Fayetteville subsequently banned the construction of data centers as a result
You have one last chance to avoid the new flat-rate customs duty on small parcels when buying goods from China. Beyond that, the EU is preparing additional fees that may come into force later…
Cisco will make around five percent of staff redundant and has generously offered them free Cisco training for a year once they’re gone.

CEO Chuck Robbins broke the news in a Wednesday blog post titled “Our Path Forward” that opens “Today we announced our Q3 FY26 earnings with record revenue of $15.8 billion, up 12 percent year over year, and double-digit top and bottom-line growth. The ELT [executive leadership team] and I could not be prouder of the growth you have all delivered for Cisco.” That growth included net income growing 35 percent to $3.4 billion.

Yet Robbins’ pride was not sufficient for all Cisco staff to keep their jobs. The CEO said the layoffs are necessary because “The companies that will win in the AI era will be those with focus, urgency, and the discipline to continuously shift investment toward the areas where demand and long-term value creation are strongest.” For Cisco that means “reducing roles in some areas” and also “making clear, strategic investments – particularly in silicon, optics, security, and in our employees’ use of AI across the company.”

On Thursday, US time, close to 4,000 unlucky Cisco staff will be shown the door. Robbins said Cisco will help its soon-to-be-former workers find their next gig, and that the company’s efforts to do so have a 75 percent success rate. “We are also committed to continued personalized learning and will provide one year of access to all Cisco U courses and certifications, covering AI, Security, Networking, and more,” he added.

Cisco made two big rounds of layoffs in 2024, one of which ejected seven percent of staff and the other resulted in Cisco firing five percent of employees.

The restructures appear not to have slowed the company down: Robbins said product orders in Q3 rose 35 percent year over year – a figure that encapsulates a 105 percent year-over-year surge in revenue from hyperscalers and more modest 18 percent growth from other buyers. Robbins said Cisco has already scored $5.3 billion of AI infrastructure sales this year, and forecast full-year sales of $9 billion – 4.5 times its haul from last year. More prosaic products, like Wi-Fi kit, also grew fast as sales rose 40 percent.

The company hopes to keep that cash flowing by building wireless kit that uses less memory. “You’ll see products that’ll become orderable in Q4 that’ll actually require 50 percent less memory,” Robbins said, with the design work to make that possible an example of the “20-plus programs that we’ve put into place that are active to reduce the memory utilization across the portfolio.” Cisco’s doing that despite the rising price of memory and storage not putting a dent in its margins, an outcome that execs attributed to supply chain management efforts.

Glasswing to lift security sales

Later in the earnings call, Robbins revealed that Cisco is participating in Anthropic’s Project Glasswing and using the Mythos model to test its code. The CEO said another impact of Anthropic’s bug-finding AI will be to accelerate plans to replace security appliances once other vendors’ use of Mythos finds flaws that are hard to fix. “I actually think while there will be a security opportunity, there’s going to most likely be a lot of focus from our customers on modernizing their infrastructure so that they don’t have this risk from technology that just can’t be patched,” Robbins said.
Robbins said Cisco may have won an order or two from customers who were already close to replacing old security kit “and Mythos pushed them over the edge.” But he said Cisco didn’t receive “any meaningful orders in Q3 as a result of Mythos, but that could change in the future as we continue to work with customers.” ®
Thanks to the Dusklight project (originally Dusk) and reverse engineering, Nintendo's The Legend of Zelda: Twilight Princess can now also be played on PCs and mobile devices. A copy of the original game is required (textures, models, music, sound effects, …). A demo is available on YouTube. The project was started in August 2020.
WordPress Plugin Supsystic Contact Form 1.7.36 - SSTI
Apache HertzBeat 1.8.0 - Remote Code Execution
ePati Antikor NGFW 2.0.1301 - Authentication Bypass
PJPROJECT 2.16 - Heap Buffer Overflow
The vulnpocalypse has begun. Palo Alto Networks usually finds five vulnerabilities a month, but on Wednesday said it scanned its entire codebase using the latest frontier models, including Anthropic’s Mythos, and found 75 security holes, covered in 26 CVEs.

This comes a day after Microsoft said it used its new agentic bug hunting system called MDASH to find 17 vulnerabilities across its products - on a record-setting Patch Tuesday that saw Redmond disclose a whopping 30 critical CVEs. Plus, last week Mozilla said it fixed 423 Firefox bugs in April, which is more than five times higher than the 76 fixes issued in March and almost 20 times higher than its 21.5 monthly average last year. The browser maker previously said Mythos found 271 flaws in Firefox 150.

It shouldn’t be all that shocking. Security vendors have long warned about attackers using AI, and how this means defenders need to operate at AI speed to protect their own networks and systems (aka buying their AI-infused products). Now that models have become really good at finding bugs in code, security shops are using AI to scan their own software, hopefully to uncover and fix flaws before the baddies do. And this trickles down to two things: more patches, and more work for admins.

Zero Day Initiative’s chief vuln finder Dustin Childs agrees with this assessment. “At first, yes, this means more patches and thus more work for admins,” he told The Register. “The goal over time would be to eliminate as many as possible, and, over time, that monthly number goes down.”

What will make this whole AI bug hunting season “really painful,” he continued, is if the patches don’t work or - worse yet - break things. “Many customers don’t trust patches as it is, so if AI-related patches break things, they are less likely to apply as time goes on,” Childs added. “This will be true even if AI only finds the bugs and doesn’t make the patches.”

Bug hunting on steroids

This isn’t to say security companies should avoid AI to find and fix flaws. “All vendors should use what tools they have to find and remediate bugs before they are exploited in the wild,” Childs said. “Ideally, they would find the bugs before they even ship, but I’m not holding my breath for that to happen.”

Both Microsoft and Palo Alto Networks (PAN) are part of Anthropic’s Project Glasswing, which means they are among the select group of entities allowed to test Mythos, the much-hyped LLM, to find security holes in their own products. Palo Alto Networks began testing Mythos on April 7, and has since continued using the LLM and other frontier models, including Claude Opus 4.7 and OpenAI’s GPT-5.5-Cyber, according to Chief Product and Technology Officer Lee Klarich.

“Today, we released our May ‘Patch Wednesday’ security advisories,” Klarich said in a Wednesday blog, adding that “this is the first time where the majority of findings were the result of frontier AI models scanning our code.” The LLMs scanned over 130 Palo Alto Networks products and platforms and, as noted above, found 75 issues, covered in 26 CVEs. None of these bugs are under exploitation, and as of Wednesday the company has fixed all bugs in its SaaS-delivered products and coded patches for all customer-operated products.
Maybe 5 months before 'AI-driven exploits the new norm'

“We intend to fix every vulnerability we find before advanced AI capabilities become widely available to adversaries,” Klarich said in his blog, adding that his company expects “a narrow three-to-five-month window for organizations to outpace the adversary before AI-driven exploits start to become the new norm.”

A day earlier, Microsoft said its new multi-model agentic scanning harness (codename MDASH) helped researchers find 16 new vulnerabilities across the Windows networking and authentication stack, as disclosed in May’s Patch Tuesday event. This included four critical remote code execution flaws in components such as the Windows kernel TCP/IP stack and the IKEv2 service.

“Unlike single-model approaches, the harness orchestrates more than 100 specialized AI agents across an ensemble of frontier and distilled models to discover, debate, and prove exploitable bugs end-to-end,” Microsoft VP of agentic security Taesoo Kim said in a Tuesday blog.

Tom Gallagher, VP of engineering at Microsoft Security Response Center, admitted that “this month's release sits on the larger side of a hotpatch month.” Gallagher said he expects AI-assisted bug hunting to increase Patch Tuesday releases as both Microsoft and third-party researchers use these tools to boost vulnerability discovery. And yes, all of this ultimately means more patches and more work.

More patches = more work

“Finding bugs has always been the cheap end of the pipeline,” Luta Security CEO Katie Moussouris told The Register. “Triage, disclosure, building patches that do not break production, and getting customers to deploy them is the expensive end, and nobody has funded it for this volume.”

Moussouris helped convince Redmond's top brass that Microsoft needed a bug bounty program in 2013, and three years later started her own bug bounty consultancy. She noted Palo Alto Networks’ staggering jump in CVEs this month. “Multiply that across every vendor and the bottleneck becomes admins and vulnerability management teams,” Moussouris said.

And she also stressed that people should be using these new models to find vulnerabilities. “It is exactly what defenders should be doing,” Moussouris said. “Both PAN and Microsoft landed on the same answer: no single model catches everything. PAN ran Claude Mythos, Claude Opus 4.7, and GPT-5.5-Cyber because each finds bugs the others miss,” she added. “Microsoft orchestrates over 100 specialized agents across multiple models. Add threat intel and codebase context, and Microsoft rediscovered 96 percent of five years of confirmed bugs in a critical Windows component. The asymmetry is temporary, PAN puts adversary parity at three to five months, so any vendor not scanning their own code now is letting someone else find their bugs first.” ®
A new major version, 29.0, of the Erlang programming language (Wikipedia) and the related OTP platform (Open Telecom Platform, Wikipedia) has been released. A detailed overview of the changes is on GitHub.
Most users put up with AWS the way you put up with the DMV. I say this with love, but it's hard to disagree that the UI is awful. The console is a UX time capsule if time capsules weren't allowed to ever look like other time capsules. The pricing pages were designed by someone who hates you personally, and you accept all of it because the one thing AWS has historically gotten right is the boring, important stuff. The security model. The IAM language no one likes, but everyone trusts. The boundary between your account and someone else's. Get that wrong, and the whole bargain collapses.

So when Fog Security disclosed an authorization bypass in Amazon Quick on May 12 (that's the BI service formerly known as QuickSight, briefly known as Quick Suite, and now apparently just Quick, but check back next week) and AWS responded with a statement claiming "no customer data was at risk," it's fair to ask which definition of customer data they're using. Because it isn't an obvious one, and it certainly isn't mine.

What Fog found

Fog reports that when an Amazon Quick administrator (which is an absolutely devastating personal insult) uses "custom permissions" to explicitly deny access to AI Chat Agents, the UI correctly hides the feature. Great! Awesome! I sure wish to hell I could do that with S3 buckets to which I do not have access! Notably, there's no other way for an admin to do this - it's custom permissions or naught.

The API, however, was perfectly willing to keep answering chat requests for any user in the account who knew how to send them. Fog's proof-of-concept was a non-admin asking the agent "Tell me about mangoes" from a session that was, on paper, locked out of the agent entirely. The agent told them about mangoes.

AWS deployed the fix between March 11 and March 12, eight days after Fog reported it via HackerOne. So far, so coordinated. Seriously, for a company of this scale, that's underpants-outside-the-pants superhero speed. Good for you; gold star.

What came next

Where this gets uncomfortable is the response. AWS classified the severity as "none." It issued no customer notification. It published no advisory. After Fog disclosed the HackerOne report and published a blog post, AWS provided a statement to Fog Security reading, in full: "We appreciate Fog Security's coordinated disclosure. This issue was addressed in March 2026. No customer data was at risk and there is no customer action required. As always, customers can contact AWS Support with any questions or concerns about the security of their account."

Take that sentence apart and see how much work "no customer data was at risk" is doing. Amazon Quick is described on its own product page as an AI assistant that "connects Slack, Microsoft Teams and Outlook, CRMs, databases, and documents in one place" and "grounds every answer in your real business data." The default chat agent, which is automatically and annoyingly provisioned the instant Quick is enabled whether the customer wants those AI features or not, is the front end for that data. It is the whole point of the front end for that data.

Now consider the actual scenario AWS just patched. An administrator at, say, a regulated bank (an unregulated bank is called "a criminal enterprise that hasn't been caught yet") configures custom permissions denying chat agent access to a large group of users. Maybe those users are contractors. Maybe they're in a business unit that isn't cleared for AI tools.
Maybe the bank's compliance posture flat-out prohibits shadow AI usage on top of internal data. Until two months ago, every one of those users could send an HTTP request directly to the agent endpoint and get a response. Fog asked about mangoes because they're a security firm doing a clean disclosure, not a malicious insider. A malicious insider would not have asked about mangoes.

The question to AWS, with no rhetoric attached: In what sense was customer data not at risk? Either the chat agent doesn't actually have access to the data the product page says it does (in which case the marketing department has some serious splainin' to do) or unauthorized users could query an agent wired into customer data, in which case "customer data was at risk" is the correct English-language description of the situation.

AWS clarifies, and says the quiet part out loud

After this story started circulating, AWS offered a follow-up comment that I sincerely appreciate, because it's so much more honest than the first one. Per a hounded-looking AWS spokesperson: "The researcher was using the Admin Control capability that no customers were actively using when the server side validation was not present."

Reading that twice doesn't help. Let me translate. AWS is saying: Yes, the server-side authorization check was missing. Yes, an authenticated user in your Quick account could bypass the only access control mechanism the service offers. The reason this is fine, apparently, is that no real customer had bothered to configure that access control during the window when it didn't work.

Um ... what?

The defense isn't "the bug wasn't real," which you could be forgiven for hearing in AWS's first statement. The defense also isn't "the bug couldn't have done what Fog says it could have done," which is the even stronger implication of their first statement. The defense is "the access control didn't enforce what we said it did, but luckily nobody was relying on it." This is the corporate-comms equivalent of "the lock on the front door didn't work, but nobody had locked it anyway, so why are you upset?"

It's also a surprisingly specific telemetry claim. AWS is asserting that they know zero customers had configured custom permissions to deny chat agent access during the exposure window. That's a confident thing to say, and an even more interesting thing to volunteer as a defense, because it doubles as a withering review of Quick's access management model: the only knob the service provides for this purpose, the one AWS's own documentation explicitly tells administrators to use, has zero recorded uptake.

The same follow-up also pointed back to the HackerOne thread to demonstrate that AWS told Fog throughout the disclosure window that "user-based authorization remained enforced." Translation: you needed authenticated credentials in the same Quick account to exploit this. Yes. That's intra-account scope, which Fog documented in their writeup, and which is precisely the scope in which custom permissions are supposed to function as a security boundary. AWS saying "user-based authorization was fine" is saying "you couldn't exploit this anonymously from the internet," which was never the threat model in question. The threat model is the contractor with valid SSO credentials whose admin tried to lock them out of some datasets.

Why this matters more than it sounds

Amazon Quick's access model is already an outlier: IAM policies don't govern Quick's AI Chat Agent, SCPs don't apply, and RCPs don't apply.
Custom permissions are the only knob the service provides. If those don't enforce, nothing else does. And per AWS's own follow-up, literally nobody was using them anyway. Both halves of that sentence should be alarming, and AWS is offering them as reassurance.

AWS's competitive moat for the last decade hasn't been pricing. It sure as poop hasn't been developer experience, documentation, console design, or the inscrutable poetry of service names. It's been the well-earned belief that AWS gets the foundational things right: boundaries, identity, durability, reliability, and the parts customers can't easily verify themselves. Customers have paid the AWS premium because they trusted the boring stuff.

This year that trust is being tested in a way it hasn't been before. The 2025–2026 cadence of AWS security advisories has noticeably increased, for reasons that are as yet unclear. Coordinated disclosures from independent researchers keep surfacing missing authorization checks in newer, AI-adjacent services. The fixes are landing fast, which is good. The customer communication isn't landing at all, which is, charitably, a choice.

A "severity: none" rating on a bypass of the only access control a service offers is not an objective security finding so much as it is a communication decision. And the communication decision now reads, with the benefit of AWS's follow-up: "We'll fix the bug, we won't tell you it existed, and if you ask we'll explain that you weren't using the feature anyway."

AWS gets a lot of forgiveness on the small stuff because they own the big stuff. They might want to reconsider how much of the big stuff they keep classifying as "none." ®
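The underlying bug pattern, a permission enforced in the UI but never re-checked on the API path, is generic rather than Quick-specific. Here is a minimal sketch of what server-side enforcement of such a deny rule looks like; the handler, user IDs, and permission names are hypothetical, and this is not AWS's code:

```python
# Sketch of the generic fix: the UI may hide a denied feature, but the API
# handler must re-check the same permission on every request. All names
# below are hypothetical illustrations, not AWS's implementation.

DENIED_FEATURES = {
    "contractor-1234": {"ai_chat_agent"},   # the admin's custom permissions
}

class PermissionDenied(Exception):
    pass

def is_allowed(user_id: str, feature: str) -> bool:
    """Server-side check: deny if custom permissions exclude the feature."""
    return feature not in DENIED_FEATURES.get(user_id, set())

def handle_chat_request(user_id: str, prompt: str) -> str:
    """API handler: hiding the button in the UI is not enough, enforce here too."""
    if not is_allowed(user_id, "ai_chat_agent"):
        raise PermissionDenied("chat agent access denied for this user")
    return f"(agent answer to: {prompt!r})"

if __name__ == "__main__":
    print(handle_chat_request("analyst-5678", "Tell me about mangoes"))   # allowed
    try:
        handle_chat_request("contractor-1234", "Tell me about mangoes")   # denied
    except PermissionDenied as err:
        print("blocked:", err)
```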
Nearly halfway into 2026, enterprises are beginning to see tangible returns on their AI investments. Yet many are discovering that scaling requires something far less glamorous than flashy frontier models and state-of-the-art benchmarking: Clean, interoperable, governed data.
According to a new AI Momentum Survey from Dun & Bradstreet, 97% of organizations report active AI initiatives, but just 5% say their data is ready to support them.
This reflects the messy reality of AI as enterprises struggle to move beyond experimentation to operationalization.
“You do not need enterprise-wide AI-ready data to launch pilots or isolated AI use cases,” said Cayetano Gea-Carrasco, Dun & Bradstreet’s chief strategy officer. “But you do need it to scale AI reliably across mission-critical workflows and systems.”
Early gains seen
Organizations are all-in on AI in 2026 and view it as a mission-critical imperative, according to the D&B report. Well over half (67%) are seeing “early signs or pockets” of ROI, and 24% report “broad or strong” returns.
Further, more than half (56%) of the 10,000 businesses polled by the data and analytics firm say they are planning to increase AI investment in the next 12 months. Around one-third (30%) are scaling AI into production and 26% are operationalizing the technology across multiple core processes.
As adoption rapidly increases, early returns are more common now than even just a year ago, D&B noted, but they still remain uneven. Dovetailing with this, concerns around data readiness are “even more profound” than in 2025.
This is for a variety of reasons, including problems with access to data (reported by 50% of those polled by D&B), privacy and compliance risks (44%), and data quality and integrity concerns (40%). Further, 38% report lack of integration across systems, while 37% say there is a shortage of qualified AI professionals.
Concerningly, however, just a small number of enterprises (10%) say with high confidence that they are able to identify and mitigate AI-related risks.
“The key question is no longer whether organizations are experimenting with AI,” said Gea-Carrasco. “It’s whether they have the data and infrastructure required to deploy AI reliably at enterprise scale.”
He noted that it’s relatively easy for enterprises to launch copilots, chat interfaces, or departmental AI tools using general-purpose models and get “impressive results in a controlled environment.” But far fewer are able to deploy AI into production workflows, where accuracy, accountability, explainability, interoperability, and consistency directly impact business decisions. This includes areas like onboarding, compliance, risk management, and customer operations. “That’s where data readiness becomes critical,” said Gea-Carrasco.
The data hurdle
The challenges around data are only compounded as enterprises move from copilots to more autonomous agentic workflows. “Most enterprise data environments were built for human workflows, not autonomous AI systems operating continuously across the business,” he pointed out.
While AI systems can produce outputs that sound coherent, they can be difficult to trust operationally, due to hallucinations, conflicting recommendations across systems, and compliance issues, Gea-Carrasco noted. This is problematic for all enterprises, but particularly for those in regulated industries like banking, insurance, healthcare, and financial services, where trustworthy and auditable outputs are “non-negotiable.”
Organizations seeing the most progress are those working to ensure that their data is high-quality, reliable, and governed. They are investing in consistent identity resolution and data interoperability and maintenance, so that AI can “reliably consume” and act on information, he explained.
Where enterprises are seeing ROI
Enterprises are beginning to see ROI in areas where underlying data environments are more mature, thus making it easier for AI to be directly embedded into real workflows, according to Gea-Carrasco. This includes areas like sales intelligence, onboarding, compliance workflows, customer research, risk analysis, workflow automation, prospecting, screening, supplier evaluation, and business verification.
ROI is typically reflected in reduced manual research, faster onboarding and review cycles, improved operational consistency, accelerated sales workflows, and better decision support for employees, he said. “In many cases, organizations are using AI to help teams process and synthesize large amounts of information significantly faster than before.”
He emphasized that AI is most successful when it augments existing operational processes rather than fully replacing human decision-making. “Organizations are finding success where AI helps employees work faster and make better decisions, and reduces repetitive manual work while humans remain involved in oversight and final approvals,” he said.
Enterprise approach to agentic AI
Agentic AI is beginning to enter production environments, although it is “still relatively early and targeted,” Gea-Carrasco pointed out.
Most enterprises today are deploying agents that are narrowly scoped rather than fully autonomous, he said. The near-term pattern is supervised autonomy, where agents execute portions of workflows while humans remain involved in approvals, oversight, and exception handling. Thus, agents are entering what he referred to as “clearly defined workflows,” such as research, onboarding support, and workflow orchestration.
Over the next several years, AI will move from standalone copilots to more connected agentic systems embedded directly into enterprise workflows, he noted. They will increasingly coordinate work across customers, suppliers, partners, employees, and enterprise apps. Agents will likely become ever more prominent in workflows around sales operations, onboarding, compliance, procurement, customer research, risk management, supplier evaluation, and monitoring.
“Enterprise AI is becoming less about isolated productivity tools,” said Gea-Carrasco, “and more about building intelligent operational systems that can support decision-making and workflow execution at scale.”
This article originally appeared on CIO.com.