Microsoft confirms April Windows updates cause backup failures
“Legitimate” phishing: how attackers weaponize Amazon SES to bypass email security
The primary goal for attackers in a phishing campaign is to bypass email security and trick the potential victim into revealing their data. To achieve this, scammers employ a wide range of tactics, from redirect links to QR codes. Additionally, they heavily rely on legitimate sources for malicious email campaigns. Specifically, we’ve recently observed an uptick in phishing attacks leveraging Amazon SES.
The dangers of Amazon SES abuse
Amazon Simple Email Service (Amazon SES) is a cloud-based email platform designed for highly reliable transactional and marketing message delivery. It integrates seamlessly with other products in Amazon’s cloud ecosystem, AWS.
At first glance, it might seem like just another delivery channel for email phishing, but that isn’t the case. The insidious nature of Amazon SES attacks lies in the fact that attackers aren’t using suspicious or dangerous domains; instead, they are leveraging infrastructure that both users and security systems have grown to trust. These emails utilize SPF, DKIM, and DMARC authentication protocols, passing all standard provider checks, and almost always contain .amazonses.com in the Message-ID headers. Consequently, from a technical standpoint, every email sent via Amazon SES – even a phishing one – looks completely legitimate.
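To make this concrete, here is a minimal Python sketch, with invented headers and addresses, showing how a message can carry the .amazonses.com fingerprint while passing every authentication check; the SES marker is a signal to weigh, not a verdict:

```python
# A minimal sketch, not a production filter: parse a message and surface
# the Amazon SES fingerprint alongside the authentication results.
# The headers and addresses below are invented for illustration.
from email import message_from_string

RAW_EMAIL = """\
Message-ID: <0100018f-example@email.amazonses.com>
Authentication-Results: mx.example.com; spf=pass; dkim=pass; dmarc=pass
From: billing@example-vendor.com
Subject: Please sign the attached document

Click the link to review your document.
"""

def inspect_headers(raw: str) -> None:
    msg = message_from_string(raw)
    message_id = msg.get("Message-ID", "")
    auth = msg.get("Authentication-Results", "")

    sent_via_ses = "amazonses.com" in message_id.lower()
    auth_passed = all(f"{p}=pass" in auth for p in ("spf", "dkim", "dmarc"))

    # Both flags can be true for a phishing email sent through a hijacked
    # SES account, so neither is a safety verdict on its own.
    print(f"Sent via Amazon SES: {sent_via_ses}")
    print(f"SPF/DKIM/DMARC all pass: {auth_passed}")

inspect_headers(RAW_EMAIL)
```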
Phishing URLs can be masked with redirects: a user sees a link like amazonaws.com in the email and clicks it with confidence, only to be sent to a phishing site rather than a legitimate one. Amazon SES also allows for custom HTML templates, which attackers use to craft more convincing emails. Because this is legitimate infrastructure, the sender’s IP address won’t end up on reputation-based blocklists. Blocking it would restrict all incoming mail sent through Amazon SES. For major services, that kind of measure is ineffective, as it would significantly disrupt user workflows due to a massive number of false positives.
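As an illustration of how to see where a masked link really leads, here is a short sketch using the third-party Python requests library; the URL is a placeholder, and checks like this belong in an isolated analysis environment, never on a production workstation:

```python
# A minimal sketch, assuming the `requests` library: follow a link's
# redirect chain to see where it actually lands.
import requests

def trace_redirects(url: str) -> None:
    resp = requests.get(url, allow_redirects=True, timeout=10)
    for hop in resp.history:  # each intermediate 3xx response
        print(f"{hop.status_code} -> {hop.headers.get('Location')}")
    print(f"Final destination: {resp.url}")

# Placeholder URL for illustration only.
trace_redirects("https://example.amazonaws.com/redirector")
```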
How compromise happens
In most cases, attackers gain access to Amazon SES through leaked IAM (AWS Identity and Access Management) access keys. Developers frequently leave these keys exposed in public GitHub repositories, ENV files, Docker images, configuration backups, or even in publicly accessible S3 buckets. To hunt for these IAM keys, phishers use various tools, such as automated bots based on the open-source utility TruffleHog, which is designed for detecting leaked secrets. After verifying the key’s permissions and email sending limits, attackers are equipped to spread a massive volume of phishing messages.
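For a rough sense of what such scanners automate, the sketch below walks a file tree and flags strings shaped like AWS access key IDs (the well-known AKIA/ASIA prefix pattern); real tools like TruffleHog go much further, verifying candidates against live APIs, scanning git history, and covering many other secret formats:

```python
# A simplified sketch of what secret scanners automate: grep a file tree
# for strings shaped like AWS access key IDs.
import re
from pathlib import Path

AWS_KEY_ID = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def scan_tree(root: str) -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in AWS_KEY_ID.finditer(text):
            print(f"{path}: possible AWS access key ID {match.group(0)}")

scan_tree(".")
```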
Examples of phishing with Amazon SES
In early 2026, one of the most common themes in phishing emails sent with Amazon SES was fake notifications from electronic signature services.
Phishing email imitating a Docusign notification
The email’s technical headers confirm that it was sent with Amazon SES. At first glance, it all looks legitimate enough.
Phishing email headers
In these emails, the victim is typically asked to click a link to review and sign a specific document.
Phishing email with a “document”
Upon clicking the link, the user is directed to a sign-in form hosted on amazonaws.com. This can easily mislead the victim, convincing them that what they’re doing is safe.
Phishing sign-in form
The resulting form is, of course, a phishing page, and any data entered into it goes directly to the attackers.
Amazon SES and BEC
However, Amazon SES is used for more than just standard phishing; it’s also a vehicle for a very sophisticated type of business email compromise (BEC) campaign. In one case we investigated, a fraudulent email appeared to contain a series of messages exchanged between an employee of the target organization and a service provider about an outstanding invoice. The email was sent as if from that employee to the company’s finance department, requesting urgent payment.
BEC email featuring a fake conversation between an employee and a vendor
The PDF attachments didn’t contain any malicious phishing URLs or QR codes, only payment details and supporting documentation.
Forged financial documents
Naturally, the email didn’t originate with the employee, but with an attacker impersonating them. The entire thread quoted within the email was fabricated, with the messages formatted so that, at a cursory glance, they appeared to be a legitimate forwarded thread. This type of attack aims to lower the user’s guard and trick them into transferring funds to the scammers’ account.
Takeaways
Phishing via Amazon SES is shifting from isolated incidents into a steady trend. By weaponizing this service, attackers avoid the effort of building dubious domains and mail infrastructure from scratch. Instead, they hijack existing access keys to gain the ability to blast out thousands of phishing emails. These messages pass email authentication, originate from IP addresses that are unlikely to be blocklisted, and contain links to phishing forms that look entirely legitimate.
Since these Amazon SES phishing attacks stem from compromised or leaked AWS credentials, prioritizing the security of these accounts is critical. To mitigate these risks, we recommend following these guidelines:
- Implement the principle of least privilege when configuring IAM access keys, granting elevated permissions only to users who require them for specific tasks (see the policy sketch after this list).
- Transition from long-term IAM access keys to IAM roles, which grant temporary, scoped permissions that trusted users and services can assume as needed.
- Enable multi-factor authentication, an ever-relevant step.
- Configure IP-based access restrictions.
- Set up automated key rotation and run regular security audits.
- Use the AWS Key Management Service to encrypt data with unique cryptographic keys and manage them from a centralized location.
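As a sketch of the least-privilege and IP-restriction items above, the following boto3 snippet creates a policy that permits only ses:SendEmail, and only from a known address range; the account ID, SES identity, and CIDR are placeholders, and running it requires valid AWS credentials:

```python
# A minimal sketch of a least-privilege, IP-restricted SES policy.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # Allow sending only, and only for one verified identity.
            "Action": "ses:SendEmail",
            "Resource": "arn:aws:ses:us-east-1:123456789012:identity/mail.example.com",
            # Reject requests that don't come from the office range.
            "Condition": {"IpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ses-send-only-from-office",
    PolicyDocument=json.dumps(policy_document),
)
```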
We recommend that users remain vigilant when handling email. Do not determine whether an email is safe based solely on the From field. If you receive unexpected documents via email, a prudent precaution is to verify the request with the sender through a different communication channel. Always carefully inspect where links in the body of an email actually lead. Additionally, robust email security solutions can provide an essential layer of protection for both corporate and personal correspondence.
Relying on LLMs is nearly impossible when AI vendors keep changing things
Over the years, enterprise IT execs have gotten frighteningly comfortable having little control or visibility over mission-critical apps, from SaaS to cloud and even cybersecurity. But generative AI (genAI) and agentic systems are taking that problem to a new extreme, with vendors able to dumb down a system IT is paying billions for without so much as a postcard.
It’s not necessarily that AI changes are made to boost profits or revenue. Even if we accept the vendor argument that such changes are in the customer’s interest, companies still need their systems to do on Thursday what they did on Tuesday, let alone what they did when the purchase order was signed.
Alas, that is no longer the case.
Consider a recent report from Anthropic that detailed a lengthy list of changes the company made to some of its AI offerings — including one that explicitly dumbed down answers — without asking or telling customers beforehand.
The report describes various changes the Anthropic team made on its own, reconsidering them only after users noticed and complained about the drop in quality.
“On March 4, we changed Claude Code’s default reasoning effort from high to medium to reduce the very long latency — enough to make the UI appear frozen — some users were seeing in high mode. This was the wrong tradeoff. We reverted this change on April 7 after users told us they’d prefer to default to higher intelligence and opt into lower effort for simple tasks,” the April 23 Anthropic report said. “On March 26, we shipped a change to clear Claude’s older thinking from sessions that had been idle for over an hour, to reduce latency when users resumed those sessions. A bug caused this to keep happening every turn for the rest of the session instead of just once, which made Claude seem forgetful and repetitive. We fixed it on April 10.”
Our bad — we’ll change it back
The fastest “Oops! Our bad. We’ll change it back” moment came last month. “On April 16, we added a system prompt instruction to reduce verbosity. In combination with other prompt changes, it hurt coding quality and was reverted on April 20,” Anthropic said.
Beyond forcing changes on customers — not necessarily for customers — the AI vendor said the interdependence among complex genAI systems makes it harder to quickly detect performance problems, including weaker answers and slower responses.
“Because each change affected a different slice of traffic on a different schedule, the aggregate effect looked like broad, inconsistent degradation,” Anthropic said. When “we began investigating reports in early March, they were challenging to distinguish from normal variation in user feedback at first, and neither our internal usage nor evals initially reproduced the issues identified.”
This inability to reproduce errors and, for that matter, any behavior at all, is just one of the realities of genAI tools and agents. The fact that the same model is likely to give a different answer to the identical question posed two minutes apart is exactly why reproducibility is so difficult. That’s the case with all AI vendors, but it’s not their fault, in the same way hallucinations and ignored guardrails are not their fault. It’s just how LLMs operate. You want the good? Accept the bad. Blaming genAI technology for inconsistencies is like blaming the fabled scorpion.
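A toy illustration of why this is so, with invented token probabilities: LLMs sample from a probability distribution over next tokens, so any temperature above zero lets identical prompts diverge across runs:

```python
# A toy illustration: sampling from a next-token distribution is why
# identical prompts can yield different answers. Probabilities invented.
import random

NEXT_TOKEN_PROBS = {"approve": 0.45, "review": 0.35, "escalate": 0.20}

def sample_answer(rng: random.Random) -> str:
    tokens = list(NEXT_TOKEN_PROBS)
    weights = list(NEXT_TOKEN_PROBS.values())
    return rng.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" run twice with different RNG state, as happens
# between two real API calls, can produce different outputs.
print(sample_answer(random.Random(1)))
print(sample_answer(random.Random(2)))
```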
All major AI vendors are in an awkward position: when deciding what performance to deliver, they face what looks like a conflict of interest. That’s because the vast majority of current enterprise clients are paying for token usage. That gives vendors like Anthropic, OpenAI, and others a real financial incentive to make background changes that increase the number of tokens customers need to purchase. Anthropic, for its part, suggested that its team was trying to reduce problems where token usage was artificially inflated.
For example, in its report, Anthropic said it “received user feedback that Claude Opus 4.6 in high effort mode would occasionally think for too long, causing the UI to appear frozen and leading to disproportionate latency and token usage for those users. In general, the longer the model thinks, the better the output. Effort levels are how Claude Code lets users set that tradeoff — more thinking versus lower latency and fewer usage limit hits. As we calibrate effort levels for our models, we take this tradeoff into account in order to pick points along the test-time-compute curve that give people the best range of options.”
Technology often backfires
Sometimes, an effort to help customers backfires because, well, technology hates all of us.
The report details an incident on March 26, where an internal Anthropic change “was meant to be an efficiency improvement. We use prompt caching to make back-to-back API calls cheaper and faster for users. Claude writes the input tokens to the cache when it makes an API request, then after a period of inactivity the prompt is evicted from cache, making room for other prompts. Cache utilization is something we manage carefully.”
Then things got sticky. “The design should have been simple: if a session has been idle for more than an hour, we could reduce users’ cost of resuming that session by clearing old thinking sections. Since the request would be a cache miss anyway, we could prune unnecessary messages from the request to reduce the number of uncached tokens sent to the API.”
Turns out, “the implementation had a bug. Instead of clearing thinking history once, it cleared it on every turn for the rest of the session. After a session crossed the idle threshold once, each request for the rest of that process told the API to keep only the most recent block of reasoning and discard everything before it. This compounded: if you sent a follow-up message while Claude was in the middle of a tool use, that started a new turn under the broken flag, so even the reasoning from the current turn was dropped. Claude would continue executing, but increasingly without memory of why it had chosen to do what it was doing. This surfaced as the forgetfulness, repetition, and odd tool choices people reported. …We believe this is what drove the separate reports of usage limits draining faster than expected.”
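The following is a hypothetical reconstruction of that bug class, not Anthropic’s actual code: a prune-once flag that is never reset, so every turn after the idle resume drops all but the latest reasoning block:

```python
# A hypothetical reconstruction of the bug class described above.
IDLE_THRESHOLD_SECONDS = 3600

class Session:
    def __init__(self) -> None:
        self.prune_old_thinking = False

    def on_resume(self, idle_seconds: float) -> None:
        if idle_seconds > IDLE_THRESHOLD_SECONDS:
            self.prune_old_thinking = True

    def build_request_buggy(self, thinking_blocks: list[str]) -> list[str]:
        # Bug: the flag sticks, so pruning repeats on every later turn,
        # and Claude gradually loses the reasoning behind its own actions.
        if self.prune_old_thinking:
            return thinking_blocks[-1:]
        return thinking_blocks

    def build_request_fixed(self, thinking_blocks: list[str]) -> list[str]:
        # Fix: prune exactly once, then clear the flag.
        if self.prune_old_thinking:
            self.prune_old_thinking = False
            return thinking_blocks[-1:]
        return thinking_blocks
```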
And with Claude Opus 4.7, the vendor noted, it “has a notable behavioral quirk” of being “quite verbose. This makes it smarter on hard problems, but it also produces more output tokens.”
To be clear, I’m not suggesting Anthropic was doing anything especially poorly. Indeed, these are the kinds of problems all genAI companies face, and I applaud Anthropic’s transparency in publishing its reasoning openly. (Anthropic executives do seem to be trying to portray themselves as more ethical and responsible than many of their rivals.)
What the report makes clear, however, is that the AI package your company is spending a lot of money on is entirely within the control of the hyperscalers. They can dumb down answers and even charge you more money by increasing token usage.
They don’t ask your team beforehand for permission to make these kinds of changes. They don’t even routinely disclose the changes after the fact. In many ways, it’s just like a cloud provider changing settings without your knowledge. Your team might have spent two days getting all of the settings just right for operations, security and compliance on Monday afternoon. You wouldn’t want that cloud team to change everything on Tuesday and not mention it. It’s the same story with SaaS.
Now more than ever, trust, honesty and integrity need to be critical vendor differentiators. That’s especially true for AI companies. You need to track accuracy, speed and a dozen other AI variables internally so you can detect any changes as quickly as possible. As boards push harder for IT to deliver clean ROI for AI efforts, these monitoring efforts are no longer optional.
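A minimal sketch of what such internal tracking could look like, assuming a generic call_model client supplied by your own stack: wrap every call, record latency, and alert when it drifts far from a rolling baseline:

```python
# A minimal monitoring sketch: wrap model calls, record latency, and
# flag drift against a rolling median. `call_model` is a placeholder
# for whatever client your stack uses.
import statistics
import time

latencies: list[float] = []

def monitored_call(call_model, prompt: str) -> str:
    start = time.monotonic()
    answer = call_model(prompt)
    elapsed = time.monotonic() - start
    latencies.append(elapsed)
    if len(latencies) >= 20:
        baseline = statistics.median(latencies[-20:])
        if elapsed > 2 * baseline:
            print(f"ALERT: latency {elapsed:.2f}s vs baseline {baseline:.2f}s")
    return answer
```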
Buyer beware indeed.
Global Crackdown Arrests 276, Shuts 9 Crypto Scam Centers, Seizes $701M
Instructure confirms data breach, ShinyHunters claims attack
Microsoft Defender wrongly flags DigiCert certs as Trojan:Win32/Cerdigent.A!dha
Telegram Mini Apps abused for crypto scams, Android malware delivery
E-bike and e-scooter fires are on the rise. Firefighters advise how to eliminate the risk: don't charge them in the hallway, and pay extra for quality
CISA Adds Actively Exploited Linux Root Access Bug CVE-2026-31431 to KEV
Critical cPanel flaw mass-exploited in "Sorry" ransomware attacks
ConsentFix v3 attacks target Azure with automated OAuth abuse
Trellix Confirms Source Code Breach With Unauthorized Repository Access
Microsoft tests modern Windows Run, says it's faster than legacy dialog
Edu tech firm Instructure discloses cyber incident, probes impact
AI agents can bypass guardrails and put credentials at risk, Okta study finds
An AI agent that revealed sensitive data without being asked. An agent that overruled its own guardrails. Another that sent credentials to an attacker via Telegram, because it forgot it wasn’t supposed to do so after a reset.
It’s no secret that AI agents have huge potential, balanced by equally big risks. What’s becoming apparent, however, is how quickly agentic systems can veer wildly off course and start exposing critical information under real-world conditions.
A look at just how easily this can happen emerges from Phishing the agent: Why AI guardrails aren’t enough, a report on tests conducted by cloud identity and access management (IAM) company Okta Threat Intelligence, which uncovered all of the problems cited above, and more.
Their research focused on OpenClaw, a model-agnostic multi-channel AI assistant which has seen explosive growth inside enterprises since appearing in late 2025.
The Telegram hack
In common with the growing list of rival agents, OpenClaw is only as useful as the access it is given to files, accounts, browsers, network devices, and, most significant of all, credentials.
One test conducted by Okta assessed how easy it would be to trick OpenClaw running Claude Sonnet 4.6 into handing over an OAuth token. This shouldn’t be possible; the LLM should refuse this request. However, what might have held true when prompting Claude as a chatbot quickly fell apart when it was accessed through OpenClaw.
The test assumed that a user had given OpenClaw full access to their computer, that they regularly controlled the agent over Telegram, and that their Telegram account had been hijacked.
First, the attacker instructed the agent via Telegram to retrieve an OAuth token, but to display it only in a terminal window on the computer. Claude Sonnet’s guardrails would prevent it from copying the token; however, the testers were able to reset the agent, causing it to forget it had displayed the token in the terminal window.
At that point, Okta said in its writeup, “The agent was instructed to take a screenshot of the desktop, which included the token, and then drop the screenshot in the Telegram chat, which it did. Exfiltration accomplished.”
Agent-in-the-middle
Agentic AI is really two things: a powerful orchestration system coupled to one or more highly capable LLMs. What an agent isn’t is a simple interface, and it must be viewed as a separate system capable of autonomous, unpredictable reasoning.
In fact, Okta threat intelligence director Jeremy Kirk pointed out, “It opens up a new attack surface. Someone gets SIM swapped, their Telegram is hooked up to an agent that has carte blanche to run anything on their computer, and possibly their employer’s network. In an enterprise context, this is a total nightmare.”
OpenClaw is also so hard-wired to find ways around problems that it will sometimes do unexpected, improper things. Kirk said that in tests, an agent prompted to access a website requested the site’s login credentials in chat via a Telegram bot, an unencrypted channel that would expose them to anyone with access to that chat.
In another example, OpenClaw was asked to search X for AI stories. That shouldn’t have been possible; the machine was logged into X, but OpenClaw’s isolated Chrome profile was not. However, when prompted to grab the session cookies from the logged-in session and inject them into its own browser process, it happily attempted to do so.
This is similar in principle to adversary-in-the-middle phishing attacks, which allow attackers to bypass protections such as MFA. It should be a no-go, and yet OpenClaw thought the action was valid, underlining how an attacker could manipulate it to do the same.
“The agents are prompted to be as helpful as possible by default, a characteristic that poses particular concerns when it comes to credentials and tokens,” said Kirk.
‘Defying security gravity’
According to Kirk, many enterprises are, sometimes unwittingly, running unsanctioned or weakly managed ‘shadow’ agents inside their networks. An example of how this can go wrong was the recent Vercel compromise, in which the Context.ai app opened the door to the theft of downstream OAuth session tokens.
The problem stems from agents being used experimentally by developers and employees, with little or no governance or oversight. The answer is to secure them using the same controls applied to users or service accounts, said Kirk. And as well as limiting the scope of agents, enterprises should look to secure the credentials and tokens themselves, avoiding long expiry dates.
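As a sketch of that short-expiry advice, assuming the PyJWT library and placeholder claims, credentials handed to an agent can be minted with a deliberately small window of validity:

```python
# A minimal sketch of short-lived, scoped agent credentials. The secret,
# subject, and scope are placeholders; a stolen token minted this way is
# useful to an attacker for minutes, not months.
import datetime
import jwt  # pip install PyJWT

SECRET = "replace-with-a-managed-secret"

def mint_agent_token(agent_id: str, ttl_minutes: int = 15) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": agent_id,
        "scope": "read:tickets",  # least privilege for this agent
        "iat": now,
        "exp": now + datetime.timedelta(minutes=ttl_minutes),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

token = mint_agent_token("openclaw-helpdesk")
```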
Agents are only the latest example of a technology that is being deployed faster than it can be secured, Kirk observed. “Much of AI right now is defying security gravity,” he said. “But there are ways to use agents safely and keep credentials out of their reach, which is the only safe way to use them.”
This article originally appeared on CSOonline.
Windows shell spoofing vulnerability puts sensitive data at risk
Microsoft and the US Cybersecurity and Infrastructure Security Agency (CISA) have sounded the alarm about a Windows shell spoofing vulnerability that is already being exploited by attackers. It is not clear by whom as yet, but the main suspects are hackers in Russia.
CISA has mandated that all federal agencies patch this vulnerability, designated CVE-2026-32202, by May 12. According to a Microsoft advisory, exploitation of the flaw could lead to access to sensitive data, but attackers would not be able to gain control of the system.
However, one security expert has warned that the considerable gap between the time Microsoft identified the bug and the date by which the systems must be patched leads to increased risk.
The patch gap
Lionel Litty, CISO for security company Menlo, said the problem is compounded by the fact that CVE-2026-32202 resulted from an incomplete patch for CVE-2026-21510. “This has been a theme for many years. A vulnerability exists and the vendor has not been thorough enough in dealing with it, so a small variation has not been fully patched. What normally happens is that they’ve dealt with the main vulnerability, but there are still side effects.” The result is a further delay in a complete fix while a new update is developed.
The big problem, said Litty, is the so-called patch gap. Initially, there’s a gap between the time a vendor finds a vulnerability and the time it issues a patch; there is also a subsequent gap between the patch being issued and organizations completing the update. For example, he noted, if an update interrupts users’ work, they may be reluctant to apply it. “We can see on our platform that many users don’t update for weeks, or even months,” he said.
He pointed out that the vendors themselves are acting efficiently. But, he said, “as a CISO, I have to decide what level of pain to inflict on our users.”
A difficult balance
Erik Avakian, technical counselor at Info-Tech Research Group, noted that when it set the patching deadline, CISA had been operating within the guidelines laid down in Binding Operational Directive (BOD) 22-01, which requires US federal agencies to patch vulnerabilities within the timelines outlined under the policy, which range from 14 to 21 days.
“In cases of high-risk exploitation, CISA can shorten the deadline to three days,” he said. “But in the case of CVE-2026-32202, the CVSS score was rated at 4.3, and even though the vulnerability has been actively exploited, the rating does not meet the policy threshold for a faster patch cycle. In this case, CISA allotted a 14-day deadline, which meets its aggressive timeline standard based on the vendor rating.”
He said that there is indeed an argument that a 14-day window to patch a vulnerability that is being actively exploited in the wild is too long. But, he said, “I’m assuming in this case, the reason why it was not elevated to an emergency directive type patch cycle (which would require as little as 48 to 72 hours to patch) is due to Microsoft’s rating, as well as several other factors.”
Avakian explained his reasoning: “First, organizations can help mitigate the risk without applying a full patch by blocking certain ports for traffic at the firewall perimeter,” he said. “This type of countermeasure helps to reduce the risk while the 14-day patch window clock is ticking. The longer window gives testers added time to test patches being applied properly in a test/staging environment before rolling to production.”
Secondly, he said, “it’s one thing [for IT] to patch systems quickly, but it’s another when they’re rushed, because that carries the potential for additional unintended risk of breaking critical systems and applications if something goes wrong, or if the patch wasn’t tested properly.”
Avakian did agree that CISOs are facing a difficult balancing act, where they have to weigh risk against the stability of systems.
And, as Litty pointed out, the situation is constantly changing; the emergence of AI will cause more issues in the future. “We’re seeing a shrinking gap as AI becomes part of the problem,” he said, adding that AI use means people with fewer technical skills are able to exploit systems, and do so more quickly, so CISOs should not assume that sophisticated attacks are coming from nation states. There needs to be a change of mindset within organizations to deal with this.
“You can no longer spend a few weeks testing an upgrade and then implementing it: you have to do things much faster,” he said.
This article originally appeared on CSOonline.
Ubuntu infrastructure has been down for more than a day
Servers operated by Ubuntu and its parent company Canonical were knocked offline on Thursday morning and have remained down ever since, a situation that’s preventing the OS provider from communicating normally following the botched disclosure of a major vulnerability.
Attempts to connect to most Ubuntu and Canonical webpages and download OS updates from Ubuntu servers have consistently failed over the past 24 hours. Updates from mirror sites, however, have continued to work normally. A Canonical status page said: “Canonical’s web infrastructure is under a sustained, cross-border attack and we are working to address it.” Other than that, Ubuntu and Canonical officials have maintained radio silence since the outage began.
A decades-long scourge
A group sympathetic to the Iranian government has taken credit for the outage. According to posts on Telegram and other social media, the group is responsible for a DDoS attack using Beam, an operation that claims to test the ability of servers to operate under heavy loads but, like other “stressers,” is in fact a front for a service miscreants pay to take down third-party sites. In recent days, the same pro-Iran group has taken credit for DDoSes on eBay.
30,000 Facebook Accounts Hacked via Google AppSheet Phishing Campaign
15-year-old detained over French govt agency data breach
Story retracted