RSS Aggregator
Recently leaked Windows zero-days now exploited in attacks
Operation PowerOFF Seizes 53 DDoS Domains, Exposes 3 Million Criminal Accounts
Fungi stole a gene from bacteria that lets them influence the weather. They can't command the wind, but they can command the rain
Ryzen 7 5800X3D returns as a 10th-anniversary edition for the AM4 socket
Games for free or on sale: A sale on Warhammer classics, and sneaking through a Spanish monastery for free
Apache ActiveMQ CVE-2026-34197 Added to CISA KEV Amid Active Exploitation
Anthropic’s latest model is deliberately less powerful than Mythos (and that’s the point)
Anthropic today released a new, improved Claude model, Opus 4.7, but has deliberately built it to be less capable than the highly anticipated Claude Mythos.
Anthropic calls Opus 4.7 a “notable improvement” over Opus 4.6, offering advanced software engineering capabilities and improved vision, memory, instruction-following, and financial analysis.
However, the yet-to-be-released (and inadvertently leaked) Mythos seems to overshadow the Opus 4.7 release. Interestingly, Anthropic itself is downplaying Opus 4.7 to an extent, calling it “not as advanced” and “less broadly capable” than the Claude Mythos Preview.
The Opus upgrade also comes on the heels of the launch of Project Glasswing, Anthropic’s security initiative that uses Claude Mythos Preview to identify and fix cybersecurity vulnerabilities.
“For once in technological history, a product is being released with a marketing message that is focused more on what it does not do than on what it does,” said technology analyst Carmi Levy. “Anthropic’s messaging makes it clear that Opus 4.7 is a safer model, with capabilities that are deliberately dialed down compared to Mythos.”
‘Not fully ideal’ in some safety scenarios
Anthropic touts Opus 4.7’s “substantially better” instruction-following compared to Opus 4.6, its ability to handle complex, long-running tasks, and the “precise attention” it pays to instructions. Users report that they’re able to hand off their “hardest coding work” to the model, whose memory is better than that of prior versions. It can remember notes across long, multi-session work and apply them to new tasks, thus requiring less up-front context.
Opus 4.7 has 3x more vision capabilities than prior models, Anthropic said, accepting high-resolution images of up to 2,576 pixels. This allows the model to support multimodal tasks requiring fine visual detail, such as computer-use agents analyzing dense screenshots or extracting data from complex diagrams.
Further, the company reported that Opus 4.7 is a more effective financial analyst, producing “rigorous analyses and models” and more professional presentations.
Opus 4.7 is relatively on par with its predecessor in safety, Anthropic said, showing low rates of concerning behavior such as “deception, sycophancy, and cooperation with misuse.” However, the company pointed out, while it improves in areas like honesty and resistance to malicious prompt injection, it is “modestly weaker” than Opus 4.6 elsewhere, such as in responding to harmful prompts, and is “not fully ideal in its behavior.”
Opus 4.7 comes amidst intense anticipation of the release of Claude Mythos, a general-purpose frontier model that Anthropic calls the “best-aligned” of all the models it has trained. Interestingly, in its release blog today, the company revealed that Mythos Preview scored better than Opus 4.7 on a few major benchmarks, in some cases by more than ten percentage points.
The Mythos Preview boasted higher scores on SWE-Bench Pro and SWE-Bench Verified (agentic coding); Humanity’s Last Exam (multidisciplinary reasoning); and agentic search (BrowseComp), while the two had relatively the same scores for agentic computer use, graduate-level reasoning, and visual reasoning.
Opus 4.7 is available in all Claude products and in its API, as well as in Amazon Bedrock, Google Cloud’s Vertex AI, and Microsoft Foundry. Pricing remains the same as Opus 4.6: $5 per million input tokens, and $25 per million output tokens.
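As a quick illustration of the quoted per-token pricing, a request's cost can be estimated as follows (the token counts in the example are made-up figures for illustration, not from the article):

```python
# Rates quoted for Opus 4.7: $5 per million input tokens,
# $25 per million output tokens.
INPUT_RATE = 5.00 / 1_000_000    # dollars per input token
OUTPUT_RATE = 25.00 / 1_000_000  # dollars per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one API request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical request: 12,000 input tokens, 3,000 output tokens.
# 12,000 * $0.000005 + 3,000 * $0.000025 = $0.06 + $0.075 = $0.135
print(f"${estimate_cost(12_000, 3_000):.3f}")
```

Note that output tokens cost five times as much as input tokens at these rates, so long generations dominate the bill for most workloads.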
What sets Opus 4.7 apart
Claude Opus is being branded in the industry as a “practical frontier” model, and represents Anthropic’s “most capable intelligent and multifaceted automation model,” said Yaz Palanichamy, senior advisory analyst at Info-Tech Research Group. Its core use cases include complex coding, deep research, and comprehensive agentic workflows.
The model’s core product differentiators have to do with how well-coordinated and composable its embedded algorithms are at scaling up various operational use case scenarios, he explained.
Claude Opus 4.7 is a “technically inclined” platform requiring a fair amount of deep personalization to fine-tune prompts and generate work outputs, he noted. It retains a strong lead over rival Google Gemini in applied engineering use cases, even though Gemini 3.1 Pro has a larger context window (2M tokens versus Claude’s 1M); still, he said, “certain [comparable] models do tend to converge on raw reasoning.”
The 4.7 update moves Opus beyond basic chatbot workflows, and positions it as more of “a copilot for complex, technical roles,” Levy noted. “It’s more capable than ever, and an even better copilot for knowledge workers.” At the same time, it poses less risk, making it a “carefully calculated compromise.”
He also pointed out that the Opus 4.7 release comes just two months after Opus 4.6 was introduced. That itself is “a signal of just how overheated the AI development cycle has become, and how brutally competitive the market now is.”
A guinea pig for Mythos?
Last week, Anthropic also announced Project Glasswing, which applies Mythos Preview to defensive security. The company is working with enterprises like AWS and Google, as well as with 30-plus cybersecurity organizations, on the initiative, and claims that Glasswing has already discovered “thousands” of high-severity vulnerabilities, including some in every major operating system and web browser.
Anthropic is intentionally keeping Claude Mythos Preview’s release limited, first testing new cyber safeguards on “less capable models.” This includes Opus 4.7, whose cyber capabilities are not as advanced as those in Mythos. In fact, during training, Anthropic experimented to “differentially reduce” these capabilities, the company acknowledged.
Opus 4.7 has safeguards that automatically detect and block requests that suggest “prohibited or high-risk” cybersecurity uses, Anthropic explained. Lessons learned will be applied to Mythos models.
This is “an admission of sorts that the new model is somewhat intentionally dumber than its higher-end stablemate,” Levy observed, “all in an attempt to reinforce its cyber risk detection and blocking bona fides.”
From a marketing perspective, this allows Anthropic to position Opus 4.7 as an ideal balance between capability and risk, he noted, but without all the “cybersecurity baggage” of the limited-availability higher-end model.
Mythos may very well be the “ultimate sacrificial lamb” at the root of broader Opus 4.7 mass adoption, Levy said. Even in the “increasing likelihood” that Mythos is never publicly released, it will serve as “an ideal means of glorifying Opus as the one model that strikes the ideal compromise for most enterprise decision-makers.”
Palanichamy agreed, noting that Opus 4.7 could serve as a public-facing guinea pig to live-test and fine-tune the automated cybersecurity safeguards that will ultimately “become a mandatory precursory requirement for an eventual broader release of Mythos-class frontier models.”
Google should share search data to break its monopoly, European Commission suggests
The European Commission this week requested, but did not order, Google to allow third-party search engines in Europe access to its search data as a means to comply with the Digital Markets Act (DMA), legislation the Commission describes as designed to “make the markets in the digital sector fairer and more contestable.”
Google was sent a set of proposed measures on Wednesday that, according to a release, would grant third-party search engines (including Qwant from France; Mojeek, based in the UK; Swisscows from Switzerland; and Ecosia, Good, and MetaGer, all headquartered in Germany) the ability to access search data, such as ranking, query, and click-and-view data, “on fair, reasonable and non-discriminatory terms.”
In a statement, Teresa Ribera, executive vice-president for Clean, Just and Competitive Transition with the Commission, said that the decision “sets out the specifications we expect Google to follow to comply with its obligations under the [DMA]. Data is a key input for online search and for developing new services, including AI.”
The measures themselves cover several areas, including the scope of the search data Google must share, the means and frequency by which it must happen, and parameters for “setting fair, reasonable and non-discriminatory prices for search data.”
Move ‘far exceeds DMA’s original mandate’
In response to the Commission’s request, Clare Kelly, senior competition counsel for Google, said Thursday in a statement, “hundreds of millions of Europeans trust Google with their most sensitive searches, including private questions about their health, family, and finances, and the Commission’s proposal would force us to hand this data over to third parties, with dangerously ineffective privacy protections.”
The company, she said, “will continue to vigorously defend against this overreach, which far exceeds the DMA’s original mandate and jeopardizes people’s privacy and security.”
Phil Höfer, board member of SUMA-EV, which develops and runs MetaGer, said, “the planned measure might help with optimizing and developing European competitors to Google’s search service, but is not what’s needed most at this time. As long as the Commission isn’t planning on forcing Google to share their index data as well, this will not do much.”
Even better, he said, would be for the Commission “to decide to continue funding the European Open Web Index and allow European actors to build a competing infrastructure. We are convinced that without a European index, the EU will not be able to compete with American search engine giants.”
Forrester Senior Analyst Dario Maisto said the decision from the Commission is “not too timely but definitely in line with the measures Europe needs to free up businesses and citizens from risky dependencies on foreign organizations, vendors, and technologies. The final outcome is truly uncertain, though: one thing is to provide access to data to other players, one other thing is to modify users’ behaviors. We have to remember that the synonym for doing a search on the internet is actually: Google it.”
Brian Jackson, principal research director at Info-Tech Research Group, said that opening Google’s search data to third parties could make search more specialized again, especially in high-value verticals where users want results tailored to a specific industry or service need.
Enterprise digital teams, he said, may need to optimize for multiple discovery environments rather than relying on Google alone, and software buyers could see more choice as search and intelligence vendors build on shared data.
In addition, said Jackson, “it could revive domain-specific search models, but I think a more fragmented search ecosystem might raise manipulation risks, fraud, and poisoned results. That would make governance and monitoring much more important.”
Sanchit Vir Gogia, chief analyst at Greyhound Research, noted that, in terms of the impact on enterprises if Google shares search data under DMA, “this is being framed as a competition move, but that is not where the real impact sits. What is actually shifting here is control over how enterprise information is interpreted by machines.”
Definition of optimization is changing
For a long time, he said, “enterprises have quietly relied on the stability of a dominant discovery layer led by Google. That stability shaped everything from how content was written to how digital performance was measured. What is changing now is not just who has access to data, but how many systems can interpret that data.”
Gogia pointed out, “as alternative engines improve and start to matter, enterprises will find themselves operating in an environment where the same content can be surfaced differently, depending on which engine or AI system is doing the interpreting. That creates inconsistency, and over time, inconsistency becomes risk.”
There is, he said, also a deeper shift underneath all this: “Search is no longer just about helping users find information. It is increasingly the layer that feeds AI systems, copilots, and automated decisions. Once that layer fragments, enterprises no longer have a single reference point for how they are represented externally. That loss of coherence is subtle at first, but it builds into something much more material.”
Addressing the question of whether or not enterprises will need to optimize for multiple algorithms, he said, “the short answer is yes, but the bigger point is that the definition of optimization itself is changing. Enterprises are moving away from a world where they could tune for one dominant system into one where relevance is decided differently across multiple engines that do not follow the same rules.”
Search engines such as Qwant, Ecosia, and Mojeek, “each approach indexing and ranking differently,” Gogia said. “Some rely on their own infrastructure, others blend multiple data sources. The result is that the same piece of content can behave very differently across environments, even when nothing about the content itself has changed.”
What complicates this further, he said, “is the rise of AI-generated answers. Enterprises are no longer competing for links, they are competing to be included in summaries that may not even reveal where the information came from. That shifts the focus away from keywords and toward clarity, context, and credibility. The organizations that do well will be the ones whose content holds up across systems, not just within one.”
Interested parties have until May 1 to submit views on the proposed measures prior to a final decision, which will be binding on Google and must be adopted by July 27.
Forgejo 15.0
Open Developer Summit 2026 in Prague alongside SUSECON 2026
Anthropic won't own MCP 'design flaw' putting 200K servers at risk, researchers say
A design flaw – or expected behavior based on a bad design choice, depending on who is telling the story – baked into Anthropic's official Model Context Protocol (MCP) puts as many as 200,000 servers at risk of complete takeover, according to security researchers.…
Cisco Webex SSO flaw needs manual certificate update to fix
Admins who use Cisco Webex Services configured to use trust anchors within the SSO integration with Control Hub must install a new identity provider certificate to close a critical vulnerability, or risk losing access control.
Cisco said in an advisory this week that admins must upload a new identity provider (IdP) SAML certificate to Webex Control Hub, the web-based management portal where IT administrators can control all Cisco Webex services, including certificate management, meetings, messaging and calling. Failure to close this hole will allow an unauthenticated, remote attacker to impersonate any user within the service.
The vulnerability, CVE-2026-20184, carries a CVSS score of 9.8.
Because Webex is a cloud service, Cisco can, and has, patched its side of the application. But admins using single sign-on (SSO) still need to install the new certificate. There are no workarounds.
A Webex support article on managing SSO integration says that information about certificates is found in the Webex Control Hub Alerts center, where customers can view which ones are installed, and their status. The Control Hub also contains an SSO wizard to aid in updating certificates. The article contains step-by-step details on the process.
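The advisory does not publish the wizard's exact steps, but the general shape of checking an IdP SAML certificate's validity before uploading a replacement can be sketched with openssl. The filenames below are made up for illustration; a throwaway self-signed certificate stands in for the real IdP certificate, which an admin would obtain from the identity provider's metadata:

```shell
# Generate a throwaway self-signed cert to stand in for an IdP SAML
# certificate (in practice, export the cert from your IdP metadata).
openssl req -x509 -newkey rsa:2048 -nodes -keyout idp-key.pem \
    -out idp-cert.pem -days 365 -subj "/CN=example-idp"

# Inspect the subject and validity window -- the fields an admin
# reviews before uploading a replacement certificate.
openssl x509 -in idp-cert.pem -noout -subject -startdate -enddate

# Exit status 0 means the cert remains valid for at least the next
# 30 days (2592000 seconds); nonzero means it expires sooner.
openssl x509 -in idp-cert.pem -noout -checkend 2592000
```

The `-checkend` test is handy in a cron job or monitoring script, so certificate rollover is scheduled before, rather than after, SSO access breaks.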
Asked for comment, and for more details about the vulnerability, a Cisco spokesperson didn’t go beyond the advisory. “Cisco published a security advisory disclosing a vulnerability in the integration of single sign-on with Control Hub in Cisco Webex Services,” the spokesperson said. “At the time of publication (April 15) Cisco had addressed the vulnerability, and was not aware of any malicious use of this vulnerability. Affected customers must update their SAML certificate to ensure uninterrupted services.”
Gartner analyst Peter Firstbrook noted in an email that, since Cisco has applied the patch to the cloud service, this is more of a configuration change. But that doesn’t minimize the possible damage. “While we are not aware of exploits using this vulnerability, users can lose SSO access to Webex without this change,” he said.
“This does illustrate a bigger trend that identity and access management is the corporate perimeter,” he added, “and the majority of attacks include an identity and access management component. CISOs must increase their focus on IAM hygiene, particularly as agentic computing is accelerating.”
Identity and access management is, of course, the keystone of cybersecurity. As CrowdStrike observed in its 2026 Global Threat Report, abuse of valid accounts accounted for 35% of cloud incidents it investigated last year, “reinforcing that identity has become central to intrusion.” Single sign-on allows a user to authenticate to multiple applications through one set of credentials. It’s efficient and, more importantly for a CSO, strengthens security.
Additional critical fixes
The Webex flaw is one of three critical vulnerabilities Cisco identified and issued patches for this week. In addition, multiple vulnerabilities have to be patched in Cisco Identity Services Engine (ISE) and Cisco ISE Passive Identity Connector (ISE-PIC).
These holes (CVE-2026-20147 and CVE-2026-20148, which carry CVSS scores of 9.9) could allow an authenticated, remote attacker to perform remote code execution or conduct path traversal attacks on an affected device. To exploit these vulnerabilities, the attacker must have valid administrative credentials and send a crafted HTTP request to an affected device. There are no workarounds.
Separately, two more vulnerabilities were found in ISE that could lead to remote code execution on the underlying operating system of an affected device. To exploit these vulnerabilities (CVE-2026-20180 and CVE-2026-20186), the attacker would only need Read Only Admin credentials.
This article originally appeared on CSOonline.
Operation PowerOFF identifies 75k DDoS users, takes down 53 domains
ZionSiphon malware designed to sabotage water treatment systems
The week on ScienceMag.cz: A battery placed directly on tissue can relieve pain
The best places to search for extraterrestrial life: 45 Earth-like worlds identified. Is glass a solid or a liquid? Supernovae likely caused abrupt climate changes in the past, and it could happen again.
Excessive regulation vs. an effort to limit conflicts of interest: What the dispute between the Czech National Bank and financial advisers is about
News for kernels 7.1 and 7.2: no more 486 or Baikal support, and exFAT improvements
Intel is preparing a Raptor Lake refresh-refresh(-refresh) for 2027
A deep dive at the LHC: What's inside a quark?