Researchers Uncover Pre-Stuxnet ‘fast16’ Malware Targeting Engineering Software
CISA Adds 4 Exploited Flaws to KEV, Sets May 2026 Federal Deadline
ADT confirms data breach after ShinyHunters leak threat
Meta’s compute grab continues with agreement to deploy tens of millions of AWS Graviton cores
Meta is continuing its compute grab as the agentic AI race accelerates to a sprint.
Today, the company announced a partnership with Amazon Web Services (AWS) that will bring “tens of millions” of AWS Graviton5 cores (one chip contains 192 cores) into its compute portfolio, with the option to expand as its AI capabilities grow. This will make the Llama builder one of the largest Graviton customers in the world.
The move builds on Meta’s expansive partnerships with nearly every chip and compute provider in the business. It’s working with Nvidia, Arm, and AMD, as well as building its own internal training and inference accelerator chip.
“It feels very difficult to keep track of what Meta is doing, with all of these chip deals and announcements around in-house development,” said Matt Kimball, VP and principal analyst at Moor Insights & Strategy. This makes for “exciting times that tell us just how incredibly valuable silicon is right now.”
Controlling the system, not just scale
Graphics processing units (GPUs) are essential for large language model (LLM) training, but agentic AI demands a whole new class of workload. CPUs like Graviton5 are rising to this challenge, supporting intensive workloads like real-time reasoning, multi-step tasks, frontier model training, code generation, and deep research.
AWS says Graviton5 has the ability to handle “billions of interactions” and to coordinate complex, multi-stage agentic tasks. It is built on the AWS Nitro System to support high performance, availability, and security.
“This is really about control of the AI system, not just scale,” said Kimball. As AI evolves toward persistent, agentic workloads, the role of the CPU becomes “quite meaningful”: it serves as the control plane, handling orchestration, memory management, scheduling, and other intensive tasks across accelerators.
“This is especially true in agentic environments, where the workloads will be less linear and more stateful,” he pointed out. So, ensuring a supply of these resources just makes sense.
Reflecting Meta’s diversified approach to hardware
The agreement builds on Meta’s long-standing partnership with AWS, but also reflects what the company calls its “diversified approach” to infrastructure. “No single chip architecture can efficiently serve every workload,” the company emphasized.
Proving the point, Meta recently announced four new generations of its MTIA training and inference accelerator chip and signed a massive deal with AMD to tap into 6GW worth of CPUs and AI accelerators. It also entered into a multi-year partnership with Nvidia to access millions of Blackwell and Rubin GPUs and to integrate Nvidia Spectrum-X Ethernet switches into its platform, and was also one of Arm’s first major CPU customers.
In the wake of all this, Nabeel Sherif, a principal advisory director at Info-Tech Research Group, posed the burning question: “What are they going to do with all this capacity?”
Primarily it will support Meta’s internal experimentation and innovation, he said, but it also lays the groundwork and provides the capacity for Meta to offer its own agentic AI services, for instance, its Llama AI model as an API, to the market.
“What those [services] will look like and what platforms and tools they’ll use, as well as what guardrails they’ll provide to users, is still unclear, but it’s going to be interesting to see it develop,” said Sherif.
The expanded capacity will enable a diversity of use cases and experimentation across various architectures and platforms, he said. Meta will have many options, and access to supply, in an environment currently characterized not only by a wide variety of new CPU approaches, but by significant supply chain constraints. The AWS deal should be viewed as a complement to Meta’s partnerships and investments in other platforms like Arm, Nvidia, and AMD.
Kimball agreed that the move is “most definitely additive,” not a replacement or substitution. Meta isn’t moving off GPUs or accelerators; it’s building around them. “This is about assembling a heterogeneous system, not picking a single winner,” he said. “In fact, I think for most, heterogeneity is critical to long term success.”
Nvidia still dominates training and a lot of inference, while AMD is becoming “more and more relevant at scale,” Kimball noted. Arm, meanwhile, whether through CPU, custom silicon or other efforts, gives Meta architectural control, and Graviton5 fits into that mix as a “cost- and efficiency-optimized general-purpose compute layer.”
A question of strategy
The more interesting question is around strategy: Does this signal Meta is becoming a compute provider? Kimball doesn’t think so, noting that it’s likely the company isn’t looking to directly compete with hyperscalers as a general-purpose cloud. “This is more about vertical integration of their own AI stack,” he said.
The move gives them the ability to support internal workloads more efficiently, as well as providing the infrastructure foundation to expose more of that capability externally, whether through APIs, partnerships, or other means, he said.
And there’s a cost dynamic here, too, Kimball noted. As inference becomes persistent, especially with agentic systems, economics shift away from peak floating-point operations per second (FLOPS) (a measure of compute performance) and toward sustained efficiency and total cost of ownership (TCO).
CPUs like Graviton5 are well positioned for the parts of that workload that don’t require accelerators, but still need to run continuously. “At Meta’s scale, even small efficiency gains per workload compound quickly,” Kimball pointed out.
For developers and enterprise IT, the signal is pretty clear, he noted: The AI stack is getting more heterogeneous, not less so. Enterprises are going to see tighter coupling between CPUs, GPUs, and specialized accelerators, with workloads increasingly split across them based on behavior (prefill versus decode, stateless versus stateful, burst versus persistent).
“The implication is that infrastructure decisions have to become more workload-aware,” said Kimball. “It’s less about ‘which cloud?’ and more about ‘where does this specific part of the application run most efficiently?’”
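The workload-aware split Kimball describes can be pictured as a toy placement rule. The tier names, criteria, and the `place_workload` helper below are all illustrative assumptions for this sketch, not any vendor's actual scheduler:

```python
def place_workload(phase: str, stateful: bool, persistent: bool) -> str:
    """Pick a hypothetical hardware tier for one slice of an AI request.

    phase:      "prefill" (burst compute-heavy) or "decode" (memory-bound)
    stateful:   whether the slice carries session or agent state
    persistent: whether it runs continuously rather than in short bursts
    """
    if phase == "prefill":
        # Burst, compute-dense work favors GPUs/accelerators.
        return "gpu"
    if stateful or persistent:
        # Orchestration, state management, and always-on serving favor
        # general-purpose CPU cores (the "control plane" role).
        return "cpu"
    # Stateless, bursty decode could land on a dedicated inference tier.
    return "accelerator"
```

The point of the sketch is only that placement becomes a per-slice decision driven by workload behavior, rather than a per-application choice of a single platform.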
This article originally appeared on NetworkWorld.
Firestarter malware survives Cisco firewall updates, security patches
Windows Update gets new controls to reduce forced restarts
Why are top university websites serving porn? It comes down to shoddy housekeeping.
Websites for some of the world’s most prestigious universities are serving explicit porn and malicious content after scammers exploited the shoddy record-keeping of the site administrators, a researcher found recently.
The sites included berkeley.edu, columbia.edu, and washu.edu, the official domains for the University of California, Berkeley, Columbia University, and Washington University in St. Louis. Subdomains such as hXXps://causal.stat.berkeley.edu/ymy/video/xxx-porn-girl-and-boy-ej5210.html, hXXps://conversion-dev.svc.cul.columbia[.]edu/brazzers-gym-porn, and hXXps://provost.washu.edu/app/uploads/formidable/6/dmkcsex-10.pdf all deliver explicit pornography and, in at least one case, a scam site falsely claiming a visitor’s computer is infected and advising the visitor to pay a fee to remove the non-existent malware. In all, researcher Alex Shakhov said, hundreds of subdomains for at least 34 universities are being abused. Search results returned by Google list thousands of hijacked pages.
A handful of hijacked columbia.edu subdomains listed by Google
One of the sites redirected by a UC Berkeley subdomain
Hijacking a university's good name
Shakhov, founder of SH Consulting, said that the scammers—which a separate researcher has linked to a known group tracked as Hazy Hawk—are seizing on what amounts to a clerical error by site administrators of the affected universities. When they commission a subdomain such as provost.washu.edu, they create a CNAME record, which assigns the subdomain to a "canonical" domain. When the subdomain is eventually decommissioned—something that happens frequently for various reasons—the record is never removed. Scammers like Hazy Hawk then swoop in and hijack the stale record.
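The takeover pattern hinges on CNAME records that outlive the services they point to. A minimal inventory check might look like the sketch below; the record data, hostnames, and the `find_dangling_cnames` helper are all hypothetical, and a real audit would also query live DNS and the hosting provider rather than trust a static inventory:

```python
def find_dangling_cnames(records, active_targets):
    """Flag CNAME records whose target service may have been decommissioned.

    records:        iterable of (subdomain, cname_target) pairs, e.g. from
                    a DNS zone export.
    active_targets: set of target hostnames the organization still provisions.

    Returns the subset of records pointing at targets no longer in service,
    which are candidates for subdomain takeover and should be deleted.
    """
    return [(sub, tgt) for sub, tgt in records if tgt not in active_targets]
```

Running such a check on every decommissioning, rather than only at provisioning time, is what the affected universities' processes evidently lacked.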
Germany’s sovereign AI hope changes hands
As Europe seeks to assert its technological independence from US vendors, Aleph Alpha, once seen as Germany’s sovereign AI hope, is the target of a transatlantic takeover.
Aleph Alpha is set to merge with Canada’s Cohere in a deal that will bring together Cohere’s global AI clout and Aleph Alpha’s background in research. The two companies hope to develop an AI powerhouse, with backing from their Canadian and German ecosystems.
“Organizations globally are demanding uncompromising control over their AI stack. This transatlantic partnership unlocks the massive scale, robust infrastructure, and world-class R&D talent required to meet that demand,” said Cohere CEO Aidan Gomez in a news release that artfully presents the deal as a merger of equals but that, according to a footnote, requires only the approval of the German company’s shareholders, a sure sign of a one-sided takeover.
The combined companies will be looking to offer customized AI in highly regulated sectors including finance, defense, and healthcare. By pooling their talents and offerings, they hope to offer organizations AI solutions tailored to local laws, cultural contexts, and institutional requirements.
The move comes at a time when businesses across the world are looking at non-US options as a reaction to the Trump administration’s policy on tariffs and the uncertainty caused by the war with Iran.
There have been several initiatives within Europe to counteract US dominance. The EU’s Eurostack plan looked to make sure that major projects had a European option; Aleph Alpha was one of the companies highlighted within the scheme. The EU also launched Open Euro LLM, an attempt to narrow the US and Chinese lead in AI.
This article first appeared on CIO.
New BlackFile extortion group linked to surge of vishing attacks
Microsoft to roll out Entra passkeys on Windows in late April
Agent Mode is now available in Microsoft Word, Excel, and PowerPoint
Microsoft has beefed up Copilot’s capabilities in Word, Excel and PowerPoint, claiming its Agent Mode will help speed up workers’ output.
The new features, announced last year, mean that Copilot can work more efficiently with Office applications, for example, understanding the richness of a pivot table in Excel or the use of animations in PowerPoint.
In tests with customers and researchers, Microsoft has learned a few things about how to improve the way Copilot is deployed, and laid out some of them in a post to a company blog. Now, it said, Copilot takes action rather than just suggesting steps, while ensuring that users maintain control. Other improvements include the ability to work with different models and better integration of Work IQ to deliver higher quality output.
Further developments are in the pipeline, Microsoft said, including improved editing for complex workflows such as finance spreadsheets and legal documents, more visibility on changes, and a more seamless integration of Copilot into the software, so that the experience for users is the same for Word, Excel, and PowerPoint.
The updates are available now and are the default experience for customers with Microsoft 365 Copilot and Microsoft 365 Premium subscriptions, the company said.
See PCWorld’s first impressions of the new Copilot agents in Word and PowerPoint.
CISA last in line for access to Anthropic Mythos
The US Cybersecurity and Infrastructure Security Agency (CISA) does not yet have access to Anthropic’s bug-hunting AI model, Claude Mythos, even though other government agencies do, Axios reported earlier this week.
As if that weren’t a big enough slap in the face for the national cyber-defense agency, the list of those who do have access to Mythos includes several unauthorized users, according to Bloomberg News. Members of a private Discord channel specializing in seeking information about unreleased AI models have gained access to Mythos, according to one unnamed member of the group, Bloomberg reported. “The group has been using Mythos regularly since then, though not for cybersecurity purposes,” the person told Bloomberg, supplying screenshots to back up their claim.
Fearing that the powerful model could be used to identify and exploit flaws in software and online services, Anthropic has limited preview access to Mythos to an exclusive group of government agencies, industry groups, and software providers through an initiative it calls Project Glasswing.
Even if CISA is shut out, some government agencies do have access to Mythos, including the US Department of Commerce’s Center for AI Standards and Innovation and the US National Security Agency, which Axios said are already assessing Mythos.
New ‘Pack2TheRoot’ flaw gives hackers root Linux access
Former OpenAI research scientist launches new AI model for Tencent
Tencent has updated its Hunyuan AI model, its first major release since it recruited Yao Shunyu, a leading AI scientist from OpenAI. Tencent’s Hy3 model, currently available in preview, offers improvements in areas from complex reasoning to coding.
The Chinese technology conglomerate is playing catch-up with other Chinese AI developers including ByteDance, Alibaba and DeepSeek. China is betting big on open-source AI to offer alternatives to major US players. Back in 2023, Tencent claimed its then-new Hunyuan LLM was a more powerful and intelligent option than the versions of ChatGPT and Llama available at the time.
Tencent has backed AI start-ups including Moonshot AI and StepFun, hoping that they will boost its cloud computing division. The company has also restructured its research team to improve the quality of training data. It aims to double its investment in AI to more than $5 billion this year.
Not to be outdone, DeepSeek announced its V4 Flash and V4 Pro Series, the newest versions of its LLM. DeepSeek became an overnight hit in January 2025 with the launch of its R1 AI model and has gone on to develop other models since. It said the V4 upgrades will offer users advances in reasoning and agentic tasks, while a new feature called Hybrid Attention Architecture improves the ability of the AI platform to remember queries across long conversations.
This article first appeared on InfoWorld.
FIRESTARTER Backdoor Hit Federal Cisco Firepower Device, Survives Security Patches
Adobe bets on AI agents to stay at the center of marketing workflows
Adobe is rolling out autonomous agents to orchestrate work across its applications, a move that will reinforce its position at the core of content and marketing workflows as AI disrupts the software landscape, analysts say.
“We’re living at a true inflection point; a moment where creativity and marketing are being reshaped by AI, unlocking incredible new opportunities and raising the bar for speed, personalization, as well as scale,” said Shantanu Narayen, Adobe CEO, during his keynote presentation at Adobe Summit on Monday.
Liz Miller, vice president and principal analyst at Constellation Research, described the various agent-focused product updates at Adobe’s Summit conference this week as “an evolution of vision that brings the right AI capability into the right application.”
“The goal … is to continue evolving where and how AI is incorporated into the work of engagement,” she said.
Adobe’s recent launches indicate a “clear shift” to prioritize agentic AI investment, said Maria Bell, senior analyst at CCS Insight.
“Rather than focusing on standalone AI features, the emphasis is on building systems that can coordinate and execute work across workflows and functions,” said Bell. “Capabilities such as CX Enterprise, workflow agents and Firefly integrations point to an ambition to move from AI that supports decisions to systems that can act on them.”
Adobe kicked off its agent-related announcements ahead of the customer experience conference, unveiling its Firefly AI Assistant last week.
Using natural language prompts, the agent can autonomously carry out multi-step workflows across Adobe Creative Cloud apps such as Photoshop, Premiere, Express, and others. Aimed at both novice and expert users, Firefly AI Assistant can also guide users through tasks spanning image, video, audio, and design. A public beta is “coming soon,” according to Adobe.
The launch of Firefly AI Assistant signals Adobe’s intent to “lead in agentic AI for creative professionals, [by] directly addressing workflow friction, usability, and the demand for multi-model flexibility,” said Keith Kirkpatrick, research director at Futurum, in a blog post last week.
“Adobe’s Firefly Assistant is a signal that agentic AI is moving from experimental pilots to production-grade tools capable of handling real creative complexity,” he said, with enterprise buyers “no longer content with simple copilots or one-off automation.”
“The ability to automate multi-step tasks and orchestrate between image and video modalities is quickly becoming table stakes for creative AI platforms,” he said.
Adobe’s main announcement during the Summit event this week was CX Enterprise Coworker, an AI agent that coordinates multi-step workflows and tasks across Adobe’s customer experience applications.
“Adobe is moving to occupy the role of an automated operating system for marketing,” said Jim Lundy, CEO of Aragon Research, in a blog post on Wednesday. “While previous AI tools acted as individual assistants for specific tasks, the CX Enterprise Coworker acts as a supervisor that connects disparate silos of information.”
Lundy said that CX Enterprise Coworker represents a “significant evolution” in the way enterprises will manage the customer lifecycle, replacing manual hand-offs with automated orchestration across customer engagement apps.
“By anchoring this tool in its robust experience platform, Adobe is making a strong case for being the primary intelligence layer in the modern marketing stack,” said Lundy.
There were also updates to GenStudio with Brand Intelligence, a data layer that connects information across Adobe tools to provide context for agents to act upon, alongside a new agent capability in Adobe’s Workfront work management app.
While agents present an opportunity for Adobe, it also faces potential disruption from both design software vendors that build AI into their products and general-purpose AI assistants.
This risk has raised concerns in financial markets, and Adobe announced a $25 billion share buyback scheme this week — a move that can be seen as an attempt to shore up investor confidence amid a period of significant change, both across the industry as AI reshapes the software landscape, and within Adobe itself, with CEO Shantanu Narayen set to step down after 18 years in charge.
Ahead of the event, popular online design platform Canva unveiled its own agentic capabilities, with users able to access various Canva tools via a conversational interface that can complete multi-step processes, such as creating “a multi-channel campaign launch.”
“Canva is focused on accessibility, using AI to simplify and automate design for a broader audience,” said CCS Insight’s Bell. “This lowers barriers to entry and puts pressure on Adobe in lighter-weight and non-professional use cases.”
And, last week, Anthropic announced Claude Design, which lets users create design prototypes and marketing assets such as “landing pages, social media assets, and campaign visuals.”
In addition, Anthropic and Canva announced an integration that brings Claude Design outputs into Canva’s app.
Miller from Constellation Research said that tools such as Claude Design are “powerful additions” to the design ecosystem, enabling non-designers to quickly prototype and test ideas using text prompts. At the same time, these should be seen as more of a starting point. Professional-level design and editing tools are still required to create enterprise-ready prototypes.
“A creator may start in OpenAI, use that output in Claude to further build out the concept, but end in Firefly to ensure enterprise safety and brand controls in a more refined, finely tuned toolset,” Miller said.
Adobe is also working with a range of AI providers to make its software available where customers prefer. This includes the ability to interact with Adobe’s Firefly creative assistant directly from Claude, for instance.
“Our strategy is to meet where the users are,” said Varun Parmar, general manager of Adobe GenStudio and Firefly for Enterprise. A user might invoke an Adobe creative agent via Claude in the morning, he said, “and then later in the afternoon decide to do deeper precision and control work that requires a professional sort of interface, which is where Adobe’s product is world class.”
“We believe that these things will coexist; depending on the use case, you’ll go in and out [of different apps],” said Parmar.
As AI model providers expand into workplace software tools, it makes most sense for Adobe to focus on its core strength of serving creative and marketing professionals, Miller said.
“The risk to Adobe is more of an ongoing challenge to stay focused on customer demand and need, and not veer off course in a never-ending horse race with models proving what can be done, as opposed to commercially safe models that deliver what must be done,” she said.
And despite some media negativity around Adobe’s ability to transition into a new era of agentic AI, Miller said, Adobe’s strategy of embedding data, assets, and workflows into the tools marketers and creatives use remains sound.
Bell sees agentic systems as a “longer-term structural shift, while the more immediate pressure comes from accessibility-focused platforms like Canva.”
Yet Adobe’s access to data and expertise serving large customers provide it with an edge. “Adobe remains strong in professional and enterprise environments, where depth, control and integration still matter,” Bell said.
NASA Employees Duped in Chinese Phishing Scheme Targeting U.S. Defense Software
DORA and operational resilience: Credential management as a financial risk control
Tails 7.7 Surfaces Secure Boot Risk as 2026 Certificate Expiry Approaches
Over 10,000 Zimbra servers vulnerable to ongoing XSS attacks