RSS Aggregator
Have you noticed? Tesla once again failed to unveil the Roadster. Musk announced a new date: "In a month, or so"
Facebook, Instagram, and WhatsApp face a major change. A single login will replace separate accounts
A new virus from seafood has jumped to humans for the first time. It can cause serious eye damage
Crime crew impersonates help desk, abuses Microsoft Teams to steal your data
A previously unknown threat group using tried-and-tested social engineering tactics (Microsoft Teams chat invitations and helpdesk staff impersonation) is also using custom malware in its data-stealing attacks, according to Google's Threat Intelligence Group…
Researchers Uncover Pre-Stuxnet ‘fast16’ Malware Targeting Engineering Software
Even an ear cleaner can be smart now. The Bebird EarSight Ultra has a camera, Wi-Fi, and plenty of accessories
GoPro has gone off the leash. The new Mission cameras have interchangeable lenses and are eyeing the film industry
Lasers instead of fuel. Tests are underway in Texas on metajets that could carry a probe to Alpha Centauri in just 20 years
CISA Adds 4 Exploited Flaws to KEV, Sets May 2026 Federal Deadline
raylib 6.0
10 discount coupons for this week: bargain watches, a free credit card, and food delivery
ADT confirms data breach after ShinyHunters leak threat
Meta’s compute grab continues with agreement to deploy tens of millions of AWS Graviton cores
Meta is continuing its compute grab as the agentic AI race accelerates to a sprint.
Today, the company announced a partnership with Amazon Web Services (AWS) that will bring “tens of millions” of AWS Graviton5 cores (one chip contains 192 cores) into its compute portfolio, with the option to expand as its AI capabilities grow. This will make the Llama builder one of the largest Graviton customers in the world.
The move builds on Meta’s expansive partnerships with nearly every chip and compute provider in the business. It’s working with Nvidia, Arm, and AMD, as well as building its own internal training and inference accelerator chip.
“It feels very difficult to keep track of what Meta is doing, with all of these chip deals and announcements around in-house development,” said Matt Kimball, VP and principal analyst at Moor Insights & Strategy. This makes for “exciting times that tell us just how incredibly valuable silicon is right now.”
Controlling the system, not just scale
Graphics processing units (GPUs) are essential for large language model (LLM) training, but agentic AI requires a whole new workload capability. CPUs like Graviton5 are rising to this challenge, supporting intensive workloads like real-time reasoning, multi-step tasks, frontier model training, code generation, and deep research.
AWS says Graviton5 has the ability to handle “billions of interactions” and to coordinate complex, multi-stage agentic tasks. It is built on the AWS Nitro System to support high performance, availability, and security.
“This is really about control of the AI system, not just scale,” said Kimball. As AI evolves toward persistent, agentic workloads, the role of the CPU becomes “quite meaningful”; it serves as the control plane, handling orchestration, managing memory, scheduling, and other intensive tasks across accelerators.
“This is especially true in agentic environments, where the workloads will be less linear and more stateful,” he pointed out. So, ensuring a supply of these resources just makes sense.
Reflecting Meta’s diversified approach to hardware
The agreement builds on Meta’s long-standing partnership with AWS, but also reflects what the company calls its “diversified approach” to infrastructure. “No single chip architecture can efficiently serve every workload,” the company emphasized.
Proving the point, Meta recently announced four new generations of its MTIA training and inference accelerator chip and signed a massive deal with AMD to tap into 6GW worth of CPUs and AI accelerators. It also entered into a multi-year partnership with Nvidia to access millions of Blackwell and Rubin GPUs and to integrate Nvidia Spectrum-X Ethernet switches into its platform, and was also one of Arm’s first major CPU customers.
In the wake of all this, Nabeel Sherif, a principal advisory director at Info-Tech Research Group, posed the burning question: “What are they going to do with all this capacity?”
Primarily it will support Meta’s internal experimentation and innovation, he said, but it also lays the groundwork and provides the capacity for Meta to offer its own agentic AI services, for instance, its Llama AI model as an API, to the market.
“What those [services] will look like and what platforms and tools they’ll use, as well as what guardrails they’ll provide to users, is still unclear, but it’s going to be interesting to see it develop,” said Sherif.
The expanded capacity will enable a diversity of use cases and experimentation across various architectures and platforms, he said. Meta will have many options, and access to supply in an environment currently characterized not only by a wide variety of new CPU approaches, but by significant supply chain constraints. The AWS deal should be viewed as a complement to its partnerships and investments in other platforms like Arm, Nvidia, and AMD.
Kimball agreed that the move is “most definitely additive,” not a replacement or substitution. Meta isn’t moving off GPUs or accelerators, it’s building around them. “This is about assembling a heterogeneous system, not picking a single winner,” he said. “In fact, I think for most, heterogeneity is critical to long term success.”
Nvidia still dominates training and a lot of inference, while AMD is becoming “more and more relevant at scale,” Kimball noted. Arm, meanwhile, whether through CPU, custom silicon or other efforts, gives Meta architectural control, and Graviton5 fits into that mix as a “cost- and efficiency-optimized general-purpose compute layer.”
A question of strategy
The more interesting question is around strategy: Does this signal Meta is becoming a compute provider? Kimball doesn’t think so, noting that it’s likely the company isn’t looking to directly compete with hyperscalers as a general-purpose cloud. “This is more about vertical integration of their own AI stack,” he said.
The move gives them the ability to support internal workloads more efficiently, as well as providing the infrastructure foundation to expose more of that capability externally, whether through APIs, partnerships, or other means, he said.
And there’s a cost dynamic here, too, Kimball noted. As inference becomes persistent, especially with agentic systems, economics shift away from peak floating-point operations per second (FLOPS) (a measure of compute performance) and toward sustained efficiency and total cost of ownership (TCO).
CPUs like Graviton5 are well positioned for the parts of that workload that don’t require accelerators, but still need to run continuously. “At Meta’s scale, even small efficiency gains per workload compound quickly,” Kimball pointed out.
For developers and enterprise IT, the signal is pretty clear, he noted: The AI stack is getting more heterogeneous, not less so. Enterprises are going to see tighter coupling between CPUs, GPUs, and specialized accelerators, with workloads increasingly split across them based on behavior (prefill versus decode, stateless versus stateful, burst versus persistent).
“The implication is that infrastructure decisions have to become more workload-aware,” said Kimball. “It’s less about ‘which cloud?’ and more about ‘where does this specific part of the application run most efficiently?’”
This article originally appeared on NetworkWorld.
A Humanoid Robot Beat the Human World Record for a Half Marathon
A year after most robots failed to finish the Beijing race, nearly half the field autonomously ran a course of slopes, narrow passages, and 20 turns.
Humanoid robots are Silicon Valley’s latest obsession, but real-world performance has lagged the hype. That may be starting to change, however, after a robot beat the human record for a half marathon by nearly seven minutes in Beijing.
While tech companies around the world are piling into humanoid robots, China has made it a national priority. The government is pouring subsidies and infrastructure investment into the sector, and Chinese firms already account for around 80 percent of the humanoid machines shipped globally, according to the South China Morning Post.
Eager to show off its prowess, China has been staging sporting events for robots, most notably last year’s inaugural World Humanoid Robot Games. Another such event, the Beijing E-Town Half Marathon, pits humanoid robots against thousands of human runners over a 13-mile course. Last year, most of the non-human competitors failed to finish, and the fastest robots managed an unimpressive two hours and 40 minutes.
But this time around, four robots clocked times under an hour. And the winner, made by Chinese smartphone company Honor, registered a record-breaking 50 minutes, 26 seconds, eclipsing the benchmark set by Ugandan long-distance runner Jacob Kiplimo in Lisbon last month.
“Running faster may not seem meaningful at first, but it enables technology transfer, for example, into structural reliability and cooling, and eventually industrial applications,” Du Xiaodi, an engineer on the winning team, told Reuters.
More than 100 teams fielded 300 robots at this year’s event, up from just 21 entries at the inaugural event last year. But Honor, a spinoff from Chinese telecom giant Huawei, dominated the competition, with separate teams from the company taking all three podium spots.
The winning robot, Lightning, navigated the course entirely autonomously. The bot stands 5 feet 6 inches tall but features legs 37 inches long to mimic the physical attributes of elite runners. It also boasts liquid cooling technology used in the company’s smartphones.
The growing sophistication of the robots’ control software is perhaps one of the starkest shifts since last year, with roughly 40 percent of teams operating autonomously. This is particularly impressive given the challenging course, according to Bernstein Research analysts.
“The course included flat sections, slopes, narrow passages, and ~ 20 turns, demonstrating rapid improvement in robots’ intelligence to handle generalized environments in the real world,” they wrote, according to Bloomberg.
But the technology isn’t bulletproof yet. One robot ran into a barricade and had to be carried off on a stretcher. Another veered into a bush after crossing the finish line. And one continued racing with its torso held together by packing tape after a heavy fall.
Nonetheless, the race showcased the rapid progress China’s tech industry is making, particularly in the raw components used to build these machines, like motors, joints, and batteries. Liu Xiangquan, a robotics professor at Beijing Information Science and Technology University, told the South China Morning Post that long-distance running is a great test of how well these components can stand up to the kind of repeated strain that will occur in industrial settings.
And that’s likely to cause some consternation in US policy circles, where many see robotics as a key battlefront in the growing technological rivalry between the two superpowers.
Behind Sunday’s spectacle is a higher-stakes contest between China and the US over who will dominate the next generation of humanoids. US robotics firms have been lobbying Washington to draft a national strategy to counter China, which could include tariffs or bans on Chinese robots to help protect domestic producers.
However, running fast in a straight line is a very different challenge than the fine motor control and perception demanded by commercial applications. Experts told Reuters that despite impressive hardware, robotics companies are still a long way from developing the sophisticated software required to put these humanoids to practical use.
Still, these machines struggled to get over the starting line just a year ago. The gap between humanoid robots and human athletes has closed faster than anyone expected, so betting against further rapid progress seems unwise.
The post A Humanoid Robot Beat the Human World Record for a Half Marathon appeared first on SingularityHub.
What happened in week 17/2026
Everything a father passes on to his offspring
Firestarter malware survives Cisco firewall updates, security patches
Windows Update gets new controls to reduce forced restarts
Why are top university websites serving porn? It comes down to shoddy housekeeping.
Websites for some of the world’s most prestigious universities are serving explicit porn and malicious content after scammers exploited the shoddy record-keeping of the site administrators, a researcher found recently.
The sites included berkeley.edu, columbia.edu, and washu.edu, the official domains for the University of California, Berkeley; Columbia University; and Washington University in St. Louis. Subdomains such as hXXps://causal.stat.berkeley.edu/ymy/video/xxx-porn-girl-and-boy-ej5210.html, hXXps://conversion-dev.svc.cul.columbia[.]edu/brazzers-gym-porn, and hXXps://provost.washu.edu/app/uploads/formidable/6/dmkcsex-10.pdf all deliver explicit pornography and, in at least one case, a scam site falsely claiming a visitor’s computer is infected and advising the visitor to pay a fee to have the non-existent malware removed. In all, researcher Alex Shakhov said, hundreds of subdomains for at least 34 universities are being abused. Search results returned by Google list thousands of hijacked pages.
[Image: A handful of hijacked columbia.edu subdomains listed by Google]
[Image: One of the sites redirected by a UC Berkeley subdomain]
Hijacking a university's good name
Shakhov, founder of SH Consulting, said that the scammers, which a separate researcher has linked to a known group tracked as Hazy Hawk, are seizing on what amounts to a clerical error by site administrators of the affected universities. When they commission a subdomain such as provost.washu.edu, they create a CNAME record, which assigns the subdomain to a "canonical" domain. When the subdomain is eventually decommissioned, something that happens frequently for various reasons, the record is never removed. Scammers like Hazy Hawk then swoop in and hijack the stale record.
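The dangling-CNAME pattern Shakhov describes can be illustrated with a small sketch. This is a hypothetical model, not the researcher's actual tooling: a zone is represented as a plain dictionary instead of live DNS queries, and a subdomain counts as hijackable when its CNAME target no longer resolves to anything.

```python
# Sketch of dangling-CNAME detection (hypothetical model, no real DNS lookups).
# A zone maps subdomain -> CNAME target; live_names holds targets that still resolve.

def find_dangling(zone: dict[str, str], live_names: set[str]) -> list[str]:
    """Return subdomains whose CNAME target no longer exists.

    Such stale records are the opening scammers like Hazy Hawk exploit:
    they re-claim the decommissioned target, and the university subdomain
    silently starts serving the attacker's content.
    """
    return [sub for sub, target in zone.items() if target not in live_names]

# Example zone with one decommissioned target still referenced.
zone = {
    "provost.example.edu": "cms.vendor-a.example",    # vendor service shut down
    "library.example.edu": "pages.vendor-b.example",  # still active
}
live_names = {"pages.vendor-b.example"}

print(find_dangling(zone, live_names))  # → ['provost.example.edu']
```

In practice the same check would be run against real CNAME records and resolution results; the defense is simply removing the CNAME record when the subdomain is decommissioned.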
Germany’s sovereign AI hope changes hands
As Europe seeks to assert its technological independence from US vendors, Aleph Alpha, once seen as Germany’s sovereign AI hope, is the target of a transatlantic takeover.
Aleph Alpha is set to merge with Canada’s Cohere in a deal that will bring together Cohere’s global AI clout and Aleph Alpha’s background in research. The two companies hope they will be able to develop an AI powerhouse, with backing from their Canadian and German ecosystems.
“Organizations globally are demanding uncompromising control over their AI stack. This transatlantic partnership unlocks the massive scale, robust infrastructure, and world-class R&D talent required to meet that demand,” said Cohere CEO Aidan Gomez in a news release that artfully presents the deal as a merger of equals but that, according to a footnote, requires the approval only of the German company’s shareholders, a sure sign of a one-sided takeover.
The combined companies will be looking to offer customized AI in highly regulated sectors including finance, defense, and healthcare. By pooling their talents and offerings, they hope to offer AI solutions to organizations according to local laws, cultural contexts, and institutional requirements.
The move comes at a time when businesses across the world are looking at non-US options as a reaction to the Trump administration’s policy on tariffs and the uncertainty caused by the war with Iran.
There have been several initiatives within Europe to counteract US dominance. The EU’s Eurostack plan aimed to ensure that major projects had a European option, and Aleph Alpha was one of the companies highlighted within the scheme. The EU also launched Open Euro LLM, an attempt to narrow the US and China’s lead in AI.
This article first appeared on CIO.



