RSS Aggregator
A big comparison of ten cloud storage services: Where, and at what price, to store 100 GB, 1 TB, or 10 TB of data?
Apple plans to make Mac minis in the US
Illustrating the extent to which it is willing to work with the Trump Administration — and as President Donald J. Trump prepares for tonight’s State of the Union address — Apple now says it will begin to make Mac minis in Houston later this year. The Macs will be made at the same factory where the company now makes server chips.
“Apple is deeply committed to the future of American manufacturing, and we’re proud to significantly expand our footprint in Houston with the production of Mac mini starting later this year,” said Apple CEO Tim Cook. “We began shipping advanced AI servers from Houston ahead of schedule, and we’re excited to accelerate that work even further.”
Reinforcing the announcement, Apple Chief Operating Officer Sabih Khan repeated the promise during an interview with the Wall Street Journal. “We’re very excited to tell you that later this year we will be beginning Mac mini manufacturing right here in this space,” he said.
Khan suggested the new factory will churn out “thousands” of Macs each week.
Apple’s US manufacturing expansion
This decision means the Mac mini now becomes the second modern Apple PC to be manufactured in the US, alongside the Mac Pro. The company also makes servers for Private Cloud Compute at the same facility and intends to expand advanced AI server manufacturing there.
To some extent, the hardware Apple chooses to make in the US likely reflects the complexity and production-volume demands of those products. In some cases, Apple cannot make a product domestically because it can’t yet source enough of the highly skilled people it needs to do so. To solve that problem, Apple is currently investing more than $600 billion, including the creation of an Academy in Detroit. The company is also launching a huge, 20,000-square-foot advanced manufacturing skills training center near the new Mac mini factory.
“The dedicated facility will provide hands-on training in advanced manufacturing techniques to students, supplier employees, and American businesses of all sizes,” Apple said.
Creating opportunity in manufacturing
Apple’s US investment plan includes directly hiring 20,000 people over four years, with a focus on R&D, silicon engineering, software development, and AI. Apple has particularly focused its job-reshoring efforts on such high-tech, high-value tasks and revealed it has now sourced more than 20 billion US-made chips from 24 factories across the US for use in its products.
Third-party partners are all aboard: GlobalWafers has begun production at its new $4 billion bare silicon wafer facility in Sherman, TX. Apple will also purchase 100 million advanced chips produced by TSMC at its new Arizona facility this year.
Each of these investments creates new employment opportunities for US workers, while also protecting some of the highest value components used in Apple devices from tariffs. Every job counts, of course, given that US job creation has slowed to around 15,000 new jobs a month. In that context, the additional employment Apple provides means a lot — particularly as we all now anticipate wide-scale replacement and displacement of many human employees with smart machines.
Tariffs and their impact
Apple’s promise to bring Mac mini production to the US might or might not be enough to persuade the US government to spare it from some of the tariffs put in place since the courts rejected the earlier tranche. Just like every other enterprise, company leaders will certainly hope for better business stability; Apple has paid more than $3 billion in tariffs since 2025, at a rate of around $1 billion a quarter.
The truth about these added taxes is that consumers end up paying them on the products they buy. But tariffs also create instability and put pressure on profitability across the entire supply chain, which is bad for business.
Many US brands have leaned on their manufacturing partners to take a haircut on price in order to sustain these tariffs while keeping existing prices, and there’s little reason to think Apple hasn’t done the same thing. That may be fine in the short term, but in the medium/long term, reducing profitability at key suppliers puts their business at risk. This creates a chain of events in which key suppliers cease trading, leaving US companies gasping and grasping for replacements in what becomes a seller’s market.
No matter how challenging the seas become, Apple must navigate them somehow; its current compliance shows it is attempting to do just that, even as it prepares for a changing of the guard of its leadership class. Who and what will the company be tomorrow?
Microsoft adds Copilot data controls to all storage locations
Titanium in phones is on its way out; it had one fundamental flaw. Aluminum will take its place again
Go library maintainer brands GitHub's Dependabot a 'noise machine'
A Go library maintainer has urged developers to turn off GitHub's Dependabot, arguing that false positives from the dependency-scanning tool "reduce security by causing alert fatigue."…
The “miracle” Donut battery is back in the spotlight. Finns publish the results of an independent test
Try the world’s fastest AI chatbot. Jimmy may be a bit slow on the uptake, but it runs on a truly unique processor
The Ladybird browser is switching to Rust
Identity-First AI Security: Why CISOs Must Add Intent to the Equation
Firefox 148.0
UK fines Reddit $19 million for using children’s data unlawfully
UAC-0050 Targets European Financial Institution With Spoofed Domain and RMS Malware
The Logitech MX Creative Console has dropped to its lowest price. This Stream Deck alternative is handy in any office
UK data watchdog fines Reddit £14.47M for letting kids slip past the gate
The UK's data protection regulator has fined social media giant Reddit £14.47 million ($19.5 million) over its use of children's data.…
Critical SolarWinds Serv-U flaws offer root access to servers
What really caused that AWS outage in December?
For the first time, AWS has confirmed that one of its AI systems did indeed delete and recreate one of its environments in December, shutting down part of that service for about 13 hours. What happened behind the scenes — including an aggressive AWS statement against the media outlet that initially reported the issue — is far more interesting.
For IT, this raises questions about how users need to interact with AI systems. AI tools and services have mastered natural language so effectively that people can forget there isn’t a human involved. That can include approving an AI action without insisting on more details.
Consider a human driver inside a self-driving vehicle, such as a Tesla with “full self-driving.” Let’s say the car is driving down a highway at 65 MPH and going around a curve. But instead of following the curve, the vehicle drives straight, plunges through a guardrail — and the car and passenger drop a few hundred feet to their demise.
In theory, the human driver is in charge and can take back driving control at any point. But if the incident happens with no warning, the driver won’t likely have the half-second needed to resume control in time. Is that the vehicle AI’s fault or is the human to blame for not taking over?
You could make a legitimate argument that it was absolutely the human’s fault because they never should have trusted the self-driving tech in the first place.
That brings us to the current state of IT decisions and AI, which in turn brings us back to the December AWS disaster.
The backstory was broken by the Financial Times, which reported that the 13-hour outage was caused by Kiro, an agentic coding system, deciding to improve operations by deleting and then recreating a key environment.
AWS on Friday shot back to flag what it dubbed “inaccuracies” in the FT story. “The brief service interruption they reported on was the result of user error — specifically misconfigured access controls — not AI as the story claims,” AWS said.
To quote Obi-Wan Kenobi, “So, what I told you was true…from a certain point of view. Luke, you’re going to find that many of the truths we cling to depend greatly on our own point of view.” The more we look into the particulars of the December incident, the more user error doesn’t mean what the company is suggesting it means.
AWS continued: “The disruption was an extremely limited event last December affecting a single service (AWS Cost Explorer—which helps customers visualize, understand, and manage AWS costs and usage over time) in one of our 39 Geographic Regions around the world. It did not impact compute, storage, database, AI technologies, or any other of the hundreds of services that we run.”
That much seems true. It’s also a classic misdirection. The company conveniently forgot to confirm that the point of the story — that the system decided to delete and recreate an environment — was correct.
“The issue stemmed from a misconfigured role — the same issue that could occur with any developer tool (AI powered or not) or manual action.” That’s an impressively narrow interpretation of what happened.
AWS then promised it won’t do it again. “We implemented numerous safeguards to prevent this from happening again — not because the event had a big impact (it didn’t), but because we insist on learning from our operational experience to improve our security and resilience. Additional safeguards include mandatory peer review for production access. While operational incidents involving misconfigured access controls can occur with any developer tool — AI-powered or not — we think it is important to learn from these experiences. The Financial Times‘ claim that a second event impacted AWS is entirely false.”
As for the AWS statement, the hyperscaler doth protest too much, methinks.
This is a critical issue for a few reasons. First, AWS is hardly the first AI firm to shout “user error” when its systems misbehave. Second, this is part of a disconcerting trend of AI systems overreaching or even flatly ignoring human instructions.
In an emailed comment, AWS added that, “Kiro puts developers in control — users need to configure which actions Kiro can take, and by default, Kiro requests authorization before taking any action. In this case, an engineer was using a role with broader permissions than expected — a user access control issue, not an AI autonomy issue. The issue stemmed from a misconfigured role — the same issue that could occur with any developer tool or manual action.”
In an interview, an AWS spokesperson argued that the user error was not the approved system request, but that the AWS engineer apparently misunderstood their own level of privilege. “The human was confused by what privileges that they had. They thought that they had narrower privileges than they actually had,” the AWS spokesperson said.
This becomes relevant because most agentic systems, Kiro among them, have the same privileges as the human they’re working with. The AWS argument is that the engineer might have been more careful, or somehow acted differently, had he or she understood the high level of privilege the agent had been granted.
The key detail missing — which AWS would not clarify — is just what was asked and how the engineer replied. Had the engineer been asked by Kiro “I would like to delete and then recreate this environment. May I proceed?” and the engineer replied, “By all means. Please do so,” that would have been user error. But that seems highly unlikely.
The more likely scenario is that the system asked something along the lines of “Do you want me to clean up and make this environment more efficient and faster?” Did the engineer say “Sure” or did the engineer respond, “Please list every single change you are proposing along with the likely result and the worst-case scenario result. Once I review that list, I will be able to make a decision.”
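The distinction between those two replies can be made mechanical. Below is a minimal sketch of the kind of approval gate the column argues for: the agent must itemize every proposed action with its expected and worst-case results, and a vague “Sure” is rejected. All names and phrasing here are illustrative assumptions, not Kiro’s actual interface.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str       # what the agent wants to do
    expected_result: str   # the likely outcome
    worst_case: str        # the worst-case outcome

def request_approval(actions: list[ProposedAction], reply: str) -> bool:
    """Gate an agent's plan behind an explicit, itemized human approval."""
    if not actions:
        return False  # nothing itemized means nothing to approve
    for a in actions:
        if not (a.description and a.expected_result and a.worst_case):
            return False  # an incomplete itemization blocks approval
    # Require an exact approval phrase; casual confirmations don't count.
    return reply.strip().lower() == "approve all listed actions"

# A vague "Sure" to an un-itemized plan is rejected:
vague = request_approval([], "Sure")

# An itemized plan with an explicit approval phrase passes:
plan = [ProposedAction(
    description="delete and recreate the staging environment",
    expected_result="environment rebuilt from current configuration",
    worst_case="multi-hour outage while the environment is recreated",
)]
explicit = request_approval(plan, "Approve all listed actions")
```

The design choice is simply that silence and vagueness default to denial, which is the opposite of how most conversational agents behave today.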
That gets into a key IT issue: Do we need training on how to interact with AI? If an employee starts answering AI tools as if they were human, problems will materialize. AI systems seem smart, but they do not process data as humans do.
Recently, an AWS executive posted about a glitch involving an AI system that was trying to replicate registration forms. It looked at fields such as username and password and saw that the system only permitted one user to have that exact string of characters. The AI extrapolated from that and started rejecting users if they were the same age, with the notice “user with this age already exists.”
It’s like a civil service employee who memorized a rule but never asked the point of the rule. Without knowing that context, that employee can’t make a rational decision about when the rule should be waived.
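The over-generalization described above is easy to reconstruct in code. The sketch below is a hypothetical illustration, not the actual system from the AWS post: a validator that is handed a list of fields to treat as unique, and an AI that wrongly extends that list from `username` to every field on the form, including `age`.

```python
def validate_registration(new_user: dict, existing_users: list[dict],
                          unique_fields: list[str]) -> list[str]:
    """Reject the new user if any listed field collides with an existing user."""
    errors = []
    for field in unique_fields:
        if any(u.get(field) == new_user.get(field) for u in existing_users):
            errors.append(f"user with this {field} already exists")
    return errors

existing = [{"username": "alice", "age": 34}]

# The correct rule: only the username must be unique.
ok = validate_registration({"username": "bob", "age": 34},
                           existing, ["username"])

# The over-generalized rule: treat *every* field as unique,
# so a second 34-year-old is rejected.
broken = validate_registration({"username": "bob", "age": 34},
                               existing, ["username", "age"])
```

The bug isn’t in the validator itself; it’s in which fields the rule was applied to, which is exactly the memorized-rule-without-context failure the civil-service analogy describes.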
Like the car driver who went over the cliff, the smartest decision is to not use any autonomous AI system at all. But given that it seems all-but-unavoidable today, the second-best option is to insist that employees demand to know precisely what they are being asked to approve.
That may not eliminate AI disasters, but it will hopefully slow them down.
How the memory crisis is driving up storage prices for photographers. It will get worse, so buy now
Nvidia plans a Windows PC SoC, setting up direct competition with Qualcomm, Intel, and AMD
Nvidia is developing a system-on-chip (SoC) for Windows PCs, with Dell and Lenovo among the OEMs planning to build notebooks and desktops around the processor later this year.
The chip would be based on the GB10, which Nvidia developed with MediaTek and launched in October 2025, the Wall Street Journal reported, citing people familiar with the matter.
The GB10 currently powers Linux-based AI workstations from Dell, Lenovo, Asus, MSI, and Gigabyte, priced between $3,000 and $4,000 and aimed at machine learning researchers and AI model developers. None of those systems runs Windows.
Nvidia has previously explored Arm-based chip designs for Windows PCs, and the GB10 represents its most concrete step yet toward that goal.
The engineering gap
The GB10 was built for a specific purpose: running large AI models in a developer workstation environment. It pairs a MediaTek-designed CPU tile featuring 20 Arm cores with an Nvidia Blackwell GPU tile, delivering up to one petaFLOP of AI performance at FP4.
At 140 watts under full load, roughly three times the thermal budget of a high-end business laptop, it was never designed for standard notebook form factors.
A PC-grade derivative would need to ship at significantly lower compute configurations, said Rishi Padhi, principal analyst at Gartner.
“I would expect Nvidia’s product line to come with lower compute power to be more efficient on power consumption and also the heat generated, sufficient to be cooled by a laptop-based cooling system,” he said.
Nvidia would also need to leverage a unified memory architecture to bring power consumption within range of competing platforms, “similar to how Apple achieves power efficiency gains through a unified memory architecture,” Padhi added.
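To put the gap in rough numbers, here is a back-of-envelope calculation using only the article’s peak figures (one petaFLOP at FP4, 140 W under full load); the 45 W laptop envelope is an assumption, and real sustained performance would differ from these peak ratios.

```python
# Back-of-envelope efficiency from the article's peak figures.
peak_flops_fp4 = 1e15        # 1 petaFLOP at FP4 (peak, per the article)
power_watts = 140            # full-load power of the GB10 (per the article)

flops_per_watt = peak_flops_fp4 / power_watts  # ~7.1 teraFLOPs per watt

# Assumed high-end laptop SoC thermal envelope (illustrative, not sourced):
laptop_budget_watts = 45

# Holding efficiency constant, a 45 W derivative would peak at roughly
# a third of the workstation chip's FP4 throughput:
laptop_peak_flops = flops_per_watt * laptop_budget_watts
```

The point of the arithmetic is the analysts’ one: a notebook-class derivative has to give up a large share of peak compute, or improve efficiency per watt, to fit a laptop cooling system.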
Should Nvidia bring a power-efficient, Windows-ready derivative to market, analysts said it would pose a direct competitive challenge to the vendors that currently dominate PC silicon.
Threat to incumbent chipmakers
Nvidia’s entry into integrated PC silicon poses a direct competitive challenge to Qualcomm, Intel, and AMD, according to analysts. The sharpest near-term pressure falls on Qualcomm. Windows-on-Arm laptops have historically traded strong battery life for weaker graphics performance, a trade-off Nvidia’s architecture is designed to eliminate.
“By fusing the performance compute of a Blackwell GPU directly onto an Arm die, Nvidia essentially nullifies the traditional trade-off associated with Windows-on-Arm machines,” Padhi said. “This will directly impact the share of Qualcomm’s offerings in AI PCs.”
For Intel and AMD, the threat operates differently, the analyst said. High-end mobile processors from both vendors are routinely paired with discrete Nvidia GPUs in premium laptops. “With Nvidia slated to launch a capable SoC of its own, it can prompt OEMs to consider purchasing a single integrated chip from Nvidia that offers better battery life and equivalent graphical performance to an x86 CPU plus Nvidia GPU combo,” Padhi said. “The economic and engineering incentives heavily favor the unified system-on-chip.”
Shreeya Deshpande, senior analyst at Everest Group, said the move signals a broader strategic shift. “Nvidia’s reported move into Arm-based PC SoCs marks a strategic shift from being primarily a GPU supplier to competing at the core platform level in Windows laptops,” she said. “If successful, it could materially intensify competition in the Windows-on-Arm ecosystem and increase pressure on incumbent PC silicon providers to further differentiate on AI performance and efficiency.”
The stakes are significant. AI PCs are on track to account for more than 50% of all PC shipments by 2026, according to Gartner. Beyond the competitive market dynamics, analysts said Nvidia’s entry into PC silicon carries a distinct implication for enterprises already running Nvidia infrastructure in the data center.
Identity Prioritization isn't a Backlog Problem - It's a Risk Math Problem



