RSS Aggregator
11 things that annoy us about Windows 11 and, unfortunately, probably won’t stop anytime soon
Is this where Apple Silicon will be in 5 years?
Apple Silicon has another big journey to take, one that means Apple will probably be the first to introduce 1.4- and 1-nanometer chips inside its systems. If that happens, Macs, iPhones, and iPads will continue to lead the industry in performance per watt.
Why do I say this? Mainly because reports claim TSMC is working to build sub-1nm chips by 2029, and Apple remains that company’s most important customer, despite competition from AI server manufacturers today.
Demand for AI servers could yet slow, given the looming energy crisis and the trend toward on-prem and edge AI services. I don’t think the current level of investment in AI is sustainable, which is why I think Apple will continue to be TSMC’s lead customer once that bubble, inevitably, bursts.
What’s happening at TSMC?
The latest news is that TSMC intends to begin trial production of its sub-1nm A10 process tech by 2029, setting up Apple to be the first big company to use these new processors inside its hardware when volume production begins.
What’s interesting is that this move to 1nm isn’t just about making transistors smaller, but also about ensuring close integration between chips, memory, and energy systems. A report in 2021 said TSMC was able to reach 1nm by using bismuth instead of silicon in the design.
Apple, of course, already works very, very hard to integrate those different elements on its existing processors, which is why it delivers better performance at lower wattage than competitors. That integration means its systems can accomplish a great deal more with less memory, which helps protect the company’s margins against rapidly rising RAM prices.
We currently expect up to 30% improvement in both performance and power efficiency from these new chip designs. That implies that iPhone Pro models introduced in 2030 (or possibly 2031) will be powered by these new chips.
Apple’s silicon road map seems secure
TSMC is expected to introduce 1.6nm chips in the next 18 months, though Apple might choose to skip that iteration to guarantee a leadership position once TSMC’s 1.4nm process arrives in 2028. That iteration will deliver yet another big speed and performance boost to Apple’s devices, with Apple becoming the first PC, tablet, or smartphone manufacturer to ship 1.4nm systems at scale.
What benefits can we expect? During TSMC’s 2025 North American Symposium the company said 1.4nm chips should be 15% faster and consume around 30% less power than the processors inside Apple’s current devices. That’s all good, but it is also interesting to note that the iPhone 17 series hasn’t even made the leap to 2nm as yet, with Apple using TSMC’s N3P process. So, the company has lots of scope to secure the future of Apple Silicon.
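Taken together, those two figures imply a larger jump in performance per watt than either number alone suggests. A quick back-of-the-envelope check shows why (assuming the 15% speed and 30% power figures apply to the same workload, which TSMC’s symposium remarks don’t guarantee):

```python
# Back-of-the-envelope performance-per-watt estimate from the stated
# 1.4nm figures: ~15% faster at ~30% lower power than current chips.
baseline_perf = 1.00   # normalized performance of today's process
baseline_power = 1.00  # normalized power draw of today's process

new_perf = baseline_perf * 1.15    # +15% speed
new_power = baseline_power * 0.70  # -30% power consumption

perf_per_watt_gain = (new_perf / new_power) / (baseline_perf / baseline_power)
print(f"Performance per watt: {perf_per_watt_gain:.2f}x the baseline")
# → Performance per watt: 1.64x the baseline
```

In other words, modest-sounding speed and power numbers compound into roughly a 64% efficiency improvement, which is the metric Apple’s chip designs have consistently led on.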
Where next for Apple’s chips?
If it is correct that Apple will skip TSMC’s 1.6nm process and then climb aboard the 1.4nm and 1nm chips, we could see two big processor development chapters between now and 2030. This year we should see it introduce 2nm chips, with 1.4nm probably following in 2028 and the bigger leap to sub-1nm processors arriving in 2030-31.
As these chips will be deployed across Apple’s hardware platforms, including within new designs we don’t know about yet, you can anticipate highly significant performance gains wherever in the ecosystem you happen to sit. Whether you’re looking at the next-generation MacBook Neo, MacBook Pro, iPhone, or iPhone e, you’ll see impressive performance gains unlocked across the lineup through the second half of this decade.
Those performance gains, combined with improved energy consumption, allow Apple’s hardware designers to work toward thinner, lighter, and smaller devices in a range of design configurations, some of which could not have existed before. (Think about spectacles with the kind of performance you once got from a Mac.) The way ahead is clear. Apple has a wide-open road for chip design, and while tensions between the US and China could derail some of these plans, TSMC’s continued investment in US fabrication capacity might help mitigate even that potential calamity.
ChatGPT and Codex are down; the problem isn’t just on your end
T-Mobile is squeezing prepaid plans. For top-ups under 500 CZK, credit now expires after just six months
Microsoft: Teams increasingly abused in helpdesk impersonation attacks
They built a bloated browser; now they want money for a clean version. Brave Origin costs 1,300 CZK
The backup myth that is putting businesses at risk
A year in humanoid robot development. Last year it ran a half-marathon in almost three hours; now it manages it in 50 minutes
⚡ Weekly Recap: Vercel Hack, Push Fraud, QEMU Abused, New Android RATs Emerge & More
The Czech state will apparently pay Mironet record compensation for an unlawful police raid
British Scattered Spider hacker pleads guilty to crypto theft charges
LXQt 2.4.0
Microsoft releases Windows Server update fix to fix its April update fixes
Microsoft has pushed out an out-of-band update to address the restart loop that hit some Windows Server devices after its April update.…
Super-launcher Raycast for Windows gains features. It shows file previews and pins the window where you need it
Auditd vs eBPF: Modern Approaches to Linux System Monitoring
Microsoft tests Windows Explorer speed, performance improvements
Apple and Google failed to police apps that use AI to undress people. They even recommended them to minors
Why Most AI Deployments Stall After the Demo
AI-ready skills are not what you think
Enterprises have spent the past two years rushing to make their workforces “AI-ready.” But many early training programs — focused on prompt writing and chatbot skills — are proving poorly suited to the realities of AI-powered work.
The reason is simple: the skills that matter most once AI enters real workflows have less to do with interacting with tools and more to do with judgment. The durable capabilities emerging in the AI era include output validation, data literacy, process understanding, and the ability to challenge automated recommendations. Tool-specific skills, by contrast, tend to age quickly as models and interfaces evolve.
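Output validation, for example, is less about any one tool than about systematically checking AI output before it enters a workflow. A minimal sketch of the idea follows; the field names, the confidence range, and the routing message are illustrative assumptions, not any vendor’s API:

```python
# Minimal sketch of "output validation": check an AI-generated record
# before it flows into a downstream business process.
# All field names and rules here are illustrative assumptions.
def validate_ai_summary(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    # Required fields must be present before anything else is checked.
    for field in ("summary", "confidence", "source_ids"):
        if field not in record:
            problems.append(f"missing field: {field}")
    # A model's self-reported confidence should be a sane probability.
    conf = record.get("confidence")
    if isinstance(conf, (int, float)) and not 0.0 <= conf <= 1.0:
        problems.append("confidence out of range")
    # Unsourced output should be reviewed by a human, not acted on.
    if not record.get("source_ids"):
        problems.append("no sources cited; route to human review")
    return problems

issues = validate_ai_summary({"summary": "Q3 churn rose 2%", "confidence": 1.3})
print(issues)  # flags the missing sources and the out-of-range confidence
```

The point is the habit, not the code: employees who can articulate checks like these can supervise a model even after the model itself changes.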
“AI-ready is not defined by how many people took training or how many licenses you bought,” said Neal Sample, executive vice president and chief digital and technology officer at electronics retailer Best Buy. “It’s defined by whether you have redesigned real workflows, assigned accountability, and can show the technology is improving outcomes without introducing unmanaged risk.”
That shift — from tool proficiency to operational judgment — is forcing enterprises to rethink how they train employees for AI.
The illusion of AI readiness
The first wave of corporate AI training focused heavily on prompt engineering and basic familiarity with generative AI tools. That approach made sense early on, when employees needed help understanding the technology. But many organizations are discovering those skills have a short half-life.
“Prompt engineering aged the fastest,” said Rebecca Schalber, senior manager for generative AI at cosmetics company cosnova Beauty. As new models and interfaces appear, the effort invested in crafting perfect prompts quickly becomes obsolete.
When cosnova rolled out generative AI across its workforce, Schalber expected training to center on individual capability — understanding large language models, learning prompting techniques, and experimenting with tools. Early adoption looked promising. Within six months, a survey showed employees reporting productivity gains of nearly 10%.
Yet adoption alone was not enough. “You need broad adoption to move the needle,” Schalber said. “But what really matters is the workflow design.”
Instead of focusing on prompts, cosnova began examining how work actually happens inside teams — what tasks employees perform, where friction exists, and which parts of a workflow could be safely automated or augmented by AI. That shift forced employees to confront a different question: not how to use AI, but how to verify its output and integrate it into real business processes.
When AI hits real workflows
The distinction becomes clear once AI leaves experimental environments and enters operational workflows. In testing, outputs can be compared against known answers. In real business processes, however, the answer often isn’t known in advance. AI systems are deployed precisely because they help employees analyze complex situations, interpret data, or generate insights.
That’s where human oversight becomes critical. “Human oversight is not second-guessing every output from the AI,” said Sample from Best Buy. “It means being explicit about where judgment, escalation, and accountability must remain human.”
The closer a decision comes to customer trust, regulatory obligations, or significant financial risk, the more important that judgment becomes. Organizations deploying AI at scale must build guardrails into workflows and clearly define who is responsible for final decisions.
“For every AI-enabled workflow, you need to know who owns the decision, who handles exceptions, and where a human must intervene before the business takes action,” Sample said.
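Guardrails like these can be expressed very concretely as routing rules that decide when an AI-enabled workflow may proceed on its own and when a named human owner must step in. The sketch below is a hypothetical illustration; the thresholds, risk signals, and owner roles are assumptions, not anything Best Buy described:

```python
# Illustrative escalation guardrail for an AI-enabled workflow:
# the system may act autonomously only on low-risk items; everything
# else routes to a named human decision owner.
# Thresholds and role names are assumptions for demonstration.
def route_decision(amount_usd: float, touches_customer_data: bool) -> str:
    if touches_customer_data:
        # Anything near customer trust or regulation stays human-owned.
        return "escalate:privacy-owner"
    if amount_usd > 10_000:
        # Significant financial exposure requires a human sign-off.
        return "escalate:finance-owner"
    return "auto-approve"

print(route_decision(250.0, False))     # → auto-approve
print(route_decision(50_000.0, False))  # → escalate:finance-owner
```

Encoding the escalation path explicitly answers Sample’s three questions in one place: who owns the decision, who handles exceptions, and where a human must intervene.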
In other words, the challenge of AI readiness is not teaching employees to interact with a model — it’s teaching them how to supervise it.
From training programs to workflow design
At cosnova, Schalber’s team moved away from generic training sessions toward hands-on workshops where managers and employees map their daily workflows. During these sessions, teams identify tasks that could benefit from AI support and then redesign processes around those opportunities.
When AI was introduced as simply another tool, enthusiasm was limited. But when employees saw how the technology could remove tedious tasks or reduce friction in their work, adoption accelerated.
“It was no longer just another tool that management wanted people to use,” Schalber said. Instead, teams were solving their own problems — removing repetitive tasks or speeding up processes they disliked.
The company also began emphasizing transferable skills that apply across AI tools and models, including critical thinking, workflow design, and data literacy. These capabilities remain valuable even as the technology evolves and have proven far more durable than prompt-writing techniques.
Experimentation before formal training
Some organizations are taking a different approach: encouraging experimentation first and formal training later. At AI infrastructure company Turing, Taylor Bradley, vice president of talent strategy, deliberately began the company’s AI upskilling effort by encouraging non-technical employees to experiment with generative AI tools.
The goal was to spark curiosity rather than enforce compliance. Bradley compares the process to teaching his daughter to ride a bicycle. “The best way for her to learn was to actually have her ride the bike,” he said.
At Turing, employees experimented with AI through informal activities such as turning photos of pets into “royal portraits” or creating short AI-generated films for internal competitions. The exercises were designed to lower the barrier to experimentation. Once employees became comfortable with the technology, the company introduced practical workshops focused on real work tasks.
Bradley now sits down with teams to examine daily workflows and identify where generative AI could help. Employees often discover that AI can serve as a sounding board for ideas, a drafting assistant, or a way to accelerate communication.
Within weeks, those experiments often evolve into more formal systems. One early project began as a conversational tool helping HR specialists draft responses to employee support tickets before expanding into a broader internal knowledge system.
The key metric, Bradley said, is not course completion but whether teams develop useful AI applications. “We focus on quality use cases with measurable outcomes,” he said.
Learning inside the flow of work
For large enterprises, the challenge of AI skill development is even more complex. Traditional training models — where employees attend courses and then return to their jobs — are poorly suited to technology evolving as quickly as generative AI.
According to Margaret Burke, talent acquisition and development leader at professional services firm PwC, traditional training programs are inherently episodic. “Employees attend a course, return to work, and may or may not apply what they learned,” she said. “In an AI-accelerating environment, that model breaks down.”
PwC is embedding AI learning directly into everyday work. The firm still runs formal programs but is expanding apprenticeship-style learning and weaving AI capability development into routine business activities.
One example is the company’s “skills days,” where employees explore AI applications relevant to their work. During a recent session with advisory associates, participants documented how they were already using AI — or where they planned to apply it. Hundreds of ideas emerged. PwC then used AI to analyze the inputs, clustering them into categories and redistributing the results across the organization so teams could learn from one another.
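That clustering step can be done with anything from a spreadsheet to an embedding model. As a toy illustration of the idea only (the example ideas and the grouping rule are made up, and a real pipeline would cluster on text embeddings rather than leading verbs):

```python
# Toy illustration of clustering free-text AI use-case ideas, loosely
# mirroring the "skills day" exercise described above. Grouping by the
# leading verb stands in for a real embedding + clustering pipeline;
# the idea texts themselves are invented for this sketch.
ideas = [
    "summarize client meeting notes",
    "draft first version of an audit memo",
    "summarize long regulatory documents",
    "generate synthetic test data",
    "draft email responses to clients",
]

clusters: dict[str, list[str]] = {}
for idea in ideas:
    verb = idea.split()[0]  # crude cluster key: the action being automated
    clusters.setdefault(verb, []).append(idea)

for verb, members in sorted(clusters.items()):
    print(f"{verb}: {len(members)} idea(s)")
# → draft: 2 idea(s)
#   generate: 1 idea(s)
#   summarize: 2 idea(s)
```

Even this crude grouping shows why the exercise works: once hundreds of ideas collapse into a handful of recurring task types, teams can see which workflows to redesign first.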
Crucially, PwC pairs technical AI capabilities with what Burke calls “human edge” skills, including critical thinking, independent judgment, and storytelling. “We never teach an AI technical skill without teaching the human skill that goes with it,” Burke said.
As AI systems generate more content and analysis, those human capabilities become essential for interpreting results, spotting errors, and explaining insights to colleagues and clients.
Measuring real AI readiness
As organizations rethink AI capability, the metrics used to evaluate training programs are changing. Traditional learning programs often rely on course completion rates or certifications. But those metrics reveal little about whether employees can use AI responsibly inside real workflows.
Instead, organizations are looking for operational signals. Some track how frequently employees develop new AI use cases that improve productivity or decision-making. Others measure how quickly teams adapt when AI tools or models change.
For Bradley at Turing, the key indicator is whether employees continually find new ways to improve their work with AI. “If my team members come to me every week with ideas for improving or expanding AI use cases, that’s the signal that capability is growing,” he said.
From the CIO perspective, however, the ultimate measure is operational outcomes. AI readiness only becomes meaningful when organizations integrate AI into real workflows while maintaining accountability for the results.
“The most durable capabilities are not the current best prompt tricks,” said Best Buy’s Sample. “They are judgment, problem framing, systems thinking, and the ability to translate machine output into business action.”
But for CIOs deploying AI across the enterprise, workforce capability is only part of the equation. Organizations must also rethink how leadership defines accountability when AI systems influence decisions.
“An AI-ready workforce without an AI-ready leadership model is likely to stall,” Sample said. “AI can accelerate analysis and recommendations, but accountability doesn’t transfer to the model. Leaders still have to define guardrails, decision rights, and what success looks like.”
As enterprises move beyond early AI experimentation, that leadership clarity may prove just as important as any skill employees learn.
Kingdom Come: Deliverance 2 won a BAFTA award in the best narrative category