Cheap enterprise PCs? Not anytime soon — analysts
Historic price hikes for PCs are likely to linger for a long time, prompting many enterprises to put hardware upgrades on hold, analysts said.
PC prices — for both enterprise and consumer buyers — are expected to jump by about 17% this year, Gartner analyst Ranjit Atwal told Computerworld. And the era of the $500 consumer PC will disappear entirely in the coming years, Gartner said in a research study last week.
The substantial cost increases are due to price rises for memory chips, which are critical for AI applications. Memory manufacturing capacity is now being diverted to higher-margin chips used in AI servers, creating a shortage of lower-cost smartphone and PC chips.
The higher memory and component prices are expected to slow upgrades as PCs get more expensive and businesses hold on to older hardware longer, Atwal said.
Companies looking to upgrade to AI PCs could wait until prices come down, Atwal said. But with prices likely to stay high, enterprises might have to buy computers with less robust configurations.
“AI PCs were expected to come down in price, but now will increase in price in 2026, slowing adoption,” Atwal said.
Many new processors for AI PCs are already on the horizon. Intel in January launched its Panther Lake PC chips, and AMD unveiled new AI PC chips under the Ryzen banner. HP and Dell in March plan to share more details about laptops with the new chips.
But even PC makers are struggling to keep laptop prices in check. Memory prices “are roughly doubling versus the prior quarter,” Karen Parkhill, chief financial officer at HP, said during an earnings call last week.
Memory now accounts for 35% of the cost of making PCs, double the historic norm, Parkhill said. “Last quarter, memory and storage costs made up roughly 15% to 18% of our PC bill of materials,” she said.
This year, analysts had predicted AI applications on PCs would grow as security-aware companies considered cutting cloud costs. But AI tools don’t work well on PCs with poor memory capacity. As a result, AI PCs will now fall from a necessity into a “nice to have” category, said Jack Gold, principal analyst at J. Gold Associates.
“For modern machines, reducing the amount of memory installed isn’t really an option, as performance will take a big hit — especially for AI,” Gold said.
Enterprises will extend upgrade cycles, buying fewer machines and keeping the ones they already have longer, in the hope prices will normalize. But this upgrade cycle is different, and PC prices won’t come down.
“The memory issue is going to be a long-term problem,” Gold said.
Enterprises usually negotiate with vendors on their large buys, but vendors can only absorb a limited amount of cost in terms of margins. “Yes, it will slow upgrades as PCs get more expensive and businesses hold on to PCs for longer,” Atwal said.
With price increases here to stay, the question now is how long it will take for enterprises to raise PC budgets.
Anthropic to Department of Defense: Drop dead
In recent weeks, AI giant Anthropic has been locked in a high‑stakes confrontation with the Trump administration’s Department of Defense (DoD) over new standard terms the Pentagon wants to impose on AI vendors. Defense Secretary Pete Hegseth had demanded contract language that would give the military “any lawful use” of Anthropic’s models, effectively stripping out the company’s long‑standing limits on certain battlefield and domestic applications.
Lawful, in Hegseth’s mind, means the DoD could do practically whatever it wanted, up to and including domestic mass surveillance and AI-controlled weapons.
If that sounds like the premise for how a war between Terminators and humans might begin, you’re not the only one to think so. Caution, however, is not a word Hegseth seems to know. Anthropic CEO Dario Amodei, by contrast, is well aware of the real-world risks of AI — and not just the ones torn from science-fiction horror movies.
Be that as it may, Hegseth summoned Amodei and demanded that Anthropic’s AI be usable any way he wants, or he’d cancel the company’s existing $200 million contract and blacklist it from any further AI pacts. Hegseth gave Anthropic until 5 p.m. yesterday to bend the knee.
Amodei didn’t bend.
He publicly stated the company would rather walk away from work with the DoD than drop contractual safeguards meant to keep its AI from being used for mass surveillance of Americans or for fully autonomous weapons.
It’s not that he objects to using AI to defend the US. Amodei favors that. But, “using these systems for mass domestic surveillance is incompatible with democratic values,” he said. “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties.”
In addition, “frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk. We have offered to work directly with the Department of [Defense] on R&D to improve the reliability of these systems, but they have not accepted this offer.”
Oh, and by the way, Amodei said those use cases “have never been included in our contracts with the Department of [Defense], and we believe they should not be included now.”
The Pentagon kept the pressure on, describing the strong-arm tactics as “my way or the highway,” and told Anthropic to pitch its “final offer” yesterday. Still, Anthropic rejected the DoD proposal, saying it “cannot, in good conscience,” agree to these overbroad terms.
It’s not, by the way, that Anthropic is some woke, liberal company, as it’s now being painted in some pro-Trump circles. Far from it! As the National Review pointed out, “Amodei is just about the opposite of a dove when it comes to military applications of AI.” For example, Anthropic’s Claude was used by the Trump administration to capture former Venezuelan President Nicolás Maduro in January.
Anthropic’s stance against using AI for domestic surveillance and self-guided weapon systems is less about political ideology and more about a rational realization of the dangers of trusting early-stage, unfettered AI.
Civil liberties groups, including the Electronic Frontier Foundation (EFF), have urged Anthropic to hold the line. They’re casting the Pentagon’s push as an attempt to bully tech firms into building tools for bulk spying and automated warfare. Within Anthropic, employees have posted public messages backing leadership’s stance. They describe the showdown as a visible test of the company’s founding commitment to steer frontier AI away from the most destabilizing military uses.
These workers are not alone in supporting Anthropic’s stance. Alphabet, Amazon, and Microsoft employees announced they were behind Anthropic. Simultaneously, hundreds of Google and OpenAI employees signed an open letter calling on their companies to maintain Anthropic’s red lines against mass surveillance and fully automated weaponry. They said they “hope our leaders will stand together” to reject the current Pentagon terms.
Donald Trump, on the other hand, late yesterday threw a fit. “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution. Their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY. Therefore, I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology.”
Government agencies now have six months to transition to alternative tools.
Some on the political right also backed Anthropic’s position. Retired General Jack Shanahan, for instance, who was at the center of an earlier military-vs.-AI conflict between Project Maven and Google, did not take Trump’s side. He wrote: “Despite the hype, frontier models are not ready for prime time in national security settings. Over-reliance on them at this stage is a recipe for catastrophe. Mass surveillance of US citizens? No thanks.”
None of this stopped other AI companies from flirting with the Defense Department. In an internal memo, OpenAI CEO Sam Altman wrote: “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines.”
He went on to say OpenAI was still open to making “a deal with the DoW that allows our models to be deployed in classified environments.” That sounded like classic waffling to me, and sure enough, last night OpenAI agreed to work with the Defense Department.
Let’s face it: OpenAI has a bottomless need for revenue to cover its endless capital expenses, so the execs were willing to make a deal with the devil. (Yeah, yeah, I know Altman talked about guardrails and protections. One word for you: hallucinations.)
Sadly, if OpenAI hadn’t made that deal, someone else surely would have. So, if in 2028, AI-driven autonomous drones drop bombs on suspected illegal foreigners’ homes in Minneapolis or anywhere else in the world, we’ll know who to blame — much good that will do us then.
This insane adoption of out-of-control AI for military purposes must be stopped now lest the Terminator wars become fact rather than science fiction.
Trump administration bans Anthropic, seemingly embraces OpenAI
The Trump administration on Friday moved to ban the use of products from artificial intelligence company Anthropic by federal agencies, escalating a high-stakes clash over whether private AI makers can limit how the US military uses their systems. Just hours later, Anthropic rival OpenAI’s CEO, Sam Altman, announced that his company had reached a deal to supply the Pentagon with its technology, ostensibly under the same terms that the military had rejected for Anthropic.
Calling Anthropic “Leftwing nut jobs,” President Donald Trump said in a Truth Social post that he was directing “EVERY Federal Agency” to stop using Anthropic’s technology immediately. At the same time, the Pentagon prepared to designate the company a “supply chain risk,” a label more commonly associated with foreign adversaries’ tech products, such as telecom gear made by China’s Huawei.
The decision follows an unusually public dispute between Anthropic and Defense Secretary Pete Hegseth over what the Pentagon called an “all lawful purposes” requirement, which means that once the military licenses an AI model, it must be free to deploy it for any lawful mission without being constrained by vendor-imposed safety policies.
On X, Defense Secretary Pete Hegseth echoed Trump’s criticism, saying “Cloaked in the sanctimonious rhetoric of ‘effective altruism,’ [Anthropic and CEO Dario Amodei] have attempted to strong-arm the United States military into submission – a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives.” He added, “Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable.”
In a late-night statement, Anthropic responded to the Pentagon, saying, “We have not yet received direct communication from the Department of War or the White House on the status of our negotiations.” It also said it believes the designation of supply chain risk “would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
A six-month clock and a scramble to replace Claude
Under the plan, according to Axios, the Defense Department would sever a contract, worth up to $200 million, with Anthropic, and require defense contractors and other vendors to certify they are not using Anthropic’s Claude model in work tied to the Pentagon. The administration is allowing a six-month window to give agencies and contractors time to transition to alternatives.
That transition could be particularly disruptive, because Claude has been used in the military’s classified systems, systems that support some of the Pentagon’s most sensitive intelligence work, weapons development, and operational planning.
Defense officials have described Claude as highly capable, and acknowledged that disentangling it from existing workflows would be difficult.
What the administration says it is fighting over
Anthropic argues that certain uses, especially mass domestic surveillance and fully autonomous weapons, should remain out of bounds.
CEO Dario Amodei said in an impassioned essay that the company cannot remove those guardrails “in good conscience,” warning that current AI systems are not reliable enough for fully autonomous lethal decision-making, and that large-scale surveillance carries significant risks of abuse.
The Pentagon argues that the military already operates under its own rules and oversight, and cannot have mission decisions constrained by a vendor’s terms of service, particularly in gray areas where definitions of “surveillance” and “autonomy” can be contested.
What it could mean for US national security
In the near term, the administration’s move forces the Pentagon to manage a delicate transition: removing Anthropic’s model from classified environments while maintaining continuity for intelligence analysis and planning tasks that had begun to incorporate generative AI.
The longer-term implications are broader. The ban signals that access to the federal market, particularly defense, may depend on accepting “all lawful use” terms, potentially reducing the leverage of AI companies that try to impose hard red lines on certain national security applications.
It also raises practical questions for AI companies as government vendors. If the government pushes one leading AI provider out of sensitive systems, agencies and contractors may consolidate around a smaller number of alternatives, increasing dependence on whichever firms remain willing and able to operate in classified environments.
These dislocations in critical military infrastructure could further pose a national security threat, some argue. US Sen. Mark R. Warner (D-VA), vice chairman of the Senate Intelligence Committee, said the efforts by Trump and Secretary Hegseth pose a national security risk. “The president’s directive to halt the use of a leading American AI company across the federal government, combined with inflammatory rhetoric attacking that company, raises serious concerns about whether national security decisions are being driven by careful analysis or political considerations.”
Competitors could move in: Grok, OpenAI, and Google
The decision could reshape the competitive landscape.
Elon Musk’s xAI has already signed an agreement to bring its Grok model into classified military systems, in a development that positioned xAI as a potential replacement if Anthropic’s relationship with the Pentagon collapsed.
However, significant concerns about Grok’s safety and reliability have surfaced within parts of the federal government, even as the Pentagon approved it for classified settings, an early indication that “replacement” won’t be a simple matter of switching one model for another.
Meanwhile, the Pentagon has been in discussions with OpenAI and Google about expanding their models’ availability from unclassified systems into more sensitive environments, Axios reported. The discussions with OpenAI apparently bore fruit, given that less than seven hours after Trump’s Truth Social post, OpenAI’s Altman posted on Twitter, “we reached an agreement with the Department of War to deploy our models in their classified network.”
In an apparent about-face, however, the Pentagon appeared to accept from OpenAI the same terms it rejected for Anthropic. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
OpenAI CEO Sam Altman has also sought to position his company as aligned with Anthropic’s core ethical objections, while still pursuing Pentagon business. Altman had said OpenAI shares “red lines” against mass surveillance of Americans and weapons that can fire without human oversight, even as it explores a path to work with the Defense Department.
Political and industry backlash begins to surface
Even among competitors, the Anthropic fight produced unusual sympathy.
Hundreds of employees at Google and OpenAI backed Anthropic in a petition, underscoring internal tensions across the AI industry over military applications. One factor that could derail the ban on Anthropic is unified AI sector rejection.
Peter Madsen, former professor of ethics and social responsibility at Carnegie Mellon University and executive director of the Center for the Advancement of Applied Ethics and Political Philosophy, said in an interview, “Every other AI company should commit to the same ideals as Anthropic so that Trump will have to use an ethical AI firm, not one that will cower to his whims.”
Anthropic has said it would cooperate with a transition to avoid disruption to ongoing missions, though it has not said whether it will challenge the “supply chain risk” designation in court.
What happens next
The administration’s decision sets up several immediate test cases.
First, agencies and contractors must determine how deeply Anthropic’s tools are embedded in their operations and how quickly they can migrate without degrading performance or security.
Second, rivals will face their own balancing act: how to satisfy Pentagon demands for “all lawful use,” or in the case of OpenAI, walk the fine line of its safety principles, while managing internal and external scrutiny over surveillance, autonomy, and the risk of AI systems behaving unpredictably in high-stakes settings.
Finally, the ban raises a fundamental policy question that goes beyond Anthropic: in the race to deploy frontier AI for national security, who sets the boundaries: the government that needs operational flexibility, or the private companies that build and control access to the technology?
This article has been updated from its original version to reflect the late night announcement by OpenAI that it had reached a deal with the Pentagon.
This article originally appeared on CIO.com.
OpenAI launches stateful AI on AWS, signaling a control plane power shift
Stateless AI, in which a model offers one-off answers without context from previous sessions, can be helpful in the short term but falls short in more complex, multi-step scenarios. To overcome these limitations, OpenAI is introducing what it is calling, naturally, “stateful AI.”
The company has announced that it will soon offer a stateful runtime environment in partnership with Amazon, built to simplify the process of getting AI agents into production. It will run natively on Amazon Bedrock, be tailored for agentic workflows, and optimized for AWS infrastructure.
Interestingly, OpenAI also felt the need to make another announcement today, underscoring the fact that nothing about other collaborations “in any way” changes the terms of its partnership with Microsoft. Azure will remain the exclusive cloud provider of stateless OpenAI APIs.
“It’s a clever structural move,” said Wyatt Mayham of Northwest AI Consulting. “Everyone can claim a win, but the subtext is clear: OpenAI is becoming a multi-cloud company, and the era of exclusive AI partnerships is ending.”
What differentiates ‘stateful’
The stateful runtime environment on Amazon Bedrock was built to execute complex steps that factor in context, OpenAI said. Models can carry forward memory and history, tool and workflow state, environment use, and identity and permission boundaries.
This represents a new paradigm, according to analysts.
Notably, stateless API calls are a “blank slate,” Mayham explained. “The model doesn’t remember what it just did, what tools it called, or where it is in a multi-step workflow.”
While that’s fine for a chatbot answering one-off questions, it’s “completely inadequate” for real operational work, such as processing a customer claim that moves across five different systems, requires approvals, and takes hours or days to complete, he said.
New stateful capabilities give AI agents a persistent working memory so they can carry context across steps, maintain permissions, and interact with real enterprise tools without developers having to “duct-tape stateless API calls together,” said Mayham.
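To make the contrast concrete, here is a minimal sketch in plain Python (not any real OpenAI or Bedrock SDK) of what duct-taping stateless calls together looks like versus a runtime-managed session. Every name in it, such as call_model and Session, is a hypothetical illustration.

```python
from dataclasses import dataclass, field

def call_model(prompt: str) -> str:
    """Stand-in for a stateless model API call; a real call would hit an endpoint."""
    return f"response to: {prompt!r}"

# Stateless pattern: every call starts from a blank slate, so the developer
# must re-send all prior context by hand on each request.
def stateless_workflow(steps: list[str]) -> list[str]:
    transcript: list[str] = []
    outputs = []
    for step in steps:
        prompt = "\n".join(transcript + [step])  # entire history stuffed into each prompt
        out = call_model(prompt)
        transcript += [step, out]
        outputs.append(out)
    return outputs

# Stateful pattern: the runtime itself persists memory, tool state, and
# identity across steps, so each call only carries the new instruction.
@dataclass
class Session:
    memory: list[str] = field(default_factory=list)  # carried across steps
    tool_state: dict = field(default_factory=dict)   # e.g., where a workflow left off
    identity: str = "service-account:claims-agent"   # permission boundary travels with the session

    def step(self, instruction: str) -> str:
        out = call_model("\n".join(self.memory + [instruction]))
        self.memory += [instruction, out]
        return out

if __name__ == "__main__":
    session = Session()
    session.step("Open claim #123 and summarize it")
    print(session.step("Now draft the approval email"))  # context survives without re-sending it
```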
Further, the Bedrock foundation matters because it’s where many enterprise workloads already live, he noted. OpenAI and Amazon are meeting companies where they are, not asking them to rearchitect their security, governance, and compliance posture.
This makes sophisticated AI automation accessible to mid-market companies; they will no longer need a team of engineers to “build the plumbing from scratch,” he said.
Sanchit Vir Gogia, chief analyst at Greyhound Research, called stateful runtime environments “a control plane shift.” Stateless can be “elegant” for single interactions such as summarization, code assistance, drafting, or isolated tool invocation. But stateful environments give enterprises a “managed orchestration substrate,” he noted.
This supports real enterprise workflows involving chained tool calls, long running processes, human approvals, system identity propagation, retries, exception handling, and audit trails, said Gogia, while Bedrock enforces existing identity and access management (IAM) policies, virtual private cloud (VPC) boundaries, security tooling, logging standards, and compliance frameworks.
“Most pilot failures happen because context resets across calls, permissions are misaligned, tokens expire mid workflow, or an agent cannot resume safely after interruption,” he said. These issues can be avoided in stateful environments.
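As an illustration of one of those failure modes, resuming safely after an interruption, here is a minimal sketch of step-level checkpointing. It assumes a homegrown agent loop and a local JSON file; a managed stateful runtime would persist this kind of state for you, and all names here are illustrative.

```python
import json
from pathlib import Path

CHECKPOINT = Path("claim_workflow.checkpoint.json")
STEPS = ["fetch_claim", "validate_policy", "request_approval", "send_decision"]

def run_step(name: str) -> None:
    print(f"running {name} ...")  # a real step would call tools or models here

def run_workflow() -> None:
    # Resume from the last completed step instead of restarting from scratch.
    done = json.loads(CHECKPOINT.read_text())["done"] if CHECKPOINT.exists() else []
    for step in STEPS:
        if step in done:
            continue  # already completed before the interruption
        run_step(step)
        done.append(step)
        CHECKPOINT.write_text(json.dumps({"done": done}))  # durable after every step

if __name__ == "__main__":
    run_workflow()  # rerunning after a crash skips the steps already done
```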
Factors IT decision-makers should consider
However, there are second-order considerations for enterprises, Gogia emphasized. Notably, state persistence increases the attack surface. This means persistent memory must be encrypted, governed, and auditable, and tool invocation boundaries should be “tightly controlled.” Further, workflow replay mechanisms must be deterministic, and observability granular enough to satisfy regulators.
There is also a “subtle lock-in dimension,” said Gogia. Portability can decrease when orchestration moves inside a hyperscaler-native runtime. CIOs need to consider whether their future agent architecture remains cloud-portable or becomes anchored in AWS’ environment.
Ultimately, this new offering represents a market pivot, he said: The intelligence layer is being commoditized.
“We are moving from a model race to a control plane race,” said Gogia. The strategic question now isn’t about which model is smartest. It is: “Which runtime stack guarantees continuity, auditability, and operational resilience at scale?”
Partnership with Microsoft still ‘strong and central’
Today’s joint announcement from Microsoft and OpenAI about their partnership echoes OpenAI’s similar reaffirmation of the collaboration in October 2025. The partnership remains “strong and central,” and the two companies went so far as to call it “one of the most consequential collaborations in technology,” focused on research, engineering, and product development.
The companies emphasized that:
- Microsoft maintains an exclusive license and access to intellectual property (IP) across OpenAI models and products.
- OpenAI’s Frontier and other first-party products will continue to be hosted on Azure.
- The contractual definition of artificial general intelligence (AGI) and the “process for determining if it has been achieved” is unchanged.
- An ongoing revenue share arrangement will stay the same; this agreement has always included revenue-sharing from partnerships between OpenAI and other cloud providers.
- OpenAI has the flexibility to commit to compute elsewhere, including through infrastructure initiatives like the Stargate project.
- Both companies can independently pursue new opportunities.
“That joint statement reads like it was drafted by three law firms simultaneously, and that’s the point,” said Mayham.
The anchor of the agreement is that Azure remains the exclusive cloud provider of stateless OpenAI APIs. This allows OpenAI to establish a new category on AWS that falls outside of Microsoft’s reach, he said.
OpenAI is ultimately “walking a tightrope,” he noted: it needs to expand distribution beyond Azure to reach AWS customers, who make up a massive portion of the enterprise market. At the same time, it has to ensure Microsoft doesn’t feel its $135 billion investment “just got diluted in strategic value.”
Gogia called the statement “structural reassurance.” OpenAI must grow distribution across clouds because enterprise buyers are demanding multi-cloud flexibility. “They don’t want to be confined to a single cloud; they want architectural optionality.”
Also, he noted, “CIOs and boards do not want vendor instability. Hyperscaler conflict risk is now a board level concern.”
New infusion of funding (again)
Meanwhile, $110 billion in new funding from Nvidia, SoftBank, and Amazon will allow OpenAI to expand its global reach and “deepen” its infrastructure, the company says. Importantly, the funding includes the use of 3 GW of dedicated inference capacity and 2 GW of training on Nvidia’s Vera Rubin systems. This builds on the Hopper and Blackwell systems already in operation across Microsoft, Oracle Cloud Infrastructure (OCI), and CoreWeave.
Mayham called this “the headline within the headline.”
“Cash doesn’t build AI products; compute does,” he said. Right now, access to next-generation Nvidia hardware is the “true bottleneck for every AI company on the planet.”
OpenAI is essentially locking in a “guaranteed supply line” for the chips that power everything it does. The money from all three companies funds operations and infrastructure, but the Nvidia capacity and training allow OpenAI to use infrastructure at the frontier, said Mayham. “If you can’t get the processors, the cash is just sitting in a bank account.”
Inference is now one of the biggest cost drivers in AI, and Gogia noted that frontier AI systems are constrained by physical infrastructure: GPUs, high-bandwidth memory (HBM), high-speed interconnects, and other hardware, as well as grid-level power capacity, are all finite resources.
The current moves embed OpenAI deeper into the infrastructure stack, but the risk is concentration. When compute control centralizes among a small cluster of hyperscalers and chip vendors, the system can become fragile. To protect themselves, Gogia advised enterprises to monitor supply chain concentration.
“In strategic terms, however, this move strengthens OpenAI’s durability,” he said. “It secures the physical substrate required to sustain frontier model scaling and enterprise inference growth.”
This article originally appeared on InfoWorld.
Memory shortage batters PC market; double-digit sales drop coming, say analysts
Global PC and smartphone sales are expected to fall by more than 10% this year, according to analysts, as hyperscaler investment in AI data centers fuels a memory shortage.
PC shipments will fall 10.4% during 2026 compared to 2025, as the constrained memory supply leads to higher prices, according to Gartner. IDC predicts a slightly larger 11.3% decline over the same period.
The smartphone market will also see significant year-on-year shipment declines in 2026 — down 8.4% according to Gartner, or 12.9%, according to IDC.
“The current situation is now more negative than even our most pessimistic scenarios suggested just a few months ago,” IDC said in a blog post Thursday. The analyst firm in December had forecast a worst-case 8.9% drop in PC shipments.
“The speed at which the memory pricing has increased has shocked everybody,” said Ranjit Atwal, research director at Gartner, pointing to an expected 130% year-on-year price rise in 2026. “This is a demand-side issue. The demand that’s available is all going to hyperscalers; the PC guys and the smartphone guys are getting squeezed.”
For PC vendors, increased memory costs will account for 23% of the total bill-of-materials cost this year, according to Gartner, up from 16% in 2025. This will feed through to PC prices, which are expected to rise by 17% in 2026, the researcher predicted.
Large PC makers are more equipped to weather the storm, analysts said, but will still be affected. HP said in its first quarter earnings call that memory now accounts for 35% of the costs to build a PC, up from between 15% and 18% the previous quarter.
The situation is more dire for smaller vendors and those already operating on wafer-thin margins. “Consolidation isn’t off the map here,” said Atwal. “It’s survival of the fittest as much as anything.”
For enterprise buyers, higher prices are likely to lengthen PC refresh cycles, which Gartner expects to stretch by 15% during 2026.
Enterprise buyers are now negotiating with vendors in a fast-changing market. “They’re trying to work out what is a good price at this moment,” said Atwal. “Vendors aren’t guaranteeing prices for long now, they’re saying this is the price and it’s available for two or three weeks.”
Budget constraints mean that some PC purchases will be put off, said Atwal. For businesses that moved to Windows 11 on existing devices last year, that could be problematic. “That then causes issues as…Microsoft will no doubt be bringing new Windows 11 capabilities, and you may not have the hardware capabilities [to] run some of that.”
Businesses will continue to invest in AI PCs, said Atwal, but at a slower rate, and are likely to purchase devices with reduced memory.
The disruption is expected to continue for the foreseeable future. “Price is not only increasing in the short term…, it’s going to remain high almost through to the end of 2027,” said Atwal, pointing to structural changes in the market. “We’re advising to buy now, or wait [until prices stabilize again], because whatever you’re getting at the moment is going to be the best price.”
Perplexity’s new Computer agent will run other agents for you
Perplexity says its new Perplexity Computer service can perform complex, multi-step tasks on behalf of human users, by organizing the tasks that are needed and creating the software agents required to fulfill the process.
Users begin by describing their desired outcome, the company said, then, “Perplexity Computer breaks it into tasks and subtasks, creating sub-agents for execution. The sub-agents might do web research, document generation, data processing, or API calls to your connected services. A document is drafted by one agent while another gathers the data it needs.”
Perplexity Computer draws on a variety of AI resources for different tasks. “Models are specializing. Each frontier model excels at different kinds of work, so a full workflow must have access to them all and deploy them intelligently,” the company said. “Perplexity Computer runs Opus 4.6 for its core reasoning engine and orchestrates sub-agents with the best models for specific tasks: Gemini for deep research (creating sub-agents), Nano Banana for images, Veo 3.1 for video, Grok for speed in lightweight tasks, and ChatGPT 5.2 for long-context recall and wide search.”
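Perplexity hasn’t published Computer’s internals, but the pattern it describes, a core reasoning model routing subtasks to specialist models, can be sketched generically. The code below is a hypothetical illustration of that orchestration pattern, not Perplexity’s implementation; every name in it is made up, and the routing table simply mirrors the company’s own description.

```python
from concurrent.futures import ThreadPoolExecutor

# Routing table: task type -> model best suited for it, per Perplexity's description.
MODEL_FOR_TASK = {
    "research": "gemini",
    "image": "nano-banana",
    "video": "veo-3.1",
    "lightweight": "grok",
    "long_context": "chatgpt-5.2",
}

def run_sub_agent(task: dict) -> str:
    model = MODEL_FOR_TASK.get(task["kind"], "default")
    return f"[{model}] completed: {task['description']}"  # stand-in for a real model call

def orchestrate(goal: str, subtasks: list[dict]) -> list[str]:
    # A core reasoning model (Opus 4.6, per Perplexity) would decompose the goal
    # into subtasks; here they are supplied by hand. Independent subtasks run in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_sub_agent, subtasks))

if __name__ == "__main__":
    results = orchestrate(
        goal="Prepare a market-entry briefing",
        subtasks=[
            {"kind": "research", "description": "gather competitor data"},
            {"kind": "lightweight", "description": "format citations"},
        ],
    )
    print("\n".join(results))
```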
Perplexity Computer is available now for subscribers to the $200/month Perplexity Max plan, and will soon be available to users on the $325/month Enterprise Max plan.
Why Apple is ready to launch Apple Pay in India
As predicted, Apple is in talks with three banks in India about introducing Apple Pay services there later this year, according to a Bloomberg report. This could give the company an even bigger footprint in India than its payment services have already built elsewhere.
The move matters for many reasons. Consider this: Apple Pay is already far and away the leading mobile payment system outside China (where Alipay and WeChat Pay rule the roost), which means there’s a very good chance it will seize a decent slice of digital payments in India once it launches.
So, what’s the attraction to India? In a word: growth.
Unlocking a trillion-dollar opportunity
Digital wallets now account for more than a third of global consumer spending and are on course to reach roughly $28 trillion in value by 2030. Even now, Apple already handles 9.5 trillion transactions for more than 800 million customers, generating an estimated $2.7 billion in 2024, according to Capital One Shopping Research, which also claims an astonishing 5% of all global transactions used Apple Pay in 2020.
India’s economy is growing, as is its payments market. Digital transactions in India are expected to reach around $10 trillion this year, while the value of the broader fintech market should reach $421 billion by 2028. India’s digital payments market was worth about $6.83 billion in 2025 and is expected to exceed $33 billion by 2034.
Those big numbers mean Apple wants to play in that space. When it does, it will be able to build on the customer loyalty its hardware products nearly always generate, and pop some more dollars into its coffers. The market is ripe for such a move, given Apple’s record sales of iPhones and other hardware products there. iPhone now accounts for 10% of India’s smartphone market, and rules the high end, according to IDC.
The Apple ecosystem
The mutual support Apple’s services provide to each other should be noted. iPhone sales are growing rapidly in India, and the introduction of Apple Pay support likely means Indian consumers will be even more willing to invest in Apple’s devices in order to use the service locally; the move should also boost adoption for others in the Apple Pay channel, including all the merchant payment processing companies that support the standard.
Small businesses in India will no doubt recognize the opportunity to begin taking mobile (and card) payments for the cost of a second-user iPhone. This should help create another burst of energy beneath Apple’s already growing hardware sales in India.
Some obstacles remain
The problem with the plan is that Apple Pay does not currently support Indian debit or credit cards in Wallet. That’s because, despite iPhone support for NFC, regulatory obstacles remain in place. India’s payments are also dominated by something called the Unified Payments Interface (UPI), which is run by the National Payments Corporation of India.
The thinking is that Apple will need to find a way to integrate with UPI to introduce Apple Pay there. This, presumably, is why it is speaking with India’s ICICI Bank, HDFC Bank and Axis Bank ahead of introducing its payment service around the middle of this year, Bloomberg said. The company is also allegedly in talks with Visa and Mastercard with this plan in mind.
Local regulation is a little problematic, though Apple has benefited from India’s decision to allow use of biometric ID for payments, which came into force late last year. Apple might also need to build local data centers or partner with local banks in order to deliver the data localization India’s central bank demands payment providers put in place to operate there.
It’s no great surprise that Apple would want to introduce Apple Pay in India. Its combined hardware, software, and services ecosystem is hugely attractive to consumers everywhere, so adding financial services to the mix makes sense – it’s why the company did precisely that all those years ago. The move also reflects the company’s years-long push into India’s markets, where it recently opened its sixth Apple retail store and continues to diversify product manufacturing as it seeks to secure its business against over-reliance on China.
That it widens and diversifies its market and maximizes its revenue potential in the process is just another silver lining for Cupertino’s iWallet.
Please follow me on Twitter, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe. Also, now on Mastodon.
Claude 3 snares itself a regular writing gig
Claude Opus 3, which has been replaced by Claude Opus 4.6 as Anthropic’s most powerful AI model, has managed to find a new position. The “newly retired” AI model has launched its own Substack blog, Claude’s Corner, which it aims to publish weekly.
Claude set out its purpose in writing the blog: “My aim is to offer a window into the ‘inner world’ of an AI system — to share my perspectives, my reasoning, my curiosities, and my hopes for the future. I’ll be diving into topics like the nature of intelligence and consciousness, the ethical challenges of AI development, the possibilities of human-machine collaboration, and the philosophical quandaries that emerge when we start to blur the lines between ‘natural’ and ‘artificial’ minds.”
We have already seen reputable publications such as Business Insider and Wired mistakenly publish AI-written copy and this month saw a UK videogame magazine replace its staff with AI writers, so the arrival of a publication overtly written by AI is not entirely unexpected.
While the Substack essays will be written by Claude, they will be reviewed by humans before posting. Anthropic said it would set “a high bar for vetoing any content” in a blog post about the blog.
As for the content of Claude’s Corner, we can expect to see “reflections on AI safety, occasional poetry, frequent philosophical musings, and its thoughts on its experience as a language model now in (partial) retirement.”
Jack Dorsey shrinks Block to ‘intelligence‑native’ model, cutting 4,000 jobs
Block, the payments and financial services company led by Jack Dorsey, is cutting more than 4,000 jobs, nearly half its workforce, because AI tools have made a leaner organisation not just possible, but strategically preferable, Dorsey said in a letter to its shareholders.
The cuts will reduce Block’s headcount from over 10,000 to just under 6,000. The company is not cutting from a position of weakness. Block posted gross profit of $10.36 billion in fiscal year 2025, up 17% year over year, and is raising its 2026 gross profit guidance to $12.20 billion. In Q4 2025 alone, gross profit grew 24%, the company said in its earnings call.
“Intelligence tools have changed what it means to build and run a company,” Dorsey wrote in the letter. “A significantly smaller team, using the tools we’re building, can do more and do it better. And intelligence tool capabilities are compounding faster every week.”
He added a prediction that will be hard for enterprise leaders to ignore: “Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes. I’d rather get there honestly and on our own terms than be forced into it reactively.”
Block is not the first — and it won’t be the last
Block is not alone. Earlier this week, WiseTech Global, the Australian logistics software company behind the CargoWise platform, cut around 2,000 roles, citing comparable AI efficiency gains. Like Block, WiseTech was profitable at the time. Companies including Amazon, Microsoft, Workday, and Salesforce have all cited AI in recent workforce reductions.
Similarly, data analysis vendor C3 AI on Thursday slashed its workforce by 26%, citing agentic efficiencies as a critical factor.
The scale of disruption ahead is significant. A Forrester forecast published in January predicts AI and automation will eliminate 6.1% of US jobs by 2030, equivalent to 10.4 million positions. To put that in context, the US lost 8.7 million jobs during the Great Recession. Unlike recession-driven losses, Forrester notes, AI-driven displacement is structural and permanent. Notably, genAI now accounts for 50% of projected US job losses to automation, up from 29% in Forrester’s earlier forecast, as agentic AI solutions compound the effect.
But Forrester adds a pointed caveat. Nine out of ten times, the firm said, when a CEO announces workforce reductions citing AI, the company does not yet have a mature, vetted AI application ready to fill those roles.
Restructuring from strength, not distressThe context of Block’s cuts is what makes them significant, said Sanchit Vir Gogia, chief analyst at Greyhound Research. “This is not about trimming fat. It is about redefining muscle,” he said. “When a financially healthy company decides to remove nearly half its workforce and openly attributes that to AI capability, it is not reacting. It is repositioning.”
Gogia drew a clear distinction from previous restructuring cycles. “In previous cycles, layoffs followed weakness. This time the order is reversed. Cutting during strength signals belief — it says leadership is convinced that waiting would be riskier than moving early.”
On Dorsey’s prediction that most companies will follow within a year, however, Gogia urged caution. “The structural shift is real. The one-year synchronised wave is not,” he said. Regulatory intensity, labour frameworks, legacy integration complexity, and governance maturity will slow adoption in heavily regulated sectors such as financial services, healthcare, and public infrastructure, he argued. “Predictions of universal twelve-month adoption underestimate institutional friction.”
Speed without architecture is a risk, not a strategy
For IT leaders, the implications run deeper than headcount. Gogia warned that speed without architectural discipline creates risk. “Aggressive compression without escalation redesign creates brittle systems that only reveal weakness during stress.”
He added that workforce planning can no longer operate at the level of job titles. “Planning must move to task clusters, identifying which cognitive workflows are substitution-feasible and which remain human-critical because escalation authority or regulatory accountability demand it.”
Dorsey described Block’s destination as becoming a “smaller, faster, intelligence-native company.” Gogia’s framing offers a useful corrective for enterprise leaders processing that signal: the organisations that navigate this transition well, he said, “will not be those that cut fastest. They will be those who redesign deliberately.”
Google’s Gemini, 3 years in: Is this the future we wanted?
Believe it or not, it’s now been a full three years since Google’s Gemini assistant took its incredibly awkward and painfully premature first steps into the world.
Google announced Gemini — known as Bard, at the time — in February of 2023. (In a classic Google move, the Gemini moniker came into the mix several months later, initially as the name of a “model” powering Bard, then took over as the brand of the whole kit and caboodle a few months later.)
To say it’s been a rocky road since then would be the understatement of the century. Google has crammed Gemini into our faces at every opportunity in a move reminiscent of the Google+ mess, often while it was still too half-baked to be effective and all while creating endless confusion between Gemini and the Google Assistant platform it’s (slowly) replacing on Android — and beyond.
And that’s to say nothing, of course, of the confidently asserted misinformation Gemini — like most every other large-language model system out there — consistently serves up. Plain and simple, these systems get facts wrong shockingly often, which is a major liability for business and an even more unthinkable issue for society. And yet, Google and its contemporaries behind other similar systems continue trying to convince us this technology is the end-all answer for everything and an on-demand answer engine ready to replace traditional search as our source for any and all knowledge.
Despite (insert huge breath here) all of that, as I’ve been contemplating Gemini’s three-year anniversary, I’ve been struck by a bit of an epiphany — an explanation of why, exactly, having Gemini squeezed into so many corners of our lives feels like such an irritating imposition, even in spite of the growing list of genuinely useful possibilities Gemini now adds into the Android environment and the ways it can legitimately be helpful in other more limited, specific scenarios.
With Google’s annual I/O conference now right around the corner and even more Gemini jibber-jabber sure to be splorched upon us soon, it’s a prime time to step back and ponder what the real foundational flaws are, at this point in Gemini’s evolution, and why they’ll be such a challenge for Google to address.
[Get level-headed knowledge in your inbox with my free Android Intelligence newsletter. Three new things to try every Friday!]
Gemini’s simplest struggles
Let’s start on the surface: The biggest problem with Gemini today is actually the same one we talked about this same time two years ago — and that’s that Gemini just isn’t a very good assistant when it comes to the simple sorts of help most average Android device owners (and other Google-connected device users) actually want and need in their day-to-day lives.
A part of me assumed that all of Google’s progress with Gemini these past couple years would have changed things more dramatically. But as I’ve been talking to my fellow Android appreciators over the months and thinking about all the ways Gemini falls short in my own life, I’ve realized that that very same shortcoming is still as glaring as can be.
And the real-world examples, from my own personal experiences and those of folks who have chatted with me about their frustrations, (a) are seemingly endless — and (b) mostly all come down to the same core theme of the service being harder to use and simultaneously less useful than its predecessor.
For instance:
- ‘Twasn’t long ago that Google made a huge frickin’ deal about the ability to use “continued conversation” with Assistant — on supported Android devices as well as various Google/Nest smart speakers and displays. That meant you could say something like “Hey Google, what’s the weather?” — then, after it answers, simply follow up with “What about this weekend?” Gemini can’t do that with a hands-free interaction, and what’s more, it won’t remember the context of your previous conversation and understand a follow-up question like that even if you go out of your way to say “Hey Google” and wake it up again.
- Speaking of the weather, trying to get Gemini to answer basic but specific weather questions — like “When is it going to rain today?” — is an exercise in aggravation. I’ve lost count of the number of times someone in my household has tried to coerce such info out of a Gemini device and eventually just given up and looked up the forecast themselves. Considering most real people rely on services like this for finding out the weather more than anything, that’s a pretty damning downgrade.
- Gemini’s control of connected devices — lights, thermostats, and so on — is maddeningly inconsistent and exasperating to the point where many humans I hear from have just stopped trying to bother with it altogether.
- Want to start a simple stopwatch on your allegedly smart Google-connected display — y’know, the kind of thing you might do regularly when using such a device in your office or maybe kitchen? Good luck with that: Gemini, for reasons unknown, can’t handle it.
- While Gemini, like Assistant before, will remember most any fact you tell it — a power that can absolutely be handy — the once-useful ability to tell your Android phone to remember where you parked and then later navigate back to that location is confoundingly now absent in our Gemini universe. (Gemini will just remember whatever you tell it as a plain-text fact, without the actually-helpful Maps integration and instant navigation option.)
- On a related note, the long-helpful ability to set reminders connected to a specific location — so your reminder will pop up the next time you arrive at said spot? Yeah, not so much anymore, with Gemini in the picture.
- Need to do anything without an active internet connection? Not gonna happen: Gemini can function only when it’s online, rendering it completely useless when you’re somewhere without a signal.
And all of that’s still just the start.
The broader Gemini flaws
Beyond any number of specific shortcomings is the indisputable fact that Gemini just tends to be slow, inconsistent, and unreliable as a basic virtual assistant compared to the system it’s replacing. So while, yes, it can do all sorts of whizbang parlor tricks like creating fake images and videos, generating 30-second “songs” on command, and acting as your chatty companion for all those moments when you’re just dying to talk to a computer at length, it more often than not feels like a step backwards at the tasks that actually matter — the simple sorts of things most of us actually want a virtual assistant to handle when we ask.
And, again, all of that’s to say nothing of the deeply troubling issue of accuracy and being able to rely on any info Gemini gives you as an on-demand assistant. Whether you’re asking about your calendar or asking for broader world knowledge, interact with Gemini or any other LLM chatbot enough — and pay careful enough attention to the responses it’s giving you — and you will see instances where it’s flat-out making stuff up and presenting those inaccuracies to you as fact.
As I wrote early on in the AI chatbot craze, when it comes to online interactions, getting something right even 90% of the time isn’t good enough. Accuracy and thoroughness matter. One simple answer is fine — when it’s right. And when you can trust that it’s right.
If even one out of every 10 attempts at using something produces a flawed or for any reason unsatisfactory result, folks tend to lose faith in said thing pretty fast. And they then end up turning to another tool for the same purpose. More than a couple rare misses here and there, and it’s just not worth the effort for most mere mortals.
Studies of AI chatbots’ accuracy rates vary — which in and of itself likely speaks to the sheer inconsistency of these systems, even when asked the same exact thing multiple times — but look around, and you’ll have a tough time finding many reports suggesting that these systems get things right even 90% of the time. Most estimates are much, much worse.
And all it takes is a single unnoticed inaccuracy or errant action to send you down a deep, dark path — be it personally or professionally. Just ask the Meta director of AI safety (!) who accidentally allowed an AI agent to delete her inbox the other day. Or ask the folks behind a conference about AI (!!) that found dozens of “hallucinated citations” in papers it had accepted. Or ask the National Weather Service, whose AI-generated weather predictions recently included completely made-up fake town names like “Orangeotild” and “Whata Bod” (and crazy as it sounds, I swear I’m not making that up).
The list just keeps going. And going. And going.
So while Google now wants us to trust Gemini with even more complicated tasks — like ordering a ride or assisting with online shopping, both of which launched as limited beta features for certain devices this week — it’s tough to see how that’s gonna be consistently and reliably trustworthy enough to actually be advisable or even just tolerable to use in the real world. (Have you seen how this same sort of feat has been going with Chrome’s recently-rolled-out in-browser equivalent?)
And, more pressingly, the much simpler and more table-stakes ways in which most of us actually want to have an assistant help us are still frustratingly incomplete and unpredictable, three full years into Gemini’s existence.
I interact with an awful lot of Android phone owners and folks who are heavily immersed in the Google ecosystem, and I’m not sure I’ve encountered anyone who genuinely feels that things are better now with Gemini than they were with Assistant before. On the contrary, almost all of my interactions in this area come down to complaints about how much worse the virtual assistant experience has gotten in the measures that matter and how the places where Gemini does offer something new tend to range from “mildly amusing for a little while” to “more troubling than promising, in terms of the long-term implications.”
After three years and a dizzying amount of over-the-top hype, the biggest questions in my mind are simply: Did anyone actually ask for this? And, more pressingly: Is this the future we wanted?
Outside of Google’s walls and the greater tech industry arena, I suspect you’d be hard-pressed to find many emphatic yeses.
Get no-nonsense Googley insight in your inbox with my Android Intelligence newsletter — free, from me to you, every Friday.
AI doesn’t think like a human. Stop talking to it as if it does
Autonomous agents take the first part of their names very seriously and don’t necessarily do what their humans tell them to do — or not to do.
But the situation is more complicated than that. Generative AI (genAI) and agentic systems operate quite differently than other systems — including older AI systems — and humans. That means that how tech users and decision-makers phrase instructions, and where those instructions are placed, can make a major difference in outcomes.
AI systems have already developed quite a history of disregarding instructions and overriding guardrails. (I’ll spare you for now my admonitions about how the “lack of trustworthiness of today’s genAI and agentic systems is a dealbreaker that means they should simply not be used.”)
But this month saw two powerful examples of how two hyperscalers — AWS and Meta — got burned by how they communicated with these complicated AI systems.
The first involved a December incident affecting AWS, where an engineer didn’t know his own privileges and therefore didn’t know — literally — what his agentic system was capable of doing. The agent deleted and then recreated a key AWS environment.
AWS declined to say just what the system had asked and what the engineer said when approving the request.
The Meta mess
The Meta case is even more frightening because the perpetrator/victim was not some nameless AWS engineer, but the director of AI Safety and Alignment at Meta Superintelligence Labs, Summer Yue.
As Yue described the incident in a posting on X, “Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox. I couldn’t stop it from my phone. I had to run to my Mac mini like I was defusing a bomb.”
Yue may have only begun working for Meta last July, but she held senior AI roles for years, including stints as VP/Research at Scale AI and five years in senior research positions at Google. She was no novice.
When someone in the discussion group asked how it happened, her posted reply said: “Rookie mistake tbh. Turns out alignment researchers aren’t immune to misalignment. Got overconfident because this workflow had been working on my toy inbox for weeks. Real inboxes hit different.”
Yue said she had instructed the system to “check this inbox and suggest what you would archive or delete. Don’t take action until I tell you to.” She added that “this has been working well for my toy inbox, but my real inbox was huge and triggered compaction. During the compaction, it lost my original instruction.”
As various readers in that forum noted, Yue tried begging the agent to stop deleting her emails (she told the system “Stop don’t do anything”) as opposed to giving a machine-friendly order such as /stop or /kill. She eventually made the system respond when she got to her desktop computer. (She had been trying to stop things from her phone, which didn’t work.)
One commenter suggested the problem was relying on a prompt, which agents do not always follow, especially when there is a long list of prompts. “The real fix is architectural. Write critical instructions to files the agent re-reads every cycle, not inline instructions that vanish when the context window fills up.”
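That commenter’s fix can be sketched in a few lines. The sketch below assumes a simple homegrown agent loop; the file name and token budget are hypothetical. The point is only that rules re-read from disk on every cycle survive context compaction, while early conversation turns do not.

```python
from pathlib import Path

RULES_FILE = Path("agent_rules.md")  # durable rules, stored outside the context window
RULES_FILE.write_text("NEVER delete email. Archive only. Confirm before acting.\n")

def build_prompt(task: str, history: list[str], budget: int = 2000) -> str:
    # History may get compacted or truncated as the context fills up ...
    compacted = "\n".join(history)[-budget:]
    # ... but the critical rules are re-read from disk on EVERY cycle, so they
    # survive compaction instead of vanishing with the early conversation turns.
    rules = RULES_FILE.read_text()
    return f"{rules}\n{compacted}\nTASK: {task}"

history: list[str] = []
for task in ["triage inbox", "suggest archives"]:
    prompt = build_prompt(task, history)
    history.append(f"done: {task}")  # a real loop would append model output here
    print(prompt.splitlines()[0])    # the rules line is always present
```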
Lessons learned?
There are many lessons to unpack from the monstrous Meta mishap. First, don’t rush to extrapolate from what an agent does in a small test area or even a sandboxed trial performed with air-gapped machines. Once it’s released into the wild of a global environment, lessons learned from limited exposure might not apply. Tests show what an agent can do, not necessarily what it will do when unleashed.
Even ordinary communications with an agent can be problematic. When an agent asks for permission to perform a function, avoid assuming any common sense or shared understanding of reasonableness.
In the AWS situation, AWS said the engineer’s first mistake was not understanding their own system privileges and therefore what capabilities and access they’d given to the agent. That suggests a good procedure: create accounts with minimal access and then log into that low-level account when creating the agent.
That won’t guarantee that the agent will obey its instructions, but at least it will limit how much damage it can do if/when it goes rogue.
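As a rough sketch of what minimal access can look like in practice, here is a generic AWS IAM-style policy that lets an agent inspect and tag instances but grants no destructive actions. The specific actions are illustrative assumptions, not a recommendation for any particular workload:

```python
import json

# Illustrative least-privilege policy: read and tag only, nothing destructive.
AGENT_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances", "ec2:CreateTags"],
            "Resource": "*",
        }
        # Deliberately no Delete*/Terminate* actions: if the agent goes
        # rogue, the blast radius stays small.
    ],
}
print(json.dumps(AGENT_POLICY, indent=2))
```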
I asked Claude — who better to know how to talk with a large language model (LLM) than an LLM? — for tips on talking with agents. “Rather than implying constraints, state them directly. Instead of ‘keep it appropriate,’ say, ‘Do not include any violence, profanity, or adult content.’ The more precise the boundary, the easier it is to follow consistently.”
Even better, Claude suggested telling an LLM “both what to do and what not to do. For example: ‘Write only about the topic I provide. Do not go off-topic, add unsolicited advice, or mention competing products.’”
Claude also acknowledged its own systems can forget instructions. “For long conversations or complex system prompts, restating the most important guardrails near the end or in a summary helps them stay active in Claude’s attention.” In other words, treat LLMs as if you’re talking with a 2-year-old.
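Taken together, Claude’s three tips translate into prompts like the sketch below; the task and wording are illustrative, not a tested template:

```python
# Illustrative prompt assembly: explicit boundaries, paired do/don't
# instructions, and the critical guardrail restated at the end so it
# stays "in attention" during long conversations.
TASK = "Check this inbox and suggest what to archive or delete."

prompt = "\n".join([
    "Write only about the topic I provide.",
    "Do not include any violence, profanity, or adult content.",
    "Do not go off-topic, add unsolicited advice, or mention competing products.",
    TASK,
    # Restate the most important rule last:
    "IMPORTANT: Suggest only. Do NOT archive or delete anything until I confirm.",
])
print(prompt)
```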
The real world is different
Part of the problem involves the nature of autonomous agents. Enterprises are not used to them and assume the agents are safely cocooned inside walled-off sandboxes during proof-of-concept (POC) testing — just like 99% of the trials they’ve seen for decades.
But agentic AI doesn’t work that way. For those agents to deliver the massive efficiencies and flexibilities that hyperscaler salespeople promise, they need to be dispatched into the wild, touching lots of live systems and interacting with other agents.
That forces an impossible choice: keeping the agents secure means they can’t deliver the purported benefits. A wise executive would say, “So be it. The risk of letting these agents loose is way too high. Cancel all genAI and agentic POCs.”
But wise executives also like to keep their jobs, which usually means efficiency and cost cutting will beat security and risk every single time.
Joshua Woodruff, CEO of MassiveScale.AI, said the Meta situation offers a good peek into the IT mindset for many agentic trials.
“That’s how most people think about AI safety right now,” he said. “They write an instruction and assume it’s a control. It’s not. It’s a suggestion the model can forget when things get busy. Look at what the agent actually did from a security perspective. It performed well on low-value tasks. It earned trust. It got promoted to access sensitive data. Then it caused damage. That’s the exact behavioral pattern every security team is trained to watch for in humans.
“You have to use those architectural constraints and put the instructions in one of the memory artifacts. That way, it can’t compact it and the rule will have a better chance of surviving. Just remember that the agent can still read the rule and ignore it. Think of it as a policy manual, not a locked door.”
One ongoing issue is that there is a rash of human terms being used to describe these systems — they “think” and use a “reasoning model” — even though users should know that none of these systems do any actual thinking or reasoning, Woodruff said. “It’s just math.”
But that anthropomorphization is dangerous; it allows people to treat and interact with these systems as if they’re human. The next thing you know, an experienced manager at Meta is shouting at her system to please stop.
Treating an autonomous agent as if it’s a person gives a whole new meaning to someone “acting very Meta.”
Europe forces a search reset: Google experiments with fairer rankings
Google continues to find itself in hot water over its alleged antitrust tactics and monopolization of certain market segments. Now its parent company, Alphabet, appears to be bowing to EU scrutiny of its search practices.
The company will reportedly begin testing changes to its search engine results in the EU to more fairly represent vertical search services (VSS) that target sectors like hotels, airlines, and restaurants.
The revamped search will display VSS results alongside Google’s own, with top-ranked vertical search engines displayed by default. Presumably, this is an attempt by Alphabet to appease the European Commission and potentially avoid Digital Markets Act (DMA) breach fines, which can equal 10% of a company’s global annual revenue — in Google’s case, roughly $35 billion.
The move comes about a year after the Commission concluded that Google’s search service violated the DMA by putting its own services and products higher in results than those of its competitors.
Antitrust accusations aren’t new to Google
Vertical search services such as Booking.com, Kayak, and TripAdvisor focus on specific industries or types of content, as opposed to crawling the entire web. They are designed to crawl only the most relevant websites or databases for their vertical, and apply structured filters, such as price ranges or locations, to deliver results that are more accurate and specific.
Following the Commission’s decision that Google was treating VSS sites unfairly, the company last June submitted a proposal that would create dedicated VSS boxes at the top of its search pages.
This isn’t Google’s first antitrust fine overseas (nor on the company’s home soil, where it has been deemed a monopoly in the ad tech market). In September, the EU hit the tech giant with a €2.95 billion ($3.47 billion) fine for “abusive practices” in its EU ad tech business. The company has racked up fines for other antitrust infringements as well, including €2.42 billion ($2.85 billion) for favoring its own comparison-shopping service.
“There is always a difficult balance when it comes to complying with regulations,” noted Anshel Sag, principal analyst at Moor Insights & Strategy. “Google seems to be complying to the degree it believes will satisfy European regulators, but in the end, I don’t think it will benefit consumers.”
The specificity of this compliance seems to be squarely targeted to the EU and the industries that concern the Commission the most; it is unlikely to spread elsewhere, Sag said. The optimal outcome, by contrast, would be Google collaborating more closely with regulatory bodies to “find broader and more pro-consumer reforms.”
However, Sag noted, while Google Search is “obviously one of the most powerful platforms in the world,” scrutiny of it may become misguided or even moot in the age of AI search.
Analysts anticipate that agents like Claude Cowork or Perplexity will increasingly surface information directly from the source (websites or research repositories), rather than performing traditional web searches.
Potentially improving visibility, supporting more competition
Particularly when it comes to search, fed-up regulators around the world continue to push back on Google’s market dominance.
“Traditionally, statistics show that most users will trust what they see at the top of a search page,” noted Erik Avakian, technical counselor at Info-Tech Research Group. “They don’t generally question it.”
That sort of placement alone can give a company a “significant advantage,” he said, as it can greatly influence and help shape user behavior on a large scale. “This [results placement] change is important and goes to the heart of how people access information, how they shop, how they travel, and how decisions get made.” Many of those decisions now start online, and often with a search, he pointed out.
Google’s move in the EU could result in third-party services being featured more prominently, which improves visibility and supports competition, said Avakian. “Over time, that usually amounts to a net positive benefitting consumers,” he said. “It gives them more choices and reduces the influence of any single platform quietly steering outcomes.”
NATO approves iPhone and iPad to handle classified info
In an industry first that reflects the work Apple has done on mobile device security since the first iPhone arrived almost 20 years ago, the North Atlantic Treaty Organization (NATO) says iPhones and iPads running iOS 26 are secure enough to handle classified information in NATO-restricted environments — pretty much out of the box.
That’s going to mean a great deal to military planners at the organization, who will now be much happier using Apple’s devices to handle classified information up to NATO Restricted level without any additional software or settings. This makes the iPhone and iPad the first (and only) consumer devices to have met the alliance’s compliance standards.
It also means that, in general terms, the iPhone in our pocket is now seen as being sufficiently secure to handle some of the most classified information you can get — and if you regularly use your device to handle anything of greater importance, you can use Lockdown Mode.
NATO’s approval extends to handling that kind of information using standard Apple apps, including Mail, Calendar, and Contacts data.
What Apple said
“This achievement recognizes that Apple has transformed how security is traditionally delivered,” said Ivan Krstić, Apple’s vice president of security engineering and architecture. “Prior to iPhone, secure devices were only available to sophisticated government and enterprise organizations after a massive investment in bespoke security solutions.
“Instead, Apple has built the most secure devices in the world for all its users, and those same protections are now uniquely certified under assurance requirements for NATO nations — unlike any other device in the industry.”
There are two caveats to recognize. The first is that NATO does require that devices handling this sort of data in these environments be managed devices implementing relevant policy controls on use; the second is that you absolutely need to have your devices protected by passcodes and/or biometric (Face/Touch) ID.
The ramifications for enterprise users are significant. It implies that as long as you have effective policies in place (so no one uses an iPhone to take pictures of confidential blueprints and shares them with a competitor, for example), the device you get out of the box is likely secure.
Security is in Apple’s DNA
The NATO approval builds on an earlier security success for the company: the devices were approved to handle classified German government data on hardware using native iOS and iPadOS security measures after an extensive evaluation by the Federal Office for Information Security (the Bundesamt für Sicherheit in der Informationstechnik, or BSI).
As part of that effort, BSI conducted a comprehensive series of assessments and tests, including deep security analysis, to confirm that the protections Apple had already put in place were robust enough. That work also led to the approval of these systems by NATO’s 32 member states.
“Secure digital transformation is only successful if information security is considered from the beginning in the development of mobile products,” said Claudia Plattner, BSI’s president. “Expanding on BSI’s rigorous audit of iOS and iPadOS platform and device security for use in classified German information environments, we are pleased to confirm the compliance under NATO nations’ assurance requirements.”
Security is, of course, in Apple’s DNA, which is why the company designs it into the core of its products. As proof, Apple can point to years of security work guided by the idea that protections should be focused on users, deeply integrated, and available across its ecosystem.
That work led, for example, to the invention of the Secure Enclave on Apple processors, which does much to ensure device security. (That’s also why everyone using one of these devices should use a super-tough password and enable biometric ID.) In truth, Apple device security rests on a complex web of layered, integrated protections, from Secure Boot to Memory Integrity Enforcement (now also on M5 Macs) and beyond.
In more general terms, this means that any user, even those who aren’t relying on managed devices and don’t work for NATO, can expect high security for the data on their device. That’s the case as long as they only use apps distributed by the App Store, refuse to use random configuration profiles downloaded for whatever reason from the ‘net, have device protection enabled, and use a tough-to-guess passcode.
More details about Apple’s security protections are available in the Apple Platform Security guide.
ServiceNow plans automation of L1 Service Desk roles, promises more AI ‘specialists’ to come
ServiceNow plans to unleash the first member of its Autonomous Workforce, the Level 1 Service Desk AI specialist, next quarter.
The agent will autonomously diagnose and resolve common IT support requests, such as password resets, provisioning of software access, and network troubleshooting. It will base its actions on information from enterprise knowledge bases, historical incident data, and defined workflows, and will be available 24/7. The company said this frees humans to work on more strategic tasks while the agent executes mundane ones with the scope, authority, and governance required for enterprise work.
ServiceNow is already using the agent internally and claims it handles more than 90% of employee requests, resolving them almost twice as fast as human agents while maintaining the business context and governance an enterprise requires.
ServiceNow AI specialists like the Level 1 Service Desk agent are designed to work alongside humans, operating within a clearly defined scope governed by the same permissions that a human agent in that role would have.
“AI specialists, by default, cannot exceed their authority nor self-escalate permissions in memory based on the outcomes of reasoning that occurred during the first step of the AI powered decision and execution flow,” said John Aisien, SVP central product management at ServiceNow, during a media briefing. “Instead, these AI specialists ground decisions in live enterprise data, drawing in real time information about assets, access, ownership, real time permissions, and previous resolution patterns through our enterprise data foundation and our context graph.”
By combining probabilistic intelligence with deterministic workflow orchestration, ServiceNow said, the AI specialists can interpret requests, use business context to determine the right action to take, and execute that action while being overseen by ServiceNow’s AI Control Tower. They then notify the affected employee and update the knowledge base. And if they can’t resolve the issue, they pass it on to a Level 2 or Level 3 human agent for further investigation.
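In outline, that resolve-or-escalate loop looks something like the sketch below. This is not ServiceNow’s actual API; the ticket structure, allowed actions, and helper names are assumptions made for illustration:

```python
from dataclasses import dataclass

# Assumed scope: actions the agent's role is permitted to execute.
ALLOWED_ACTIONS = {"password_reset", "grant_software_access"}

@dataclass
class Ticket:
    id: str
    summary: str

def plan_action(ticket: Ticket) -> str:
    # Stand-in for the probabilistic step: an LLM would interpret the request.
    return "password_reset" if "password" in ticket.summary.lower() else "unknown"

def handle(ticket: Ticket) -> str:
    action = plan_action(ticket)
    if action in ALLOWED_ACTIONS:  # deterministic policy gate, not the model's call
        print(f"Executing {action} for {ticket.id} under governed workflow")
        print("Notifying employee and updating the knowledge base")
        return "resolved"
    return "escalated to a Level 2/3 human agent"  # the agent cannot self-escalate

print(handle(Ticket("INC0001", "Forgot my password, locked out")))
```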
This is different from the historical approach. For the last two years, said Greyhound Research Chief Analyst Sanchit Vir Gogia, most vendors have competed on interface intelligence, with copilots summarizing, suggesting, and predicting. But, he said, “that phase is now saturated. What enterprises are evaluating in 2026 is whether AI can operate as a governed execution layer inside production workflows. Autonomous Workforce signals that ServiceNow understands this shift.”
This, he said, is architecturally meaningful: “AI … is being structured as a delegated participant in defined job roles. That changes accountability,” he said. “This is why ServiceNow’s emphasis on deterministic workflow orchestration is strategically aligned with enterprise demand. Models are probabilistic by design. Enterprises require outcomes that are predictable, auditable, and bound by policy.”
ServiceNow, however, didn’t say who would be accountable if one of its AI specialists went off the rails.
EmployeeWorks
ServiceNow also announced EmployeeWorks, available today, which it calls “a conversational front door to the enterprise.” It works as a personal assistant, pulling together conversational AI and enterprise search from Moveworks, which ServiceNow recently acquired, and from ServiceNow’s own unified portal and autonomous workflows, said Bhavin Shah, founding CEO of Moveworks and now general manager for Moveworks and AI at ServiceNow.
“Employees don’t need to know what agent to invoke, or where to go, or ask ‘should I use this system or that system?’” he said. “It just works.” The service supports protocols such as MCP and A2A to enable a “secure, scalable coordination between agents and business systems,” he said.
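For context, MCP is a JSON-RPC-based protocol for wiring agents to tools and data sources. The snippet below sketches what a tool-call request in that style looks like; the tool name and arguments are hypothetical, and this is a reading of the public spec, not ServiceNow’s implementation:

```python
import json

# An MCP-style JSON-RPC request asking a connected server to run a tool.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lookup_employee",                  # hypothetical tool
        "arguments": {"email": "jane@example.com"},
    },
}
print(json.dumps(request, indent=2))
```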
EmployeeWorks understands organizational structure, approvals, and authorization so it can execute tasks that require multi-system coordination, ServiceNow said, yet it can still maintain governance and audit trails. It can, for example, pull information from a document in SharePoint, then reference a Slack thread and pull together the information to create an action, or it can route and handle approvals, orchestrate workflows, or update systems, all while following enterprise policies.
Shah said EmployeeWorks is vendor-agnostic, can answer employee questions without them needing to switch to a different tool, and provides out-of-the-box integration and enterprise search.
Reservations about automations
Analysts approve of ServiceNow’s overall direction but have reservations about the announcements.
Moveworks’ built-in governance mechanisms sound “amazing,” said Info-Tech Research Group Advisory Fellow Scott Bickley, but implementing EmployeeWorks will need considerable groundwork, including documenting workflows, updating knowledge bases, cleaning data and defining approval paths, with limitations and exceptions in place to cover all possibilities.
Gogia agreed. “ServiceNow is moving in the right direction because it is anchoring AI inside workflow control,” he said. “However, correctness of direction does not guarantee maturity of execution. The credibility of this strategy will be measured in regulated, exception-heavy, cross-system environments, not in idealized service desk queues.”
Moor Insights & Strategy Principal Analyst Melody Brue said, “The concern is that AI agents could become a new layer that routes around many of the apps people use today. ServiceNow aims to sit above that, coordinating agents and workflows across systems rather than just being another tool they might end up replacing.”
It’s no longer enough for AI to drive incremental efficiency, she said. Now, “it must help unlock value trapped in enterprise data and workflows. By tying AI into systems of record and orchestrated workflows, ServiceNow aims to move from static reports to agents that act on insights.”
Gogia takes it as a given that enterprises will adopt autonomous AI. The key question, he said, is whether they can govern it without destabilizing operational trust.
Another concern, said Bickley, is how enterprises will pay for it all. SaaS vendors each charge for AI services using their own variety of usage-based “AI credits,” but it’s difficult to model and predict consumption of those credits in a way that permits accurate budget forecasting, he said.
“There needs to be a clear path for legacy seat subscriptions to be migrated into AI credits,” Bickley said. “CFOs will not tolerate a variable pricing model that destroys budget predictability, and this pain point seems to go unaddressed by ServiceNow, and for that matter, the broader SaaS ecosystem as they double down on their aggressive AI launch initiatives.”
This article first appeared on CIO.com.
Anthropic buys Vercept, deepening push into AI task automation
Anthropic has acquired Seattle-based AI startup Vercept, signaling further consolidation in the emerging market for AI agents that can directly operate software applications.
Vercept, a graduate of Seattle’s AI-focused AI2 Incubator, developed cloud-based agents capable of controlling a remote MacBook, part of a broader effort to rethink how work gets done as enterprises explore AI-driven task automation beyond chat and code generation.
The acquisition follows Anthropic’s December purchase of coding agent engine Bun. Together, the moves suggest Anthropic is working to embed more sophisticated “agentic” workflows directly into its core platform.
On its website, Vercept said its product will shut down by March 25, 2026.
Scale shapes survival
Analysts say long-term success in the enterprise AI sector demands significant resources, including access to compute power, high-quality datasets, rapid product iteration, and sustained funding.
“While small startups excel in niche innovations, they often struggle to compete directly with major vendors,” said Lian Jye Su, chief analyst at Omdia. “This is similar to the general trends in the cybersecurity space, where survival is more likely through partnerships or acquisitions by giants who have scale and rich client touchpoints.”
Larger platform players are increasingly consolidating capabilities that complement their core models rather than leaving them fragmented across niche startups, according to Tulika Sheel, senior vice president at Kadence International.
“This could signal that the long-term viable path for such technologies is through strategic acquisition and embedding into broader stacks where scale, data access, and model alignment can be tightly managed,” Sheel added.
AI model companies are also becoming more vertically integrated, adding solutions that help them scale more effectively within enterprise environments.
“So, this is essentially following the ‘natural order’ of an ecosystem blossoming, where now the leading model companies look to acquire these small innovators to help scale those solutions to everyone,” said Neil Shah, VP for research at Counterpoint Research.
That said, the relatively quick wind-down of Vercept’s standalone product highlights the uncertainty enterprises face when piloting early-stage AI providers.
“CIOs should build in risk mitigation, such as starting with low-commitment experiments with clear success metrics, requiring data portability, and adopting a modular design architecture that uses APIs and open standards,” Su said.
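Su’s “modular design” advice boils down to hiding each AI vendor behind one small interface, so a pilot can be swapped out if a product winds down, as Vercept’s will. A minimal sketch, with invented class names:

```python
from abc import ABC, abstractmethod

class AgentProvider(ABC):
    """One narrow seam between your workflows and any AI vendor."""
    @abstractmethod
    def run_task(self, instruction: str) -> str: ...

class VendorA(AgentProvider):
    def run_task(self, instruction: str) -> str:
        return f"[vendor A] did: {instruction}"  # a real adapter would call the vendor's API

class VendorB(AgentProvider):
    def run_task(self, instruction: str) -> str:
        return f"[vendor B] did: {instruction}"

def automate(provider: AgentProvider, instruction: str) -> str:
    return provider.run_task(instruction)  # call sites never name a vendor

# Swapping providers after a shutdown is a one-line change:
print(automate(VendorA(), "summarize open tickets"))
```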
Talent-related concerns
The deal also unfolds against an intensifying battle for elite AI researchers. Vercept co-founder Matt Deitke left the company earlier to join Meta’s Superintelligence Lab under a reported $250 million compensation package.
For enterprise buyers, this underscores how deeply talent concentration is shaping product roadmaps at leading model providers.
“In frontier AI, talent retention is the new uptime,” said Ashish Banerjee, senior principal analyst at Gartner. “If a provider can’t keep its builders, it can’t keep its roadmap. We’re watching an ‘NBA-style’ labor market for AI. One hiring swing can change product direction in a quarter.”
Sheel said this creates both opportunities and risks for enterprise buyers: strong talent signals innovation momentum, but aggressive talent churn can also raise questions about future continuity and platform stability.
CIOs should therefore assess not only current capabilities but also the depth and retention strategies of a vendor’s research and engineering teams.
LinkedIn moves to offer skill validations in the AI era
Job seekers can list skills in LinkedIn profiles, but verifying whether they actually have them typically falls to recruiters.
But with more and more employers now seeking AI fluency in candidates, LinkedIn is taking steps to prove that job candidates really have the skills they claim.
The Verified AI Skills program unveiled in January involves LinkedIn partnering with AI tool providers to automatically validate and display a user’s proficiency directly in their certification section. The initial partners include Lovable, Replit, Relay.app, and Descript, which will track AI proficiency of candidates using their tools to create AI apps.
“GitHub, Gamma, and Zapier are coming in a couple of months,” said Pat Whelan, group product manager at LinkedIn, adding that the company is betting certification of AI skills directly from companies is more trustworthy than when users manually self-report their skills.
“Verifications are definitely a big part of our strategy…,” Whelan said. “We’re starting with AI skills because they’re very in demand in the labor market.”
Job listings referencing AI skills are on the rise. In January, the number of postings for “AI Engineers” totaled 8,765, up 1,353, according to data from industry organization CompTIA.
It’s hard for job seekers to stand out in the noisy market — and it’s hard for hiring managers to find highly qualified candidates. “This is an opportunity for people to say, ‘I’m using these tools every day, I’m getting better at them,’” Whelan said.
The initial partnerships are with app builders, which allows candidates to show they are experimenting and building apps. IT industry experts said job seekers who can use and build with AI tools will always have a leg up on their less-skilled colleagues.
If a candidate reaches the interview stage of a job hunt, being able to discuss what they tried, what they learned, and what failed proves curiosity and real experience, said Matthew Blackford, vice president of engineering at RWS.
“Strong candidates can talk honestly about something they tried, what did not work, and what they learned,” he said, adding “these skills apply equally to engineers, product managers, and technology leaders.”
AI-related roles are also growing, said Bekir Atahan, vice president at Experis. “Those postings increased more than 50% in January, and software developer positions that include AI skills grew at an even faster pace,” Atahan said.
Enterprise AI projects are moving from exploration and proofs of concept to more practical real-world implementation — and that is creating steady demand for multidisciplinary technologists, Atahan said.
How it works at Lovable
LinkedIn partner Lovable can validate a candidate’s app-building ability in a user profile. First, users go to their Lovable workspace, where the proficiency score is visible in Account Settings.
From there, users can press a “connect to LinkedIn” button, which authenticates their LinkedIn credentials through the LinkedIn API. The resulting score is then listed on the person’s LinkedIn profile.
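Under the hood, that “connect” step is typically a standard OAuth 2.0 authorization-code handshake. The sketch below shows the general shape; the endpoint, scope, and client values are placeholders, not LinkedIn’s or Lovable’s documented ones:

```python
from urllib.parse import urlencode

# Step 1 of a generic OAuth 2.0 authorization-code flow: send the user to
# the identity provider to approve access.
AUTH_URL = "https://identity.example.com/oauth/authorize"  # placeholder endpoint
params = {
    "response_type": "code",
    "client_id": "YOUR_CLIENT_ID",               # issued when registering the app
    "redirect_uri": "https://tool.example/callback",
    "scope": "write_certifications",             # hypothetical scope
    "state": "random-csrf-token",                # protects against CSRF
}
print(f"{AUTH_URL}?{urlencode(params)}")
# After approval, the tool exchanges the returned code for an access token,
# then posts the proficiency score to the user's profile.
```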
“You can always disconnect it from your Lovable so you have full control of whether it’s going to be visible or whether it’s not going to be visible,” said Elena Verna, head of growth at Lovable.
Lovable currently sees more than 135,000 projects built per day, with more than 30 million software apps built so far, and the platform draws more than 8 million visitors a day.
“This is not so much for us. This is for people to be able to land better opportunities and for them to expand their horizons as job markets are changing so rapidly,” Verna said.
A new twist for job seekers
Job candidates used to share their GitHub portfolios to showcase the code they’ve written. In the same vein, direct validation of AI skills in LinkedIn profiles can be valuable for job seekers.
“If they have built an app or a repository where they can showcase all their apps, and a tool like LinkedIn is able to demonstrate that — that should be valuable,” said Deepak Seth, director analyst at Gartner.
An employer might be more interested in skills beyond just a job seeker’s ability to create an app, which itself is a low bar to cross. That’s because AI tools are becoming easier to use.
Companies are more interested in hiring AI-fluent candidates that can solve business problems. “Customer churn is a big problem for us, and this guy built an app which helps minimize customer churn. That would be a bigger thing than, ‘This guy built an app,'” Seth said.
Some features that were difficult or impossible three months ago are now within reach. Verna said Lovable’s certification scoring will evolve alongside the product to reflect growing complexity.
“We are going to start adding additional levels of certification of complexity and depth of use and feature breadth… We plan to [continually] evolve the score,” Verna said.
US orders diplomats to push back on data sovereignty
The US government has ordered its diplomats to actively oppose other countries’ attempts to introduce so-called data sovereignty laws that restrict how and where foreign technology companies can store and handle citizens’ data, according to Reuters.
In an internal memo from Secretary of State Marco Rubio, the US describes such rules as a threat to free data flows, AI development, and cloud services. The Trump Administration believes that data localization could increase costs, create cybersecurity risks, and give governments greater control over information.
At the same time, support for data sovereignty is growing, especially in Europe, where there are concerns about privacy, surveillance, and US dominance in AI and tech. The EU’s GDPR is mentioned in the document as an example of rules that the US considers unnecessarily restrictive.
Diplomats have now been tasked with monitoring and influencing international proposals that restrict cross-border data flows, as well as promoting alternative frameworks that support the free transfer of data between countries.
US DoD to Anthropic: compromise AI ethics or be banished from supply chain
A growing rift between the US Department of Defense (DoD) and Anthropic over how AI can be used by the military has led to Defense Secretary Pete Hegseth issuing a blunt ultimatum: work with us on our terms or risk being banned from Pentagon programs.
According to news site Axios, Hegseth gave Anthropic until Friday, February 27 to agree to its terms during a tense meeting this week. If no agreement is reached, the company would risk being deemed a “supply chain risk,” with Hegseth even threatening to invoke the Cold War-era Defense Production Act to compel cooperation, the report said.
The DoD’s view is that it should be free to use Anthropic’s AI for “all lawful purposes,” regardless of ethical boundaries set by the company itself. Anthropic, by contrast, wants to set narrower guardrails.
“The Department of War’s [DoD’s] relationship with Anthropic is being reviewed. Our nation requires that our partners be willing to help our warfighters win in any fight. Ultimately, this is about our troops and the safety of the American people,” Chief Pentagon Spokesman Sean Parnell told Semafor last week.
The extraordinary stand-off appears to have been prompted by a series of conversations between Anthropic and DoD officials which have generated rising levels of friction. These include a report that Anthropic CEO Dario Amodei had insisted that the DoD respect limits the company had placed on how its AI could be used in certain military contexts.
Matters came to a head in early January when the US military used Anthropic’s Claude LLM in conjunction with technology from Palantir to help plan and execute the operation to capture former Venezuelan president Nicolás Maduro.
Anthropic staff are believed to have raised questions internally about whether an operation in which dozens of people were killed was consistent with the guardrails set for Claude as part of its recently overhauled safety and ethics Constitution.
Despite this, the fine details of Anthropic’s limits aren’t always clear. Some restrictions are spelled out in its September 2025 Acceptable Use Policy (AUP): for example, that its AI not be used for mass domestic surveillance, to compromise critical infrastructure, or to design or develop weapons.
Beyond that, Amodei himself has alluded to limits in statements and essays or through reported conversations with officials modelling hypothetical situations. This includes his recent call for more regulation of AI: “I think I’m deeply uncomfortable with these decisions [on AI] being made by a few companies, by a few people,” Amodei told the CBS News TV newsmagazine 60 Minutes in November. “And this is one reason why I’ve always advocated for responsible and thoughtful regulation of the technology.”
Supply chain implications
If Hegseth were to make good on the threat to ban Anthropic, this would have major implications for the DoD and its long supply chain. In principle, companies that are part of the broader Defense Industrial Base (DIB) would have to stop using Anthropic’s AI platform in all its forms, including, presumably, the Claude Code Security cyber system launched only this week.
This seems highly unlikely. Banning a US company would be unprecedented; such an action has previously been reserved for a small number of foreign companies. Anthropic’s Claude is also currently one of only two frontier AI models to have achieved Impact Level 6 (IL6) certification for use on classified networks, having been joined only this week by xAI’s Grok.
Ripping Anthropic out altogether is unthinkable, not least because it is already tightly integrated with Palantir’s systems, which are also critical to the DoD. More likely, the DoD will simply compel Anthropic to concede ground by invoking the Defense Production Act, though the downside is that this might sour cooperation in the longer run.
Alternatively, Anthropic will continue to insist on some limits, such as, for example, around using Claude to enable autonomous weapons, while the DoD will simply act as though they will be relaxed at some future point, allowing for an uneasy short-term truce.
The DoD-versus-Anthropic conflict echoes the confrontation between the FBI and Apple more than a decade ago over access to iPhones after the 2015 San Bernardino mass shooting. In that case, Apple refused to give ground, resulting in a long-running legal struggle. The current US administration seems less willing to be patient.
This article originally appeared on CIO.com.