Computerworld.com [Hacking News]

Making technology work for business

Intel sets sights on data center GPUs amid AI-driven infrastructure shifts

4 February 2026 - 11:39

Intel is making a new push into GPUs, this time with a focus on data center workloads, as the chipmaker looks to reestablish itself in a market increasingly shaped by AI-driven demand and dominated by Nvidia.

CEO Lip-Bu Tan said that after hiring a senior GPU architect, the company is working directly with customers to define requirements, signaling a more demand-driven approach as enterprises and cloud providers weigh their options for accelerated computing, according to a Reuters report.

Intel’s push comes as demand for AI accelerators reshapes data center spending, leaving enterprises and cloud providers with fewer GPU options and longer procurement timelines.

This is not Intel’s first foray into discrete graphics. The difference now is that it’s tying its GPU ambitions more closely to its data center roadmap and broader manufacturing strategy, pairing closer customer engagement with advanced process technology to gain traction.

Intel’s enterprise advantage

Intel’s tight integration of CPUs, GPUs, networking, and memory coherency gives it an edge in enterprise inference, hybrid cloud, and regulated or on-prem environments, where cost control and operational simplicity matter more than peak performance, said Manish Rawat, a semiconductor analyst at TechInsights.

In these segments, Intel has an opportunity to meaningfully limit Nvidia’s expansion and reduce customer dependence at the infrastructure level.

Supply chain reliability is another underappreciated advantage. Hyperscalers want a credible second source, but only if Intel can offer stable, predictable roadmaps across multiple product generations.

However, the company runs into a major constraint at the software layer.

“The decisive bottleneck is software,” Rawat said. “CUDA functions as an industry operating standard, embedded across models, pipelines, and DevOps. Intel’s challenge is to prove that migration costs are low, and that ongoing optimization does not become a hidden engineering tax.”

For enterprise buyers, that software gap translates directly into switching risk.

Tighter integration of Intel CPUs, GPUs, and networking could improve system-level efficiency for enterprises and cloud providers, but the dominance of the CUDA ecosystem remains the primary barrier to switching, said Charlie Dai, VP and principal analyst at Forrester.

“Even with strong hardware integration, buyers will hesitate without seamless compatibility with mainstream ML/DL frameworks and tooling,” Dai added.

Lian Jye Su, chief analyst at Omdia, said Intel will need to focus on delivering performance and software that are accepted and certified by the developer community.

While CUDA dominates with extensive libraries, tools, and developer mindshare, developers may be willing to adopt Intel GPUs if the company “can create a GPU that can provide tools and SDKs that are developer-friendly and address cutting-edge AI applications,” Su added.

From an enterprise buying perspective, this means Intel’s challenge is less about hardware ambition and more about overcoming deeply entrenched platform lock-in.

“Performance and pricing advantages alone will fall short without seamless developer tools and broad compatibility,” said Prabhu Ram, VP of the industry research group at Cybermedia Research. “Even with tight GPU-CPU-networking integration offering efficiency gains, CUDA’s entrenched lock-in remains the major barrier for enterprises that seek to reduce reliance on Nvidia.”

Rising China challenge

The rise of Chinese alternatives adds urgency to Intel’s effort to reestablish itself as a credible second source for Western enterprises.

In the Reuters interview, Tan said he was surprised to see Huawei hiring top-tier chip designers despite US restrictions on access to advanced tools, warning that China could leapfrog established players if Western companies are not careful.

“Huawei’s significance isn’t about near-term benchmark parity; it’s about trajectory,” Rawat said. “Progress on EDA independence may be slow, but directionally it’s real. High talent density is compensating for tool gaps, while parallel ‘good-enough’ design flows steadily dilute the effectiveness of US choke points.”

According to analysts, Huawei does not need to outperform Nvidia globally to pose a strategic challenge. Locking in China’s domestic data center demand, reducing dependence on Western supply chains, and building closed-loop learning and optimization cycles at home could be enough to reshape competitive dynamics over time.

Category: Hacking & Security

You’ll soon be able to block all AI features in Firefox

3 February 2026 - 18:43

In December, Mozilla CEO Anthony Enzor-DeMeo attracted a lot of attention by announcing that Firefox would become a “modern AI browser.”

In order not to alienate users, the company also promised a new setting that would make it possible to turn off some or all of the AI features, including the chatbot in the sidebar, automatic translations, tab grouping, and link previews.

It is now clear that the new AI settings will be added to Firefox 148, a version that will be rolled out to the public on Feb. 24. Mozilla unveiled the feature on Monday.

Mozilla also published a YouTube clip showing how the setting is supposed to work.

Category: Hacking & Security

HP CEO Enrique Lores leaves to lead PayPal

3 February 2026 - 18:40

Enrique Lores, president and global CEO of HP for more than six years, is leaving the company to take up a similar position at online payment giant PayPal on March 1. Bruce Broussard, a member of HP’s board of directors since 2021, has been appointed interim CEO, although the company said in a statement that it is already looking for a permanent replacement for the Spanish executive.

“As Interim CEO, Mr. Broussard will advance the company’s strategic priorities by leveraging his proven operational, financial, and business management expertise as well as his deep knowledge of HP’s business,” the statement said, noting that Broussard is an executive with more than 30 years of experience in leadership positions at publicly traded companies, such as healthcare company Humana.

Lores is no stranger to PayPal, having served on its board of directors for nearly five years and as its chairman since July 2024. The Spaniard replaces former CEO Alex Chriss, who leaves the company in a delicate situation; in fact, the group has just announced a 15% drop in revenue in its last fiscal quarter. Furthermore, Lores’ appointment, as reported in a statement issued today by PayPal, “follows a detailed evaluation conducted by the Board of Directors on the current position of the company relative to its competition and the broader industry landscape. While some progress has been made in a number of areas over the last two years, the pace of change and execution was not in line with the Board’s expectations. The Board is confident that the appointment of Lores, a seasoned executive with more than three decades of technology and commercial experience, will provide the leadership necessary to lead PayPal into its next chapter.”

American dream

Enrique Lores is one of those paradigmatic cases of the fabled ‘American dream,’ the idea that anyone can achieve success through hard work. Born in Madrid, Lores is an electrical engineer from the Polytechnic University of Valencia, which awarded him an honorary doctorate in 2024 for his professional career. He began at HP in 1989, 36 years ago, starting as an intern and gradually rising to senior positions in printing, personal systems, and business and industrial solutions.

In 2015, when the historic company decided to split into two companies, one focused on personal devices (PCs) and printing and the other, called HPE, on systems and infrastructure for businesses, it was Lores himself who led the Separation Management Office. More than six years ago, he became president and CEO of HP. As CEO, he dealt with challenges such as an attempted takeover by competitor Xerox, which ultimately did not go through. In recent years, he has worked to adapt the PC world to advances in artificial intelligence. According to Gartner data for 2025, HP is the second-largest player in the global PC market, with a 21.5% share, surpassed only by Lenovo (27.2%) and followed by Dell Technologies (16.5%). In printing, according to data from IDC, Gartner, and Canalys, the company is number one in the world.

Lores, one of the highest-paid executives in the entire technology industry, announced his new professional direction today on his LinkedIn account. “I first joined HP 36 years ago as an intern engineer. Since then, HP has become part of my identity and my family’s history: my wife Rocío and I built our life in Palo Alto so that I could be part of the HP team, and my three children have only known life with HP,” he writes on the social platform, where he summarizes his professional career.

HP, he notes, has given him “the opportunity to grow tremendously.” It is a company, he emphasizes, that he defines by “its people.” “HP is a true school of talent, guided by a culture of innovation, collaboration, and shared dedication to making a positive impact. I am incredibly proud of what the HP team has achieved, and I have every confidence that Bruce Broussard and the incredible HP leadership team will propel the company forward and lead the future of work.”

He says he is looking forward to the “unique opportunity to serve as CEO of PayPal and have a lasting impact on the global payments industry.” “I am excited to get started, knowing that I am leaving behind a team that will drive HP’s success.”

Along with the changes at the top, HP has reiterated its forecasts for the first quarter of its fiscal year and for full fiscal year 2026. The company expects diluted earnings per share under generally accepted accounting principles (GAAP) of between $0.58 and $0.66, and non-GAAP diluted earnings per share of between $0.73 and $0.81. For fiscal year 2026, HP continues to expect GAAP diluted net earnings per share of $2.47 to $2.77 and non-GAAP diluted net earnings per share of $2.90 to $3.20. It also expects to generate free cash flow of between $2.8 billion and $3 billion for the year.

Category: Hacking & Security

OpenAI drops Codex ‘agentic AI’ as a macOS app

3 February 2026 - 18:34

OpenAI has introduced Codex, a desktop application for Macs that lets users run several AI agents simultaneously, making it suitable for much more complex tasks than ChatGPT alone.

A software agent rather than a chat tool, Codex is particularly valuable to software developers who could use the service’s support for multiple AI agents to edit code, build simple apps, manage projects or run complex automations and workflows. Agents can run for up to 30 minutes independently before returning completed code. 

“I built an app with Codex last week,” wrote OpenAI co-founder Sam Altman. “Then I started asking it for ideas for new features and at least a couple of them were better than I was thinking of.” 

OpenAI shared one project in which Codex built a racing game from one prompt. It then began iterating on the original design, identifying and adding missing features, fixing bugs, and more.

AI wants to make your code

Agentic coding tools seem to be the emerging challenger field in AI. Anthropic’s Claude Code has been generating a lot of interest in the last few weeks. GitHub’s Copilot is already in daily use across the developer community; Google has Jules; Amazon offers Q; there’s, of course, Microsoft Copilot; and there are many other developer-focused AI solutions, making this a hotly contested space.

OpenAI evidently hopes it can turn its current media buzz into a hook to bring developers aboard its own new offering.

To mark the launch and encourage use, the company has doubled all rate limits for paid plans for two months. When it comes to desktop app releases, the company tends to target the Mac because the platform has a huge and active concentration of developers, making it a good place to build new kingdoms. (It plans to introduce Windows and Linux support in the future.) Already, OpenAI claims Codex has been used by more than 1 million developers.

Some features Codex provides

The Mac app gives users the ability to edit code, run workflows, and manage agentic tasks through a simple, ChatGPT-like UI. Agents are organized within separate threads and projects, which means you can move between tasks and have work running while you do something else; as the project status changes, you’ll see notes in the interface.

You can also deploy a large assortment of pre-programmed “skills” and automations, which let you use Codex to do specific tasks. The introduction of a Mac app means Codex is able to access native app features and workflows that aren’t always easily available from within a browser. 

That’s not to say OpenAI has knocked the ball out of the park with this beast, at least at this stage; initial Reddit feedback suggests several drawbacks, including speed, coding errors, poor-quality output, the introduction of bugs, and a lack of contextual understanding of intent compared with Claude. There have also been claims that Codex makes heavy use of background processes, which slows performance of the host Mac. And there continue to be concerns around the security of both the app itself and the software it creates.

Facing serious competition, OpenAI will need to respond to those challenges if the company truly wants Codex to become the best available coding agent.

What’s in this for Mac users?

The most important potential is for developers who use Codex to supplement their work in Apple’s Xcode, especially because Xcode isn’t particularly good at serving as a high-level agent command center for managing background tasks. The move also positions Codex as a robust coding companion, putting Apple Intelligence and the future Google Gemini partnership under some pressure.

OpenAI clearly also hopes to challenge Apple’s developer environment, as evidenced by Codex and by the company’s recent acquisition of Alex Codes, a third-party tool that added AI features to Xcode. Apple, however, already lets developers connect their Anthropic Claude account to Xcode to access AI-driven coding tools, and will no doubt supplement that arrangement with Gemini-based coding features eventually. 

Meanwhile, Apple continues to work on Apple Intelligence. And while it is cooperating with OpenAI for now, there is little commitment, and we can all see a competitive conflict point coming as former Apple designer Jony Ive’s all-new OpenAI product edges slowly into being.

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.

Category: Hacking & Security

By whatever name — Moltbot, Clawd, OpenClaw — this uber AI assistant is a security nightmare

3 February 2026 - 08:00

Moltbot, the cutting-edge, open-source AI “sidekick” formerly known as Clawdbot, recently rebranded as OpenClaw and is now crazy popular. It came out of nowhere to become the first viral AI agent, racking up 70,000 GitHub stars in a month.

Its creator, Peter Steinberger, claims it’s “the AI that actually does things.”

Yeah, well there are a lot of AI chatbots and agents that do things. Maybe they do things badly, mind you, but used carefully, they can do real work. 

OpenClaw’s claim to fame is that it can take real-world actions on your behalf. Instead of living purely in the cloud, the agent runs on a user’s own hardware, often on Mac minis, though you can run it on Windows, Linux, or what have you. Under the hood, it connects to one or more large language models (LLMs) via application programming interfaces (APIs) and exposes a set of “channels” and “tools” that let it see and act across a digital life: reading email, running shell commands, browsing the web, arranging your travel schedule, and running your apps for you.

The project began life as Clawdbot, a locally run AI agent fronted by a cartoon space lobster mascot called Clawd and wired to Anthropic’s Claude models through various “skills” and connectors. 

Via these connectors, users typically give OpenClaw natural-language tasks such as “clear my inbox,” “book my flight,” or “summarize my meetings.” Under the hood, the agent uses channels to receive those instructions and tools to execute them, wiring AI reasoning from Claude and other models into concrete actions such as checking you in for flights, generating or editing code, reconciling calendars, or spinning up scripts and dashboards.

A key part of OpenClaw’s appeal is its long-term memory. It uses files like USER.md and IDENTITY.md to store facts about you and the agent’s own persona. This enables it to remember preferences, past tasks, and ongoing projects in a way that feels more like a persistent colleague than a stateless chatbot. The surrounding ecosystem of community “skills” on GitHub extends those capabilities further, from browser automation and auto-updating to specialized workflows for documentation, research, and coding.

Sounds great! Go ahead, search online for examples of people doing neat tricks with it, and you’ll find bunches. There’s even a “social” network for the bots called Moltbook, where agents act like idiots (like most social networks I can think of) and occasionally share tips and tricks with each other. 

There are only a few itty-bitty, teeny-weeny problems with it. To do useful things like reserving your hotel room, getting your pizza delivered, or cleaning up your e-mail box, it needs your name, password, credit-card number — and all the other things any crook also wants. 

Get the picture? OpenClaw is a security black hole that’s useful right up to the point where all your important data goes bye-bye. 

As Cisco put it, “Security for OpenClaw is an option, but it is not built in.” The product documentation itself admits: “There is no ‘perfectly secure’ setup. Granting an AI agent unlimited access to your data (even locally) is a recipe for disaster if any configurations are misused or compromised.”

In particular, as the AI-friendly security company Snyk puts it, “If there’s one security concern that keeps AI security researchers up at night, it’s prompt injection. This vulnerability class represents perhaps the largest attack surface for any AI agent connected to external data sources, which, by definition, includes personal AI assistants that read emails, browse the web, and process messages from multiple channels.”

Let me spell it out for you. Using OpenClaw is stupid.

If you insist on trying it out, stick it on a locked-down virtual machine so it can’t access any — and I mean any — of your personal and work data. Do not feed it any of your personal data. Yeah, it will be a heck of a lot less useful, but that’s the only way it will be safe to use. Otherwise, you’re just asking to be hacked, and when that happens, OpenClaw won’t be able to do much, if anything, to fix the mess.

Category: Hacking & Security

Are you ready for Apple-as-a-Service?

2 February 2026 - 18:24

How much would you pay each month for a Mac, iPhone, iPad, Apple home accessories and a handful of Apple services, including health and home security services? More to the point, how many of Apple’s 2.5 billion users would be willing to pay for the Apple Plus Services suite, and how much would this generate each month in high-yield, high-margin predictable income for the former hardware company?

Apple-as-a-Service? It’s possible

The Apple-as-a-Service idea has hovered at the edge of Apple speculation for years, and while Apple has flirted with the concept (the iPhone Upgrade Program), it’s never quite found a way to combine hardware and software in a services-led bundle. But things can change. Bloomberg’s Mark Gurman last year suggested Apple had considered offering a hardware subscription service but shelved the idea, fearing the impact on “normal” hardware sales.

What’s clear at this stage is that it would be easier than ever for Apple to pivot its entire business into services, particularly as more than 1 billion people already use at least one of its services. Breaking those use patterns down, Apple recently shared some glimpses into the performance of the services segment:

  • 900 million active iCloud+ subscribers.
  • 850 million active App Store users.
  • 58 million Apple TV+ subscribers.
  • 15 million merchants who accept Apple Pay.

  • Apple News is the number one news app in the US, Canada, and Australia.

The company followed this news up with the introduction of the phenomenal-value Creator Studio suite, which unleashes industry-leading tools for audio and video production, Final Cut Pro and Logic — along with apps for imagery and productivity — for $12.99 a month. Apps that enable creative expression are arguably fundamental to the Mac, a purpose that runs deep in Apple’s DNA.

When it decided to introduce its leading creative apps as a service, Apple moved the needle on both software and services sales, taking a bite out of one segment to benefit another. The way Apple sees it, maybe it doesn’t really matter how Apple sells something, so long as the market takes it and that high-margin (c. 70%) services revenue keeps rolling in. (The segment was worth $30 billion in the most recent quarter.)

The myth of ownership

One argument against hardware-as-a-service is that people like to own their stuff. That’s true. But when you stop to think about it, the truth is that we already use hardware we don’t really own — that iPhone in your pocket you acquired with a carrier deal; the credit card debt you’re paying for your current Mac; the iPads for your field services team you acquired with help from your bridging loan; the Mac you’re about to purchase for a monthly cost direct from the Apple Store.

Even four years ago, “Almost half iPhone owners already finance their iPhone purchase, paying monthly for a new phone,” said CIRP Partner and Co-Founder Josh Lowitz. “And about one-third trade-in their old phone when they buy a new one. So, a significant portion of the user base is accustomed to never owning a phone, instead basically leasing it.”

We use kit we don’t quite own all the time. An Apple hardware-as-a-service offering could accommodate that, offering outright ownership as one outcome once an agreed-upon number of payments has been made. It is the second outcome, never owning the hardware but always running the latest of it, that would tempt users.

Think about how attractive a basic subscription might be: a new Mac every two or three years and a new iPhone every other year, with AppleCare rolled in. Reflecting that kind of thinking, many people already use Apple Trade In, sending their old hardware back to Apple for parts in exchange for money off the next piece of equipment. It’s a model that shows many are already accustomed to treating hardware as if it were a subscription item.

Improving the experience

But why would Apple shift from hardware sales toward services? To build margins, of course, and to develop a predictable income stream, but also to better control the user experience. Apple can continue to focus on improving its hardware, and because it knows exactly what hardware its customers are running, it can target software and services improvements at that hardware.

Think of it as an extension of Apple’s “whole widget” approach, one encompassing hardware, software, services, and silicon with management of the ecosystem itself. 

There are solid environmental reasons for this ecosystem, as it becomes much easier to call in old kit and arrange for it to be recycled, feeding those cannibalized components straight into the circular manufacturing system we know Apple wants to build. It gets a lot easier to build a system like that once you know almost exactly how many recycled components you’ll have to work with in any week; that’s the kind of granular insight a subscription hardware offering might provide.

The numbers game

Would consumers be willing to accept an ownership model like this? To some extent that doesn’t matter. It doesn’t need to be adopted by every consumer to succeed. With 2.5 billion people already one email away from Cupertino, Apple only needs to switch a small fraction of that group over to a hardware/software/services subscription model to generate predictable revenue.

As a thought experiment, imagine that just 1% of those Apple users (25 million) stepped into a services/Mac/iPhone deal at $129/month; that’s $3.2 billion in revenue every month. While I think the cost would be higher (because it’s Apple), that gives you some sense of the revenue the company could raise by adding a subscription option when ordering products at the Apple Store.
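The arithmetic behind that thought experiment is easy to check; here is a quick sketch, where the user base, the 1% adoption rate, and the $129 price are the article's hypothetical figures, not Apple data:

```python
# Back-of-the-envelope check of the subscription thought experiment.
# All three inputs are hypothetical figures from the article.
users = 2_500_000_000      # Apple's stated user base
adoption_pct = 1           # imagined 1% take-up
price_per_month = 129      # imagined bundle price in USD

subscribers = users * adoption_pct // 100        # integer math, no rounding
monthly_revenue = subscribers * price_per_month

print(subscribers)       # 25000000
print(monthly_revenue)   # 3225000000, i.e. roughly $3.2 billion a month
```

Even small shifts in the assumed price or adoption rate move the result by hundreds of millions of dollars a month, which is the point of the exercise.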

As a numbers game, Apple starts with the kind of advantage that should make Wall Street nod: investors already value tech firms with high recurring revenue more highly than those dependent on cyclical hardware sales.

Who knows, if you could get hold of a Mac and iPhone in such a way for a monthly fee, Apple might find it even easier to upsell consumers to higher specified options through its recently updated Apple Online Store, generating a few more dimes each month with every consensual memory upgrade.

Plus, of course, with user satisfaction numbers in the high 90s, Apple would likely find it even easier to lock its customers into the total Apple experience. That potentially includes a highly secure Apple HomeKit AI-augmented and private home security service with which to protect all this kit, which some believe the company might announce during the year.


Category: Hacking & Security

Why AI adoption keeps outrunning governance — and what to do about it

2 February 2026 - 14:07

Across industries, CIOs are rolling out generative AI through SaaS platforms, embedded copilots, and third-party tools at a speed that traditional governance frameworks were never designed to handle. AI now influences customer interactions, hiring decisions, financial analysis, software development, and knowledge work — often without being formally deployed in the classical sense.

The result is a widening gap between rapid AI deployment and responsible-use protections. Organizations adopt AI faster than they can govern its usage, then scramble to retrofit controls after something goes wrong.

Interviews with five practitioners — each working at a different pressure point of enterprise AI — reveal why this gap persists and what leaders must do to close it before regulators, auditors, or customers force the issue.

Why governance breaks the moment AI hits real workflows

The first problem is structural. Governance was designed for centralized, slow-moving decisions. AI adoption is neither. Ericka Watson, CEO of consultancy Data Strategy Advisors and former chief privacy officer at Regeneron Pharmaceuticals, sees the same pattern across industries.

“Companies still design governance as if decisions moved slowly and centrally,” she said. “But that’s not how AI is being adopted. Businesses are making decisions daily — using vendors, copilots, embedded AI features — while governance assumes someone will stop, fill out a form, and wait for approval.”

That mismatch guarantees bypass. Even teams with good intentions route around governance because it doesn’t appear where work actually happens. AI features go live before anyone assesses training data rights, downstream sharing, or accountability.

What breaks first, Watson said, is data control and visibility. Employees paste sensitive information into public genAI tools, and data lineage disappears as outputs move across systems. “By the time leadership realizes what’s happening,” she said, “the data may already be gone in ways you can’t undo.”

What to do: CIOs must move from model governance to usage governance. You may not control the model, but you can control how it’s used, what data it touches, and where outputs flow. Governance has to be embedded as tollgates inside workflows, not in policy documents that are reviewed after the fact.
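What an in-workflow tollgate might look like can be sketched in a few lines: a check that inspects a prompt before it leaves the organization. This is an illustrative sketch only; the detection patterns and category names are hypothetical, not from any product or framework mentioned here.

```python
# Minimal sketch of a usage-governance tollgate: screen prompts for
# sensitive data before they reach an external genAI tool.
# Patterns and category names are hypothetical examples.
import re

BLOCKED_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),              # US SSN shape
    "api_key": re.compile(r"\b(sk|key)-[A-Za-z0-9]{16,}\b"),  # key-like token
}

def tollgate(prompt: str):
    """Return (allowed, violations) for a prompt about to leave the org."""
    violations = [name for name, pattern in BLOCKED_PATTERNS.items()
                  if pattern.search(prompt)]
    return (not violations, violations)

allowed, why = tollgate("Summarize this: customer SSN 123-45-6789")
print(allowed, why)   # False ['ssn']
```

The value of a gate like this is placement, not sophistication: it runs at the moment of use, rather than relying on a policy document someone reads after the fact.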

Why legacy data governance struggles under genAI

Even where governance exists, it’s often built on assumptions that no longer hold. Fawad Butt, CEO of agentic healthcare platform maker Penguin Ai and former chief data officer at UnitedHealth Group and Kaiser Permanente, argues that traditional data governance models are structurally unfit for generative AI.

“Classic governance was built for systems of record and known analytics pipelines,” he said. “That world is gone. Now you have systems creating systems — new data, new outputs, and much is done on the fly.” In that environment, point-in-time audits create false confidence. Output-focused controls miss where the real risk lives.

“No breach is required for harm to occur — secure systems can still hallucinate, discriminate, or drift,” Butt said, emphasizing that inputs, not outputs, are now the most neglected risk surface. This includes prompts, retrieval sources, context, and any tools AI agents can dynamically access.

What to do: Before writing policy, establish guardrails. Define no-go use cases. Constrain high-risk inputs. Limit tool access for agents. And observe how systems behave in practice. Policy should come after experimentation, not before. Otherwise, organizations hard-code assumptions that are already wrong.
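The "limit tool access for agents" guardrail can be as simple as deny-by-default dispatch against an explicit allow-list. A minimal sketch, with hypothetical tool names:

```python
# Deny-by-default tool dispatch for an AI agent: only tools on the
# allow-list may run. Tool and handler names are hypothetical examples.
ALLOWED_TOOLS = {"search_docs", "summarize"}

def dispatch(tool_name, handlers):
    """Run a tool only if it is explicitly allow-listed."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not allow-listed")
    return handlers[tool_name]()

handlers = {
    "search_docs": lambda: "3 matching documents",
    "run_shell": lambda: "would execute arbitrary commands",
}
print(dispatch("search_docs", handlers))  # 3 matching documents
# dispatch("run_shell", handlers) raises PermissionError
```

The design choice matters more than the code: registering a tool is not the same as permitting it, so new capabilities stay inert until someone makes a deliberate governance decision.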

Why vendor AI is where governance collapses

If internal AI governance is weak, third-party AI governance is worse. Richa Kaul, CEO of Complyance, works with global enterprises on risk and compliance management. She sees a sharp divide: while companies are relatively mature in governing AI they build themselves, they are much less prepared when AI arrives embedded in vendor products.

“What we’re seeing is use before governance,” she said. “And it’s often governance by committee — 10 to 20 people reviewing vendors one by one without a shared baseline of questions.” Too often, enterprises ask open-ended questions about AI privacy and accept reassuring answers — what Kaul calls “happy ears.”

Mature governance shows up in specific questions. Is customer data used to train models? Is it reused across clients? Is the LLM accessed via an enterprise deployment or a consumer interface?

“A vendor using Azure OpenAI has a much lower risk profile than one calling ChatGPT directly,” Kaul said.

What to do: CIOs should start with a basic but overlooked step: scrutinize vendor subprocessor lists. Cloud providers are well understood. LLM providers are not. AI has created a second, poorly mapped subprocessor layer — and that’s where governance breaks down.

Why bans fail and incidents repeat

Technology controls alone do not close the responsible-AI gap. Behavior matters more. Asha Palmer, SVP of Compliance Solutions at Skillsoft and a former US federal prosecutor, is often called in after AI incidents. She says the first uncomfortable truth leaders confront is that the outcome was predictable.

“We knew this could happen,” she said. “The real question is: why didn’t we equip people to deal with it before it did?” Pressure to perform is the root cause. Employees use AI to move faster and meet targets — just as they have in every compliance failure from bribery to data misuse.

Blanket bans on genAI do not work. “If you take away responsible use,” Palmer said, “people will use it irresponsibly — in secret, in ways you can’t govern.”

What to do: Shift from awareness training to behavioral learning. Palmer calls it “moral muscle memory,” a scenario-based practice that teaches people to stop, assess risk, and choose a better action under pressure.

Regulators and auditors look for evidence that the right people have received the right training for the risks they actually face. One-size-fits-all AI literacy is a red flag.

Why confidence is not enough when auditors arrive

The final gap appears when organizations are asked to prove their governance works. Danny Manimbo is ISO & AI Practice Leader at Schellman, an attestation and compliance services provider. He sees the same failure pattern repeatedly.

“Organizations confuse having policies with having governance,” he said. “Responsible AI principles don’t matter if they don’t influence real decisions.”

Auditors might start with a simple request: show us a documented AI risk-based decision that changed an outcome. Mature governance leaves fingerprints — including delayed deployments, rejected vendors, and constrained features. Immature governance produces vague assurances.

“The most expensive governance work is the work you try to do after deployment,” Manimbo warned. Walking back data lineage, accountability, and intended purpose is extraordinarily difficult once systems are live.

What to do: Treat AI governance as a management system, not a compliance exercise. Standards like ISO/IEC 42001 work only when they connect risk management, change control, monitoring, and internal audit into a continuous loop.

You can tell governance is working when it changes business decisions, not when it produces documentation.

Closing the responsible AI gap

Across all five interviews, one theme recurs: the responsible AI gap is not primarily a technology failure. It’s a governance timing failure. Controls are being designed for yesterday’s systems while AI is already shaping today’s decisions.

Several of the sources stressed that CIOs should stop framing responsible AI as a future-state program and start treating it as an operational hygiene issue — closer to identity management or financial controls than to ethics committees.

Watson from Data Strategy Advisors emphasized that visibility is the first non-negotiable step. Enterprises that cannot enumerate where AI influences decisions — especially through SaaS tools — are already exposed. “You can’t govern what you can’t see,” she noted, warning that many companies still lack even a basic inventory of AI-affected workflows.

At Penguin Ai, Butt reinforced that point from a data perspective, arguing that inventories must shift from platforms to systems-in-context. An AI feature embedded in HR software and the same feature embedded in marketing automation do not carry the same risk. Treating them as identical is a governance illusion.

Complyance’s Kaul added that the same principle applies externally. Vendor AI governance breaks down when enterprises accept generic assurances instead of mapping where their data actually flows. In her experience, simply forcing teams to trace AI subprocessors exposes risks that executives did not realize they had accepted.

To close the gap, CIOs must:
  • Embed governance where work happens — and before it happens, not after.
  • Shift focus from models to usage and inputs.
  • Treat vendor AI as a first-class risk domain.
  • Replace bans with behavioral training.
  • Demand governance that explains how decisions are made.

The organizations that do these things will not only avoid regulatory trouble; they will move faster — and with more confidence.

    Palmer from Skillsoft focused on the human layer that sits underneath all of this. Governance frameworks collapse, she argued, when they assume people will slow down under pressure. “Pressure doesn’t disappear,” she said. “You have to train for it.” Organizations that fail to do so should not be surprised when employees improvise with AI in unsafe ways.

    Finally, Schellman’s Manimbo offered a blunt litmus test: if governance has never delayed a deployment, rejected a vendor, or constrained a feature, it probably does not exist in practice. “Governance has to leave fingerprints,” he said. Otherwise, it is indistinguishable from aspiration.

    Taken together, the interviews suggest that closing the responsible AI gap does not require perfect foresight or exhaustive policy. It requires earlier intervention and clearer accountability. Organizations that act now — while AI use is still fragmented and informal — have a chance to shape behavior. Those that wait will inherit systems they no longer control and risks they can no longer explain.

    At that point, governance is no longer a choice. It becomes damage control.

    Category: Hacking & Security

    Amazon Go? It’s gone. And this is why it went.

    2 February, 2026 - 08:00

    For years, Amazon Go stores stood at the pinnacle of retail store technology, showcasing a massive number of high-resolution digital cameras in each store that could visually track every customer and how that shopper interacted with every product. 

    The stores showcased Amazon’s technological superiority and brought an eerily human- and friction-free element to convenience store shopping. Unfortunately, the whole setup was also a poster child for technology that isn’t profitable and has no realistic path to get there. (Sounds a lot like all those vendors now selling generative AI (genAI) and agentic AI systems; the crown for chasing unreachable ROI has been officially passed.)

    The way it worked was wickedly simple: shoppers walked into a store by scanning a code from their Amazon app, the cameras and analytics took over, and the shopper grabbed what they wanted and left.

    Amazon hung in as long as it could, but on Jan. 27, after almost eight years, it surrendered and announced it was closing all of the Amazon Go stores.

    Don’t jump to the wrong conclusion here. Amazon didn’t pull the plug on the technology that fueled the stores (the company now calls it “Just Walk Out”). Instead, it realized that the value of the technology lay elsewhere.

    The original idea behind Go was to deliver a frictionless shopping experience. Turns out that shoppers visited Go mostly for the novelty and didn’t find it meaningfully better than their usual store options.

    But, Amazon discovered, it was a lot faster. Not only were Go stores not profitable, according to retail IT leaders; they didn’t work as loss leaders either, since they didn’t drive enough revenue to make the effort worthwhile. And since they didn’t succeed in the small footprint of a convenience store, they’d never work in a larger-format store.

    Amazon finally figured out that value was found not by going bigger — think of Costco, Walmart or Target-size stores — but by going smaller, a lot smaller.

    Once Amazon execs realized speed was the only true advantage, they sought situations where speed was critical. So, they started licensing the technology to tiny venues where speed equals money. 

    The technology is “reducing cafeteria wait times from 25 to just 3 minutes at BayCare’s St. Joseph’s Hospital” and “enabling sports fans at Scotiabank Arena to grab concessions in 30 seconds,” Amazon said. 

    This is Amazonian brilliance at its best, albeit many years too late. Instead of losing money in physical stores it owns, Amazon licensed the technology to others. That’s instant profit. It’s really hard to lose money on licensing technology you perfected almost a decade ago.

    And by going ultra-small, Amazon is pushing into places where speed outranks everything else. Consider those sports concession stands. They typically can only sell their hot dogs, popcorn and soda in tiny blocks of time: during halftime and before the game. After the game is tricky, because people usually want to avoid waiting in line so they can get home.

    The Go technology exponentially shortens those lines, allowing merchants to sell more goods during those brief windows of opportunity. The only gating factor is how long it takes to ask a customer what they want and to give it to them. It was always the payment process that slowed everything down. 

    Zak Stambor, a senior analyst tracking retail for Emarketer, said he found the technology working “phenomenally well” in a tiny snack stand at a train station he uses. 

    The revenue is relatively trivial. “If I’m only buying a soda or a snack, there isn’t much margin,” Stambor said. But speed makes all the difference. When he only has a minute before his train arrives, Stambor doesn’t have time to cross the street and make a regular purchase. But with the elimination of the payment mechanism, the whole transaction just works. He grabs a snack and jumps on the train.

    “Amazon has learned quite a bit from this endeavor,” Stambor said. 

    It’s similar to how Apple changed authentication requirements to allow for almost instantaneous ticket purchases in the New York subways via NFC on an Apple Watch. 

    The interesting part of the NYC subway experience — and I use it periodically — is how much faster the Apple Watch is compared to the iPhone. I recently was in NYC with my wife and I zapped through the turnstile using my Apple Watch while my wife was repeatedly trying to get through with her iPhone. 

    Even worse, she had to have her iPhone in her hand, suboptimal in a crowded subway station. The fact that I didn’t need to hold anything for the payment to work made it feel like magic.

    That’s the point. Amazon and Apple and others have figured out the handful of situations where speed is the most critical element — allowing super useful technology to be, well, super useful.

    The lesson here is this: when IT is pushing for a business case for some new technology (and the financial folk are pushing back), the answer might not be to tweak the technology. It might be to tweak where and how it is used. 

    After criticism, Microsoft promises big improvements for Windows 11

    30 January, 2026 - 21:10

    Windows developers are now focusing on improving the core experience in Windows 11, with Microsoft reportedly redirecting resources to address the operating system’s performance and reliability issues, according to The Verge.

    The move follows criticism of Windows 11 over recurring bugs and performance issues. Many users have also reacted to what they perceive as intrusive ads, bloatware, and unwanted AI initiatives.

    “The feedback we’ve received from our community of engaged customers and Windows Insiders has been clear. We need to improve Windows in ways that actually matter to people,” Pavan Davuluri, head of Windows and devices, told The Verge.

    “This year, you’ll see us focus on addressing the pain points we hear repeatedly from customers: improving system performance, reliability, and the overall Windows experience.”

    Do you have software vision?

    30 January, 2026 - 21:02

    It’s easy to fall down the rabbit hole that is the hype surrounding Anthropic’s code agent Claude Code, a hype that really took off during the Christmas holidays and — at least in tech circles — is reminiscent of ChatGPT’s arrival three years ago. Claude Code is already being called both “the new ChatGPT” and “the end of SaaS,” and is considered by many to be the next big step in AI development and AI use.

    If, like me, you spend a little too much time online, Claude Code has started to completely dominate your feeds lately. Of course, it “helps” that the algorithms give me “more of what I want,” but both LinkedIn and TikTok are overflowing with people talking about the excellence of Claude Code. AI influencers, ordinary developers, curious IT professionals, and, of course, a steady stream of obscure individuals who want to cash in on all the buzz.

    People are vibe-coding everything from entire websites to small personal apps and finding new solutions for their workflows. Ethan Mollick, a professor at the Wharton School of the University of Pennsylvania, wrote about how he had Claude Code build an entire startup. In other circles, there’s talk of Gas Town, an orchestration solution that allows 20 to 30 agents to work in parallel. In San Francisco, it is said, people are letting “Claude swarms control their lives.”

    In the last week, Clawdbot (now Moltbot) also jumped on the hype train; it’s a Claude Code-like AI assistant that you give full access to your digital life, if you dare.

    It’s all fascinating and often incredibly impressive. Code is seen as the primary use case for generative AI (genAI), and when you see what Claude Code can do, it’s not hard to understand why. I haven’t tested Claude Code myself, even though more than one person has told me that I simply “must.” And I’m not particularly keen to do so (though I will probably get around to it later for purely educational purposes). Let me explain my thinking.

    For starters, I’m not a developer and I can’t write code. Now, the point is that Claude Code is supposed to write the code for you, but it’s not quite that simple. The entry barrier is still too high for me, and the road to the really cool stuff is too long. I have a basic understanding of the processes and functions involved in software development, but my knowledge is far from enough.

    Anthropic has begun to address this by releasing Claude Cowork, which is described as Claude Code for people who are not developers. That sounds interesting. But here’s the problem: what I’m being asked to do with it, how it’s being sold — it just sounds…boring. For some reason, the first example often highlighted for tools like this is that they can “sort your files on your computer.” That might be a good thing to do, once, if you have a lot of unsorted files on your computer. But it’s hardly a use case that’s worth buying and learning a new tool for.

    If you look at what enthusiasts are doing with Claude Code, a lot of it focuses on personal productivity and “optimization,” both at work and in everyday life. That’s something I’m completely uninterested in. Sure, I also read “Getting Things Done” and “The 4-Hour Workweek” 20 years ago, but what I learned above all else was that the deeper you sink into that self-help swamp, the greater the risk you’ll end up spending all your time finding new ways to optimize things. (It’s also sad, but perhaps telling of modern working life, that people need to come up with “hacks” to be able to do their jobs.)

    The main reason I’m meh about Claude Code is this: I lack the software vision. This realization occurred to me recently when I read tech writer Jasmine Sun’s experiences with Claude Code. She points out how people who do parkour over time learn to see a city in ways that the rest of us don’t; they develop “parkour vision” where walls and stairs become something completely different from walls and stairs. In the same way, she suggests developers develop a kind of software vision where all problems can be translated into, and solved by, software.

    An ordinary user like me does not have software-shaped problems. I do not automatically see how something I do can be automated or optimized by a bot. So when I hear that Claude Code can solve my problems, I cannot think of a single one.

    I think this perspective says a lot about the use of AI in general and is useful to keep in mind. That’s true for both super users who hype the new solutions for people who “can’t keep up,” and for companies that are feverishly trying to get their staffers to embrace AI. I don’t think anyone is against a new tool that makes them more productive and facilitates their work, as long as they know what to use it for.

    Perhaps companies’ AI training courses should focus less on learning specific tools and more on learning to identify the problems they’re supposed to solve, perhaps even developing a software vision.

    Apple’s Siri future is hybrid, integrated — and already here

    30 January, 2026 - 18:25

    Apple covered a lot of ground Thursday when it announced record Q1 results, but perhaps what mattered most were the insights into how the company now thinks about artificial intelligence, Siri, and Google Gemini.

    Apple’s forthcoming, existentially significant, and much smarter Siri will be powered by Apple’s collaboration with Google Gemini, and the company told us a little about how it will work: requests will be processed on-device or through Apple’s Private Cloud Compute (PCC) system. (Apple also confirmed PCC servers are already being manufactured and shipped from a US factory.)

    Private, integrated, relevant

    “These AI experiences are personal, private, integrated across our platforms, and relevant to what our users do every day,” said Apple CEO Tim Cook.

    The hybrid approach reflects Apple’s decision to broaden its AI capabilities by partnering with Google. For a fee, Google will provide Apple with frontier model capacity, though Cook and company remain in command of execution, including where the AI calculations take place and how privacy is ensured.

    You can easily see this as a rejection of the hyperscaler dependency visible across the entire industry: future Apple Intelligence tools will work on the device nearly all the time, or otherwise in the cloud on highly secure server systems. In the future, will users even need to outsource requests to other firms, or will the combined tech do everything they need?

    We don’t yet know.

    What we do know

    But what we do know is that this approach to AI deployment should be water in the desert to enterprises in regulated industries, who will need things like auditability, data minimization and jurisdiction control. 

    It won’t be a perfect answer – many will still seek self-hosted AI, sovereign data solutions, and tightly constrained service selection. A second consideration might be that processors for Private Cloud Compute are manufactured in the US, which suggests defense-adjacent, infrastructure, and public sector entities will look positively at Apple’s solutions during future buying cycles.

    Apple cited two deployment stories that show us the road to come:

    • AstraZeneca: The company has deployed 5,000 M5-powered iPad Pros across its pharmaceutical sales team to take advantage of AI capabilities, including Apple Intelligence.
    • Snowflake: The move to standardize around Mac has led to a reduction in support costs. (The Mac deployment story is significant in the quarter, with platform shipments far surpassing the industry average, confirming the switch from Windows is intensifying, worldwide.)

    Supported by its Google Gemini partnership, Apple will now be able to build and deploy additional Apple Intelligence features to help people get things done more efficiently. “We believe that we can unlock a lot of experiences and innovate in a key way through the collaboration,” said Cook.

    Apple’s customers are ready for private AI

    With that in mind, it is also important to reflect on Apple’s admission that the majority of users on enabled iPhones are actively using Apple Intelligence. This bodes well, as those features scale across its now 2.5 billion customers.

    To expand on this initial adoption, Apple only needs to build a deeper catalog of AI features, which it is now working on with Google.

    Will it attempt to monetize its AI features? That’s a tough call. Apple refused to say, suggesting it sees its approach as platform-enabling tech, probably to the detriment of some, though not all, of the third-party AI services available elsewhere.

    For enterprise purchasers, this also means Apple products now compete as platforms for AI-enabled workflow acceleration. That’s a new arrow in the company’s quiver, joining other enterprise-focused competitive edges such as TCO, power/performance, employee choice, and endpoint security.

    But challenges remain

    It is not unusual that Apple’s management spelled out a super-optimistic vision of what’s coming next in AI, but significant challenges remain, and not just the dangers of political upheaval, tariffs, and the impact of war on the company’s supply chain.

    When it comes to hardware, two of the most significant challenges are memory and processor manufacturing. The first could affect Apple’s margins somewhat, but the challenge most likely to put a brake on the company’s continued market expansion is perhaps more profound: Apple confirmed iPhone supply is currently constrained by processor availability.

    “To be specific, it’s the advanced nodes like 3-nanometer where our latest SoCs are produced that are gating the Q2 supply,” Cook said. “We are currently constrained, and at this point, it’s difficult to predict when supply and demand will balance.”

    With Apple Silicon inside every Apple device at this point, the inference here is clear: Demand for Apple products has grown so high it can’t make them fast enough. While that’s a good problem to have, it might also contribute to Apple’s purported decision to split the iPhone release cadence into two significant launch windows, since doing so might help manage immediate demand for this precious component.

    It might also serve as a warning to enterprise purchasers planning large-scale Apple deployments, as it suggests those orders could take longer than usual to fulfill. Though it seems certain at this point that reports of Apple’s demise in AI have been exaggerated.

    Please follow me on Mastodon, or join me in the AppleHolic’s bar & grill and Apple Discussions groups on MeWe.

    Agentic AI – Ongoing coverage of its impact on the enterprise

    30 January, 2026 - 18:21

    Over the next few years, agentic AI is expected to bring not only rapid technological breakthroughs, but a societal transformation, redefining how we live, work and interact with the world. And this shift is happening quickly. “By 2028, 33% of enterprise software applications will include agentic AI, up from less than 1% in 2024, enabling 15% of day-to-day work decisions to be made autonomously,” according to research firm Gartner.

    Unlike traditional AI, which typically follows preset rules or algorithms, agentic AI adapts to new situations, learns from experiences, and operates independently to pursue goals without human intervention. In short, agentic AI empowers systems to act autonomously, making decisions and executing tasks — even communicating directly with other AI agents — with little or no human involvement.

    Agentic AI will enable machines to interact with the physical world with unprecedented intelligence, allowing them to perform complex tasks in dynamic environments, which could be especially useful for industries facing labor shortages or hazardous conditions. However, the rise of agentic AI also brings security and ethical concerns. Ensuring these autonomous systems operate safely, transparently and responsibly will require governance frameworks and testing.

    Follow this page for ongoing agentic AI coverage from Computerworld and Foundry’s other publications.

    Agentic AI news and insights

    Forward Networks launches agentic AI system built on network digital twin

    January 30, 2026: The new Forward AI capability builds on the vendor’s digital twin and is designed to allow network teams to ask complex questions, understand network behavior, validate outcomes and safely automate workflows.

    Agentic AI exposes what we’re doing wrong

    January 23, 2026: Agentic AI has changed cloud computing, but not in the way the hype machine wants you to believe. It hasn’t magically replaced engineering, nor has it made architecture irrelevant. 

    How to get your enterprise architecture ready for agentic AI

    January 22, 2026: While C-suite leaders say they’re investing in agentic AI, the complex enterprise architectures of large organizations often struggle with the tech’s demands.

    IBM targets agentic AI scale-up with new Enterprise Advantage consulting service

    January 20, 2026: IBM has launched a new consulting service named Enterprise Advantage, designed to help CIOs take their agentic and other AI applications from experimentation to large-scale production.

    EY exec: If you think agentic AI is a challenge, you’re not ready for what’s coming

    January 15, 2026: Companies struggling to keep up with the arrival of AI agents should buckle up: Even more complicated agentic AI technologies are quickly coming down the pike. That includes physical AI, such as robots, as well as quantum computing.

    Managing agentic AI risk: Lessons from the OWASP Top 10

    December 19, 2025: LLM-powered chatbots have risks that we see playing out in the headlines on a nearly daily basis. But chatbots are limited to answering questions. AI agents, however, access data and tools and carry out tasks, making them infinitely more capable – and more dangerous to enterprises.

    Agentic AI in 2026: More mixed than mainstream

    December 18, 2025: Agentic AI is having its everything, everywhere, all at once moment. Or is it? Data clarifies. While 39% of organizations surveyed by McKinsey say they are experimenting with agents, only 23% have begun scaling AI agents within one business function.

    Overcome governance and trust issues to drive agentic AI

    December 18, 2025: Fully autonomous agentic AI is still way off, but AI agents are making inroads within enterprise software and workflows. Gartner predicts 40% of enterprise software will feature task-specific AI agents by the end of 2026 as the current trend for embedded AI assistants evolves.

    Nvidia bets on open infrastructure for the agentic AI era with Nemotron 3

    December 15, 2025: AI agents must be able to cooperate, coordinate, and execute across large contexts and long time periods, and this, says Nvidia, demands a new type of infrastructure, one that is open. The company says it has the answer with its new Nemotron 3 family of open models.

    Microsoft drops M365 Copilot price for SMBs, upgrades free Copilot Chat

    November 19, 2025: Microsoft announced that it will reduce the price of Microsoft 365 Copilot for small and mid-sized firms beginning next month. Microsoft 365 Copilot for Business will cost $21 per user, per month for customers with any Microsoft 365 Business plan. That’s down from the current $30 monthly price.

    Microsoft Fabric IQ adds ‘semantic intelligence’ layer to Fabric

    November 19, 2025: Microsoft promises enterprises better understanding of their data for workers and autonomous agents alike, but analysts fear deployment hurdles and vendor lock-in.

    Microsoft unveils Agent 365 to help IT manage AI ‘agent sprawl’

    November 18, 2025: As businesses begin deploying AI agents in greater numbers, IT teams will need to manage and secure those AI systems as they connect to corporate data. That’s the idea behind Microsoft’s Agent 365 (A365), a new “control plane” that lets customers deploy and govern the use of agents. 

    From chatbots to colleagues: How agentic AI is redefining enterprise automation

    November 17, 2025: A new wave of agentic AI is taking shape: systems that not only converse but also reason, plan, and act within enterprise workflows. These agents are not assistants that talk; they are digital colleagues that think.

    The enterprise IT overhaul: Architecting your stack for the agentic AI era

    November 10, 2025: For the CIO, the conversation has officially moved past the large language model (LLM). The next critical chapter is agentic AI — autonomous systems capable of reasoning, planning and executing multi-step tasks across your enterprise.

    Agentic AI is here. Now, CIOs must orchestrate

    October 23, 2025: Agentic AI is about to change how companies create value. Yet, most enterprises aren’t ready. The problem isn’t the technology — it’s the planning and execution. Too many pilots stall out because CIOs haven’t built the AI systems, guardrails and culture to move beyond experiments.

    AI agents might smooth some of retail’s worst data problems

    October 21, 2025: So many retail challenges hinge on unreliable product data. Can agentic AI clean up that data enough to make a difference? Can it do the same for other verticals?

    The impact of agentic AI on SaaS and partner ecosystems

    October 16, 2025: The enterprise technology landscape is entering a critical pivot point as agentic AI transforms partner ecosystems from human-mediated, application integration networks into autonomous, self-orchestrating and intelligent ecosystems.

    Salesforce updates its agentic AI pitch with Agentforce 360

    October 13, 2025: Salesforce announced a new release of Agentforce that, it says, “gives teams the fastest path from AI prototypes to production-scale agents” — although with many of the new release’s features still to come, or yet to enter pilot phases or beta testing, some parts of that path will be much slower than others.

    Gemini Enterprise is Google’s new ‘front door’ for agentic AI access at work

    October 9, 2025: Google introduced an AI assistant to serve as a platform so users can access and coordinate AI agents that automate work tasks. Gemini Enterprise, which replaces the Agentspace app launched last year, also features new enterprise search functions to help customers tap into data from across an organization’s business apps. 

    Oracle’s agentic AI push in Fusion Cloud CX offers embedded automation for CX leaders

    October 7, 2025: Oracle is adding new pre-built agents to its Advertising and Customer Experience Cloud (Fusion Cloud CX) to help enterprises increase operational efficiency by automating sales, service, and marketing processes.

    IBM touts agentic AI orchestration, cryptographic risk controls

    October 7, 2025: IBM watsonx Orchestrate offers more than 500 tools and customizable, domain-specific agents from IBM and third-party contributors. Among the additions to watsonx Orchestrate are AgentOps capabilities that offer real-time monitoring and policy-based controls for observability and governance.

    How self-learning AI agents will reshape operational workflows

    October 6, 2025: Google’s recent whitepaper, “Welcome to the Era of Experience,” signals a shift in the way AI agents are trained. Google hypothesizes that allowing AI agents to learn from the experience of agents rather than solely from human-generated training data will enable autonomous AI to surpass its current capabilities.

    Are your agentic AI projects driving toward success?

    October 3, 2025: Anushree Verma, Gartner senior director analyst, says most agentic AI projects today are early-stage experiments or proofs of concept, fueled primarily by hype and often misapplied.

    Microsoft unveils framework for building agentic AI apps

    October 3, 2025: Microsoft has introduced the Microsoft Agent Framework, an open-source SDK and runtime for building, orchestrating, and deploying AI agents and multi-agent workflows, with full framework support for .NET and Python.

    Salesforce Trusted AI Foundation seeks to power the agentic enterprise

    October 2, 2025: As Salesforce pushes further into agentic AI, its aim is to evolve Salesforce Platform from an application for building AI to a foundational operating system for enterprise AI ecosystems. 

    ServiceNow’s AI Experience is an agentic AI UI for the Now Platform

    September 30, 2025: ServiceNow today launched the AI Experience (AIx), a contextually aware, multimodal, AI-driven UI for its Now platform. Building on the ServiceNow AI Platform and with a foundation in Now Assist, the company describes it as “a unified, conversational front door to enterprise AI.”

    How MCP is making AI agents actually do things in the real world

    September 29, 2025: You’ve seen them: Those incredible large language models (LLMs) that can chat, write and even generate code. They’ve revolutionized how we interact with technology, but there’s a new, even more exciting chapter unfolding. Discover how MCP is turning chatbots into doers, and the future of work may never look the same.

    Agentic AI in IT security: Where expectations meet reality

    September 29, 2025: Agentic AI has shifted from lab demos to real-world SOC deployments. Unlike traditional automation scripts, software agents are designed to act on signals and execute security workflows intelligently, correlating logs, enriching alerts, and even taking first-line containment actions.

    Walmart looks to cash in on agentic AI

    September 19, 2025: Walmart doesn’t intend to lose its retail crown anytime soon. And, according to US EVP and CTO Hari Vasudev, the $815B company’s artificial intelligence strategy will play a key role in preventing that from happening.

    5 steps for deploying agentic AI red teaming

    September 17, 2025: As more enterprises deploy agentic AI applications, the potential attack surface increases in complexity and reach. But there is still hope that AI agents can be harnessed for defensive purposes too, including using traditional red teaming and penetration testing techniques but updated for the AI world.

    Google unveils payments protocol for AI agents with major financial firms

    September 17, 2025: Google has introduced the Agent Payments Protocol (AP2), an open framework developed with more than 60 payments and technology companies to support secure, agent-led transactions across platforms and payment methods.

    CrowdStrike bets big on agentic AI with new offerings after $290M Onum buy

    September 16, 2025: At its Fal.Con conference, the cybersecurity giant launched its Agentic Security Platform and Agentic Security Workforce, aiming to outpace AI-driven adversaries with real-time intelligence, automation, and a common language for defense.

    Adobe makes Agent Orchestrator and AI agents generally available

    September 10, 2025: Adobe Experience Platform (AEP) Agent Orchestrator and six new AI agents are designed to build, deliver, and optimize customer experience and marketing campaigns. The company also announced Experience Platform Agent Composer for customizing and configuring AI agents based on brand guidelines and organizational policy.

    Rethinking the IT organization for the agentic AI era

    September 2, 2025: With the advent of agentic AI, CIOs must be poised to adjust strategic IT priorities, mitigate new security risks, and reskill staff for a new era.

    How to build a production-grade agentic AI platform

    September 2, 2025: Modular orchestration, fail-safe design, hybrid memory management, and LLM integration with domain knowledge are essential to agentic AI systems that reason, act, and adapt at scale.

    Agentic AI: A CISO’s security nightmare in the making?

    September 2, 2025: Enterprises will no doubt be using agentic AI for a growing number of workflows and processes, including software development, customer support automation, and more. But what are the cybersecurity risks of agentic AI, and how much more work will it take for security teams to support their organizations’ agentic AI dreams?

    Microsoft researchers develop new tech for video AI agents

    September 2, 2025: Microsoft researchers are developing technologies for a new class of video AI agents to explore three-dimensional spaces before making decisions. The technology framework, called MindJourney, uses a range of AI technologies to understand and analyze 3D spaces, reason about the surroundings, and predict movement.

    Salesforce AI Research unveils new tools for AI agents

    August 27, 2025: Salesforce announced a simulated enterprise environment, benchmark, and account data unification tool that are designed to help customers transform into agentic AI enterprises.

    Agentic AI promises a cybersecurity revolution — with asterisks

    August 18, 2025: The hottest topic at this year’s Black Hat conference was the meteoric emergence of AI tools for both cyber adversaries and defenders, particularly the use of agentic AI to strengthen cybersecurity programs.

    4 thoughts on who should manage AI agents

    August 11, 2025: As AI agents proliferate, we need to turn our attention beyond AI agent builder platforms to AI orchestration and AI GRC platforms. That proliferation also raises questions about which groups within the enterprise should manage AI agents and how they should be treated.

    How bright are AI agents? Not very, recent reports suggest

    July 31, 2025: Security researchers are adding more weight to a truth that infosec pros had already grasped: AI agents are not very bright, and are easily tricked into doing stupid or dangerous things.

    Will AI agents eat the SaaS market? Experts are split

    July 31, 2025: As hype about AI agents reaches new heights, an emerging theory suggests that the groundbreaking AI tools will kill the SaaS business model. The claim isn’t particularly new, but it is resurfacing, with people like Microsoft CEO Satya Nadella voicing this position.

    How agentic AI will change database management

    July 28, 2025: Generative AI has already had a profound impact on the world of database management. And now, thanks to AI’s knack for pattern-recognition, teams can use generative AI to analyze data sets, detect anomalies, and access invaluable insights with record speed and precision. 

    As AI agents go mainstream, companies lean into confidential computing for data security

    July 21, 2025: Companies need to stop ignoring data security as AI agents take over internal data movement in IT environments, analysts and IT execs warn. To address that issue, some tech players are embracing the concept of “confidential computing.” While it’s existed for years, it’s now finding new life with the rise of genAI.

    How agentic AI will transform mobile apps and field operations

    July 15, 2025: Agentic AI will usher in new mobile AI experiences. Construction, manufacturing, healthcare, and other industries with significant field operations will benefit from mobile AI agents and the resulting operational agility.

    MCP is fueling agentic AI — and introducing new security risks

    July 10, 2025: Model Context Protocol (MCP) has caught fire, with several thousand MCP servers now available from a wide range of vendors enabling AI assistants to connect to their data and services. And with agentic AI increasingly seen as the future of IT, MCP will only grow in use in the enterprise. But innovations like MCP also come with significant security risks.

    3 industries where agentic AI is poised to make its mark

    July 4, 2025: IT leaders from finance, retail, and healthcare lend insights into what organizations are doing with AI agents today — and where they see the technology taking their organizations and industries in the future.

    IFS rolls TheLoops agentic AI into industrial ERP

    June 27, 2025: IFS is adding AI agent development and management capabilities to its ERP platform with the acquisition of software startup TheLoops. The acquisition brings TheLoops’ full Agent Development Life Cycle (ADLC) platform into IFS, enabling enterprises to design, test, deploy, monitor, and fine-tune AI agents with built-in support for versioning, compliance, and performance optimization.

    How AI agents and agentic AI differ from each other

    June 12, 2025: With agentic AI in its infancy and organizations rushing to adopt AI agents, there seems to be confusion about the difference between “agentic AI” and “AI agents” technologies, but experts say there’s growing understanding that the two are separate, but related, tools.

    The future of RPA ties to AI agents

    June 10, 2025: RPA is accelerating toward a crossroads, with IT leaders and experts debating its future. Some IT leaders say that more powerful and autonomous AI agents will replace the two-decade-old AI precursor technology, while others predict that AI agents and RPA will work hand-in-hand.

    MCP is enabling agentic AI, but how secure is it?

    June 2, 2025: Model Context Protocol (MCP) is becoming the plug-and-play standard for agentic AI apps to pull in data in real time from multiple sources. However, this also makes it more attractive for malicious actors looking to exploit weaknesses in how MCP has been deployed.

    The agentic AI assist Stanford University cancer care staff needed

    May 30, 2025: At Microsoft Build 2025 earlier this month, Nigam Shah, CDO for Stanford Health Care, discussed agentic AI’s ability to redefine healthcare, especially in oncology, where physicians overloaded with the administrative tasks of medicine are prone to burnout, he said.

    Agentic AI, LLMs and standards big focus of Red Hat Summit

    May 26, 2025: Red Hat announced a number of improvements to its core enterprise Linux product, including better security and better support for containers and edge devices. But the one topic that dominated the conversation was AI.

    Putting agentic AI to work in Firebase Studio

    May 21, 2025: Putting agentic AI to work in software engineering can be done in a variety of ways. Some agents work independently of the developer’s environment, working essentially like a remote developer. Others work directly within a developer’s own environment. Google’s Firebase Studio is an example of the latter, drawing on Google’s Gemini LLM to help developers prototype and build applications.

    Why is Microsoft offering to turn websites into AI apps with NLWeb?

    May 20, 2025: NLWeb, short for Natural Language Web, is designed to help enterprises build a natural language interface for their websites, using the model of their choice and their own data to answer user queries about the contents of the website. Microsoft hopes to stake its claim on the agentic web before rivals Google and Amazon do.

    Databricks to acquire open-source database startup Neon to build the next wave of AI agents

    May 14, 2025: Agentic AI requires a new type of architecture because traditional workflows create gridlock, dragging down speed and performance. To get ahead in this next generation of app building, Databricks announced it will purchase Neon, an open-source serverless Postgres company.

    Agentic mesh: The future of enterprise agent ecosystems

    May 13, 2025: Nvidia CEO Jensen Huang predicts we’ll soon see “a couple of hundred million digital agents” inside the enterprise. Microsoft CEO Satya Nadella takes it even further: “Agents will replace all software.”

    Google to unveil AI agent for developers at I/O, expand Gemini integration

    May 13, 2025: Google is expected to unveil a new AI agent aimed at helping software developers manage tasks across the coding lifecycle, including task execution and documentation. The tool has reportedly been demonstrated to employees and select external developers ahead of the company’s annual I/O conference.

    Nvidia, ServiceNow engineer open-source model to create AI agents

    May 6, 2025: Nvidia and ServiceNow have created an AI model that can help companies create learning AI agents to automate corporate workloads. The open-source Apriel model, available generally in the second quarter on HuggingFace, will help create AI agents that can make decisions around IT, human resources and customer-service functions.

    How IT leaders use agentic AI for business workflows

    April 30, 2025: Jay Upchurch, CIO at SAS, backs agentic AI to enhance sales, marketing, IT, and HR motions. “Agentic AI can make sales more effective by handling lead scoring, assisting with customer segmentation, and optimizing targeted outreach,” he says.

    Microsoft sees AI agents shaking up org charts, eliminating traditional functions

    April 28, 2025: As companies increasingly automate work processes using agents, traditional functions such as finance, marketing, and engineering may fall away, giving rise to an ‘agent boss’ era of delegation and orchestration of myriad bots.

    Cisco automates AI-driven security across enterprise networks

    April 28, 2025: Cisco announced a range of AI-driven security enhancements, including improved threat detection and response capabilities in Cisco XDR and Splunk Security, new AI agents, and integration between Cisco’s AI Defense platform and ServiceNow SecOps.

    Hype versus execution in agentic AI

    April 25, 2025: Agentic AI promises autonomous systems capable of reasoning, making decisions, and dynamically adapting to changing conditions. The allure lies in machines operating independently, free of human intervention, streamlining processes and enhancing efficiency at unprecedented scales. But, David Linthicum writes, don’t be swept up by ambitious promises.

    Agents are here — but can you see what they’re doing?

    April 23, 2025: As the agentic AI models powering individual agents get smarter, the use cases for agentic AI systems get more ambitious — and the risks posed by these systems increase exponentially.

    Agentic AI might soon get into cryptocurrency trading — what could possibly go wrong?

    April 15, 2025: Agentic AI promises to simplify complex tasks such as crypto trading or managing digital assets by automating decisions, enhancing accessibility, and masking technical complexity.

    Agentic AI is both boon and bane for security pros

    April 15, 2025: Cybersecurity is at a crossroads with agentic AI. It’s a powerful tool that can create reams of code in the blink of an eye, find and defuse threats, and be deployed decisively and defensively. This has proved to be a huge force multiplier and productivity boon. But while powerful, agentic AI isn’t dependable, and that is the conundrum.

    AI agents vs. agentic AI: What do enterprises want?

    April 15, 2025: Now that the AI agent story has morphed into “agentic AI,” it seems to have taken on the same big-cloud-AI flavor that enterprises already rejected. What do enterprises want from AI agents, why is “agentic” thinking wrong, and where is this all headed?

    A multicloud experiment in agentic AI: Lessons learned

    April 11, 2025: Turns out you really can build a decentralized AI system that operates successfully across multiple public cloud providers. It’s both challenging and costly.

    Google adds open source framework for building agents to Vertex AI

    April 9, 2025: Google is adding a new open source framework for building agents to its AI and machine learning platform Vertex AI, along with other updates to help deploy and maintain these agents. The open source Agent Development Kit (ADK) will make it possible to build an AI agent in under 100 lines of Python code. Google expects to add support for more languages later this year.

    Google’s Agent2Agent open protocol aims to connect disparate agents

    April 9, 2025: Google has taken the covers off a new open protocol — Agent2Agent (A2A) — that aims to connect agents across disparate ecosystems. At its annual Cloud Next conference, Google said that the A2A protocol will enable enterprises to adopt agents more readily because it bypasses the challenge of agents built on different vendor ecosystems being unable to communicate with each other.

    Riverbed bolsters AIOps platform with predictive and agentic AI

    April 8, 2025: Riverbed unveiled updates to its AIOps and observability platform that the company says will transform how IT organizations manage complex distributed infrastructure and data more efficiently. Expanded AI capabilities are aimed at making it easier to manage AIOps and enabling IT organizations to transition from reactive to predictive IT operations.

    Microsoft’s newest AI agents can detail how they reason

    March 26, 2025: If you’re wondering how AI agents work, Microsoft’s new Copilot AI agents provide real-time answers on how data is being analyzed and sourced to reach results. The Researcher and Analyst agents take a deeper look at data sources such as email, chat or databases within an organization to produce research reports, analyze strategies, or convert raw information into meaningful data.

    Microsoft launches AI agents to automate cybersecurity amid rising threats

    March 26, 2025: Microsoft has introduced a new set of AI agents for its Security Copilot platform, designed to automate key cybersecurity functions as organizations face increasingly complex and fast-moving digital threats. The new tools focus on tasks such as phishing detection, data protection, and identity management.

    How AI agents work

    March 24, 2025: By leveraging technologies such as machine learning, natural language processing (NLP), and contextual understanding, AI agents can operate independently, even partnering with other agents to perform complex tasks.

    5 top business use cases for AI agents

    March 19, 2025: AI agents are poised to transform the enterprise, from automating mundane tasks to driving customer service and innovation. But having strong guardrails in place will be key to success.

    Nvidia launches AgentIQ toolkit to connect disparate AI agents

    March 21, 2025: As enterprises look to adopt agents and agentic AI to boost the efficiency of their applications, Nvidia this week introduced a new open-source software library — the AgentIQ toolkit — to help developers connect disparate agents and agent frameworks.

    Deloitte unveils agentic AI platform

    March 18, 2025: At Nvidia GTC 2025 in San Jose, Deloitte announced Zora AI, a new agentic AI platform that offers a portfolio of AI agents for finance, human capital, supply chain, procurement, sales and marketing, and customer service. The platform draws on Deloitte’s experience from its technology, risk, tax, and audit businesses, and is integrated with all major enterprise software platforms.

    The dawn of agentic AI: Are we ready for autonomous technology?

    March 15, 2025: Much of the prior AI work has focused on large language models (LLMs), with the goal of using prompts to extract knowledge from unstructured data. So it’s a question-and-answer process. Agentic AI goes beyond that: you can give it a task that might involve a complex set of steps that can change each time.

    How to know a business process is ripe for agentic AI

    March 11, 2025: Deloitte predicts that in 2025, 25% of companies that use generative AI will launch agentic AI pilots or proofs of concept, growing to 50% in 2027. The firm says some agentic AI applications, in some industries and for some use cases, could see actual adoption into existing workflows this year.

    With new division, AWS bets big on agentic AI automation

    March 6, 2025: Amazon Web Services customers can expect to hear a lot more about agentic AI from AWS in the future, with the news that the company is setting up a dedicated unit to promote the technology on its platform.

    How agentic AI makes decisions and solves problems

    March 6, 2025: GenAI’s latest big step forward has been the arrival of autonomous AI agents. Agentic AI is based on AI-enabled applications capable of perceiving their environment, making decisions, and taking actions to achieve specific goals. 

    CIOs are bullish on AI agents. IT employees? Not so much

    Feb. 4, 2025: Most CIOs and CTOs are bullish on agentic AI, believing the emerging technology will soon become essential to their enterprises, but lower-level IT pros who will be tasked with implementing agents have serious doubts.

    The next AI wave — agents — should come with warning labels. Is now the right time to invest in them?

    Jan. 13, 2025: The next wave of artificial intelligence (AI) adoption is already under way, as AI agents — AI applications that can function independently and execute complex workflows with minimal or limited direct human oversight — are being rolled out across the tech industry.

    AI agents are unlike any technology ever

    Dec. 1, 2024: The agents are coming, and they represent a fundamental shift in the role artificial intelligence plays in businesses, governments, and our lives.

    AI agents are coming to work — here’s what businesses need to know

    Nov. 21, 2024: AI agents will soon be everywhere, automating complex business processes and taking care of mundane tasks for workers — at least that’s the claim of various software vendors that are quickly adding intelligent bots to a wide range of work apps.

    Agentic AI swarms are headed your way

    November 1, 2024: OpenAI launched an experimental framework called Swarm. It’s a “lightweight” system for the development of agentic AI swarms, which are networks of autonomous AI agents able to work together to handle complex tasks without human intervention, according to OpenAI. 

    Is now the right time to invest in implementing agentic AI?

    October 31, 2024: While software vendors say their current agentic AI-based offerings are easy to implement, analysts say that’s far from the truth.

    Category: Hacking & Security

    How to print and scan with Android

    January 30, 2026 - 11:45

    We may live in an increasingly digital world, but sometimes — love it or hate it — good old-fashioned pulp-based paper is still a necessity.

    No matter what type of work you do, you’re bound to encounter the occasional page that needs to be printed or document that needs to be scanned. With your Android phone in hand, though, such scenarios don’t have to be a hassle. In fact, printing and scanning from Android is surprisingly simple these days — if you know where to look.

    Follow this guide, and you’ll never be caught off guard again.

    Printing from Android: The basic method

    ‘Twas a time when transforming a document from pixels on your smartphone’s screen into actual ink and paper required a cumbersome third-party plugin — or, worse yet, the daunting and unreliable (and mercifully no longer with us) Google Cloud Print service (gasp!).

    Well, take a deep breath and smooth down those metaphorical hackles: Such horrific complications are no longer needed. At this point, provided you have a reasonably up-to-date Android device, the ability to print from your phone or tablet is built right into Android itself and as easy as can be.

    Ever since 2017’s Android 8 release, Google’s been partnering with the Mopria Alliance — a nonprofit mobile printing standards organization — to bring a native and no-thought-requiring printing function to all Android devices. There’s really nothing to it: So long as you’re connected to the same Wi-Fi network as a Mopria-certified printer (and odds are, any printer in your office or home has that designation), all you have to do is find the print command in any app that offers it and then tap away with that pretty little finger of yours.

    In Gmail or Outlook, for instance, you’d tap the three-dot menu icon directly above any email you’re viewing and then look for the “Print” command in the list of options that appears. The same is true with Microsoft Word, while in Google Docs, you’d open that same menu but first tap “Share & export” and then select “Print.”

    On any reasonably recent Android phone, you can look for the print command within any app that supports it — such as Gmail and Google Docs, shown here — and then print away without any further thought or configuration.

    JR Raphael / Foundry

    Regardless of where you find it, once you start the printing process, your phone will automatically detect any printer’s presence on your network and list it as an option — and you can then print away to your heart’s content (or discontent, whichever the case may be).

    Any available printers on your network should show up as options with the system-level Android print dialog, alongside the option to virtually “print” something as a document.

    JR Raphael / Foundry

    Printing from Android: The advanced path

    The built-in system we just walked through works fine for most basic printing needs — but if you require more intricate forms of mobile printing authentication (and if you’re working in an enterprise environment, there’s a decent chance you do) or if your printing demands other advanced work-oriented features (such as folding, stapling, or accounting-related input), you’ll need something a bit more robust.

    The easiest answer comes from the same aforementioned Mopria Alliance, which has a free Mopria Print Service app that enables those sorts of next-level options. Once you’ve installed the app, accepted its terms, and granted it the necessary permissions to operate, you’ll follow the same steps described above to print from any print-supporting program on your phone. The Mopria Print Service will automatically take over as your device’s default print service and provide you with any advanced possibilities available on the printer you’re using.

    (You could also opt to install your printer manufacturer’s own print service plugin — like the one offered by HP, for instance — but the Mopria app has the advantage of working seamlessly with practically any printer and preventing you from having to change apps or install additional apps whenever a new printer makes its way into your life.)

    The Mopria Print Service app is a viable option for any devices still stuck on now-ancient outdated software as well — since it’ll work with practically any phone and Android version — and it has the side perk of empowering you to print from anywhere on your device, regardless of whether a proper print command is present: Simply use the standard Android share command from any app or process and then select “Mopria Print” from the menu that appears. You could even use that capability to select a chunk of text from an email, a web page, or anywhere else imaginable and then send only that specific text to a printer.


    The Mopria Print Service app makes it super-simple to find and manage any nearby printers and then print to them with more advanced, enterprise-oriented options.

    JR Raphael / Foundry

    Scanning with Android via a physical scanner

    If you’re near a physical scanner or multifunction printer, capturing a document and saving it onto your phone is a cinch: Just grab the free Mopria Scan app, created and maintained by that very same organization we talked about in the last two sections (how ’bout that?!).

    Open the app up, accept the necessary terms and permissions, and make sure you’re connected to the same Wi-Fi network as the scanner you want to use — then look for your scanner in the list the app spits out. If you don’t see the scanner you need, look for the button to manually add a scanner by entering its name (whatever you want to call it) and IP address (usually listed somewhere within a scanner’s front-screen menu).

    Once your scanner shows up, just tap its name to initiate a scan.


    Mopria Scan lets you initiate a scan remotely and then have the results appear right on your Android phone.

    JR Raphael / Foundry

    Scanning with Android via your phone’s camera

    Maybe you don’t have or want to fuss with a standalone scanner and would rather just capture something using the camera that’s already in your purse, pocket, or pantaloons anyway. Believe it or not, you can actually get reasonably high-quality scans that way nowadays, and most people won’t even know the difference.

    The Google Play Store houses a variety of apps that are up to the task, but the most powerful and versatile option for documents and other text-centric scanning is the free Google Drive app that’s probably already on your phone. Just open it up and tap the camera icon in its lower-right corner — or, if you want to save yourself a step, press and hold the Drive icon on your home screen or in your app drawer and then tap (or even save!) the “Scan” shortcut that pops up in that area — and, if necessary, look for the prompt to try out the new and improved Drive scanner.

    This recently released version of the Drive scanner will automatically identify any documents in your camera’s view and almost instantly find their edges and capture a clean, tidy scan of them for you. It’ll keep looking for more documents or pages without any extra actions, too, which makes it delightfully easy to handle a slew of scans without wanting to gouge your eyes out.

    Once you’re finished, tap the arrow button at the bottom of the Drive scanner’s screen to see what you’ve captured and optionally make adjustments. There, too, Drive makes things as simple as can be: You can tap an “Enhance” option to let the app figure out what’d make your scan look best and apply any and all adjustments for you, or you can use its built-in manual tools for cropping and rotating, cleaning, and applying filters to finesse whatever it is you’ve captured.

    Google Drive makes both capturing a physical document and improving its quality exceptionally easy.

    JR Raphael / Foundry

    All that’s left is to tap the “Next” command to wrap things up and save your final file directly into your Drive storage — as a PDF or optionally also as a JPG, if you’d rather.

    Last but not least, take note: If you’d prefer to have this same capability without the cloud storage connection, the same exact scanner interface also exists in the Google Files app — only there, everything you do is saved to your device’s own internal storage by default instead of going to Drive to start.

    With technology like this, the line between physical and digital has never been easier to straddle.

    This article was originally published in August 2019 and most recently updated in January 2026.

    Category: Hacking & Security

    Connecting the dots on the ‘attachment economy’

    January 30, 2026 - 08:00

    For decades, we’ve all been paying attention to the attention economy. 

    That economic concept and business model sees online content as an unlimited resource. Its consumption is limited only by people’s mental capacity, implying a global contest for the finite and valuable resource of human attention. 

    The attention economy idea explains why a company like Meta sees itself competing not only with other social sites like TikTok, X, or YouTube, but also with books, plays, and nature walks — anything that grabs people’s attention. 

    Because attention is limited, the only way to grow is to be better at attracting attention. And that simple model is the reason why social networks are filled with rage bait, AI slop, memes, pornography and hate speech. The social media business isn’t incentivized to prioritize “good” content, only attention-grabbing content. 

    In the attention economy paradigm, human attention is a currency with monetary value that people “spend.” The more a company like Meta can get people to “spend” their attention on Instagram or Facebook, the more successful that company will be. So the algorithms are deliberately designed (and constantly redesigned) to maximize how much time people pay attention to social networks. New features are specifically designed to increase the time users spend on Meta services instead of other things. For example, the average time spent on Instagram grew by 24% after Reels launched, making it a huge success for the company. 

    Meta grabs an average of 18 hours and 44 minutes of attention per month due to its relentless tweaks for capturing attention. But that’s nowhere near the attention economy leader, TikTok, which gets an average of 34 hours and 15 minutes of attention per month. 

    That’s why Meta is so obsessed now with AI on its social platforms. 

    Rise of the attachment economy

Tristan Harris at the Center for Humane Technology coined the phrase “attachment economy,” which he criticizes as the “next evolution” of the extractive-tech model: one in which companies use advanced technologies to commodify the human capacity to form attachment bonds with other people and with pets.

    In August, the idea began to gain traction in business and academic circles with a London School of Economics and Political Science blog post entitled, “Humans emotionally dependent on AI? Welcome to the attachment economy” by Dr. Aurélie Jean and Dr. Mark Esposito. 

Meta has introduced fully AI-generated accounts designed to exist alongside the personal accounts created by real people. The company launched “AI Studio” to let influencers create AI clones of themselves. (Tellingly, Meta is temporarily pausing access to AI characters for teens on its platforms, including Instagram and WhatsApp, ahead of a trial that will examine the harms and addiction social media sites can cause.)

    The company’s embrace of AI can be explained by the emerging attachment economy. While social posts, memes, reels and stories attract attention, AI can get users to form emotional attachments. 

A recent German study found people can develop more emotional closeness with AI than with other people — but only if they don’t know they’re interacting with a chatbot. Still, even when people know chatbots aren’t human, they can become unhealthily attached.

    Late last year, a Virginia man named Jon Ganz went missing in a high-profile case attributed to “AI psychosis,” where his life unraveled after an obsession with a chatbot led to his disappearance. Also in 2025, a 16-year-old California boy’s parents sued OpenAI after he killed himself following conversations with a chatbot about suicide. 

    Some people claim to be in relationships or marriages with AI chatbots. 

    Now, AI chatbot vendors don’t aim to cause “AI psychosis,” suicide, or human-software marriages, but they do aim to cause attachment. That’s why these companies use psychological strategies, technical adjustments, and design choices to make their products feel more “human.” They give chatbots distinct personalities and identities, human-like voices and speech patterns, senses of humor and playfulness, and unlimited capacity for flattery and sycophancy. 

From roughly 2 million years ago until this millennium, interaction through speech and language was the exclusive province of people. Our brains are optimized for perceiving, understanding, and responding to human speech. So when we converse with appliances or apps, our Paleolithic brains think we’re interacting with another human.

    And that’s a business model. A category of AI products and services has emerged advertising “relationships” with chatbots, including Replika, Kindroid, Nomi.ai, EVA AI, and Candy AI. 

    Other offerings promise friendship, but not necessarily “romantic” engagement. This list includes Kuki, Character.ai, Anima, and Replika’s “friend” mode. 

    Our survival as a species has always depended on our sociability. This includes our care for others, sharing food, forming of friendships, loving relationships, empathy, and — you guessed it! — attachment.

This is why chatbots talk and interact like people: the goal is attachment.

    I believe this is also the unspoken justification for humanoid robots, as I’ve written before in this space. (The spoken justification is that humanoid robots can operate in spaces designed for people.)

In that piece, I detailed how humanoid robot makers deliberately trick people into falsely assuming that these products have human-like cognition. Studies show that eye contact and emotional cues from robots can trigger bonding responses and empathy in humans similar to those that come from interacting with people.

    The core benefit (to the companies selling them) or problem (for humanity) with humanoid robots is their psychological impact on people. They are engineered to “hack” human brains and deceive users into treating machines as sentient beings and forming attachments. 

    The same goes for AI-based pets. Casio’s Moflin robot is an AI companion that develops a unique personality and simulates affection. It offers the gratification of pet ownership without the actual pet.

    The rise of attachment-forming tech is similar to the rise in subscriptions. While posting an article or YouTube video may get attention, getting people to subscribe to a channel or newsletter is better. It’s “sticky,” assuring not only attention now, but attention in the future as well. 

    Likewise, the attachment economy is the “sticky” version of the attention economy. 

    Unlike content subscription models, the attachment idea causes real harm. It threatens genuine human connection by providing an easier alternative, fostering addictive emotional dependencies on AI, and exploiting the vulnerabilities of people with mental health issues. 

    While the attention economy is still with us, a far more potent and dangerous trend is emerging where companies aim to hijack our humanity so that we’ll keep using their products. 

    AI disclosures: I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search to fact-check this article. I used a word processing product called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes.

Category: Hacking & Security

    Who profits from AI? Not OpenAI, says think tank

30 January, 2026 - 03:02

    Findings from a new study by Epoch AI, a non-profit research institute, appear to poke major holes in the notion that AI firms, and specifically OpenAI, will eventually become profitable.

The research paper, written by Jaime Sevilla, Hannah Petrovic, and Anson Ho, suggests that while running an AI model may generate enough revenue to cover its own operating costs, any profit is outweighed by the cost of developing the next big model. So, it said, “despite making money on each model, companies can lose money each year.”

    The paper seeks to answer three questions: How profitable is running AI models? Are models profitable over their lifecycle? Will AI models become profitable?

    To answer question one, researchers created a case study they called the GPT-5 bundle, which they said included all of OpenAI’s offerings available during GPT-5’s lifetime as the flagship model, including GPT-5 and GPT-5.1, GPT-4o, ChatGPT, and the API, and estimated the revenue from and costs of running the bundle. All numbers gathered were based on sources of information that included claims by OpenAI and its staff, and reporting by media outlets, primarily The Information, CNBC, and the Wall Street Journal.

    The revenue estimate, they said, “is relatively straightforward”. Since the bundle included all of OpenAI’s models, it was the company’s total revenue over GPT-5’s lifetime from August to December last year: $6.1 billion.

    And, they pointed out, “at first glance, $6.1 billion sounds healthy, until you juxtapose it with the costs of running the GPT-5 bundle.” These costs come from four main sources, the report said, the first of which is inference compute at a cost of $3.2 billion. That number is based on public estimates of OpenAI’s total inference compute spend in 2025, and assumes that the allocation of compute during GPT-5’s tenure was proportional to the fraction of the year’s revenue generated in that period.

The other costs are staff compensation ($1.2 billion), sales and marketing ($2.2 billion), and legal, office, and administrative costs ($0.2 billion).

    It’s all in the calculation

    As for options for calculating profit, the paper stated, “one option is to look at gross profits. This only counts the direct cost of running a model, which in this case is just the inference compute cost of $3.2 billion. Since the revenue was $6.1 billion, this leads to a profit of $2.9 billion, or gross profit margin of 48%, and in line with other estimates. This is lower than other software businesses, but high enough to eventually build a business on.”
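The paper’s arithmetic as reported above can be sanity-checked in a few lines of Python (all figures in billions of dollars; note that the final after-all-costs figure is derived here from the four reported cost items and is not stated in the paper):

```python
# Reported figures for the "GPT-5 bundle" (Aug-Dec), in $ billions.
revenue = 6.1
inference = 3.2        # inference compute
staff = 1.2            # staff compensation
sales_marketing = 2.2  # sales and marketing
admin = 0.2            # legal, office, and administrative

# Gross profit counts only the direct cost of running the models.
gross_profit = revenue - inference
gross_margin = gross_profit / revenue
print(f"gross profit ${gross_profit:.1f}B, margin {gross_margin:.0%}")  # 48%

# Subtracting all four cost lines (a figure derived here, not quoted in the
# paper) shows how the bundle can lose money despite a healthy gross margin.
net = revenue - (inference + staff + sales_marketing + admin)
print(f"after all costs: ${net:.1f}B")
```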

    In short, they stated, “running AI models is likely profitable in the sense of having decent gross margins.”

    However, that’s not the full story.

The paper then asked whether, even granting the argument that only gross margins should be considered when assessing profitability, the model paid for itself: “on those terms, it was profitable to run the GPT-5 bundle. But was it profitable enough to recoup the costs of developing it? In theory, yes — you just have to keep running them, and sooner or later you’ll earn enough revenue to recoup these costs. But in practice, models might have too short a lifetime to make enough revenue. For example, they could be outcompeted by products from rival labs, forcing them to be replaced.”

The key, the authors stated, is to compare that nearly $3 billion in gross profit against the firm’s R&D costs: “To evaluate AI products, we need to look at both profit margins in inference as well as the time it takes for users to migrate to something better. In the case of the GPT-5 bundle, we find that it’s decidedly unprofitable over its full lifecycle, even from a gross margin perspective.”

    As for the big question of whether AI models will become profitable, the paper stated, “the most crucial point is that these model lifecycle losses aren’t necessarily cause for alarm. AI models don’t need to be profitable today, as long as companies can convince investors that they will be in the future. That’s standard for fast-growing tech companies.”

    The bottom line, said the trio of authors, is that profitability is very possible because “compute margins are falling, enterprise deals are stickier, and models can stay relevant longer than the GPT-5 cycle suggests.”

    Asked whether the markets will stay irrational for long enough for OpenAI to become solvent, Jason Andersen, VP and principal analyst at Moor Insights & Strategy, said, “it’s possible, but there is no guarantee. I believe in 2026 you will see refinements in strategy from these firms. In my brain, there are three levers that OpenAI and other general-purpose AIs can use to improve their financial position (or at least slow the burn).” 

    The first, he said, is pacing, “and I think that is happening already. We saw major model drops at a slower pace last year. So, by slowing down a bit, they can reduce some of their costs or at the very least spread them out better. Frankly, customers need to catch up anyway, so they can plausibly slow down, so the market can catch up to what they already have.”

    The second, said Andersen, is to diversify their offerings, and the third involves capturing revenue from other software vendors.

    As to whether OpenAI and others can keep going long enough for AI to become truly effective, he said, “OpenAI and Anthropic have the best chance of going long and staying independent. But, that said, I also want to be cautious about what ‘truly effective’ means. If you mean truly effective means achieving AGI, it’s theoretical, so probably not without major breakthroughs in hardware and energy. But if ‘effective’ means reaching profitability over a period of years, then yes, those two have a shot.”

    The trick on the road to profits, he said, “will be finding a way to compete and win against companies that have welded their future to AI. Notably, Google, Microsoft, and X have now made their models inextricable to their other products and offerings. So, is there enough time and diversification opportunities to compete with them? My guess is that a couple pure plays will do well and maybe even disrupt the market, but many others won’t make it.”

Describing the paper’s findings as “very linear” and based on short-term analysis, Scott Bickley, advisory fellow at Info-Tech Research Group, said that OpenAI has been “pretty open about the fact they are not profitable currently. What they pivot to is this staggering chart of how revenues are going to grow exponentially over the next three plus years, and that’s why they are trying to raise $200 billion now to build up infrastructure that’s going to support hundreds of billions of dollars of business a year.”

    Many fortunes tied to OpenAI

    He estimated that OpenAI’s overall financial commitments, as a result of agreements with Nvidia and hyperscalers as well as data center buildouts, now total $1.4 trillion, and said, “They’re trying to make themselves too big to fail, to buy the long runway they’re going to need for these investments to hopefully pay off over the course of years, or even decades.”

    Right now, he said, the company is “shoring up the balance sheet. They’re trying to build everything they can to buy runway ahead. But either they wildly succeed beyond any of our imagination, and they come up with applications that I can’t envision are realistic today, or they fail miserably, and they’re guaranteed that everyone can buy a chunk of the empire for pennies on the dollar or something to that effect. But I think it’s either boom or bust. I don’t see a middle road.”

    As it currently stands, said Bickley, all major vendors have “tied their fortunes to OpenAI, which is exactly what Sam Altman wanted to have happen. He’s going to force the biggest players in the space to help him be successful.”

In the event the company did end up failing, he predicted the impact on companies using AI products it developed would be minimal. “Regardless of what happens to the commercial entity of OpenAI, the intellectual property that’s been developed, the models that are there, are going to be there. They’ll fall under someone’s control and continue to be used. They’re not in any danger of not being available.”

Category: Hacking & Security

New PDF compression filter will save space, needs software updates

29 January, 2026 - 20:57

    Brotli is one of the most widely used but least-known compression formats ever devised, long incorporated into all major browsers and web content delivery networks (CDNs). Despite that, it isn’t yet used in the creation and display of PDF documents, which since version 1.2 in 1996 have relied on the FlateDecode filter also used to compress .zip and .png files.

    That is about to change, though, with the PDF Association moving closer to publishing a specification this summer that developers can use to add Brotli to their PDF processors. The hope is that Brotli will then quickly be incorporated in an update of the official PDF 2.0 standard, ISO 32000-2, maintained by the International Organization for Standardization.

    With PDF file sizes steadily increasing, and the number stored in enterprise data lakes ballooning by billions each year, the need for a more efficient compression method has never been more pressing.

    The pay-off for using Brotli compression will be smaller PDFs. This will translate into an average of 10% to 25% reduction in file size, depending on the type of content being encoded, according to a 2025 test by PDF Association member Artifex Software.
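For context, the Flate round trip that PDFs rely on today can be sketched with Python’s standard zlib module, which implements the same DEFLATE algorithm behind FlateDecode. The Brotli call mentioned in the closing comment refers to the third-party `brotli` package and is illustrative only, not the PDF Association’s forthcoming API:

```python
import zlib

# A PDF content stream is typically Flate-compressed (the same DEFLATE
# algorithm used by .zip and .png, exposed in Python via zlib).
stream = b"BT /F1 12 Tf 72 720 Td (Hello, PDF) Tj ET\n" * 200

flate = zlib.compress(stream, level=9)
assert zlib.decompress(flate) == stream  # lossless round trip

ratio = len(flate) / len(stream)
print(f"original: {len(stream)} bytes, Flate: {len(flate)} bytes "
      f"({ratio:.0%} of original)")

# A Brotli-capable PDF processor would substitute a different codec for the
# same stream (e.g. the third-party brotli package: brotli.compress(stream)),
# typically shaving a further 10% to 25% off the Flate size for text-like data.
```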

    Unfortunately, for enterprises this is where the work begins. As PDFs written using Brotli compression start to circulate, anyone who hasn’t updated their applications and library dependencies to support it will be unable to decompress and open the new-format files. For PDFs, this would be a first: While the format has added numerous features since becoming an ISO standard in 2008, none have stopped users from opening PDFs.

    The most visible software requiring an upgrade to support Brotli includes proprietary PDF creators and readers such as Adobe Acrobat, Foxit PDF Editor, and Nitro PDF. PDF readers integrated into browsers also fall into this category.

    Beyond this, however, lies a sizable ecosystem of less-visible open-source utilities, libraries, and SDKs which are used inside enterprises as part of PDF workflows and automated batch processing. Finding and updating these components, often buried deep inside third-party libraries, promises to be time consuming.

If enterprises delay updating, they risk encountering PDFs created with newer, Brotli-capable software that will no longer open in their older, non-updated programs. IT teams will most likely come face to face with this when users contact them to report that they can’t open a file.

    Building Brotli support

    To kick off adoption, developers need encouragement, said Guust Ysebie, a software engineer with document processing developer Apryse. “Somebody has to jump first and make some noise so other products jump on the bandwagon,” he said.

    It’s a challenge because, as he explained in a post about the move to Brotli on the PDF Association’s website, Brotli’s adoption has been slowed because the PDF specification requires consensus across hundreds of stakeholders.

    The transition can be eased in three ways, he suggested, the simplest of which is to publicize the need to upgrade across multiple information sources as part of an awareness campaign.

    A more radical suggestion is that Brotli-enabled PDFs could be formatted such that, rather than cause older readers to crash, they could show a “not supported” error message encouraging customers to upgrade as a placeholder for the compressed content.

    A final tactic is for likeminded developers to take it upon themselves to upgrade open-source libraries. Ysebie said he’s added Brotli support to several libraries, including the iText SDK from Apryse.

    “This is how adoption works in real life: Create the feature unofficially, then early adopters implement it, and this causes bigger products to also adopt it,” said Ysebie. The critical moment for adoption of Brotli-enabled software would be its appearance in Adobe Reader. This will happen at some point, but when is still unclear, he said.

The good news is that because there are only a limited number of software libraries to upgrade, adding support to this software should be straightforward, said Ysebie. However, organizations will still have to roll those updated libraries into their current applications.

    As to when Brotli will be added to the ISO PDF 2.0 specification (ongoing since 2015), Ysebie agreed this has a way to go. But the industry had to move on from old technology at some point. “We need to push the ecosystem forward. It will be a little chaotic in the beginning but with a lot of potential for the future.”

Category: Hacking & Security

    Microsoft touts M365 Copilot momentum, claims 15M paid users

29 January, 2026 - 19:25

Microsoft has for the first time reported Microsoft 365 Copilot adoption stats, boasting this week of 15 million paid seats (individual user licenses). There are “multiples more enterprise chat users,” Microsoft CEO Satya Nadella said during the company’s earnings call Wednesday — a reference to Copilot Chat, a simplified version of the AI assistant available to Microsoft 365 customers at no extra cost. (Microsoft did not provide specific figures.)

    Analysts said the number of paid user seats lags expectations at this stage, given the company’s efforts to market Microsoft 365 Copilot and position it as central to its AI strategy.

    “Microsoft’s disclosure of 15 million Microsoft 365 Copilot paid users represents disappointing uptake of the tool — just 3.3% of the 450 million-strong Microsoft 365 user base, despite reorganizing the Microsoft 365 product and go-to-market around Copilot,” said J.P. Gownder, vice president and principal analyst at Forrester.

    “My take is that businesses are still trying to figure out the best way to use Microsoft 365 Copilot, and are hesitant to take on another expense without knowing how it will help their worker productivity,” said Jack Gold, principal analyst at J. Gold Associates. 

    He expects adoption to increase significantly over the next couple of years, though this will likely be tied to “contractual renewals and obligations that enterprises have to navigate, rather than simply adding on to existing contracts with Microsoft. It’s a similar situation to how enterprises viewed the migration to Microsoft 365 originally,” Gold said.

Microsoft 365 Copilot launched in late 2023 as a paid add-on for Microsoft 365 customers. It’s embedded into productivity applications such as Word, Teams, and Outlook, and is increasingly pitched as an AI agent that can perform tasks autonomously.

Despite significant business interest in Microsoft 365 Copilot and its potential to boost employee productivity, adoption has been sluggish. This is due to a range of factors, including a perceived lack of value and concerns about data security and governance.

    In its earnings call, Microsoft reported a three-fold increase in the number of customers with more than 35,000 seats, compared to last year. (The total number was not provided.) That rise included deals with Fiserv, ING, NAST, the University of Kentucky, the University of Manchester, the US Department of Interior, and Westpac. “Publicis alone purchased over 95,000 seats for nearly all its employees,” said Nadella. 

    In most cases, companies pay for a limited number of Microsoft 365 licenses, assigned to small groups within their workforce “due to the cost and lack of proven, measurable ROI,” said Gownder. 

    “So, while a lot of organizations have a few licenses, few organizations hold a lot of licenses,” he said.

    Microsoft also announced figures about individual usage. Microsoft 365 Copilot is becoming a “daily habit” for those with access to the AI assistant, Nadella said on the earnings call, with a 10-fold increase in daily active users compared to the previous year. In addition, the average number of user conversations with the AI assistant has doubled in the past year. In neither instance did Microsoft provide specific numbers.

    In the medium term, Gownder said, Microsoft is repositioning the value proposition of the Microsoft 365 Copilot paid product. While it’s mostly considered an AI personal assistant inside Office apps, it also offers license holders a “powerful cost-containment promise in a world of growing agentic AI,” he said.

    That’s because Microsoft 365 Copilot license holders gain unmetered access to Copilot Agents. “As organizations deploy more and more Copilot Agents, the value proposition of the license will broaden to include unmetered access to agents,” he said. 

    The company still needs to prove the tools’ worth to businesses — and users. “Unless Microsoft can improve the Microsoft 365 Copilot product, both as a personal assistant and as an enabler of agentic AI, it might continue to struggle with adoption,” said Gownder.

    During the earnings call, Microsoft highlighted increased revenue for its Microsoft 365 productivity suite, up 17%. The number of paid seats increased 6% year-on-year to 450 million, boosted by increased adoption among small and medium businesses. 

    Microsoft recently announced price increases for Microsoft 365 customers that will begin July 1.

Category: Hacking & Security

    Apple touts ‘unparalleled’ protection for M5 Macs

29 January, 2026 - 17:44

Apple overnight updated the Apple Platform Security guide, its Bible for everyone involved in Apple security. The new edition confirms that M5 Macs now benefit from rock-solid protections designed to defend them against some of the most sophisticated attacks.

    The guide confirms that Memory Integrity Enforcement (MIE) is now available for M5 Macs, as well as iPhones running A19 chips. First discussed in a blog post last September, MIE will “completely redefine the landscape of memory safety for Apple products,” the company said at the time. “We believe Memory Integrity Enforcement represents the most significant upgrade to memory safety in the history of consumer operating systems.”

    Unparalleled, always-on memory safety protection

    Last fall, Ivan Krstić, Apple’s head of security engineering and architecture, explained that MIE represents the culmination of half a decade of design and engineering work on Apple’s part. He also said MIE has been successfully tested against some of the most sophisticated mercenary spyware attacks Apple has encountered. 

What this means is that attacking iPhones and M5 Macs will become even more challenging, and far more expensive, than it already is. That’s not to say the prevention is foolproof — there are always new vulnerabilities in any form of protection. But raising the cost of creating these exploits is one way to reduce the number of potential attacks that can be made.

    “MIE is built in to Apple Silicon and offers unparalleled, always-on memory safety protection for key attack surfaces including the kernel, while maintaining the power and performance that users expect,” the updated guide explains. 

The idea behind the technology is that it dramatically constrains an attacker’s ability to exploit memory corruption vulnerabilities on Apple devices, which is a Very Good Thing (VGT).

    What else is new in the Apple Platform Security Guide?

    MIE isn’t the only security improvement included in the guide. Among other additions, it features new topic sections concerning quantum security, single sign-on (SSO), and satellite communications:

• Quantum security: Another VGT. Apple deployed post-quantum cryptographic protection (PQ3) in iMessage in iOS 17.4 and macOS 14.4. This protection against future quantum-based attacks has now been extended in iOS 26, iPadOS 26, macOS 26, tvOS 26, and watchOS 26, including the introduction of CryptoKit, which developers can use to help protect the software they offer on those platforms.
    • Platform SSO: This new section explains the different authentication mechanisms now in place for SSO, how they work, and how Apple’s systems interact with identity service providers.
    • Satellite: In addition to describing the core security architecture that protects satellite-based communications between Apple’s systems and an iPhone, the company confirms its use of encryption and pseudonyms to secure those messages.

    Apple also expanded a range of existing sections in the document, which ends with the customary set of links and contacts to security bounties and researchers and a table that effectively represents the extent to which the company continues to secure its platforms. Apple has also updated its platform security website.

    “For software to be secure, it needs to rest on hardware that has security built in,” the report says. “That’s why Apple devices — with iOS, iPadOS, macOS, tvOS, visionOS, and watchOS — have security capabilities designed into silicon. These capabilities include a CPU that powers system security features, as well as additional silicon that’s dedicated to security functions.” 

This end-to-end approach is one reason Apple’s platforms remain more inherently secure than rivals’. That’s not everything, of course; no matter how secure the platform happens to be, security can still be undermined by the weakest link in the chain, which is now and always has been the users of these devices. Apple’s commitment to security should not be seen as a rationale for complacency — though it is good to know the M5 Mac you’re about to upgrade to should be more secure than ever against surveillance-as-a-service attackers.

You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.

Category: Hacking & Security

    Chrome auto browse can help with work, says Google

29 January, 2026 - 14:47

    Google is looking to expand Chrome’s role in enterprise productivity with a new auto-browse feature built on its Gemini 3 model that it says can navigate websites, gather information, and process it, reducing manual data entry and repetitive clicks in professional workflows.

    The feature is available in preview to paying AI Pro and Ultra subscribers in the US through the Gemini interface in Chrome.

    It arrives as hyperscalers and leading model providers such as OpenAI and Anthropic are pushing AI deeper into enterprise workflows, seeking to automate routine tasks and processes to deliver measurable productivity gains.

    OpenAI made an early move in this direction in February 2024, showcasing software capable of operating devices autonomously, with Anthropic demonstrating its “computer use” capability in October that year. Since then, both companies have worked to fold these abilities into more refined products, either as standalone offerings or in Anthropic’s case, directly within Claude, its generative AI–based chatbot.

    Google too has been experimenting with browser and agent-based automation for some time, including Jarvis, revealed in October 2024 and now called Project Mariner, which explores more autonomous web navigation and task execution as part of Google’s broader push to make AI an active participant in enterprise operations.

    Jarvis was showcased just days after Anthropic showed off computer use and is now available to subscribers to the $250/month Google AI Ultra service as a prototype.

    Available in preview

    Chrome auto browse is available in the US as a preview for AI Ultra subscribers, and also for subscribers to the $20/month AI Pro service, a move that analysts say positions the browser as a lightweight productivity layer, aiming to streamline knowledge work and free employees from repetitive, time-consuming online tasks.

    Avasant principal analyst Abhisekh Satapathy welcomed Google’s inclusion of user supervision capabilities, noting that Gemini asks for confirmation before completing certain actions.

Pareekh Jain, principal analyst at Pareekh Consulting, focused on how easily it handles complex workflows.

    “It can handle complex multi-step web workflows like form filling and navigation, with enterprise use cases including expense processing (extracting receipts from portals), procurement quote aggregation across vendor sites, and CRM updates via SaaS interfaces,” he said.

    Development teams could see productivity benefits, he said: “This could unlock substantial gains through zero-code automation, letting operations teams in HR or Finance create mini-automations independently such as instructing it to go to the vendor portal, download January invoices, and save to this Drive folder, without developer delays.”

    Freed from such mundane work, Jain said, “Developers can then shift from crafting fragile web scraper scripts to authoring high-level agentic instructions, redirecting focus from clicks to desired outcomes, boosting efficiency across workflows.”

    Everest Group practice director Priya Bhalla said Chrome auto browse could have a more profound impact on developers’ thinking about designing the user experience: “Over time, this could shift how developers think about UX — optimizing not just for human users, but also for AI agents acting on their behalf.”

    Not without risks

    However, the analysts cautioned that Chrome auto browse may not be well-suited for mission-critical workflows.

    Enterprise systems involve authentication layers, role‑based controls, conditional logic, and custom interfaces, which are likely areas where Chrome auto browse could struggle, Jain said.

    “It relies solely on browser interactions without deep API or internal system integration. Also, it could be brittle on dynamic web pages prone to DOM changes,” Jain added.

Typically, an agent navigates a webpage via the Document Object Model (DOM), which represents the page’s structure, using it to locate elements such as buttons to click. In a dynamic webpage, the DOM might change frequently, creating challenges for the AI agent.
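A toy, standard-library sketch (not Google’s implementation) illustrates why DOM-driven automation can be brittle: a positional rule like “click the second button” silently goes stale when the page’s structure changes, while matching on visible text survives the redesign.

```python
from html.parser import HTMLParser

# Toy DOM walk: collect the visible text of every <button> on a page.
class ButtonCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.buttons = []
        self._in_button = False

    def handle_starttag(self, tag, attrs):
        if tag == "button":
            self._in_button = True

    def handle_data(self, data):
        if self._in_button and data.strip():
            self.buttons.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "button":
            self._in_button = False

def buttons(html):
    parser = ButtonCollector()
    parser.feed(html)
    return parser.buttons

v1 = "<form><button>Cancel</button><button>Submit</button></form>"
# The same page after a redesign adds a Help button at the front.
v2 = "<form><button>Help</button><button>Cancel</button><button>Submit</button></form>"

# A positional locator ("click the second button") silently goes stale...
assert buttons(v1)[1] == "Submit"
assert buttons(v2)[1] == "Cancel"
# ...while a locator keyed on visible text still finds the right element.
assert "Submit" in buttons(v2)
```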

    Beyond reliability and integration challenges, analysts also flagged potential security risks associated with delegating browser-level autonomy to AI agents.

    “These include handling authenticated browser sessions, interacting with untrusted external websites, and ensuring automated actions do not unintentionally submit incorrect or sensitive information,” Satapathy said. “In regulated environments, this can complicate audits and compliance reviews.”

Category: Hacking & Security