Computerworld.com [Hacking News]
SAS makes AI governance the centerpiece of its agent strategy
Enterprises are quickly moving from AI experimentation to deployment. But as agentic AI begins making more decisions, invoking more tools, and operating across fragmented data environments, visibility, governance, and trust can erode.
SAS laid out its answer to that problem at its annual conference, SAS Innovate, introducing a new family of copilots, agent frameworks, Model Context Protocol (MCP) plugins, and management tools to help enterprises operationalize AI without losing control of it.
“What we’re seeing here is really a shift from AI that informs to AI that acts,” Marinela Profi, the company’s global AI and generative AI market strategy lead, said at the event. “This is a significant leap, because it introduces new requirements around trust, around governance, around accountability.”
Interacting with agents more intuitively
To begin with, SAS today announced SAS Viya Copilot, a human-governed, conversational AI assistant embedded into its Viya platform. Integrated with Microsoft Foundry and operating within analytics workflows, it lets developers, data scientists, and other users instruct it in natural language to analyze data, build models, and make decisions across workflows.
“You have an expert assistant that allows you to take actions, ask questions, and help you navigate across the full analytical lifecycle,” Profi explained.
Its capabilities include: General Q&A across core Viya applications; production of documented and explainable AI-generated code; model pipeline guidance including recommendations and next steps; conversational dashboarding; and visual investigation with AI-assisted search and alert narratives. Copilot capabilities will eventually extend to data management, model management, and AI infrastructure, according to SAS.
The company is initially launching two Copilots: Asset and Liability Management (ALM), for developing scenarios, executing and interpreting financial risk workflows, and translating natural language inputs into analytic models; and Health Clinical Data Discovery, for analyzing data, creating cohorts, and investigating research papers and other medical documents.
SAS plans to expand Viya Copilot into additional industries, including banking and manufacturing, later this year.
Going beyond embedded AI assistants, SAS is providing tools and infrastructure to connect and govern internal and external agents. The new SAS Viya MCP server standardizes connections so external agents can safely access SAS tools, data, and models, using the large language model (LLM) or interface of their choice (Claude, GPT, Gemini), without having to create custom integrations, duplicate logic, or bypass controls.
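The point of an MCP server is that every tool call travels in the same JSON-RPC 2.0 envelope, so an agent built on any LLM can talk to it without a per-vendor integration. As a hedged illustration, the sketch below builds an MCP-style `tools/call` request; the tool name `score_model` and its arguments are invented for the example and do not refer to an actual Viya tool.

```python
import json

# Hypothetical MCP tool invocation. MCP messages are JSON-RPC 2.0; the
# "tools/call" method and its params shape come from the MCP spec, but the
# tool name and arguments here are made-up placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "score_model",  # imagined server-side tool
        "arguments": {"model_id": "credit_risk_v3", "rows": 100},
    },
}

# The agent serializes this once; any MCP-compliant server can parse it.
payload = json.dumps(request)
```

Because the envelope is standardized, swapping the backing LLM (Claude, GPT, Gemini) changes nothing about how the tool is invoked, which is what lets governance controls sit at the server rather than in each integration.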
“The Copilot is not only answering questions for you, it can invoke capabilities across Viya in a more structured way,” Profi said.
In addition, a new Agentic AI Accelerator provides a collection of code, interfaces, components, and best practices that allow teams across skill levels (developers, low-code or no-code users) to design, build, deploy, and manage agents within SAS Viya, she explained.
Current Viya users can access both the MCP server and AI Accelerator via GitHub.
Maintaining human judgment
SAS continues to emphasize the importance of oversight, trustworthy AI, and human-in-the-loop control.
Furthering this mission, the company is introducing SAS AI Navigator. The Software-as-a-Service (SaaS) tool helps enterprises inventory, govern, and apply policies to underlying AI models.
Available in Q3 2026 on Microsoft Azure Marketplace, the platform will offer an end-to-end view of all AI models and tools in use in an enterprise, whether built in-house or provided by third parties. Using it, enterprises will be able to apply internal policies and external regulations and frameworks to AI use cases.
“It’s giving visibility into your AI inventory,” Reggie Townsend, VP of SAS’ data governance and ethics practice, said at today’s event. “But it also answers the really basic question: How are we doing?”
Enterprises want “enough data at a glance” to consider tension points when they’re juggling factors like reputation, efficiency, and cost, he pointed out. They’re also viewing trust as a new business differentiator, even as a currency.
Navigator started with a really simple idea, he noted: “What happens if we can make being responsible irresistible?” AI governance is one way to preserve human judgment amidst what he called “tech asymmetry.”
Technology unevenness has been a long-standing problem: while there’s strong technical capability, enterprises struggle to adapt to the pace of change at scale. “What folks need to do is try to translate some of these capabilities into a sustainable business advantage,” said Townsend.
As AI capabilities (and offerings) continue to expand, he urged users to gain “sufficient literacy,” approach AI with curiosity, and think critically about how evolving tools can apply to both business and personal life.
“In an emerging landscape like this, we’ve got to suspend certainty,” he said. “Certainty breeds rigidity, and rigidity suspends this idea of nuanced judgment, which we need right now.”
The next chapter of AI is about scaling that judgment, governing at speed, and turning trust into that competitive advantage, he emphasized.
Getting to the right enterprise data
Enterprise data can be fragmented across many different ecosystems (on-prem, in legacy infrastructure, or in private or public clouds), noted SAS industry market lead Alyssa Farrell. Beyond that, she said, “[enterprises] have low trust in the data itself, which is leading to low trust in decisions.” Further, performance constraints can hamper AI progress.
To address these issues, SAS today announced a targeted refresh of SAS Data Management, its cloud-native portfolio built on the Viya platform, adding or expanding capabilities for AI-ready data management, governance by design, agentic AI and copilots, and cloud-native analytic acceleration. It provides lineage, transparency, and control capabilities within the workflows where data is accessed, prepared, and activated, Farrell explained.
“Agents and AI crave data more than ever before,” she said. “It’s really important that organizations get this right from the beginning, especially if they’re adding automation to that decision process.”
The re-architected platform grounds AI in trusted data, making raw data assets usable for AI. Notably, it brings analytics and AI to the data itself through SpeedyStore, the company’s cloud-native analytical data platform, negating the need to move volumes of data for processing, Farrell explained. Enterprises still retain digital sovereignty and can control workflows across their various data stores.
“We’re making sure our customers have everything they need to meet this moment [and] tools that access the data, manage the data and gain value from it,” Farrell noted. “They can really proceed at scale to operationalize AI with confidence.”
This article originally appeared on CIO.com.
Can Apple’s new CEO turn things around?
When Apple rolled out hardware chief John Ternus as the CEO to replace Tim Cook, the reaction was kind but muted. That’s because Ternus has said nothing yet to indicate he has a specific plan to position Apple for the future. (To be fair, he’s said next to nothing about anything — no easily found social media posts, no big speeches about anything beyond hardware, no major interviews showcasing his vision.)
I have long been a fan of Apple, but the “i” people have a lot of problems. Their failure to make Apple an AI leader — not the leader, just a leader — has dominated headlines for two years now. But the truth is that Apple has spent years without the passion and drive that marked the second coming of Steve Jobs as CEO.
The clearest example involves the iPhone and the Apple Watch. I used to routinely upgrade my devices once a year, or at least every two years. I am sitting here now with an iPhone 13 Pro Max and an Apple Watch Series 7, the same devices I’ve had for almost five years.
Each year, I’d get excited about Apple’s new devices and look for just one clean reason to upgrade. I didn’t find it. The promise of AI was intriguing, but Apple didn’t deliver. The iPhone camera kept getting better, but my photos look just fine already.
Apple did deliver one feature that would have made me upgrade: allowing an iPhone to record and quickly transcribe calls. But the company then rolled it out to all devices, meaning it offered little to push new iPhone sales. (Of course, Apple never bothered to tell users the transcription feature has a roughly 30-minute limit. For a guy who often does hour-long interviews, that’s a problem; I’m forced to stop a recording at the 25-minute mark and reactivate it. *Sigh*)
As for AI, I would love for the iPhone to actually be intelligent about all of the data swimming within its case. For example, as a reporter, I have apps for a large number of news organizations. On one election night, I got 16 alerts that a Senate race had been called. I don’t need 16; I just need one. If Apple Intelligence were really intelligent, it would understand that. It should also understand that when I’m driving to an appointment, I don’t need a calendar alert 15 minutes before my meeting when the phone should know — based on my destination and routing in Apple Maps — that I’m on the way.
All those little missteps add up. One of the critical talents a CEO at a company as large as Apple needs is either vision or a passion that can pass for vision.
This brings us to the inevitable comparison between Jobs and Cook. Jobs was passionate, persuasive, and inspirational, and he truly had a plan for future products based on his gut feeling of what users would want or need. But Jobs was also undisciplined, harsh, and abrupt, someone who wasn’t always worried about the truth.
He was, therefore, a great business leader, but he had help. (Keep reading for more.)
Cook was nearly the opposite of Jobs. He was precise, methodical, and detail-oriented, and for the most part he treated people well and with respect. But his speeches were lackluster, and I have yet to meet anyone who dubbed him electric or inspirational. He was privately passionate about his work, but that passion rarely surfaced in public.
Here’s my point about Jobs’ success: He did so well because he had Cook as a senior deputy. Having the ultimate technocrat in place allowed Jobs to focus on the bigger-picture future.
There’s been chatter on LinkedIn suggesting that Cook was a weaker CEO than Jobs. There’s a valid argument for that, but many do not give credit to Cook for helping Jobs perform as well as he did.
Earlier in Cook’s tenure, he did have one executive with a healthy chunk of the Jobs passion: Jony Ive. But Ive got tired of the technocratic nature of his boss and left in 2019 to work elsewhere. Turns out the best leadership duo is a visionary CEO with a technocrat deputy. It doesn’t seem to work the other way around.
Customers and employees also want to see passion and vision from a CEO directly. And that brings us to the upcoming change. Can Apple under CEO Ternus get its AI act together? That is the big mystery.
Apple certainly has the money and the clout to make AI work from either side of the buy/build path. But does it have a vision of what customers want — or more precisely, what they need? Jobs had the knack for correctly guessing what customers would want once they got it, even if they didn’t yet know they needed it.
Justin Greis, CEO of consulting firm Acceligence and former head of the North American cybersecurity practice at McKinsey, sees Ternus as an executive “who has also [along with Cook] been heads down on execution mode his entire career and he’s an insider. He knows how to keep (Apple) in its lane.”
Greis goes with the crowd in pinning most of his Apple hopes on AI. “If you look at the big AI companies, Apple is not on the map. Everybody is outpacing them. Siri simply doesn’t have the power that is needed to be valuable for their end-users.”
The AI magic is really not about simply using AI on-device. It’s about the value that can be delivered by a sophisticated integration of literally every piece of information coursing through a phone, your watch, a Mac or an iPad.
A few years ago, people saw Apple as a gatekeeper controlling access to Siri. Back then, the assumption was that access to Siri would be worth tons of money. No longer. Plenty of people now use their iPhone to access generative AI offerings from a variety of Apple’s AI rivals.
Apple can still win the AI mindshare battle, but only if it can truly deliver intelligent integration of everything that interacts with the phone. That package could be offered solely through Siri, allowing Apple to again control the almighty gateway. Sure, an iPhone user can access Claude or Perplexity — but if only Apple’s knighted partner can analyze your calendar, your contacts, your call history, your travel plans, your bank account, your photos, etc.— companies will again be willing to pay for access.
That’s where Apple gold lies. The question is whether Ternus can mine it.
Enterprises need to think beyond GPUs for agentic AI, analysts say
The ongoing shift from generative AI (genAI) to agentic AI provides an opportunity for enterprises to move to more nimble and less expensive forms of computing, according to analysts.
Early AI models were largely built on expensive GPUs from Nvidia and AMD that offered raw processing power. But newer agentic AI tools, rooted in business process and workflow management, can run on more efficient, cost-effective hardware.
As a result, IT decision-makers who still think they require GPUs for anything AI-related need to reconsider their hardware options in terms of both cost and capabilities, analysts said.
“A better way of thinking about this is the cost of AI compute and now agentic AI platform services or systems,” said Leonard Lee, principal analyst at Next Curve. “’AI computing’ or ‘accelerated computing’ has clearly transcended the GPU as an inference accelerator.”
The new hardware options include CPUs and specialized AI chips, also known as ASICs in semiconductor parlance. Although these chips have been around for years, they are now showing real utility as agentic AI goes mainstream.
For one, the CPU — the main chip in any computer — is seeing something of a revival. “The CPU is reinserting itself as the indispensable foundation of the AI era. The CPU now serves as the orchestration layer and critical control plane for the entire AI stack,” Lee said.
CPUs are both power efficient and well-suited for AI on the edge, although specialized low-power chips are more capable depending on the task, said Jim McGregor, principal analyst at Tirias Research. “It will still be more efficient to use an ASIC instead of a CPU, and in most cases it will be less expensive over the life of a platform,” he said.
The growth of inference provides an opening for optimized AI accelerators, which can handle those jobs more efficiently than GPUs, said Mike Feibus, principal analyst at FeibusTech. “…The relative importance of [the] CPU is rising.”
Nvidia — sensing that it needed a low-power chip beyond its power-hungry GPUs — has already introduced an ASIC for inferencing in its hardware stack. And it recently licensed AI chip technology from Groq for $20 billion.
Because agentic AI involves a different computing model than genAI training on GPUs, enterprises need to consider the hardware options and pricing models available through cloud providers. “It’s more about model management than about model building — and the CPU is critical in providing workflow management,” said Jack Gold, principal analyst at J. Gold Associates.
Pricing variations continue to be an issue. Straight CPU compute is not billed the same as heavy GPU use, making it difficult to nail down costs, Gold said. “GPUs in training use more electricity generically due to near 100% utilization in a training workload, whereas in general-purpose compute, servers and CPUs run more like 40% to 60% utilization,” he said. “But it’s highly variable depending on what the agent is doing.”
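Gold’s utilization point can be made concrete with a back-of-envelope calculation. All figures in the sketch below are illustrative assumptions, not measurements from any vendor: a training GPU pinned near 100% utilization versus a general-purpose CPU server averaging around 50%.

```python
# Rough energy sketch of the utilization gap between training GPUs and
# general-purpose CPU servers. Wattages and utilization figures are
# illustrative assumptions only.
def energy_kwh(power_watts: float, utilization: float, hours: float) -> float:
    """Approximate energy drawn at a given average utilization."""
    return power_watts * utilization * hours / 1000.0

# One hypothetical high-end accelerator running flat-out for a day:
gpu_kwh = energy_kwh(power_watts=700, utilization=1.0, hours=24)  # 16.8 kWh

# One hypothetical two-socket CPU power budget at ~50% average load:
cpu_kwh = energy_kwh(power_watts=350, utilization=0.5, hours=24)  # 4.2 kWh
```

Under these invented numbers the GPU draws roughly four times the energy per day, which is the shape of the cost gap Gold describes, even before the “highly variable depending on what the agent is doing” caveat.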
Gold predicts that 80% to 85% of AI workloads will move to inference in the next two to three years, especially as tools become more agentic. (Inference means moving away from GPUs, which are better used for training, to CPUs, which are more efficient for simpler AI tasks.)
“CPUs take on a major significance in making everything work. It’s why all the hyperscalers are now loading up on CPUs, not just GPUs,” Gold said.
Major cloud providers Google, Amazon, and Microsoft, for instance, have their own CPUs and low-power ASICs for inferencing.
What looks at the moment like a resurgence in CPU demand is actually pointing to a larger issue: the growing complexity of AI infrastructure, said Gaurav Shah, vice president of business development and strategic partnerships at NeuReality.
The overhead around data movement, orchestration and networking is exploding, Shah said. “That’s what’s driving demand — not CPUs doing more AI, but systems struggling to keep up with AI,” Shah said.
Beyond enterprises, genAI companies, AI-native companies and neoclouds all will need to rethink their architecture. “The winners will be the architectures that deliver the most inference per watt, not the most cores per server,” Shah said.
Fleet hopes to be the MDM provider for the AI Era
Fleet, the independent, open-source, multi-platform MDM service, recently announced its new partner program for VARs and MSPs serving enterprise customers and recruited MobileIron co-founder Suresh Batchu to serve on the company’s board. With those moves in mind, I caught up with company CEO Mike McNeil to find out more about Fleet’s plans.
Given the company’s roots in open source, working with partners is a good way to enable it to support a variety of enterprise needs, with resellers and MSPs playing an active role in customizing the core solution for those requirements.
Fleet and the Mac
Fleet is just as happy managing Macs as it is Linux systems and integrates well with existing tools — as long as they support open standards and APIs. This gives it a unique insight into Apple device adoption in the enterprise.
McNeil confirmed that both Apple and Linux systems are seeing rapid increases in deployment. “The new MacBook Neo is now cheaper than comparable PCs, so Apple adoption is increasing, but so are other OS options like desktop Linux,” he said. (Desktop Linux reached 3.16% market share in March, says StatCounter, while OS X hit 9.52% and Windows fell to 60.8%.)
That’s not to say migration to any platform is always easy. “I spoke to an IT director yesterday from a casino company whose team had bought a couple of Neos and tried enrolling them in Microsoft Intune, but gave up,” McNeil told me. This was because they hit an unrelated bug with their traditional MDM, didn’t have great diagnostics to work with, and the IT director then “assumed” that it must be because the Neo wouldn’t work for enterprise use. As it turns out, the issue was with the MDM, McNeil said.
“At Fleet, we’ve enrolled MacBook Neos ourselves with no problems, and seen customers do the same,” he said. “Enterprises are usually mixed OS environments, and [MDM] solutions limited to a single ecosystem, like Jamf that’s Apple only, are pretty restrictive.”
Why partnerships matter
“Enterprises are very particular, and they often operate in vastly different ways,” said McNeil. “For example, there are many, many ways to automatically make sure employees can get on to a Wi-Fi network or a VPN on their first day at work.”
Fleet, he said, works to balance needs between different parts of a company – infosec and IT, for example. “We optimize for baby steps, small iterations,” McNeil said, pointing out that new features are documented and explained as they are introduced.
“The first generation of device management was built for control and compliance,” said Batchu. “The next generation needs to be built for speed, automation, and how modern teams actually operate. Fleet is taking a fundamentally different approach with infrastructure as code and AI-driven workflows, and I’m excited to help shape that direction.
“In 2026, every company needs to do more with less. Budgets are shifting towards AI and innovation, forcing leaders to extract more value from existing infrastructure. Some IT estates have been around for 20, 30, 35 years, and organizational structures, technical debt, and even entire jobs exist just to keep the lights on. But when you suddenly go from patching monthly to patching in hours, something has got to give.”
He argued that the adoption of a partnership model should help companies move through digital transformation with Fleet while maintaining tight budgets. Partners can help train employees and better understand the context of company need.
It’s also about making sure things are usable. Citing the “Concur” effect, which he describes as a product designed to satisfy high-level stakeholder requirements rather than the needs of those actually using the software, McNeil says he has a “personal vendetta” against complexity in software design.
What will enterprises need?
It’s a move to make every platform easy to manage using powerful tools optimized for the unique needs of customers. “By 2030, IT will need reliable infrastructure that works with the productivity and security tools they’re already using throughout their business.” IT and security teams won’t want separate platforms for each OS or function, and they’ll want to use chat to get projects started.
AI is a constant. At least one current Fleet customer now has tens of thousands of computers running AI agents and recently gave each of its employees a headless “claw” — a powerful AI agent based on OpenClaw, the free, open-source AI agent software that is accessed via remote computers.
Fleet helps IT recognize the use of shadow AI tools across the business, as well as tracking other app installs, licenses, and use. “So whether you want to find out who’s using the Claude app, who’s using shadow AI tools they shouldn’t be using, or just how many extra, expensive Bloomberg terminal licenses you’re paying for that aren’t actually getting used, you can do that in Fleet, right from your MDM.”
As McNeil sees it, the emerging AI services environment favors Linux for AI, with other platforms the province of human workers. “I don’t think we’ll see a world where most human users are running desktop Linux in five years, but I wouldn’t be surprised if Microsoft and Apple are neck and neck in the enterprise by then,” he said.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Xiaomi releases MIT‑licensed MiMo models for long‑running AI agents
Xiaomi has released and open-sourced MiMo-V2.5 and MiMo-V2.5-Pro under the MIT License, giving developers another potentially lower-cost option for building AI agents that can run longer tasks such as coding and workflow automation.
Both models support a 1-million-token context window, the company said. MiMo-V2.5-Pro is designed for complex agent and coding tasks, while MiMo-V2.5 is a native omnimodal model that supports text, images, video, and audio.
The release comes as agentic AI workloads are putting new pressure on enterprise AI budgets. These systems can burn through large numbers of tokens as they plan, call tools, write code, and recover from errors, making cost and deployment control increasingly important for developers.
By using the MIT License, Xiaomi said it is allowing commercial deployment, continued training, and fine-tuning without additional authorization. Tulika Sheel, senior vice president at Kadence International, said the MIT License can make the models attractive to enterprises. “It allows enterprises to freely modify, deploy, and commercialize the model without restrictions, which is rare in today’s AI landscape,” Sheel said.
“On ClawEval, V2.5-Pro lands at 64% Pass^3 using only ~70K tokens per trajectory — roughly 40–60% fewer tokens than Claude Opus 4.6, Gemini 3.1 Pro, and GPT-5.4 at comparable capability levels,” Xiaomi said in a blog post.
The models use a sparse mixture-of-experts (MoE) design to manage compute costs. The 310-billion-parameter MiMo-V2.5 activates only 15 billion parameters per request, while the 1.02-trillion-parameter Pro version activates 42 billion. Xiaomi said the Pro model’s hybrid attention design can reduce KV-cache storage by nearly seven times during long-context tasks.
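The economics of a sparse MoE design come down to what fraction of the model’s weights are actually exercised per request. Using only the parameter counts Xiaomi disclosed, a quick sketch:

```python
# Fraction of a sparse MoE model's parameters activated per request,
# using the parameter counts from Xiaomi's announcement.
def active_fraction(total_params_b: float, active_params_b: float) -> float:
    """Share of total weights exercised on a single request."""
    return active_params_b / total_params_b

mimo = active_fraction(total_params_b=310, active_params_b=15)       # ~4.8%
mimo_pro = active_fraction(total_params_b=1020, active_params_b=42)  # ~4.1%
```

Under 5% of weights active per token is what lets a trillion-parameter model run with compute closer to that of a mid-sized dense model, which is the cost story Xiaomi is telling here.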
Xiaomi cited several long-horizon tests, including a SysY compiler in Rust that MiMo-V2.5-Pro completed in 4.3 hours across 672 tool calls, passing 233 of 233 hidden tests. It also said the model produced an 8,192-line desktop video editor over 1,868 tool calls across 11.5 hours of autonomous work.
Will enterprises adopt MiMo?
Whether Xiaomi’s MiMo-V2.5 models can gain adoption among enterprise developers over closed frontier models for agentic coding and automation workloads will depend on how enterprises evaluate performance, cost, and risk.
“When assessing Xiaomi’s MiMo-V2.5 and its variants, enterprise developers should look at the total cost of ownership,” said Lian Jye Su, chief analyst at Omdia. “The TCO consists of token efficiency, cost per successful task, and the absence of licensing costs associated with proprietary models. Closed frontier models may still win on generic tasks, and the hardest edge cases, but open-weight models excel in agentic work that is high-volume in nature.”
Pareekh Jain, CEO of Pareekh Consulting, said enterprises should assess MiMo-V2.5 less as a replacement for Claude or GPT and more as a cost-efficient agent model for high-token workloads.
“The key benchmark signal is not just accuracy, but tokens per successful task,” Jain said. “Frontier models often reach higher success rates on complex coding benchmarks, but do so with massive reasoning overhead. MiMo-V2.5 is designed for Token Efficiency, meaning it achieves comparable results with significantly fewer input and output tokens.”
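Jain’s “tokens per successful task” metric can be sketched as expected spend per success: if a task is retried until it succeeds, expected attempts are 1 / success_rate. The prices and success rates below are invented placeholders, not vendor quotes; they only illustrate how a cheaper, token-efficient model can win on TCO despite a lower per-attempt success rate.

```python
# Hedged sketch of cost per successful task. Token prices and success
# rates are illustrative assumptions, not real benchmark or pricing data.
def cost_per_success(tokens_per_attempt: float,
                     price_per_m_tokens: float,
                     success_rate: float) -> float:
    """Expected spend per successful task, assuming independent retries
    until success (expected attempts = 1 / success_rate)."""
    per_attempt = (tokens_per_attempt / 1_000_000) * price_per_m_tokens
    return per_attempt / success_rate

# Hypothetical frontier model: pricier tokens, heavier reasoning, higher
# per-attempt success rate.
frontier = cost_per_success(120_000, price_per_m_tokens=15.0, success_rate=0.70)

# Hypothetical token-efficient open model: fewer tokens, cheaper tokens,
# somewhat lower success rate.
efficient = cost_per_success(70_000, price_per_m_tokens=2.0, success_rate=0.64)
```

With these made-up numbers the efficient model costs an order of magnitude less per completed task, which is the “economic workhorse” argument in concrete form.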
Jain said that could make MiMo-like models useful as “economic workhorses” for repetitive coding, QA, migration, documentation, testing, and automation workloads, while closed frontier models remain the quality ceiling for the hardest tasks.
Ashish Banerjee, senior principal analyst at Gartner, said models like MiMo could materially shift enterprise AI economics for long-horizon agents.
“When tasks stretch into millions of tokens, metered proprietary APIs stop looking like a convenience and start looking like a tax on iteration,” Banerjee said. “By contrast, MiMo’s MIT license, open weights, 1M-token context window, and relatively low pricing make private-cloud or self-hosted deployment strategically credible.”
However, Banerjee said this does not mean enterprises will abandon proprietary APIs.
“Enterprises will continue to use proprietary APIs for frontier accuracy and low-operations consumption, while shifting scaled, repeatable agent workflows toward open models where cost predictability, data control, and customization matter more,” Banerjee said. “In short, long-horizon, high-volume agentic AI will evolve into a hybrid market, with open models like MiMo breaking pure API dependence.”
Su added that adoption may face challenges because Chinese-origin models can trigger concerns in regulated Western organizations.
Why simplicity is the silent driver of hybrid workplace success
Hybrid work has reshaped how and where people collaborate. Offices are no longer the default destination for every interaction, yet they remain essential for moments that require focus, alignment, and human connection. In this reality, meeting rooms play a pivotal role, not because of the technology they contain, but because of how effortlessly people can use it.
The most successful hybrid workplaces share a simple truth: the best technology is the one that remains invisible in the room. When collaboration tools fade into the background, people can focus on ideas rather than interfaces. When they do not, friction quickly erodes adoption, productivity, and trust.
One experience across every space
Employees move between different meeting spaces throughout the day, from huddle rooms and project spaces to larger conference rooms. When each room comes with a different setup, interface, or connection flow, every meeting starts with uncertainty. Time is lost, confidence drops, and technology becomes a problem rather than an enabler.
Complexity is one of the main barriers to adoption in hybrid environments. Organizations struggle with underutilized rooms, inconsistent setups, and management overhead that grows with every additional configuration. The result is predictable: people avoid certain rooms altogether or fall back on ad-hoc workarounds.
A consistent, intuitive experience across all meeting spaces changes that dynamic. When users know exactly what to expect, regardless of room size or location, adoption increases naturally. Meetings start on time, collaboration flows more smoothly, and IT teams receive fewer support requests.
Technology as an enabler
The Flemish Government offers a powerful example of this principle in practice. In its Brussels hub, technology was deliberately positioned as an enabler for collaboration, not as a focal point. The goal was not to impress users with features, but to make connections effortless across more than a thousand meeting spaces.
By standardizing the collaboration experience with ClickShare solutions, employees could walk into any room and start collaborating and videoconferencing without instructions or training. This approach supported a people-driven hybrid workplace where flexibility and ease of use helped employees feel confident and connected, wherever they worked.
This emphasis on simplicity did more than improve user satisfaction. It removed friction at scale, allowing thousands of employees to collaborate in the same way, every time. Technology became something people relied on, rather than something they had to think about.
Higher adoption, lower IT burden
From an IT perspective, intuitive user experiences are not just a usability win. They are an operational advantage. Every extra step, cable, or configuration option increases the likelihood of errors and support tickets. Every exception to the standard creates additional management overhead.
Flexible, easy-to-deploy meeting room solutions reduce that burden. Organizations increasingly favor modular approaches that can be adapted to different spaces without introducing new user experiences or management models. This consistency simplifies deployment, monitoring, and updates, while giving IT teams greater control and predictability.
The outcome is a virtuous cycle. When users trust technology, they use it more. When they use it correctly, IT spends less time troubleshooting and more time optimizing. Adoption and manageability reinforce each other.
Designing for people, not just rooms
Ultimately, simplicity in the hybrid workplace is about designing for human behavior. People want to collaborate, share ideas, and move quickly between spaces. They do not want to learn new systems or adapt their workflows to the room they happen to be in.
Meeting room technology should respect that reality. By offering one intuitive experience across every space, organizations remove barriers to collaboration and create environments people want to use. As the Flemish Government experience demonstrates, when technology like ClickShare quietly supports collaboration instead of demanding attention, it becomes a true catalyst for hybrid work success.
In the end, the most advanced meeting room is not the one with the most features. It is the one people barely notice at all.
Why security matters in the meeting room
For years, meeting room technology was evaluated primarily on ease of use and audiovisual quality. If people could walk in, plug in, and start presenting, the job was considered done. That mindset no longer holds. Today’s meeting rooms are deeply connected to digital environments, and security has become a business-critical concern rather than a technical afterthought.
According to IDC, 50.8% of organizations now rank security as the most important factor when selecting collaboration and videoconferencing technology, ahead of price or quality considerations. That shift reflects a broader reality: what happens in meeting rooms has direct implications for data protection, regulatory compliance, operational resilience, and corporate trust.
The meeting room as an expanded attack surface
Hybrid work has fundamentally changed the role of the meeting room. It is no longer a closed, isolated space. Instead, it has become a convergence point where corporate networks, cloud services, collaboration platforms, and personal devices meet. Content is shared wirelessly, participants join remotely, and devices are connected dynamically, often by non-IT users.
This evolution significantly expands the attack surface. Collaboration environments are increasingly targeted because they combine sensitive data with high connectivity and frequent user interaction. Risks range from unauthorized access and data interception during wireless sharing to malware propagation via unmanaged or personal devices. In hybrid scenarios, these risks are amplified by blurred boundaries between secure corporate environments and external networks.
As a result, meeting room security can no longer be treated separately from the broader enterprise security strategy. Any vulnerability introduced in a meeting space can ripple across the organization.
Regulation moves meeting rooms into the spotlight
At the same time, regulatory pressure is intensifying. Across Europe, new and evolving frameworks such as NIS2, the RED Delegated Act, and the Cyber Resilience Act are raising the bar for connected devices. These regulations introduce mandatory requirements that span the entire product lifecycle, from secure design and development to patching, vulnerability management, and end-of-support practices.
Meeting room solutions clearly fall within scope. They process sensitive corporate information, connect to enterprise networks, and often rely on wireless and cloud-based technologies. Non-compliance is no longer a theoretical risk. It can lead to financial penalties, operational disruption, and reputational damage.
International standards like ISO/IEC 27001 further reinforce this shift by defining best practices for information security management, risk assessment, and operational trust. Together, these frameworks signal a clear message: security in collaboration environments is now a governance issue as much as a technical one.
Security without usability is a false promise
However, strong security alone is not enough. When security controls disrupt the user experience, employees look for shortcuts. Shadow IT, unsecured workarounds, and bypassed controls often emerge not from negligence, but from friction.
In meeting rooms, this risk is particularly acute. Meetings are time-sensitive, social, and often involve external participants. If connecting securely feels complex or restrictive, users will prioritize speed and convenience over policy compliance. Paradoxically, that increases risk rather than reducing it.
This is why security must be built in by design, not bolted on. Secure-by-design solutions embed encryption, authentication, access control, and update mechanisms into the core architecture, while keeping the user experience intuitive. Such approaches reduce reliance on manual processes and minimize the temptation for unsafe shortcuts, enabling secure collaboration without compromising productivity.
From IT checkbox to business enabler
The most forward-looking organizations now treat meeting room security as a strategic enabler. Secure, compliant collaboration environments build trust with customers and partners, support regulatory readiness, and reduce operational risk over time. IDC notes that 70% of CIOs cite risk mitigation as a top priority, reflecting the growing recognition that resilience is a competitive differentiator, not just a defensive measure.
Importantly, this shift also changes how decisions are made. Meeting room technology can no longer be selected in isolation by facilities or procurement teams. Excluding IT expertise from these decisions can compromise not only meeting rooms, but the entire digital workplace. Security, usability, and integration must be evaluated together, through a cross-functional lens.
Security as the foundation of modern collaboration
As meeting rooms continue to evolve, one principle becomes clear: security is no longer something you add later. It is the foundation that enables safe, scalable, and human-centric collaboration. Organizations that align regulatory requirements, recognized security standards, and enterprise-grade protection with friction-free user experiences are better positioned to support hybrid work, protect sensitive information, and earn long-term trust.
In today’s workplace, a secure meeting room is not just a safer space. It is a smarter one.
Can everyday IT decisions turn sustainability from intent into impact?
Sustainability strategies often start with ambition. Net‑zero targets, ESG frameworks, and environmental KPIs signal intent at leadership level. Yet whether those ambitions translate into real progress depends largely on what happens much closer to day‑to‑day operations. In practice, sustainability is shaped by the everyday technology decisions IT teams make.
According to a Barco ClickShare survey, 96% of IT leaders believe their department’s actions make a meaningful contribution to global sustainability, and 98% agree that IT should lead the way in achieving their organization’s sustainability goals. Sustainability has clearly moved from the margins to the core of the IT agenda. The challenge is no longer awareness, but execution.
Sustainability lives in routine decisions
Much of the sustainability debate still focuses on large‑scale initiatives such as data centers, AI workloads, or cloud optimization. While those areas matter, the research highlights a less visible but equally powerful driver: routine IT purchasing and deployment decisions.
Hardware selection, device lifecycles, software updates, and meeting room technology all influence energy consumption, electronic waste, and long‑term resource efficiency. These decisions are repeated across organizations every year, often across hundreds or thousands of devices. Individually, they may seem small. Collectively, they define the environmental footprint of the digital workplace.
As a result, sustainability is now ranked alongside security and cost as a key consideration in IT purchasing decisions. This shift reflects a growing understanding that frequent replacements, fragmented solutions, and short product lifecycles quietly undermine sustainability goals, even when corporate commitments look strong on paper.
Motivation is high, but IT cannot act alone
The research also reveals how personal sustainability has become for IT leaders. Eighty‑two percent say they would not accept a role at an organization without a strong sustainability track record, underlining how closely environmental values are tied to professional identity in IT.
Yet motivation alone is not enough. Sustainable choices often require cross‑functional alignment, credible information, and long‑term thinking in procurement processes that are still driven by short‑term constraints. Without organizational support, sustainability risks becoming an added burden rather than a shared objective.
A real‑world example of sustainability by design
The Flemish Government illustrates how sustainability can be embedded into everyday technology decisions when it is treated as a collective responsibility. During the renovation of its Brussels hub, the Agency for Facility Operations prioritized sustainability across construction, materials, and technology, including ClickShare wireless collaboration solutions deployed throughout the building.
Rather than introducing different technologies for different rooms, the Flemish Government standardized its meeting room setup across more than 1,000 meeting spaces, using ClickShare solutions throughout. This decision reduced hardware fragmentation, simplified management, and avoided unnecessary duplication of devices, all of which contributed to more efficient use of resources over time.
Sustainability here was not positioned as a separate initiative. It was the result of choosing technology that could scale, remain relevant longer, and support flexible ways of working without repeated replacements or complex reconfigurations.
Integration is the real test
What often slows sustainability progress is not lack of intent, but lack of integration. When sustainable solutions are difficult to align with existing systems, hard to compare objectively, or challenging to measure, they struggle to survive multi‑stakeholder decision‑making.
IT leaders need sustainability to be built into solutions by design, not added as an afterthought. When environmental impact aligns with usability, manageability, and longevity, sustainable choices become easier to defend and easier to repeat.
Small choices, cumulative impact
The key takeaway is simple but powerful. Sustainability does not hinge on one transformational project. It is driven by consistent, repeatable decisions made every day. Extending device lifecycles, standardizing collaboration technology, and selecting solutions designed for durability all create measurable impact when applied at scale.
The remaining step is organizational alignment, ensuring that everyday IT decisions are supported as strategic levers for environmental progress. In the end, sustainability is not achieved through statements alone. It is built through the choices organizations make, one technology decision at a time.
Why the meeting room has become the true test of hybrid work
The way organizations support collaboration today still varies widely from space to space. Small huddle rooms, project spaces, and large boardrooms often come with different setups, different workflows, and different expectations.
For employees, that inconsistency creates friction. For IT teams, it creates complexity. And for organizations, it quietly undermines the promise of hybrid work.
What’s becoming clear is that the meeting room is no longer just a physical space. It is where hybrid work either flows or fails.
Meetings remain the backbone of collaboration
Despite new ways of working, meetings remain central to how teams align, make decisions, and move projects forward. People come to the office not to sit behind individual screens, but to connect, co‑create, and build momentum together.
In a hybrid reality, those moments increasingly involve a mix of in‑room and remote participants.
That places a new kind of pressure on meeting spaces. They must support different group sizes, different collaboration styles, and different platforms, without forcing users to think about the technology behind it.
When meetings start late because cables are missing, audio behaves differently per room, or content sharing feels unpredictable, attention shifts away from the conversation before it even begins. Hybrid collaboration only works when technology disappears into the background.
Consistency drives adoption
One of the most underestimated factors in hybrid collaboration is consistency in user experience. Employees move between meeting spaces throughout the day. Every change in setup introduces uncertainty and hesitation. Over time, that leads to avoidance, workarounds, or reliance on personal devices instead of shared spaces.
Organizations that succeed approach meeting rooms as a connected ecosystem rather than a collection of individual rooms. A consistent experience across huddle spaces and boardrooms lowers the learning curve, increases confidence, and drives adoption naturally. People know what to expect, how to start, and how to share, regardless of where they are.
For IT teams, that same consistency reduces support overhead and simplifies management. Standardized setups, predictable workflows, and centralized visibility replace the constant firefighting that fragmented environments create.
Technology should support people, not distract them
As collaboration technology evolves, expectations rise. Users no longer accept tools that require explanation or preparation. They expect meetings to start smoothly, participants to be seen and heard clearly, and content to be shared without effort.
This is where the balance between usability, security, and intelligence becomes critical. Ease of use drives adoption, but it cannot come at the expense of governance or trust. At the same time, intelligence must enhance the experience without adding complexity. Features like automatic audio calibration, speaker framing, or real‑time transcription only deliver value when they feel intuitive and reliable. The goal is not to showcase technology, but to create conditions where collaboration feels natural, inclusive, and uninterrupted.
From technology choice to workplace experience
Ultimately, the quality of hybrid collaboration is determined less by individual features than by the experience. Employees judge meeting technology by how it makes them feel: confident or hesitant, included or sidelined, focused or distracted.
From huddle room to boardroom, the most effective collaboration environments share the same principles. They are simple to use, consistent across spaces, secure by design, and flexible enough to evolve. They respect people’s time and attention, allowing teams to focus on ideas rather than interfaces.
As organizations continue to refine their hybrid strategies, meeting room solutions remain a revealing indicator. When collaboration flows effortlessly, hybrid work has a real chance to succeed. When it doesn’t, even the best policies and tools elsewhere struggle to compensate.
In the end, the future of hybrid work is not decided in strategy documents. It is decided, meeting by meeting, in the rooms where people come together to work.
Why smart meeting rooms are becoming strategic IT assets
For years, innovation in workplace collaboration followed a familiar pattern. Better cameras promised clearer video. Smarter microphones claimed to eliminate background noise. Software updates added more features, more buttons, and more possibilities. Progress was tangible, measurable, and largely device‑centric.
As organizations move deeper into hybrid work, that model is starting to show its limits. The most meaningful change in collaboration today is not driven by hardware specifications or platform features. It is driven by a shift in mindset: about meeting rooms, about data, and about the evolving role of IT in shaping how people actually work together.
Meeting rooms are undergoing a quiet but profound transformation. They are no longer passive spaces that simply host meetings. Increasingly, they are becoming active, data‑driven IT endpoints that sit at the crossroads of productivity, workplace culture, sustainability, and employee experience.
From furniture to IT infrastructure
Historically, meeting rooms lived in an awkward grey zone. They were physical spaces, often treated as facilities or AV concerns, yet they relied heavily on IT systems to function. When something broke, IT was expected to fix it, usually reactively and with limited visibility into what actually went wrong.
That approach no longer scales. Today’s collaboration environments are modular, software‑defined, and deeply integrated into enterprise networks. Cameras, microphones, displays, and room systems behave much more like endpoints than furniture. They require monitoring, updates, security policies, and lifecycle management – just like laptops or mobile devices.
For IT leaders, this represents a fundamental shift. Managing collaboration spaces is no longer about responding to tickets. It is about designing reliable, measurable infrastructure that people can trust. When meeting rooms work consistently, they disappear into the background. When they do not, they erode confidence, waste time, and undermine collaboration at its core.
AI moves from promise to practice
Artificial intelligence has been part of collaboration conversations for years, often framed as an exciting add‑on. In practice, many organizations are now discovering that AI only delivers value when it solves real, operational problems.
In meeting environments, that means using AI to reduce friction rather than impress. Intelligent framing, noise reduction, automated room diagnostics, and meeting insights are most effective when they quietly improve the experience without asking users to change their behavior. AI becomes meaningful when it helps meetings start on time, keeps participants engaged, and reduces the cognitive load on employees who are already juggling multiple tools and priorities.
This also places new responsibility on IT. AI‑enabled collaboration systems need governance, transparency, and clear success criteria. The question is no longer whether AI is present, but whether it measurably improves how people collaborate.
Measuring what really matters
One of the most challenging shifts for IT organizations is redefining what success looks like. Traditional metrics such as uptime or ticket volume only tell part of the story. A meeting room can be technically available and still fail its users.
Leading organizations are starting to look beyond device health and toward outcomes. Are rooms used as intended? Do employees trust technology enough to use it spontaneously? Are collaboration spaces supporting focus, inclusivity, and effective decision‑making?
Answering these questions requires data, but also interpretation. Room analytics, usage patterns, and performance insights only become valuable when IT teams connect them to broader business goals such as productivity, employee satisfaction, and sustainability.
A broader role for IT leaders
Taken together, these trends point to a broader evolution in the role of IT. Collaboration is no longer a support function that sits on the sidelines of organizational strategy. It actively shapes how people connect, how culture is experienced, and how work gets done.
For IT leaders, this means developing new skills, new partnerships with the workplace and HR teams, and new ways of thinking about technology’s impact on human interaction. The future of collaboration will not be defined by the next device release, but by how intentionally organizations design and manage the spaces where collaboration truly happens.
How collaboration technology defines the next phase of hybrid work
Hybrid work has settled into everyday reality, but the technology that supports it is still catching up. As collaboration becomes more distributed, organizations are reassessing how meeting spaces, digital tools, and infrastructure actually support the way people work. What’s emerging is a shift from fragmented solutions toward more intentional, integrated collaboration environments that are designed to perform, scale, and adapt over time.
Three trends in collaboration technology stand out. Meeting rooms are becoming fully integrated IT assets. Artificial intelligence is shifting from promise to practical necessity. And sustainability is returning as a strategic priority, grounded in data and long-term efficiency. Together, these forces are redefining how collaboration technology is designed, deployed, and evaluated.
Meeting rooms become managed digital environments
Meeting spaces have evolved from static rooms into active, connected environments. In hybrid organizations, they are where collaboration, culture, and decision-making come together. As a result, meeting rooms are increasingly treated as managed endpoints rather than standalone spaces.
Modern conferencing solutions enable detailed visibility into how rooms are used and maintained. Metrics such as utilization, connection quality, and equipment uptime allow IT teams to move from reactive support to proactive optimization.
This shift improves reliability while helping organizations understand the real return on their workplace investments. The convergence of AV and IT accelerates this transformation. With AV devices operating over IP networks, audio and video infrastructure can be managed using the same tools, processes, and governance models as other enterprise systems. This consolidation reduces complexity and supports the scalability required in hybrid environments.
Security becomes a baseline expectation
As meeting rooms become part of the broader IT landscape, security moves firmly to the foreground. Data privacy, compliance, and secure access are no longer optional considerations. They are foundational requirements.
Zero-trust principles, encryption, and strong identity controls are increasingly embedded into collaboration environments by design. This approach reflects a broader shift: security is no longer a differentiator that adds value on top. It is the baseline that enables trust, reliability, and confidence in hybrid collaboration.
The growing use of AI-driven features in conferencing platforms only reinforces this need. As intelligence is embedded deeper into collaboration tools, robust safeguards must be in place to ensure that innovation does not introduce new risks.
AI shifts from novelty to necessity
Artificial intelligence has reached a turning point. The focus is no longer on whether AI is present, but on whether it delivers meaningful outcomes. In 2026, AI earns its place by solving real problems and improving everyday work experiences.
In meeting environments, AI capabilities such as automatic camera framing, intelligent audio calibration, and real-time transcription and translation address long-standing challenges. They improve inclusivity, reduce friction, and create more natural meeting experiences for both in-room and remote participants.
Importantly, value is no longer assessed through feature counts or technical outputs. Adoption, employee feedback, and perceived usefulness are becoming the indicators that matter most. AI succeeds when it supports people quietly and effectively, without adding complexity or demanding attention.
Sustainability returns with a practical focus
Sustainability is also re-emerging as a strategic concern, but with a more grounded framing. Rather than being driven solely by compliance or ambition, it is increasingly linked to cost efficiency, risk reduction, and long-term resilience.
Advances in analytics make it possible to track device lifecycles, assess environmental impact across the value chain, and identify opportunities to optimize technology deployments. This data-driven approach transforms sustainability from a reporting exercise into a practical decision-making tool.
For collaboration technology, this means prioritizing solutions designed for longevity. Systems that can adapt to evolving standards help extend hardware lifecycles, reduce electronic waste, and maximize value over time. These choices support both environmental goals and operational efficiency.
A more integrated approach to collaboration technology
Meeting rooms are no longer separate from IT strategy. AI is no longer experimental. Sustainability is no longer abstract.
Organizations that succeed in the next phase of hybrid work will be those that align these dimensions into a coherent approach. By focusing on measurable outcomes, secure-by-design solutions, and long-term value, collaboration technology becomes a strategic enabler rather than a collection of tools.
The future of work will not be defined by technology alone, but by how seamlessly it supports people, adapts to change, and stands the test of time.
Microsoft, OpenAI change contract terms — again
Microsoft and OpenAI on Monday again revised their agreement, softening its exclusivity and revenue-sharing conditions in the process. These changes underscore how critical it is for enterprises to work with as many AI vendors as practical, given how quickly models leapfrog one another on performance and how often alliances shift.
Both OpenAI and Microsoft issued their own statements, which were essentially identical, about the contractual changes.
Microsoft’s statement said that the company still derives some benefits from its alliance with OpenAI. “Microsoft remains OpenAI’s primary cloud partner and OpenAI products will ship first on Azure, unless Microsoft cannot or chooses not to support the necessary capabilities,” it said.
But, the company noted, the earlier exclusivity is now gone. “OpenAI can now serve all its products to customers across any cloud provider. Microsoft will continue to have a license to OpenAI IP for models and products through 2032. Microsoft’s license will now be non-exclusive.”
In addition, the company’s role as a major investor in OpenAI is driving a different revenue relationship, it said: “Microsoft will no longer pay a revenue share to OpenAI. Revenue share payments from OpenAI to Microsoft continue through 2030, independent of OpenAI’s technology progress, at the same percentage but subject to a total cap.”
AGI clause removed
One key component of earlier versions of the Microsoft-OpenAI deal was a clause that would change the relationship if OpenAI ever achieved artificial general intelligence (AGI), a term that eludes a concrete definition but generally refers to AI that equals or exceeds human capabilities.
Although it was not directly referenced in the statement from either vendor, multiple media reports said that AGI references have now been removed from the revised agreement.
Market changes
Analysts and consultants generally agreed that this altered agreement will reinforce, and should extend, the current enterprise IT trend of hedging bets by striking arrangements with a variety of AI providers, including the major hyperscalers. Beyond future-proofing enterprises’ AI efforts, some of those agreements are for practical issues, such as the need to work with global AI firms specializing in different languages that the enterprise needs.
Thomas Randall, research director at Info-Tech Research Group, explained that the market has changed since the original agreement was struck. “The era of exclusive frontier model access as a strategic differentiator is coming to an end,” he pointed out. “The Microsoft-OpenAI agreement in 2023 was meaningful because access to GPT-4 was scarce. But that scarcity no longer applies because the competitive differences between frontier models have narrowed substantially since then.”
The amended Microsoft-OpenAI agreement “is more of a formal acknowledgment that model access is no longer a strict advantage,” he said. “The immediate practical change for IT from this agreement, especially for shops that were reluctant to deepen an Azure commitment, is that they now have a clearer path to accessing OpenAI models through other hyperscalers.”
Randall argued that this translates into a rebalancing of where enterprise IT should focus its AI efforts, especially in terms of differentiation.
“If model access is commoditizing at the infrastructure layer, then strategic questions must focus on quality and governance of proprietary data, the depth and sophistication of agentic workflow integration, and organizational capability to deploy AI at scale,” he said.
“Consequently, the vendors who control the orchestration and application layers [such as] the agent frameworks, the data connectors, the governance tooling, and workflow integration, will be best positioned to capture enterprise value. The competitive ground has shifted from attaining model access to how vendors deeply and reliably embed AI into enterprise workflows.”
Alastair Woolcock, VP analyst at Gartner, agreed that this contractual change from two key market leaders is an inevitable reaction to a vastly changing AI marketplace. “The first great AI shadow investment is being rewritten for a multipolar AI Cold War,” he said.
“Frontier AI has become too capital-intensive and infrastructure-constrained for one-cloud exclusivity to survive. For Microsoft, this is a controlled concession. The investor story moves from ‘Microsoft owns the OpenAI channel’ to ‘Microsoft controls the enterprise AI operating layer’ through Copilot, Azure, security, workflow integration, data gravity and AI operations,” Woolcock said.
“For OpenAI, this is a liberation event,” he noted. “Its biggest constraint is no longer demand. It is compute, capital and distribution. OpenAI cannot become the global AI platform if one partner controls the pipes.”
He added that, for enterprise IT executives, “this means more choice, but not necessarily less dependency. Lock-in moves up the stack, from cloud infrastructure to AI ecosystem alignment, agent orchestration, workflow control and data governance. This is consequential, not because the partnership is weakening, but because it shows the next phase of AI competition will be fought through flexible alliances, compute access, silicon, power and enterprise distribution, not traditional ownership.”
Planning assumptions altered
Tony Olvet, group VP with IDC, said this contractual change “is unlikely to affect most near‑term Microsoft or OpenAI deployments, but it does change planning assumptions. CIOs and CTOs should expect more choice in where OpenAI capabilities appear, greater commercial leverage and increased need to govern AI across multiple channels. This has strategic implications: enterprises should continue to rely on strong partners while designing AI architectures, contracts, and governance frameworks that can shift across clouds, models and vendors as the market evolves.”
Most consultants stressed that exclusivity is vanishing for almost all of the key AI players, which may not be a bad thing for IT.
A key background factor at play here is the timeline. It can take an enterprise an extended period to fully deploy capabilities across its global environment.
Noah Kenney, principal consultant for Digital 520, noted, “standing up OpenAI workloads on AWS, Google Cloud, or Oracle will take time. Reference architectures, identity and data integrations, compliance reviews, and procurement cycles do not move at the speed of a press release. Enterprises that have spent years optimizing on Azure will not migrate overnight, nor should they.”
But, he said, “for the substantial population of companies that are not Microsoft shops, that have actively avoided Azure, or that operate in multi-cloud by policy, this is the first time OpenAI has been a realistic first-class option on their preferred infrastructure. That is a meaningful shift in the addressable market, even if the operational reality lags by quarters.”
Given the constantly changing relationships within AI, not to mention multiple AI firms preparing to go public, reality is likely to look very different at the end of an enterprise AI rollout than it did at the beginning, so enterprises need options.
“Until today, choosing OpenAI effectively meant choosing Azure, and choosing Azure gave you privileged access to OpenAI. That tight coupling shaped procurement decisions, reference architectures, and multi-year cloud commitments at thousands of enterprises. It is no longer true,” Kenney said.
“What changes for [enterprise IT executives] is the structural assumption underneath their AI roadmap,” he noted. “OpenAI can now ship its products across any cloud and Microsoft now has a non-exclusive license to OpenAI’s IP through 2032, which means Microsoft is also free to lean harder into its own models, into Anthropic, and into whatever else the market produces. Both sides just bought themselves optionality and that optionality flows downstream to the customer.”
He added, “the companies that benefit are the ones who treat model providers, cloud providers, and inference infrastructure as three separate procurement decisions with three separate exit ramps.”
Vendor lock-in ‘relocating’
Sanchit Vir Gogia, chief analyst at Greyhound Research, said that the kneejerk reaction to the contract changes is that enterprise IT will now have more options and more flexibility. But Gogia said that dependence is not being reduced so much as it is being moved.
“Lock-in is not going away. It is relocating. At the model level, substitution is becoming easier. Not trivial, but certainly more feasible than before. At the orchestration level, however, substitution remains difficult,” Gogia said. “Once your workflows, controls, identity layers, and governance structures are built around a particular system, changing that system is not a small task. That is where dependency sits. Quietly. Persistently. And often unnoticed until it begins to constrain you.”
There are still differences between providers, and those differences matter in certain contexts, he said. “But the gap is narrowing in ways that are meaningful for enterprise use. Increasingly, the question is not which model is best in isolation. The question is how that model is used, governed, and embedded into the organization. That is a very different question,” Gogia said.
And, he pointed out, it leads you to a very different place, “because once you ask that question, you are no longer looking at models. You are looking at orchestration. You are looking at identity. You are looking at governance, compliance, integration, workflow. You are looking at the layer that sits above the model and quietly determines how everything actually works. That layer is where the real dependency forms.”
Microsoft understands this, he noted. “You can see it in how it is positioning itself. It is no longer behaving like a gateway to a single provider. It is building something broader: A layer where multiple models can coexist, where those models can be managed, governed, and embedded into enterprise systems in a consistent way.
That is not accidental,” Gogia said. “That is a deliberate move towards control at a higher level. And importantly, it is also a hedge. A very clear one. Because it reduces reliance on any single partner, including OpenAI.”
OpenAI plans its own ‘iPhone killer’
It looks very much as if Apple’s former design chief Jony Ive will compete against the company his friend Steve Jobs created, as he works with OpenAI on a device that appears to be some form of iPhone competitor.
In a post on X, TF International Securities analyst Ming-Chi Kuo claims OpenAI is working with Qualcomm and MediaTek to build SoCs for smartphones. These chips will be built to deliver faster AI performance. Kuo claims the plan is to achieve mass production by 2028 with the hardware specifications for these devices set to be finalized by early 2027.
You could argue that, as well as working with Apple’s former design lead, OpenAI is also taking a leaf out of the company’s processor playbook. Just as Apple works with TSMC on chip design, OpenAI intends to work with Qualcomm and MediaTek, which may help it deliver competitive processors far more quickly than building them from scratch.
Apple faces a new competitor
What’s interesting about this is the release schedule, as it suggests mass production of the new device may commence as soon as 2028, one year after the iPhone’s twentieth anniversary. We know that Apple will not sit on its iPhone laurels in the coming years, and already expect the company to introduce a new folding device as well as a potential new high-end device.
We also think that Apple will be shipping devices with very fast, very power-efficient 1.4nm processors by the time the purported OpenAI product appears. It’s open to question if OpenAI’s new partnership will be able to develop AI processors for smartphones that compare to those Apple will have available by then, given its advantages in the space today, but neither company can afford to be complacent in this arena.
Why is it so important?
Because of the nature of AI.
It’s all about AI agents
While today’s leading AI services tend to rely on cloud-based models, tomorrow’s services will be far more independent and far more likely to run securely on edge devices.
AI agents, for example, may call on server-based intelligence to accomplish some tasks, but there will be an increasing tendency to maintain data privacy within the transaction. Agents will call on servers to provide only the computational assistance they require, handling other tasks natively on device. This will be agentic edge intelligence, which is what I anticipate Apple will discuss at WWDC 2026 in a couple of months.
Kuo says AI agents will replace apps on devices, and that’s going to require both on-device edge intelligence and cloud AI integration. To deliver that, OpenAI will need to emulate Apple’s famed ‘whole widget’ approach by controlling both hardware and software.
The analyst predicts that part of the go-to-market plan for the new Apple competitor involves subscriptions and development of a third-party AI agent ecosystem. It’s a model in which you don’t purchase apps but do invest in utility. This will also likely be part of Apple’s message to developers in the coming years — though rather than leaning into OpenAI, it will draw on some of the on-device AI models it is building with help from Google Gemini.
What comes next?
Apple is no stranger to existential struggle. The story of its resurrection after the return of Steve Jobs is legendary, but the company has faced its share of other threats since then. Who else recalls the great smartphone design wars, the entire netbook category, or Windows Mobile, for example?
Apple’s new challenge is just the latest chapter in its book, and the most likely outcome I can imagine sees OpenAI grabbing most of its market share from Android, rather than iOS — particularly as Android device manufacturers search for excuses to offer devices at higher price points in the face of stiff component costs and Apple’s aggressive move into the mid-range market. It is also fair to think that component costs may yet delay elements of OpenAI’s plan, particularly as Apple seems to be paying top dollar to secure supply.
All in all, we are entering interesting times as Apple’s newly promoted CEO, John Ternus, takes command, and his insight and experience in hardware development and design seem all the more well-timed in the face of the news from OpenAI.
You can follow me on social media! Join me on BlueSky, LinkedIn, and Mastodon.
Your AI strategy is all wrong
Every CEO and executive enthusiastically slashing headcount in anticipation of an AI-driven productivity boom should read a new meta-analysis from the UK’s Royal Docks School of Business and Law. It suggests those decision-makers might be optimizing for the wrong thing.
While mass layoffs have an immediate measurable payoff, the study says the best use of AI is to boost human cognition and decision-making, not replace it. The research looks at how people can leverage AI to improve how knowledge is created and shared.
The study found that AI excels at tackling complex tasks quickly, while people excel at tasks involving judgment, meaning, and responsibility. AI can also improve an organization’s “collective intelligence” by pulling together facts and ideas from various subjects into one clear picture.
For example:
- A hospital where AI surfaces relevant research from specialties the treating physician doesn’t follow, but the doctor still makes the call
- A law firm where AI cross-references precedent across jurisdictions in minutes, while partners decide the best argument for the client
- A product team where AI synthesizes feedback from support tickets, sales calls, and app reviews — but humans decide what to build
Such collaborative use of AI, the study found, is far more effective than either AI or people working independently.
Despite huge gains in the technology’s capabilities, AI still needs people for interpretation and for making ethical choices, according to the study. And it warns that over-reliance on AI erodes irreplaceable human judgment.
Instead of assuming AI can replace human expertise, organizations should focus on building “knowledge ecosystems” (the ways groups create, store, and share information) where AI supports human learning, innovation, and decision-making, according to the study.
The goal shouldn’t be to ban AI or replace employees outright, but to use AI to cultivate a powerful knowledge ecosystem that captures knowledge, facilitates its movement, and creates new understanding. (Think Slack channels, wikis, tribal knowledge, onboarding docs, expert networks, and AI layers on top.)
While replacing employees with AI captures cost savings, it surrenders the collective-intelligence opportunity.
On the cultivation of human talent
Initially, many organizations responded to the emergence of powerful AI chatbots and tools with a simplistic “we need more of this.” Now, it’s time to confront the “skills atrophy paradox.”
Some companies are trying to replace junior employees with AI used by senior employees. But if that’s happening at scale, where do tomorrow’s senior employees come from?
According to a new paper titled “AI Assistance Reduces Persistence and Hurts Independent Performance,” by researchers from major US and UK universities, reliance on AI chatbots erodes human capability.
The study tested the effects of AI assistants such as ChatGPT on tasks like math and reading comprehension with over 1,200 participants. It found that while AI improved performance, scores dropped sharply once it was removed, and users were more likely to give up on hard problems than those who didn’t use AI at all.
These aren’t long-term effects. They appear after only about 10 to 15 minutes of using AI — about the same time it takes to drink a cup of coffee.
The researchers don’t recommend banning AI, but argue it should be used to help people grow and learn.
The takeaway from both studies: organizations benefit greatly by keeping people in authorship of decisions and avoiding demoting them to rubber-stamping AI’s output.
Another error is to focus too much on the narrow idea of “productivity” or output. Companies that keep people in charge will be more legally defensible, more trusted by customers, and better at catching the high-cost mistakes AI makes confidently, according to the Royal Docks study.
How to build a strong ‘knowledge ecosystem’
The building blocks of a human-AI knowledge ecosystem are, according to the Royal Docks study:
- Workflow redesign: map tasks by who (or what) is best suited — then design handoffs, not replacements
- New roles: hire or cultivate AI specialists
- Training shift: from domain skills alone to metacognition — knowing when and how to combine individual personal knowledge with AI input
- Documentation matters more, not less: focus on high-quality, thorough documentation of everything, knowing that AI can handle the complexity of it all
- Ethical guardrails baked in: use people to keep AI aligned with human- and business-centered goals
The uncomfortable truth in the Royal Docks findings isn’t that AI is less powerful than we thought. It’s that its power is wasted on the strategy most organizations have chosen for it.
Replacement is a one-time cost saving. But using AI as part of a real knowledge ecosystem, where AI makes humans smarter and humans keep AI honest, delivers compounding advantages.
To focus on the cost savings of cut salaries is to fall for the quantitative fallacy, which is to favor the measurable and believe the unmeasurable isn’t important or doesn’t exist.
This will all play out over time. The companies replacing too many employees in the hopes AI will do their jobs will find themselves at a competitive disadvantage against those who invest in building those powerful knowledge ecosystems and a culture of partnership between people and AI.
AI disclosure: I don’t use AI for writing. The words you see here are mine. I do use a variety of AI tools via Kagi Assistant (disclosure: my son works at Kagi), backed up by Kagi Search, Google Search, and phone calls, to research and fact-check. I use a word processing application called Lex, which has AI tools; after writing, I use Lex’s grammar-checking tools to find typos and errors and suggest word changes. Here’s why I disclose my AI use and encourage you to do the same.
Meta’s compute grab continues with agreement to deploy tens of millions of AWS Graviton cores
Meta is continuing its compute grab as the agentic AI race accelerates to a sprint.
Today, the company announced a partnership with Amazon Web Services (AWS) that will bring “tens of millions” of AWS Graviton5 cores (one chip contains 192 cores) into its compute portfolio, with the option to expand as its AI capabilities grow. This will make the Llama builder one of the largest Graviton customers in the world.
The move builds on Meta’s expansive partnerships with nearly every chip and compute provider in the business. It’s working with Nvidia, Arm, and AMD, as well as building its own internal training and inference accelerator chip.
“It feels very difficult to keep track of what Meta is doing, with all of these chip deals and announcements around in-house development,” said Matt Kimball, VP and principal analyst at Moor Insights & Strategy. This makes for “exciting times that tell us just how incredibly valuable silicon is right now.”
Controlling the system, not just scale
Graphics processing units (GPUs) are essential for large language model (LLM) training, but agentic AI requires a whole new workload capability. CPUs like Graviton5 are rising to this challenge, supporting intensive workloads like real-time reasoning, multi-step tasks, frontier model training, code generation, and deep research.
AWS says Graviton5 has the ability to handle “billions of interactions” and to coordinate complex, multi-stage agentic tasks. It is built on the AWS Nitro System to support high performance, availability, and security.
“This is really about control of the AI system, not just scale,” said Kimball. As AI evolves toward persistent, agentic workloads, the role of the CPU becomes “quite meaningful”; it serves as the control plane, handling orchestration, managing memory, scheduling, and other intensive tasks across accelerators.
“This is especially true in agentic environments, where the workloads will be less linear and more stateful,” he pointed out. So, ensuring a supply of these resources just makes sense.
Reflecting Meta’s diversified approach to hardware
The agreement builds on Meta’s long-standing partnership with AWS, but also reflects what the company calls its “diversified approach” to infrastructure. “No single chip architecture can efficiently serve every workload,” the company emphasized.
Proving the point, Meta recently announced four new generations of its MTIA training and inference accelerator chip and signed a massive deal with AMD to tap into 6GW worth of CPUs and AI accelerators. It also entered into a multi-year partnership with Nvidia to access millions of Blackwell and Rubin GPUs and to integrate Nvidia Spectrum-X Ethernet switches into its platform, and was also one of Arm’s first major CPU customers.
In the wake of all this, Nabeel Sherif, a principal advisory director at Info-Tech Research Group, posed the burning question: “What are they going to do with all this capacity?”
Primarily it will support Meta’s internal experimentation and innovation, he said, but it also lays the groundwork and provides the capacity for Meta to offer its own agentic AI services, for instance, its Llama AI model as an API, to the market.
“What those [services] will look like and what platforms and tools they’ll use, as well as what guardrails they’ll provide to users, is still unclear, but it’s going to be interesting to see it develop,” said Sherif.
The expanded capacity will enable a diversity of use cases and experimentation across various architectures and platforms, he said. Meta will have many options, and access to supply in an environment currently characterized not only by a wide variety of new CPU approaches, but by significant supply chain constraints. The AWS deal should be viewed as a complement to its partnerships and investments in other platforms like Arm, Nvidia, and AMD.
Kimball agreed that the move is “most definitely additive,” not a replacement or substitution. Meta isn’t moving off GPUs or accelerators, it’s building around them. “This is about assembling a heterogeneous system, not picking a single winner,” he said. “In fact, I think for most, heterogeneity is critical to long term success.”
Nvidia still dominates training and a lot of inference, while AMD is becoming “more and more relevant at scale,” Kimball noted. Arm, meanwhile, whether through CPU, custom silicon or other efforts, gives Meta architectural control, and Graviton5 fits into that mix as a “cost- and efficiency-optimized general-purpose compute layer.”
A question of strategy
The more interesting question is around strategy: Does this signal Meta is becoming a compute provider? Kimball doesn’t think so, noting that it’s likely the company isn’t looking to directly compete with hyperscalers as a general-purpose cloud. “This is more about vertical integration of their own AI stack,” he said.
The move gives them the ability to support internal workloads more efficiently, as well as providing the infrastructure foundation to expose more of that capability externally, whether through APIs, partnerships, or other means, he said.
And there’s a cost dynamic here, too, Kimball noted. As inference becomes persistent, especially with agentic systems, economics shift away from peak floating-point operations per second (FLOPS) (a measure of compute performance) and toward sustained efficiency and total cost of ownership (TCO).
CPUs like Graviton5 are well positioned for the parts of that workload that don’t require accelerators, but still need to run continuously. “At Meta’s scale, even small efficiency gains per workload compound quickly,” Kimball pointed out.
For developers and enterprise IT, the signal is pretty clear, he noted: The AI stack is getting more heterogeneous, not less so. Enterprises are going to see tighter coupling between CPUs, GPUs, and specialized accelerators, with workloads increasingly split across them based on behavior (prefill versus decode, stateless versus stateful, burst versus persistent).
“The implication is that infrastructure decisions have to become more workload-aware,” said Kimball. “It’s less about ‘which cloud?’ and more about ‘where does this specific part of the application run most efficiently?’”
This article originally appeared on NetworkWorld.
Germany’s sovereign AI hope changes hands
As Europe seeks to assert its technological independence from US vendors, Aleph Alpha, once seen as Germany’s sovereign AI hope, is the target of a transatlantic takeover.
Aleph Alpha is set to merge with Canada’s Cohere in a deal that will bring together Cohere’s global AI clout and Aleph Alpha’s background in research. The two companies hope to develop an AI powerhouse, with backing from their Canadian and German ecosystems.
“Organizations globally are demanding uncompromising control over their AI stack. This transatlantic partnership unlocks the massive scale, robust infrastructure, and world-class R&D talent required to meet that demand,” said Cohere CEO Aidan Gomez in a news release that artfully presents the deal as a merger of equals but that, according to a footnote, requires only the approval of the German company’s shareholders, a sure sign of a one-sided takeover.
The combined companies will look to offer customized AI in highly regulated sectors including finance, defense, and healthcare. By pooling their talents and offerings, they hope to offer AI solutions tailored to local laws, cultural contexts, and institutional requirements.
The move comes at a time when businesses across the world are looking at non-US options as a reaction to the Trump administration’s policy on tariffs and the uncertainty caused by the war with Iran.
There have been several initiatives within Europe to counteract US dominance. The EU’s Eurostack plan looked to make sure that major projects had a European option, and Aleph Alpha was one of the companies highlighted within the scheme. The EU also launched Open Euro LLM, an attempt to counter the US and China’s lead in AI.
This article first appeared on CIO.
Agent Mode is now available in Microsoft Word, Excel, and PowerPoint
Microsoft has beefed up Copilot’s capabilities in Word, Excel and PowerPoint, claiming its Agent Mode will help speed up workers’ output.
The new features, announced last year, mean that Copilot can work more efficiently with Office applications, for example, understanding the richness of a pivot table in Excel or the use of animations in PowerPoint.
In tests with customers and researchers, Microsoft has learned a few things about how to improve the way Copilot is deployed, and laid out some of them in a post to a company blog. Now, it said, Copilot takes action rather than just suggesting steps, while ensuring that users maintain control. Other improvements include the ability to work with different models and better integration of Work IQ to deliver higher quality output.
Further developments are in the pipeline, Microsoft said, including improved editing for complex workflows such as finance spreadsheets and legal documents, more visibility on changes, and a more seamless integration of Copilot into the software, so that the experience for users is the same for Word, Excel, and PowerPoint.
The updates are available now and are the default experience for customers with Microsoft 365 Copilot and Microsoft 365 Premium subscriptions, the company said.
See PCWorld’s first impressions of the new Copilot agents in Word and PowerPoint.
CISA last in line for access to Anthropic Mythos
The US Cybersecurity and Infrastructure Security Agency (CISA) does not yet have access to Anthropic’s bug-hunting AI model, Claude Mythos, even though other government agencies do, Axios reported earlier this week.
As if that weren’t a big enough slap in the face for the national cyber-defense agency, the list of those who do have access to Mythos includes several unauthorized users, according to Bloomberg News. Members of a private Discord channel specializing in seeking information about unreleased AI models have gained access to Mythos, according to one unnamed member of the group, Bloomberg reported. “The group has been using Mythos regularly since then, though not for cybersecurity purposes,” the person told Bloomberg, supplying screenshots to back up their claim.
As a result of its fear that the powerful model could be used to identify and exploit flaws in software and online services, Anthropic has limited access to a preview of Mythos to an exclusive group of government agencies, industry groups, and software providers through an initiative it calls Project Glasswing.
Even if CISA is shut out, some government agencies do have access to Mythos, including the US Department of Commerce’s Center for AI Standards and Innovation and the US National Security Agency, which Axios said are already assessing Mythos.
Former OpenAI research scientist launches new AI model for Tencent
Tencent has updated its Hunyuan AI model, its first major release since it recruited Yao Shunyu, a leading AI scientist from OpenAI. Tencent’s Hy3 model, currently available in preview, offers improvements in areas from complex reasoning to coding.
The Chinese technology conglomerate is playing catch-up with other Chinese AI developers including ByteDance, Alibaba and DeepSeek. China is betting big on open-source AI to offer alternatives to major US players. Back in 2023, Tencent claimed its then-new Hunyuan LLM was a more powerful and intelligent option than the versions of ChatGPT and Llama available at the time.
Tencent has backed AI start-ups including Moonshot AI and StepFun, hoping that they will boost its cloud computing division. The company has also restructured its research team to improve the quality of training data. It aims to double its investment in AI to more than $5 billion this year.
Not to be outdone, DeepSeek announced its V4 Flash and V4 Pro Series, the newest versions of its LLM. DeepSeek became an overnight hit in January 2025 with the launch of its R1 AI model and has gone on to develop other models since. It said the V4 upgrades will offer users advances in reasoning and agentic tasks, while a new feature called Hybrid Attention Architecture improves the ability of the AI platform to remember queries across long conversations.
This article first appeared on InfoWorld.
Adobe bets on AI agents to stay at the center of marketing workflows
Adobe is rolling out autonomous agents to orchestrate work across its applications, a move that will reinforce its position at the core of content and marketing workflows as AI disrupts the software landscape, analysts say.
“We’re living at a true inflection point, a moment where creativity and marketing are being reshaped by AI, unlocking incredible new opportunities and raising the bar for speed, personalization, as well as scale,” said Shantanu Narayen, Adobe CEO, during his keynote presentation at Adobe Summit on Monday.
Liz Miller, vice president and principal analyst at Constellation Research, described the various agent-focused product updates at Adobe’s Summit conference this week as “an evolution of vision that brings the right AI capability into the right application.”
“The goal … is to continue evolving where and how AI is incorporated into the work of engagement,” she said.
Adobe’s recent launches indicate a “clear shift” to prioritize agentic AI investment, said Maria Bell, senior analyst at CCS Insight.
“Rather than focusing on standalone AI features, the emphasis is on building systems that can coordinate and execute work across workflows and functions,” said Bell. “Capabilities such as CX Enterprise, workflow agents and Firefly integrations point to an ambition to move from AI that supports decisions to systems that can act on them.”
Adobe kicked off its agent-related announcements ahead of the customer experience conference, unveiling its Firefly AI Assistant last week.
Using natural language prompts, the agent can autonomously carry out multi-step workflows across Adobe Creative Cloud apps such as Photoshop, Premiere, Express, and others. Aimed at both novice and expert users, Firefly AI Assistant can also guide users through tasks spanning image, video, audio, and design. A public beta is “coming soon,” according to Adobe.
The launch of Firefly AI Assistant signals Adobe’s intent to “lead in agentic AI for creative professionals, [by] directly addressing workflow friction, usability, and the demand for multi-model flexibility,” said Keith Kirkpatrick, research director at Futurum, in a blog post last week.
“Adobe’s Firefly Assistant is a signal that agentic AI is moving from experimental pilots to production-grade tools capable of handling real creative complexity,” he said, with enterprise buyers “no longer content with simple copilots or one-off automation.”
“The ability to automate multi-step tasks and orchestrate between image and video modalities is quickly becoming table stakes for creative AI platforms,” he said.
Adobe’s main announcement during the Summit event this week was CX Enterprise Coworker, an AI agent that coordinates multi-step workflows and tasks across Adobe’s customer experience applications.
“Adobe is moving to occupy the role of an automated operating system for marketing,” said Jim Lundy, CEO of Aragon Research, in a blog post on Wednesday. “While previous AI tools acted as individual assistants for specific tasks, the CX Enterprise Coworker acts as a supervisor that connects disparate silos of information.”
Lundy said that CX Enterprise Coworker represents a “significant evolution” in the way enterprises will manage the customer lifecycle, replacing manual hand-offs with automated orchestration across customer engagement apps.
“By anchoring this tool in its robust experience platform, Adobe is making a strong case for being the primary intelligence layer in the modern marketing stack,” said Lundy.
There were also updates to GenStudio with Brand Intelligence, a data layer that connects information across Adobe tools to provide context for agents to act upon, alongside a new agent capability in Adobe’s Workfront work management app.
While agents present an opportunity for Adobe, it also faces potential disruption from both design software vendors that build AI into their products and general-purpose AI assistants.
This risk has raised concerns in financial markets, and Adobe announced a $25 billion share buyback scheme this week — a move that can be seen as an attempt to shore up investor confidence amid a period of significant change, both across the industry as AI reshapes the software landscape, and within Adobe itself, with CEO Shantanu Narayen set to step down after 18 years in charge.
Ahead of the event, popular online design platform Canva unveiled its own agentic capabilities, with users able to access various Canva tools via a conversational interface that can complete multi-step processes, such as creating “a multi-channel campaign launch.”
“Canva is focused on accessibility, using AI to simplify and automate design for a broader audience,” said CCS Insight’s Bell. “This lowers barriers to entry and puts pressure on Adobe in lighter-weight and non-professional use cases.”
And, last week, Anthropic announced Claude Design, which lets users create design prototypes and marketing assets such as “landing pages, social media assets, and campaign visuals.”
In addition, Anthropic and Canva announced an integration that brings Claude Design outputs into Canva’s app.
Miller from Constellation Research said that tools such as Claude Design are “powerful additions” to the design ecosystem, enabling non-designers to quickly prototype and test ideas using text prompts. At the same time, these should be seen as more of a starting point. Professional-level design and editing tools are still required to create enterprise-ready prototypes.
“A creator may start in OpenAI, use that output in Claude to further build out the concept, but end in Firefly to ensure enterprise safety and brand controls in a more refined, finely tuned toolset,” Miller said.
Adobe is also working with a range of AI providers to make its software available where customers prefer. This includes the ability to interact with Adobe’s Firefly creative assistant directly from Claude, for instance.
“Our strategy is to meet where the users are,” said Varun Parmar, general manager of Adobe GenStudio and Firefly for Enterprise. A user might invoke an Adobe creative agent via Claude in the morning, he said, “and then later in the afternoon decide to do deeper precision and control work that requires a professional sort of interface, which is where Adobe’s product is world class.”
“We believe that these things will coexist; depending on the use case, you’ll go in and out [of different apps],” said Parmar.
As AI model providers expand into workplace software tools, it makes most sense for Adobe to focus on its core strength of serving creative and marketing professionals, Miller said.
“The risk to Adobe is more of an ongoing challenge to stay focused on customer demand and need, and not veer off course in a never-ending horse race with models proving what can be done, as opposed to commercially safe models that deliver what must be done,” she said.
And despite some media negativity around Adobe’s ability to transition into a new era of agentic AI, Miller said, Adobe’s strategy of embedding data, assets, and workflows into the tools marketers and creatives use remains sound.
Bell sees agentic systems as a “longer-term structural shift, while the more immediate pressure comes from accessibility-focused platforms like Canva.”
Yet Adobe’s access to data and expertise serving large customers provide it with an edge. “Adobe remains strong in professional and enterprise environments, where depth, control and integration still matter,” Bell said.