Can Apple’s new CEO turn things around?
When Apple rolled out hardware chief John Ternus as the CEO to replace Tim Cook, the reaction was kind but muted. That’s because Ternus has said nothing yet to indicate he has a specific plan to position Apple for the future. (To be fair, he’s said next to nothing about anything — no easily found social media posts, no big speeches about anything beyond hardware, no major interviews showcasing his vision.)
I have long been a fan of Apple, but the “i” people have a lot of problems. Their failure to make Apple an AI leader — not the leader, just a leader — has dominated headlines for two years now. But the truth is that Apple has spent years without the passion and drive that marked the second coming of Steve Jobs as CEO.
The clearest example involves the iPhone and the Apple Watch. I used to routinely upgrade my devices once a year, or at least every two years. I am sitting here now with an iPhone 13 Pro Max and an Apple Watch Series 7, the same devices I’ve had for almost five years.
Each year, I’d get excited about Apple’s new devices and look for just one clean reason to upgrade. I didn’t find it. The promise of AI was intriguing, but Apple didn’t deliver. The iPhone camera kept getting better, but my photos look just fine already.
Apple did deliver one feature that would have made me upgrade: allowing an iPhone to record and quickly transcribe calls. But the company then rolled it out to all devices, meaning it offered little to push new iPhone sales. (Of course, Apple never bothered to tell users the transcription feature has a roughly 30-minute limit. For a guy who often does hour-long interviews, that’s a problem; I’m forced to stop a recording at the 25-minute mark and reactivate it. *Sigh*)
As for AI, I would love for the iPhone to actually be intelligent about all of the data swimming within its case. For example, as a reporter, I have apps for a large number of news organizations. On one election night, I got 16 alerts that a Senate race had been called. I don’t need 16; I just need one. If Apple Intelligence were really intelligent, it would understand that. It should also understand that when I’m driving to an appointment, I don’t need a calendar alert 15 minutes before my meeting when the phone should know — based on my destination and routing in Apple Maps — that I’m on the way.
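The deduplication the author wishes for is straightforward to express in principle: collapse alerts that report the same event within a time window. A minimal sketch follows; the alert schema and event keys are hypothetical illustrations, not any Apple API.

```python
from datetime import datetime, timedelta

def dedupe_alerts(alerts, window_minutes=30):
    """Collapse semantically identical alerts from different apps.

    Each alert is a dict with a normalized 'event' key (e.g.
    'senate-race-called'), a 'source' app, and a 'time'. Only the
    first alert per event within the window is delivered.
    """
    first_seen = {}   # event key -> time of first delivery
    delivered = []
    for alert in sorted(alerts, key=lambda a: a["time"]):
        last = first_seen.get(alert["event"])
        if last is None or alert["time"] - last > timedelta(minutes=window_minutes):
            first_seen[alert["event"]] = alert["time"]
            delivered.append(alert)
    return delivered
```

Sixteen apps announcing the same race call within seconds would collapse to a single notification; the hard part in practice is the normalization step that recognizes two differently worded alerts as the same event.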
All those little missteps add up. One of the critical talents a CEO at a company as large as Apple needs is either vision or a passion that can pass for vision.
This brings us to the inevitable comparison between Jobs and Cook. Jobs was passionate, persuasive, and inspirational, and he truly had a plan for future products based on his gut feeling of what users would want or need. But Jobs was also undisciplined, harsh, and abrupt, and someone who wasn’t always worried about the truth.
He was, therefore, a great business leader, but he had help. (Keep reading for more.)
Cook was nearly the opposite of Jobs. He was precise, methodical, and detail-oriented, and for the most part he treated people well and with respect. But his speeches were lackluster, and I have yet to meet anyone who would call him electric or inspirational. He was privately passionate about his work, but that passion rarely surfaced in public.
Here’s my point about Jobs’ success: He did so well because he had Cook as a senior deputy. Having the ultimate technocrat in place allowed Jobs to focus on the bigger-picture future.
There’s been chatter on LinkedIn suggesting that Cook was a weaker CEO than Jobs. There’s a valid argument for that, but many do not give credit to Cook for helping Jobs perform as well as he did.
Earlier in Cook’s tenure, he did have one executive with a healthy chunk of the Jobs passion: Jony Ive. But Ive got tired of the technocratic nature of his boss and left in 2019 to work elsewhere. Turns out the best leadership duo is a visionary CEO with a technocrat deputy. It doesn’t seem to work the other way around.
Customers and employees also want to see passion and vision from a CEO directly. And that brings us to the upcoming change.

Can Apple under CEO Ternus get its AI act together? That is the big mystery.
Apple certainly has the money and the clout to make AI work from either side of the buy/build path. But does it have a vision of what customers want — or more precisely, what they need? Jobs had the knack for correctly guessing what customers would want once they got it, even if they didn’t yet know they needed it.
Justin Greis, CEO of consulting firm Acceligence and former head of the North American cybersecurity practice at McKinsey, sees Ternus as an executive “who has also [along with Cook] been heads down on execution mode his entire career and he’s an insider. He knows how to keep (Apple) in its lane.”
Greis goes with the crowd in pinning most of his Apple hopes on AI. “If you look at the big AI companies, Apple is not on the map. Everybody is outpacing them. Siri simply doesn’t have the power that is needed to be valuable for their end-users.”
The AI magic is really not about simply using AI on-device. It’s about the value that can be delivered by a sophisticated integration of literally every piece of information coursing through a phone, your watch, a Mac or an iPad.
A few years ago, people saw Apple as a gatekeeper controlling access to Siri. Back then, the assumption was that access to Siri would be worth tons of money. No longer. Plenty of people now use their iPhone to access generative AI offerings from a variety of Apple’s AI rivals.
Apple can still win the AI mindshare battle, but only if it can truly deliver intelligent integration of everything that interacts with the phone. That package could be offered solely through Siri, allowing Apple to again control the almighty gateway. Sure, an iPhone user can access Claude or Perplexity, but if only Apple’s knighted partner can analyze your calendar, your contacts, your call history, your travel plans, your bank account, your photos, and more, companies will again be willing to pay for access.
That’s where Apple gold lies. The question is whether Ternus can mine it.
Enterprises need to think beyond GPUs for agentic AI, analysts say
The ongoing shift from generative AI (genAI) to agentic AI provides an opportunity for enterprises to move to more nimble and less expensive forms of computing, according to analysts.
Early AI models were largely built on expensive GPUs from Nvidia and AMD that offered raw processing power. But newer agentic AI tools, rooted in business process and workflow management, can run on more efficient, cost-effective hardware.
As a result, IT decision-makers who still think they require GPUs for anything AI-related need to reconsider their hardware options in terms of both cost and capabilities, analysts said.
“A better way of thinking about this is the cost of AI compute and now agentic AI platform services or systems,” said Leonard Lee, principal analyst at Next Curve. “‘AI computing’ or ‘accelerated computing’ has clearly transcended the GPU as an inference accelerator.”
The new hardware options include CPUs and specialized AI chips, also known as ASICs in semiconductor parlance. Although these chips have been around for years, they are now showing real utility as agentic AI goes mainstream.
For one, the CPU — the main chip in any computer — is seeing something of a revival. “The CPU is reinserting itself as the indispensable foundation of the AI era. The CPU now serves as the orchestration layer and critical control plane for the entire AI stack,” Lee said.
CPUs are both power efficient and well-suited for AI on the edge, although specialized low-power chips are more capable depending on the task, said Jim McGregor, principal analyst at Tirias Research. “It will still be more efficient to use an ASIC instead of a CPU, and in most cases it will be less expensive over the life of a platform,” he said.
The growth of inference provides an opening for optimized AI accelerators, which can handle those jobs more efficiently than GPUs, said Mike Feibus, principal analyst at FeibusTech. “…The relative importance of [the] CPU is rising.”
Nvidia — sensing that it needed a low-power chip beyond its power-hungry GPUs — has already introduced an ASIC for inferencing in its hardware stack. And it recently licensed AI chip technology from Groq for $20 billion.
Because agentic AI involves a different computing model than genAI training on GPUs, enterprises need to consider the hardware options and pricing models available through cloud providers. “It’s more about model management than about model building — and the CPU is critical in providing workflow management,” said Jack Gold, principal analyst at J. Gold Associates.
Pricing variations continue to be an issue. Straight CPU compute is not billed the same as heavy GPU use, making it difficult to nail down costs, Gold said. “GPUs in training use more electricity generically due to near 100% utilization in a training workload, whereas in general-purpose compute, servers and CPUs run more like 40% to 60% utilization,” he said. “But it’s highly variable depending on what the agent is doing.”
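Gold’s utilization figures lend themselves to a quick back-of-envelope comparison. The sketch below uses assumed numbers throughout — the 700 W GPU, 350 W CPU, and $0.12/kWh electricity price are illustrative placeholders, not figures from the article:

```python
def energy_cost_per_hour(tdp_watts, utilization, price_per_kwh=0.12):
    """Hourly electricity cost for a chip at a given average utilization.

    Assumes power draw scales linearly with utilization, which is a
    simplification; real chips draw meaningful power at idle too.
    """
    return tdp_watts * utilization / 1000 * price_per_kwh

# Training GPU near 100% utilization vs. a server CPU at ~50%,
# per the utilization ranges quoted in the article.
gpu = energy_cost_per_hour(700, 1.00)
cpu = energy_cost_per_hour(350, 0.50)
print(f"GPU: ${gpu:.3f}/h, CPU: ${cpu:.3f}/h, ratio: {gpu / cpu:.1f}x")
```

Even this crude model shows why per-hour costs are hard to pin down: the answer swings with both the chip’s rated power and how busy the agent actually keeps it.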
Gold predicts that 80% to 85% of AI workloads will move to inference in the next two to three years, especially as tools become more agentic. (That shift moves work away from GPUs, which are better suited to training, and toward CPUs, which are more efficient for simpler AI tasks.)
“CPUs take on a major significance in making everything work. It’s why all the hyperscalers are now loading up on CPUs, not just GPUs,” Gold said.
Major cloud providers Google, Amazon, and Microsoft, for instance, have their own CPUs and low-power ASICs for inferencing.
What looks at the moment like a resurgence in CPU demand is actually pointing to a larger issue: the growing complexity of AI infrastructure, said Gaurav Shah, vice president of business development and strategic partnerships at NeuReality.
The overhead around data movement, orchestration and networking is exploding, Shah said. “That’s what’s driving demand — not CPUs doing more AI, but systems struggling to keep up with AI,” Shah said.
Beyond enterprises, genAI companies, AI-native companies and neoclouds all will need to rethink their architecture. “The winners will be the architectures that deliver the most inference per watt, not the most cores per server,” Shah said.
Fleet hopes to be the MDM provider for the AI Era
Fleet, the independent, open-source, multi-platform MDM service, recently announced a new partner program for VARs and MSPs serving enterprise customers and recruited MobileIron co-founder Suresh Batchu to serve on the company’s board. With those moves in mind, I caught up with company CEO Mike McNeil to find out more about Fleet’s plans.
Given the company’s roots in open source, working with partners is a natural way to support a variety of enterprise needs, with resellers and MSPs playing an active role in customizing the core solution for those requirements.
Fleet and the Mac

Fleet is just as happy managing Macs as it is managing Linux systems, and it integrates well with existing tools — as long as they support open standards and APIs. This gives it a unique insight into Apple device adoption in the enterprise.
McNeil confirmed that both Apple and Linux systems are seeing rapid increases in deployment. “The new MacBook Neo is now cheaper than comparable PCs, so Apple adoption is increasing, but so are other OS options like desktop Linux,” he said. (Desktop Linux reached 3.16% market share in March, says StatCounter, while OS X hit 9.52% and Windows fell to 60.8%.)
That’s not to say migration to any platform is always easy. “I spoke to an IT director yesterday from a casino company whose team had bought a couple of Neos and tried enrolling them in Microsoft Intune, but gave up,” McNeil told me. This was because they hit an unrelated bug with their traditional MDM, didn’t have great diagnostics to work with, and the IT director then “assumed” that it must be because the Neo wouldn’t work for enterprise use. As it turns out, the issue was with the MDM, McNeil said.
“At Fleet, we’ve enrolled MacBook Neos ourselves with no problems, and seen customers do the same,” he said. “Enterprises are usually mixed OS environments, and [MDM] solutions limited to a single ecosystem, like Jamf that’s Apple only, are pretty restrictive.”
Why partnerships matter

“Enterprises are very particular, and they often operate in vastly different ways,” said McNeil. “For example, there are many, many ways to automatically make sure employees can get on to a Wi-Fi network or a VPN on their first day at work.”
Fleet, he said, works to balance needs between different parts of a company – infosec and IT, for example. “We optimize for baby steps, small iterations,” McNeil said, pointing out that new features are documented and explained as they are introduced.
“The first generation of device management was built for control and compliance,” said Batchu. “The next generation needs to be built for speed, automation, and how modern teams actually operate. Fleet is taking a fundamentally different approach with infrastructure as code and AI-driven workflows, and I’m excited to help shape that direction.
“In 2026, every company needs to do more with less. Budgets are shifting towards AI and innovation, forcing leaders to extract more value from existing infrastructure. Some IT estates have been around for 20, 30, 35 years, and organizational structures, technical debt, and even entire jobs exist just to keep the lights on. But when you suddenly go from patching monthly to patching in hours, something has got to give.”
He argued that adopting a partnership model should help companies move through digital transformation with Fleet while maintaining tight budgets. Partners can help train employees and better understand the context of each company’s needs.
It’s also about making sure things are usable. Citing the “Concur” effect, which he describes as a product designed to satisfy high-level stakeholder requirements rather than the needs of those actually using the software, McNeil says he has a “personal vendetta” against complexity in software design.
What will enterprises need?

Fleet’s aim is to make every platform easy to manage using powerful tools optimized for the unique needs of customers. “By 2030, IT will need reliable infrastructure that works with the productivity and security tools they’re already using throughout their business.” IT and security teams won’t want separate platforms for each OS or function, and they’ll want to use chat to get projects started.
AI is a constant. At least one current Fleet customer now has tens of thousands of computers running AI agents and recently gave each of its employees a headless “claw” — a powerful AI agent based on OpenClaw, the free, open-source AI agent software that is accessed via remote computers.
Fleet helps IT recognize the use of shadow AI tools across the business, as well as tracking other app installs, licenses, and use. “So whether you want to find out who’s using the Claude app, who’s using shadow AI tools they shouldn’t be using, or just how many extra, expensive Bloomberg terminal licenses you’re paying for that aren’t actually getting used, you can do that in Fleet, right from your MDM.”
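Outside any particular MDM, the kind of check McNeil describes amounts to scanning a software inventory for AI-looking apps that aren’t on an approved list. A minimal sketch follows; the CSV column names and keyword list are assumptions for illustration, not Fleet’s actual data model or query language.

```python
import csv
import io

APPROVED_AI = {"Claude"}  # sanctioned tools; other AI-ish apps count as "shadow"
AI_KEYWORDS = ("gpt", "claude", "copilot", "gemini", "llm")

def find_shadow_ai(inventory_csv):
    """Yield (host, app) pairs for AI-looking apps not on the approved list."""
    for row in csv.DictReader(io.StringIO(inventory_csv)):
        app = row["app_name"]
        if any(k in app.lower() for k in AI_KEYWORDS) and app not in APPROVED_AI:
            yield row["hostname"], app

sample = "hostname,app_name\nmac-01,Claude\nmac-02,LocalGPT\nmac-03,Numbers\n"
shadow = list(find_shadow_ai(sample))  # flags LocalGPT on mac-02 only
```

A real deployment would work against the MDM’s live inventory rather than a CSV export, and keyword matching would need regular curation, but the shape of the check is the same.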
As McNeil sees it, the emerging AI services environment favors Linux for AI, with other platforms the province of human workers. “I don’t think we’ll see a world where most human users are running desktop Linux in five years, but I wouldn’t be surprised if Microsoft and Apple are neck and neck in the enterprise by then,” he said.
Xiaomi releases MIT‑licensed MiMo models for long‑running AI agents
Xiaomi has released and open-sourced MiMo-V2.5 and MiMo-V2.5-Pro under the MIT License, giving developers another potentially lower-cost option for building AI agents that can run longer tasks such as coding and workflow automation.
Both models support a 1-million-token context window, the company said. MiMo-V2.5-Pro is designed for complex agent and coding tasks, while MiMo-V2.5 is a native omnimodal model that supports text, images, video, and audio.
The release comes as agentic AI workloads are putting new pressure on enterprise AI budgets. These systems can burn through large numbers of tokens as they plan, call tools, write code, and recover from errors, making cost and deployment control increasingly important for developers.
By using the MIT License, Xiaomi said it is allowing commercial deployment, continued training, and fine-tuning without additional authorization. Tulika Sheel, senior vice president at Kadence International, said the MIT License can make it attractive. “It allows enterprises to freely modify, deploy, and commercialize the model without restrictions, which is rare in today’s AI landscape,” Sheel said.
“On ClawEval, V2.5-Pro lands at 64% Pass^3 using only ~70K tokens per trajectory — roughly 40–60% fewer tokens than Claude Opus 4.6, Gemini 3.1 Pro, and GPT-5.4 at comparable capability levels,” Xiaomi said in a blog post.
The models use a sparse mixture-of-experts (MoE) design to manage compute costs. The 310-billion-parameter MiMo-V2.5 activates only 15 billion parameters per request, while the 1.02-trillion-parameter Pro version activates 42 billion. Xiaomi said the Pro model’s hybrid attention design can reduce KV-cache storage by nearly seven times during long-context tasks.
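The mechanism behind those numbers is gated routing: each token is sent to only a few experts, so most of the model’s parameters sit idle on any given request. The toy router below illustrates the idea in miniature; it is not Xiaomi’s actual gating code, and the expert counts are illustrative.

```python
import math
import random

def top_k_route(gate_logits, k=2):
    """Pick the k experts with the highest gate scores for one token.

    Returns (expert_index, weight) pairs, with weights softmax-normalized
    over the selected experts only, as in common top-k MoE designs.
    """
    idx = sorted(range(len(gate_logits)), key=lambda i: -gate_logits[i])[:k]
    exps = [math.exp(gate_logits[i]) for i in idx]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(idx, exps)]

# Toy scale: 16 experts with 2 active per token means only 1/8 of the
# expert parameters run per request -- analogous in spirit to MiMo-V2.5
# activating 15B of its 310B parameters.
random.seed(0)
logits = [random.gauss(0, 1) for _ in range(16)]
active = top_k_route(logits, k=2)
```

The compute saving comes directly from that ratio: the per-token FLOPs scale with the active parameters, while the full parameter count mainly costs memory.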
Xiaomi cited several long-horizon tests, including a SysY compiler in Rust that MiMo-V2.5-Pro completed in 4.3 hours across 672 tool calls, passing 233 of 233 hidden tests. It also said the model produced an 8,192-line desktop video editor over 1,868 tool calls across 11.5 hours of autonomous work.
Will enterprises adopt MiMo?

Whether Xiaomi’s MiMo-V2.5 models can gain adoption among enterprise developers over closed frontier models for agentic coding and automation workloads will depend on how enterprises evaluate performance, cost, and risk.
“When assessing Xiaomi’s MiMo-V2.5 and its variants, enterprise developers should look at the total cost of ownership,” said Lian Jye Su, chief analyst at Omdia. “The TCO consists of token efficiency, cost per successful task, and the absence of licensing costs associated with proprietary models. Closed frontier models may still win on generic tasks, and the hardest edge cases, but open-weight models excel in agentic work that is high-volume in nature.”
Pareekh Jain, CEO of Pareekh Consulting, said enterprises should assess MiMo-V2.5 less as a replacement for Claude or GPT and more as a cost-efficient agent model for high-token workloads.
“The key benchmark signal is not just accuracy, but tokens per successful task,” Jain said. “Frontier models often reach higher success rates on complex coding benchmarks, but do so with massive reasoning overhead. MiMo-V2.5 is designed for token efficiency, meaning it achieves comparable results with significantly fewer input and output tokens.”
Jain said that could make MiMo-like models useful as “economic workhorses” for repetitive coding, QA, migration, documentation, testing, and automation workloads, while closed frontier models remain the quality ceiling for the hardest tasks.
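Jain’s “tokens per successful task” framing can be made concrete with a small expected-cost calculation. All prices and pass rates below are made-up placeholders, not benchmark results for any real model:

```python
def cost_per_successful_task(tokens_per_attempt, price_per_m_tokens, success_rate):
    """Expected spend to obtain one successful task completion.

    On average 1 / success_rate attempts are needed, each consuming
    tokens_per_attempt tokens at the given per-million-token price.
    """
    attempts = 1 / success_rate
    return attempts * tokens_per_attempt * price_per_m_tokens / 1_000_000

# Hypothetical comparison: a pricier frontier model with a higher pass
# rate vs. a cheaper, token-efficient model with a lower pass rate.
frontier = cost_per_successful_task(150_000, 15.0, 0.70)   # ~$3.21/success
efficient = cost_per_successful_task(70_000, 3.0, 0.60)    # ~$0.35/success
```

Under these invented numbers the token-efficient model wins on cost per success by roughly 9x despite failing more often, which is the economics Jain is pointing at; different assumptions can flip the outcome for hard, low-pass-rate tasks.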
Ashish Banerjee, senior principal analyst at Gartner, said models like MiMo could materially shift enterprise AI economics for long-horizon agents.
“When tasks stretch into millions of tokens, metered proprietary APIs stop looking like a convenience and start looking like a tax on iteration,” Banerjee said. “By contrast, MiMo’s MIT license, open weights, 1M-token context window, and relatively low pricing make private-cloud or self-hosted deployment strategically credible.”
However, Banerjee said this does not mean enterprises will abandon proprietary APIs.
“Enterprises will continue to use proprietary APIs for frontier accuracy and low-operations consumption, while shifting scaled, repeatable agent workflows toward open models where cost predictability, data control, and customization matter more,” Banerjee said. “In short, long-horizon, high-volume agentic AI will evolve into a hybrid market, with open models like MiMo breaking pure API dependence.”
Su added that adoption may face challenges because Chinese-origin models can trigger concerns in regulated Western organizations.
Why simplicity is the silent driver of hybrid workplace success
Hybrid work has reshaped how and where people collaborate. Offices are no longer the default destination for every interaction, yet they remain essential for moments that require focus, alignment, and human connection. In this reality, meeting rooms play a pivotal role, not because of the technology they contain, but because of how effortlessly people can use it.
The most successful hybrid workplaces share a simple truth: the best technology is the one that remains invisible in the room. When collaboration tools fade into the background, people can focus on ideas rather than interfaces. When they do not, friction quickly erodes adoption, productivity, and trust.
One experience across every space
Employees move between different meeting spaces throughout the day, from huddle rooms and project spaces to larger conference rooms. When each room comes with a different setup, interface, or connection flow, every meeting starts with uncertainty. Time is lost, confidence drops, and technology becomes a problem rather than an enabler.
Complexity is one of the main barriers to adoption in hybrid environments. Organizations struggle with underutilized rooms, inconsistent setups, and management overhead that grows with every additional configuration. The result is predictable: people avoid certain rooms altogether or fall back on ad-hoc workarounds.
A consistent, intuitive experience across all meeting spaces changes that dynamic. When users know exactly what to expect, regardless of room size or location, adoption increases naturally. Meetings start on time, collaboration flows more smoothly, and IT teams receive fewer support requests.
Technology as an enabler
The Flemish Government offers a powerful example of this principle in practice. In its Brussels hub, technology was deliberately positioned as an enabler for collaboration, not as a focal point. The goal was not to impress users with features, but to make connections effortless across more than a thousand meeting spaces.
By standardizing the collaboration experience with ClickShare solutions, employees could walk into any room and start collaborating and videoconferencing without instructions or training. This approach supported a people-driven hybrid workplace where flexibility and ease of use helped employees feel confident and connected, wherever they worked.
This emphasis on simplicity did more than improve user satisfaction. It removed friction at scale, allowing thousands of employees to collaborate in the same way, every time. Technology became something people relied on, rather than something they had to think about.
Higher adoption, lower IT burden
From an IT perspective, intuitive user experiences are not just a usability win. They are an operational advantage. Every extra step, cable, or configuration option increases the likelihood of errors and support tickets. Every exception to the standard creates additional management overhead.
Flexible, easy-to-deploy meeting room solutions reduce that burden. Organizations increasingly favor modular approaches that can be adapted to different spaces without introducing new user experiences or management models. This consistency simplifies deployment, monitoring, and updates, while giving IT teams greater control and predictability.
The outcome is a virtuous cycle. When users trust technology, they use it more. When they use it correctly, IT spends less time troubleshooting and more time optimizing. Adoption and manageability reinforce each other.
Designing for people, not just rooms
Ultimately, simplicity in the hybrid workplace is about designing for human behavior. People want to collaborate, share ideas, and move quickly between spaces. They do not want to learn new systems or adapt their workflows to the room they happen to be in.
Meeting room technology should respect that reality. By offering one intuitive experience across every space, organizations remove barriers to collaboration and create environments people want to use. As the Flemish Government experience demonstrates, when technology like ClickShare quietly supports collaboration instead of demanding attention, it becomes a true catalyst for hybrid work success.
In the end, the most advanced meeting room is not the one with the most features. It is the one people barely notice at all.
Why security matters in the meeting room
For years, meeting room technology was evaluated primarily on ease of use and audiovisual quality. If people could walk in, plug in, and start presenting, the job was considered done. That mindset no longer holds. Today’s meeting rooms are deeply connected to digital environments, and security has become a business-critical concern rather than a technical afterthought.
According to IDC, 50.8% of organizations now rank security as the most important factor when selecting collaboration and videoconferencing technology, ahead of price or quality considerations. That shift reflects a broader reality: what happens in meeting rooms has direct implications for data protection, regulatory compliance, operational resilience, and corporate trust.
The meeting room as an expanded attack surface
Hybrid work has fundamentally changed the role of the meeting room. It is no longer a closed, isolated space. Instead, it has become a convergence point where corporate networks, cloud services, collaboration platforms, and personal devices meet. Content is shared wirelessly, participants join remotely, and devices are connected dynamically, often by non-IT users.
This evolution significantly expands the attack surface. Collaboration environments are increasingly targeted because they combine sensitive data with high connectivity and frequent user interaction. Risks range from unauthorized access and data interception during wireless sharing to malware propagation via unmanaged or personal devices. In hybrid scenarios, these risks are amplified by blurred boundaries between secure corporate environments and external networks.
As a result, meeting room security can no longer be treated separately from the broader enterprise security strategy. Any vulnerability introduced in a meeting space can ripple across the organization.
Regulation moves meeting rooms into the spotlight
At the same time, regulatory pressure is intensifying. Across Europe, new and evolving frameworks such as NIS2, the RED Delegated Act, and the Cyber Resilience Act are raising the bar for connected devices. These regulations introduce mandatory requirements that span the entire product lifecycle, from secure design and development to patching, vulnerability management, and end-of-support practices.
Meeting room solutions clearly fall within scope. They process sensitive corporate information, connect to enterprise networks, and often rely on wireless and cloud-based technologies. Non-compliance is no longer a theoretical risk. It can lead to financial penalties, operational disruption, and reputational damage.
International standards like ISO/IEC 27001 further reinforce this shift by defining best practices for information security management, risk assessment, and operational trust. Together, these frameworks signal a clear message: security in collaboration environments is now a governance issue as much as a technical one.
Security without usability is a false promise
However, strong security alone is not enough. When security controls disrupt the user experience, employees look for shortcuts. Shadow IT, unsecured workarounds, and bypassed controls often emerge not from negligence, but from friction.
In meeting rooms, this risk is particularly acute. Meetings are time-sensitive, social, and often involve external participants. If connecting securely feels complex or restrictive, users will prioritize speed and convenience over policy compliance. Paradoxically, that increases risk rather than reducing it.
This is why security must be built in by design, not bolted on. Secure-by-design solutions embed encryption, authentication, access control, and update mechanisms into the core architecture, while keeping the user experience intuitive. Such approaches reduce reliance on manual processes and minimize the temptation for unsafe shortcuts, enabling secure collaboration without compromising productivity.
From IT checkbox to business enabler
The most forward-looking organizations now treat meeting room security as a strategic enabler. Secure, compliant collaboration environments build trust with customers and partners, support regulatory readiness, and reduce operational risk over time. IDC notes that 70% of CIOs cite risk mitigation as a top priority, reflecting the growing recognition that resilience is a competitive differentiator, not just a defensive measure.
Importantly, this shift also changes how decisions are made. Meeting room technology can no longer be selected in isolation by facilities or procurement teams. Excluding IT expertise from these decisions can compromise not only meeting rooms, but the entire digital workplace. Security, usability, and integration must be evaluated together, through a cross-functional lens.
Security as the foundation of modern collaboration
As meeting rooms continue to evolve, one principle becomes clear: security is no longer something you add later. It is the foundation that enables safe, scalable, and human-centric collaboration. Organizations that align regulatory requirements, recognized security standards, and enterprise-grade protection with friction-free user experiences are better positioned to support hybrid work, protect sensitive information, and earn long-term trust.
In today’s workplace, a secure meeting room is not just a safer space. It is a smarter one.
Can everyday IT decisions turn sustainability from intent into impact?
Sustainability strategies often start with ambition. Net‑zero targets, ESG frameworks, and environmental KPIs signal intent at leadership level. Yet whether those ambitions translate into real progress depends largely on what happens much closer to day‑to‑day operations. In practice, sustainability is shaped by the everyday technology decisions IT teams make.
According to a Barco ClickShare survey, 96% of IT leaders believe their department’s actions make a meaningful contribution to global sustainability, and 98% agree that IT should lead the way in achieving their organization’s sustainability goals. Sustainability has clearly moved from the margins to the core of the IT agenda. The challenge is no longer awareness, but execution.
Sustainability lives in routine decisions
Much of the sustainability debate still focuses on large‑scale initiatives such as data centers, AI workloads, or cloud optimization. While those areas matter, the research highlights a less visible but equally powerful driver: routine IT purchasing and deployment decisions.
Hardware selection, device lifecycles, software updates, and meeting room technology all influence energy consumption, electronic waste, and long‑term resource efficiency. These decisions are repeated across organizations every year, often across hundreds or thousands of devices. Individually, they may seem small. Collectively, they define the environmental footprint of the digital workplace.
As a result, sustainability is now ranked alongside security and cost as a key consideration in IT purchasing decisions. This shift reflects a growing understanding that frequent replacements, fragmented solutions, and short product lifecycles quietly undermine sustainability goals, even when corporate commitments look strong on paper.
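To make the scale effect concrete, a back-of-the-envelope sketch helps: at steady state, the number of devices an organization replaces each year is simply fleet size divided by lifecycle length. All figures below (fleet size, embodied carbon per device) are hypothetical assumptions for illustration, not measured data:

```python
# Illustrative fleet-level comparison: how device lifecycle length changes
# annual turnover and embodied carbon. All numbers are hypothetical.

def annual_device_turnover(fleet_size: int, lifecycle_years: float) -> float:
    """Average number of devices replaced per year at steady state."""
    return fleet_size / lifecycle_years

FLEET = 2_000          # hypothetical number of meeting-room devices
EMBODIED_KG_CO2 = 300  # hypothetical embodied carbon per device, kg CO2e

for years in (3, 5, 7):
    replaced = annual_device_turnover(FLEET, years)
    footprint_t = replaced * EMBODIED_KG_CO2 / 1000
    print(f"{years}-year lifecycle: {replaced:.0f} devices/year, "
          f"~{footprint_t:.0f} t CO2e embodied per year")
```

Even with rough inputs, the pattern holds: stretching the lifecycle from three to seven years cuts annual turnover, and its embodied footprint, by more than half, which is why procurement cadence matters as much as any single green initiative.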
Motivation is high, but IT cannot act alone
The research also reveals how personal a matter sustainability has become for IT leaders. Eighty‑two percent say they would not accept a role at an organization without a strong sustainability track record, underlining how closely environmental values are tied to professional identity in IT.
Yet motivation alone is not enough. Sustainable choices often require cross‑functional alignment, credible information, and long‑term thinking in procurement processes that are still driven by short‑term constraints. Without organizational support, sustainability risks becoming an added burden rather than a shared objective.
A real‑world example of sustainability by design
The Flemish Government illustrates how sustainability can be embedded into everyday technology decisions when it is treated as a collective responsibility. During the renovation of its Brussels hub, the Agency for Facility Operations prioritized sustainability across construction, materials, and technology, including ClickShare wireless collaboration solutions deployed throughout the building.
Rather than introducing different technologies for different rooms, the Flemish Government standardized its meeting room setup across more than 1,000 meeting spaces, using ClickShare solutions throughout. This decision reduced hardware fragmentation, simplified management, and avoided unnecessary duplication of devices, all of which contributed to more efficient use of resources over time.
Sustainability here was not positioned as a separate initiative. It was the result of choosing technology that could scale, remain relevant longer, and support flexible ways of working without repeated replacements or complex reconfigurations.
Integration is the real test
What often slows sustainability progress is not lack of intent, but lack of integration. When sustainable solutions are difficult to align with existing systems, hard to compare objectively, or challenging to measure, they struggle to survive multi‑stakeholder decision‑making.
IT leaders need sustainability to be built into solutions by design, not added as an afterthought. When environmental impact aligns with usability, manageability, and longevity, sustainable choices become easier to defend and easier to repeat.
Small choices, cumulative impact
The key takeaway is simple but powerful. Sustainability does not hinge on one transformational project. It is driven by consistent, repeatable decisions made every day. Extending device lifecycles, standardizing collaboration technology, and selecting solutions designed for durability all create measurable impact when applied at scale.
The remaining step is organizational alignment, ensuring that everyday IT decisions are supported as strategic levers for environmental progress. In the end, sustainability is not achieved through statements alone. It is built through the choices organizations make, one technology decision at a time.
Why the meeting room has become the true test of hybrid work
The way organizations support collaboration today still varies widely from space to space. Small huddle rooms, project spaces, and large boardrooms often come with different setups, different workflows, and different expectations.
For employees, that inconsistency creates friction. For IT teams, it creates complexity. And for organizations, it quietly undermines the promise of hybrid work.
What’s becoming clear is that the meeting room is no longer just a physical space. It is where hybrid work either flows or fails.
Meetings remain the backbone of collaboration
Despite new ways of working, meetings remain central to how teams align, make decisions, and move projects forward. People come to the office not to sit behind individual screens, but to connect, co‑create, and build momentum together.
In a hybrid reality, those moments increasingly involve a mix of in‑room and remote participants.
That places a new kind of pressure on meeting spaces. They must support different group sizes, different collaboration styles, and different platforms, without forcing users to think about the technology behind it.
When meetings start late because cables are missing, audio behaves differently in every room, or content sharing feels unpredictable, attention shifts away from the conversation before it even begins. Hybrid collaboration only works when technology disappears into the background.
Consistency drives adoption
One of the most underestimated factors in hybrid collaboration is consistency in user experience. Employees move between meeting spaces throughout the day. Every change in setup introduces uncertainty and hesitation. Over time, that leads to avoidance, workarounds, or reliance on personal devices instead of shared spaces.
Organizations that succeed approach meeting rooms as a connected ecosystem rather than a collection of individual rooms. A consistent experience across huddle spaces and boardrooms lowers the learning curve, increases confidence, and drives adoption naturally. People know what to expect, how to start, and how to share, regardless of where they are.
For IT teams, that same consistency reduces support overhead and simplifies management. Standardized setups, predictable workflows, and centralized visibility replace the constant firefighting that fragmented environments create.
Technology should support people, not distract them
As collaboration technology evolves, expectations rise. Users no longer accept tools that require explanation or preparation. They expect meetings to start smoothly, participants to be seen and heard clearly, and content to be shared without effort.
This is where the balance between usability, security, and intelligence becomes critical. Ease of use drives adoption, but it cannot come at the expense of governance or trust. At the same time, intelligence must enhance the experience without adding complexity. Features like automatic audio calibration, speaker framing, or real‑time transcription only deliver value when they feel intuitive and reliable. The goal is not to showcase technology, but to create conditions where collaboration feels natural, inclusive, and uninterrupted.
From technology choice to workplace experience
Ultimately, the quality of hybrid collaboration is determined less by individual features than by the overall experience they create. Employees judge meeting technology by how it makes them feel: confident or hesitant, included or sidelined, focused or distracted.
From huddle room to boardroom, the most effective collaboration environments share the same principles. They are simple to use, consistent across spaces, secure by design, and flexible enough to evolve. They respect people’s time and attention, allowing teams to focus on ideas rather than interfaces.
As organizations continue to refine their hybrid strategies, meeting room solutions remain a revealing indicator. When collaboration flows effortlessly, hybrid work has a real chance to succeed. When it doesn’t, even the best policies and tools elsewhere struggle to compensate.
In the end, the future of hybrid work is not decided in strategy documents. It is decided, meeting by meeting, in the rooms where people come together to work.
Why smart meeting rooms are becoming strategic IT assets
For years, innovation in workplace collaboration followed a familiar pattern. Better cameras promised clearer video. Smarter microphones claimed to eliminate background noise. Software updates added more features, more buttons, and more possibilities. Progress was tangible, measurable, and largely device‑centric.
As organizations move deeper into hybrid work, that model is starting to show its limits. The most meaningful change in collaboration today is not driven by hardware specifications or platform features. It is driven by a shift in mindset: about meeting rooms, about data, and about the evolving role of IT in shaping how people actually work together.
Meeting rooms are undergoing a quiet but profound transformation. They are no longer passive spaces that simply host meetings. Increasingly, they are becoming active, data‑driven IT endpoints that sit at the crossroads of productivity, workplace culture, sustainability, and employee experience.
From furniture to IT infrastructure
Historically, meeting rooms lived in an awkward grey zone. They were physical spaces, often treated as facilities or AV concerns, yet they relied heavily on IT systems to function. When something broke, IT was expected to fix it, usually reactively and with limited visibility into what actually went wrong.
That approach no longer scales. Today’s collaboration environments are modular, software‑defined, and deeply integrated into enterprise networks. Cameras, microphones, displays, and room systems behave much more like endpoints than furniture. They require monitoring, updates, security policies, and lifecycle management – just like laptops or mobile devices.
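Treating a room system like any other managed endpoint can be sketched with a minimal inventory record that flags devices needing attention. The field names, thresholds, and dates below are illustrative assumptions, not a vendor schema:

```python
# Minimal sketch of a meeting-room device as a managed endpoint.
# Field names and thresholds are hypothetical, for illustration only.

from dataclasses import dataclass
from datetime import date

@dataclass
class RoomEndpoint:
    room: str
    device: str
    firmware: str
    last_update: date
    eol: date  # assumed end-of-support date, for lifecycle planning

    def needs_attention(self, today: date, max_staleness_days: int = 90) -> bool:
        """Flag devices with stale firmware or approaching end of support."""
        stale = (today - self.last_update).days > max_staleness_days
        near_eol = (self.eol - today).days < 180
        return stale or near_eol

ep = RoomEndpoint("Board-A", "cam-01", "2.4.1",
                  last_update=date(2025, 1, 10), eol=date(2027, 6, 30))
print(ep.needs_attention(date(2025, 6, 1)))  # True: firmware is over 90 days old
```

The point is not this particular schema but the shift it represents: once rooms are modeled as endpoints with update and lifecycle state, IT can monitor them proactively instead of waiting for tickets.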
For IT leaders, this represents a fundamental shift. Managing collaboration spaces is no longer about responding to tickets. It is about designing reliable, measurable infrastructure that people can trust. When meeting rooms work consistently, they disappear into the background. When they do not, they erode confidence, waste time, and undermine collaboration at its core.
AI moves from promise to practice
Artificial intelligence has been part of collaboration conversations for years, often framed as an exciting add‑on. In practice, many organizations are now discovering that AI only delivers value when it solves real, operational problems.
In meeting environments, that means using AI to reduce friction rather than impress. Intelligent framing, noise reduction, automated room diagnostics, and meeting insights are most effective when they quietly improve the experience without asking users to change their behavior. AI becomes meaningful when it helps meetings start on time, keeps participants engaged, and reduces the cognitive load on employees who are already juggling multiple tools and priorities.
This also places new responsibility on IT. AI‑enabled collaboration systems need governance, transparency, and clear success criteria. The question is no longer whether AI is present, but whether it measurably improves how people collaborate.
Measuring what really matters
One of the most challenging shifts for IT organizations is redefining what success looks like. Traditional metrics such as uptime or ticket volume only tell part of the story. A meeting room can be technically available and still fail its users.
Leading organizations are starting to look beyond device health and toward outcomes. Are rooms used as intended? Do employees trust technology enough to use it spontaneously? Are collaboration spaces supporting focus, inclusivity, and effective decision‑making?
Answering these questions requires data, but also interpretation. Room analytics, usage patterns, and performance insights only become valuable when IT teams connect them to broader business goals such as productivity, employee satisfaction, and sustainability.
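As a concrete example of connecting raw usage data to an outcome metric, a simple aggregation can turn meeting events into a per-room "started on time" rate, one proxy for whether employees trust the room. The event fields and the two-minute threshold are assumptions for illustration, not a real analytics API:

```python
# Hypothetical sketch: turning raw room-usage events into an adoption signal.
# Event shape and the "on time" threshold are illustrative assumptions.

from collections import defaultdict

def on_time_rate(events: list[dict]) -> dict[str, float]:
    """Share of meetings in each room that started within 2 minutes."""
    booked = defaultdict(int)
    on_time = defaultdict(int)
    for e in events:
        booked[e["room"]] += 1
        if e["started_within_min"] <= 2:  # assumed "on time" threshold
            on_time[e["room"]] += 1
    return {room: on_time[room] / booked[room] for room in booked}

sample = [
    {"room": "Huddle-1", "started_within_min": 1},
    {"room": "Huddle-1", "started_within_min": 6},
    {"room": "Board-A", "started_within_min": 0},
]
print(on_time_rate(sample))  # {'Huddle-1': 0.5, 'Board-A': 1.0}
```

A metric like this only becomes meaningful when paired with interpretation: a low rate in one room might point to confusing setup, unreliable hardware, or simply a mismatch between the room and how teams want to use it.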
A broader role for IT leaders
Taken together, these trends point to a broader evolution in the role of IT. Collaboration is no longer a support function that sits on the sidelines of organizational strategy. It actively shapes how people connect, how culture is experienced, and how work gets done.
For IT leaders, this means developing new skills, new partnerships with the workplace and HR teams, and new ways of thinking about technology’s impact on human interaction. The future of collaboration will not be defined by the next device release, but by how intentionally organizations design and manage the spaces where collaboration truly happens.