Computerworld.com [Hacking News]
OpenAI’s GPT is getting better at mathematics
OpenAI’s GPT-5.2 Pro does better at solving sophisticated math problems than older versions of the company’s top large language model, according to a new study by Epoch AI, a non-profit research institute.
GPT-5.2 Pro solved four problems that had been too difficult for any other AI models to solve, and of the 13 problems that any other model had previously solved, it was able to solve 11, Epoch reported.
This means GPT-5.2 Pro has now solved 31% of Epoch AI’s challenges, up from the previous best score of 19%.
Math problems have long proven difficult for AI. Some scientists have speculated that this is because AI systems can’t recognize their own limitations, while others have surmised that the models’ focus on language rather than numbers leads them to stumble.
The Epoch AI experiment has demonstrated that AI is becoming more adept at some of the trickier math issues. In the test, GPT-5.2 Pro was presented with problems from various branches of math.
Joel Hass, a professor in the department of mathematics at University of California, Davis, contributed one of the problems solved by GPT-5.2 Pro. He told Epoch AI he was impressed with the way it cracked his topological challenge. “GPT-5.2 Pro solved the problem with correct reasoning. Notably it was able to recognize the specific geometry of a surface defined by a polynomial in the problem statement,” he said.
Number theorist Ken Ono of the University of Virginia contributed another of the problems. He said that the AI model had “understood the essential theoretical trick and executed the necessary computations” to solve it, but added, “If it was a PhD student I would award only 6/10 for rigor due to missing details.”
Microsoft handed over BitLocker keys to law enforcement, raising enterprise data control concerns
Microsoft handed Windows users’ BitLocker encryption keys to US law enforcement officers, providing access to encrypted data, according to a news report.
The US Federal Bureau of Investigation approached Microsoft with a search warrant in early 2025, seeking keys to unlock encrypted data stored on three laptops in a case of alleged fraud involving the COVID unemployment assistance program in Guam. As the keys were stored on a Microsoft server, Microsoft adhered to the legal order and handed over the encryption keys, Forbes reported on Friday.
Microsoft did not immediately respond to a request for comment.
There have been instances in the past where big tech companies, approached by law enforcement for access to devices, resisted handing encryption keys to authorities.
BitLocker is a widely used tool for securing data at rest, whether by individuals or enterprises managing hundreds or thousands of Windows devices. By default, many Windows installations back up BitLocker recovery keys to Microsoft’s cloud services, where Microsoft can retrieve them if legally compelled with a valid order.
Custody issue, not BitLocker
BitLocker is designed to provide encryption for entire volumes, addressing the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned devices. As BitLocker is bundled with Windows 10 and Windows 11, it has effectively become the default full-disk encryption layer across Windows endpoints, say experts.
“BitLocker itself does not fail here. The software does what it is built to do, encrypts the disk, integrates into Windows, allows for easy recovery,” said Sanchit Vir Gogia, chief analyst at Greyhound Research.
While the encryption of BitLocker is robust, enterprises need to be mindful of who has custody of the keys, as this case illustrates.
“The encryption engine in BitLocker, using AES-128 or AES-256 in XTS mode, is built to resist modern cryptanalysis. Even the US Department of Homeland Security has admitted they lack the forensic tooling to break it directly. However, most enterprise fleets running Windows use tools like Intune and Autopilot to roll out and manage devices. In that flow, unless explicitly disabled, recovery keys are automatically backed up to Microsoft Entra ID. These keys are then viewable via the admin centre or retrievable through scripts,” Gogia said.
Where most enterprises go wrong
Enterprises using BitLocker should treat the recovery keys as highly sensitive, and avoid default cloud backup unless there is a clear business requirement and the associated risks are well understood and mitigated.
The safest configuration is to redirect those keys to on-premises Active Directory or a controlled enterprise key vault, which can cut Microsoft out of the recovery loop, said Amit Jaju, a global partner at Ankura Consulting. Even if keys are stored in a corporate-controlled directory or service such as Microsoft Entra ID or Intune, there should be strong governance over who can read them, with effective logging and just-in-time access, he said.
If keys have to reside in Microsoft’s cloud, use strong multi-factor authentication for admin roles, with conditional access and privileged-access workstations so a compromise of admin credentials does not automatically become a compromise of all keys, he said.
Enterprises should ensure strict access control and separation of duties. “Only a small, vetted group such as security operations, endpoint engineering, should have rights to view or export recovery keys. Approvals should be workflow-based, not ad hoc. Every key retrieval should leave an auditable, immutable trail, and ideally be tied to an incident or ticket ID,” said Jaju.
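The workflow-based, auditable retrieval Jaju describes can be sketched in a few lines. This is a minimal illustration with hypothetical names (`retrieve_recovery_key`, an in-memory `AUDIT_LOG` standing in for a tamper-evident store), not any vendor’s API:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = []  # stand-in for an append-only, tamper-evident audit store
AUTHORIZED_GROUPS = {"security-operations", "endpoint-engineering"}

def fetch_key_from_vault(ticket_id: str) -> str:
    # Hypothetical call into the actual key vault.
    return "recovery-key-material"

def retrieve_recovery_key(requester: str, group: str, ticket_id: str) -> str:
    """Gate every key retrieval behind group membership and a ticket ID."""
    if group not in AUTHORIZED_GROUPS:
        raise PermissionError(f"{group} may not read recovery keys")
    if not ticket_id:
        raise ValueError("retrieval must be tied to an incident or ticket ID")
    record = {
        "requester": requester,
        "group": group,
        "ticket": ticket_id,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    # Chain each record to the previous digest so later edits are detectable.
    prev = AUDIT_LOG[-1]["digest"] if AUDIT_LOG else ""
    record["digest"] = hashlib.sha256(
        (prev + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    AUDIT_LOG.append(record)
    return fetch_key_from_vault(ticket_id)
```

A request from outside the vetted groups, or one without a ticket ID, fails before any key material is touched, and every successful retrieval leaves a hash-chained record.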
CISOs should also ensure that when devices are repurposed, decommissioned, or moved across jurisdictions, keys should be regenerated as part of the workflow to ensure old keys cannot be used.
Gogia warned of the long tail of insecure setups. Personal accounts linked during provisioning, or BYOD devices that silently sync keys to consumer dashboards, are invisible pathways for leakage. “If those keys sit outside your boundary, you no longer have a clean chain of custody. That’s not a theoretical risk. It’s something auditors are now actively checking,” he said.
As many breaches are not cryptographic but procedural, enterprises should have a formal playbook for when a recovery key can be used (lost PIN, internal investigation with legal approval, lawful order) and when it cannot (informal manager request to access an employee’s data), noted Jaju.
Geopolitics reshaping enterprise data and key control
Geopolitical tensions are also reshaping global trade and technology policies, something enterprises increasingly need to factor into their security strategies. As governments assert greater control over data, trade secrets and proprietary information risk becoming entangled in broader state interests.
Gogia warned, “The US CLOUD Act allows law enforcement to compel US-based providers to hand over data and keys, even if that data is hosted in Europe or Asia. Similarly, Chinese data localisation rules require keys and data to be accessible to state regulators. In India, recent legislation has introduced broad access rights for security agencies. And the EU is debating whether sovereignty must include key custody by design, not just data residency.”
If recovery keys are stored with a cloud provider, that provider may be compelled, at least in its home jurisdiction, to hand them over under a lawful order, even if the data subject or company is elsewhere, and possibly without the company being notified. This is even more critical for a pharma company, semiconductor firm, defence contractor, or critical-infrastructure operator, as it exposes them to risks such as the exposure of trade secrets in cross-border investigations.
Jaju added, “Enterprises should assume that where keys are held, they can potentially be compelled. So where practical, ensure that the entities controlling keys are legally anchored in the jurisdiction whose laws and due-process standards you trust most. Establish board-level oversight on cross-border data access, including a register of government data-access requests, where legally permitted. For multinational companies, legal and security teams must work together to understand mutual legal-assistance treaties, CLOUD Act implications, and local interception laws.”
Microsoft releases second out-of-band fix for Windows in a week
Outlook users have reported difficulties with Microsoft’s January Patch Tuesday updates, forcing Microsoft, once again, to patch some of its patches.
Users reported that, after applying the January 13 Windows updates, some applications became unresponsive or encountered unexpected errors when opening files from or saving files to cloud-based storage such as OneDrive or Dropbox. In particular, certain Microsoft Outlook configurations with the PST file containing a user’s messages stored on OneDrive could cause Outlook to hang, or lead to sent messages going missing or previously downloaded emails being re‑downloaded.
In response, Microsoft has issued a set of out-of-band emergency updates for Windows 11 and 10 and Windows Server 2019, 2022, and 2025 to solve the problem.
This is not the first time that Microsoft has had to issue a patch for a patch. Just last week, it had to react when it inadvertently introduced two new bugs: an inability to connect to Windows Cloud PCs and an inability to shut down some machines with Secure Launch enabled. Prior to that, in October 2025, a patch caused a multitude of different issues, while in May 2025 Microsoft had to issue an out-of-band patch to fix a Windows 11 start-up failure.
Microsoft said the latest out-of-band updates are cumulative and include security fixes and improvements from the January 13, 2026, security update (KB5074109) and the out-of-band update (KB5077744) from January 17, 2026.
Apple to upgrade Siri’s AI by April — Bloomberg
While Apple CEO Tim Cook spent an evening at the movies, Bloomberg reported that his company intends to introduce a much-improved, Google Gemini-boosted AI in February, following this with an even bigger upgrade later this year.
We’ve heard much of this before; what’s new is the timing. (The partnership envisions Google Gemini integrated within Apple’s Foundation Models.)
Beta testing from February
The initial beta upgrade is scheduled to ship with iOS 26.4 in the second half of February, Bloomberg said. If everything goes as expected during beta testing, the software should ship for real in March or April.
The company is then expected to announce an even bigger upgrade to Siri at WWDC in June, when Apple unveils its next iPhone OS — iOS 27, which is due out in the fall. This will turn Siri into a smart, conversational chatbot.
We all understand why the partnership between Apple and Google is happening: The iPhone maker encountered big problems in its own AI development, and while a lot of heads have rolled since then, it clearly needed to reach a deal to expedite its platform embrace of artificial intelligence.
The company’s failure to introduce an AI-enabled Siri in 2024 is something it knows a billion customers are aware of, with millions now making regular use of the array of generative AI (genAI) services already out there. With 73% of CIOs saying Macs are already in use to run AI in the enterprise, Apple wants to meet the needs of that particular audience.
Getting it right
With platforms that have arguably become the best devices on which to run and build AI, Apple is under great pressure to prove it can also provide AI services people will trust and use, while retaining the essential simplicity of the Apple user experience.
For good or ill, the importance of AI will only grow in the coming years, meaning Apple is under serious pressure not just to deliver something good, but also to deliver something that satisfies expectations. The millions who have waited since 2024 for some of the promised Siri features will be less likely to forgive the company if it ships anything unremarkable. This is a high-pressure moment that could even be existentially decisive for the firm.
It’s possible those high stakes will work in Apple’s favor. That’s because of the long wait and the wide reporting of the problems Apple faced, which mean that interest in whatever it does come up with will be very high.
Will this AI make or break Apple?
We don’t know yet whether people will rush to install iOS 26.4, however. We’ve heard reports that interest in iOS 26 may be lower than normal, which possibly reflects reluctance to embrace the Liquid Glass UI.
It is true that many people are curious about AI — even in Europe, 37.2% of the population has used genAI tools. All the same, not everyone is completely enthusiastic, fearing it further entrenches the power of Big Tech while decimating employment prospects.
Within this environment, initial reactions to what Apple delivers are important because many users may be cautious about the upgrade. As a result, initial reactions from the media and early adopters could make or break Apple’s new Google Gemini partnership.
Ultimately the world will be waiting to see whether this is an Apple Maps moment, or something closer to the unveiling of the first Mac.
We’re about to find out.
Why AI assistants still face barriers at scale
AI assistants and more advanced agent-based tools are gaining visibility in the workplace, even as most organizations remain cautious about deploying them at scale. Analysts say that might change as the technology matures, but only if businesses address persistent challenges around security, governance, and trust.
A Gallup poll in November showed that just 18% of US workers use AI tools on a weekly basis, and just 8% use AI daily, highlighting its still limited use in the workplace. A separate PwC survey of 50,000 workers globally found similar results: 14% of respondents use generative AI (genAI) daily, while 6% interact with AI agents each day.
Even so, analysts envision some organizations moving beyond pilot projects in the near future. When it comes to AI in collaboration software applications, Irwin Lazar, principal analyst at Metrigy, sees signs that businesses intend to move more aggressively from experimentation to broader adoption this year.
Lazar said companies increasingly fear falling behind if they fail to adopt the technology, particularly given its potential to streamline collaboration and save time. “I expect you’ll see a large movement into real-world adoption, whereas last year it was more about pilots and trying to figure out how do we deploy successfully?”
“Adoption is picking up,” said Ethan Ray, senior analyst at 451 Research, part of S&P Global Market Intelligence. The research firm found that more than half of enterprises already have agents in production or testing, and organizational integration of genAI use is expected to jump from 27% to 40% within the next 12 months. That, he said, assumes businesses can overcome nagging deployment challenges.
“Progress will depend on building trust as leaders need strong governance, observability, and security controls, because top concerns are data privacy, accuracy, and reliability,” said Ray.
AI assistants still struggle to scale in the workplace
Even with a wide range of AI tools available to workers, deployments have been limited so far. Take Microsoft 365 (M365) Copilot, for example: two years after its full launch, businesses remain slow to adopt the AI assistant.
“Despite the hype, Microsoft has really struggled to make huge headway in terms of deploying it at scale,” Max Goss, senior director analyst at Gartner, said at the Gartner IT Symposium/Expo in Barcelona in November.
An audience poll during Goss’s presentation showed that most either remain in pilot deployments or have rolled out to a small group of fewer than 20% of employees. Few have deployed M365 Copilot widely across their workforce, mirroring the broader pattern of business adoption Gartner has noted, said Goss.
Several factors have slowed wider adoption, including security and governance worries, and the need to train staffers to use the AI assistant. An unclear ROI case has also put the brakes on any expansion plans and is having “real impact on Copilot adoption” when it comes to larger rollouts, said Goss.
Still, business interest in M365 Copilot remains high, he said, an indication that Microsoft’s marketing efforts are paying off in some ways. A Gartner survey showed that IT leaders’ priorities for AI assistants over the next 12 months largely center around M365 Copilot, both for the paid version (86%) and the free Copilot Chat (68%).
Businesses are interested in other AI assistants too: 56% of IT leaders plan to roll out OpenAI’s ChatGPT to staff, according to Gartner data, with Google’s Gemini, Anthropic’s Claude, and Amazon’s Q also piquing interest.
In fact, most organizations are looking at multiple AI assistants. Only 8% are focused on a single tool, with an average of at least three enterprise AI assistants in use at surveyed organizations. “The AI race is still very much on, and Microsoft has genuine competition,” said Goss.
AI tools start to mature
While customers are cautious, software vendors continue to add AI features into their products. Almost every vendor in the collaboration software market has an agent offering at this point, said Lazar.
“They’re going from standalone agents [where] you have to build capabilities, to agents that are already available within the applications,” said Lazar. This includes off-the-shelf agents that users can select for tasks such as project management, sales management, or IT service desk support. Customers only need to grant access to relevant data and set governance rules before putting the agent into use.
“Now you’re really starting to see this agentic era start to move forward, at least from the vendor standpoint,” he said.
“In 2026, vendors will move past just adding assistants and start building features that make agents reliable, explainable, and easy to govern,” said Ray. He expects more focus on “things like memory (so agents remember context), transparency in decision-making, and guardrails for safety.”
Agents get connected
One development that could enhance the usefulness of agents is the ability for AI assistants to interact with each other.
Employees can get frustrated with AI tools that are confined to a single app, which is at odds with the way employees work. “Work doesn’t live in one software tool,” said Will McKeon-White, senior analyst at Forrester. “I suspect most platforms have now realized the need for multi-vendor, multi-agent orchestration.”
To address this challenge, tech companies have been finding ways to simplify communication between agents — most notably turning to Anthropic’s model context protocol (MCP) and Google’s Agent2Agent (A2A) protocol.
MCP servers have been built into a wide variety of collaboration and productivity tools already. “Vendors have realized they can’t own everything, and so they’re building MCP servers to essentially federate the data they have with other AIs,” said Lazar.
The use of MCP servers could change how employees interact with collaboration and productivity tools, he said, by allowing businesses to choose a primary AI model and pull in data from multiple sources. “It saves the user from having to move back and forth between applications in order to do things like summarize chats or get a pulse on what’s happening in the company,” said Lazar.
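The federation pattern Lazar describes can be illustrated without the actual MCP wire protocol: several in-process stand-ins for tool servers each expose a named tool over their own data silo, and a primary assistant queries them all through one router. The class and function names here (`ToolServer`, `federate`) are hypothetical, chosen for the sketch:

```python
from typing import Callable, Dict, List

class ToolServer:
    """Toy stand-in for an MCP server: a named set of callable tools."""
    def __init__(self, name: str):
        self.name = name
        self.tools: Dict[str, Callable[[str], str]] = {}

    def tool(self, tool_name: str):
        # Decorator that registers a function as a named tool.
        def register(fn):
            self.tools[tool_name] = fn
            return fn
        return register

def federate(servers: List[ToolServer], tool_name: str, query: str) -> Dict[str, str]:
    """Ask every server that offers the tool, and collect answers by server."""
    results = {}
    for server in servers:
        if tool_name in server.tools:
            results[server.name] = server.tools[tool_name](query)
    return results

chat = ToolServer("chat-app")
crm = ToolServer("crm")

@chat.tool("search")
def search_chat(query: str) -> str:
    return f"3 chat threads mention {query!r}"

@crm.tool("search")
def search_crm(query: str) -> str:
    return f"2 accounts mention {query!r}"
```

Calling `federate([chat, crm], "search", "Q3 renewal")` returns one dict keyed by source; the primary model would then summarize it, sparing the user a trip through each application.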
Security and governance
Alongside the potential benefits, the use of MCP also introduces new security risks.
“The big concern I hear when I talk to folks is related to security of MCP servers,” said Lazar. “They will be the number one target for attack as they become more widely available, because that’s the gateway to enterprise data.”
For attackers, MCP servers present a “target-rich environment,” whether for data exfiltration or data poisoning. “If there’s any limitation on deployment, that’s going to be what people are concerned about,” he said.
In his presentation, Goss said security and governance will continue to be key considerations for IT decision-makers rolling out M365 Copilot, though the challenges continue to evolve.
Oversharing, where M365 Copilot surfaces sensitive corporate data to users not authorized to have the information, remains a priority, for instance. Other risks have emerged, with “agent sprawl” becoming a notable topic in 2025 as businesses deploy agents and workers build their own.
In 2026, he said “multimodel agent sprawl” could be an emerging issue for M365, as Microsoft offers the option to connect its AI assistant to a wider range of models, notably Anthropic’s, as it moves beyond OpenAI as its main partner.
“When Microsoft integrated Anthropic, they took the decision not to host an Anthropic model: that model is still in the AWS environment,” he said. “As Microsoft onboards more models, it’s going to be very difficult for them to host them all and do what they’ve done with OpenAI. So, we’re now going to have to start to think about: how do we manage agents and models that are outside of the Microsoft trust boundary, as well as the ones that are within? What do you do about that? What strategy should you have?”
He recommended that organizations use “adaptive governance” to manage agents, setting the level of governance controls in relation to the level of risk. This approach enables the creation of a “self-governed, safe zone where users can create low-risk agents using Copilot Studio or other tools that will help them improve their productivity without exposing you to risk,” he said.
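One way to read “adaptive governance” is as a simple mapping from an agent’s capabilities to a control tier. The tiers and control values below are illustrative assumptions, not Gartner’s or Microsoft’s taxonomy:

```python
# Hypothetical tiers: controls scale with what an agent can read and
# whether it can act outside the organization's boundary.
RISK_CONTROLS = {
    "low":    {"approval": "self-service", "review": "annual",    "logging": "standard"},
    "medium": {"approval": "manager",      "review": "quarterly", "logging": "standard"},
    "high":   {"approval": "security",     "review": "monthly",   "logging": "full-audit"},
}

def classify_agent(reads_sensitive_data: bool, acts_externally: bool) -> str:
    """Map an agent's capabilities to a governance tier."""
    if reads_sensitive_data and acts_externally:
        return "high"
    if reads_sensitive_data or acts_externally:
        return "medium"
    return "low"

def controls_for(reads_sensitive_data: bool, acts_externally: bool) -> dict:
    """Return the control set an agent must satisfy before deployment."""
    return RISK_CONTROLS[classify_agent(reads_sensitive_data, acts_externally)]
```

A user-built productivity agent that touches no sensitive data lands in the self-governed “low” zone, while anything that both reads sensitive data and acts externally requires security approval and full audit logging.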
Goss said governance concerns shouldn’t be a reason to avoid deploying AI assistants or agents. “For me, governance is the ultimate enabler of AI — but we’ve got to get it right, and we’ve got to spend some time on it,” he said. “While the value around Copilot is still a little bit mixed, it’s a perfect time to think about how we get the foundations in place. Because I think there will come a tipping point…where most people are deploying Copilot at scale.
“…We’re not there yet, so it’s a great opportunity to fix the foundations.”
Intel’s AI pivot could make lower-end PCs scarce in 2026
In 2026, lower-end PCs may be more difficult to come by, and for those that are available, price tags may rise.
This is fallout from Intel’s plans to pivot its manufacturing capacity from chips for PCs to Xeon processors to support intensive AI workloads. The company has admitted that it had miscalculated demand for its data center products, and will now go all-in on AI-ready hardware.
This strategic turn indicates how voracious companies are for infrastructure that can power intensive AI workloads, to the point that even tech giants like Intel aren’t prepared for the demand.
“Intel’s move to prioritize data center capacity is in response to a supply-demand mismatch, or rather, faulty forecasting from their hyperscaler customers who rapidly shifted to the higher core-count solution late last year,” noted Scott Bickley, advisory fellow at Info-Tech Research Group.
Accelerating Xeon
In an earnings call this week, Intel CFO David Zinsner acknowledged capacity constraints in Q3 and Q4 as demand for its Xeon products soared. Intel Xeon 6 server processors (codenamed Granite Rapids and Sierra Forest) were designed for data centers, cloud, AI, and high performance computing (HPC), and are widely used by Nvidia.
At the same time, industry-wide demand for key components like dynamic random-access memory (DRAM), NAND, and substrates is ballooning due to intense demand for AI-ready infrastructure, said Zinsner.
Just six months ago, he noted, unit sales were not expected to increase. “Every hyperscaler customer we talked to was signaling that,” he said. However, Intel experienced a rapid increase in orders for Xeon processors over the third and fourth quarters, and, after talking with hyperscalers, Zinsner said he got the impression that this will be “a story we’d feel for several years.”
“To the extent we have excess, we’re pushing all of that into the data center space to meet that customer demand,” he said. “We have important OEM customers, both data center and client, and that must be our priority to get the limited supply we have to those customers.”
Roadmaps altered
The company has made “decisive changes” to simplify its server road map, according to CEO Lip-Bu Tan. It will focus more closely on Diamond Rapids (Xeon gen 7) and accelerate delivery of Coral Rapids (Xeon 8), which will feature simultaneous multithreading (SMT), where one core can process two or more threads at once.
However, the company will not abandon its client business, Zinsner emphasized. “We can’t completely vacate the client market,” he said, “so we’re trying to support both as best we can, and obviously work our way out of this supply issue.”
That said, within the client segment, the company will particularly focus on mid- and high-end products (Core-series high-performance processors), as opposed to low end products (for less advanced PCs).
Intel is leaning heavily into AI PCs, having showcased its Core Ultra Series 3 (codenamed Panther Lake) at CES earlier this month, and said it is on track to release Nova Lake (its next mainstream client CPU following Core Ultra Series 3) this year.
“We now have a client road map that combines best-in-class performance with cost-optimized solutions,” said Tan.
The outlook for lower-end PCs
What does this mean for lower-end PCs? Zinsner acknowledged that “client CPU inventory is lean,” even amid excitement for Series 3. Further, “rising component pricing is a dynamic we continue to watch closely, especially relative to the client market.”
The Intel 18A node manufacturing process for Panther Lake is challenged with lower than expected yields, which “throttles output vs market demand,” said Info-Tech’s Bickley. “Coupled with a focus on their mid-high end markets, this makes the lower-end entry-level laptops and PCs materially more difficult to source.”
Anshel Sag, principal analyst at Moor Insights & Strategy, agreed there may be fewer low-end SKUs in 2026, and the ramp for products like Wildcat Lake, an entry-level Core Series 3 CPU, might be later in the year, or could slip into next year as 18A capacity increases.
Processors from AMD and Qualcomm could help address some of the shortfalls, especially in the mid-range, Sag forecasted; at the low end, more price-conscious users may push into Android via Google’s Project Aluminium and through partners like Mediatek, which currently rule that market.
As lower-cost inventory buffers are depleted, buyers can expect price increases ranging from 15% to 20% in 2026, with some brands “hiking prices higher to salvage margins,” said Bickley. He projects PC manufacturers will lean into the AI PC trend, focusing less on lower-cost models and shifting production to machines that utilize higher-end CPU chips and memory components.
However, he noted, “CPUs are not being cannibalized by GPUs. Instead, they have become ‘chokepoints’ in AI infrastructure.” For instance, CPUs such as Granite Rapids are essential in GPU clusters, and for handling agentic AI workloads and orchestrating distributed inference.
How pricing might increase for enterprises
Ultimately, rapid demand for higher-end offerings resulted in foundry shortages of Intel 10/7 nodes, Bickley noted, which represent the bulk of the company’s production volume. He pointed out that it can take up to three quarters for new server wafers to move through the fab process, so Intel will be “under the gun” until at least Q2 2026, when it projects an increase in chip production.
Meanwhile, manufacturing capacity for Xeon is currently sold out for 2026, with varying lead times by distributor, while custom silicon programs are seeing lead times of 6 to 8 months, with some orders rolling into 2027, Bickley said.
In the data center, memory is the key bottleneck, with expected price increases of more than 65% year over year in 2026 and up to 25% for NAND Flash, he noted. Some specific products have already seen price inflation of over 1,000% since 2025, and new greenfield capacity for memory is not expected until 2027 or 2028.
Moor’s Sag was a little more optimistic, forecasting that, on the client side, “memory prices will probably stabilize this year until more capacity comes online in 2027.”
How enterprises can prepare
Supplier diversification is the best solution for enterprises right now, Sag noted. While it might make things more complex, it also allows data center operators to better absorb price shocks because they can rebalance against suppliers who have either planned better or have more resilient supply chains.
Bickley urged enterprises to also establish hybrid AI strategies that split workloads between the cloud and client device PCs, to defer reliance on oversubscribed compute. Where possible, he said, invest in memory optimization tools and extend refresh cycles for existing hardware to avoid the 2026 price peak, and audit supply chains to gain earlier visibility into component risks.
Further, “shift to multi-year commitments and away from spot buying,” he advised. “This requires longer-term planning and strategic supply agreements to guarantee allocation in a capacity-limited environment.”
Amazon layoffs expected to disproportionately hit AWS and tech talent
As the market slows down, AWS and other Amazon units are preparing for another round of layoffs, which is expected to overwhelmingly impact tech talent.
“Amazon is planning a second round of job cuts next week as part of its broader goal of trimming some 30,000 corporate workers,” Reuters reported. “The company in October cut some 14,000 white-collar jobs, about half of the 30,000 target first reported by Reuters. The total this time is expected to be roughly the same as last year and could begin as soon as Tuesday.”
“This announcement should not be a shock to anyone, as Amazon had clearly forecasted a total layoff number last year of around 30K,” said Scott Bickley, advisory fellow at Info-Tech Research Group. “Ten percent of the corporate workforce represents a large-scale reduction, but Amazon has long operated less like a traditional people-centric enterprise and more like a highly optimized system.”
“These job cuts have likely been in the works for several months now, with the organization preparing to ensure no balls are dropped once the affected employees are released. In pre-Covid times, bottom performers would be regularly and relentlessly culled from the herd. The pandemic was a generational opportunity for Amazon, however, and they scaled up massively to meet the increased demands from a global population stranded inside their homes under lockdown,” Bickley said. “This created a massive hiring wave, the likes of which Amazon is still seeking to right-size. Add in the recent AI hype cycle, and this process was likely further slowed.”
The role of AI is clearly a background factor in the layoffs, but people familiar with Amazon operations stressed that it’s indirect. Amazon is not using AI to replace employees, they said, but the softening of the AI market, especially with AWS losing momentum to Google, is a key factor behind the cuts.
Amazon CEO Andy Jassy himself told investors during the third-quarter earnings call in late October that AI is not the direct reason for the layoffs. Jassy said the layoffs “are not really financially driven and it’s not even really AI-driven” and that the reason is “culture.”
Jassy explained: “If you grow as fast as we did for several years, the size of businesses, the number of people, the number of locations, the types of businesses you’re in, you end up with a lot more people than what you had before, and you end up with a lot more layers. And when that happens, sometimes without realizing, you can weaken the ownership of the people that you have who are doing the actual work.”
Mohan Mulund, a former Amazon director of product management who is now managing director of investment firm Vangal, said that he has been in touch with current AWS employees who have expressed worries about being laid off.
Mulund said that Jassy is correct that the layoffs are being fueled by culture, but it’s not quite the culture that Jassy described. Mulund said the issue is that Amazon, along with every hyperscaler, hired a lot of people in 2020 and 2021 and paid them better than many existing employees in identical roles.
“Amazon believes in having lifers,” Mulund said, referring to employees who have been at the company for more than 15 years. “They see people making more [than they do] and they are upset.”
Mulund offered the example of a Level 8 director at Amazon. Even though a typical manager at that level is making about $700K annually, many of the Level 8 managers brought onboard during the hiring boom are taking home $1 million per year. “That creates resentment,” he said.
Mulund said that Amazon is not exactly targeting those more highly compensated people; they are laying off employees whose performance is ranked poorly. But, he stressed, the more an employee is paid, the higher the expectations. That means that the layoffs will inevitably impact more of the higher-paid people, which should dilute the resentment.
Bickley added that reductions at Amazon happen fairly routinely. “They are pretty ruthless in terms of their performance reviews. Typically they cull the herd pretty regularly,” Bickley said.
He said he only partially agreed with Mulund. That higher compensation is a reality, but much of it is from stock options, he noted. Amazon often looks very closely at employees “as they are about to cash out” and that is why it is common for Amazon managers “to leave right before they vest.”
Still, Bickley said, that impacts a relatively small number of the people likely to be laid off. “There are hundreds, at most, that are in that category. It’s nowhere near thousands,” Bickley said. “I don’t think that is what is driving these layoffs.”
Mohamed Yousuf, CEO of Smart Workforce AI, observed that AI is playing a role.
“When they mention culture, or bureaucracy, it’s from the result of pre-AI operating models that don’t match today’s productivity realities,” Yousuf pointed out. “We have seen how effectively AI can change productivity baselines inside companies. This then pushes leadership to reassess how much of their current structure is actually needed.”
“Separating these layoffs from AI is just simply narrative management. The reality is AI raised productivity expectations and employees are asked to meet them by leaving. They might not be directly replacing each laid off employee with AI, but the remaining staff work a lot faster as AI enabled teams,” Yousuf said.
Mulund added that he expects to see the layoffs overwhelmingly impacting “tech people” and that “more than half of them will be AWS people. It will be more on the AWS side because AWS growth has slowed down pretty dramatically.”
Despite that, he said, he does not expect AWS customers will be affected by the layoffs. “Nothing is going to change for them at all. It won’t impact service of the business,” he said, adding, “because [AWS] businesses have slowed down, they can make do with fewer people on the team” without affecting service delivery.
European tech leaders advise caution on tech sovereignty drive
European tech leaders at this year’s World Economic Forum meeting in Davos, Switzerland, warned against an overzealous approach to digital sovereignty that shuts out US technology suppliers.
“We have to be careful about the discourse around sovereignty,” said Aiman Ezzat, CEO of French IT services firm Capgemini, in a panel discussion on Thursday.
Ezzat cited Mario Draghi’s influential 2024 report on European Union competitiveness, in which the former European Central Bank President linked lagging productivity growth in Europe to lower technology adoption.
Regulations designed to enforce digital sovereignty could slow technology adoption in Europe and “further deplete the competitiveness of European industry,” Ezzat warned.
“We have to go for technology adoption as fast as possible. And, yes, sometimes it is going to be at the expense of sovereignty,” he said.
In a Bloomberg interview at Davos, Börje Ekholm, CEO of Swedish telecom equipment firm Ericsson, said that recent discussions around sovereignty are “dangerous,” and that attempts to build homegrown alternatives to US technology would lead to higher prices in the region.
At the same time, Capgemini’s Ezzat noted a “huge amount of dependency” on US technology that has led to “exposure and risk.”
He said Europe’s reliance on US technology is due, in part, to a weak domestic cloud sector that stems from a lack of available investment capital during the 2010s. A 2025 Synergy Research report found that European cloud providers now account for just 15% of the region’s cloud market. “We did not invest early enough in the cloud to be able to create a European cloud player,” he said.
Ezzat called for a balanced approach to digital sovereignty. “Remember, sovereignty is not one monolithic thing. It’s not ‘either we have or we don’t have,’” he said.
European countries are already positioned to ensure data, operational, and regulatory sovereignty, said Ezzat. Technological sovereignty is more challenging, however. “There are four layers, and we can control three out of the four,” he said. “On the technology side, we’re going to have to make the compromise while we’re trying, at the same time, to build our own stack in some places,” he said.
Technological sovereignty spans several layers, said Mati Staniszewski, co-founder and CEO of Polish voice AI startup ElevenLabs, during the panel discussion. This includes energy and compute, as well as foundational models and how these models are applied in production by companies and governments.
He said that partnering with global providers of foundational models makes sense, while European firms can compete and “flourish” further up the stack by focusing on data and AI applications that sit on top of these models.
In Europe, the topic of sovereignty can be “very emotional — maybe too emotional,” said SAP CEO Christian Klein, also speaking during the panel discussion.
He said some dependence on US hardware is unavoidable, but the ability to switch infrastructure providers limits the risk. Sovereignty efforts should then focus on the data layer as a priority, he said.
“I can port any ERP from one infrastructure to another in four weeks. I cannot port a customer from SAP supply chain software running mission-critical manufacturing in four weeks from one system to another,” said Klein.
“We need to spend more time on having better access to data and really playing a game which the US and China have not yet played.”
More on the digital sovereignty push in Europe:
- Europe votes to tackle deep dependence on US tech in sovereignty drive
- Global uncertainty is reshaping cloud strategies in Europe
- EU looks to bolster its open-source sector to counter US cloud dominance
- Nadella redefines ‘sovereignty’ for the AI era — analysts call it smart, self‑serving
- AWS European cloud service launch raises questions over sovereignty
AI needs a course correction, say World Economic Forum speakers
Discussions around artificial intelligence dominated the 2026 World Economic Forum meeting in Davos, Switzerland. Prognosticators said the situation may get worse before it improves.
Top executives talked about improved productivity and economic impact with advances in finance, healthcare, and other sectors. But others noted concerns about the unchecked race to superintelligence, warning that AI’s illusions could move society in the wrong direction. Additionally, AI will cost jobs, constrain resources, create technical issues, and raise regulatory concerns, they said.
“AI is in a very primitive age. We have a lot to do,” said Eric Xing, president at Mohamed bin Zayed University of AI (MBZUAI), during a WEF panel discussion.
Other panelists said AI is being designed in the image of human intelligence, which is not the point of AI. Intelligence can be fallible, and AI should not be an extension of humans, they said.
Even the most intelligent beings can be deluded, said Yuval Harari, a well-known Israeli author and historian.
“The lesson from history about intelligence: you don’t need a lot of intelligence to change the world and potentially cause havoc. You can change the world with relatively little intelligence,” Harari said, adding that he was not referring to any particular person.
There could also be technical issues, such as one machine going down and taking the entire system down with it, MBZUAI’s Xing said. “Performance-wise, there are not enough checkpoints to control and visualize and understand risky points,” he said.
AI vendors including Tesla, Nvidia, and Microsoft were the most active proponents of AI at WEF. These companies have made AI abundant by investing billions in infrastructure, with trillions more committed to data centers and chips.
But the infrastructure buildout is outpacing the available energy and needs to start producing results, said Microsoft CEO Satya Nadella during an on-stage interview at WEF.
“We will quickly lose the social permission to take energy, a scarce resource, and use it to generate these tokens if they’re not improving health outcomes, education outcomes, public sector efficiency, private sector competitiveness,” Nadella said.
Meanwhile, AI hype has boosted the valuations of AI startups by billions of dollars, which has created the fear of an AI bubble.
There is concern that potential adjustments in tech valuations and tariffs could hurt economies, said Christian Keller, Head of Economics Research at Barclays Investment Bank, in a YouTube discussion posted by the WEF ahead of the conference. A majority of chief economists believe the AI bubble could burst as AI-related stocks start declining later this year, the WEF said in its January 2026 Chief Economists Report.
But there are credible arguments against viewing the AI boom as a bubble. “Unlike the dot-com era, today’s leading AI firms are already highly profitable, with strong earnings growth underpinning rising share prices and significant real investment in data centers and infrastructure,” the WEF report said.
Nvidia CEO Jensen Huang said there is no AI bubble as the demand and spot pricing for its GPUs are through the roof. “The AI bubble comes about because the investments are large… We have to build the infrastructure necessary for all layers of AI above it,” Huang said, adding, “the opportunity is really quite extraordinary.”
But Ashwini Vaishnaw, India’s minister of electronics and information technology, threw cold water on a massive, trillion-dollar AI infrastructure buildout to advance economies. Large genAI models don’t give countries an AI advantage, as small models can do 95% of the work while consuming a fraction of the power, Vaishnaw said during a panel discussion at WEF.
The return on investment will come down to “deploying the lowest cost solution to get the highest possible return,” Vaishnaw said.
Instead, companies heavily investing in AI infrastructure and massive models face tremendous financial risk, Vaishnaw said, taking indirect aim at the United States. “The people who are creating those large models might go bust in the coming years. You never know, they might go bankrupt in the coming years,” he said.
Amid the discussions, the WEF made clear which AI megatrends to chase in 2026, said Steven Dickens, principal analyst at Hyperframe Research.
AI is moving from the ‘think and write’ stage to the ‘seeing and doing’ stage as it expands into more industries such as healthcare, manufacturing, and retail, Dickens said. “This means more of the general workforce will have AI impact their lives on a daily basis,” he said.
AI is a “path to abundance for all,” said Tesla CEO Elon Musk in an on-stage discussion at WEF.
“My prediction is… we’ll make so many robots and AI that they will saturate all human needs,” he said.
At the same time, humans need to be careful with AI, he noted. “We don’t want to find ourselves in a James Cameron movie. We don’t want to be in Terminator,” Musk said.
Related reading:
Europe votes to tackle deep dependence on US tech in sovereignty drive
European lawmakers on Thursday adopted a comprehensive report on technological sovereignty and digital infrastructure that directs the European Commission to reduce the bloc’s heavy reliance on foreign technology providers across semiconductors, cloud infrastructure, software, and AI systems.
The report passed 471-68 in the full Parliament, with 77% voting in favor and support from major parties including the European People’s Party, Social Democrats, Liberals, and Greens. While non-binding, it directs the European Commission to map critical technology dependencies across the board, then develop policies reducing reliance on foreign providers.
The vote comes as geopolitical tensions drive technology strategy changes across European enterprises.
According to the parliamentary document, the EU relies on non-EU countries for over 80% of digital products, services, infrastructure, and intellectual property, a dependency analysts say will require a decade-long transformation to address.
Broad technology dependencies
The depth of European reliance on foreign technology providers varies across sectors but remains substantial throughout the stack. In cloud infrastructure alone, Amazon, Microsoft, and Google command 70% of the European market, while local providers including SAP, Deutsche Telekom, and OVHcloud collectively hold just 15%.
However, cloud represents just one dimension of the technology sovereignty challenge. The report addresses the entire digital stack, from semiconductor supply chains to AI model development.
“Recent geopolitical tensions show that the issue of Europe’s digital sovereignty is of the utmost importance,” Michał Kobosko, the Renew Europe MEP who negotiated the report text, said in a statement. “If we do not act now to reduce Europe’s technological dependence on foreign actors, we run the risk of becoming a digital colony.”
The report calls for developing a “Eurostack,” a foundational layer of European public digital infrastructure spanning semiconductors, cloud infrastructure, software, and AI systems built on open standards. It also advocates for “Open Source first” policies in government procurement.
Dario Maisto, senior analyst at Forrester, noted the Commission’s recent momentum: “In a ‘Call for Evidence’ published two weeks ago, the European Commission stated that the EU’s reliance on non-European tech suppliers has become a strategic liability. This activism at this level is unprecedented.” The Commission’s open-source initiative, announced earlier this month, focuses on wider adoption in public and private sectors.
Decade-long transformation
While the parliamentary vote signals political commitment to reducing technology dependencies, analysts warn that the shift will require sustained effort over many years. Enterprise response to these dependencies is already evident: 61% of Western European CIOs and IT leaders plan to increase reliance on local or regional cloud providers due to geopolitical factors, while 53% said these factors will restrict future use of global providers, according to a November 2025 Gartner survey of 241 technology leaders.
“The subject of sovereignty used to be dominated by data residency because the main driver was data protection,” said Nader Henein, VP analyst at Gartner. “Due to geopolitical tensions, the driver has shifted to reducing foreign digital dependency across the entire technology stack. European CIOs are now tasked with redesigning their approach to semiconductors, cloud, software, and AI, upending two decades of established strategy. It’s not going to be easy, it’s not going to be cheap, and it’s going to span multiple generations of CIOs.”
When asked whether European enterprises will see viable sovereign alternatives across core technology areas, Henein said: “The answer is yes, but the time horizon is potentially more than a decade. Europe has been supporting US technology providers through licensing agreements for the better part of the last two decades. Reversing that trend will not happen in one or two years.”
Sanchit Vir Gogia, chief analyst at Greyhound Research, emphasized that while the vote represents “arguably the most comprehensive political signal yet that Europe no longer sees digital dependency as tolerable,” it remains “not legislation, not procurement reform, not enforceable. Not yet.”
For enterprise IT leaders, Gogia said digital sovereignty must be defined by operational control rather than hosting location. He outlined five critical controls: jurisdiction, key management, identity governance, operational command, and reversibility. “If data is in Europe but the keys are not, you do not have sovereignty,” he said.
Can procurement shift market dynamics?
A key question is whether the report’s proposed preferential procurement policies can actually change market realities, given the massive scale advantages of established technology providers.
Gogia said procurement policies “can influence the market, but only in domains where public sector demand is large, coordinated, and backed by providers who can meet sovereignty thresholds without compromising capability.” However, he warned that success requires member state alignment: “If France certifies a sovereign cloud and Germany refuses to recognize it, we enter patchwork territory.”
Maisto noted market convergence across the technology landscape. “US technology providers are getting closer to the sovereignty needs of their clients, while European players are granting more interoperability. In the short term, we are not going to see complete migrations away from established providers. This will be revolutionary long-term change that happens incrementally, workload by workload,” Maisto said.
An IDC survey found that 64% of European organizations have adopted risk mitigation approaches to either hold or migrate GDPR-governed data to datacenters in Europe, while 69% agree that digital sovereignty initiatives enhance trust. The debate over how sovereign hyperscaler cloud offerings actually are continues among analysts.
Maisto recommended a “Minimum Viable Sovereignty” approach that minimizes change and budget while achieving necessary sovereignty. “Data residency is a false friend of data sovereignty,” he said. “It really depends on the organization’s sovereignty needs.”
Why Apple is the best investment for future AI
The AI industry is moving incredibly fast. It’s almost as though you can close your eyes for ten minutes and wake to find that yet another business-friendly AI tool or service has appeared.
While refreshing, this glut of investment and innovation represents an industry in flux, meaning the most sensible purchasing decisions aren’t yet terribly clear. Which services will stand the test of time? Which will still even be around once investors call in their debt? It’s hard to say.
With one exception.
Why hardware matters
Whatever breeds, brands, and benefits of the current investment-driven AI deluge are still around in a year’s time, the one thing that remains fixed is the need to invest in the best possible kit to run AI.
Sure, you can spend tens of thousands equipping your teams with access to AI services today, but you may be far better off investing in the infrastructure you’ll need to run the AI services that make it through the current competitive glut and still exist tomorrow. It makes sense to give the market, money, and regulations time to bed in so you can make service investment decisions based on a more stable future reality.
Regardless of whether your company is experimenting with AI today or planning for future adoption, it is essential to invest in hardware that will remain capable and relevant as AI technologies evolve. Perhaps that’s why 73% of CIOs say Macs are already in use to run AI in the enterprise.
Apple, the best kit for AI
With that in mind, what is the best equipment for AI? Right now, on a cost/power/performance/TCO basis, the best kit for AI comes from Apple.
iPhones, iPads, and Macs all make use of industry-leading processors, have operating systems that (aside from Liquid Glass) employees already love, require less expensive memory to deliver the same computational impact, and even offer their own on-device, private-by-design AI services to help your people get your business done.
On a TCO basis, support costs are lower and usable lives are longer, and once it comes time to upgrade, Apple’s products still fetch good prices in rebates or on second-user markets.
Even the initial purchase price looks increasingly attractive as memory and component cost increases drive other PC brands to raise prices.
For what you and your employees get, the initial cost of Apple hardware tells its own compelling story, one that’s reflected on the ground as Mac sales continue to increase at rates exceeding the industry average.
All of these points are borne out by statistics, market forecasts, employee-choice decisions, and the grim reality of running AI on a PC. Even an iPad Air can run iterations of Google’s Gemma 3n generative AI locally and on device with help from Locally AI.
Can that really be said for most PCs in that price range?
Massive talent
Underpinning all of this is the fact that Apple Silicon was designed with AI in mind. The chips have built-in hardware acceleration for specific tasks, boosting inference and training.
They are power efficient to handle energy-intensive AI workloads, and they have the Neural Engine which accelerates ML tasks — and all these features are supported by developer tools such as Core ML.
You just need to look at the Geekbench data to verify what this means. At 133 trillion operations per second (TOPS), the M5 chip delivers 12x the neural performance of the original M1 processor, which blew everyone away when it appeared. Apple’s systems are top of their class, and where faster processors do exist, they tend to be vampiric, consuming vast quantities of energy and money to feed their computational output.
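As a rough sanity check on that 12x figure: the article gives only the M5’s 133 TOPS, so the 11 TOPS rating assumed below for the original M1’s Neural Engine is a commonly cited spec, not a number from the piece.

```python
# Rough sanity check of the claimed M5-vs-M1 Neural Engine speedup.
# The 133 TOPS figure for the M5 comes from the article; the 11 TOPS
# figure for the original M1 Neural Engine is an assumed, commonly
# cited spec sheet number.
m1_tops = 11    # trillion operations per second (assumed M1 spec)
m5_tops = 133   # trillion operations per second (cited in the article)

speedup = m5_tops / m1_tops
print(f"M5 is roughly {speedup:.1f}x the M1")  # roughly 12.1x
```

Under that assumption, the ratio lands at about 12, consistent with the claim in the text.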
You don’t really need an AI to draw inferences from any of the above. Any business purchaser thinking about how to capitalize on the evolving future of AI should recognize that today is the time to invest in the hardware that will run tomorrow’s AI services — particularly as the operating system in play is already positioned as a do-it-all application layer from which to run all manner of models.
Apple, the AI platform
What this means, at least from my perch, is that all of these arguments seem to coalesce into an Apple purchasing decision. A decision which, for professional users, at least, may see another big boost next week, when many anticipate new pro Macs may ship along with a new suite of pro apps. If — or, let’s face it, when — they do, these Macs will deliver such extreme computational performance that most PCs will have nothing left to do but grab a guitar and gently weep.
You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.
The best Android feature you (probably) aren’t using
Here in the land of Android, we’ve got an almost comical kind of first-world problem:
So many new features fly our way so frickin’ often that it’s all too easy to overlook something interesting — or maybe just try it out a few times and then completely forget to keep using it.
That latter path is exactly what happened to me with an exceptionally useful and off-the-beaten-path Android experience-enhancer. It’s one of the simplest ways to improve your days and give yourself a much more pleasant path for ingesting info on your phone. And yet, it’s so out of sight — and up to you to dig up and activate — that it almost seems deliberately designed to be forgotten.
Having just remembered and gotten myself back in the habit of using this feature this week, though, lemme tell ya: It is well worth your while to dust it off, discover or rediscover it for yourself, and make it a key part of your 2026 Android adventure.
Lemme show ya why.
[Oh, hey: Want even more advanced Android knowledge? Check out my free Android Shortcut Supercourse to learn tons of time-saving tricks.]
Android reading — elevated and enhanced
The feature of which we speak is a snazzy little somethin’ called Android Reading Mode.
Now, I’m not talkin’ about the reading mode option built into the Android Chrome browser. Effective as that can be, it’s limited to articles you’re reading within Chrome — and even within that environment, it’s limited as to when and how it can be summoned.
Few mere mortals even realize it, but in addition to that browser-based option, Google actually has an Android-wide Reading Mode addition. It exists somewhat awkwardly as a standalone app that you’ve gotta go out of your way to install. Once you do, though, it’ll be seamlessly integrated into your system-level software — and you’ll wonder how you ever lived without it.
So what does this digital wonder actually do for you, you might be pondering? It’s simple: Once you’ve got Reading Mode up and running, you can just press your phone’s physical volume-up and volume-down buttons together anytime — with any manner of text-oriented material on your screen, from any app you’re using — and, poof: In the blink of an eye, your phone will pop up a panel that transforms whatever text is in front of you into a nicely formatted, clutter-free presentation that’s not only tolerable but actually even enjoyable to read.
Android Reading Mode — before and after. Need we say more? (JR Raphael, Foundry)
You can customize all sorts of details about how the text looks — its size, its font, the color of the words and the background, even the alignment — to make it as perfectly suited for your personal preferences as possible.
Google’s Android Reading Mode is impressively customizable. (JR Raphael, Foundry)
You can even tap a play button within that Reading Mode pop-up to have your phone read the text aloud, like an on-demand miniature e-book, at any speed you like.
Reading Mode’s underappreciated listening mode — complete with background playback and adjustable speed controls. (JR Raphael, Foundry)
No kidding: I completely forgot to re-enable Reading Mode after my last device change and only just recently realized I’d gotten entirely out of the habit of even thinking about it. And now, I’m mildly obsessed with using it everywhere possible — with ad-laden articles on web pages, unfortunately fonted or colored content I’ve opened within the Google app, even awkwardly formatted emails that I’m squinting to read in my inbox.
Whatever it is, I just hold down those two volume buttons together for a moment, and boom: The text is transformed into a distraction-free, optimally formatted, and completely consistent visual experience — or, if I’m feelin’ saucy, an on-demand audio encounter that I can listen to as I walk, drive, or do a jaunty little jig.
Oh, and a side perk: Unlike the more aggressive browser-based or add-on-enabled ad-blockers out there, this approach (a) allows all the ads within whatever you’re reading to be served — just in a place where you aren’t actually seeing them — and (b) goes a step further by also reformatting the text to eliminate regrettable layouts, font choices, and color selections at the same time. And it doesn’t randomly break pieces of the web as a result of blocking scripts that are actually required for core site functionality, as those script-blocking mechanisms occasionally tend to do. Win-win-win, baby.
So — ready to get this up and running for yourself?
I promise: It’s easy.
30 seconds to superior Android reading
The first step to elevating your Android reading experience is to download the official Google Reading Mode app from the Play Store. (It’s free, and it’s made by Google itself — so you won’t be granting any permissions to any entity that doesn’t already have that same level of access, with the same standard privacy policies you’re accustomed to accepting.)
Once it’s installed, open ‘er up and make your way through the initial setup screens.
One important point: When you get to the step about setting up a Reading Mode shortcut, be sure to select the option for “Volume keys” and not any of the other choices — including the “Accessibility button” option, which would put a permanently present floating button on your screen that’ll inspire you to email me and ask what in the world that button is all about and how you can get rid of it.
You could also opt to go for the Quick Settings shortcut as an alternative, if you want, but the volume key path is really the quickest and easiest way — and it doesn’t create any extra visual clutter (though, on the flip side, it does require you to remember to keep using it without any visual prompting, which can be a challenge).
With that quick ‘n’ easy series of steps behind ye, your work here is officially done. And from this moment onward, you can simply summon Reading Mode whenever you want to transform any text on your screen.
Believe you me: Your brain and your blinkers will both be thanking you.
Get six full days of advanced Android knowledge with my free Android Shortcut Supercourse. You’ll learn tons of time-saving tricks for whatever phone you’re using!
Always disclose how you use AI
AI chatbots have been with us for three years and one month (at least the kind that use large language models (LLMs) to communicate with natural-sounding words). Already, norms are emerging in some professions for users to disclose how they use AI.
For example:
- Organizations such as the International Committee of Medical Journal Editors created policies for disclosing AI use in scientific manuscripts.
- Some US lawyers are required to declare how AI tools are used for content presented in court, and the State Bar of California advises lawyers to tell clients when they intend to use AI during representation.
- Amazon requires book sellers to disclose whether their books are AI-generated.
AI disclosures in these fields are so beneficial that we should all be doing the same thing. So I’m starting a movement for everyone to embrace the norm.
Why we all need AI disclosures
We’re not all so-called “content creators.” But as professionals, we do “create content” and we communicate with “content” — emails, phone calls, voicemail, texts, instant messages, video calls, and slide presentations. Some of us build websites, create charts and graphs, share spreadsheets, and create marketing materials.
All these communication acts and content types can be generated with AI tools.
As I argued in a recent column, the assumption that you’re using AI to create this stuff is growing. (I also had a great conversation with my Superintelligent podcast co-host, Emily Forlini, about disclosure and that conversation really clarified my thinking on the subject.)
In other words, even if you lovingly handcraft a PowerPoint presentation or provide a thoughtful and detailed reply to an email, other people will think you used AI.
There are many reasons to transparently disclose exactly how you’re using the technology.
Since most people will assume you’re using AI to generate everything (and may therefore dismiss your work as low value), disclosing exactly how you use AI signals the quality and value of what you’re communicating. Here’s what I mean:
You can impress others with the quality of your work without AI. If you “do your own work” and don’t use AI to communicate with someone, make that clear: you deserve credit for polished content, well-structured language, clear thinking, and all the rest.
You’re effectively asserting your value in a world where companies are looking to replace you with AI. High performers are much better at their jobs than AI chatbots. By reminding everybody that your content and communication are coming from you — and not the AI tool they would replace you with — you might have a little more job security in a world where so many business leaders believe AI can replace people in the workplace.
You’d be providing leadership in how AI tools can be used effectively. Whether you’re communicating with subordinates, superiors, or peers, you can constantly and subtly help others discover uses for AI and share tools you’ve encountered. By advocating the norm and getting others to disclose their own use of AI, you can learn from them in the same way.
You’ll give others better context about what they’re getting from you. We live in a weird time where you get something from someone and you’re not sure what you’re looking at. Is this just AI slop? Or is it coming from the heart of the person who sent it? By constantly disclosing how you’ve used AI, you give others a sense of context and avoid the disorientation of a world in which so much content and communication is fake and AI generated.
You can mitigate algorithmic biases through external scrutiny. If you do use AI, the tools can insert biases into your communications that you don’t even recognize, so you’ll benefit from others pointing out any issues — and avoid blame for a bias you don’t share with the chatbot.
Finally, you’re demonstrating sensitivity to data privacy. When using AI tools to, for instance, reply to an email, you have to upload the words you’re replying to into a tool that could use those uploads for training AI. If the person who created that content is legally prohibited from sharing data with AI training models, you’d be showing sensitivity toward privacy by disclosing that you did not use any AI tool to read or respond to the email. (That includes data from other people, other companies, and your own company.)
These are just some of the reasons why you should disclose how you use AI in all of your professional content, creation, and communication.
Emerging best practices
The most obvious place to disclose AI use is in your email signature. Most of us operate on informal personal policies; for example, in my case, I never use AI in any way to write or reply to emails. I don’t use the suggestions. I don’t use the tools that Google provides to Gmail users. So it’s helpful for me to note in my email signature that I don’t use AI for email. The people I communicate with can know they’re talking to me — not to an LLM.
Just about every other kind of business communication or content creation can carry a footnote about our general AI policies in the templates we use to produce it. And when we deviate from what’s in the template, we can amend the footnote to clarify exactly how we used AI.
For the same reason that the foods we buy in the supermarket come with lists of ingredients, we should always list the “ingredients” of the communications we send and the content we create, so the “consumers” know what they’re getting.
Obviously, this idea works on the honor system. But those who lie about how they use AI are doing us a favor when we catch them in the act. Such people are not to be trusted, even if their information isn’t a hallucination or problematic in any way.
If or when the disclosure trend grows, non-disclosure will likely be seen as an admission or confession that a person is outsourcing their thinking, creativity and work to a chatbot, and therefore can and should be replaced by one.
The use of AI tools for business is creating a small crisis of confusion. Among the harms and benefits, we can at least provide clarity by telling everyone exactly how we’re using AI.
In short, as we enter the AI age, we also should enter the age of AI disclosure.
AI disclosures: I used macOS Sequoia’s transcription tool, powered by Apple Intelligence, to dictate about 20% of the words in this article with my voice. I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) both as a search engine and to find examples of business communication and disclosure requirements, and I fact-checked this with Kagi Search. I wrote in Lex, which has AI tools; after writing the column, I used Lex’s grammar-checking tools to hunt for typos and errors, and corrected a few capitalization, hyphenation, and word-usage errors it found.
Experts warn: Swarms of AI bots threaten democracy
A group of researchers from Berkeley, Harvard, Oxford, Cambridge, and Yale warn that the rise of AI bots and AI agents could pose a serious threat to democracy.
For example, power-hungry politicians around the world can relatively easily create swarms of AI bots that flood social media and messaging services with propaganda and disinformation.
In this way, they can not only influence election results but also persuade parts of the population to replace parliamentary democracy with an authoritarian regime.
“It is highly conceivable that certain actors will attempt to mobilize virtual armies of LLM-driven agents to disrupt elections and manipulate public opinion — for example, by targeting large numbers of individuals on social media and other electronic media,” says Michael Wooldridge, a professor at Oxford, in a comment to The Guardian.
The researchers’ warning can be read in full in Science.
Related reading:
How much does AI improve work efficiency? Managers and employees disagree
A new survey conducted by AI consulting firm Section among 5,000 white-collar workers in the US, UK, and Canada shows a clear gap between how company management and employees perceive the benefits of AI at work, reports The Wall Street Journal.
Over 40% of C-level executives say the technology saves them more than eight hours per week. At the same time, two-thirds of employees without managerial roles say AI saves them less than two hours per week, or no time at all.
Many employees instead describe AI as stressful and difficult to use correctly, often because AI-generated material must be checked, corrected, or redone.
What’s more, there are few economic effects visible at the moment. In a global CEO survey by PricewaterhouseCoopers with nearly 4,500 participants, only 12% said that AI has so far resulted in both cost savings and increased revenue. Over half see no clear business benefit at all.
Related reading:
Google’s AI search gets support for ‘personal intelligence’
A week ago, Google decided to add “personal intelligence” to Gemini, which gives the AI tool access to your searches, your photos, your YouTube history, and your Gmail emails.
Now TechCrunch reports that US users also have access to personal intelligence in Google’s AI search.
“With personal intelligence, recommendations not only match your interests — they fit seamlessly into your life,” wrote Google Search Director Robby Stein in a blog post.
The idea is that the new feature will help users with everything from restaurant visits to hotel bookings, as the AI already knows your preferences.
To use personal intelligence, you first need to activate the feature. It will also be possible to turn it off if desired.
Related reading:
Workers challenge ‘hidden’ AI hiring tools in class action with major regulatory stakes
Workers are getting fed up with AI-based hiring practices.
A new class action lawsuit filed in California alleges that human candidates are being unfairly profiled by “hidden” AI hiring technologies that “lurk in the background” to collect “sensitive and often inaccurate” information about “unsuspecting” job applicants.
The suit specifically targets Eightfold AI, claiming that tools used by the company should be regulated in the same way as credit report bureaus are via The Fair Credit Reporting Act (FCRA) and state laws based on it.
The case could have broad-reaching implications for the increased use of AI in hiring.
“This lawsuit is a pivot point,” said Sanchit Vir Gogia, chief analyst at Greyhound Research. “It tells us that AI isn’t just being scrutinized for what it does, but for how it does it and whether people even know it’s happening to them.”
Violating the 55-year-old FCRA
The suit was filed in the Superior Court of California by New York City-based law firm Outten & Golden LLP, on behalf of Erin Kistler and Sruti Bhaumik. The plaintiffs claim they were barred from employment on several occasions by companies using AI-based hiring tools.
The class action complaint asserts that Eightfold AI violated federal and state fair credit and consumer reporting acts and unfair competition laws by collecting data on applicants and selling reports to companies for use in employment decision-making. These practices “can have profound consequences” for job-seekers across the US, the lawsuit claims.
Eightfold markets itself as the “world’s largest, self-refreshing source of talent data” and incorporates more than 1.5 billion global data points, including job titles and worker profiles across “every job, profession, [and] industry.” It counts among its customers corporate giants including Microsoft, Morgan Stanley, Starbucks, BNY, PayPal, Chevron, and Bayer.
The suit claims the Santa Clara-based company’s proprietary large language model (LLM) and deep learning-based technology analyze data from public resources including career sites, job boards, and resumé databases such as LinkedIn and Crunchbase. It also culls information from social media profiles, applicant locations, and behind-the-scenes tracking tools. None of these personal data points are ever included in job applications.
AI algorithms then rank a candidate’s “suitability” on a numerical scale of 0 to 5, based on “conclusions, inferences, and assumptions” about their culture fit, projected future career trajectory, and other factors. This method is intended to create a profile of the candidate’s “behavior, attitudes, intelligence, aptitudes, and other characteristics,” according to the lawsuit.
However, these reports are “unreviewable” and “largely invisible” to candidates, who have no opportunity to dispute their contents before they are passed on to hiring managers, the plaintiffs argue. “Lower-ranked candidates are often discarded before a human being ever looks at their application.”
This method of report creation violates longstanding FCRA requirements, and there is no stipulated exemption for AI use, according to the suit.
The FCRA broadly defines consumer reports as any written, oral, or other communication from a consumer reporting agency that includes information on a person to determine their access to credit and insurance, as well as for “employment purposes.” According to the lawsuit, this definition covers reports that contain information on “habits, morals, and life experiences.”
Plaintiffs argue that, while automated screening technology did not exist when the FCRA was established in 1970, lawmakers at the time expressed concern about growing accessibility to consumer information through computer and data-transmission techniques, and that “impersonal blips,” inaccurate data, and analysis by “stolid and unthinking machines” could unfairly bar people from employment.
Thus, the lawsuit argues, agencies like Eightfold must disclose their practices, obtain certifications, and give consumers a mechanism to review and correct reports. “Large-scale decision-making based on opaque information is exactly the kind of harm the statute was designed to address.”
Neither the lawyers for the plaintiffs nor for the defendants responded to requests for comment. The Society for Human Resource Management (SHRM) also declined to comment.
Defensibility becomes the new bar
This lawsuit exposes a “governance failure” and “fundamental accountability gap,” noted Greyhound’s Gogia.
And it’s not the first, nor will it likely be the last; HR company Workday, for instance, is facing a lawsuit alleging that its AI-powered hiring tools make decisions based on race, and also discriminate against older and disabled applicants.
If courts agree that AI evaluations function like credit reports, hiring will be pushed into regulated territory, Gogia noted; this means CIOs must establish clarity and set rules around notification, transparency, audit rights, and contestability.
“If your hiring tools operate like decision engines, they need to be governed like decision infrastructure,” he said. And when they influence employment decisions, enterprises will have to prove they’ve done their homework. This means showing the logic behind a model, understanding data provenance, and being able to explain why an applicant was rejected and the processes they have in place to correct bad calls.
“Defensibility will become the new bar,” said Gogia.
Where AI hiring helps, where it hurts
That’s not to say that AI can’t be valuable in hiring; many real-world examples have proven that it can. The Human Resources Professionals Association, for one, points to successful use of AI in initial talent sourcing, screening, and assessment, while AI scribes can quietly take notes, helping recruiters focus more intently on candidate discussions.
Gogia agreed that AI can filter and rank large applicant pools, automate repetitive HR tasks, and identify overlooked candidates within internal databases. This means hiring teams can move faster, hone their focus, be more consistent, and reduce friction.
“But the moment AI moves into judgement territory, things get messy,” he emphasized. Scoring personality traits, predicting future roles, or evaluating the quality of a candidate’s education are all “subjective inferences dressed up as mathematical objectivity.”
Gogia advises clients to insist on human-readable evidence from vendors, including logs, bias audits, and disclosures about model updates. They should ask questions like: What did the model evaluate? Why did it rank one candidate higher over another? What can the hiring manager say if asked to justify that outcome?
The answers to those questions can lead to process changes. One of Greyhound’s European manufacturing clients, for instance, redesigned its hiring pipeline so that managers had to log a rationale at every decision point, even if AI had already created a shortlist. This helped improve the audit trail, catch errors, and teach the team to “treat AI as input, not verdict,” Gogia noted. Another client slowed its final screening process for senior hires because it couldn’t defend the decisions AI was influencing and realized the system wouldn’t survive scrutiny.
“CIOs, CHROs, legal, risk — all need to co-own this now,” said Gogia. “That starts by restoring the human’s role as an accountable actor, not just a passive observer. The future of hiring tech is human with machine, governed from day one.”
Spotify lawsuit behind shutdown of pirate library domains
A lawsuit filed by Spotify and several major record labels was behind the shutdown of several of Anna’s Archive’s domains earlier this year. This is according to recently published documents from a federal court in the US, reports Torrentfreak.
The background is that in December 2025, Anna’s Archive stated that it had backed up Spotify and planned to release large amounts of the collected data. According to the lawsuit, the archive circumvented Spotify’s DRM and scraped metadata and audio files linked to hundreds of millions of songs.
On December 29, Spotify, together with companies such as Universal, Sony, and Warner, filed a sealed lawsuit in New York. Shortly thereafter, the court issued a temporary order targeting domain registrars, web hosts, and other intermediaries, which led to the shutdown of Anna’s Archive’s .org and .se domains in early January. Among the recipients of the order was the Swedish Internet Foundation.
In mid-January, a broader injunction followed, which also covers operators such as Cloudflare and requires them to block access to the copyrighted material. Shortly thereafter, Anna’s Archive’s special section for Spotify downloads was removed and marked as unavailable. The legal process is still ongoing.
This article originally appeared on ComputerSweden.
Anthropic’s Claude AI gets a new constitution embedding safety and ethics
Anthropic has completely overhauled the “Claude constitution”, a document that sets out the ethical parameters governing its AI model’s reasoning and behavior.
Launched at the World Economic Forum’s Davos Summit, the new constitution’s principles are that Claude should be “broadly safe” (not undermining human oversight), “broadly ethical” (honest, avoiding inappropriate, dangerous, or harmful actions), and “genuinely helpful” (benefitting its users), as well as being “compliant with Anthropic’s guidelines.”
According to Anthropic, the constitution is already being used in Claude’s model training, making it fundamental to its process of reasoning.
Claude’s first constitution appeared in May 2023, a modest 2,700-word document that borrowed heavily and openly from the UN Universal Declaration of Human Rights and Apple’s terms of service.
While not completely abandoning those sources, the 2026 Claude constitution moves away from the focus on “standalone principles” in favor of a more philosophical approach based on understanding not simply what is important, but why.
“We’ve come to believe that a different approach is necessary. If we want models to exercise good judgment across a wide range of novel situations, they need to be able to generalize — to apply broad principles rather than mechanically following specific rules,” explained Anthropic.
The constitution will help Claude to move from simply following a limited checklist of approved possibilities to one based on deeper reasoning. So, for example, instead of keeping data private because this agrees with a rule, the constitution will help it understand the ethical framework in which privacy is important.
The effect of this added complexity is length, with the new version expanding dramatically to 84 pages and 23,000 words. If this sounds long-winded, the reasoning is that the document has been written to be ingested primarily by Claude itself. “It [the constitution] needs to work both as a statement of abstract ideals and a useful artifact for training,” the announcement said.
It also noted that the document is currently written for mainline, general access Claude models, and that specialized models may not fully fit, but said that the company will “continue to evaluate” how to make them meet the constitution’s core objectives. In addition, it promised to be open about missteps “in which model behavior comes apart from our vision.”
Intriguingly, Anthropic has released Claude’s constitution under a Creative Commons CC0 1.0 Deed, which means it can be used freely by other developers in their models.
Don’t be evil
The context for the update is rising skepticism about the reliability, ethics, and safety of large proprietary LLMs. From the start, Anthropic, which was founded in 2021 by former OpenAI employees worried about the latter’s direction, has sought to set itself apart as taking a different approach.
More contentious is the constitution’s oblique reference to the debate over AI consciousness. “Claude’s moral status is deeply uncertain. We believe that the moral status of AI models is a serious question worth considering. This view is not unique to us: some of the most eminent philosophers on the theory of mind take this question very seriously,” it states on page 68.
In August, Anthropic introduced a new feature to its most advanced Claude Opus 4 and 4.1 models it said would end a conversation if a user repeatedly tried to push harmful or illegal content, as a mode of self-protection. And in November, an Anthropic research paper suggested that the same Opus 4 and 4.1 models showed “some degree” of introspection, reasoning about past actions in an almost human-like way.
In fact, LLMs are statistical models, not conscious entities, countered Satyam Dhar, an AI engineer with technology startup Galileo.
“Framing them as moral actors risks distracting us from the real issue, which is human accountability. Ethics in AI should focus on who designs, deploys, validates, and relies on these systems,” he said.
“An AI ‘constitution’ can be useful as a design constraint, but it doesn’t resolve the underlying ethical risk,” he added. “No philosophical framework embedded in a model can replace human judgment, governance, and oversight. Ethics emerge from how systems are used, not from abstract principles encoded in weights.”
This article originally appeared on CIO.com.
Nadella warns of AI bubble unless more people use the technology
Microsoft CEO Satya Nadella warns that the AI boom could become a speculative bubble if the technology does not gain wider acceptance outside of large technology companies and wealthy economies.
“For this not to be a bubble by definition, the benefits need to be spread much more evenly,” Nadella said at the World Economic Forum in Davos, according to the Financial Times.
Nadella says that he is confident that AI will transform several industries.
“I am much more convinced that this is a technology that will actually build on the foundation of cloud and mobile platforms, spread faster, bend the productivity curve, and create local surpluses and economic growth around the world,” said Nadella.
He also emphasized that the future is unlikely to belong to a single dominant AI model. According to Nadella, the core principle will be for companies to combine different models with their own data and model distillation to create smaller and cheaper solutions.
This article originally appeared on ComputerSweden.
More Microsoft news: