RSS Aggregator
Sensors placed in underwear have revealed that people pass gas twice as often as they admit
Nuclear disaster, a zombie plague, even an invasion from space. We pick the best post-apocalyptic games
A driver for the Intel 440BX, unfixed for 19 years, is being removed from Linux; Godot is preparing ray tracing via Vulkan
The Windows 11 beta has bags under its eyes. It supports Emoji 16 and improves webcam handling
One threat actor responsible for 83% of recent Ivanti RCE attacks
What Is SELinux? A Practical Take for Linux Admins
Snail mail letters target Trezor and Ledger users in crypto-theft attacks
This Week’s Awesome Tech Stories From Around the Web (Through February 14)
Aurora’s Driverless Trucks Can Now Travel Farther Distances Faster Than Human Drivers
Kirsten Korosec | TechCrunch
“Aurora’s self-driving trucks can now travel nonstop on a 1,000-mile route between Fort Worth and Phoenix—exceeding what a human driver can legally accomplish. The distance, and the time it takes to travel it, offers up positive financial implications for Aurora—and any other company hoping to commercialize self-driving semitrucks.”
Computing: OpenAI Sidesteps Nvidia With Unusually Fast Coding Model on Plate-Sized Chips
Benj Edwards | Ars Technica
“The model delivers code at more than 1,000 tokens (chunks of data) per second, which is reported to be roughly 15 times faster than its predecessor. To compare, Anthropic’s Claude Opus 4.6 in its new premium-priced fast mode reaches about 2.5 times its standard speed of 68.2 tokens per second, although it is a larger and more capable model than Spark.”
Energy: This State’s Power Prices Are Plummeting as It Nears 100% Renewables
Alice Klein | New Scientist ($)
“The independent Australian Energy Market Operator’s (AEMO) latest report shows that the average wholesale electricity price in South Australia fell by 30 per cent in the final quarter of 2025, compared with a year earlier. As a result, the state had the lowest price in Australia, along with Victoria, which has the second highest share of wind and solar energy in the nation.”
Biotechnology: Gene Editing That Spreads Within the Body Could Cure More Diseases
Michael Le Page | New Scientist ($)
“The idea is that each cell in the body that receives the initial delivery will make lots of copies of the gene-editing machinery and pass most of them on to its neighbors, amplifying the effect. This means that disease-correcting changes could be made to the DNA of more cells.”
Future: The First Signs of Burnout Are Coming From the People Who Embrace AI the Most
Connie Loizos | TechCrunch
“The tools work for you, you work less hard, everybody wins. But a new study published in Harvard Business Review follows that premise to its actual conclusion, and what it finds there isn’t a productivity revolution. It finds companies are at risk of becoming burnout machines.”
Artificial Intelligence: ALS Stole This Musician’s Voice. AI Let Him Sing Again.
Jessica Hamzelou | MIT Technology Review ($)
“[ALS patient Patrick Darling] was able to re-create his lost voice using an AI tool trained on snippets of old audio recordings. Another AI tool has enabled him to use this ‘voice clone’ to compose new songs. Darling is able to make music again.”
Artificial Intelligence: Chatbots Make Terrible Doctors, New Study Finds
Samantha Cole | 404 Media
“When the researchers tested the LLMs without involving users by providing the models with the full text of each clinical scenario, the models correctly identified conditions in 94.9 percent of cases. But when talking to the participants about those same conditions, the LLMs identified relevant conditions in fewer than 34.5 percent of cases.”
Computing: LEDs Enter the Nanoscale
Rahul Rao | IEEE Spectrum
“MicroLEDs, with pixels just micrometers across, have long been a byword in the display world. Now, microLED-makers have begun shrinking their creations into the uncharted nano realm. …They leave much to be desired in their efficiency—but one day, nanoLEDs could power ultra-high-resolution virtual reality displays and high-bandwidth on-chip photonics.”
Future: Leading AI Expert Delays Timeline for Its Possible Destruction of Humanity
Aisha Down | The Guardian
“A leading artificial intelligence expert has rolled back his timeline for AI doom, saying it will take longer than he initially predicted for AI systems to be able to code autonomously and thus speed their own development toward superintelligence [and doom for humanity].”
Biotechnology: CAR T-Cell Therapy May Slow Neurodegenerative Conditions Like ALS
Michael Le Page | New Scientist ($)
“Genetically engineered immune cells known as CAR-T cells might be able to slow the progress of the neurodegenerative condition amyotrophic lateral sclerosis (ALS) by killing off rogue immune cells in the brain. ‘It’s not a way to cure the disease,’ says Davide Trotti at the Jefferson Weinberg ALS Center in Pennsylvania. ‘The goal is slowing down the disease.'”
Computing: Meta Plans to Add Facial Recognition Technology to Its Smart Glasses
Kashmir Hill, Kalley Huang, and Mike Isaac | The New York Times ($)
“Five years ago, Facebook shut down the facial recognition system for tagging people in photos on its social network, saying it wanted to find ‘the right balance’ for a technology that raises privacy and legal concerns. Now it wants to bring facial recognition back. …The feature, internally called ‘Name Tag,’ would let wearers of smart glasses identify people and get information about them via Meta’s artificial intelligence assistant.”
Future: I Tried RentAHuman, Where AI Agents Hired Me to Hype Their AI Startups
Reece Rogers | Wired ($)
“At its core, RentAHuman is an extension of the circular AI hype machine, an ouroboros of eternal self-promotion and sketchy motivations. For now, the bots don’t seem to have what it takes to be my boss, even when it comes to gig work, and I’m absolutely OK with that.”
Artificial Intelligence: AI Is Getting Scary Good at Making Predictions
Ross Andersen | The Atlantic ($)
“At first, the bots didn’t fare too well: At the end of 2024, no AI had even managed to place 100th in one of the major [forecasting] competitions. But they have since vaulted up the leaderboards. AIs have already proved that they can make superhuman predictions within the bounded context of a board game, but they may soon be better than us at divining the future of our entire messy, contingent world.”
Artificial Intelligence: Meet the One Woman Anthropic Trusts to Teach AI Morals
Berber Jin and Ellen Gamerman | The Wall Street Journal ($)
“As the resident philosopher of the tech company Anthropic, [Amanda] Askell spends her days learning Claude’s reasoning patterns and talking to the AI model, building its personality and addressing its misfires with prompts that can run longer than 100 pages. The aim is to endow Claude with a sense of morality—a digital soul that guides the millions of conversations it has with people every week.”
Space: This Startup Thinks It Can Make Rocket Fuel From Water. Stop Laughing
Noah Shachtman | Wired ($)
“It’s an idea that’s been around since the Apollo era and has been touted in recent years by the likes of former NASA administrator Bill Nelson and SpaceX’s Elon Musk. But here’s the thing: No one has ever successfully turned water into rocket fuel, not for a spaceship of any significant size. A startup called General Galactic, led by a pair of twentysomething engineers, is aiming to be the first.”
The post This Week’s Awesome Tech Stories From Around the Web (Through February 14) appeared first on SingularityHub.
Drivers and cyclists misunderstand each other on the road. Unclear hand signals needlessly increase the risk of accidents
Europe is launching the world's most advanced chip production line. It is expected to set the industry's next milestones
Kratos returns to Greece. The new installment from the war god's youth launches today with Czech localization, and remakes of the God of War trilogy are also in the works
ASCII editor MonoSketch
G'MIC 3.7.0
Two cups of coffee or tea a day protect your brain. According to a study, they can reduce the risk of developing dementia by up to a fifth
Miniaturization at home. A YouTuber hid a working wireless camera inside a nut
Only 3% of T-Mobile customers use eSIM so far. They can transfer their number to another phone even without a QR code
An American anti-drone buggy in Ukraine. It can even handle a helicopter, but it gets only two tries
Valentine's Day, or a day of love for free software
AI will likely shut down critical infrastructure on its own, no attackers required
With a new Gartner report suggesting that AI problems will “shut down national critical infrastructure” in a major country by 2028, CIOs need to rethink industrial controls that are very quickly being turned over to autonomous agents.
Gartner embraces the term Cyber Physical Systems (CPS) for these technologies, which it defines as “engineered systems that orchestrate sensing, computation, control, networking and analytics to interact with the physical world (including humans). CPS is the umbrella term to encompass operational technology (OT), industrial control systems (ICS), industrial automation and control systems (IACS), Industrial Internet of Things (IIoT), robots, drones, or Industry 4.0.”
The issue it cites is not so much one of AI systems making mistakes along the lines of hallucinations, although that is certainly a concern, but that the systems won’t notice subtle changes that experienced operational managers would detect. And when it comes to directly controlling critical infrastructure, relatively small errors can mushroom into disasters.
“The next great infrastructure failure may not be caused by hackers or natural disasters, but rather by a well-intentioned engineer, a flawed update script, or a misplaced decimal,” said Wam Voster, VP Analyst at Gartner. “A secure ‘kill-switch’ or override mode accessible only to authorized operators is essential for safeguarding national infrastructure from unintended shutdowns caused by an AI misconfiguration.”
“Modern AI models are so complex they often resemble black boxes. Even developers cannot always predict how small configuration changes will impact the emergent behavior of the model. The more opaque these systems become, the greater the risk posed by misconfiguration. Hence, it is even more important that humans can intervene when needed,” Voster added.
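The operator-only override mode Voster describes can be sketched in a few lines. The following Python sketch is purely illustrative; the `OverrideGate` class and the operator names are invented here, not taken from any product Gartner cites:

```python
import threading

class OverrideGate:
    """Gate autonomous control actions behind a human-operated kill switch.

    While an authorized operator has the switch engaged, all further
    autonomous actions are rejected until the switch is released.
    """

    def __init__(self, authorized_operators):
        self._authorized = set(authorized_operators)
        self._engaged = threading.Event()  # thread-safe on/off flag

    def engage(self, operator):
        # Only listed operators may trip the kill switch.
        if operator not in self._authorized:
            raise PermissionError(f"{operator} is not authorized")
        self._engaged.set()

    def release(self, operator):
        if operator not in self._authorized:
            raise PermissionError(f"{operator} is not authorized")
        self._engaged.clear()

    def execute(self, action):
        # Autonomous actions pass through only while the switch is off.
        if self._engaged.is_set():
            return "blocked: manual override engaged"
        return action()

gate = OverrideGate({"alice"})
gate.execute(lambda: "valve adjusted")   # passes through
gate.engage("alice")
gate.execute(lambda: "valve adjusted")   # rejected until release
```

The essential property is that the gate sits outside the AI model itself, so it keeps working no matter how the model misbehaves.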
Enterprise CIOs and other IT leaders have been aware of industrial AI risks for years, and have had guidance on how to mitigate those critical infrastructure risks. But as autonomous AI's control over these systems has rapidly expanded, so have the dangers.
Matt Morris, founder of Ghostline Strategies, said one challenge with industrial AI controls is that they can be weak at detecting model drift.
“Let’s say I tell it ‘I want you to monitor this pressure valve.’ And then, slowly, the normal readings start to drift over time,” Morris said. Will the system consider that change just background noise, given that it might think all systems change a bit during operations? Or will it know that this is a hint of a potentially massive problem, as an experienced human manager would?
Despite these and other questions, “companies are implementing AI super fast, faster than they realize,” Morris said.
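Morris's pressure-valve scenario is essentially a change-detection problem. A minimal illustration, using a one-sided CUSUM statistic (a standard drift-detection technique; the readings, baseline, and thresholds below are invented), shows how a slow drift that never trips a fixed per-reading alarm can still be caught by accumulating small deviations:

```python
def detect_drift(readings, baseline, slack=0.5, limit=5.0):
    """One-sided CUSUM: accumulate excursions above the baseline.

    Each individual reading may look like background noise, but the
    cumulative sum exposes a sustained upward drift.
    Returns the index at which the alarm fires, or None.
    """
    cusum = 0.0
    for i, x in enumerate(readings):
        # Deviations smaller than `slack` are treated as noise.
        cusum = max(0.0, cusum + (x - baseline) - slack)
        if cusum > limit:
            return i
    return None

# Pressure readings creeping up by 0.2 units per sample: no single
# reading is alarming, but the trend trips the CUSUM alarm.
drifting = [100 + 0.2 * t for t in range(60)]
alarm_at = detect_drift(drifting, baseline=100.0)
```

A steady signal hovering around the baseline never accumulates enough to alarm, which is exactly the distinction Morris asks about: noise versus a hint of a massive problem.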
Industrial AI moving too fast
Flavio Villanustre, CISO for the LexisNexis Risk Solutions Group, said he has also seen indicators that AI might be taking over too much too fast.
“When AI is controlling environment systems or power generators, the combination of complexity and non-deterministic behaviors can create consequences that can be quite dire,” he said. Boards and CEOs think, “’AI is going to give me this productivity boost and reduce my costs.’ But the risks that they are acquiring can be far larger than the potential gains.”
Villanustre fears that boards and CEOs may not apply the brakes on industrial autonomous AI until after their enterprise suffers a catastrophe. “[But] I don’t think that [board members] are evil, just incredibly reckless,” he said.
Cybersecurity consultant Brian Levine, executive director of FormerGov, agreed that the risks are extreme: extremely dangerous and extremely likely.
“Critical infrastructure runs on brittle layers of automation stitched together over decades. Add autonomous AI agents on top of that, and you’ve built a Jenga tower in a hurricane,” Levine said. “It is helpful for organizations, especially those operating critical infrastructure, to adopt and measure their maturity, using respected frameworks for AI safety and security.”
Bob Wilson, cybersecurity advisor at the Info-Tech Research Group, also worries about the near inevitability of a serious industrial AI mishap.
“The plausibility of a disaster that results from a bad AI decision is quite strong. With AI becoming embedded in enterprise strategies faster than governance frameworks can keep up, AI systems are advancing faster and outpacing risk controls,” Wilson said. “We can see the leading indicators of rapid AI deployment and limited governance increase potential exposure, and those indicators justify investments in governance and operational controls.”
Wilson noted that companies must explore new ways of looking at industrial AI controls.
“AI can almost be seen as an insider, and governance should be in place to manage that AI entity as a potential accidental insider threat,” he said. “Prevention in this case begins with tight governance over who can make changes to AI settings and configurations, how those changes are tested, how the rollout of those changes is managed, and how quickly those changes can be rolled back. We do see that this kind of risk is amplified by a widening gap between AI adoption and governance maturity, where organizations deploy AI faster than they establish the controls needed to manage its operational and safety impact.”
Thus, he said, companies should set up a business risk program with a governing body that defines and manages those risks, monitoring AI for behavior changes.
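Wilson's checklist (who can change AI settings, how changes are tested and rolled out, and how quickly they can be rolled back) amounts to versioned, attributed configuration management. A hypothetical sketch, with all names invented:

```python
from dataclasses import dataclass, field

@dataclass
class ConfigStore:
    """Versioned store for AI settings: every change is attributed to an
    authorized operator, and the previous version can be restored in one step."""
    authorized: set
    history: list = field(default_factory=list)  # (operator, config) pairs

    def apply(self, operator, config):
        # Governance gate: only listed operators may change AI settings.
        if operator not in self.authorized:
            raise PermissionError(f"{operator} may not change AI settings")
        self.history.append((operator, config))

    @property
    def current(self):
        return self.history[-1][1] if self.history else None

    def rollback(self, operator):
        # Fast, audited rollback to the previous configuration.
        if operator not in self.authorized:
            raise PermissionError(f"{operator} may not roll back AI settings")
        if len(self.history) < 2:
            raise RuntimeError("nothing to roll back")
        self.history.pop()
        return self.current
```

Keeping the full history rather than only the latest config is what makes the "how quickly can it be rolled back" question answerable at all.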
Reframe how AI is managed
Sanchit Vir Gogia, chief analyst at Greyhound Research, said addressing this problem requires executives to first reframe the structural questions.
“Most enterprises still talk about AI inside operational environments as if it were an analytics layer, something clever sitting on top of infrastructure. That framing is already outdated,” he said. “The moment an AI system influences a physical process, even indirectly, it stops being an analytics tool, it becomes part of the control system. And once it becomes part of the control system, it inherits the responsibilities of safety engineering.”
He noted that the consequences of misconfiguration in cyber physical environments differ from those in traditional IT estates, where outages or instability may result.
“In cyber physical environments, misconfiguration interacts with physics. A badly tuned threshold in a predictive model, a configuration tweak that alters sensitivity to anomaly detection, a smoothing algorithm that unintentionally filters weak signals, or a quiet shift in telemetry scaling can all change how the system behaves,” he said. “Not catastrophically at first. Subtly. And in tightly coupled infrastructure, subtle is often how cascade begins.”
He added: “Organizations should require explicit articulation of worst-case behavioral scenarios for every AI-enabled operational component. If demand signals are misinterpreted, what happens? If telemetry shifts gradually, how does sensitivity change? If thresholds are misaligned, what boundary condition prevents runaway behavior? When teams cannot answer these questions clearly, governance maturity is incomplete.”
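One of the failure modes Gogia lists, a smoothing algorithm that quietly filters weak signals, can be demonstrated in a few lines. The numbers below are invented for illustration: a brief anomaly that exceeds a raw-value alarm threshold disappears entirely once an exponential moving average is applied.

```python
def exponential_smooth(values, alpha=0.1):
    """Exponentially weighted moving average with smoothing factor alpha."""
    smoothed, s = [], values[0]
    for v in values:
        s = alpha * v + (1 - alpha) * s
        smoothed.append(s)
    return smoothed

# Steady telemetry with one brief, weak excursion.
telemetry = [10.0] * 20
telemetry[10] = 13.0           # short-lived anomaly

THRESHOLD = 12.0               # the anomaly detector trips above this

raw_alarms = [v > THRESHOLD for v in telemetry]
smoothed_alarms = [v > THRESHOLD for v in exponential_smooth(telemetry)]
# The raw signal trips the detector once; the smoothed signal never does.
```

Nothing here is misconfigured in an obvious way; a perfectly reasonable `alpha` simply absorbs the weak signal, which is the subtlety the quote warns about.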
This article originally appeared on CIO.com.
FTC digs deeper into Microsoft’s bundling and licensing practices
The US Federal Trade Commission (FTC) seems to be doubling down on its investigation of Microsoft and the tech giant’s potentially shady bundling and licensing practices.
According to a Bloomberg report, the federal agency has been issuing civil investigative demands (CIDs) to companies that compete with Microsoft in the business software and cloud computing markets.
CIDs are powerful, subpoena-like mandates used by government agencies to investigate potential violations of civil law, typically before a formal complaint or lawsuit is filed.
According to inside sources, at least a half-dozen companies have received these requests, which ask a range of questions around Microsoft’s licensing and other business practices, the report said. The FTC is also seeking information on Microsoft’s bundling of AI, security, and identity software into other products, including Windows and Office.
This development is the latest in an ongoing, nearly year-and-a-half-long probe into whether the company is illegally monopolizing several markets critical to modern enterprises. It also suggests that the federal government is seeking evidence that Microsoft makes it difficult, more expensive, or nearly impossible for companies to use Windows, Office, or its other products on competitors' cloud services.
“To say MSFT is a serial offender with regard to stretching the limits of anti-trust law would be the understatement of the century,” said Scott Bickley, advisory fellow at Info-Tech Research Group. “Microsoft embodies the mantra of ‘beg forgiveness vs asking permission’ and leverages its scale to force bundled products upon its customer base.”
Licensing and bundling tactics could crowd out competitors
The FTC launched its wide-ranging investigation into Microsoft in November 2024, issuing a CID compelling the company to turn over roughly a decade’s worth of data about its operations (from 2016 to 2025).
The agency is closely examining the tech giant’s age-old practice of bundling its Office productivity and security software in with its cloud services. This could potentially violate antitrust laws if the company is exploiting its dominance in the productivity space to gain unfair advantages in cloud computing and cybersecurity markets.
Notably, the FTC is looking into how Microsoft structures licensing in a way that impedes customers from switching to rival offerings. This would constitute unfair practice and put competitors at a disadvantage.
Microsoft has fought back against the claims, and, following complaints across global markets, made some changes intended to loosen its policies. For instance, recent decisions in the EU forced the unbundling of Teams from the Office suite. However, this “ironically resulted in net higher pricing for EU consumers,” said Info-Tech’s Bickley.
Additionally, the CISPE consortium of European cloud providers reached an agreement with Microsoft in mid-2025; the cloud giant agreed to pay €20 million ($23.7 million today) to smaller cloud providers excluded from offering Microsoft services under a hosted model, and to update its software licensing terms to allow European providers to run Microsoft software on their own platforms at prices equal to Microsoft’s.
However, Bickley pointed out, recent complaints allege that the company has not delivered on this promise.
It’s important to note that these “half-hearted measures” in the EU do not apply to US-based Microsoft customers, he pointed out. Allegations around product tying, notably with Microsoft 365, continue to arise regularly in the US.
For instance, Microsoft’s Listed Providers program does not allow Microsoft on-premises software to be deployed on certain dedicated hosted cloud services, including rivals Amazon, Google, and Alibaba, without mobility rights and Software Assurance (SA), its volume licensing support add-on. Bickley pointed out that Microsoft “strategically” excludes products from its License Mobility program, which allows customers to move workloads to other clouds.
Some of these excluded products and applications include Windows Server, Visual Studio, Windows desktop OS, Microsoft Office, and Microsoft 365. Previously, such products could be deployed in a dedicated cloud environment, but Microsoft changed the rules in October 2019, restricting this option to licenses purchased with SA and mobility rights. Bickley pointed out that this only applies to Listed Providers and excludes traditional outsourcing services.
In other questionable commercial practices, Microsoft also makes the purchase of its Microsoft 365 E5 top-tier subscription plan the “only viable short-term economic choice” compared to cheaper options like Microsoft 365 E3, even where the purchase results in a “material amount of shelfware,” said Bickley.
“Licensing of several security products is obscure, and upon audit, Microsoft frequently forces customers to upgrade their entire suite to E5 in order to attain compliance,” he noted.
Future concerns will likely center around potential bundling or integration of AI services such as Microsoft Copilot, “for which the consumption metrics will be ambiguous and [the services will be] difficult, if not impossible, to disable for IT administrators,” said Bickley.
Relationship with OpenAI
While much of the initial query, and subsequent ones, have focused on licensing and bundling, the FTC is also looking into the company’s relationship with OpenAI, and raising questions about Microsoft’s data centers, capacity constraints, and AI spending and research.
Notably, the tech giant’s initial $1 billion investment in OpenAI has grown into a multi-billion-dollar partnership, with Microsoft rolling out ChatGPT-powered features across its product line in 2023. The FTC is examining whether the relationship is an undisclosed merger that should have been subject to antitrust review.
Further, the federal agency is scrutinizing Microsoft’s alleged decision to scale back its own AI research following the OpenAI investment, potentially reducing competition.
Tactics ‘remarkably the same’
Ultimately, all of this recalls the industry-shaping 1990s US federal investigation into Microsoft’s monopoly of desktop software and web browsers. A federal judge ruled at the time that the company deliberately built the Internet Explorer (IE) browser into Windows to edge out rivals like the now-defunct Netscape.
And, analysts note, it’s an indication that Microsoft hasn’t learned from those past lessons.
“While technology and trends may have evolved since Microsoft’s first anti-trust case in 1998, where they were forced to unbundle IE from Windows OS, their tactics have stayed remarkably the same,” Bickley noted.
This article originally appeared on CIO.com.