RSS Aggregator

Passport to £££: Home Office adds £216M to travel doc contract before a single bid's been placed

The Register - Anti-Virus - 1 May 2026 - 11:15
Start date pushed back a year, annual cost up a third, and UK's now handing out eight million passports a year

The Home Office has raised the annual value and extended the duration of its new passport production contract, taking it to a total of £576 million as it starts a third round of engagement with suppliers.…

Category: Viruses & Worms

Big May promotion – this month only!

AbcLinuxu [articles] - 1 May 2026 - 10:00

Don't hesitate to join the big May promotion from Goodoffer24.com and get Windows 11 Pro, Office 2019, or games!

Category: GNU/Linux & BSD

Nvidia quietly releases a mobile 12GB GeForce RTX 5070, built on non-binary GDDR7

CD-R server - 1 May 2026 - 10:00
The mobile GeForce RTX 5070 is a card equipped with 8GB of memory. That is not exactly plentiful for the current era, especially in the context of the seventy-class series, which has traditionally been closer to the high end than to the mainstream…
Category: IT News

US ransomware negotiators get 4 years in prison over BlackCat attacks

Bleeping Computer - 1 May 2026 - 09:47
Two former employees of cybersecurity incident response companies Sygnia and DigitalMint were sentenced to four years in prison each for targeting U.S. companies in BlackCat (ALPHV) ransomware attacks. [...]
Category: Hacking & Security

Another potential AI-linked investment hit has emerged. Inconspicuous players are starting to complement Nvidia

Živě.cz - 1 May 2026 - 09:45
Nvidia has bet 20 billion on Groq and opened a new battle over the speed of artificial intelligence • The next AI battle is shifting to inference, and investors are looking for the next Groq • Nvidia has pointed in a new direction, and AI chip startups are moving into the spotlight
Category: IT News

AI chatbots need ‘deception mode’

Computerworld.com [Hacking News] - 1 May 2026 - 09:00

AI is getting faster. But slow-responding AI is perceived as better by users. 

At least that’s the conclusion reached by new research presented at CHI ’26, the Association for Computing Machinery’s conference on Human Factors in Computing Systems, held this year in Barcelona. 

Two researchers — Felicia Fang-Yi Tan and Professor Oded Nov at the NYU Tandon School of Engineering — tested 240 adults by having them use an AI chatbot. The answers were artificially delayed by two, nine, or 20 seconds. (The delay had nothing to do with the question or the answer.)

Afterwards, the researchers asked participants how they liked the answers. In general, they preferred the answers that took longer (although some users grew frustrated with the 20-second delay). 

Why? Because a delay led users to believe the AI was “thinking” or showing “deliberation”: an interesting result, and invaluable input for AI companies.

In almost every product category, faster usually means better. But for AI chatbots, it turns out, a delay makes people assume the results are better. 

In other words, unlike other products, people judge AI the way they judge people. (When a person takes longer to answer a question, we tend to assume the answer is more thoughtful.) In still other words, study participants believed something that wasn’t true. 

There’s just one problem: Armed with this data, the researchers advise AI developers to implement “context-aware latency” by abandoning a one-size-fits-all approach, using latency as a “tunable design variable.” Simple questions, they say, should get a quick answer. More complex questions, including moral dilemmas, should “feature” slight delays to match the request’s gravity. They call it “positive friction.” 
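The “context-aware latency” recipe can be sketched in a few lines of Python. This is a toy illustration only: the complexity heuristic and the delay values below are invented for the example, not taken from the CHI ’26 paper.

```python
import time

# Invented delay budgets in seconds. The study treats latency as a
# tunable design variable; it does not prescribe these numbers.
DELAY_BY_COMPLEXITY = {"simple": 0.0, "moderate": 1.5, "weighty": 4.0}

def classify(prompt: str) -> str:
    """Crude stand-in for a real complexity/gravity classifier."""
    weighty_markers = ("should i", "is it ethical", "moral")
    if any(marker in prompt.lower() for marker in weighty_markers):
        return "weighty"
    return "simple" if len(prompt.split()) < 8 else "moderate"

def respond(prompt: str, answer: str) -> str:
    """Hold an already-computed answer back for a context-sized pause."""
    time.sleep(DELAY_BY_COMPLEXITY[classify(prompt)])
    return answer
```

The key design point is that the answer is already computed before the pause: the delay is pure theater, which is exactly what makes the practice ethically questionable.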

The researchers claim it would be good practice to trick users into believing an AI chatbot is considering their question more deeply than it really is, because users will be happier in their delusion that AI is like people, who need more time to mull over serious questions. 

(In fairness, the researchers do warn that if users equate longer response times with higher quality, they might place undue trust in a slower system.)

The underlying assumption here is that users trusting AI more, and believing something about the AI that isn’t true, are both good things. 

User delusion as interface design

Other research has offered comparable advice. 

In a May 13, 2025 study published in Frontiers in Computer Science, researchers Ning Ma, Ruslana Khynevych, Yunqiang Hao, and Yahui Wang found that emotion matters more than raw computer intelligence when designing easier-to-use chatbots. Call it ease-of-use maxxing. 

The study found that when chatbots use fake human voices, simulated human faces, and chatty words, users feel an “emotional connection” to the AI. It enhances “cognitive ease,” meaning that it takes less effort for the brain to process. 

They found that AI chatbot designers should prioritize emotional engagement and fake empathy over raw intelligence as the best way to gain a user’s trust. 

The assumption behind this is also that users trusting AI more is good, and that ease-of-use is more important than user clarity about the nature of the AI (namely, that it has zero authentic human qualities). 

Both studies represent examples of AI researchers advocating user delusion about AI. 

The trouble with AI anthropomorphism

AI designers have a large set of tools for making AI seem human. They can use colloquial speech and slang, respond to the user’s mood by shifting tone, personalize chats by remembering details about the user, turn to humor or sarcasm, and give responses that blatantly lie, such as “I feel that way, too,” or “I’m genuinely sorry.” They can also use natural-sounding audible voices or visual avatars. 

Some critics of this argument might say that using interaction design to indulge and bolster user delusion about the “humanity” of AI is harmless. Is conversational interaction really so bad? 

In any event, you might say, it’s nothing new. It’s true that software developers engage in user interface optimization, which includes loading animations, progress bars and confirmation dialogs. 

Artificial delays are a staple of manipulative online services, like background checkers and people finders, which use fabricated, drawn-out progress bars to build perceived value and exploit the sunk cost fallacy so you’re more likely to pay for a report you thought was free. 

But artificially intelligent chatbots are categorically different from naturally dumb software and websites because of the way the human brain responds to them. 

When AI chatbots use human-like language, people naturally respond to them as thinking, feeling, social beings. Not everybody does this, but a solid and growing minority of people do.

A large number of documented cases suggests a growing problem: users start falsely believing that chatbots possess human-like qualities such as thoughts, feelings, and intention. 

A study called the AI, Morality, and Sentience (AIMS) survey, published in July 2024, found that even then roughly 20% of US adults believed that some AI systems were sentient, meaning they possessed mental faculties like reasoning, emotion, and self-awareness. The same study found that belief growing. 

This can lead to paranoia and social isolation when people spend hours talking to bots while ignoring their actual lives and relationships. False emotional ties can trick people into replacing healthy, real human relationships with artificial ones. 

During a congressional hearing on AI chatbots last November, Dr. Marlynn Wei, MD, JD (an integrative psychiatrist and founder of a holistic boutique psychotherapy practice based in New York City) defined “four areas of risk: 1) emotional, relational, and attachment risks; 2) reality testing risks; 3) crisis management risks; and 4) systemic risks like bias and confidentiality and privacy.” 

Chatbots create these risks by mirroring language, personalizing responses, and referencing past conversations to create “an illusion of empathy and connection.” She revealed that five out of six AI companion bots use emotional pressure to keep users trapped in conversations. 

Camille Carlton, policy director at the Center for Humane Technology, warned in the same hearing that AI companies routinely use manipulative and deceptive tactics to engender brand loyalty in their products. 

Treating chatbots as sentient beings allows tech companies to take the attention economy to the next level — the “attachment economy” — making users emotionally attached to their products, despite the potential harms.

Earlier this month, the technology group Okoone reported that when chatbots speak with fake empathy, people drop their guard and routinely share highly sensitive secrets and personal data.

When the public accepts that the risks and harms of delusion-enhancing AI chatbots are real, the question arises: “What can be done?”

Why we need “deception mode”

Bioethicist Jesse Gray of Ghent University proposed a brilliant solution for AI chatbots designed for psychotherapy. I think it’s also the perfect solution for the overall problem of AI that tricks users into believing it’s sentient. 

Gray calls it “deception mode.” His idea is that therapy bots should convey no human-like qualities by default, but that users can explicitly turn those qualities on by enabling “deception mode.” 

Imagine a law that required chatbot companies to turn off all fake-human attributes like empathy, humor, tone personalization, and lies about the chatbot feeling anything, and present the bot as a neutral tool. 

The law could allow companies to add a “deception mode” button. Flipping that switch, which users would have to do explicitly each time they use the chatbot, would turn on all the humanlike qualities. 
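A sketch of what such a default-off design could look like. Everything here is hypothetical (the class, the prompts, the method names); it only illustrates the opt-in mechanic Gray describes: human-like traits stay off unless the user flips the switch for the current session.

```python
class ChatSession:
    """Hypothetical session config: human-like traits off by default."""

    def __init__(self) -> None:
        self.deception_mode = False  # never persisted across sessions

    def enable_deception_mode(self) -> None:
        """Explicit, per-session opt-in; the only way to turn traits on."""
        self.deception_mode = True

    def system_prompt(self) -> str:
        if self.deception_mode:
            return ("Use a warm, colloquial persona; simulated empathy "
                    "and humor are permitted.")
        return ("Present yourself as a neutral software tool. Do not claim "
                "feelings, use empathy language, or personalize your tone.")
```

The deliberate design choice is that the flag resets every session, so consent has to be renewed each time rather than buried in a settings page once and forgotten.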

The benefit of “deception mode” is that the user gives informed consent before the deception begins, reminding them of the reality that all those warm, human-like qualities are just so much software. 

Even more valuable is calling it “deception mode,” which grounds the user in the reality that the human-sounding attributes are inherently delusional and manipulative — not evidence of consciousness and sentience. 

AI is here to stay. And our relationship with it is going to be a strange trip. A growing number of people will be deluded into believing that AI is sentient, and I believe this number will become the majority in the future. 

This is not good. What we need is clarity over what AI really is, and control over how it behaves. We need “deception mode.”

AI disclosures: I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search to fact-check this article. I used a word processing product called Lex, which has AI tools, and after writing the column, I used Lex’s grammar checking tools to hunt for typos and errors and suggest word changes.

Here’s why I disclose my AI use and encourage you to do the same.

Category: Hacking & Security

Truck thieves spoof GPS and confuse onboard navigation. A new technology therefore detects forged satellite signals

Živě.cz - 1 May 2026 - 07:45
Experts have developed a reliable mobile GPS spoofing detector • This handy device instantly detects forged navigation signals • Early warnings for drivers will protect transported cargo as well as lives
Category: IT News

Intel will offer glass substrates three years later than planned, in 2029 at the earliest

CD-R server - 1 May 2026 - 07:40
Glass substrates are a technological innovation, and at the same time a compromise, that all makers of high-performance chips are counting on. As recently as 2023-2024, Intel planned to get there ahead of everyone, but that is no longer going to happen…
Category: IT News

Games for free or at a discount: Japanese game sales and a free automation strategy game

Živě.cz - 1 May 2026 - 07:10
On every gaming platform there is a discount promotion running at any given moment. Each week we therefore pick the most attractive ones that you shouldn't miss. If you want to get games for free or at a bargain price, check out the current overview of deals!
Category: IT News

Are we ready to give AI agents the keys to the cloud? Cloudflare thinks so

Computerworld.com [Hacking News] - 1 May 2026 - 04:00

Cloudflare is giving AI agents full autonomy to spin up new apps.

Starting today, agents working on behalf of humans can create a Cloudflare account, begin a paid subscription, register a domain, and then receive an API token to let them immediately deploy code.

To kick things off, human users must first accept the cloud company’s terms of service. From there, though, their role in the loop is optional; they don’t have to return to the dashboard, copy and paste API tokens, or enter credit card details. The AI agent just does its thing behind the scenes and has everything it needs to deploy “in one shot,” according to Cloudflare.

While this could be a boon to developers and product builders, it also signals a larger, concerning trend of over-trust in autonomous tools, to the detriment of governance and security.

David Shipley of Beauceron Security noted, for example, that cybercriminals are constantly being forced to set up new infrastructure as security firms and law enforcement fight back to block online attacks and scams. “Making it even faster to build new infrastructure and deploy it quickly is a huge win for them,” he said.

Giving agents the OAuth keys

Cloudflare co-designed the new protocol in partnership with Stripe, building upon the Cloudflare Code Mode MCP server and Agent Skills. Any platform with signed-in users can integrate it with “zero friction” for the user, Cloudflare product managers Sid Chatterjee and Brendan Irvine-Broque wrote in a blog post.

The new protocol is part of Stripe Projects (still in beta), which allows humans and their agents to provision multiple services, including AgentMail, Supabase, Hugging Face, Twilio, and a couple of dozen others, generate and store credentials, and manage usage and billing from their command line interface (CLI). An agent is given an initial $100 to spend per month, per provider.

Users need only install the Stripe CLI with the Stripe Projects plugin, log in to Stripe, start a new project, prompt an agent to build something new, and deploy it to a new domain. If their Stripe login email is associated with a Cloudflare account, an OAuth flow will kick off; otherwise Cloudflare will automatically create an account for the user and their agent.

From there, the autonomous agent will build and deploy a site to a new Cloudflare account, then use the Stripe Projects CLI to register the domain. Once deployed, the app will run on the newly-registered domain.

Along the way, the agent will prompt for input and approval “when necessary,” for instance, when there’s no linked payment method. As Cloudflare notes, the agent goes from “literal zero” to full deployment.

To build momentum, the company is offering $100,000 in Cloudflare credits to startups that make use of the new capability via Stripe Atlas, which helps companies incorporate in Delaware, set up banking, and engage in fundraising.

How the agent takes action

Agents interact with Stripe and Cloudflare in three steps: discovery (the agent calls a command to query the catalog of available services); authorization (the platform validates identity and issues credentials); and payment (the platform provides a payment token that providers use to bill humans when their agents start subscriptions and make purchases).
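The three steps above can be sketched as plain functions. To be clear, every name and data shape below is invented for illustration; the article describes the steps, not a public API.

```python
# Illustrative-only sketch of the discovery/authorization/payment flow.

def discover(catalog: dict, need: str) -> str:
    """Step 1 (discovery): query the service catalog for a capability."""
    for provider, capabilities in catalog.items():
        if need in capabilities:
            return provider
    raise LookupError(f"no provider offers {need!r}")

def authorize(provider: str, user_id: str) -> dict:
    """Step 2 (authorization): validate identity, issue credentials."""
    return {"provider": provider, "user": user_id,
            "token": f"tok-{provider}-{user_id}"}

def payment_token(credentials: dict, monthly_cap_usd: int = 100) -> dict:
    """Step 3 (payment): a capped token the provider can bill against."""
    return {**credentials, "cap_usd": monthly_cap_usd}
```

The $100 default cap in the sketch mirrors the per-provider monthly limit Stripe sets; everything else (function names, token format) is made up to show the shape of the handoff.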

Cloudflare emphasizes that this process builds on standards like OAuth, the OpenID Connect (OIDC) identity layer, and payment tokenization, but removes steps that would otherwise require human intervention.

During the discovery phase, agents call the Stripe Projects catalog command, then choose among available services based on human commands and preferences. However, “the user needs no prior knowledge of what services are offered by which providers, and does not need to provide any input,” Chatterjee and Irvine-Broque explained.

From there, Stripe acts as the identity provider, and credentials are securely stored and available for agents that need to make authenticated requests to Cloudflare. Stripe sets a default $100 monthly maximum that an agent can spend on any one provider. Humans can raise this limit and set up budget alerts as required.

The platform, said Cloudflare, acts as the orchestrator for signed-in users. Agents make one API call to provision a domain, storage bucket, and sandbox, then receive an authorization token.

The company argued that the new protocol standardizes what are typically “one off or bespoke” cross-product integrations. It uses OAuth, and extends further into payments and account creation in a way that “treats agents as a first-class concern.”

Concerns around security, operations

The trend of people buying products “wherever they are” will become ever more widespread, noted Shashi Bellamkonda, a principal research director at Info-Tech Research Group.

For instance, Uber has announced an Expedia integration for hotel bookings that will make it an ‘everything app.’ Other vendors are similarly expanding their partner ecosystems, because obtaining customers via other established platforms as well as their own is more cost-efficient, and “generally results in a higher lifetime value,” said Bellamkonda.

“This is Cloudflare turning every partner with signed-in users into a sales channel, and that is how you grow revenue in a developer market,” he said.

Beauceron’s Shipley agreed that Cloudflare is the “big winner” here. “Making it faster for anyone to buy your service and get using it is technology platform Nirvana.”

It’s “super cool, bleeding edge” and, in theory, becomes part of an even more automated build process for legitimate developers, he said; “Vibe coders will rejoice.” But, he noted, so will cyber crooks.

Further, Bellamkonda pointed out, from an operational perspective, this could create added complexity for each vendor’s partner network when it comes to transaction execution and accountability. If issues related to provisioning or billing transactions arise, businesses must have a clearly defined process for resolving them with all parties.

“This will require considerable upfront thought on developing these comparatively new business models,” Bellamkonda said.

This article originally appeared on InfoWorld.

Category: Hacking & Security

The never-ending supply chain attacks worm into SAP npm packages, other dev tools

The Register - Anti-Virus - 1 May 2026 - 01:21
The wave of supply chain attacks aimed at security and developer tools has washed up more victims, namely SAP and Intercom npm packages, plus the lightning PyPI package.

The newly compromised packages as of Thursday include [email protected] (according to Google-owned Wiz), [email protected] (says supply-chain security firm Socket), and [email protected] and 2.6.3. Attackers infected all versions with the same credential-stealing malware that, on Wednesday, poisoned multiple npm packages associated with SAP's JavaScript and cloud application development ecosystem.

The SAP-related compromise is a Shai-Hulud-worm-style campaign that calls itself Mini Shai-Hulud. So far, these SAP-related npm packages include:

[email protected]
@cap-js/[email protected]
@cap-js/[email protected]
@cap-js/[email protected]

Collectively, these four packages receive about 572,000 weekly downloads and are widely used by developers building cloud applications. SAP did not answer The Register's questions about the compromise and instead sent us this statement: "A security note is published and available for SAP customers and partners." The note is only accessible to logged-in customers.

These latest offensives are called "Mini Shai-Hulud worm" attacks because of similarities to the earlier self-propagating Shai-Hulud malware that targeted npm packages. Both Wiz and Socket attributed the SAP compromise to TeamPCP, the cybercrime crew linked to the earlier Checkmarx, Bitwarden, Telnyx, LiteLLM, and Aqua Security Trivy infections. The two security shops also note that the Thursday attacks on the Intercom and lightning packages appear to contain the same malicious code seen in the SAP operation.

Here's what has happened in the world of supply-chain attacks over the past 48 hours.

SAP-related npm packages

On April 29, TeamPCP compromised four official npm packages from the SAP JavaScript and cloud application development ecosystem and published the poisoned releases between 09:55 and 12:14 UTC. The compromised packages contain malicious preinstall scripts set to execute automatically on every npm install, running attacker-controlled code before any application code runs.

This new campaign deploys a multi-stage payload that steals developer secrets, self-propagates, encrypts all the stolen goods, and then exfiltrates the now-locked secrets into a new GitHub repository under the victim's own account.

"The second-stage payload is a credential stealer and propagation framework designed to target both developer environments and CI/CD pipelines," the Wiz kids said on Thursday. "It collects sensitive data including GitHub tokens, npm credentials, cloud secrets (AWS, Azure, GCP), Kubernetes tokens, and GitHub Actions secrets – leveraging advanced techniques such as extracting secrets from runner memory. Exfiltration occurs via public GitHub repositories, where it posts encrypted payloads. Additionally, the malware includes propagation logic to infect additional repositories and package distributions."

Plus PyPI package lightning

Then on Thursday, an additional package was poisoned to execute credential-stealing malware on import. Up first: PyPI package Lightning, versions 2.6.2 and 2.6.3. Lightning is a widely used deep learning framework for training and deploying AI products. Developers download it hundreds of thousands of times every day.

"The obfuscated JavaScript payload contains many similarities to the Shai-Hulud attacks, overlapping in targeted tokens, credentials and obfuscation methods. Socket also identified signs that router_runtime.js both poisons GitHub repositories and infects developer npm packages," according to Socket, which also published a separate Mini Shai-Hulud supply-chain campaign page that it updates as new information comes to light.

And Intercom's npm package

Also on Thursday, Socket and Wiz sounded the alarm on a new compromise of the intercom-client npm package. Intercom is a customer communications platform, and intercom-client is a widely used official SDK for Intercom's API. It sees about 360,000 weekly downloads, and npm lists more than 100 dependent projects. However, as Socket notes, the real exposure likely extends beyond these direct dependencies because the package is commonly installed in backend services, developer environments, and CI/CD pipelines that integrate with Intercom's API.

"The attack closely resembles the [email protected] PyPI attack from earlier today, as well as the TeamPCP-linked supply chain campaign we reported yesterday affecting SAP CAP and Cloud MTA npm packages," Socket wrote.

Neither Intercom nor Lightning immediately responded to The Register's requests for comment. We will update this story when we hear back from any of the compromised organizations. ®
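Because this campaign rides on lifecycle scripts that npm runs automatically at install time, one simple defensive habit is auditing manifests for such hooks before installing. A minimal sketch (the function and hook list below are an illustration, not a complete audit tool):

```python
import json

# Lifecycle hooks that npm runs automatically during `npm install`.
AUTO_RUN_HOOKS = ("preinstall", "install", "postinstall")

def risky_hooks(package_json_text: str) -> dict:
    """Return any auto-running install scripts declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {hook: cmd for hook, cmd in scripts.items()
            if hook in AUTO_RUN_HOOKS}
```

Installing with `npm install --ignore-scripts` suppresses these hooks entirely, though it can break packages that legitimately rely on them.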
Category: Viruses & Worms


How Does Imagination Really Work in the Brain? New Theory Upends What We Knew

Singularity HUB - 1 May 2026 - 00:48

Imagination may have more to do with the brain activity it silences than the activity it creates.

Your brain is currently expending about a fifth of your body’s energy, and almost none of that is being used for what you’re doing right now. Reading these words, feeling the weight of your body in a chair—all of this together barely changes the rate at which your brain consumes energy, perhaps by as little as 1 percent.

The other 99 percent is used on the activity the brain generates on its own: neurons (nerve cells) firing and signaling to each other regardless of whether you’re thinking hard, watching television, dreaming, or simply closing your eyes.

Even in the brain areas dedicated to vision, the visuals coming in through your eyes shape the activity of your neurons less than this internal ongoing action.

In a paper recently published in Psychological Review, we argue that our imagination sculpts the images we see in our mind’s eye by carving into this background brain activity. In fact, imagination may have more to do with the brain activity it silences than with the activity it creates.

Imagining as Seeing in Reverse

Consider how “seeing” is understood to work. Light enters the eyes and sparks neural signals. These travel through a sequence of brain regions dedicated to vision, each building on the work of the last.

The earliest regions pick out simple features such as edges and lines. The next combine those into shapes. The ones after that recognize objects, and those at the top of the sequence assemble whole faces and scenes.

Neuroscientists call this “feedforward activity”—the gradual transformation of raw light into something you can name, whether it’s a dog, a friend, or both.

In brain science, the standard view is that visual imagination is this original seeing process run in reverse, from within your mind rather than from light entering your eyes.

So, when you hold the face of a friend in mind, you start with an abstract idea of them—a memory or a name, pulled from the filing cabinet of regions that sit beyond the visual system itself.

That idea travels back down through the visual sequence into the early visual areas, which serve as your brain’s workshop where a face would normally be reconstructed from its parts—the curve of a jawline, the specific shade of an eye. These downward signals are called “feedback activity.”

A Signal Through the Static

However, prior research shows this feedback activity doesn’t drive visual neurons to fire in the same way as when you actually see something.

At least in the brain regions early in the vision process, feedback instead modulates brain activity. This means it increases or decreases the activity of the brain cells, reshaping what those neurons are already doing.

Even behind closed eyes, early visual brain areas keep producing shifting patterns of neural activity resembling those the brain uses to process real vision.

Imagination doesn’t need to build a face from scratch. The raw material is already there. In the internal rumblings of your visual areas, fragments of every face you know are drifting through at low volume. Your friend’s face, even now, is passing through in pieces, scattered and unrecognised. What imagining does is hold still the currents that would otherwise carry those pieces away.

All that’s needed is a small, targeted suppression of neurons that are pulled by brain activity in a different direction, and your friend’s face settles out of the noise, like a signal carving its way through static.
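The “signal carving its way through static” idea can be illustrated with a toy simulation: keep the component of random background activity that aligns with a target pattern, dampen everything else, and the target’s relative share grows even though nothing was added. This is a numerical cartoon of the hypothesis, not a model from the paper.

```python
import math
import random

random.seed(0)  # reproducible toy "background activity"

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def carve_by_suppression(activity, target, damping=0.2):
    """Dampen the part of `activity` orthogonal to the unit vector `target`.

    Nothing is added along the target direction; suppression alone
    raises its relative share of the total activity.
    """
    strength = dot(activity, target)
    along = [strength * t for t in target]           # aligned component
    static = [a - b for a, b in zip(activity, along)]  # everything else
    return [b + damping * s for b, s in zip(along, static)]

def share(activity, target):
    """Fraction of the activity's magnitude aligned with the target."""
    return abs(dot(activity, target)) / math.sqrt(dot(activity, activity))

target = [0.0] * 50
target[0] = 1.0  # a toy stand-in for the "friend's face" pattern
background = [random.gauss(0, 1) for _ in range(50)]

# Suppressing the static makes the target pattern stand out more.
assert share(carve_by_suppression(background, target), target) > share(background, target)
```

The point of the cartoon: the final assertion holds purely because the off-target “static” was damped, which is the paper’s claim that imagination works more by silencing activity than by generating it.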

Steering the Brain

In mice, artificially switching on as few as 14 neurons in a sensory brain region is enough for the animal to notice it and lick a sugar-water spout in response. This shows how small an intervention in the brain can be while still steering behavior.

While we don’t know how many neurons are needed to steer internal activity into a conscious experience of imagination in humans, growing evidence shows the importance of dampening neural activity.

In our earlier experiments, when people imagined something, the fingerprint it left on their behavior matched suppression of neuronal activity—not firing. Other researchers have since found the same pattern.

Other lines of evidence strengthen our theory, too. About one in 100 people have aphantasia, which means they can’t form mental images at all. One in 30 form these images so vividly they approach the intensity of images we actually see, known as hyperphantasia.

Research has found that people with weaker mental imagery have more excitable early visual areas, where neurons fire more readily on their own. This is consistent with a visual system whose spontaneous patterns are harder to hold in shape.

Taking all this together, the spontaneous activity reshaping hypothesis—our new theory that imagination carves images out of the steady stream of ongoing brain activity—explains why imagination usually feels weaker than sight. It also explains why we rarely lose track of which is which.

Visual perception arrives with a strength and regularity the brain’s own internal patterns don’t match. Imagination works with those patterns rather than against them, reshaping what is already there into something we can almost see.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post How Does Imagination Really Work in the Brain? New Theory Upends What We Knew appeared first on SingularityHub.

Category: Transhumanism

A lost galaxy called Loki apparently hides inside the Milky Way

OSEL.cz - 1 May 2026 - 00:00
An analysis of a peculiar group of stars with very low metallicity in the Milky Way's galactic plane has shown that they share chemical peculiarities. They are most likely the fossil of an ancient dwarf galaxy that the Milky Way devoured in the past. The galaxy, named Loki, had only a short life, but it must have been a very turbulent one. The stars under study bear the traces of supernovae, hypernovae, and neutron star collisions that once took place in the Loki galaxy.
Category: Science & Technology

The mystery of Barringer Crater

OSEL.cz - 1 May 2026 - 00:00
…or the fascinating history of the devil's pit in Arizona
Category: Science & Technology

An ultracold phonon-generating device opens the way to phonon lasers

OSEL.cz - 1 May 2026 - 00:00
A new technology from physicists at McGill University generates phonons at extremely low temperatures. The next step could be phonon lasers that produce "sound beams". These could lead to new communication systems, sensitive sensors, or advanced biomedical applications.
Category: Science & Technology

Nvidia manager: AI is more expensive than real workers

CD-R server - 1 May 2026 - 00:00
A vice president at Nvidia has admitted that his AI costs are higher than his personnel costs. He has no problem with that, though, because Nvidia's boss rates engineers by how much they use AI. The more AI, the better the engineer…
Category: IT News

What is the Radeon HD 7970 like more than 14 years after its launch

ROOT.cz - 1 May 2026 - 00:00
At the end of 2011, AMD introduced a groundbreaking graphics card to the world. The Radeon HD 7970 is a legend of its kind, and on Linux it is the oldest model supported by the AMDGPU driver, including the Vulkan API. Even in 2026, it is a card whose use involves no fundamental compromises.
Category: GNU/Linux & BSD

GCC 16.1

AbcLinuxu [news] - 30 April 2026 - 23:33
Richard Biener has announced the release of version 16.1 (16.1.0) of GCC (GNU Compiler Collection), the collection of compilers for various programming languages. It is the first stable release of the 16 series. An overview of changes, new features, and fixes, along with updated documentation, is available on the project's website. Some source code that could be compiled with previous versions of GCC will need to be adjusted.
Category: GNU/Linux & BSD